
I do think Apple needs a better Siri, but I think ultimately they were smart not to plow tons of money into trying to build it themselves.

A better Siri is an expense to keep up the premium brand, not something that they will monetize. For particular uses of AI people will just want particular apps.


Fun idea.

The memes are very funny, but I'm also unsure what to make of it. Does the "unusually busy" label mean something like once-a-year busy, or one out of every ten days?


Google Maps marks data that falls outside typical traffic for that day and time of the week. Basically: if on a Thursday at 3 pm there is a reported wait time of 2 hours, where the typical reported wait for that same weekday and time is 30 minutes, it gets marked as above usual traffic. (Google marks data with other factors too, a larger range than just "above usual traffic.")
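
For what it's worth, a minimal sketch of that kind of comparison in Python; the function name, inputs, and the 1.5x threshold are all assumptions, since Google's actual logic isn't public:

    # Hypothetical sketch of an "above usual traffic" check like the one
    # described above. The 1.5x threshold is an assumption, not Google's rule.

    def is_unusually_busy(current_wait_min: float,
                          typical_wait_min: float,
                          threshold: float = 1.5) -> bool:
        """Flag a spot when the reported wait is well above the typical
        wait for the same weekday and time slot."""
        if typical_wait_min <= 0:
            return current_wait_min > 0
        return current_wait_min / typical_wait_min >= threshold

    # The example from the comment: 2-hour wait on a Thursday at 3 pm,
    # where 30 minutes is typical -> flagged as above usual traffic.
    print(is_unusually_busy(current_wait_min=120, typical_wait_min=30))  # True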


I am not bearish on AI by any means, but I don't really get why Apple needs the best AI in-house other than to improve Siri. That doesn't take $500B.

As long as you can download apps and set permissions for communication between apps/devices, then it seems like 3rd party developers will make Apple devices more valuable for them.


I definitely read less than I used to. I think some books never needed to be the length they are. I still like narrative non-fiction, and for those the level of detail is the point.

It is harder to get back into reading, but I think if you force yourself a little at first, your brain gets "rewired" back to it just fine.


Yes, unless it's diagnosed ADHD, it's a skill that can be improved. There are serious exercises to train attention, which help with reading attention. But they're not fun...


>The provision was squeezed into the bill, nicknamed the “Big Beautiful Bill,” in May. It is designed to prohibit states from “[enforcing] any law or regulation regulating [AI] models, [AI] systems, or automated decision systems” for a decade.

I'm still a bit confused by this. States require that certain tasks be done under a license, like writing prescriptions, or with a degree to teach in public schools. It seems like licensing rules like that could still be passed?

Though it still makes me nervous. I admit I would like some type of safety inspection of the top labs, even though that sounds heavy-handed. I just feel like it would be good for making them dedicate a minimum amount of resources to safety.


I think this is mostly right, but also I'm not sure I agree completely with the premise. Humans have years of conversations they've heard before they attempt to read or write. They already have a concept of what a 'dog' is before they see the word, and know what it is likely to do. Not the same with something that only sees text.


I agree with you 100% and I'm not sure if it contradicts my point that humans have a natural advantage over LLMs in the way I tried to illustrate.

My initial comment was going to make an abstract reference to how human beings are pretty much wired for reasoning from the time that they're being breastfed, or at least reared in the clutch of their mother. It has something to do with the impression I've picked up of how the inheritance of a language, and subsequently literacy, starts with your mom—in ideal cases.

I don't know whether this counts as a strike against humans in the whole argument about efficiency, but I don't think it does.

Computers don't have Moms. Go Moms.


Yeah, one thing I've wondered (and maybe they do this) is whether there are ways to cross-encode different kinds of data: words, yes, but auditory and visual data too. The algorithms to do this might be complicated (or incomprehensible), but a lot of creativity surely comes from the interrelationship between the senses. Combine that with emotion, and I imagine it partially comes down to this: our writing ability isn't limited to the collection of what we've read.

Then maybe the other thing is that rules and relationships must be encoded in a special way. In LLMs I assume rules are emergent, but maybe we have a specific rules engine that gets trained based on the emotional salience of what we read/hear.

Maybe another reason is what's encoded in our DNA, which might mean our brain structure is fundamentally designed for some of this stuff.
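
A toy sketch of the cross-encoding idea, roughly in the spirit of CLIP-style shared embedding spaces; the class, names, and dimensions here are purely illustrative, not anyone's actual architecture:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Toy sketch: project text and image features into one shared space
    # so they can be compared directly. All dimensions are made up.
    class SharedSpaceEncoder(nn.Module):
        def __init__(self, text_dim=300, image_dim=512, shared_dim=128):
            super().__init__()
            self.text_proj = nn.Linear(text_dim, shared_dim)
            self.image_proj = nn.Linear(image_dim, shared_dim)

        def forward(self, text_feat, image_feat):
            # Normalize so cosine similarity works across modalities.
            t = F.normalize(self.text_proj(text_feat), dim=-1)
            i = F.normalize(self.image_proj(image_feat), dim=-1)
            return t, i

    enc = SharedSpaceEncoder()
    t, i = enc(torch.randn(4, 300), torch.randn(4, 512))
    similarity = t @ i.T  # 4x4 cross-modal similarity matrix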


If anything I imagine law and cases will get more complex. Fewer lawyers may be needed for any one case, but that should just make it more feasible to bring lawsuits over smaller amounts of damages.

The blog post brings up the Daniel Penny trial. That trial took six weeks, even though it was on video. I imagine criminal cases have gotten longer and more complex now, because each case has more potential information than it ever did.


This is my favorite type of art.


It is interesting. My two theories would be a) people at top schools follow these meta-discussions more closely and are therefore more likely to be ahead of the trend, and b) someone who could do comp-sci at Stanford could easily become something like a doctor, but at small schools there's not an obvious second option that's also high-earning.


That's interesting. So maybe Stanford enrolls, say, 50k people and lets the chips fall where they may in terms of majors, rather than prescriptively trying to recruit X number of comp sci majors... and the chips are falling away from CS.

Or they are recruiting for growth, but as students progress the natural rate of major change is not as biased towards CS.


>...Stanford enrolls say 50k people...

Class of 2028 Profile: 1,693 students enrolled

https://facts.stanford.edu/academics/freshmen-class-profile/


Haha, wow was I off, but my point was more about the shape of how things get distributed :-)


I've long thought that if AI can start solving unsolved math problems we are in for a very weird (or bad) future, since at that point it really can come up with things that no one has thought of before. And perhaps, it would also be able to find new physics or new physical mechanisms very fast.

That said, it sounds like teams are guiding the areas of exploration pretty directly. If I were more cynical, I'd say that Google has a lot of financial incentive to claim it was integral to any breakthroughs the team comes up with.


The DeepMind team is trying to solve the problem, not their AI alone. The only novel creativity AIs can produce at the moment is hallucinations.

I'd wager quantum computers, even in their infancy, are more capable of generating valid solutions that humans are incapable of.

