
The definition of AGI is diffuse enough that it will stay an argued point until we can mostly agree it has already happened. For now, the stats are improving well enough across the industry to hold investor attention. Will it all come crashing down à la the .com bubble? It seems more likely by the quarter.

Like the digital economy after the .com bust, I think AI will survive and grow far beyond its current market of chatbots and agents. The weakest will die, but the market will be better off for it in the long run.

The next big problem for AI is time horizons. Frontier models have roughly doctorate-level knowledge across many domains, but they need to stay on task well enough, and long enough, to apply it without a human hand-holding them. People will have to get used to feeding the AI detailed, accurate plans, just as they would for a human, unless we can leverage an expanded form of the leading questions GPT-5 asks before executing "deep research". Anthropic feels best positioned to do this on a technical level, but I suspect OpenAI will beat them on the product level. I'm confident that enough data can be amassed to push time horizons out, at least in coding, which will in turn unlock more capability outside that domain.

I feel it's very different from Tesla: while Tesla barely ever got closer to its promises, the AI industry is at least making visible progress.



> The definition of AGI is diffuse enough to make it an argued point

This hits the nail on the head. Two or three years ago, when the current round of AGI hype started, everyone came up with their own definition of what it meant. Sam Altman et al. made it clear that it meant people not needing to work anymore, and spun that in as positive a way as they could.

Now we're all realising that everyone has a different definition, and the Sam Altmans of the world are nitpicking over exactly what they mean now, so that they can claim success without actually delivering what everyone expected. No one actually believes that AGI means beating humans on some specific maths olympiad, but that's what we'll likely get. At least this round.

LLMs will become normalised, and everyone will see them for the 2x-3x improvement they are (once all externalities are accounted for), rather than the 10x-100x we were promised, just like every round of disruption beforehand, and we'll wait another 10-20 years for the next big AI leap.



