
Definitely, the grifters and hypesters are always spoiling things, but even with a sober look it felt like AGI _could_ be around the corner. With all these novel and somewhat unexpected emergent capabilities as we pushed more data through training, you'd think maybe that's enough? It wasn't, and test-time compute alone isn't either, but that's also hindsight to a degree.

Either way, AGI or not, LLMs are pretty magical.



If you've been around long enough to witness a previous hype bubble (and we've literally just come out of the crypto bubble), you should really know better by now. Pets.com, literally an online shop selling pet food, IPO'd at a valuation of around $300M in early 2000, just before the whole dot-com bubble burst.

And yeah, LLMs are awesome. But you can't predict scientific discovery, and all future AI capabilities are literally still a research project.

I've had this on my HN user page since 2017, and it's just as true as ever: in the real world, exponentials are actually early-stage sigmoids, or even Gaussians.
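To make that concrete, here's a minimal sketch (all parameters are illustrative, not from the comment): well before its midpoint, a logistic (sigmoid) curve is numerically almost identical to a pure exponential, which is exactly why extrapolating from early data points is so treacherous.

```python
import math

# Illustrative parameters: carrying capacity, growth rate, midpoint.
L, k, t0 = 1.0, 1.0, 10.0

def sigmoid(t):
    # Logistic curve: saturates at L for t >> t0.
    return L / (1.0 + math.exp(-k * (t - t0)))

def exponential(t):
    # The sigmoid's early-time approximation: L * e^{k (t - t0)}.
    return L * math.exp(k * (t - t0))

for t in [0, 2, 4, 6]:  # well before the midpoint t0
    s, e = sigmoid(t), exponential(t)
    print(f"t={t}: sigmoid={s:.6f} exp={e:.6f} rel_err={(e - s) / s:.2%}")

# Past the midpoint the two diverge sharply: the exponential keeps
# climbing while the sigmoid flattens out at L.
```

Early on, the relative error between the two curves is a fraction of a percent; only near the inflection point does the saturation become visible in the data.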



