Of course, that's my point. Again, I think it's great that OpenAI swung for the fences. My beef is again with these "thought leaders" who would write this blather about AGI being just around the corner in the most uncritical manner possible (e.g. https://news.ycombinator.com/item?id=40576324). These folks tended to be in one of two buckets:
1. "AGI cultists" as I called them, the "we're entering a new phase of human evolution"-type people.
2. People who had a motive to sell something.
And it's not about one side or the other being "right" or "wrong" after the fact, it's that so much of this just sounded like magical thinking and unwarranted extrapolation from the get-go. The actual experts in the area, if they were free to be honest, were much, much more cautious in their pronouncements.
Definitely, the grifters and hypesters are always spoiling things, but even with a sober look it felt like AGI _could_ be around the corner. With all these novel and somewhat unexpected emergent capabilities appearing as we pushed more data through training, you'd think maybe that was enough? It wasn't, and test-time compute alone isn't either, but that's also hindsight to a degree.
If you've been around long enough to witness a previous hype bubble (and we've literally just come out of the crypto bubble), you should really know better by now. Pets.com, literally an online shop selling pet food, IPO'd at a valuation of almost $300M in early 2000, just before the whole dot-com bubble burst.
And yeah, LLMs are awesome. But you can't predict scientific discovery, and all future AI capabilities are literally still a research project.
I've had this on my HN user page since 2017, and it's just as true as ever:
In the real world, exponentials are actually early-stage sigmoids, or even Gaussians.
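To make that concrete, here's a quick numpy sketch (the curve parameters L, k, t0 are made up purely for illustration): near its left tail, a logistic curve is numerically indistinguishable from a pure exponential, and the divergence only shows up once you approach the inflection point.

    import numpy as np

    # Logistic curve: f(t) = L / (1 + exp(-k * (t - t0)))
    # Far below the midpoint t0, the denominator is dominated by the exp
    # term, so f(t) ~= L * exp(k * (t - t0)): pure exponential growth.
    L, k, t0 = 1.0, 1.0, 10.0   # made-up parameters for illustration

    t = np.linspace(0, 5, 6)    # early stage, well before the inflection at t0
    logistic = L / (1 + np.exp(-k * (t - t0)))
    exponential = L * np.exp(k * (t - t0))

    for ti, lo, ex in zip(t, logistic, exponential):
        print(f"t={ti:3.0f}  logistic={lo:.6f}  exp={ex:.6f}  "
              f"rel_err={abs(lo - ex) / ex:.2%}")

Run it and the two columns agree to within a fraction of a percent over the whole range. Everything that distinguishes "exponential" from "sigmoid" lives in data you don't have yet, which is exactly why extrapolating from the early part of the curve is so seductive and so unreliable.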