>"I also agree with researchers like Yann LeCun or François Chollet that deep learning doesn't allow models to generalize properly to out-of-distribution data—and that is precisely what we need to build artificial general intelligence."
I think "generalizing properly to out-of-distribution data" is too weak a criterion for general intelligence (GI). A GI model should be able to take an interest in some particular area, research all the known facts, and derive new knowledge / create theories based upon those facts. If there aren't enough facts to be conclusive: propose and conduct experiments and use the results to prove / disprove / improve the theories.
And it should be doing this constantly, in real time, on a bazillion "ideas". Basically modeling our whole society. Fat chance of anything like this happening in the foreseeable future.
Excluding the real-time part, humans do at least possess the capacity to do all of that.
Besides, humans are capable of rigorous logic (which I believe is the most crucial aspect of intelligence), which I don't think an agent without a proof system can do.
Uh, if we do finally invent AGI (I am quite skeptical; LLMs feel like the chatbots of old: invented to solve an issue, never really solving that issue, only the symptoms, and the issue itself was never really understood to begin with), it will be able to do all of the above, at the same time, far better than humans ever could.
Current LLMs are a waste and quite a step back compared to older machine learning models, IMO. I wouldn't necessarily have a huge beef with them if billions of dollars weren't being spent to shove them down our throats.
LLMs actually do have their uses, but none of the pitched applications really do them justice.
Example: Imagine knowing you had the cure for cancer, but then discovering you could make way more money by declaring it the solution to all of humanity's problems. Then imagine you shoved that pitch down everyone's throats and ignored the cancer-cure part...
Out of curiosity, what timeframe are you talking about? The recent LLM explosion, or the decades-long AI research?
I consider myself an AI skeptic and as soon as the hype train went full steam, I assumed a crash/bubble burst was inevitable. Still do.
With rare exceptions, I don't know of anyone who expected the bubble to burst this quickly (within two years). Ten times in the last two years would be roughly every two and a half months; maybe I'm blinded by my own bias, but I don't see anyone calling out that many dates.