This is the fundamental error I see people making. LLMs can’t operate independently today, not on substantive problems. A lot of people are assuming that they will some day be able to, but the fact is that, today, they cannot.
The AI bubble has been driven by people seeing the beginning of an S-curve and combining it with their science-fiction fantasies about what AI is capable of. Maybe they’re right, but I’m skeptical, and I think the capabilities we see today are close to as good as LLMs are going to get. And today, it’s not good enough.
Yes, but why did ChatGPPT solve math Olympiad problems? Because it was given a prompt with the instructions, context, etc.
Why did GPT-5 help Terence Tao solve a math problem? Because he gave it a prompt and the context, etc.
None of these models is useful without a human prompting it and supplying tasks, goals, and context. They don't operate independently; they don't come up with their own ideas for work to be done; they don't operate over long time horizons; and they can't accept long-term goals and break them down into sub-goals and sub-tasks.
They are useless without humans telling them what to do.