The problem is that there is really no middle ground. You either get what are essentially very fancy search engines, which is the current slew of models (along with manually coded processing loops in the form of agents); these all fall into the same valley of explicit development and patching, which only solves for known issues.
Or you get something that can actually reason, which means it can solve for unknown issues, which means it can be very powerful. But this is something that we aren't even close to figuring out.
There is a limit to that power though - in general it seems that reality is full of computationally irreducible processes, which means an AI would have to simulate reality in parallel, faster than reality itself unfolds. So an all-powerful, all-knowing AGI is likely impossible.
But something that can reason is going to be very useful, because it can figure out things it hasn't been explicitly trained on.
This is a common misunderstanding of LLMs.
The major, qualitative difference is that LLMs represent their knowledge in a latent space that is composable and can be interpolated.
For a significant class of programming problems this is industry-changing.
E.g. "solve problem X, for which there is copious training data, subject to constraints Y, for which there is also copious training data" can actually handle a lot of engineering problems for combinations of X and Y that never previously existed, and that would otherwise take many hours of assembling code from a patchwork of tutorials and StackOverflow posts.
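To make the "X subject to Y" point concrete, here's a rough sketch (in Python, with a made-up CSV schema and file names) of the kind of glue task I mean - each piece (csv parsing, aggregation, sqlite3) has copious documentation on its own, but the exact combination probably never appeared in any single tutorial:

    # Hypothetical task: aggregate an orders CSV and persist it to SQLite.
    # Every piece is individually well-documented; the combination is the work.
    import csv
    import sqlite3
    from collections import defaultdict

    def load_totals(csv_path):
        # Sum the 'amount' column per 'customer' (made-up schema).
        totals = defaultdict(float)
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                totals[row["customer"]] += float(row["amount"])
        return totals

    def save_totals(totals, db_path="totals.db"):
        # Upsert the aggregated totals into a small SQLite table.
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS totals "
                    "(customer TEXT PRIMARY KEY, total REAL)")
        con.executemany("INSERT OR REPLACE INTO totals VALUES (?, ?)",
                        totals.items())
        con.commit()
        con.close()

    save_totals(load_totals("orders.csv"))

The point isn't that this snippet is hard; it's that the hour of searching and stitching it used to take is now a one-shot prompt.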
This leaves the unknown issues that require deeper reasoning to established software engineers, but so much of the technology industry is using well-known stacks to implement CRUD and move bytes from A to B for different business needs.
This is what LLMs basically turbocharge.
To be a competent engineer in the 2010s, all you really had to do was understand the fundamentals and be good enough at Google searching to figure out what the problem was, whether from Stack Overflow posts, GitHub code examples, or documentation.
Now you still have to be competent enough to formulate the right questions, but the LLMs do all the other stuff for you, including the copy and paste.
I don’t know… Travis Kalanick said he’s doing “vibe physics” sessions with MechaHitler approaching the boundaries of quantum physics.
"I'll go down this thread with GPT or Grok and I'll start to get to the edge of what's known in quantum physics and then I'm doing the equivalent of vibe coding, except it's vibe physics"
How would he even know? I mean, he's not a published academic in any field, let alone quantum physics. I feel the same way when I read one of Carlo Rovelli's pop-sci books, but I have fewer followers.