I don't know where we are on the LLM innovation S-curve, but I'm not convinced that plateau is going to be high enough for that. Even if we get an AI that could do what you describe, it won't necessarily be able to do it efficiently. It probably makes more sense to have the AI write some traditional computer code once which can be used again and again, at least until requirements change.
The alternative is basically https://ai-2027.com/ which obviously some people think is going to happen, but it's not the future I'm planning for, if only because it would make most of my current work and learning meaningless. If that happens, great, but I'd rather be prepared than caught off guard.
That leads to a kind of fluid distinction similar to interpreted vs. compiled languages.
You tell the AI what you want it to do, and the AI does it. It might process the requests itself, working at the "code level" of your input, which is the prompt. Or it might generate some specific bytecode, taking time and effort up front that's made up for by processing later inputs more efficiently. You could even have something like JIT, where the AI decides which program to use for a given request, occasionally making and caching a new one if none fit.
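The JIT idea above can be sketched roughly like this. Everything here is hypothetical (the class, the threshold, the stubbed AI calls): cheap-ish "interpreted" handling at first, then generating and caching a reusable program once a task recurs often enough.

```python
# Hypothetical sketch of JIT-style dispatch for AI requests: the model
# handles a request directly ("interpreted") until a task gets hot, then
# it writes a conventional program once and reuses it ("compiled").
# The AI calls are stand-in stubs, not a real API.

from typing import Callable

class JitDispatcher:
    def __init__(self, compile_threshold: int = 3):
        self.compile_threshold = compile_threshold
        self.hits: dict[str, int] = {}
        self.compiled: dict[str, Callable[[str], str]] = {}

    def ai_interpret(self, task: str, payload: str) -> str:
        # Slow path: the model processes the request itself each time.
        return f"interpreted:{task}({payload})"

    def ai_generate_program(self, task: str) -> Callable[[str], str]:
        # Expensive one-time step: the model writes ordinary code for
        # this task, which can then run again and again without the model.
        return lambda payload: f"compiled:{task}({payload})"

    def handle(self, task: str, payload: str) -> str:
        if task in self.compiled:
            return self.compiled[task](payload)      # cached program: fast path
        self.hits[task] = self.hits.get(task, 0) + 1
        if self.hits[task] >= self.compile_threshold:
            self.compiled[task] = self.ai_generate_program(task)
            return self.compiled[task](payload)
        return self.ai_interpret(task, payload)      # not hot yet: interpret
```

The first few requests for a task go through the model; once the task crosses the threshold, the cached program takes over, which is the "write the code once and reuse it" tradeoff from the comment above.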
Yeah, AI, at least right now, is wildly energy inefficient. There's only so much sun hitting Earth that we can afford to be stupid with. Using AI for everything makes Electron apps look efficient! Once the hype runs out and you're paying full price, much of today's AI will be unattractive. Hopefully that leads to more efficient AI (which I suspect is more interesting anyway).