In some sense I am hopeful that AI will be able to just write everything directly in binary. Everything written ideally, with no abstraction, fluff, or bumpers for human brains. Computers don't talk in any high-level programming language; they talk in binary. If anything, we should probably be focusing LLMs on getting good at that.
I can only imagine that the amount of energy wasted in CPU cycles by layers of bloated programming languages makes something like Bitcoin mining look like a rounding error.
Not sure that's always a good thing - see the occasionally erratic behavior of Tesla's Autopilot. It speaks the language of the systems it connects to directly, and it also occasionally steers into highway dividers, killing people - and nobody knows exactly why. We need to be able to verify the correctness of what these models generate.
So you can't even debug or check the LLM's output for correctness? Good luck with that.
If anything, the best use for LLMs is to produce code in languages with strong formal guarantees, like Lean, where the correctness of the code can be machine-checked. Just trusting raw machine-code output would be irresponsible.
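For anyone who hasn't seen what that looks like, here's a minimal sketch in Lean 4 (the `double` function and the theorem are hypothetical, just for illustration, and this assumes a recent toolchain where the `omega` tactic is available): the file simply won't compile unless the proof checks, so the stated property holds for every input, not just the ones somebody happened to test.

    -- A toy definition and a machine-checked property about it.
    def double (n : Nat) : Nat := n + n

    -- Lean rejects the file unless this proof is valid, so
    -- "double n = 2 * n" is guaranteed for all n, not merely tested.
    theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
      unfold double
      omega  -- decision procedure for linear arithmetic over Nat/Int

If an LLM emits the definition and the proof together, a human only has to trust the Lean kernel, not the model.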