To be fair, newer research is demonstrating that smaller, more power-efficient models with the same performance are possible, so the hope is that these giant LLMs are just a stepping stone to a less energy-hungry place. In contrast, proof of work fundamentally needs more energy the bigger the network gets. It's no guarantee, but there is at least some hope that, as the energy impact drops and more value is found, 'AI' will cross the threshold of being worth the energy.
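For a sense of what that "smaller model" research looks like in practice, here's a minimal sketch of post-training quantization in PyTorch. This is a toy network, not an actual LLM, and quantization is just one technique from that line of work (alongside distillation and pruning):

    import torch
    import torch.nn as nn

    # Toy stand-in for a larger network; published LLM results apply
    # these ideas at far bigger scale than this example.
    model = nn.Sequential(
        nn.Linear(512, 512),
        nn.ReLU(),
        nn.Linear(512, 10),
    )

    # Post-training dynamic quantization: Linear weights are stored as
    # int8 instead of float32, cutting their memory footprint roughly 4x
    # and reducing energy per inference on supported CPUs.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(1, 512)
    print(model(x).shape, quantized(x).shape)  # same interface, smaller model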
Edit: although, yes, I do agree that the 'value' part is tricky. If internet spam can generate more 'value' for some people than doing science, then when intelligence is cheap we're in for a rough time.
To be clear, I'm not against AI or LLMs as a technology in general. What I'm against is the unethical way these LLMs are trained, and how dismissive people are of the damage they're doing, saying "we're doing something amazing, we need no permission".
Also, I'm very aware that there are many smaller models in production which can run in real time with negligible power and memory requirements (e.g., the human/animal detection models in mirrorless cameras, esp. Sony and Fuji).
However, to be honest, I haven't seen the same research on LLMs yet. Can you share any if you have some? I'd be glad to read them.
Lastly, I'm aware that AI covers more than object detection, NLP, etc. You can build very useful and lightweight AI systems for many problems, but the way LLMs are pumped up by that unstoppable hype machine bothers me a lot.