I'm always surprised by this kind of article, or by comments from people who don't know anything about how LLMs work or what they can do. The problem is that, as is the case with most tools, there is a learning curve. Prompting is not always straightforward, and after using these models for a while, you start discerning what can be prompted effectively and what won't work.
The best example I have is some documentation I wrote in Word that I wanted to convert to Markdown for a GitHub site (see https://github.com/naver/tamgu/tree/master/documentations). I split the document into 50 chapters of raw text (360 pages) and asked ChatGPT to add Markdown tags to each chapter. Not only did that work very well, but I then asked the same system to translate each chapter into French, Spanish, Greek and Korean, keeping the Markdown intact. In a single day I had 360 pages translated into each of these languages as GitHub-ready documents. The electricity consumption was certainly high for this task, but you have to compare it with doing the same work by hand, which would have taken maybe a few weeks of continuous effort.
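For anyone curious, a workflow like this is easy to script rather than pasting chapters into the chat window by hand. Here is a rough sketch in Python using the OpenAI chat API; the file layout, prompt wording, and model name are my own assumptions, not what was actually used:

```python
# Hypothetical sketch: batch-process raw-text chapters with an LLM,
# e.g. "add Markdown formatting" or "translate into French, keeping
# the Markdown intact". Paths and prompts are illustrative.
from pathlib import Path

def build_messages(chapter_text, task):
    # One request per chapter: a short instruction followed by the raw text.
    return [
        {"role": "system", "content": "You are a careful technical editor."},
        {"role": "user", "content": f"{task}:\n\n{chapter_text}"},
    ]

def process_chapters(src_dir, out_dir, task, model="gpt-4o"):
    # Requires `pip install openai` and OPENAI_API_KEY in the environment.
    from openai import OpenAI
    client = OpenAI()
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for chapter in sorted(Path(src_dir).glob("*.txt")):
        resp = client.chat.completions.create(
            model=model,
            messages=build_messages(chapter.read_text(), task),
        )
        # Write e.g. chapter01.txt -> out_dir/chapter01.md
        (out / (chapter.stem + ".md")).write_text(
            resp.choices[0].message.content
        )
```

You would run it once per target, e.g. `process_chapters("chapters", "md/fr", "Translate into French, keeping all Markdown tags intact")`, which mirrors the one-day batch job described above.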