You've moved the goalposts from AI to LLMs. Fair enough; we've been doing AI since the '50s, and this is the second AI boom in a decade.
Those "stochastic parrots" have still proven that they are immensely useful. You might not personally find value out of coding assistants, but many many people do (as an example). People are (allegedly) turning to LLM's rather than StackOverflow for help [0]. They work well for boilerplate where you're an SME and able to validate the output - I can review 10x the amount of code I can write for example. They work (remarkably) well for summarising input text. An example - I semi occasionally (3-4x per year) have to deal with a few hundred GB of audio files that need cleanup. The cleanup tasks are "run FFMPEG with parameters", except I can not ever remember the parameters (they're different for different things). I can: read https://ffmpeg.org/ffmpeg.html or I can ask ChatGPT to write a script to clip the silence and add a 0.5 second intro fade to every file in a specified folder, and the entire task is done before I've even thought about it. I get to focus on what I want to, rather than munging data around.
If you expand your definition from LLMs to Transformers, then you get Whisper as a standout example of something awesome. There are definitely negatives, but things like diffusion models are being used outside of image generation, for example in drug discovery. We're not going to yolo AI-generated drugs into human testing, but we can save an awful lot of screwing around to find something viable.
> And you spend gigawatts of power just to train this thing which selects and prints words based on probability and some randomness.
> That doesn't solve any problems.
I disagree - it does solve problems. A very fair question to ask is "is it worth the cost?", and I'd even agree that it isn't. But that doesn't mean it doesn't solve real problems.
Those "stochastic parrots" have still proven that they are immensely useful. You might not personally find value out of coding assistants, but many many people do (as an example). People are (allegedly) turning to LLM's rather than StackOverflow for help [0]. They work well for boilerplate where you're an SME and able to validate the output - I can review 10x the amount of code I can write for example. They work (remarkably) well for summarising input text. An example - I semi occasionally (3-4x per year) have to deal with a few hundred GB of audio files that need cleanup. The cleanup tasks are "run FFMPEG with parameters", except I can not ever remember the parameters (they're different for different things). I can: read https://ffmpeg.org/ffmpeg.html or I can ask ChatGPT to write a script to clip the silence and add a 0.5 second intro fade to every file in a specified folder, and the entire task is done before I've even thought about it. I get to focus on what I want to, rather than munging data around.
If you expand your definition from LLMs to Transformers, then you get Whisper as a stand out example of something awesome. There's definitely negatives, but things like Diffusion are being used outside of image generation for drug discovery. We're not going to yolo AI generated drugs into human testing, but we can save an awful lot of screwing around to find something viable.
> And you spend gigawatts of power just to train this thing which selects and prints words based on probability and some randomness. > That doesn't solve any problems.
I disagree, it does solve problems. A very fair question to ask is "is it worth the cost" and I would agree that it's not worth the cost. That doesn't mean it doesn't solve real problems.
[0] https://meta.stackexchange.com/questions/387278/has-stack-ex...