It is pretty profound. AI / deep learning failed to solve self-driving, but it's good at moving text and code around, which humans still have to check
It’s arguable whether that’s “doing”
I’d say it’s more spreading knowledge around, which is super valuable, but not what is being advertised
The problem with self driving is that bad decisions can kill you, before anyone can check if it was a bad decision
How is it spreading knowledge around? A lot of the time it gives half-baked answers, and a lot of beginners are using it while learning. That's not a good mix, in my opinion.
I've been helping someone who's learning programming, and I've had a look at their code. All of it is vibe coded. And the vibes are nightmarish; I don't think the AI is helping at all.
The only thing it's useful for is sparing expert programmers some tedious work. That's my perception as a programmer.
Well, if you tell me that many people are using LLMs poorly, and in a way that won't benefit them or their team in the long term, then I wouldn't be too surprised.
There are probably more ways to use them poorly than ways to use them well.
And AI companies are pushing usage patterns that may make you dependent on them.
---
But I mention 4 ways that LLMs helped me recently here
i.e. VimScript, SQL, login shells, Linux container syscalls -- with some results you can see
I mention that "give me your best argument against X" is a good prompt -- they can do that
And I also don't use them to edit code for me (right now) -- I type the code myself, TEST it, and internalize it
So for those cases, and many others, they are "spreading knowledge" to me, simply because I can directly query them without reading the manual (or suffering through slow web pages with ads on them)
The end game might be ads, which is depressing. But actually it's remarkable that you can run high quality models locally, whereas you could have NEVER run Google locally. I use LLMs as a better Google, as a sophisticated text calculator. But they are significantly more than that too
I have definitely run into cases where LLMs slow me down, but I now avoid those usage patterns
Yeah, the other day a front end dev created a branch in some Elixir code. They added a pile of tests, and asked a (new hire) back end dev to finish off the work. The tests were 100% vibe coded. I knew the code well, and after looking, realized that the tests could never ever pass. The tests were rubbish.
Crap part was, the new BE dev was totally lost for a long time trying to figure out how to make them pass. Vibe killed his afternoon.