I use LLMs several times a day, and I think for me the issue is that verification is typically much faster than learning/writing. For example, I've never spent much time getting good at scripting. Sure, it's probably a gap I should close, but I feel like LLMs do a great job at it. And what I need to script is typically easy to verify; I don't need to spend time learning how to do things like "move the files with this extension to this folder, but rename them so that each name begins with a three-digit number based on the date the file was created, with the oldest starting at 001" -- or stuff like that. Sometimes the result has a little bug, but one that I can debug quickly.
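For that exact kind of ask, the answer is easy to check even if I'd be slow to write it cold. A rough sketch of what I'd expect back (assuming GNU find/coreutils, using mtime as a stand-in for creation time since true creation time isn't portable, and with the extension and folder name as placeholders):

    # Move all .pdf files in the current directory into ./sorted,
    # prefixing each name with a three-digit index ordered by
    # modification time (oldest gets 001). Assumes sane filenames.
    mkdir -p sorted
    i=1
    find . -maxdepth 1 -name '*.pdf' -printf '%T@ %p\n' \
      | sort -n \
      | cut -d' ' -f2- \
      | while read -r f; do
          mv "$f" "sorted/$(printf '%03d' "$i")_$(basename "$f")"
          i=$((i + 1))
        done

Whether the sort key or the zero-padding is right is the kind of thing I can eyeball in seconds, which is why this workflow pays off for me.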
Scripting assistance by itself is worth the price of admission.
The other thing I've found it good at is giving me an English description of code I didn't write... I'm sure it sometimes hallucinates, but never in a way that has been so wrong that it's been apparent to me.
I think you and the parent comment are onto something. I feel the same way as the parent, since I find it relatively difficult to read code that someone else wrote. My brain easily gets biased into thinking that the cases the code covers are the only possible ones. On the flip side, when I'm writing the code myself, I'm more likely to spot the corner cases.
In other words, writing code helps me think, while reading it just biases me. That makes reviewing an LLM's code extremely slow, at which point I'd rather just write it myself.
Very good for throwaway code though, for example a PoC that won't really go to production (hopefully xD).
Maybe it’s because I’ve been programming since I was young, or because I mainly learned by working through code-along books, but writing the code is where my thinking gets done.
I don’t usually plan, then write code. I write code, understand the problem space, then write better code.
I’ve known friends and coworkers who liked to plan out a change in pseudocode or some notes before getting into coding.
Maybe these different approaches benefit from AI differently.
Your script example is a good one, but the nice thing about scripting is when you learn the semantics of it, like the general pattern of find -> filter/transform -> select -> action. It’s very easy to come up with a one-liner that can be trivially modified to adapt it to another context. More often than not, I find LLMs generate overly complicated scripts.
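The kind of one-liner I mean, as a sketch (assuming GNU tools and filenames without spaces; the file pattern and search string are just placeholders):

    # find -> filter -> transform -> action:
    # list Python files, keep those containing TODO,
    # strip the leading ./, then count lines in each.
    find . -name '*.py' | xargs grep -l 'TODO' | sed 's|^\./||' | xargs wc -l

Swapping the filter or the final action adapts it to the next task without rethinking the whole thing.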
It's astounding how often I ask an LLM to generate something, do a little more research, come back ready to use the code it generated, and realize, no, it's selected the wrong flags entirely.
Although most recently I caught it because I fed the same prompt into both gpt-4o and o1, and o1 had the correct flags. Then I asked 4o to expand the flags from the short form to the long form and explain them, so I could double-check my reasoning as to why o1 was correct.
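That expansion trick is easy to reproduce. Not the command from my case, just an illustration with GNU grep, where the long forms make the intent much easier to verify:

    # short flags
    grep -rin 'timeout' src/
    # the same command with self-describing long flags
    grep --recursive --ignore-case --line-number 'timeout' src/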