No way. A linter is deterministic. ChatGPT is all over the place and, for me, it’s wrong almost every time I ask it anything. I wouldn’t trust it to tell me the ingredients in a pepperoni pizza. I’m definitely not letting it give me programming advice.
Ironically, I had a lot of trouble with a particular recipe (pan-fried gyoza with a crispy bottom), and it was only GPT-4 that gave me one that worked!
The lack of determinism can be considered a type of strength. Run it multiple times! It might find different bugs each time.
Humans are the same, by the way. If you show a random set of programmers random snippets of code, you'll get a non-deterministic result. They won't all find all of the bugs.
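For example, the "run it multiple times" idea basically boils down to unioning the findings across runs. Just a sketch; ask_model_to_review here is a stand-in stub, not a real API, and you'd swap in whatever ChatGPT call you actually use:

    import random

    # Stand-in for the real model call: returns a random subset of canned
    # findings to mimic a non-deterministic reviewer. Replace with your own
    # LLM call that returns a list of issue descriptions for the given code.
    def ask_model_to_review(code: str) -> list[str]:
        possible = ["off-by-one in loop bound",
                    "unchecked None return",
                    "file handle never closed"]
        return random.sample(possible, k=random.randint(0, len(possible)))

    def collect_findings(code: str, runs: int = 5) -> set[str]:
        # Union the issues reported across several runs; each run may
        # surface different bugs, so more runs tend to widen coverage.
        findings: set[str] = set()
        for _ in range(runs):
            findings |= set(ask_model_to_review(code))
        return findings

    print(collect_findings("def f(xs): return xs[len(xs)]", runs=5))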
> The lack of determinism can be considered a type of strength. Run it multiple times! It might find different bugs each time.
I’ve only tried it with code a little, but what I find is that it gives me hallucinations I then have to untangle: output that looks plausible but is gibberish once I dig in, so I end up spending my time proving it isn’t accurate. I don’t want to run on that treadmill.
I’m guessing software will continue its trend of becoming less reliable as more people are willing to generate it with an AI.