
> Good experienced devs will be able to make better software

I lowkey disagree. I think good experienced devs will be pressured to write worse software, or be bottlenecked by having to deal with bad software. Depends on the company and culture, of course. But consider that you, as the experienced dev, now have to explain things that go completely over the heads of the junior devs, and most likely the manager/PO, so you become the bottleneck, and all the pressure comes down on you. You will hear all kinds of stuff like "80% there is enough" and "don't let perfect be the enemy of good" and "you're blocking the team, we have a deadline", and that will only get worse. Unless you're lucky enough to work somewhere with an actually good engineering culture.



I think the recent post about the Cloudflare engineer who built an OAuth implementation, https://news.ycombinator.com/item?id=44159166, shows otherwise (note the Cloudflare engineer, kentonv, comments a bunch in the discussion). The author, who is clearly an expert, said it took him days to complete what would have taken him weeks or months to write manually.

I love that thread because it clearly shows both the benefits and pitfalls of AI codegen. It saved this expert a ton of time, but the AI also created a bunch of "game over" bugs that a more junior engineer probably would have checked in without a second thought.


There was also a review of that code about a week later [0] which highlights the problems with LLM-generated code.

Even looking strictly at coding, the hard part of programming is not writing the code. It is understanding the problem and figuring out an elegant, correct solution, and LLMs can't replace that process. They can help with ideas, though.

[0] https://news.ycombinator.com/item?id=44215667


> There was also a review of that code about a week later [0] which highlights the problems with LLM-generated code.

Not really. This "review" was stretching to find things to criticize in the code, and exaggerated the issues he found. I responded to some of it: https://news.ycombinator.com/item?id=44217254

Unfortunately I think a lot of people commenting on this topic come in with a conclusion they want to reach. It's hard to find people who are objectively looking at the evidence and drawing conclusions with an open mind.


Thanks for responding. I read that dude's review, and it kind of pissed me off in an "akshually I am very smart" sort of way.

Like his first argument was that you didn't have a test case covering every single MUST and MUST NOT in the spec?? I would like to introduce him to the real world - but more to the point, there was nothing in his comments that specifically dinged the AI, and it was just a couple pages of unwarranted shade that was mostly opinion with 0 actual examples of "this part is broken".
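For anyone who hasn't read the review, the complaint was about per-requirement test coverage. A minimal sketch of what a "one test per spec MUST" style looks like in practice (all names here are hypothetical, not from the actual workers-oauth-provider code; the two requirements cited are real ones from RFC 6749 §4.1.1):

```typescript
// Hypothetical sketch: table-driven tests mapping individual spec MUSTs
// to inputs that violate them. validateAuthRequest is illustrative only.

type AuthRequest = { response_type?: string; client_id?: string };

// RFC 6749 §4.1.1: response_type and client_id are REQUIRED on the
// authorization endpoint, so a validator must reject requests missing them.
function validateAuthRequest(req: AuthRequest): string[] {
  const errors: string[] = [];
  if (!req.response_type) errors.push("response_type is REQUIRED (RFC 6749 §4.1.1)");
  if (!req.client_id) errors.push("client_id is REQUIRED (RFC 6749 §4.1.1)");
  return errors;
}

// Each row: test name, input, whether validation should pass.
const cases: Array<[string, AuthRequest, boolean]> = [
  ["missing response_type is rejected", { client_id: "abc" }, false],
  ["missing client_id is rejected", { response_type: "code" }, false],
  ["well-formed request passes", { response_type: "code", client_id: "abc" }, true],
];

for (const [name, input, shouldPass] of cases) {
  const ok = validateAuthRequest(input).length === 0;
  console.log(`${ok === shouldPass ? "PASS" : "FAIL"}: ${name}`);
}
```

Exhaustively doing this for every MUST/MUST NOT across the OAuth RFCs is the part that rarely happens in the real world.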

> Unfortunately I think a lot of people commenting on this topic come in with a conclusion they want to reach. It's hard to find people who are objectively looking at the evidence and drawing conclusions with an open mind.

Couldn't agree more, which is why I really appreciated the fact that you went to the trouble to document all of the prompts and make them publicly available.


Thank you for answering; I hadn't seen your rebuttal before. It does seem that the issues, if there even were any (your arguments about the CORS headers sound convincing to me, though I'm not an expert on the subject; I have to study them anew every time I deal with this), were not a result of using an LLM but a conscious decision. So either way, the LLM helped you achieve this result without introducing any bugs that you missed and Mr. Madden found in his review, which sounds impressive.
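For context on the kind of CORS argument being referenced, here is an illustrative sketch (an assumed scenario, not the actual workers-oauth-provider code or kentonv's exact argument): a wildcard `Access-Control-Allow-Origin` is mainly dangerous for endpoints guarded by ambient credentials like cookies, because browsers refuse to send credentials with wildcard responses; an endpoint that requires an explicit Bearer token the caller must already possess gains little from a restrictive origin list.

```typescript
// Illustrative only: why wildcard CORS can be a deliberate design choice.
// Per the Fetch standard, browsers will not attach cookies to requests
// answered with Access-Control-Allow-Origin: "*", so an endpoint that is
// authenticated solely by an explicit Bearer token leaks nothing extra.
function corsHeaders(): Record<string, string> {
  return {
    "Access-Control-Allow-Origin": "*", // safe only if there is no ambient auth
    "Access-Control-Allow-Headers": "Authorization, Content-Type",
  };
}
```

Whether that reasoning applies to a given endpoint is exactly the kind of conscious decision, rather than an LLM slip, being discussed here.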

I won't say that you have converted me, but maybe I'll give LLMs a shot and judge for myself if they can be useful to me. Thanks, and good luck!


To be fair, there was a pretty dumb CVE (which had already been found and fixed by the time the project made the rounds on HN):

https://github.com/cloudflare/workers-oauth-provider/securit...

You can certainly make the argument that this demonstrates risks of AI.

But I kind of feel like the same bug could just as easily have been introduced by a human coder, and this is why we have code reviews and security reviews. This exact bug was actually on my list of things to check for in review; I even feel like I remember checking for it, and yet, evidently, I did not, which is pretty embarrassing for me.


You touch on exactly the point that I try to make to the AI-will-replace-XXX-profession crowd: You have to already be an expert in XXX to get the most out of AI. Cf. Gell-Mann Amnesia.



