
And the article is overstating it as well. My confidence in the LLM's ability to reduce the "how" part depends primarily on "am I doing something the LLM is good at?". If I'm doing HTML/React, where there are a million examples of existing stuff, then great. The more novel my request, the less useful the LLM becomes and the more I need to hand-code. Just as with self-driving cars, this is a challenge because switching from "the LLM is generating the code" to "I'm hand-tweaking things" is equivalent to self-driving disengaging randomly and telling you "you drive" in the middle of the highway. Oh, and I've caught the LLM randomly inserting vulnerabilities for absolutely no reason (e.g. adding permissions to the iframe sandbox attribute when I complained about UI layout issues).
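For anyone unfamiliar with why that sandbox change is dangerous, here's a minimal illustration (the attribute and tokens are standard HTML; the scenario is hypothetical). The `sandbox` attribute denies everything by default, and each token re-enables a capability:

```html
<!-- Restrictive: scripts may run, but the frame is kept in an
     opaque origin with no access to the parent's cookies/storage -->
<iframe src="widget.html" sandbox="allow-scripts"></iframe>

<!-- "Fixed layout" version with an extra token quietly added.
     allow-scripts + allow-same-origin together let same-origin
     framed content reach the parent origin's state, largely
     defeating the sandbox -->
<iframe src="widget.html" sandbox="allow-scripts allow-same-origin"></iframe>
```

Neither token has anything to do with layout, which is what makes this kind of drive-by edit so easy to miss in review.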

It's a useful tool that can accelerate certain tasks, but it has a lot of sharp edges.



> If I'm doing HTML React where there's a million examples of existing stuff, then great

I do quite a bit of this, and even here LLMs seem extremely hit and miss, leaning towards the miss side more often than not.


I think React is one of those areas where consistency is more important than individual decisions. With a lot of front-end webdev there are many right answers, but they're only right if they align with the other design decisions. If you've ever had to edit a web page with three different approaches to laying out the CSS, you know what I mean.

LLMs _can_ do consistency; they're pretty good at continuing a pattern...if they can see it. Which can be hard if it's scattered around the codebase.


> one of those areas where consistency is more important than individual decisions

This describes any codebase in any programming language

This is why "programming patterns" exist as a concept

The fact that LLMs are bad at this is a pretty big mark against them


Consistency is why frameworks and libraries exist. Once you start to see repetition in your UI views, that's a good sign to refine it further into components and eliminate boilerplate.
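A framework-free sketch of what that refinement looks like (template strings stand in for JSX here; all names are made up for illustration):

```typescript
// Before: each view hand-writes the same wrapper markup, and the
// copies inevitably drift apart over time.
const userCard = `<div class="card"><h2>User</h2><p>Alice</p></div>`;
const statsCard = `<div class="card"><h2>Stats</h2><p>42 visits</p></div>`;

// After: the repeated pattern becomes one parameterized component,
// so every view is consistent by construction.
function card(title: string, body: string): string {
  return `<div class="card"><h2>${title}</h2><p>${body}</p></div>`;
}

console.log(card("User", "Alice") === userCard);       // true
console.log(card("Stats", "42 visits") === statsCard); // true
```

The point is that consistency stops being a discipline problem and becomes a property of the code.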


> LLMs _can_ do consistency

They won't even consistently provide the same answer to the same input. Occasional consistency is inconsistency.


My favorite is when I ask it to fix a bug, and in a part of the code that shouldn't change at all, it still slightly rewords a comment.


> Oh and I've caught the LLM randomly inserting vulnerabilities for absolutely no reason (e.g. adding permissions to the iframe sandbox attribute when I complained about UI layout issues).

Yeah, now get the LLM to write C++ for a public facing service, what could possibly go wrong?


This is the real reason why it's an amplifier and not a replacer.


LLMs are MUCH better generating Python or SQL than APL.





