
I'm seeing it write blind exception catches in Python all the time. I see it in my colleagues' code too, and it's driving me nuts.
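
For anyone unfamiliar with the term, a minimal made-up sketch of the pattern (the load_config name and the config-file use case are just for illustration):

    import json

    def load_config(path):
        try:
            with open(path) as f:
                return json.load(f)
        except Exception:  # blind catch: hides FileNotFoundError, JSONDecodeError, and real bugs alike
            return {}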


The training loop asked the model to one-shot working code for the given problems without being able to iterate. If you had to write code that had to work on the first try, and where a partially correct answer was better than complete failure, I bet your code would look like that too.

In any case, it knows what good code looks like. You can say "take this code and remove spurious comments and prefer narrow exception handling over catch-all", and it'll do just fine (in a way it wouldn't if your prompt had told it to write it that way the first time; writing new code and editing existing code are different tasks).
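
Roughly the kind of before/after I mean (a made-up snippet, not actual model output):

    import json

    raw = '{"id": 1'  # deliberately malformed input for the example

    # first pass: catch-all
    try:
        data = json.loads(raw)
    except Exception:
        data = None

    # after asking it to narrow the handling
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        data = None  # only the parse failure we expect; real bugs still propagate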


It's only one example; there's plenty of stuff that LLMs default to which is pretty bad Python. I'm not saying it's always bad, but there's a ton of not-so-nice or subtly wrong code generated (file and path manipulation, for example).
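
One made-up example (POSIX paths, hypothetical names) of the kind of subtle wrongness I mean:

    import os
    from pathlib import Path

    name = "/data.csv"  # user-supplied name that happens to start with "/"

    # looks fine, but a leading "/" silently discards the base directory
    print(os.path.join("/srv/uploads", name))       # -> /data.csv
    print(Path("/srv/uploads") / name)              # -> /data.csv

    # one safer option: strip separators before joining
    print(Path("/srv/uploads") / name.lstrip("/"))  # -> /srv/uploads/data.csv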


There are a bunch of dumb LLM coding behaviors that will be fixed by better context awareness pretty soon. Imagine putting the docs and code for all of your libraries into the context window so it can understand what exceptions might be thrown!


Copilot and the like have been around for 4 years, and we’ve been hearing this all along. I’m bullish on LLM assistants (not vibe coding) but I’d love to see some of these things actually start to happen.


I feel like it has gotten better over time, but I don't have any metrics to confirm this. And it may also depend on what language/libraries you use.


I feel like there was a huge jump when cursor et al appeared, and things have been “changing” since then rather than improving.


It just feels to me like trying to derive correct behavior without a proper spec, so I don't see how it'll get that much better. Maybe we'll collectively remove the pathological code, but otherwise I'm not seeing it.


Well, at least now we know who to blame for the training data :)


What’s a blind exception?



