> The issue of LLMs here is the proliferation of people not understanding the code they produce.

Let's set aside code quality and code understanding: according to the previous user, these juniors are opening PRs that simply do not meet the acceptance criteria for the desired behavior of the system.

This is a process issue, not a tooling issue.

I too have AI-native juniors (they learned to code alongside Copilot, Cursor, or ChatGPT), and they would never dare open a PR that doesn't work or doesn't meet the requirements. Might they miss an edge case? Sure, so do I. That's acceptable.

If OP's juniors are opening such PRs, they have not been taught that they should only ask for feedback once their version of the system does what it needs to.
