Hacker News

With Agentic RL training and sufficient data, AI operating at the level of average senior engineers should become plausible in a couple to a few years.

Top-tier engineers who integrate a deep understanding of business and user needs into technical design will likely be safe until we get full-fledged AGI.

On the other hand, I’m pretty sure you will need senior engineers not only for design but also for debugging. You don’t want to hit a wall when your agentic coder runs into a bug that it just won’t fix.

There’s a recent article with experiments suggesting LLMs are better at bug fixing than at writing code, IIRC. It’s from a company with a relevant product, though.

Why do you expect AIs to learn programming, but not debugging?

1) Debugging is much harder than writing code that works

2) AIs are demonstrably much, much worse at debugging code than writing fresh code

Ex: "Oh, I see the problem! Let me fix that" -> proceeds to create a new bug while not fixing the old one


Debugging is harder for humans, too.

Why in a few years? What training data is missing that prevents us from having senior-level agents today?

Training data, especially interaction data from agentic coding tools, is important for that. See also: the Windsurf acquisition.



