And the sad/crazy thing is that AI-assisted coding/refactoring really does seem to stray substantially over the line from participation into observation, to the point that the experience is largely defined by "learning by watching."

:/

Prompt engineering (to the extent that it makes the difference between applied-LLM success and failure) requires the human to truly grok the thing they're getting help with - the current top comment notes that the model applied more or less exactly the diff they would have written themselves, based on deep understanding of the exact codebase being modified - and also to have some intuition about how LLMs work.

Someone who just treats the model and context like an obscure TV remote, pressing all the buttons to see what they do, will get a result, and it might even seem interesting - but will it be ontologically correct? Good question!


