
This feels very similar to the "Tutorial Hell" effect, where I can watch videos or read books and fully feel like I understand everything, but when hand touches keyboard I realize I didn't really retain any of it. I think that's part of what makes AI code gen so dangerous: even if you think you understand and can troubleshoot the output, is your perception accurate?


> I can watch videos or read books and fully feel like I understand everything, but when hand touches keyboard I realize I didn't really retain any of it.

Always type everything out yourself when you follow a tutorial. Don't copy and paste; literally type it out. And make small adjustments here and there, according to your personal taste and experience.


That doesn't even really work for me; I can type on autopilot. The best way I've found is to implement whatever the tutorial covers in a different programming language. Something about translating between languages requires just enough mental work to make it stick.
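For example (purely illustrative, not from any particular tutorial): say the tutorial's exercise was a word counter in JavaScript, and I redo it in Python:

    # The tutorial's original was JavaScript:
    #   const counts = {};
    #   for (const w of text.split(/\s+/)) {
    #     counts[w] = (counts[w] || 0) + 1;
    #   }
    # My translation. Noticing that collections.Counter replaces
    # the manual bookkeeping is exactly the mental work that makes it stick.
    from collections import Counter

    def word_counts(text: str) -> Counter:
        return Counter(text.split())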


I've called this out before with LLMs: when the act of development becomes passive rather than active, there's a significant risk of not being fully cognizant of the code.

Even copy-pasta from Stack Overflow requires more active effort: grabbing exactly what you need, replacing variable names, and so on, to integrate the solution into your project.
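For example (all names made up): the generic recipe from an answer versus what actually lands in the project:

    # Verbatim from a typical answer: generic names, zero context.
    #   def chunks(lst, n):
    #       for i in range(0, len(lst), n):
    #           yield lst[i:i + n]
    # What goes into the project after the (small but active) work of
    # renaming and fitting it to the code around it:
    def batch_order_ids(order_ids: list[int], batch_size: int = 100):
        """Yield batches of order ids sized for the downstream API."""
        for start in range(0, len(order_ids), batch_size):
            yield order_ids[start:start + batch_size]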


Yeah, I never really learn something until I actually hack away at it, and even then I need to understand things at a granular level.


I've done that too much. Now, when I'm learning and I read a solution, I make sure I can implement it myself; otherwise I don't consider it learned. I apply the same rule to LLM code.


The Torturial



