Hacker News

This is bad advice for people with the right mindset.

If you want to learn: don't use these models to do the things for you. Do use these models to learn.

LLMs might not be the best teachers in the world, but they're available right there and then when you have a question. Don't take their answers at face value; test them. Ask them to teach you the correct terms for things you don't know yet, so you can ask better questions.

There has never been a better time to learn. You don't have to learn the same way your predecessors did. You can learn faster and better, but you must stay vigilant that you're not fooling yourself.



I feel bad for anyone learning to code now. The temptation to just LLM it must be sooooo high.

But at the same time, having personalised StackOverflow without the negative attitude at your fingertips is super helpful, provided you also do the work of learning.


> personalised StackOverflow without the negative attitude

Phrased in that way, it does sound very tempting. Over the past few years it's become pretty much a waste of time to post on SO (well, in my experience, anyway).


But wouldn't you learn if you actually had to enter and test that code every day, even if it's LLM-generated? Maybe you learn bad mental models, which can happen from SO as well, but you do learn. I'm more worried that juniors won't have this opportunity anymore, or rather, that they won't be needed anymore. So when I retire, then what? Unless AI gets better and replaces everybody; then it won't matter at all what and how you learned.


The non-curious/non-skeptical people might have better luck learning by sticking with books and coding offline.

For the curious/skeptical, there's never been a better moment to pick anything up. I don't know how we make more people more curious and skeptical.


Don't feel bad: LLMs make it so much easier to learn things without getting stuck on frustrating nonsense, yet there remain enough hurdles that you still need to develop resilience.


The errors and inefficiencies LLMs make are very subtle. You're also just stuck with whatever they were trained on. I echo OP: learn from documentation and code. This is as true now as it was back when Stack Overflow was used for everything.


I had this argument with Mark Cuban recently, and he admitted he was wrong; my example was using a calculator to learn calculus.

We are forgetting what "general purpose" means for learning versus real, practical usage as a tool.

BTW, the best AI and computer science discussions are happening on Bluesky.


Who are these people on Bluesky?


I can't tell what your respective positions were. In many ways

    LLMs : computer science = [opposite of calculator] : calculus
A calculator will not give you terms from which to BFS/DFS through knowledge.

It will not help find the terms of art for fuzzy concepts you encountered and can barely describe.

It is not a learning accelerator.


Having seen the code they make, don't learn from these models. Remember they are trained on a lot of code, not a lot of good code.


They don't necessarily do, but you can get them pretty far. One of the most interesting parts of LLMs (chat-based ones, I guess; I haven't tried Copilot-style ones enough) is that smallish rewrites are really low cost.

Don't like that it did something convoluted, or that it didn't use early returns? Say so; it will fix it. Chain as many requests as you need; it won't get fed up with you. And if you see it losing detail because of memory limits, use those requests to build a significantly more polished prompt for a new chat, giving you a cleaner starting point.
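As a concrete illustration of the "early returns" rewrite mentioned above (a hypothetical example, not from the thread; the function and names are made up):

```python
# Nested style an LLM might produce on the first pass.
def discount(price, user):
    if user is not None:
        if user.get("active"):
            if price > 100:
                return price * 0.9
            else:
                return price
        else:
            return price
    else:
        return price

# Same logic after asking for early returns: guard clauses
# handle the edge cases first, leaving a flat happy path.
def discount_early(price, user):
    if user is None or not user.get("active"):
        return price
    if price <= 100:
        return price
    return price * 0.9
```

Both versions behave identically; the second is just easier to scan, which is exactly the kind of low-cost rewrite you can keep asking for.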

Don't just accept what it gives you, either.


Use documentation and real-world code to learn, not slop.



