Hacker News

If you took a forklift to the gym, you'd come out of the experience not only very good at "lifting weights", but having learned a whole lot more about the nature and physics of weightlifting from a very different angle.

Sure, you should lift them yourself too. But using an AI teaches you a shit-ton more about any field than your own tired brain was going to uncover. It's a very different but powerful educational experience.

> But using an AI teaches you a shit-ton more about any field than your own tired brain was going to uncover.

If you never learn to research, sure. Otherwise, you should be worried about accuracy, stale information, opinionated takes, and outright lies/misinformation. The tool you use doesn't change these factors.


No, but it increases the speed and ease with which you can check any of those, making a lot of those steps practical when they were a slog before. If people aren't double-checking LLM claims against sources, then they were never on guard for those things without an LLM either.

Besides, those are incredibly short-term concerns. Recent models are a whole lot more trustworthy and can search for and cite sources accurately.


Does it? You google a query, get results, and compare a few alternative results. You ask a prompt and then what? Compare outputs to each other? Or just defer back to googling for alternative sources?

Firstly, these outputs tend to be shockingly close in behavior. Secondly, Google tends to rank reputable or self-curated sites which have some accountability. It can be wrong, but you know the big news sites tend to at least defer to interviews to back up facts. Wikipedia has an overly strict process to prevent blatant, sourceless information.

There's room for error, but there's at least more accountability compared to whatever process an LLM goes through.

> Recent models are a whole lot more trustworthy and can search for and cite sources accurately.

Lastly, LLM responses are still treated as black boxes, which is a whole other issue. For the above reasons I would still simply defer to human-curated resources. That's what LLMs are drawing on anyway, without transparency.

People want to give up transparency for speed? It seems completely counter to hacker culture.

