I find it _extremely_ disheartening that human beings — already verifiably prone to forming, and more impactfully acting upon, dangerously stupid and often suicidally short-sighted positions on little-to-no evidence and hopelessly malformed arguments — are deploying LLMs at all, especially in front of people inclined to believe that a tool which can at best tell you the statistically least interesting next word will foster "creativity" (whatever that is, especially given that something as unthinking as an LLM can convincingly mimic it).

If we are going to be so stupid as to use these things, I at least hope we’re willing to restrain them from parroting back the very worst of our impulses, as the available training corpus is _definitely_ not predisposed towards the good.