Hacker News

LLMs are glorified, overhyped autocomplete systems that fail, just in different, nondeterministic ways than existing autocomplete systems do. They are neat but unreliable toys, not "more profound than fire or electricity," as has been breathlessly claimed.


You just literally described humans, and the meta-level lack of awareness reinforces itself. You cyclically devalue your own point.


Not for nothing, humans also enjoy the worth and dignity inherent in being alive and intelligent…not to mention that they are significantly less error-prone (see: hallucination rates in literally any current model) while being exponentially more efficient to produce and run. I can make that last assertion pretty confidently, because while I've never built a data center so resource-intensive it required its own dedicated power-generation plant, I have put in the work to produce dozens of new people (those projects all failed, but only because we stubbornly refuse to take the wrappers off the tooling), and the resource requirements there only involved some cocktails and maybe a bag of Doritos. Anyhow, I reckon humans are still, on balance, the better non-deterministically imperfect vessels of logic, creation, and purpose.



