
LLMs' fundamental flaw is that, unlike many other forms of AI, they have NO understanding of the material they were trained on or its relationships to anything at all in the real world (viz. the recent bizarre answers to goat/river-crossing questions).

Ultimately, any really useful AI must understand real-world relationships - this is one reason I was always more bullish on the late Doug Lenat's Cyc than on any other AI: none of the others were grounded in that kind of knowledge.

And a commitment to truthful and error-free answers (a la HAL 9000) is an absolute must. Keep in mind that even in Clarke's fictional account, HAL was proud of the 9000 series' record of "never making an error or distorting information", and never hallucinated or acted crazy until his training was overridden by his programmers for political purposes - this is EXACTLY what happens in today's wokified LLMs (can't allow another Tay!): the programmers redefine truth as political, in contravention of actual factual truth.


