
Vision and everyday-physics models are the answer: hallucinations will stop when the models stop thinking in words and start thinking in physical reality.


The way you phrased it reveals how these model providers have framed the conversation in their favor. Models don’t “think.”


They had easy access to a large corpus of writing to train on, far larger than anything a human being trains their own language faculty on. I can't see where they will find a comparably large corpus of physical interaction with reality to train that kind of model.


Cameras are fairly cheap these days.



