
>I think this is both right and wrong. There was a good book that came out probably 15 years ago about how technology never stops in aggregate, but individual technologies tend to grow quickly and then stall. Jet airliners were one example in the book. The reason I call this partially wrong is that even in the '70s people recognized that supersonic travel had real, concrete issues with no solution in sight. I don't think LLMs share that characteristic today.

I don't see any solution to hallucinations, nor any on the horizon. I think that could count as a concrete issue that would stop them.



Vision and everyday-physics models are the answer: hallucinations will stop when the models stop thinking in words and start thinking in physical reality.


The way you phrased it reveals how these model providers have framed the conversation in their favor. Models don’t “think.”


They had easy access to a large corpus of writing to train on, far larger than anything a human being trains their own internal language model on. I can't see where they are going to find a comparably large corpus of physical interaction with reality to train that kind of model.


Cameras are fairly cheap these days.



