
Here’s the issue: humans do the same thing. The brain builds up a model of the world, but the model is not the world. It is a virtual approximation or interpretation built from training data: past experiences, perceptions, etc.

A human can tell you the sky is blue based on its model. So can any LLM. The sky is blue. So the output from both models is truthy.



> A human can tell you the sky is blue based on its model. So can any LLM. The sky is blue. So the output from both models is truthy.

But a human can also tell you that the sky is blue based on looking at the sky, without engaging in any model-based inference. An LLM cannot do that, and can only rely on its model.

Humans can engage in both empirical observation and stochastic inference. An LLM can only engage in stochastic inference. So while both can be truthy, only humans currently have the capacity to be truthful.

It's also worth pointing out that even if human minds worked the same way as LLMs, our training data consists of an aggregation of exactly those empirical observations -- we are tokenizing and correlating our actual experiences of reality, and only subsequently representing the output of our inferences with words. The LLM, on the other hand, is trained only on that second-order data -- the words -- without having access to the much more thorough primary data that those words represent.
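To make the first-order vs. second-order distinction concrete, here's a minimal, purely illustrative Python sketch (all names and probabilities are hypothetical, not any real model's API): one agent can ground the claim "the sky is blue" in a direct sensor reading, while the other can only score continuations against learned token statistics, with no channel to the sky at all.

    import random

    # Hypothetical sensor: stands in for direct perception of the world.
    def read_sky_sensor() -> str:
        """First-order data: an actual (simulated) observation of the sky."""
        return "blue"  # pretend this value came from a camera pixel

    # Hypothetical toy "language model": nothing but token co-occurrence
    # statistics distilled from text -- the second-order data.
    LM_PROBS = {
        ("the sky is", "blue"): 0.93,
        ("the sky is", "green"): 0.01,
    }

    def lm_next_token(prefix: str) -> str:
        """Second-order data: sample a continuation from text statistics alone."""
        candidates = {tok: p for (pre, tok), p in LM_PROBS.items() if pre == prefix}
        tokens, weights = zip(*candidates.items())
        return random.choices(tokens, weights=weights)[0]

    # The observer can be *truthful*: its claim is checked against the world.
    print(f"observed: the sky is {read_sky_sensor()}")

    # The model can only be *truthy*: its claim is whatever the training
    # distribution makes probable, with no access to the sky itself.
    print(f"inferred: the sky is {lm_next_token('the sky is')}")

Both print "the sky is blue" almost every time, which is the point of the thread: identical, usually-true outputs, but only one of them is anchored to an observation rather than to statistics about sentences.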



