
A similar kind of question about "understanding" is asking whether a house cat understands the physics of leaping up onto a countertop. When you see the cat preparing to jump, it takes a moment and gazes up at its target. Then it wiggles its rump, shifts its tail, and springs up into the air.

Do you think there are components of the cat's brain that calculate forces and trajectories, incorporating gravitational acceleration and the cat's mass?

Probably not.
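
For contrast, here's a toy sketch of the explicit calculation a cat would need to run if it really were doing the physics. The countertop height is a made-up number and the simple ballistic model is an assumption, not a claim about feline neurology:

    import math

    g = 9.81      # gravitational acceleration near Earth's surface, m/s^2
    height = 0.9  # assumed countertop height, m (hypothetical)

    # Minimum vertical launch speed to just reach the countertop,
    # from the kinematic relation v^2 = 2 * g * h:
    v_min = math.sqrt(2 * g * height)
    print(f"minimum launch speed: {v_min:.2f} m/s")  # ~4.20 m/s

Whatever the cat's motor system is doing, it almost certainly isn't anything like this.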

So, does a cat "understand" the physics of jumping?

The cat's knowledge about jumping comes from trial and error: its brain builds a neural network that encodes the important details about successful and unsuccessful jumping parameters, even though the cat has no direct cognitive access to those parameters.

So the cat can "understand" jumping without having a "meta-understanding" of its understanding. When a cat "thinks" about jumping and prepares to leap, it isn't rehearsing its understanding of the physics; it's repeating the ritual that has historically led to successful jumps.

I think the theory of mind of an LLM is like that. In my interactions with LLMs, I think "thinking" is a reasonable word to describe what they're doing. And I don't think it will be very long before I'd also use the word "consciousness" to describe the architecture of their thought processes.

That’s interesting. I thought your cat analogy (which I really liked) was going to be an example of how LLMs do not understand things the way a cat understands the skill of jumping. But then you went the other way.
