Because, as a human, I can introspect my world model. I can explain it to myself and to others. I can explain how it guides my decisions. I can update it directly, without needing "training data". I don't have a map of my neurons, but I do have self-awareness. For LLMs, it's the opposite: transparent structure but opaque understanding.