
Because, as a human, I can introspect my world model. I can explain it to myself and to others. I can explain how it guides my decisions. I can update it directly, without needing "training data". I don't have a map of my neurons, but I do have self-awareness. For LLMs, it's the opposite: transparent structure but opaque understanding.


