Hacker News

> When an LLM is trained, it essentially compresses the knowledge of the training data corpus into a world model

No, you added an extra 'l'. It's not a world model, it's a word model. LLMs tokenize and correlate objects that are already second-order symbolic representations of empirical reality. They're not producing a model of the world, but rather a model of another model.
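To make the "model of a model" point concrete, here's a minimal toy sketch (not any real LLM's tokenizer; `build_vocab` and `tokenize` are illustrative names) showing that what the model actually receives is integer IDs standing in for words, which themselves stand in for the world:

```python
def build_vocab(corpus):
    # Assign each distinct whitespace-separated word an integer ID.
    vocab = {}
    for word in corpus.split():
        vocab.setdefault(word, len(vocab))
    return vocab

def tokenize(text, vocab):
    # Second-order representation: word -> ID. The "world" is two hops away.
    return [vocab[w] for w in text.split()]

vocab = build_vocab("the cat sat on the mat")
print(tokenize("the cat sat", vocab))  # -> [0, 1, 2]
```

The model never touches the cat or the mat; it only ever sees the IDs.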


