> When an LLM is trained, it essentially compresses the knowledge of the training data corpus into a world model
No, you added an extra 'l'. It's not a world model, it's a word model. LLMs tokenize and correlate objects that are already second-order symbolic representations of empirical reality. They're not producing a model of the world, but rather a model of another model.
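To make the "model of a model" point concrete, here's a toy sketch in plain Python. The corpus, vocabulary, and bigram statistics are invented for illustration and are nothing like a real tokenizer or transformer; the point is only that everything downstream operates on integer IDs derived from text, never on the things the text refers to.

```python
from collections import Counter

# Invented toy corpus: already a symbolic (linguistic) description of the world.
corpus = "the cat sat on the mat . the cat saw the mat ."

# Step 1: text -> token IDs. The "world" is gone; only symbols of symbols remain.
tokens = corpus.split()
vocab = {tok: i for i, tok in enumerate(dict.fromkeys(tokens))}
ids = [vocab[tok] for tok in tokens]
print(ids)  # integers like [0, 1, 2, 3, 0, 4, ...] -- no cat, no mat, just indices

# Step 2: the "model" is statistics over those IDs (here, crude bigram counts).
bigrams = Counter(zip(ids, ids[1:]))
print(bigrams.most_common(3))  # correlations between tokens, not between things
```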