> So you assert that parent doesn’t have an internal model of chess?

I think it's an interesting question. I find it more likely that we have an imperfect ability to perceive the whole chess board, whatever the hell that might mean.

In any case, it's irrelevant to an LLM, which has perfect information from the input set.

So perhaps we rephrase the question: I think the parent, if given a list of individual chess moves (in whatever notation pleases them), should be able to tell whether a given move is valid for the given piece. If the parent failed to do that, I would say they do not have a valid model of chess, ya.
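
To make that concrete, here's a minimal sketch of the check I have in mind, using the python-chess library (my choice of tooling, not something anyone in the thread specified): replay the prior moves from the starting position, then ask whether the candidate move is legal in the position that results.

    import chess  # pip install python-chess

    def is_move_valid(previous_moves, candidate_move):
        """Replay previous_moves (in SAN) from the starting position, then
        report whether candidate_move is legal in the resulting position."""
        board = chess.Board()
        for san in previous_moves:
            board.push_san(san)  # raises ValueError if the history itself is illegal
        try:
            board.parse_san(candidate_move)
            return True
        except ValueError:
            return False

    # After 1. e4 e5 it is White to move: Bc4 is legal, Nf6 is not
    # (no white knight can reach f6).
    print(is_move_valid(["e4", "e5"], "Bc4"))  # True
    print(is_move_valid(["e4", "e5"], "Nf6"))  # False

python-chess is just one way to do it; any legality check over the reconstructed position would serve the same purpose.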

> If the parent failed to do that, I would say they do not have a valid model of chess

Surely even the world's foremost chess experts will occasionally make errors in such a situation, so what's the point of holding language models to an intelligence standard that even humans can't meet?

I see two ways of interpreting this: either language models do approximately learn world models, or else world models aren't necessary for human intelligence.

EDIT: I see from your other posts in this thread that you're probably more inclined to believe the latter statement. That wasn't obvious to me from your first post, and I think it explains why there is so much contention in this thread.
