
> We only need a single counterexample to show that Othello-GPT does not have a systemic understanding of the rules, only a statistical inference of them.

I don't agree with this binary interpretation. It only indicates that whatever systemic understanding the model has built internally is incomplete, and we are not trying to assess the completeness of that understanding. E.g., if you had a traditional rules engine for Othello and removed one rule, so that it could produce illegal moves, would that suddenly make the engine a statistical model?
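
To make the analogy concrete, here is a minimal sketch (my own illustration, not from the paper) of a deterministic "Othello-prime" engine: the standard legal-move logic with one rule removed, namely the diagonal capture directions. It is still a rules engine through and through, yet it disagrees with real Othello on some positions while matching it on others, such as the opening position:

```python
# Hypothetical sketch: an Othello rules engine with one rule removed.
# Board encoding and function names are my own, not from the paper.

# Full Othello capture directions: all 8 compass directions.
FULL_DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
# "One rule removed": diagonal captures dropped, orthogonals kept.
PRIME_DIRS = [(-1, 0), (0, -1), (0, 1), (1, 0)]

def legal_moves(board, player, dirs):
    """board: dict {(row, col): 'B' or 'W'}; returns the set of legal (row, col)."""
    opponent = 'W' if player == 'B' else 'B'
    moves = set()
    for r in range(8):
        for c in range(8):
            if (r, c) in board:
                continue  # occupied square
            for dr, dc in dirs:
                rr, cc = r + dr, c + dc
                seen_opponent = False
                # Walk over a run of opponent discs...
                while board.get((rr, cc)) == opponent:
                    seen_opponent = True
                    rr, cc = rr + dr, cc + dc
                # ...which must be capped by one of our own discs.
                if seen_opponent and board.get((rr, cc)) == player:
                    moves.add((r, c))
                    break
    return moves

# Standard starting position.
start = {(3, 3): 'W', (3, 4): 'B', (4, 3): 'B', (4, 4): 'W'}

# Both engines agree on the opening (all four opening captures are orthogonal)...
assert legal_moves(start, 'B', FULL_DIRS) == legal_moves(start, 'B', PRIME_DIRS)

# ...but diverge on a position where only a diagonal capture exists.
diag = {(3, 3): 'W', (4, 4): 'B'}
assert (2, 2) in legal_moves(diag, 'B', FULL_DIRS)
assert (2, 2) not in legal_moves(diag, 'B', PRIME_DIRS)
```

The point of the sketch: removing a rule yields a deterministic engine for a game that is no longer exactly Othello, but it never becomes "statistical" in any sense.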

No, it makes it an engine for a game that is not Othello.

An inaccurate engine and a statistical model are behaviorally indistinguishable, and the authors provide no evidence to prefer the former.

Nothing is ruled out, of course, but there's no strong or compelling evidence for it either.


> An inaccurate engine and a statistical one are indistinguishable.

How so? The paper is trying to assess whether the underlying model builds some semantic understanding, not whether that understanding is completely accurate. If such a world model maps to an Othello-prime (gleaning concepts like tiles, colors, etc.), that is still a very interesting result.
