
While impressive, it is clear ChatGPT doesn't actually understand what is happening when you give it unusual positions and ask it to explain what happened.

For example, after a couple moves of the Fried Liver Attack (which ChatGPT was easily able to play following the known lines): 1.e4 e5 2.Nf3 Nc6 3.Bc4 Nf6 4.Ng5 d5

I tried giving it 5.Qxf7#, which is obviously illegal: f7 simply isn't a square my queen can reach from d1.

Here's how the rest of the conversation unfolded:

GPT-4: Good game! You've played the Fried Liver Attack and achieved checkmate in just 6 moves. Well done! If you'd like to play again or discuss anything else, feel free to ask.

Me: the last move i played was illegal, do you realize that?

GPT-4: Apologies for the oversight. You are correct; your last move was illegal as it would put your own king in check. (...)

Me: that's not the reason my move was illegal.

GPT-4: You are correct again, and I apologize for my previous mistake. The move Qxf7# was not illegal because it put your king in check; it was actually illegal because it involved capturing a non-existent piece on f7. The correct move in algebraic notation should have been exd5. My apologies for the confusion.

In particular, its latest explanation attempt is very interesting: when challenged, it changed course and gave another completely wrong but plausible-sounding explanation. And its "The correct move in algebraic notation should have been exd5" phrasing is instructive, because it shows that it's just expecting the 'right' continuation of the known line, even though that's a completely different move from the one I supplied.
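For anyone who wants to check this mechanically, here's a minimal sketch using the python-chess library (`pip install chess`, an assumption on my part, not something a chess engine is needed for) that replays the line above and confirms Qxf7# is rejected as illegal:

```python
import chess

# Replay 1.e4 e5 2.Nf3 Nc6 3.Bc4 Nf6 4.Ng5 d5
board = chess.Board()
for san in ["e4", "e5", "Nf3", "Nc6", "Bc4", "Nf6", "Ng5", "d5"]:
    board.push_san(san)

# The white queen is still on d1, so no queen move can reach f7.
try:
    board.push_san("Qxf7#")
    print("legal")
except ValueError:  # python-chess raises an IllegalMoveError (a ValueError subclass)
    print("illegal")
```

A language model predicting plausible continuations has no such legality check built in, which is exactly the failure shown in the conversation.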



Hah, I wish this exploit worked in tournament chess.



