> Even after I pointed this mistake out, it repeated exactly the same proposed plan. It's not clear to me if the lesson here is that GPT's reasoning capabilities are being masked by an incorrect prior (having memorized the standard version of this puzzle), or if the lesson is that GPT's reasoning capabilities are always a bit of smoke and mirrors, passing off memorization as logic.
It has no reasoning capabilities. It has token prediction capabilities that often mimic reasoning capabilities.