I think the fact that it correctly uses "meat-eating goat" but misuses "vegan wolf" hints at a core lack of understanding.
Understanding either concept takes the same level of intelligence if you understand the meaning of the words: both a vegan wolf and a meat-eating goat are nonexistent entities (barring bizarre exceptions), yet anyone capable of understanding the words has no problem with either.
That GPT has no trouble with meat-eating goat but struggles with vegan wolf hints that the former has some "statistical" property that helps GPT, and which the latter doesn't. It also hints that GPT doesn't understand either term.
Hence my example: something a human wouldn't fail to understand but GPT does.
We went from not being able to get any sensible output for these riddles at all to now discussing partial logical failures while it "got" the overall puzzle. That's a vast simplification and slightly incorrect on a technical level, but this development still increases my confidence that scaling the approach up by the next orders of magnitude of complexity/parameters will do the trick. I wouldn't even be surprised if the thing we call "consciousness" turned out to be a byproduct of increasing complexity.
What remains right now is getting the _efficiency_ on point, so that AI hardware demands can parallel our wetware brains (volume, energy usage, ...) instead of requiring a comically larger number of computers to train/run.