It does seem like GPT-4 does something similar to what we do, yes!
But when people make this mistake - just spitting out an answer because we think we recognize the situation - colloquially this behavior is called "answering without thinking(!)".
If you "think" about it, you activate a much more careful, slower mode of reasoning. In this mode you can even do meta-reasoning: you realize what you'd need to know in order to answer, or you realize that you'll have to think very hard to get the right answer. Seems like we're veering into Kahneman's "Thinking, Fast and Slow" territory here.