GPT is a token prediction engine. It predicts the next token, and it does that very well. Its logical abilities are emergent and are limited by the design of the network. A transformer performs a fixed amount of computation per output token: it runs a fixed number of layers, then stops and produces a result. This is very different from how humans think: we can spend more time on a difficult task (sometimes years!), or give an instant answer to an easy one. And we have a conception of when a task is done, or when we have to think more.
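A toy sketch of that fixed-computation point (the layer count and the "layer" itself are hypothetical stand-ins, not a real transformer): every input, easy or hard, passes through exactly the same number of steps.

```python
NUM_LAYERS = 12  # fixed at training time; hypothetical value

def forward(tokens):
    """One forward pass: a constant number of layer applications,
    regardless of how hard the input is."""
    steps = 0
    state = tokens
    for _ in range(NUM_LAYERS):          # always exactly NUM_LAYERS iterations
        state = [h + 1 for h in state]   # stand-in for a real attention/MLP layer
        steps += 1
    return state, steps

_, easy_steps = forward([1, 2])              # a trivial question
_, hard_steps = forward(list(range(100)))    # a hard one -- same depth
assert easy_steps == hard_steps == NUM_LAYERS
```

There is no "think longer" loop that runs extra steps for harder questions; the only way to get more computation is to emit more tokens.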
> This is very different from how humans think: we can spend more time on a difficult task (sometimes years!)
When we do that, we maintain a chain of thought. It's absolutely possible to get ChatGPT (for instance) to maintain a chain of thought by asking it to plan steps and describe its plan before following it. That can let it tackle more difficult problems with better results.
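A minimal sketch of that "plan first" prompting pattern (the wrapper function and prompt wording are illustrative, not any official API):

```python
def with_plan_prompt(task: str) -> str:
    """Wrap a task so the model is asked to plan its steps before answering."""
    return (
        "Before answering, list the steps you will take to solve the task, "
        "then follow those steps one by one, showing your work.\n\n"
        f"Task: {task}"
    )

prompt = with_plan_prompt("What is 17 * 24?")
# The returned string would then be sent to the model as the user message.
```

The point is only that the chain of thought lives in the prompt and the generated tokens, not inside a single forward pass.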
I don't think we know enough yet about how humans think to be confident in saying "This is very different from how humans think".