There's likely a connection. Either way, I like to describe AIs like ChatGPT, diffusion models, etc. as operating 100% on intuition. Framing them that way gives people a better feel for their weaknesses...
For GPT you can kind of prompt it to do chain-of-thought reasoning, but it doesn't work very well, at least not compared to what humans do.
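For what it's worth, "prompting it to do chain-of-thought" usually just means wrapping the question in an instruction that nudges the model to write out intermediate steps before answering. A minimal sketch (the wrapper function and exact wording are my own illustration, not any particular API):

```python
def make_cot_prompt(question: str) -> str:
    """Wrap a question in a simple chain-of-thought instruction.

    The model is asked to spell out its reasoning before the answer,
    rather than jumping straight to a conclusion.
    """
    return (
        f"Q: {question}\n"
        "A: Let's think step by step, then state the final answer."
    )

# The resulting string is what you'd send as the prompt:
print(make_cot_prompt("If I have 3 apples and buy 2 more, how many do I have?"))
```

Even with this kind of nudge, the "reasoning" is still generated token by token from the same intuition-like process, which is part of why it's brittle.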
Once again it seems like what we thought was hard is easy, and what we thought was easy and computer-like turns out to be hard.