An AI doesn't need to think in the same way humans think. It just needs to achieve results that are better than, or at least equal to, what humans achieve.
The same question has been asked of chess "AI" in the past - that chess AI isn't thinking, it's "just" searching through all the possibilities, etc. And yet the result is that no human can beat chess AIs nowadays.
That an LLM does not need to think to produce the output we want seems fairly uncontroversial. However, a statement like “LLMs may think, just not in the same way humans think, to produce the output we want” is problematic.
“The same way humans think” is the only kind of “think” that matters, for all intents and purposes. If we cannot define what it specifically is—because any attempt loops us straight back to the definition of consciousness and the like—the most precise definition we can offer is something along the lines of “the sort of thing that goes on in human minds”.