This is the question the Greeks wrestled with over 2,000 years ago. At the time there were the Sophists (the modern LLM equivalents), who could speak persuasively like a politician.
Over time this question has been debated by philosophers, scientists, and anyone who wanted to have better cognition in general.
Because we know what LLMs do. We know how they produce output. They're just good enough at mimicking human text/speech that people are mystified and stupefied by it. But I disagree that "reasoning" is so poorly defined that we're unable to say an LLM doesn't do it. The definition doesn't need to be perfect or complete. Where there is fuzziness and uncertainty is with humans: we still don't really know how the human brain works, or how human consciousness and cognition work. But we can pretty confidently say that an LLM does not reason or think.
Now if it quacks like a duck in 95% of cases, who cares if it's not really a duck? But Google still claims that water isn't frozen at 32 degrees Fahrenheit, so I don't think we're there yet.
I think the third-worst part of the GenAI hype era is that every other CS grad now thinks not only that a humanities/liberal arts degree is meaningless, but also that they have a good enough handle on the human condition and on neurology to make judgment calls about what's sentient. If people with those backgrounds ever attempted to broach software development topics, they'd be met with disgust by the same people.
Somehow it always seems to end up at eugenics and white supremacy for those people.
Math arose first as a language and formalism in which statements could be made with no room for doubt. The sciences took it further and said that not only should the statements be free of doubt, they should also be testable in the real world via well-defined actions that anyone could carry out. All of this has given us the gadgets we use today.
An LLM, meanwhile, puts out plausible tokens consistent with its training set.
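To make "putting out plausible tokens" concrete, here's a minimal sketch of the autoregressive sampling loop an LLM runs at inference time: score every vocabulary token given the context, turn the scores into probabilities, sample one, append it, repeat. The toy vocabulary and the random-number "logits" function below are stand-ins I made up for illustration, not any real model's internals.

```python
# Toy sketch of next-token sampling: score the vocabulary, convert scores
# to probabilities, sample one token, append it, repeat. A real LLM would
# compute the scores from the context via learned parameters; here they
# are just random numbers standing in for that step.
import numpy as np

VOCAB = ["the", "duck", "quacks", "reasons", "plausibly", "."]
rng = np.random.default_rng(0)

def toy_logits(context: list[str]) -> np.ndarray:
    """Stand-in for a trained model: one score per vocabulary token."""
    return rng.normal(size=len(VOCAB))

def softmax(x: np.ndarray) -> np.ndarray:
    z = np.exp(x - x.max())
    return z / z.sum()

def generate(prompt: list[str], n_tokens: int = 5) -> list[str]:
    out = list(prompt)
    for _ in range(n_tokens):
        probs = softmax(toy_logits(out))       # "plausibility" over the vocab
        next_tok = rng.choice(VOCAB, p=probs)  # sample the next token
        out.append(str(next_tok))
    return out

print(" ".join(generate(["the", "duck"])))
```

Nothing in that loop looks like deliberation; whatever "intelligence" shows up lives entirely in how good the learned scoring function is.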