That's actually the point I was making. There's an assumption that the LLM is working differently because it's a statistical model, but we lack the understanding of our own intelligence to say whether this is actually a difference.
I know, but I didn't claim they were the same; I simply questioned the position that they were different. The fact is we don't know, so that seems like a poor basis to build on.
To me a more interesting observation, one that is already discussed a lot, is this: if eventually we cannot tell the difference between a machine and a human in terms of output, then when do we accept that "thinking" is a subjective, rather than objective, judgment?