This is not a useful diversion; it's like arguing about whether a submarine swims.
LLMs are simple; it doesn't take much more than high school math to explain their building blocks.
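To make that concrete, here's a minimal sketch of scaled dot-product attention, one of those building blocks. It's only dot products, a division, and an exponential-based normalization (numpy is used just for the arithmetic; the shapes and random inputs are illustrative, not from any real model):

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability, then normalize
    # so each row sums to 1.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: similarity scores via dot
    # products, scaled, softmaxed, then used to average V.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

# Toy example: 3 positions, embedding dimension 4.
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = attention(Q, K, V)
print(out.shape)  # (3, 4)
```

The full architecture stacks this with matrix multiplications and simple nonlinearities; none of the individual pieces is exotic.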
What's interesting is that they can remix tasks they've been trained on very flexibly, creating new combinations they weren't directly trained on. Compare this to earlier, smaller models like T5, which had a few set prefixes per task.
They have underlying flaws, but your example is more about the limitations of tokenization than about "understanding". Those flaws don't keep them from being useful.
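A toy illustration of that tokenization point: the model never sees letters, only opaque token IDs, so character-level questions are hard by construction. The vocabulary and greedy longest-match tokenizer below are a simplified, hypothetical stand-in for real BPE:

```python
# Hypothetical vocabulary; real BPE vocabularies are learned.
vocab = {"straw": 101, "berry": 102, "count": 103, "ing": 104}

def toy_tokenize(text, vocab):
    # Greedy longest-match tokenization; anything not in the
    # vocabulary falls back to single characters.
    tokens = []
    i = 0
    while i < len(text):
        for piece in sorted(vocab, key=len, reverse=True):
            if text.startswith(piece, i):
                tokens.append(piece)
                i += len(piece)
                break
        else:
            tokens.append(text[i])
            i += 1
    return tokens

print(toy_tokenize("strawberry", vocab))  # ['straw', 'berry']
# The model receives two IDs, not ten letters; counting the
# r's inside them requires information the input has erased.
```

So a failure on letter-counting tells you about the input representation, not necessarily about the model's reasoning.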
They do stop it from being intelligent, though. Being able to spit out cool and useful stuff is a great achievement, but actual understanding is required for AGI, and this demonstrably isn't that, right?