By all means, tell us how statistically weighted answers to "what's the next word" correlate to understanding.
By all means, tell me what makes you so certain you're not arguing with an LLM right now. And if you were, what would you do about it, except type a series of words that depend on the previous ones you typed, and the ones that you read just prior to that?
A human, you can correct in real time and they'll (try to) internalize that information in future interactions. Not so with LLMs.
Not so with version 1.0, anyway. This is like whining that your Commodore 64 doesn't run Crysis.