It is probably exactly because he spent a career considering the cognition behind language that he is not as impressed by LLMs as many others are. I'll readily admit to being an expert in neither language and linguistics nor AI, but I am skeptical that anything going on inside an LLM is properly described as "cognition."
does it really matter if it can be described as cognition or not? to me these models are useful for how effective they are, and that's literally it. the processes going on within them are extremely complex and at times very impressive, and whether some arbitrarily undefined word applies or not does not really matter. I think sometimes people forget that words are not maths or logic. when words come into language, no one sits down and makes sure that they're 100% logically and philosophically sound, they just start to be used, usually based on a feeling, and slowly gather and lose meaning over time. perhaps when dictionaries were first written there was some effort to do this, but for lots of words it's probably impossible or incredibly difficult even now, never mind 200 years ago, if they could even be bothered in the first place.
to give an example, a quite boring "philosophy question" that's bandied around, usually by children, is "if a tree falls in the forest and no one hears it, does it make a sound?". the answer is that "sound" is a word without a commonly accepted, logically derived meaning, for the reasons given above. so if to you the word sound is something human, then the answer is no, but if to you a sound is not something human, then the answer is yes. there's nothing particularly interesting or complex about the thought experiment, it's just a poorly defined word.
does it really matter if it can be described as cognition or not?
Yes...it does. "AI" aka modern flavor LLMs as we understand them today are just doing certain things that humans can do but orders of magnitude faster. What exactly is impressive about it being able to succinctly sum up any topic under the Sun aside from the speed? It will never create a new genre of music. It will never create a new style of art from the ground up. It lacks the human spark of ingenuity. To even suggest that what it does is anything close to human cognition is egregiously insulting.
isn't it funny that half the time when you see criticism of LLMs it's almost like the words have been stolen from someone else?
the opinion you're parroting here completely misses the point of LLMs. their purpose is not to start artistic movements or liberally think for themselves and no one is claiming it is. their purpose is to accelerate information retrieval and translation and programming tasks, which they by and large are incredible at. even if they had the capacity to invent artistic movements, which in theory they most certainly do, starting an artistic movement is pretty much intrinsically a human thing, and it requires desire, inclination, trust and a grounding in the real world, such as it is. your "spark of ingenuity" is not lacking because of some issue or lack of creativity, it's lacking because it's not the point and no one wants it to be.
whether it is "cognition" or not is completely irrelevant to their purpose and use, and it's a complete waste of time trying to litigate whether it is or not because the word itself is poorly defined. if you're trying to figure out if j=k but you can't define j or k, and you do know that k isn't a big factor in the usefulness of the system, then what is the point? is it jealousy? fear? I assure you, LLMs are not a threat to the special ingenuity of your mind.
this opinion is the equivalent of watching the invention of the pocket calculator and complaining that it can't write calculus equations on a blackboard.
their purpose is not to start artistic movements or liberally think for themselves and no one is claiming it is.
your "spark of ingenuity" is not lacking because of some issue or lack of creativity, it's lacking because it's not the point and no one wants it to be.
There are plenty of people/communities online that want it to be exactly that and want to remove the pesky human element from the equation. Dismissing them because it doesn't fit your argument doesn't mean they don't exist.
Re: "does it really matter if it can be described as cognition or not?"
To Chomsky? He'd have to speak for himself, but I suspect the answer is "yes, obviously, at least to be of interest to me."
Note that I'm not saying LLMs are useless or even that what they do is usefully described as "plagiarism."
But it seems entirely unsurprising to me that Chomsky would be unimpressed and uninterested -- even to the point of dismissiveness, he's pretty much like that -- precisely because they are unrelated to "cognition."
I suspect the disappointment wasn't about whether LLMs exhibit cognitive-like properties or not, but rather about the negative connotations tied to the word "plagiarism". Yeah, they replicate patterns from their training data. So do we (ok, to be fair I have no idea about others, but I believe I know that I do), and that's normal.