Yes, Chomsky's earlier positions include the idea that recursive natural grammars cannot be learnt by machines because of the "poverty of the stimulus." But that argument only holds if you ignore probabilistic grammars. (See the Norvig article for footnotes linking to the relevant papers.)
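For anyone who hasn't played with one, here's a minimal sketch of what a probabilistic grammar looks like, using NLTK. The grammar, its probabilities, and the toy sentence are all made up for illustration, not taken from any of the papers Norvig cites:

```python
import nltk

# Toy probabilistic CFG. The recursive rule NP -> NP PP lets noun phrases
# nest arbitrarily deep, but its 0.2 probability makes deeper nesting
# progressively less likely.
grammar = nltk.PCFG.fromstring("""
    S   -> NP VP    [1.0]
    NP  -> NP PP    [0.2]
    NP  -> Det N    [0.8]
    VP  -> V NP     [1.0]
    PP  -> P NP     [1.0]
    Det -> 'the'    [1.0]
    N   -> 'dog'    [0.5]
    N   -> 'park'   [0.5]
    V   -> 'saw'    [1.0]
    P   -> 'in'     [1.0]
""")

# ViterbiParser returns the most probable parse under the grammar.
parser = nltk.ViterbiParser(grammar)
sentence = "the dog saw the dog in the park".split()
for tree in parser.parse(sentence):
    print(tree)          # the most likely parse tree
    print(tree.prob())   # and its probability
```

The point being that the grammar stays fully recursive; the probabilities just give the learner something it can estimate from finite data.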
And of course, LLMs generate perfectly reasonable natural language without any "universal grammar", or indeed, much structure beyond "predict the next token using a lot of transformer layers."
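To make "predict the next token" concrete, here's roughly what that looks like in code. This is a hedged sketch using the Hugging Face transformers library with GPT-2; the model and prompt are my choices for illustration, not anything specific from this thread:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Colorless green ideas sleep"
inputs = tokenizer(text, return_tensors="pt")

# One forward pass gives a score for every token in the vocabulary at the
# last position; whatever "grammar" the model has is implicit in these scores.
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]

# Show the five tokens the model considers most likely to come next.
top = torch.topk(logits, 5)
print([tokenizer.decode(int(i)) for i in top.indices])
```

No explicit grammar anywhere, just a learned distribution over next tokens, sampled one step at a time.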
I'm pretty sure that most of Chomsky's theoretical model is dead at this point, but that's a long discussion and he's unlikely to agree.
Chomsky had a stroke a while back, which apparently left him unable to speak. But I guarantee that there are many linguists who would not agree that his model is dead.
As for LLMs, at present they require orders of magnitude more training data than children are exposed to, so it's unclear whether they have anything to say about how humans learn language.