Yes, although there are other theories. Some of those other theories actually retain some of Chomsky's ideas, such as our innate capability to learn language (which seems to be the idea most commonly disagreed with). Two examples are Lexical Functional Grammar (LFG) and Head-driven Phrase Structure Grammar (HPSG).
Why would that idea be disagreed with? Even if language learning started out piggybacking on some general mechanism, that general mechanism would have biases, and evolution would, I think, tend to optimize/specialize it over time.
I can't really speak for why it is disagreed with--I personally find the claim that language is innate completely convincing. But one reason I've heard for disagreeing is the notion that some animals have language (not true unless you water down the definition of "language"). Others seem to think that general learning mechanisms explain the acquisition of language. I find that outrageously naive, since virtually all children acquire language (and if they're raised bilingually, more than one language), whereas even good linguists struggle to define all the rules of a language.
That looks like a disagreement with him on the utility of machine learning; while I don’t know whether Chomsky is right about his more fundamental ideas on linguistics, that doesn’t seem relevant.
Yes, Chomsky's earlier positions include the claim that recursive natural-language grammars can't be learned by machines because of the "poverty of the stimulus." But that result only holds if you ignore probabilistic grammars. (See the Norvig article for footnotes linking to the relevant papers.)
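To make "probabilistic grammar" concrete, here's a minimal sketch (Python, toy rules and names invented for illustration, not taken from any of the cited papers): a tiny PCFG with a recursive rule, where each expansion carries a probability that could in principle be estimated from positive examples alone, which is the kind of model that sidesteps the classic learnability argument.

```python
import random

# Toy probabilistic context-free grammar (PCFG).
# Each nonterminal maps to a list of (right-hand side, probability) pairs.
# NP is recursive via PP -> "with" NP, but the recursion terminates with
# probability 1 because the recursive expansion has probability < 1.
PCFG = {
    "S":  [(["NP", "VP"], 1.0)],
    "NP": [(["the", "N"], 0.7),
           (["the", "N", "PP"], 0.3)],
    "PP": [(["with", "NP"], 1.0)],
    "VP": [(["sleeps"], 0.5), (["runs"], 0.5)],
    "N":  [(["dog"], 0.5), (["cat"], 0.5)],
}

def sample(symbol="S"):
    """Sample a sentence by recursively expanding nonterminals
    according to their rule probabilities."""
    if symbol not in PCFG:          # terminal word
        return [symbol]
    rhss, probs = zip(*PCFG[symbol])
    rhs = random.choices(rhss, weights=probs, k=1)[0]
    words = []
    for sym in rhs:
        words.extend(sample(sym))
    return words

if __name__ == "__main__":
    for _ in range(3):
        print(" ".join(sample()))
```

The probabilities attached to each rule are the whole point: a learner that tracks rule frequencies can prefer one grammar hypothesis over another using only the sentences it actually hears.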
And of course, LLMs generate perfectly reasonable natural language without any "universal grammar", or indeed, much structure beyond "predict the next token using a lot of transformer layers."
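Just to spell out what "predict the next token" means as an objective, here's a deliberately tiny count-based sketch (Python; obviously not a transformer and not how any real LLM is built, just the same conditional-prediction idea shrunk to bigrams over an invented toy corpus):

```python
from collections import defaultdict, Counter
import random

def train_bigram(corpus):
    """Count next-token frequencies: a crude stand-in for the
    next-token distribution a transformer learns to approximate."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def generate(counts, max_len=20):
    """Emit tokens one at a time, always conditioning on the previous one."""
    token, out = "<s>", []
    for _ in range(max_len):
        next_tokens, freqs = zip(*counts[token].items())
        token = random.choices(next_tokens, weights=freqs, k=1)[0]
        if token == "</s>":
            break
        out.append(token)
    return " ".join(out)

corpus = ["the dog sleeps", "the cat runs", "the dog runs"]
model = train_bigram(corpus)
print(generate(model))
```

LLMs condition on a long context through many transformer layers rather than a single previous token, but the training signal is the same: predict what comes next, with no grammar built in.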
I'm pretty sure that most of Chomsky's theoretical model is dead at this point, but that's a long discussion and he's unlikely to agree.
Chomsky had a stroke a while back, which apparently left him unable to speak. But I guarantee that there are many linguists who would not agree that his model is dead.
As for LLMs, at present they require orders of magnitude more training data than children are exposed to, so it's unclear they have anything to say about how humans learn language.