"By design" LLMs lack: initiative, emotion, creativity, curiosity, opinions, beliefs, self-reflection, or even logical reasoning. All they can do is predict the next token - which is still an extremely powerful building block on its own, but nothing like the above.
You've made a reasonable argument that LLMs cannot on their own be an implementation of GAI. But GP's claim was stronger: that LLMs won't even be a component (or "building block") of the first GAI.
One might reasonably ask a frontier model how to generate the source code for an agent-based system that exhibits examples of initiative, emotion, creativity, curiosity, opinions, beliefs, self-reflection, or even logical reasoning.
I believe at that point we would have to seriously ask ourselves about the definition of "reasoning" or "intelligence"; humans have an intuitive understanding of those terms, LLMs don't. Would an LLM be able to evaluate the output of another LLM, or would we have to involve a "human in the loop"[1]?
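For concreteness, here is a minimal sketch of what "an LLM evaluating the output of another LLM" might look like, with a human-in-the-loop escalation path. The call_model helper is hypothetical (a stand-in for whichever completion API is in use), and the pass/fail rubric is deliberately crude; this illustrates the pattern, it is not a claim that such a loop settles the definitional question.

    # Minimal sketch: one LLM judging another's output, with a human fallback.
    # `call_model` is a hypothetical stand-in for any chat-completion API;
    # nothing here depends on a specific provider.

    from dataclasses import dataclass

    @dataclass
    class Judgement:
        verdict: str   # "pass", "fail", or "escalate"
        rationale: str

    def call_model(prompt: str) -> str:
        """Hypothetical hook: send `prompt` to a frontier model, return its reply."""
        raise NotImplementedError("wire this to your completion API of choice")

    def judge_with_llm(task: str, answer: str) -> Judgement:
        """Ask a second model to grade the first model's answer against the task."""
        prompt = (
            "You are reviewing another model's answer.\n"
            f"Task: {task}\n"
            f"Answer: {answer}\n"
            "Reply with PASS or FAIL on the first line, then a one-sentence reason."
        )
        reply = call_model(prompt)
        first_line, _, rest = reply.partition("\n")
        verdict = "pass" if first_line.strip().upper().startswith("PASS") else "fail"
        return Judgement(verdict=verdict, rationale=rest.strip())

    def evaluate(task: str, answer: str) -> Judgement:
        """LLM-as-judge first; escalate to a human when the judge fails the answer."""
        judgement = judge_with_llm(task, answer)
        if judgement.verdict == "fail":
            # Human in the loop: the judge's rationale becomes the reviewer's brief.
            return Judgement(verdict="escalate", rationale=judgement.rationale)
        return judgement

Whether a verdict produced this way counts as "evaluation" in any deeper sense is, of course, exactly the question being asked above.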