I believe LLMs are both data and processing, but even human reasoning relies in strong ways on existing knowledge. However, for the purposes of the post, it is indeed memorization that is the key value, together with the fact that, in the future, sampling such models can likely be used to transfer the same knowledge to bigger LLMs, even if the source data is lost.
I'm not saying there is no latent reasoning capability. It's there. It just seems that the memory-and-lookup component is much more useful and powerful.
To me, intelligence describes something much more capable than what I see in these things, even the bleeding-edge ones. At least so far.
I offer a POV that is in the middle: reasoning is powerful for evaluating which of N solutions in the context is better. Memorization allows sampling many competing ideas from the problem space, then the LLM picks the best, which is what makes chain of thought so effective. Of course zero-shot reasoning is also part of the story, but it is somewhat weaker, exactly as we are often unable to spit out the best solution before evaluating the space (unless we are very accustomed to the specific problem).
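A minimal sketch of that sample-then-evaluate idea, just to make the argument concrete. The `generate` and `score` callables here are hypothetical placeholders for an LLM sampling call and a verification step, not any real API:

```python
import random

def best_of_n(prompt, generate, score, n=8):
    """Sample n candidate solutions, then return the one the scorer rates highest.

    `generate(prompt)` and `score(prompt, candidate)` are hypothetical callables
    standing in for an LLM sampling call and an evaluation/verification step.
    """
    # "Memorization" side: draw many competing ideas from the problem space.
    candidates = [generate(prompt) for _ in range(n)]
    # "Reasoning" side: evaluate the candidates and pick the best one.
    return max(candidates, key=lambda c: score(prompt, c))

# Toy usage: sampling proposes guesses, evaluation checks them.
guesses = ["17", "23", "42"]
pick = best_of_n(
    "what is 6 * 7?",
    generate=lambda p: random.choice(guesses),
    score=lambda p, c: 1.0 if c == "42" else 0.0,
)
print(pick)  # "42" whenever it appears among the samples
```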
That's the problem with the term "intelligence". Everyone has their own definition; we don't even know what makes us humans intelligent, and more often than not it's a moving goalpost as these models get better.