LLMs are autoregressive: to generate multi-word output, you take the output so far, feed it back in as input, and get the next word (token). This loop is the basis for recursive thinking.
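Concretely, the loop looks something like this (a minimal sketch using the Hugging Face transformers library with GPT-2 and greedy decoding, just to illustrate the feed-output-back-in step, not any particular production setup):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tok("The cat sat on the", return_tensors="pt").input_ids

for _ in range(20):
    logits = model(input_ids).logits                          # forward pass over everything generated so far
    next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)   # greedy: take the most likely next token
    input_ids = torch.cat([input_ids, next_id], dim=-1)       # append the output and feed it back in as input

print(tok.decode(input_ids[0]))
```

Each pass through the loop conditions on the full sequence produced so far, which is exactly the recursion described above.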
You could argue that current context-window limits cap the total memory of such a system. That's true, but it's not functionally different from brains or computers, which also work with finite working memory.