> and wonder how an intelligent person can still think this, can be so absolute about it. What is "actual" reasoning here?
Large language models excel at processing and generating text, but they fundamentally operate on existing knowledge. Their creativity appears limited to recombining known information in novel ways, rather than generating truly original insights.
True reasoning capability would involve the ability to analyze complex situations and generate entirely new solutions, independent of existing patterns or combinations. This kind of deep reasoning ability seems to be beyond the scope of current language models, as it would require a fundamentally different approach—what we might call a reasoning model. Currently, it's unclear to me whether such models exist or if they could be effectively integrated with large language models.
> True reasoning capability would involve the ability to analyze complex situations and generate entirely new solutions, independent of existing patterns or combinations.
"Their creativity appears limited to recombining known information"
There are theories that this is true for humans as well.
There are no human-created images whose elements weren't first observed in nature in some way.
For example, devils, demons, and angels were described in terms of human body parts, or as goats with horns. Once we got microscopes and started drawing insects, art got a lot weirder, but only after those images had been observed in reality. Then humans could recombine them.
Humans can suddenly "jump" cognitive levels to see higher-order patterns: Gödel seeing that mathematics could describe mathematics itself. This isn't combining existing patterns; it's seeing entirely new levels of abstraction.
The human brain also excels at taking complex systems and creating simpler mental models: Newton seeing planetary motion and falling apples as the same phenomenon. That compression isn't recombination; it's finding the hidden simplicity.
Recombination adds elements together. Insight often removes elements to reveal core principles, and that requires understanding and reasoning.