I'm still not on board with the (seemingly prevalent) notion that LLMs can't reason. What is reasoning, anyway? I'm not actively advocating for either side, but the arguments against reasoning have always felt rather tautological to me.
The burden of proof is on the claim that they _are_ reasoning, and I have seen very little evidence that they are.
It's also immediately clear to me, looking at the architecture of transformers, that reasoning is not in the cards. I could be convinced otherwise if, again, someone showed me an indication of reasoning behavior. Since there is no such evidence, and a systems-theory view tells me reasoning is not plausibly happening, I have a pretty darn good reason not to believe it's reasoning.
That's not a tautology. That's a summary of the argument, not the argument itself. If you want to know more, a good reason why it can't be reasoning is that there is no evaluation of the truth value of any statement at any point, only of the likelihood of the statement appearing in the training set. That evaluation has no relationship with truth.
If no statement is ever evaluated for truth, it's not logical reasoning, because logical reasoning requires evaluating the truth values of statements.
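The point about likelihood versus truth can be made concrete with a toy decoding step. The logits below are invented for illustration (no real model is being queried): the decoder converts scores into probabilities and picks the most likely continuation, with no truth predicate anywhere in the loop.

```python
import math

# Hypothetical, made-up logits for continuations of "The capital of
# Australia is". Scores reflect likelihood under the training
# distribution, not correctness.
logits = {"Sydney": 3.2, "Canberra": 2.9, "Melbourne": 1.1}

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    m = max(scores.values())  # subtract max for numerical stability
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
# Greedy decoding picks the highest-probability token -- here the
# (false) "Sydney" -- because likelihood, not truth, is what is scored.
next_token = max(probs, key=probs.get)
```

In this toy setup the false answer wins simply because it was assigned the higher score; nothing in the pipeline could flag that.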
> there is no evaluation of the truth value of any statement at any point
You could argue that the attention part of the network acts as some form of truth validation of the next predicted token? But indeed, current chat interfaces don't retroactively change their previously emitted text.
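For what it's worth, mechanically the attention step is a soft lookup rather than any kind of validation. A minimal sketch of scaled dot-product attention over toy 2-d vectors (pure stdlib, no real model weights) shows it just returns a probability-weighted average of the value vectors:

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention for one query over toy vectors.

    Scores each key against the query, softmaxes the scores, and
    returns the weighted average of the values -- a soft lookup,
    with nothing resembling a truth check.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)  # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Weighted average of value vectors under the attention weights.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# A query that strongly matches the first key retrieves (almost
# exactly) the first value vector.
out = attention([10.0, 0.0], [[10.0, 0.0], [0.0, 10.0]],
                [[1.0, 0.0], [0.0, 1.0]])
```

Whether a weighted average over context vectors counts as "validation" is exactly the definitional question being argued here.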
Still, I am not really convinced. If we assume a human can reason, what does "evaluation of the truth value" mean? Thinking about something? That is still performed against our implicit mental model of the world, which itself comes from shadows on a cave's wall, right?