>However I still stand by the statement that their inductive and deductive reasoning is weaker than abductive
Technically, abductive is the weakest form of reasoning, in the sense that it is the type of reasoning likeliest to produce incorrect conclusions. The conclusions are wrong if you decide on the wrong rule. In the example, there are rules other than rain that could make grass wet. It could be a sprinkler.
However, having a good sense of which rule to pick for your conclusions? I agree that is by far the hardest thing to replicate in an artificial system.
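To make the asymmetry concrete, here's a toy sketch (all names made up) of the wet-grass example. Running a rule forward gives you a single guaranteed effect; running the rules backward only gives you a list of candidate causes to choose from:

```python
# Toy rule base for the wet-grass example: each rule maps a cause to an effect.
RULES = {
    "rain": "wet grass",
    "sprinkler": "wet grass",
    "frost": "white grass",
}

def deduce(cause):
    """Deduction: apply a rule forward. If the cause holds, the effect follows."""
    return RULES.get(cause)

def abduce(effect):
    """Abduction: run the rules backward, collecting every cause that could explain the effect."""
    return [cause for cause, eff in RULES.items() if eff == effect]

print(deduce("rain"))       # 'wet grass'  (one answer, guaranteed by the rule)
print(abduce("wet grass"))  # ['rain', 'sprinkler']  (ambiguous; pick the wrong one and the conclusion is wrong)
```

The hard part isn't enumerating the candidate causes; it's having a good enough model of the world to pick the right one.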
>GPT is remarkable, but it’s not reasoning in any meaningful sense. It’s not structuring logical constructs and drawing conclusions. I’d hold by my assertion that they are abductively simulating inductive and deductive reasoning.
Seeing output from GPT that demonstrates intelligence, reasoning, or whatever, and saying it is not real reasoning/intelligence, etc., is like watching a plane soar and saying that the plane is fake flying. And this isn't even a nature-versus-artificial thing either. The origin point is entirely arbitrary.
You could just as easily move the origin to bees and say, "oh, birds aren't really flying." You could move it to planes and say, "oh, helicopters aren't really flying." It's basically a meaningless statement.
If it can do and say things demonstrating induction or deduction, then it is performing induction or deduction.
>It’s not structuring logical constructs and drawing conclusions
I don't think people are structuring logical constructs with every deduction they make.
I don’t think people always do deductive reasoning when they attempt to do it. In fact, I think people largely do abductive reasoning, even when they attempt deductive reasoning. Machines are better at deductive reasoning because, sans some special-purpose approach, they can do nothing but follow the rules.
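As a caricature of "nothing but follow the rules", here's a toy forward-chaining deducer (made-up facts): it mechanically fires rules until nothing new follows, and there is no notion of "most likely" anywhere in it.

```python
# Toy forward-chaining deducer: apply rules until a fixed point is reached.
facts = {"socrates is human"}
rules = [
    ({"socrates is human"}, "socrates is mortal"),
    ({"socrates is mortal"}, "socrates will die"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        # Fire a rule only when all of its premises are already established facts.
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'socrates is human', 'socrates is mortal', 'socrates will die'}
```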
This is specifically why I think LLMs are so enchanting to humans: its behavior and logic are less sterile and more human in nature, precisely because the approach is "most likely, given the training data." With lots of examples of deductive reasoning it can structure a response that is deductively reasoned, until it doesn't. The fact that it can fail partway through deductive reasoning shows it's not actually deducing. This doesn't mean it can't produce results that are deductive; it means it's literally unable to formulate a set of rules and apply those rules in sequence to arrive at a conclusion from the premises. It formulates a series of most-likely tokens based on its training and context, so while it may quite often arrive at a conclusion that is deductively valid, it never actually deduced anything.
I feel like you feel I'm somehow denigrating the output of the models. I'm not. I'm in fact saying we already have amazing deductive solvers and other reasoning systems that can do impressive proofs far beyond the capability of any human or LLM. But we have never built something that can abductively reason over an abstract semantic space, and that is AMAZING. Making LLMs perform rigorous deductive reasoning is, IMO, a non-goal. What we should be focused on is building a system of models and techniques that leverages best-of-breed components and firmly plants the LLM in the space of abstract semantic abductive reasoning, as the glue that unites everything. Then, instead of spending 10 years making an LLM that can beat a high school chess champion, we can spend two months integrating a world-class chess AI into a system that delegates to the chess solver when it plays chess.
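Something like this, very roughly (both helper functions are placeholders I made up, not real APIs):

```python
def chess_engine_move(fen):
    """Placeholder for a call into a dedicated chess engine (e.g. over UCI)."""
    raise NotImplementedError

def llm_answer(prompt):
    """Placeholder for a call into a general-purpose language model."""
    raise NotImplementedError

def looks_like_chess(query):
    # Crude heuristic for the sketch: a FEN position has 8 ranks separated by 7 slashes.
    return query.count("/") == 7

def answer(query):
    if looks_like_chess(query):
        return chess_engine_move(query)  # rigorous, special-purpose solver
    return llm_answer(query)             # the LLM as abductive, semantic glue
```

The LLM's job in that picture is the routing and the semantic glue, not the rigorous search.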