I define a new word, the podition. A podition is any object that can fit on a podium. Is a computer a podition? Why?
ChatGPT:
The term "podition" is a newly coined word, and its definition as "any object that can fit on a podium" is quite specific. Whether or not a computer can be considered a "podition" depends on the size and design of the podium in question, as well as the specific characteristics of the computer.
If the podium is large enough and designed to accommodate objects of various sizes, then a computer could potentially be considered a podition if it fits on the podium without exceeding its space or causing any issues. However, if the podium is small or not designed to hold objects like a computer, then it may not qualify as a podition.
In summary, whether a computer is a podition or not depends on the podium's size and design and whether the computer can comfortably fit on it without causing any problems. The term "podition" itself is a new and specific concept, so its application may vary depending on context.
User:
Using common sense definitions of a computer and a podium, is a computer a podition?
ChatGPT:
Using common sense definitions of a computer and a podium, a computer is typically not considered a "podition." A computer is an electronic device used for processing and storing data, while a podium is a raised platform or stand typically used for holding notes, books, or supporting a speaker during a presentation. These two objects serve very different purposes and have distinct characteristics.
A computer is not designed to be placed on a podium, and it would generally be impractical to do so. A podium is meant for supporting objects like papers, books, or notes during a presentation, not electronic devices like computers. Therefore, in common usage and understanding, a computer would not be classified as a "podition."
Yes, and it's honestly pretty funny that you think this proves otherwise. The error is mine: a computer is not always small enough to fit on a podium. It's not about a "common sense" definition or not, and GPT caught that.
Go ahead and replace "computer" with any object that is always small enough to fit on a podium.
The mental gymnastics are beautiful and entertaining.
Google image "podium". Google image "computer". The vast majority of computers shown can be placed on the vast majority of podiums shown.
But again, as the other commenter points out, this is abductive reasoning.
In addition to getting lost in the irrelevant weeds of "these two objects serve very different purposes and have distinct characteristics", ChatGPT also fails to point out that there are two common definitions of "podium": one made for lecturing, on which you could place most modern computers that aren't 30kg hunks of metal (all of my university lecturers definitely had computers on the podium), and one made for humans, on which you could definitely place any bulky desktop computer short of a mainframe.
What this really shows, though, is that LLM output is a mirror in which people mostly read what they want to read, whatever matches their narrative.
>The mental gymnastics are beautiful and entertaining.
Sure
>But again, as the other commenter points out, this is abductive reasoning.
Deduction is divining the effect from rule and cause. The rule is the definition of a podition, the cause is that a computer is small enough to fit on a podium, and the effect is that a computer is a podition.
>Google image "podium". Google image "computer". The vast majority of computers shown can be placed on the vast majority of podiums shown.
Deduction does not deal with probable statements. The rule and cause must both be true. My cause is not always true.
Like I said, I genuinely do think it's funny that you read all that and decided it showed an absence of reasoning. Just kind of shows the state of things.
The example I gave is deductive, i.e. divining the effect from rule and cause. The rule is the definition of a podition, the cause is that a computer is small enough to fit on a podium, and the effect is that a computer is a podition.
Induction is divining the rule from the cause and effect. It can definitely fail. There could be exceptions to the general rule that aren't captured in observation. It is the basis of experimental science.
Abduction is divining the cause from the rule and effect.
E.g. when it rains, the grass gets wet (rule). The grass is wet (effect). Therefore, I predict that it just rained (cause).
Both induction and abduction deal with probable statements. Only deduction is guaranteed to be true (provided the rule and cause are true).
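To make those three forms concrete, here's a rough Python sketch of the thread's own examples. It's only illustrative; the function and parameter names (is_podition, fits_on_podium, guess_it_rained, grass_is_wet) are made up for this comment, not anything from the original exchange:

    # Deduction: rule + cause -> effect (guaranteed, if both premises hold).
    # Rule: any object that fits on a podium is a podition.
    # Cause: this object fits on a podium.
    def is_podition(fits_on_podium: bool) -> bool:
        return fits_on_podium

    # Abduction: rule + effect -> probable cause (only a plausible guess).
    # Rule: when it rains, the grass gets wet.
    # Effect: the grass is wet.
    def guess_it_rained(grass_is_wet: bool) -> bool:
        return grass_is_wet  # could just as well have been a sprinkler

    # Deduction only goes through when the cause is actually true:
    print(is_podition(fits_on_podium=True))    # True  (e.g. a laptop on a lectern)
    print(is_podition(fits_on_podium=False))   # False (a 30kg workstation; the rule doesn't apply)

    # Abduction can be wrong even when rule and effect are true:
    print(guess_it_rained(grass_is_wet=True))  # True, but only a guess at the cause

    # Induction would go the other way: observe many (rain, wet grass) pairs and
    # generalize the rule "when it rains, the grass gets wet" -- a rule that may
    # have exceptions not captured in the observations.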
I’m sorry, you’re right. I dashed off a quick reply and wrote the wrong word as I was distracted.
However, I still stand by the statement that their inductive and deductive reasoning is weaker than abductive. This is why they so easily hallucinate: the very nature of choosing the semantically likely next token is at its root abductive.
GPT is remarkable, but it’s not reasoning in any meaningful sense. It’s not structuring logical constructs and drawing conclusions. I’d hold by my assertion that they are abductively simulating inductive and deductive reasoning.
>However I still stand by the statement that their inductive and deductive reasoning is weaker than abductive
Technically, abduction is the weakest form of reasoning, in the sense that it is the type likeliest to form incorrect conclusions. The conclusions are wrong if you decide on the wrong rule. In the example, rain isn't the only rule that could make the grass wet. It could be a sprinkler.
However, having a good sense of which rule to pick for conclusions? I agree it is by far the hardest to replicate in an artificial system.
>GPT is remarkable, but it’s not reasoning in any meaningful sense. It’s not structuring logical constructs and drawing conclusions. I’d hold by my assertion that they are abductively simulating inductive and deductive reasoning.
Seeing output from GPT that demonstrates intelligence, reasoning, or whatever, and saying it is not real reasoning/intelligence etc. is like looking at a plane soar and saying that the plane is fake flying. And this isn't even a nature versus artificial thing either; the origin point is entirely arbitrary.
You could just as easily move the origin to Bees and say, "oh, birds aren't really flying". You could move it to planes and say, "oh, helicopters aren't really flying." It's basically a meaningless statement.
If it can do and say things demonstrating induction or deduction then it is performing induction or deduction.
>It’s not structuring logical constructs and drawing conclusions
I don't think people are structuring logical constructs with every deduction they make.
I don’t think people always do deductive reasoning when they attempt to do it. In fact, I think people largely do abductive reasoning, even when they attempt deductive reasoning. Machines are better at deductive reasoning because, sans some special-purpose approach, they can do nothing but follow the rules.
This is specifically why I think LLMs are so enchanting to humans: their behavior and logic is less sterile and more human in nature, precisely because it takes a “most likely” approach based on its training data. With lots of examples of deductive reasoning it can structure a response that is deductively reasoned - until it doesn’t. The fact that it can fail in the process of deductive reasoning shows it’s not actually deductively reasoning. This doesn’t mean it can’t produce results that are deductive - but it’s literally unable to formulate a set of rules and apply those rules in sequence to arrive at a conclusion based on the premise. It formulates a series of most-likely tokens based on its training and context, so while it may quite often arrive at a conclusion that is deductive, it never actually deduced anything.
I feel like you feel I’m somehow denigrating the output of the models. I’m not. I’m in fact saying we already have amazing deductive solvers and other reasoning systems that can do impressive proofs far beyond the capability of any human or LLM. But we have never built something that can abductively reason over an abstract semantic space, and that is AMAZING. Making LLMs perform rigorous deductive reasoning is, IMO, a non-goal. Making a system of models and techniques that leverages best-of-breed components, and firmly plants the LLM in the space of abstract semantic abductive reasoning as the glue that unites everything, is what we should be focused on. Then, instead of spending 10 years making an LLM that can beat a high school chess champion, we can spend two months integrating a world-class chess AI into a system that can delegate to the chess solver when it plays chess.
LLMs are great at induction.
In a broad sense, they are also very good at deduction.
"I define a new word, the podition. A podition is any object that can fit on a podium. Is a computer a podition ? Why ?"
A correct answer is deductive.
LLMs eat these kinds of questions for breakfast. Even the OG 2020 GPT-3 could manage them.
You really do have to stretch deduction to heights most people struggle with before they falter in any major way.