Having studied those things, I can say that from their perspective “what’s the difference?” is an entirely legitimate question.
Boldly asserting that what LLMs do is not cognition is even worse than asserting that it is. (If you dig deep into how they do what they do, you find functional differences, but the outcomes are equivalent.)
The Butlerian view is actually a great place to start. He asserts that when we solve a problem through thinking and then express that solution in a machine, we’re building a thinking machine.
Because it’s an expression of our thought.
Take for example the problem of a crow trying to drink from a bottle with a small neck. The crow can’t reach the water. It figures out that pebbles in the bottle raise the level so it drops pebbles till it can reach the water.
That’s thinking. It’s non-human thinking, but I think we can all agree it counts.
Now express that same thought in a machine (use displacement by something other than water to raise the water to a level where it can do something useful).
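To make “expressing a thought in a machine” concrete, here is a minimal sketch of the crow’s pebble trick as a trivial program. The bottle geometry, pebble volume, and reachable level are all hypothetical numbers chosen purely for illustration, not anything from Butler.

```python
# A minimal sketch of the crow's solution frozen into a "one shot" machine.
# All values are hypothetical: a cylindrical bottle, uniform pebbles, and a
# fixed reachable depth are assumptions made only for illustration.

BOTTLE_CROSS_SECTION_CM2 = 10.0   # cross-sectional area of the bottle
PEBBLE_VOLUME_CM3 = 2.0           # water volume each pebble displaces
REACHABLE_LEVEL_CM = 18.0         # level the beak can reach
water_level_cm = 12.0             # starting water level

pebbles_dropped = 0
while water_level_cm < REACHABLE_LEVEL_CM:
    # Each pebble raises the level by (displaced volume / cross-sectional area).
    water_level_cm += PEBBLE_VOLUME_CM3 / BOTTLE_CROSS_SECTION_CM2
    pebbles_dropped += 1

print(f"Dropped {pebbles_dropped} pebbles; water level is now {water_level_cm:.1f} cm")
```

The program does exactly one thing, which is the point: it is the solution to one problem, expressed as a mechanism.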
Any machine that does that expresses the cognition behind the solution to that particular problem. That might be a “one shot” machine. Butler argues that as we surround ourselves with those one-shot machines we become enslaved to them, because we can’t go about our lives without them. We are willing partners in that servitude, but slaves nonetheless, because we see to the care and feeding of our machine masters: we reproduce them, we maintain them, we power them.
His definition of thinking is quite specific. And any machine that expresses the solution to a problem is expressing a thought.
Now what if you had a machine that could generalize and issue solutions to many problems? Might that be a useful tool? Might it be so generally useful that we’d come to depend on it? From the Butlerian perspective our LLMs are already AGI. Namely, I can go to Claude and ask for the solution to pretty much any problem I face and get a reasonable answer.
In many cases better than I could have done alone. So perhaps, if we sat down and ran a double-blind test, LLMs are already ASI (AI that exceeds the capability of normal humans).
> Boldly asserting that what LLMs do is not cognition is even worse than asserting that it is.
Why? Understanding concepts like "cognition" is a matter of philosophy, not of science.
> He asserts that when we solve a problem through thinking and then express that solution in a machine we’re building a thinking machine. Because it’s an expression of our thought.
Yeah, and that premise makes no sense to me. The crow was thinking; the system consisting of the crow's beak dropping pebbles into the water, plus the pebbles themselves, was not. Humanity has built all kinds of machines that use no logic whatsoever in their operation - which make no decisions, and operate in exactly one way when explicitly commanded to start, until explicitly commanded to stop - and yet we have solved human problems by building them.
> Boldly asserting that what LLMs do is not cognition is even worse than asserting that it is.
That's the issue I was driving at. The machine is so convincing. How can we say what it does is not "thinking" when it seems to break down a query the way a human does? The distinction between what an AI is and what an LLM is is so thin that most of us will ignore it and conflate the two, because you really need to see what is under the hood before you understand that the responses you're getting come from a "model", not some sentient thinking machine.
But what does it matter if it is from a "model" that understands text? It still produces more or less what other humans produce. Most of us won't care about the difference.