Many philosophical traditions which incorporate a meditation practice emphasize that your consciousness is distinct from the contents of your thoughts. Meditation (even practiced casually) can provide a direct experience of this.
When it comes to the various kinds of thought-processes that humans engage in (linguistic thinking, logic, math, etc.) I agree that you can describe things in terms of functions that have definite inputs and outputs. So human thinking is probably computable, and I think that LLMs can be said to “think” in ways that are analogous to what we do.
But human consciousness produces an experience (the experience of being conscious) as opposed to some definite output. I do not think it is computable in the same way.
I don’t necessarily think that you need to subscribe to dualism or religious beliefs to explain consciousness - it seems entirely possible (maybe even likely) that what we experience as consciousness is some kind of illusory side-effect of biological processes as opposed to something autonomous and “real”.
But I do think it’s still important to maintain a distinction between “thinking” (computable, we do it, AIs do it as well) and “consciousness” (we experience it, probably many animals experience it also, but it’s orthogonal to the linguistic or logical reasoning processes that AIs are currently capable of).
At some point this vague experience of awareness may be all that differentiates us from the machines, so we shouldn’t dismiss it.
> It's very difficult to find some way of defining rather precisely something we can do that we can say a computer will never be able to do. There are some things that people make up that say that, "While it's doing it, will it feel good?" or, "While it's doing it, will it understand what it's doing?" or some other abstraction. I rather feel that these are things like, "While it's doing it, will it be able to scratch the lice out of its hair?" No, it hasn't got any hair nor lice to scratch from it, okay?
> You've got to be careful when you say what the human does, if you add to the actual result of his effort some other things that you like, the appreciation of the aesthetic... then it gets harder and harder for the computer to do it because the human beings have a tendency to try to make sure that they can do something that no machine can do. Somehow it doesn't bother them anymore, it must have bothered them in earlier times, that machines are stronger physically than they are...
You need to define "consciousness" first for the question to have any meaning, but all our definitions of consciousness seem to ultimately boil down to, "this thing that I'm experiencing".
What about the famous solution provided by Descartes, “Cogito ergo sum”? Let's grant that “we think” and that thinking can be put into a function, so it's computable; how is that going to prove that “I exist” for a machine? How is the machine going to perceive itself as a conscious being?
> When it comes to the various kinds of thought-processes that humans engage in (linguistic thinking, logic, math, etc) I agree that you can describe things in terms of functions that have definite inputs and outputs.
"Function" can mean an input-output mapping. But it can also mean a system behavior.
For instance, recurrence is a functional behavior, not a functional mapping.
Similarly, self-awareness is some kind of internal loop of information, not an input-output mapping. Specifically, an information loop regarding our own internal state.
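To make the contrast concrete, here is a minimal, purely illustrative Python sketch (the names are mine, nothing standard): a stateless input-output mapping versus a process whose next step depends on, and keeps a record of, its own internal state.

```python
# Illustrative sketch only: input-output mapping vs. a recurrent process
# that loops information about its own internal state.

def pure_mapping(x):
    # Same input always yields the same output; nothing is retained.
    return x * 2

class RecurrentMonitor:
    def __init__(self):
        self.state = 0.0   # internal state
        self.history = []  # the process's record of its own past states

    def step(self, x):
        # The new state depends on the input *and* on the previous state,
        # and the process also observes (records) its own state.
        self.state = 0.5 * self.state + x
        self.history.append(self.state)
        return self.state

monitor = RecurrentMonitor()
for value in [1.0, 0.0, 0.0]:
    monitor.step(value)
print(monitor.history)  # the loop has access to its own trajectory
```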
Today's LLMs are mostly not very recurrent. So they might be said to be becoming more intelligent (better responses to complex demands), but not necessarily more conscious. An input-output process has no ability to monitor itself, no matter how capable it is of generating outputs. Not even when its outputs involve symbols and reasoning about concepts like consciousness.
So I think it is fair to say intelligence and consciousness are different things. But I expect that both can enhance the other.
Meditation reveals a lot about consciousness. We choose to eliminate most thought, focusing instead on some simple experience like breathing, or a concept of "nothing".
Yet even with this radical reduction in general awareness and higher-level thinking, we remain aware of our awareness of experience. We are not unconscious.
To me that basic self-awareness is what consciousness is. We have it, even when we are not being analytical about it. In meditation our mind is still looping information about its current state, from the state to our sensory experience of our state, even when the state has been reduced so much.
There is not nothing. We are not actually doing nothing. Our mental resting state is still a dynamic state we continue to actively process, that our neurons continue to give us feedback on, even when that processing has been simplified to simply letting that feedback of our state go by with no need to act on it in any way.
So consciousness is inherently at least self-awareness, in the sense of internal access to our own internal activity, plus the fact that we retain a memory of doing this minimal active or passive self-monitoring even after we resume more complex activity.
My own view is that that is all it is, with the addition of enough memory of the minimal loop, and a rich enough model of ourselves, to be able to consider that strange self-aware looping state afterwards and ask questions about its nature, etc.
LLMs are recurrent in the sense that you describe, though, since every token of output they produce is fed back to them as input. Indeed, that is why reasoning models are possible in the first place, and it's not clear to me why the chain-of-thought is not exactly that kind of "internal loop of information" that you mention.
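For concreteness, the loop in question is just autoregressive decoding: every generated token is appended to the context and fed back in. A rough sketch, where `next_token` is a stand-in for an actual model call, not any particular library's API:

```python
# Rough sketch of autoregressive decoding: every output token becomes
# part of the next input. `next_token` is a placeholder, not a real API.

import random

def next_token(context):
    # A real LLM would sample from the distribution it predicts
    # given the entire context so far.
    return random.choice(["the", "cat", "sat", "<eos>"])

context = ["<bos>"]
while context[-1] != "<eos>" and len(context) < 20:
    token = next_token(context)  # output...
    context.append(token)        # ...fed straight back in as input
print(context)
```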
> Meditation reveals a lot about consciousness. We choose to eliminate most thought, focusing instead on some simple experience like breathing, or a concept of "nothing".
The sensation of breathing still constitutes input. Nor is it a given that a thought is necessarily encodable in words, so "thinking about the concept of nothing" is still a thought, and there's some measurable electrochemical activity in the brain which encodes it. In a similar vein, LLMs deal with arbitrary tokens, which may or may not encode words - e.g. in multimodal LMs, the input includes tokens encoding images directly without any words, and the output can similarly be non-word tokens.
> chain-of-thought is not exactly that kind of "internal loop of information" that you mention.
It is, but (1) the amount of looping in models today is extremely limited. If our awareness loop is on the order of milliseconds, we experience it over thousands of milliseconds at a minimum, and we consider and consolidate our reasoning about experiences over minutes, hours, even days, which would be thousands to many millions of iterations of experiential context.
Then (2), the looping of models today is not something the model is aware of at a higher level. It processes the inputs iteratively, but it isn't able to step back and examine its own responses recurrently at a second level in a different indirect way.
That said, I do believe models can reason about themselves and behave as if they did have that higher functionality.
But their current ability to reason like that has been trained into them by human behavior, not learned independently by actually monitoring their own internal dynamics. They cannot yet do that. We do not learn we are conscious, or become conscious, by parroting others' consciousness-enabled reasoning. A subtle but extremely important difference.
Finally, (3) they don't build up a memory of their internal loops, much less a common experience from a pervasive presence of such loops.
Those are just three quite major gaps.
But they are not fundamental gaps. I have no doubt that future models will become conscious as limitations are addressed.
This is what I wrote while I was thinking about the same topic before I came across your excellent comment; it reads as if it were a summary of what you just said:
Consciousness is nothing but the ability to have internal and external senses, being able to enumerate them, recursively sense them, and remember the previous steps. If any of those ingredients are missing, you cannot create or maintain consciousness.
When I was a kid, I used to imagine that if society ever developed AI, there would be widespread pushback against the idea that computers could ever develop consciousness.
I imagined the Catholic Church, for example, would be publishing missives reminding everyone that only humans can have souls, and biologists would be fighting a quixotic battle to claim that consciousness can arise from physical structures and forces.
I'm still surprised at how credulous and accepting societies have been of AI developments over the last few years.
Probably because we've been conditioned to accept that machines, no matter how friendly, are not really conscious in the way we are, so there is no risk of them needing to be treated differently than a hammer.
AI developments over the last few years have not needed that view to change.
>it seems entirely possible (maybe even likely) that what we experience as consciousness is some kind of illusory side-effect of biological processes as opposed to something autonomous and “real”.
I've heard this idea before but I have never been able to make head or tail of it. Consciousness can't be an illusion, because to have an illusion you must already be conscious. Can a rock have illusions?
Well, it entirely depends on how you even define free will.
Btw, Turing machines provide some inspiration for an interesting definition:
Turing (and Gödel) essentially say that you can't predict what a computer program does: you have to run it to even figure out whether it'll halt. (I think in general, even if you fix some large step count n, you can't predict whether an arbitrary program will halt within n steps or not, without essentially running it anyway.)
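For reference, the standard diagonalization behind that claim, sketched loosely in Python (`would_halt` is the hypothetical perfect predictor that the argument shows cannot exist):

```python
# Loose sketch of the classic halting-problem diagonalization.
# Suppose we had a perfect predictor `would_halt(program, argument)` that
# returns True iff running program(argument) eventually halts.

def would_halt(program, argument):
    raise NotImplementedError("no such general predictor can exist")

def contrarian(program):
    # Do the opposite of whatever the predictor says about a program
    # applied to its own source.
    if would_halt(program, program):
        while True:
            pass  # loop forever
    return        # halt

# Asking whether contrarian(contrarian) halts leads to a contradiction
# either way, so `would_halt` cannot exist: in general you can only find
# out what a program does by running it.
```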
Humans could have free will in the same sense, that you can't predict what they are doing, without actually simulating them. And by an argument implied by Turing in his paper on the Turing test, that simulation would have the same experience as the human would have had.
(To go even further: if quantum fluctuations have an impact on human behaviour, you can't even do that simulation 100% accurately, because of the no cloning theorem.
To be more precise: I'm not saying, like Penrose, that human brains use quantum computing. My much weaker claim is that human brains are likely a chaotic system, so even a very small deviation in starting conditions can quickly lead to differences in outcome.
If you are only interested in approximate predictions, identical twins show that just getting the same DNA and an approximation of the environment gets you pretty far in making good predictions. So cell-level scans could do even better. But: not perfect.)
> Humans could have free will in the same sense, that you can't predict what they are doing, without actually simulating them.
I think it's a good point, but I would argue it's even more direct than that. Humans themselves can't reliably predict what they are going to do before they do it. That's because any knowledge we have is part of our deliberative decision-making process, so whenever we think we will do X, there is always a possibility that we will use that knowledge to change our mind. In general, you can't feed a machine's output into its input except for a very limited class of fixed point functions, which we aren't.
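As a loose numerical analogy for what "fixed point" means here (not a claim about cognition): feeding a function's output back into its own input only settles down when the function has an attracting fixed point, and most functions don't.

```python
import math

# Feeding output back into input: x = f(x). This only stabilizes for
# functions with an attracting fixed point.

x = 1.0
for _ in range(50):
    x = math.cos(x)   # converges to ~0.739, the attracting fixed point of cos
print(x)

y = 1.0
for _ in range(10):
    y = 2 * y + 1     # the fixed point (y = -1) is repelling, so the
                      # feedback diverges from any other starting value
print(y)
```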
So the bottom line is that seen from the inside, our self-model is a necessarily nondeterministic machine. We are epistemically uncertain about our own actions, for good reason, and yet we know that we cause them. This forms the basis of our intuition of free will, but we can't tell this epistemic uncertainty apart from metaphysical uncertainty, hence all the debate about whether free will is "real" or an "illusion". I'd say it's a bit of both: a real thing that we misinterpret.
You are right about the internal model, but I wouldn't dismiss the view from the outside.
I.e. I wouldn't expect humans without free will to be able to predict themselves very well, either. Exactly as you suggest: having a fixed point (or not) doesn't mean you have free will.
The issue I have with the view from the outside is that it risks leading to a rather anthropomorphic notion of free will, if the criterion boils down to that an entity can only have free will if we can't predict its behavior.
I'm tempted to say an entity has free will if it a) has a self-model, b) uses this self-model as a kind of internal homunculus to evaluate decision options and c) its decisions are for the most part determined by physically internal factors (as opposed to external constraints or publicly available information). It's tempting to add a threshold of complexity, but I don't think there's any objectively correct way to define one.
I don't understand why a self-model would be necessary for free will?
> [...] c) its decisions are for the most part determined by physically internal factors (as opposed to external constraints or publicly available information).
I don't think humans reach that threshold. Though it depends a lot on how you define things.
But as far as I can tell, most of my second-to-second decisions are very much coloured by the fact that we have gravity and an atmosphere at comfortable temperatures (external factors), and if you changed that all of a sudden, I would decide and behave very differently.
> It's tempting to add a threshold of complexity, but I don't think there's any objectively correct way to define one.
Your homunculus is one hell of a complexity threshold.
> I think that LLMs can be said to “think” in ways that are analogous to what we do. ... But human consciousness produces an experience (the experience of being conscious) as opposed to some definite output. I do not think it is computable in the same way.
I for one (along with many thinkers) define intelligence as the extent to which an agent can solve a particular task. I choose the definition to separate it from issues involving consciousness.
To state that it's a Turing machine might be a bit much, but there might be a map between substrates to some degree, and computers can have a form of consciousness, an inner experience: basically the hidden layers, and clearly the input of senses. But it wouldn't be the same qualia as a mind. I suspect it has more to do with chemputation and is dependent on the substrate doing the computing, as opposed to being a facility thereof, up to some accuracy limit; we can only detect light we have receptors for, after all. To have qualia distinct from another being's, you need to compute on a substrate that can accurately fool the computation - fake sugar instead of sugar, for example.
What we have and AIs don't are emotions. After all, that's what animates us to survive and reproduce. Without emotions we can't classify and therefore store our experiences, because there's no reason to remember something we are indifferent about. This includes everything not accessible by our senses. Our abilities are limited to what is needed for survival and reproduction, because all the rest would consume our precious resources.
The larger picture is that our brains are very much influenced by all the chemistry that happens around our units of computation (neurones); especially hormones. But (maybe) unlike consciousness, this is all "reproducible", meaning it can be part of the algorithm.
We don’t know that LLMs generating tokens for scenarios involving simulations of consciousness don’t already involve such experience. Certainly such threads of consciousness would currently be much less coherent and more fleeting than the human experience, but I see no reason to simply ignore the possibility. To whatever degree it is even coherent to talk about the conscious experience of anyone other than yourself (p-zombies and such), I expect that as AIs’ long-term coherency improves and AI minds become more tangible to us, people will settle into the same implicit assumption afforded to fellow humans that there is consciousness behind the cognition.
The very tricky part then is to ask if the consciousness/phenomenological experience that you postulate still happens if, say, we were to compute the outputs of an LLM by hand… while difficult, if every single person on earth did one operation per second, plus some very complicated coordination and results gathering, we could probably predict a couple of tokens for an LLM at some moderate frequency… say, a couple of tokens a month? a week? A year? A decade? Regardless… would that consciousness still have an experience? Or is there some threshold of speed and coherence, or coloration that would be missing and result in failure for it to emerge?
Impossible to answer.
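The throughput half of the thought experiment is at least estimable, though. A back-of-envelope sketch, where every number is an assumption (a hypothetical ~10^11-parameter model, ~2 multiply-adds per parameter per token, ~8 billion people each doing one hand operation per second):

```python
# Back-of-envelope only; every number below is an assumption.
params = 1e11                  # hypothetical model size
ops_per_token = 2 * params     # ~2 multiply-adds per parameter per token
people = 8e9                   # rough world population
human_ops_per_second = people  # one hand-done operation per person per second

seconds_per_token = ops_per_token / human_ops_per_second
print(seconds_per_token)       # ~25 seconds of raw arithmetic per token
```

So the raw arithmetic alone is on the order of tens of seconds per token; it's the "very complicated coordination and results gathering" that would realistically stretch that to weeks or worse.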
Btw, I mostly think it’s reasonable to believe that consciousness, phenomenology, etc. might be possible in silicon, but it’s tricky and unverifiable of course.
> would that consciousness still have an experience?
If the original one did, then yes, of course. You're performing the exact same processing.
Imagine if instead of an LLM the billions of people instead simulated a human brain. Would that human brain experience consciousness? Of course it would, otherwise they're not simulating the whole brain. The individual humans performing the simulation are now comparable to the individual neurons in a real brain. Similarly, in your scenario, the humans are just the computer hardware running the LLM. Apart from that it's the same LLM. Anything that the original LLM experiences, the simulated one does too, otherwise they're not simulating it fully.
You can simulate as much of the human as you need to. So long as consciousness is a physical process (or an emergent property of a physical process), it can be simulated.
The notion that it is not a physical process is an extraordinary claim in its own right, which itself requires evidence.
You can simulate as much of an aircraft as you need to. So long as flying is a physical process, it can be simulated.
But your simulation will never fly you over an ocean; it will never be an aircraft or do what aircraft do. A simulation of heat transfer will not cook your dinner. Your assumption that a simulation of a mind is a mind requires evidence.
> But your simulation will never fly you over an ocean
It will fly over a simulated ocean just fine. It does exactly what aircraft do, within the simulation. By adding “you” to the sentence you've made it an apples to oranges comparison because “you” is definitionally not part of the simulation. I don't see how you could add the same “you” to “it will simulate consciousness just fine”.
It doesn't move real Oxygen and Nitrogen atoms, it doesn't put exhaust gas into the air over the ocean, it doesn't create a rippling sound and pressure wave for a thousand miles behind it, it doesn't drain a certain amount of jet fuel from the supply chain or put a certain amount of money in airline and mechanics' pockets, it doesn't create a certain amount of work for air traffic controllers... the reductio ad absurdum is that a flipbook animation of a stickman aircraft moving over a wiggly-line ocean is a very low-granularity simulation and "does exactly what aircraft do" - and obviously it doesn't. No amount of adding detail to the simulation moves it one inch closer to doing 'exactly what aircraft do'.
> "I don't see how you could add the same “you” to “it will simulate consciousness just fine”"
by the same reductio-ad-absurdum I don't see how you can reject a stickman with a speech bubble drawn over his head as being "a low granularity simulated consciousness". More paper, more pencil graphite, and the stickman will become conscious when there's enough of it. Another position is that adding things to the simulation won't simulate consciousness just fine - won't move it an inch closer to being conscious; it will always be a puppet of the simulator, animated by the puppeteer's code, always wooden Pinocchio and never a real person. What is the difference between these two:
a) a machine with heat and light and pressure sensors, running some code, responding to the state of the world around it.
b) a machine with heat and light and pressure sensors, running some code [converting the inputs to put them into a simulation, executing the simulation, converting the outputs from the simulation], and using those outputs to respond to the state of the world around it.
? What is the 'simulate consciousness' step doing here at all, why is it needed? To hide the flaw in the argument; it's needed to set up the "cow == perfectly spherical massless simulated cow" premise which makes the argument work in English words. Instead of saying something meaningful about consciousness, one states that "consciousness is indistinguishable from perfectly spherical massless simulated consciousness" and then states "simply simulate it to as much detail as needed", and that allows all the details to be handwaved away behind "just simulate it even more (bro)".
Pointing out that simulations are not the real thing is the counter-argument. Whether or not the counter-argument can be made by putting "you" into a specific English sentence is not really relevant, that's only to show that the simulated aircraft doesn't do what the real aircraft does. A simulated aircraft flying over a simulated ocean is no more 'real' than drawing two stick figures having a conversation in speech bubbles.
You just wrote a lot of text just to say that you don't accept the simulation as “real”.
That's just semantics. I'm not here to argue what the word “real” means. Of course you can define it in such a way that the simulated aircraft isn't “really” flying over an ocean, and it would be just as valid as any other definition, but it doesn't say anything meaningful or insightful about the simulation.
Nobody contests your point that the simulated aircraft isn't going over a real ocean and isn't generating work for real-life air traffic controllers. But conversely you don't seem to contest the claim that oceans and air traffic controllers could be simulated, too. Therefore, consciousness can be simulated as well, and it would be a simulated consciousness that just doesn't fall into your definition of “real”.
You need to clearly define what constitutes "real" before we can meaningfully talk about the distinction between "real" atoms and simulated ones.
As far as physics go, it's all just numbers in the end. Indeed, the more we keep digging into the nature of reality, the more information theory keeps popping up - see e.g. the holographic principle.
> "As far as physics go, it's all just numbers in the end."
No it isn't; numbers are a map, maps are not the territory. You are asking me to define how a map is different from a city, but you are not accepting that the city is made of concrete and is square kilometers large and the map is made of paper and is square centimeters large as a meaningful difference, when I think it's such an obvious difference it's difficult to put any more clearly.
What constitutes a real atom: a Hydrogen atom capable of combining with Oxygen to make water, capable of being affected by the magnetic field of an MRI scanner, etc.
What constitutes a simulated atom: a pattern of bits/ink/numbers which you say "this is a representation of a Hydrogen atom", capable of nothing, except you putting some more bits/ink/numbers near it and speaking the words "this is it interacting to make simulated water".
Ok, you are saying that a map is different than the territory. That a simulation is meaningfully different.
Do you deny that you could be in a simulation right now, in the Matrix? What you think are molecules of oxygen would actually be simulated molecules, and there would be no way for you to ever tell the difference.
Is simulate the right word there? With a hundred trillion connections between 80 billion neurons, it seems unlikely that it would ever be worth simulating a human brain, because it would be simpler to just build one than to assemble a computer complex enough to simulate it.
Yes that’s my main point - if you accept the first one, then you should accept the second one (though some people might find the second so absurd as to reject the first).
> Imagine if instead of an LLM the billions of people instead simulated a human brain. Would that human brain experience consciousness? Of course it would, otherwise they're not simulating the whole brain.
However, I don’t really buy “of course it would,” or in other words the materialist premise - maybe yes, maybe no, but I don’t think there’s anything definitive on the matter of materialism in philosophy of mind. As much as I wish I were fully a materialist, I can never fully internalize how sentience can, uh, emerge from matter… in other words, to some extent I feel that my own sentience is fundamentally incompatible with everything I know about science, which, uh, sucks, because I definitely don’t believe in dualism!
It would certainly, with sufficient accuracy, honestly say to you that it's conscious and believe it wholeheartedly, but in practice it would need to be able to describe external sense data a priori, as it's not necessarily separate from the experiences, which intrinsically requires you to compute in the world itself rather than merely compute on it; in a way it's like having edge compute at the skin's edge. The range of qualia available at each moment will be distinct to each experiencer with the senses available, and there will likely be some overlap in interpretation based on your computing substrate.
In a way, we can articulate the underlying chemputation of the universe, mediated through our senses, reflection, and language; turn a piece off (as it is often non-continuous) and the quality of the experience changes.
But do you believe in something constructive? Do you agree with Searle that computers calculate? But then numbers and calculation are immaterial things that emerge from matter?