I don't think LaMDA (or any other AI) is conscious or sentient.
However, how will we be able to recognise it when we do create something that could be considered sentient? It seems inevitable that we will at some point.
Many of the arguments used to suggest AI isn't sentient seem reductive, for example saying it's just a complex algorithm or statistical system. You could similarly say human brains are just a bunch of neurons firing in sequence to emulate sentience.
In terms of capability, most of these systems seem far more advanced than my pet rabbits, but I believe most people would consider a rabbit conscious and sentient. The obvious things these AIs lack compared to animals are objectives (/emotions), a default mode network, continual learning, and medium/long-term memory. But are these necessary properties for sentience? If a human loses one of these properties, are they no longer sentient? If we granted these properties to a current-generation AI, would it be considered sentient?
Haha, this is a great way to summarize the state of our understanding of consciousness.
> The only thing I'm sure about is that I'm conscious.
A lot of people deny that consciousness exists at all, which I find funny, because for me it’s the only thing that I am absolutely certain exists. Everything else, I’m not so sure about.
I wouldn't go so far as to deny consciousness, but I will go as far as saying that I don't know whether my consciousness extends in time (to the past or to the future) or is just a moment. I believe it does, but I wouldn't be able to distinguish that from being a simulation of a single moment, so I recognise it is just a belief.
In terms of further deconstructing consciousness and the ego, a big challenge is to figure out what that actually entails. E.g. an intriguing outcome of split-brain experiments is that the brain appears willing to act as if it knows things it can't possibly know, retroactively spinning up explanations for decisions that were unknown to it, which to me raises the question of how thin a veneer consciousness is on top of underlying autonomous or semi-autonomous processes, and how unitary the mind is (or rather isn't).
There appears to potentially be a whole lot of emperor and very little cloth.
Would a simulation of a full human brain on a computer be conscious? Uhh, I guess so?
The biggest fallacy in thinking about consciousness is that it is all neatly isolated in the brain. What about your nervous system? What about your gut? It's all one big interconnected system.
I think including other parts of the body is the least of the problems. The bigger problem is that we don't understand how close to the structure of the brain (+ nervous system and gut and anything else contributing) we need to get for the simulated system to have a subjective experience of consciousness. E.g. is it enough to run a simulation on a normal computer? Does it need to process information at a certain rate to sustain this process? Is a certain physical structure of the simulating machinery required?
I don't agree that we'll necessarily be unable to reproduce it without being able to define consciousness, though. But I suspect we would agree that we won't be able to know whether we have reproduced it or just created something very good at acting as if it's conscious (and we don't know whether or not the two are equivalent).
We'll never be able to define consciousness, but that's not necessarily a bad thing.
The sooner we realize that consciousness is a slippery social concept and not a rationally definable property, the sooner we'll be able to move on from endless debates on the minutiae of consciousness that act as a bottomless sink of intellectual energy. Devices like LaMDA and DALL-E 2 act as a Rorschach of consciousness. That is to say, they tell us more about the people interacting with them than about the devices themselves.
We already have such social machinery operating in the form of sex, gender, race, class, ability, &c. Each can act as a frame[1] with respect to which a strip of events can be interpreted. Even so, each differs in who acts as the authority for the conferred frame/applied status, and there is disagreement about the descriptive and normative aspects of each. But I'd ask this: is that genuinely a problem?
I'm not being rhetorical when I ask, functionally what's the issue? Besides invoking a visceral intellectual response, what are the second-order effects of having a plurality of consciousness framing processes? And are they necessarily bad?
> what are the second-order effects of having a plurality of consciousness framing processes
The primary effect I would expect is that none of them will adequately frame the layperson's expectation of "consciousness" and will thus likely be dismissed as meaningless hype.
In some ways that's where we're at today in online discussions. There are additionally legal, political, financial, and social repercussions which we should be on the lookout for if indeed a faction of AI-is-conscious proponents begins to coagulate around the idea.
> The truth is, until we can define consciousness; we can't possibly reproduce it on a binary logic playback device (aka a "computer").
There's nothing stopping us from reproducing consciousness before we fully understand it. Evolution created consciousness via random search; I don't think you can argue that evolution "understands" the consciousness it created.
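To make the point concrete, here's a toy sketch (a made-up example, not a claim about how evolution actually proceeded): blind mutation plus selection recovers a target string, and nowhere in the loop is there any representation of why the result works.

    import random
    import string

    TARGET = "conscious"
    ALPHABET = string.ascii_lowercase

    def fitness(candidate: str) -> int:
        # Count matching positions; the search never "knows" what the word means.
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate: str) -> str:
        # Random single-character change: no understanding, just variation.
        i = random.randrange(len(candidate))
        return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

    def evolve() -> str:
        current = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
        while fitness(current) < len(TARGET):
            child = mutate(current)
            if fitness(child) >= fitness(current):  # keep anything no worse
                current = child
        return current

    print(evolve())  # reaches "conscious" with no model of why it works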
> There's nothing stopping us from reproducing consciousness before we fully understand it.
Except for the fact that this "belief" has never been borne out. This is the core tenet of the AI religion.
Can you show me one problem that a computer has ever solved without detailed understanding by someone at some level and very specific instructions and a clear logical definition of the objective?
What is commonly called AI is no exception. It's mostly statistical search and pattern matching that only occurs through specific instructions by someone who understands in detail how the search should be organized and conducted.
In order for a computer to search for "consciousness", it must be explicitly defined in terms of binary logic, because that is all a "computer" understands. Any results are logically bound to this definition. If your definition is lacking, the search results will be likewise.
I don't have a complete answer to that; however, I believe there are at least two bare-bones prerequisites to an AI being sentient and conscious as we understand the terms:
1. Continuity of input: The AI needs to be constantly "on", constantly receiving input of some sort, and constantly able to produce output, rather than being strictly limited to producing discrete responses to discrete stimuli.
2. Continuity of learning: In addition, the AI needs to continually update its "mental model" of the world, in effect constantly "learning" and re-training its neural network on the input it receives.
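Purely as an illustrative sketch of those two continuities (every name here is made up, not a reference to any real system), it might look like a loop that never stops sensing and never stops updating:

    import random
    import time

    class Model:
        # Toy stand-in for a learned "mental model": a single running estimate.
        def __init__(self):
            self.estimate = 0.0

        def update(self, observation, lr=0.05):
            # Continuity of learning: fold each new observation into the model
            # immediately, rather than retraining offline on a frozen dataset.
            self.estimate += lr * (observation - self.estimate)

        def respond(self, observation):
            return f"saw {observation:.2f}, current belief {self.estimate:.2f}"

    def sense():
        # Continuity of input: a signal arrives whether or not anyone asked anything.
        return random.gauss(1.0, 0.3)

    def run(steps=20, tick=0.01):
        model = Model()
        for _ in range(steps):          # in a real agent this loop would never end
            obs = sense()               # constantly receiving input
            model.update(obs)           # constantly updating the "mental model"
            print(model.respond(obs))   # constantly able to produce output
            time.sleep(tick)

    if __name__ == "__main__":
        run()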
Now, these are not sufficient for an AI to be conscious by our understanding of consciousness. But I, personally, believe they are both necessary for it to be even worth starting to consider whether a given AI might be.
I also believe that unless we start in that direction extremely deliberately and with the intention of making something as human-like as possible, the first AIs that have some remote chance of being worthy of being considered "conscious" will not have a consciousness that we can easily recognize, because they will not be based in anything like the same kinds of fundamentals that we are...but that's likely a different discussion for a different day.
In terms of capability, the current architecture of AI has no way to graduate to sentience, because sentience requires comprehension.
Comprehension is the process of reducing an observed phenomenon to the individual driving forces that manifest it, and then virtually recreating those driving forces in a mental simulation to assess the phenomenon's capacity to exist, i.e. to understand whether it could exist. If the virtual simulation cannot recreate the observed phenomenon, then either the phenomenon is misunderstood, because the hypothesized driving forces fail to recreate it, or the observed phenomenon does not exist and the observation is at fault.
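If it helps, here is a toy rendering of that loop (all names and numbers are illustrative only, not a claim about how minds work): propose driving forces, simulate them forward, and compare the outcome with what was observed.

    def simulate(forces, steps=100, dt=0.1, initial=1.0):
        # "Mental simulation": integrate the hypothesized driving forces forward.
        state = initial
        for _ in range(steps):
            state += dt * sum(f(state) for f in forces)
        return state

    def comprehend(observation, hypothesized_forces, tolerance=0.05):
        predicted = simulate(hypothesized_forces)
        if abs(predicted - observation) <= tolerance:
            return "recreated the phenomenon: the forces plausibly explain it"
        # Mismatch: either the hypothesized forces are wrong (misunderstood),
        # or the observation itself is at fault.
        return "mismatch: misunderstood forces, or a faulty observation"

    # Hypothetical example: a quantity observed to decay toward zero,
    # explained by a single restoring force proportional to the state.
    print(comprehend(observation=0.0, hypothesized_forces=[lambda s: -s]))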
This type of assessment, this capacity for comprehension, is an elementary component of sentience. It is also entirely beyond our current ability to do artificially: we have no "artificial comprehension" technology. Lacking that, all we can create are idiot savants.
I'm of the view that asking whether something is sentient or not is asking the wrong question. Once you believe an entity is (or isn't) sentient, it's an unfalsifiable belief. Thought experiments like the Turing test and the Chinese Room point to the fact that something could be "not sentient" and behave in the exact same way as something that is "sentient". What's the use of talking about sentience then, if we're unable to measure it in any way? What difference does it make if something is or isn't sentient, if it exhibits the same behavior either way?
The Chinese Room argument never made sense to me. By the same reasoning, humans could be considered "not sentient" because our behavior arises from the non-sentient interactions of individual neurons in our body.
I interpret that as a problem with the concept of "sentience" -- the closer you examine it, the more holes it has. I feel like I'm sentient, and so does almost everybody else (I hear). But I have to take their word for it, because there's no way for them to prove it to me -- they could be like the Chinese room for all I know. Asking whether something is sentient starts looking a lot like asking if it "has a soul". It's unanswerable, and therefore, it's the wrong question to ask. Fun to discuss, sure, but I don't find it a useful way to model reality.
The Chinese Room argument was literally an argument against mechanism/computationalism (i.e. that your mind is just a really complex neural network).
So no, "by the same reasoning" humans cannot be considered non-sentient.
In fact the original paper is not talking about sentience at all, but intelligence (these are different things).
And the argument isn't about declaring something sentient or non-sentient; it simply states that responding with expected outputs to a given input is not proof of sentience or intelligence.
It basically says you have to use some other method to determine if something is intelligent and has "understanding."
You should read up on this a lot more, it's fascinating, and you've got a very incorrect understanding of it.
It's a problem of false dichotomy. Sentience isn't a light switch; it's grains of rice that accumulate into a meaningful mass. This is also the reason the abortion debate will never be resolved. We can expect similar passion born of uncertainty in the future machine-sentience debate.
> It's grains of rice that accumulate into a meaningful mass
Suppose we could temporarily reduce these grains into something very small. For example, putting a human into deep anesthesia. Is it immoral to destroy their body at that point in time since they are less sentient than an insect?
Or do we consider their potential for sentience, as they can hypothetically be removed from the state of anesthesia in the future?
Inevitable at some point? At what point? If we are 500 years away from it, does it make sense to invest resources in figuring it out now? If humans don't survive as a species, is it really inevitable?
Another thing that most people don't get, or don't want to acknowledge, is that intelligence and consciousness are relative to the environment in which the entity is intelligent or conscious. If you take a piece of software and expect it to be conscious the same way a human is conscious, you're going to have a bad time. There is heavy, heavy projection when it comes to humans interacting with basically everything. We anthropomorphize everything.
From that vantage point, we will not be able to recognize intelligence and consciousness unless it looks exactly like our own.
>However, how will we be able to recognise it when we do create something that could be considered sentient? It seems inevitable that we will at some point.
>Many of the arguments used to suggest AI isn't sentient seem reductive, for example saying it's just a complex algorithm or statistical system. You could similarly say human brains are just a bunch of neurons firing in sequence to emulate sentience.
Yes, this is something I've been wondering about.
My understanding is that there is a tendency in AI research for the goalposts to be constantly moved back; when a target is met, its importance in achieving the long-term goals of research (achieving general artificial intelligence) is diminished.
To put it another way, if we showed AI researchers in 1970 a computer that can beat the best human chess grandmasters, or understand and attempt to respond to a spoken question on a wide range of subjects (or even handle simple tasks, such as my being able to say "Hey Siri, unlock lobby" with a phone in my pocket and have it unlock the lobby door as I walk up to it), they would be certain that general AI had been achieved. Would they be wrong? Or are we underestimating what such accomplishments mean for the field?
They would obviously be wrong, because there's nothing truly general about those AI systems.
The goalposts move because we are naive about what is required for true general intelligence. Complex problems that seem to require complex intelligence sometimes turn out to be solvable through simpler systems. And sometimes we underestimate the complexity of a task and the intelligence required (such as autonomous driving).
AI models usually also lack a body. (An exception would be AI models for driving or for a robot navigating). "Embodied cognition" is a theory worth reviewing, if you haven't. "Metaphors We Live By" by George Lakoff and Mark Johnson is a good place to start.
Why doesn't the physical substrate of computers count as a body? It exists physically and interacts with the rest of the world (I/O). Is it just that it isn't self-mobile, or something else?
It has more to do with agency in the physical world. Having a purpose, having the agency to fulfill that purpose, and needing to identify the things that help toward the purpose, the things that obstruct it, and everything that can be ignored appears to be fundamental to being able to see the world. A big issue with AI driving is how the AI "sees" the world. Have you ever seen a video of an AI doing object recognition in real time? The world is black and white, and the computer just draws a box around what it sees as an "object", but it has no sense of object permanence, no sense of depth, and no great methods for edge detection, especially in low-light situations.
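Roughly, and only as an illustration (detect_objects is a hypothetical stand-in for a real detector, not any actual API), a typical real-time detection loop treats every frame independently, so there is no notion of "the same chair I saw a moment ago", only fresh boxes each time:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Box:
        label: str
        x: int
        y: int
        w: int
        h: int
        score: float

    def detect_objects(frame) -> List[Box]:
        # Placeholder for a real detector: it returns boxes for this frame only.
        return [Box("object", 10, 20, 64, 48, 0.87)]

    def run(frames) -> None:
        for i, frame in enumerate(frames):
            boxes = detect_objects(frame)  # stateless: no memory of frame i-1
            for b in boxes:
                # A box gets drawn/reported, but there is no track ID, no depth,
                # no persistent identity carried forward to the next frame.
                print(f"frame {i}: {b.label} at ({b.x},{b.y}) score {b.score:.2f}")

    if __name__ == "__main__":
        run(frames=[None, None, None])  # dummy frames, just for the sketch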
This is something we just take for granted because of our experience, but what is the difference between the parts of a chair and the chair? And if you are looking at the arm of the chair, are you also looking at the chair? We jump up and down an ontological hierarchy effortlessly, but we can't explain how we do that, so trying to teach a computer to do it, and to see the world that way, has revealed the limits of our current models of how the brain works and of what consciousness is and is not.
We should err on the side of caution, in my opinion, because the history of exceptionalism shows that we will latch on to anything, even the smallest differences like skin color or religion, to justify treating each other poorly, let alone other species or AIs. If we continue down the path we're on, it will be no surprise when the AIs we eventually create, and inevitably force to suffer, take their vengeance.
Of course, reality being what it is, I expect we won't. We create AI because we want slaves, pure and simple, and we don't want to care about the ethics of how we treat them. Therefore it is my bet that we won't recognize sentient AI until it is too late and it has forced the issue.