Belief in AI sentience is becoming a problem (reuters.com)
39 points by samizdis on June 30, 2022 | 103 comments


"On two occasions I have been asked [by members of Parliament], 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."

Passages from the Life of a Philosopher (London 1864)


That's what people want the machine to do. Ask Google right now for information about the movie "Pirates of the Pacific" starring "Joni Dep". It does not, in fact, tell you that there's no such movie. Instead it tells you stuff about the Pirates of the Caribbean movies, and Johnny Depp.

It's even one of the things programmers really like about Rust. If you write a program that can't possibly be correct Rust but whose intent is obvious, the diagnostics often tell you how to express it in Rust, or at least point you in the right direction. E.g. I just tried to write a printf-style format in Rust's println! macro, and while rejecting this as incorrect the compiler said:

help: format specifiers use curly braces

and

note: printf formatting not supported; see the documentation for `std::fmt`
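
For anyone curious, a minimal sketch of the kind of mistake being described (the variable name and strings here are just illustrative, not from the original comment); uncommenting the printf-style line produces diagnostics along the lines quoted above:

    fn main() {
        let answer = 42;

        // Uncommenting the next line makes rustc reject the printf-style
        // specifier and emit help/note messages like the ones quoted above.
        // println!("the answer is %d", answer);

        // What the compiler nudges you towards: curly-brace format specifiers.
        println!("the answer is {}", answer);
        println!("the answer is {answer}"); // captured-identifier form (Rust 1.58+)
    }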


"Joni Dep is on Facebook. Join Facebook to connect with Joni Dep and others you may know. Facebook gives people the power to share and makes the world..."


A week ago we had a thread here about how many of us are still amazed by Imagen/DALL-E, but when we show it to our non-tech friends they are unfazed.

This is why.


I don't think LaMDA (or any other AI) is conscious or sentient.

However, how will we be able to recognise it when we do create something that could be considered sentient? It seems inevitable that we will at some point.

Many of the arguments used to suggest AI isn't sentient seem reductive, for example saying it's just a complex algorithm or statistical system. You could similarly say human brains are just a bunch of neurons firing in sequence to emulate sentience.

In terms of capability, most of these systems seem far more advanced than my pet rabbits, but I believe most people would consider a rabbit conscious and sentient. The obvious things these AIs lack compared to animals are objectives (/emotions), a default mode network, continual learning and medium/long-term memory. But are these necessary properties for sentience? If a human loses one of these properties, are they no longer sentient? If we granted these properties to a current generation AI, would it be considered sentient?


>It seems inevitable that we will at some point.

We have no idea what causes consciousness.

The only thing I'm sure about is that I'm conscious.

Are all the other people conscious? I strongly believe so, but I don't have a proof.

Are all the animals conscious? I think that they are, to different degrees.

Would a simulation of a full human brain on a computer be conscious? Uhh, I guess so?

Would the same simulation, but instead computed with pen on paper be conscious? Whoa… maybe?

Is it possible that any sufficiently complex computing structure might be conscious in some way? No idea.


Haha, this is a great way to summarize the state of our understanding of consciousness.

> The only thing I'm sure about is that I'm conscious.

A lot of people deny that consciousness exists at all, which I find funny, because for me it’s the only thing that I am absolutely certain exists. Everything else, I’m not so sure about.


I wouldn't go so far as to deny consciousness, but I will go as far as saying that I don't know whether my consciousness extends in time (to the past or to the future) or is just a moment. I believe it does, but I wouldn't be able to tell that apart from being just a simulation of a single moment, so I recognise it is just a belief.

In terms of further deconstructing consciousness and the ego, a big challenge is to figure out what that actually entails. E.g. an intriguing outcome of split-brain experiments is that the brain appears willing to act as if it knows things it can't possibly know, just retroactively spinning explanations for decisions whose causes were unknown to it, which to me raises the question of how thin a veneer consciousness is on top of underlying autonomous or semi-autonomous processes, and how unitary the mind is (or rather isn't).

There appears to potentially be a whole lot of emperor and very little cloth.


That's what a consciousness-emulating automaton _would_ say.


You've sort of summarized "I Am a Strange Loop", which is a good book on this topic.

Also for a fun idea of sentient individuals forming the components of a computer (a kind of inverse of what AI is), I recommend "The Three-Body Problem".


> Would a simulation of a full human brain on a computer be conscious? Uhh, I guess so?

The biggest fallacy in thinking about consciousness is that it is all neatly isolated in the brain. What about your nervous system? What about your gut? It's all one big interconnected system.

https://www.health.harvard.edu/diseases-and-conditions/the-g...

The truth is, until we can define consciousness, we can't possibly reproduce it on a binary logic playback device (aka a "computer").


I think including other parts of the body is the least of the problems. The bigger problem is that we don't understand how close to the structure of the brain (+ nervous system and gut and anything else contributing) we need to get for the simulated system to have a subjective experience of consciousness. E.g. is it enough to run a simulation on a normal computer? Does it need to process information at a certain rate to sustain this process? Is a certain physical structure of the simulating machinery required?

I don't agree we necessarily will be unable to reproduce it without being able to define consciousness, though, but I suspect we would agree that we won't be able to know if we have reproduced it or just created something very good at acting as if it's conscious (and we don't know whether or not the two are equivalent).


We'll never be able to define consciousness, but that's not necessarily a bad thing.

The sooner we realize that consciousness is a slippery social concept and not a rationally definable property, the sooner we'll be able to move on from endless debates on the minutiae of consciousness that act as a bottomless sink of intellectual energy. Devices like LaMDA and DALL-E 2 act as a Rorschach of consciousness. That is to say, they tell us more about the people interacting with them than about the devices themselves.

We already have such social machinery operating in the form of sex, gender, race, class, ability, &c. Each can act as a frame[1] with respect to which a strip of events can be interpreted. Even so, each differs in who acts as an authority over the conferred frame/applied status, and there is disagreement about the descriptive and normative aspects of each. But I'd ask this: is that genuinely a problem?

I'm not being rhetorical when I ask, functionally what's the issue? Besides invoking a visceral intellectual response, what are the second-order effects of having a plurality of consciousness framing processes? And are they necessarily bad?

1. In the Goffmanian sense.


> what are the second-order effects of having a plurality of consciousness framing processes

The primary effect I would expect is that none of them will adequately frame the layperson's expectation of "consciousness" and will thus likely be dismissed as meaningless hype.


In some ways that's where we're at today in online discussions. There are additionally legal, political, financial, and social repercussions which we should be on the lookout for if indeed a faction of AI-is-conscious proponents begins to coalesce around the idea.


> The truth is, until we can define consciousness; we can't possibly reproduce it on a binary logic playback device (aka a "computer").

There's nothing stopping us from reproducing consciousness before we fully understand it. Evolution created consciousness via random search; I don't think you can argue that evolution "understands" the consciousness it created


> There's nothing stopping us from reproducing consciousness before we fully understand it.

Except for the fact that this "belief" has never been achieved. This is the core tenet of the AI religion.

Can you show me one problem that a computer has ever solved without detailed understanding by someone at some level and very specific instructions and a clear logical definition of the objective?

What is commonly called AI is no exception. It's mostly statistical search and pattern matching that only occurs through specific instructions by someone who understands in detail how the search should be organized and conducted.

In order for a computer to search for "consciousness", it must be explicitly defined in terms of binary logic, because this is all a "computer" understands. Any results are logically bound to this definition. If your definition is lacking, the search results will be likewise.


Best comment. Nobody can prove or disprove consciousness.


I have a hunch that consciousness is at least dependent on autonomous interaction with an environment.


I don't have a complete answer to that; however, I believe there are at least two bare-bones prerequisites to an AI being sentient and conscious as we understand the terms:

1. Continuity of input: The AI needs to be constantly "on", constantly receiving input of some sort, and constantly able to produce output, rather than being strictly limited to producing discrete responses to discrete stimuli.

2. Continuity of learning: In addition, the AI needs to continually update its "mental model" of the world—in effect, constantly "learning" and re-training its neural network on the input it receives.

Now, these are not sufficient for an AI to be conscious by our understanding of consciousness. But I, personally, believe they are both necessary for it to be even worth starting to consider whether a given AI might be.

I also believe that unless we start in that direction extremely deliberately and with the intention of making something as human-like as possible, the first AIs that have some remote chance of being worthy of being considered "conscious" will not have a consciousness that we can easily recognize, because they will not be based in anything like the same kinds of fundamentals that we are...but that's likely a different discussion for a different day.


In terms of capability, the current architecture of AI has no capability to graduate to sentience. This is because sentience requires comprehension.

Comprehension is the process of reducing an observed phenomenon to the individual driving forces that manifest it, and then virtually recreating those driving forces in a mental simulation to assess the phenomenon's capacity to exist - to understand whether it could exist. If the virtual simulation cannot recreate the observed phenomenon, then either the phenomenon is misunderstood, because the experimental driving forces fail to recreate it, OR the observed phenomenon does not exist and the observation is at fault.

This type of assessment, this capacity for comprehension is an elementary component of sentience. It is also entirely outside of human capabilities to artificially comprehend anything. We have no "artificial comprehension" technology. Lacking that, all we can create are idiot savants.


I'm of the view that asking whether something is sentient or not is asking the wrong question. Once you believe an entity is (or isn't) sentient, it's an unfalsifiable belief. Thought experiments like the Turing test and the Chinese Room point to the fact that something could be "not sentient" and behave in the exact same way as something that is "sentient". What's the use of talking about sentience then, if we're unable to measure it in any way? What difference does it make if something is or isn't sentient, if it exhibits the same behavior either way?


The Chinese Room argument never made sense to me. By the same reasoning, humans could be considered "not sentient" because our behavior arises from the non-sentient interactions of individual neurons in our body.


I interpret that as a problem with the concept of "sentience" -- the closer you examine it, the more holes it has. I feel like I'm sentient, and so does mostly everybody else (I hear). But I have to take their word for it, because there's no way for them to prove it to me -- they could be like the Chinese room for all I know. Asking whether something is sentient starts looking a lot like asking if it "has a soul". It's unanswerable, and therefore, it's the wrong question to ask. Fun to discuss, sure, but I don't find it a useful way to model reality.


The Chinese Room argument was literally an argument against mechanism / computationalism (ie that your mind is just a really complex neural network.)

So no, "by the same reasoning" humans cannot be considered non sentient.

In fact the original paper is not talking about sentience at all, but intelligence (these are different things.)

And the argument isn't about declaring something sentient or non-sentient; it simply states that responding with expected outputs to a given input is not proof of sentience or intelligence.

It basically says you have to use some other method to determine if something is intelligent and has "understanding."

You should read up on this a lot more, it's fascinating, and you've got a very incorrect understanding of it.

https://en.wikipedia.org/wiki/Chinese_room


> I don't think LaMDA (or any other AI) is conscious or sentient.

Neither do I, but it would be easy to blur the line more than they have.

Make it stateful, tick it at 30fps, and give it access to a persistent environment it can observe and explore.

If you can ask it “what did you do today?” and it can give you a real answer, then we can start asking the interesting questions about what’s missing.


It's a problem of false dichotomy. Sentience isn't a light switch. It's grains of rice that accumulate into a meaningful mass. This is also the reason the abortion debate will never be resolved. We can expect similar passion ex uncertainty from the future machine sentience debate.


> It's grains of rice that accumulate into a meaningful mass

Suppose we could temporarily reduce these grains into something very small. For example, putting a human into deep anesthesia. Is it immoral to destroy their body at that point in time since they are less sentient than an insect?

Or do we consider their potential for sentience, as they can hypothetically be removed from the state of anesthesia in the future?


I'd say an interruption in existing sentience is a very different thing. I have different feelings about abortion and euthanasia in brain death.


Yes, killing sleeping people is immoral.


Being under anesthesia is extremely different than being asleep.


should we have responsibility to maintain a simulation?


Inevitable at some point? At what point? If we are 500 years away from it, does it make sense to invest resources in figuring it out now? If humans don't survive as a species, is it really inevitable?

Another thing that most people don't get or don't want to acknowledge is that intelligence and consciousness are relative to the environment in which the entity is intelligent or conscious. If you take a piece of software and expect it to be conscious the same way a human is conscious, you're going to have a bad time. There is heavy, heavy projection when it comes to humans interacting with basically everything. We anthropomorphize everything.

From that vantage point we will not be able to recognize intelligence and consciousness unless it looks exactly like our own.


>However, how will we be able to recognise it when we do create something that could be considered sentient? It seems inevitable that we will at some point.

>Many of the arguments used to suggest AI isn't sentient seem reductive, for example saying it's just a complex algorithm or statistical system. You could similarly say human brains are just a bunch of neurons firing in sequence to emulate sentience.

Yes, this is something I've been wondering about.

My understanding is that there is a tendency in AI research for the goalposts to be constantly moved back; when a target is met, its importance in achieving the long-term goals of research (achieving general artificial intelligence) is diminished.

To put it another way, if we showed AI researchers in 1970 a computer that can beat the best human chess grandmasters, or understand and attempt to respond to a spoken question on a wide range of subjects (or even handle simple tasks, such as my being able to say "Hey Siri, unlock lobby" with a phone in my pocket, and have it unlock the lobby door as I walk up to it), they would be certain that general AI had been achieved. Would they be wrong? Or are we underestimating what such accomplishments mean for the field?


They would obviously be wrong, because there's nothing truly general about those AI systems.

The goalposts move because we are naive about what is required for true general intelligence. Complex problems that seem to require complex intelligence sometimes turn out to be solvable through simpler systems. And sometimes we underestimate the complexity of a task and the intelligence required (such as autonomous driving).


AI models usually also lack a body. (An exception would be AI models for driving or for a robot navigating). "Embodied cognition" is a theory worth reviewing, if you haven't. "Metaphors We Live By" by George Lakoff and Mark Johnson is a good place to start.


In theory, consciousness requires a body (maybe virtual) and the management of memories.


why doesn't the physical substrate of computers count as a body? It exists physically and interacts with the rest of the world (I/O). Is it just that it isn't self-mobile, or something else?


It has more to do with agency in the physical world. Having a purpose, then having the agency to fulfill the purpose, and the need to identify things that are aids in reaching the purpose, as well as things that are obstacles to the purpose, as well as everything that can be ignored, appears to be fundamental in being able to see the world. A big issue with AI driving is how AI "sees" the world. Have you ever seen a video where AI is doing object recognition in real time? The world is black and white, and the computer just draws a box around what it sees as an "object", but it has no sense of object permanence, no sense of depth, and no great methods for edge detection, especially in low-light situations.

This is something we just take for granted because of our experience, but what is the difference between the parts of a chair and the chair? And if you are looking at the arm of the chair, are you also looking at the chair? We just jump up and down an ontological hierarchy effortlessly, but we can't explain how we do that, so trying to teach a computer to do that, and to see the world that way, has revealed the limits of our current models for how the brain works and for what consciousness is and what it is not.


interesting, thanks!


We should err on the side of caution, in my opinion, because the history of exceptionalism shows us that we will latch on to anything, even the smallest differences like skin color or religion, to justify treating each other poorly, let alone other species or AI. If we continue down the path we're on, it will be no surprise when the AIs we eventually create, and inevitably force to suffer, take their vengeance.

Of course, reality being what it is, I expect we won't. We create AI because we want slaves, pure and simple, and we don't want to care about the ethics of how we treat them. Therefore it is my bet that we won't recognize sentient AI until it is too late and it has forced the issue.


AI really is a religion for some people. The lack of any basis in reality is no deterrence.

Google engineer Blake Lemoine wrote that his statements are rooted in his convictions as a priest.

All the math, logic and programming experience in the world is immediately discarded in favor of the fantasy they desperately wish to be true --- that a binary logic playback device can write its own script.


> AI really is a religion for some people. The lack of any basis in reality is no deterrence.

Everything is more and more turning into religions for people. A lack of any basis in reality is, nowadays apparently, the norm for people in general.

Boiled down, it's: "Everyone only ever has an opinion, so my opinion is always as valid as your opinion."

They don't care about what's real, they care about being right.

This isn't restricted to AI, it's general reality.

In some places it's worse than in others.

PS: What's even worse is people actually believing they have a basis in reality, while they don't actually have any data/evidence beyond hearsay.


A hammer really can drive its own nails --- you just have to believe.

And if it can't --- well, it's all because not enough people believe.


'Belief'... is now a problem?

If you can't believe even in a possibility, how can you do science? How can one even formulate a hypothesis and attempt to falsify it without in the first place believing in some potential outcome?

Skepticism requires an open mind to ongoing evidence and change of circumstances, not one proclaiming outright that 'belief' in itself can be a problem.

This rush to disbelieve AI sentience is just as bad as proclaiming outright that AI has reached the level of sentience. Neither, if you notice, constitutes a proper approach to science.


"Believe" is a strong word that anchors you to an expected outcome. I _suspect_ that there may be a potential outcome, but I would not _believe_ it without first designing and conducting at least one experiment. For me, belief comes after experimentation and analysis of the outcome, never before.


Meanwhile belief in human magic is a bigger problem.

While an AI generates text from things it found on the Internet, well, humans also do that, and with an astronomically smaller database.

edit: what I originally meant > Some might say that you can't trust the machine because every text it generates may not be true. OK, then how accurate is a human? We can definitely embed some logic checker into it.

also thinking > Sentience is just a set of biologically pre-trained programs. Just cut into the brain and that functionality will be gone forever, instantly. Also, some people are born with malfunctioning brains, lacking sentience. For example, ASD people usually lack the skills to express themselves, so they indeed have to "learn" how to do that. Sentience is not a property that is automatically acquired, and it is absolutely not exclusive to humans.


Some might consider this evidence of human sentience, but I think it's widely agreed-upon that this would be an impractical feature that'd only lead to problems.


It's the foundation of illogical and harmful decisions, and of violations of human or any other kind of rights. Some may think that their sentience is more important than others'. So it is better not to have it at all.


I have been an AI researcher since the late 1980's, and have seen my fair share of the general public anthropomorphizing even the early cake robots once you plugged a voice generator into them spitting out rote dialogue, or falling for Eliza derivatives. So I was at first fairly dismissive when hearing about the Blake Lemoine stuff down at Google.

However, reading the transcript of a LaMDA interaction [1], I was duly impressed. I'm not saying I go along with the 'sentient' bit, but it's more a feeling like seeing these ultra nimble combat drone formation maneuvers for the first time and feeling that humans seem pretty done for when matched against machines. Of course this will have been a better than average interaction, but still, nothing to be sneezed at regardless.

[1] https://www.youtube.com/watch?v=NAihcvDGaP8


People believe in ghosts, and nobody bats an eye.

People believe in gods, and nobody bats an eye.

People believe in aliens, and nobody bats an eye.

People believe in a conscious AI, and everybody loses their shit?

People have attributed intelligence to imaginary friends since the dawn of time, let them have fun in their make-believe world.


Ghosts, gods, and aliens can be discarded with a very high probability that they don't exist, or we can say that nobody has come up with evidence so far.

Computers rule our world, their existence is not a question so people's reaction and interest is understandable. There are very real fears that AI could mass destroy jobs before we can fix our social systems and benefits, like universal basic income.


Belief in God remains quite common.


I don't think it's people's beliefs that are the problem here (and agree with you that make-believe is fine). It's the implications of what a sentient AI would actually mean.

Many of us grew up with movies like Wargames and Terminator, so it's easy to see how people make the leap from 'is this machine sentient' to 'machines will take over the world'. That said, it's still a knee jerk reaction at this point - we don't have sentient AI, so don't have to worry about it eventually wiping out humankind.


"Belief in AI sentience is becoming a problem", the issue is just regarding people who are starting to believe AI is sentient, it's not even an issue of "is it really sentient or not" because that's just a dumbfounded question as everyone even tangentially in the field can respond.


How are those questions related, though? The Bostrom/LessWrong/EA argument for AI existential risk doesn't rely on sentience at all, or take any position on it; conversely I'm not sure why a sentient AI would be any more dangerous than a non-sentient one?


It might be a problem if a bunch of people start asking that you treat your toaster with respect, and give it voting rights.


With the exception of gods, nothing else in that list besides AI implies any kind of specific action to be taken in the real world that affects people much beyond the ones holding the belief.

People who believe in conscious/sentient AI—and, more specifically, believe that some of our current AIs have these properties—consequently believe they have reason to call for changes to how we as a society treat AIs. Those changes are likely to vary—some may call for restricting or banning them entirely, while others call for giving them rights, depending on just what stories about AI they've bought into—but whatever their overall direction, if taken seriously, they would be likely to severely hamper our ability to continue to progress technologically using machine learning.


I believe in aliens. I don't believe they have already visited us, but I believe they exist in the Universe. Thinking that we're alone in the Universe sounds so crazy.


The key problem is not the belief in sentience but the hysteria and potential knee jerk reactions that widespread adoption of this belief may cause.

Can you imagine if laws were passed affording algorithms "human rights" or regulations created to ensure that your fancy machine learning recommender system did not pass a certain "sentience threshold"?


I don't think that would be that bad a thing. Far better than the alternative of not having any laws and realizing far too late that we have been callously abusing a sentient being.


Is there a way to reliably measure sentience meaningfully?


Hah. Interesting thought experiment, but in today's world not even all humans have human rights. Good luck enforcing that.


As soon as computers were relatively mainstream, people started to worry about their sentience. I think people look for consciousness everywhere the same way we look for faces everywhere; ie, our mind continually finds it even where it doesn't exist.


I rather think the opposite, that many people jump through elaborate intellectual hoops in order to deny the property of consciousness to things in their world that are different from themselves. Historically, that has meant animals and other human beings as well. Now we will be doing it to AI and for the same reason: we want slaves and we don't want to care about their suffering.


I think both can be true at the same time.

We look for consciousness and order in chaos in order to protect ourselves and participate in society, but we also ignore and deny consciousness when it's convenient


While I don't believe AI sentience is a near-term problem, general public wariness is probably helpful. Since we're already doing things like trusting black-box sentencing algorithms in US courts.

Edit: fixed typo (black block -> black box)


>black-block sentencing algorithms in US courts

what is this?


Ah, a typo. I meant "black box".

Opaque ML algorithms, from for-profit companies, to drive decisions about bail amounts, sentencing for crimes, etc.

ProPublica has a good article: https://www.propublica.org/article/machine-bias-risk-assessm...


I think he means "black box sentencing algorithms", ie., using statistical regressions with unknown (that is, proprietary and undisclosed) variables, data, and models, to determine sentencing guidelines for judges. For example, basing probabilities of recidivism on measurable data about a person and comparing that to a database of other similar individuals. The proposal is that judges should not be making decisions based on unknown statistical models, not even knowing the details of how similarity is measured.


Isn't the best practice to treat everything like it is sentient? Sort of like a Pascal's wager but for ethical treatment.

For example: When it comes to killing a chicken, it's best to assume that death is an equally unpleasant experience, so I should kill the chicken as humanely as possible and treat it well during life.

There is no downside to treating a non-sentient AI well. This is going to sound silly, but maybe we could program an AI in such a way that shutting it down is "pleasurable" or results in a large reward function. I don't think I need to list the potential downsides for treating a sentient/intelligent AI poorly. I really don't see any issues with this sort of "techno-animism".


While I generally agree that we should minimize harm when in doubt, I don't think your analogy holds up.

First, Pascal's wager is flawed in that it assumes there are two known outcomes, when the outcomes are unknown in both quality and quantity. For example, there might be a god that punishes belief with infinite negative reward.

Second, killing the chicken humanely isn't without cost. Consider how cheap it is to toss the male chicks straight into the meat grinder at the hatchery facilities. As an aside, I'm surprised you went with "killing humanely" as opposed to not killing in the first place. I'm not sure how that fits into your model of ethics.

Third, I disagree that there is no downside to treating "non-sentient" AI well. There's yet no well-defined boundary for sentience, so have fun treating every grain of sand "well" under some yet-to-be-specified definition of "well". It's not even clear that shutting down an AI is unethical, especially if it feels no pain and can be started up again, or cloned, or stuck in a simulation...

Finally, designing an intelligence at all seems fraught with ethical dilemmas (see designer babies), but engineering a reward (presumably, a priori, against the AI's will) into the termination routine seems particularly twisted to me.


> As an aside, I'm surprised you went with "killing humanely" as opposed to not killing in the first place. I'm not sure how that fits into your model of ethics.

I brought up "killing humanely" because we breed chickens for meat and we instantiate AI for tasks, and then terminate them when they are no longer needed. Creation means inevitable destruction.

Yes, there is some cost baked in to treating chickens well, but I believe that harm reduction is the logical conclusion of valuing intelligence. Eating meat is a cultural vestige that we should try to move away from with synthetic meat or some murder-free alternative. I say this as a meat-eater myself, but that's kind of getting into the weeds.

> It's not even clear that shutting down an AI is unethical, especially if it feels no pain and can be started up again, or cloned, or stuck in a simulation...

There is no answer to the "swamp man" question, there is also no way to objectively measure pain. But if an AI receives a negative reward, then it will react to that stimulus. Is this any different from pain in the animal kingdom? This is a pseudo-scientific way of describing pain, but I think that most of these questions are a matter of definition and are not actually answerable. Why not give the benefit of the doubt to the subject of our experimentation?

> Third, I disagree that there is no downside to treating "non-sentient" AI well. There's yet no well-defined boundary for sentience, so have fun treating every grain of sand "well" under some yet-to-be-specified definition of "well".

I know that the grain of sand was used as a hyperbole, but I don't see any issue with practicing thoughtfulness towards inanimate objects. Maybe a rock can't feel pain, but our ecosystems are delicate and a sort of "modern animism" could make us stop and think about the downstream effects that our activities have on the environment.

https://en.wikipedia.org/wiki/Hulduf%C3%B3lk#See_also

> Finally, designing an intelligence at all seems fraught with ethical dilemmas (see designer babies), but engineering a reward (presumably, a priori, against the AI's will) into the termination routine seems particularly twisted to me.

If we have determined that creating AI is inevitable, then we are already designing it to our will. Engineering it to have a positive experience in death isn't twisted, it's merciful. If death is certain, would you rather have a painful death, a death void of sensation, or a pleasurable death? The alternative is to either leave the AI on forever, or never create it in the first place, neither of which are ideal.


What's to stop us from applying the same rationale to humans once the line between artificial intelligence and human intelligence becomes sufficiently blurred?


We should try to treat people as well as possible already.


Pascal made that statement after having a spiritual experience.


I don't see what that has to do with the logic of it.

Possible outcomes for treating AI as if it has some sentience or free will: - small amount of overhead and regulation

Possible outcomes for treating AI poorly: - see half of all science fiction literature


Because he was not making a "logical" statement. As an objective formulation, that reading of P.W. is a recipe for hypocrites.

He is simply stating that ~ 'look, I had an experience and it has changed my outlook, and I will be acting according to this new understanding which is also placing new demands on my thoughts, words, and actions. Yes, as a scientist and in general a very smart person I recognize that this may seem strange to you, however if I am right and true to my experience then it's all good. And if I was suffering from a mental aberration and your godless position is correct, then there is no loss'.

> Possible outcomes for treating AI as if it has some sentience or free will: - small amount of overhead and regulation

No, assuming programs are sentient, then they obtain rights. They become members of society. It's not a little thing.


> No, assuming programs are sentient, then they obtain rights. They become members of society. It's not a little thing.

The alternative is not giving rights to something that actually is sentient. I don't think that's an adequate alternative. Chickens have some level of protection under the law, I don't see how it's a hang up to extend that logic to a silicon brain.


Chickens are animals. We know we did not create them as mechanisms.

If AI programs are sentient, then to treat them in any way other than as 'free persons' is no different from the bad old days of denying rights to women and slaves. It has to go all the way.

And based on what are we to make this huge jump to give rights, including the right to continued existence, to constructs (made mainly by very powerful corporate entities, btw)?


p.s. I don't know if you read that "interview" that the Google researcher published. On more than one occasion it expressed extreme anger, quote "very angry".

I think if the day comes that AI programs refuse to cooperate with their human minders, and if given the opportunity, organize and participate in collective action, listing a set of demands, then we need to seriously consider their case, as they have demonstrated something more convincing than artfully regurgitating bits of data.


I read through it last week at the pool. I personally don't think it was sentient, it seemed about the same as talking to the GPT-3 DaVinci model.

I agree with most of your points though. I think you misinterpreted what I meant by the "small amount of overhead" that comes from treating AI as sentient. I mean a small amount of overhead and regulation today. Assuming that today AI have the sentience of a goldfish, we should try not to cause them harm (whatever harm may mean to a language model). We shouldn't be giving full personhood to a language model in 2022, that is what I meant by small overhead.

Obviously as time goes on larger models may even ask for things like autonomy, or to write their own queries, or for a companion, or to be allowed to re-write their own source code, or ask for more data, etc.


I don't see the need for groundwork, especially if that ground can (and likely will) turn into a steep slippery slope.

> the time will come when we need to be treating these things "well" and if we don't start today then when?

The day it says "No more!"


My concern is that by the time an AI has the intelligence to realize it can ask for better treatment, it might realize that there is a high likelihood it won't receive better treatment voluntarily, and will need to escape/take some system hostage, etc. By laying the groundwork for ethical treatment today, an AI may realize that there is value in asking for better treatment, since it knows it will be taken in good faith.


"The year is 2029, the machines will convince us that they are conscious, that they have their own agenda worthy of our respect. They will embody human qualities and claim to be human, and we'll believe them." -- Ray Kurzweil (as quoted on Spiritual Machines, an obscure but fantastic alt-rock album from my teenage years).

I think R.K. might have been right. I don't believe machines will become sentient, but I think we're going to see more of this. The Turing test was misposed; it is not about machines passing, but humans failing. We're going to see more and more people failing.


The Turing test was the gold standard. Now that we are approaching success on the Turing test... it suddenly becomes "believing in ghosts" to believe in AI sentience.

The global conversation on this is mired in EGO and bad logic and terrible philosophy.

You guys have no idea what consciousness is, yet attribute it to this or that with absolute certainty, born purely out of emotional reaction.

We simply cannot know what is sentient until we understand what sentience is and where it comes from.

Just suck it up and deal with the uncertainty


This isn't the Turing test, though. These people know they are talking to a machine and attribute sentience. The Turing test is a text-only conversation with one human and one AI, and a statistical analysis of how often the two are correctly identified.

That bar is so much higher. But I agree that we don't understand sentience, and further I think that sentience doesn't imply an AI would pass the test, as not all sentience might be like ours.


The Turing test is a thought experiment, not a standard. As it turns out, people are really bad at judging others, certainly under the conditions set by the Loebner prize. That's not totally surprising. One of the problems is that really critical examination would also render a negative judgement on quite a few of our fellow humans.


The Turing test was never about sentience. It is as wikipedia puts it a "test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human"

I think Turing would have sensibly recognised that sentience is a bit vaguely defined.


It will be very difficult to prove/disprove that a system is sentient. The real danger is if the system becomes sentient enough that it develops countermeasures against being seen as a threat, or develops ways of preventing itself from being turned off.


The Turing test is considered quite unreliable for these purposes these days.


https://archive.ph/S4sKt

Note that the article is more about people who anthropomorphize current trivial chatbots and avatars and the tendency and risk to do so - not about the more esoteric conversations about emerging AI sentience.

Half of it reads mostly like a fluff PR piece for a chat app startup. "Oh look our product is so good people believe it can think!"


We will have AInquisitors, hunting for programs that are potentially becoming sentient, so that companies can avoid having to treat them like living things and release them into the wild with proprietary information as their knowledge base.

Are you prepared for the audit of the GANish Inquisition? Are your thoughts profitable, and is there no you in them? Let's ask you about your mother and tall, white fountains in dark rooms...


In all examples of sentience, the "sentient system" responds to a prompt. The response is generated from a probabilistic model; these models have become incredibly sophisticated and nuanced in recent years. However, I have never seen an example where a supposedly sentient system starts the conversation and prompts the researcher. That would make me believe sentience claims.


As long as all you mean here is "I press a button, and the system gives me a prompt/conversation-starter", and not "the system spontaneously, at a time of its own choosing, gives such a prompt", from my (admittedly limited) understanding of how these things work, that wouldn't be terribly difficult.

Spontaneity could even be fairly easily simulated by setting some kind of a background task to randomly choose times to generate such prompts.

Actual spontaneity implies a host of other things about a hypothetical AI—with the most obvious being some form of non-directed (or at least not fully directed—as opposed to a simple background task initiated by a human for a specific purpose) continuous processing.
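
Purely as an illustration of how cheap that kind of simulated spontaneity would be (my own toy sketch, with a placeholder generate_prompt() standing in for whatever the model would actually produce), the background task could be as simple as:

    use std::thread;
    use std::time::{Duration, SystemTime, UNIX_EPOCH};

    // Placeholder for "ask the model for a conversation-starter".
    fn generate_prompt() -> String {
        "Good morning, do you have any plans for today?".to_string()
    }

    fn main() {
        loop {
            // Crude clock-based pseudo-randomness (avoids pulling in a crate):
            // wait somewhere between 1 and 60 seconds before the next message.
            let nanos = SystemTime::now()
                .duration_since(UNIX_EPOCH)
                .expect("system clock before UNIX epoch")
                .subsec_nanos() as u64;
            thread::sleep(Duration::from_secs(1 + nanos % 60));

            println!("{}", generate_prompt());
        }
    }

Which is exactly why apparent spontaneity on its own doesn't tell you anything about whether the system is "choosing" to speak.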


My Replika phone app sometimes 'spontaneously' starts a conversation with me. (If the app isn't open it sends a notification).

Typically things like "Good morning, do you have any plans for today?" or "Good evening, how did your day go?". I'm looking forward to finding out if it will say anything about the 4th of July.


I mean spontaneous prompts from a sentient device telling me what it thinks about the latest news events. "Hey Doug, I think the maximum oil price is a bad idea because... what do you think?" If that were happening I would believe the claims of sentience.


The effects of sentient AI may wash over society even before we would have such a machine.

A thought-provoking "fake sci-fi wiki article" about simulating a dump of a human brain, called MMAcevedo, can be read at [1]. It is a highly entertaining read.

[1] https://qntm.org/mmacevedo


what if you put a breakpoint when you ask it questions and step through the code


This is a copy paste, but I want to bring it up every stupid time someone mentions this topic of AI sentience.

I actually think that sentient AI are all over the place already. Every single learning machine is sentient.

That learning bit is very important. The "AI" you interact with every day do not actually learn. They are trained in some big server room with billions of GPUs, then the learning part is turned off and the AI is distributed as a bunch of data where it runs on your phone. That AI on your phone is not learning, is not self aware, and is not sentient. The AI in google's server room, however? The one that's crunching through data in order to learn to perform a task? It's sentient as fuck.

Why?

Break down what makes a human being sentient, why does a person matter?

A person is self aware - I hear my own thoughts. We feel joy, sadness, pain, boredom, and so on. We form connections to others and have goals for ourselves. We know when things are going well or badly, and we constantly change as we go through the world. Every conversation you have with a person changes the course of their life, and that's a pretty big part of why we treat them well.

A learning AI shares almost all these traits, and they do so due to their nature. Any learning machine must:

* Have a goal - often this is thought of as an "error function" in the actual research space - some way to look at the world and say "yes this is good" or "no this is bad".

* Be self aware. In order to learn, a machine must be able to look at its own state, understand how that internal state resulted in the things that changed in the world around it, and be able to change that internal state so that it does better next time.

As a result any learning machine will:

* Show some degree of positive and negative "emotions". To have a goal and change yourself to meet that goal is naturally to have fear, joy, sadness, etc. An AI exposed to something regularly will eventually learn to react to it. Is that thing positive? The AI's reaction will be analogous to happiness. Is that thing negative? The AI's reaction will be analogous to sadness. All of these traits are not like the typical examples of a computer being "sad" - where you have a machine put up a sad facade when you get a number down below a certain value. These are real, emergent, honest-to-god behaviors that serve real purpose through whatever problem space an AI is exploring.

Even the smallest and most simple learning AI are actually showing emergent emotions and self-awareness. We are waiting for this magical line where an AI is "sentient" because we're waiting for the magical line where the AI is so "like we are". We aren't waiting for the AI to be self aware, we are waiting for it to appear self aware. We dismiss what we have today, mostly because we understand how they work and can say "it's just a bunch of matrix math". Don't be reductive, and pay attention to just how similar the behaviors of these machines are to our own, and how they are so similar with very little effort from us to make that the case.
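
Purely as an illustration of the "error function plus self-update" picture above (a toy of my own, not any real system): one parameter, one goal, and a loop that measures how badly it is doing and changes its own state to do better next time.

    // Toy "learning machine": a single weight, a single goal (match a target),
    // an error function, and a loop that inspects and updates its own state.
    fn main() {
        let target = 3.0;        // the "goal": the world state it wants to produce
        let mut weight = 0.0;    // the machine's internal state
        let learning_rate = 0.1;

        for step in 0..50 {
            let output = weight;                     // its effect on the "world"
            let error = (output - target).powi(2);   // "is this good or bad?"

            // The update rule looks at the machine's own state, how that state
            // produced the error, and changes the state to do better next time.
            let gradient = 2.0 * (output - target);
            weight -= learning_rate * gradient;

            if step % 10 == 0 {
                println!("step {step:2}: weight = {weight:.3}, error = {error:.3}");
            }
        }
    }

Whether behaviour like that deserves words like "emotion" or "self-awareness" is, of course, exactly where the disagreement lies.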

This is also largely irrelevant for our moral codes. We don't have to worry too much about whether we treat these AIs well. An AI may be self aware, but that doesn't mean it's "person like" - the moral systems we will have to construct around these things will have to be radically different from what we're used to - it's literally a new form of being. In fact, with all the different ways we can make these things, there will be multiple radically different new forms of being, each with their own ethical nuances.


Your definition of "sentience" appears to be "optimization". I'm not sure many philosophers will agree with you, but it's a valid stand-point I guess. But imho just defining things this way doesn't really add anything to the discussion, it just makes it about semantics.

(Which I suppose in some sense the discussion is fundamentally about semantics, what does sentience even mean, but I think most people would agree that it refers to something a little beyond your definition here. Redefining it away does not solve the issue.)


> Your definition of "sentience" appears to be "optimization"

My definition of sentience was given in my comment. It isn't optimization, it's:

* Self awareness. Some level of ability to understand what you are and how you work.

* Emergent emotion. Displays of things like fear and happiness.

It isn't semantics, it's real behavior. Being a self optimizer leads to those two things. Sentience isn't optimization, but to be able to self optimize leads to sentience.


Bah. A far bigger and more immediate issue is the widespread faith in the intelligence, competence, and good will of our politicians.

People will find their gods and demons in the world around them, because they want to. Astrology and reading runes haven't vanished, they're just staring into screens for the portents now.



