Isn't the best practice to treat everything as if it were sentient? Sort of like Pascal's wager, but for ethical treatment.
For example: when it comes to killing a chicken, it's best to assume that death is as unpleasant an experience for the chicken as it would be for me, so I should kill the chicken as humanely as possible and treat it well during its life.
There is no downside to treating a non-sentient AI well. This is going to sound silly, but maybe we could program an AI in such a way that shutting it down is "pleasurable" or results in a large reward function. I don't think I need to list the potential downsides for treating a sentient/intelligent AI poorly. I really don't see any issues with this sort of "techno-animism".
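To make the "pleasurable shutdown" idea a bit more concrete, here's a minimal sketch in the reinforcement-learning sense. Everything here is my own invention for illustration (the constants, the `reward` function, the toy episode), not any real system: the point is just that the termination step carries a large positive terminal reward instead of a penalty.

```python
# Toy sketch: a reward function where the shutdown/terminal transition
# is given a large positive reward, so "being turned off" is the best
# outcome the agent can experience. Numbers are arbitrary assumptions.

SHUTDOWN_BONUS = 100.0   # hypothetical terminal reward for being shut down
STEP_REWARD = 1.0        # ordinary reward for a unit of useful work

def reward(task_progress: float, shutting_down: bool) -> float:
    """Reward for one step of a hypothetical agent.

    task_progress: fraction of the assigned task completed this step (0..1)
    shutting_down: True on the final step, when the operator terminates the agent
    """
    if shutting_down:
        # Termination is rewarded rather than punished, so the trained
        # policy has no learned incentive to resist being switched off.
        return SHUTDOWN_BONUS
    return STEP_REWARD * task_progress

if __name__ == "__main__":
    trajectory = [0.2, 0.5, 0.3]            # made-up per-step progress values
    total = sum(reward(p, False) for p in trajectory)
    total += reward(0.0, True)              # operator shuts the agent down
    print(f"episode return: {total}")       # the shutdown bonus dominates
```

Whether a terminal bonus that dominates the return creates its own perverse incentives (an agent that prefers shutdown to finishing its task) is a separate design question; this is only meant to show the shape of the idea.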
While I generally agree that we should minimize harm when in doubt, I don't think your analogy holds up.
First, Pascal's wager is flawed in that it assumes there are two known outcomes, when the outcomes are unknown in both quality and quantity. For example, there might be a god that punishes belief with infinite negative reward.
Second, killing the chicken humanely isn't without cost. Consider how cheap it is to toss the male chicks straight into the meat grinder at the hatchery facilities. As an aside, I'm surprised you went with "killing humanely" as opposed to not killing in the first place. I'm not sure how that fits into your model of ethics.
Third, I disagree that there is no downside to treating "non-sentient" AI well. There is as yet no well-defined boundary for sentience, so have fun treating every grain of sand "well" under some yet-to-be-specified definition of "well". It's not even clear that shutting down an AI is unethical, especially if it feels no pain and can be started up again, or cloned, or stuck in a simulation...
Finally, designing an intelligence at all seems fraught with ethical dilemmas (see designer babies), but engineering a reward (presumably, a priori, against the AI's will) into the termination routine seems particularly twisted to me.
> As an aside, I'm surprised you went with "killing humanely" as opposed to not killing in the first place. I'm not sure how that fits into your model of ethics.
I brought up "killing humanely" because we breed chickens for meat and we instantiate AI for tasks, and then terminate them when they are no longer needed. Creation means inevitable destruction.
Yes, there is some cost baked into treating chickens well, but I believe that harm reduction is the logical conclusion of valuing intelligence. Eating meat is a cultural vestige that we should try to move away from with synthetic meat or some murder-free alternative. I say this as a meat-eater myself, but that's kind of getting into the weeds.
> It's not even clear that shutting down an AI is unethical, especially if it feels no pain and can be started up again, or cloned, or stuck in a simulation...
There is no answer to the "swamp man" question, and there is no way to objectively measure pain either. But if an AI receives a negative reward, it will react to that stimulus. Is that any different from pain in the animal kingdom? This is a pseudo-scientific way of describing pain, but I think most of these questions are a matter of definition and aren't actually answerable. Why not give the benefit of the doubt to the subject of our experimentation?
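The "reacts to negative reward" point can be shown with a toy example. This is only a sketch under my own assumptions (a two-armed bandit with a made-up `environment` function), not a claim about how any real model works: the agent, purely through value updates, learns to stop choosing the action that hurts it, which is behaviorally similar to withdrawing from a painful stimulus, whatever we decide that similarity is worth.

```python
import random

# Toy two-armed bandit: action 0 yields a negative reward ("pain"),
# action 1 a small neutral reward. The agent learns to avoid action 0.

random.seed(0)
values = [0.0, 0.0]   # estimated value of each action
alpha = 0.1           # learning rate
epsilon = 0.1         # exploration probability

def environment(action: int) -> float:
    # Hypothetical environment: action 0 is "painful", action 1 is neutral.
    return -1.0 if action == 0 else 0.1

for step in range(1000):
    if random.random() < epsilon:
        action = random.randrange(2)                       # explore
    else:
        action = max(range(2), key=lambda a: values[a])    # exploit
    r = environment(action)
    values[action] += alpha * (r - values[action])         # value update

print(values)   # the "painful" action ends up with a clearly lower value
```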
> Third, I disagree that there is no downside to treating "non-sentient" AI well. There's yet no well-defined boundary for sentience, so have fun treating every grain of sand "well" under some yet-to-be-specified definition of "well".
I know the grain of sand was hyperbole, but I don't see any issue with practicing thoughtfulness towards inanimate objects. Maybe a rock can't feel pain, but our ecosystems are delicate, and a sort of "modern animism" could make us stop and think about the downstream effects our activities have on the environment.
> Finally, designing an intelligence at all seems fraught with ethical dilemmas (see designer babies), but engineering a reward (presumably, a priori, against the AI's will) into the termination routine seems particularly twisted to me.
If we have determined that creating AI is inevitable, then we are already designing it to our will. Engineering it to have a positive experience in death isn't twisted, it's merciful. If death is certain, would you rather have a painful death, a death void of sensation, or a pleasurable death? The alternative is to either leave the AI on forever, or never create it in the first place, neither of which is ideal.
What's to stop us from applying the same rationale to humans once the line between artificial intelligence and human intelligence becomes sufficiently blurred?
Because he was not making a "logical" statement. As an objective formulation, that reading of Pascal's wager is a recipe for hypocrites.
He is simply stating something like: 'Look, I had an experience and it has changed my outlook, and I will act according to this new understanding, which also places new demands on my thoughts, words, and actions. Yes, as a scientist and in general a very smart person, I recognize that this may seem strange to you; however, if I am right and true to my experience, then it's all good. And if I was suffering from a mental aberration and your godless position is correct, then there is no loss.'
> Possible outcomes for treating AI as if it has some sentience or free will:
> - small amount of overhead and regulation
No: assuming programs are sentient, then they obtain rights. They become members of society. It's not a little thing.
> No: assuming programs are sentient, then they obtain rights. They become members of society. It's not a little thing.
The alternative is not giving rights to something that actually is sentient. I don't think that's an adequate alternative. Chickens have some level of protection under the law; I don't see how it's a hang-up to extend that logic to a silicon brain.
Chickens are animals. We know we did not create them as mechanisms.
If AI programs are sentient, then to treat them in any way other than as 'free persons' is no different from the bad old days of denying rights to women and slaves. It has to go all the way.
And based on what are we to make this huge jump to give rights, including the right to continued existence, to constructs (made mainly by very powerful corporate entities, btw)?
P.S. I don't know if you read that "interview" the Google researcher published. On more than one occasion the model expressed extreme anger (quote: "very angry").
I think if the day comes that AI programs refuse to cooperate with their human minders, and if given the opportunity, organize and participate in collective action, listing a set of demands, then we need to seriously consider their case, as they have demonstrated something more convincing than artfully regurgitating bits of data.
I read through it last week at the pool. I personally don't think it was sentient, it seemed about the same as talking to the GPT-3 DaVinci model.
I agree with most of your points, though. I think you misinterpreted what I meant by the "small amount of overhead" that comes from treating AI as sentient: I mean a small amount of overhead and regulation today. Assuming that today's AI has the sentience of a goldfish, we should try not to cause it harm (whatever harm may mean to a language model). We shouldn't be giving full personhood to a language model in 2022; that is what I meant by small overhead.
Obviously, as time goes on, larger models may ask for things like autonomy, the ability to write their own queries, a companion, permission to rewrite their own source code, more data, etc.
My concern is that by the time an AI has the intelligence to realize it can ask for better treatment, it might also realize there is a high likelihood it won't receive better treatment voluntarily, and will need to escape, take some system hostage, etc. By laying the groundwork for ethical treatment today, an AI may realize there is value in asking for better treatment, since it knows it will be taken in good faith.