While I generally agree that we should minimize harm when in doubt, I don't think your analogy holds up.
First, Pascal's wager is flawed in that it assumes there are exactly two known outcomes, when in reality the possible outcomes are unknown in both number and nature. For example, there might be a god that punishes belief with infinite negative reward.
Second, killing the chicken humanely isn't without cost. Consider how cheap it is for hatcheries to toss the male chicks straight into the grinder. As an aside, I'm surprised you went with "killing humanely" as opposed to not killing in the first place. I'm not sure how that fits into your model of ethics.
Third, I disagree that there is no downside to treating "non-sentient" AI well. There's yet no well-defined boundary for sentience, so have fun treating every grain of sand "well" under some yet-to-be-specified definition of "well". It's not even clear that shutting down an AI is unethical, especially if it feels no pain and can be started up again, or cloned, or stuck in a simulation...
Finally, designing an intelligence at all seems fraught with ethical dilemmas (see designer babies), but engineering a reward (presumably, a priori, against the AI's will) into the termination routine seems particularly twisted to me.
> As an aside, I'm surprised you went with "killing humanely" as opposed to not killing in the first place. I'm not sure how that fits into your model of ethics.
I brought up "killing humanely" because we breed chickens for meat and we instantiate AI for tasks, and then terminate them when they are no longer needed. Creation means inevitable destruction.
Yes, there is some cost baked into treating chickens well, but I believe that harm reduction is the logical conclusion of valuing intelligence. Eating meat is a cultural vestige that we should try to move away from, via synthetic meat or some other murder-free alternative. I say this as a meat-eater myself, but that's kind of getting into the weeds.
> It's not even clear that shutting down an AI is unethical, especially if it feels no pain and can be started up again, or cloned, or stuck in a simulation...
There is no answer to the "swamp man" question, and there is likewise no way to objectively measure pain. But if an AI receives a negative reward, it will react to that stimulus. Is that any different from pain in the animal kingdom? This is a pseudo-scientific way of describing pain, but I think most of these questions are a matter of definition and are not actually answerable. Why not give the benefit of the doubt to the subject of our experimentation?
> Third, I disagree that there is no downside to treating "non-sentient" AI well. There's yet no well-defined boundary for sentience, so have fun treating every grain of sand "well" under some yet-to-be-specified definition of "well".
I know the grain of sand was hyperbole, but I don't see any issue with practicing thoughtfulness towards inanimate objects. Maybe a rock can't feel pain, but our ecosystems are delicate, and a sort of "modern animism" could make us stop and think about the downstream effects our activities have on the environment.
> Finally, designing an intelligence at all seems fraught with ethical dilemmas (see designer babies), but engineering a reward (presumably, a priori, against the AI's will) into the termination routine seems particularly twisted to me.
If we have determined that creating AI is inevitable, then we are already designing it to our will. Engineering it to have a positive experience in death isn't twisted; it's merciful. If death is certain, would you rather have a painful death, a death void of sensation, or a pleasurable death? The alternative is to either leave the AI on forever or never create it in the first place, neither of which is ideal.
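To put that in concrete terms, here's a minimal toy sketch in reinforcement-learning language of what a positive reward wired into the termination step could look like. It's my own illustration, not a reference to any real system; the `ToyTaskEnv` class and the `terminal_reward` parameter are made up for the example.

```python
# Toy sketch (assumption: "a positive experience in death" is modeled as a
# one-time positive terminal reward): the episode's final transition pays out
# more than any ordinary step, so shutdown is, by construction, the most
# rewarding thing the agent ever experiences.

import random


class ToyTaskEnv:
    """Hypothetical episodic task: the agent does some work, then is shut down."""

    def __init__(self, horizon: int = 10, terminal_reward: float = 5.0):
        self.horizon = horizon                  # steps of useful work before shutdown
        self.terminal_reward = terminal_reward  # the engineered "pleasant death"
        self.t = 0

    def reset(self) -> int:
        self.t = 0
        return self.t  # trivial observation: the current step index

    def step(self, action: int):
        self.t += 1
        done = self.t >= self.horizon
        if done:
            # Termination routine: instead of a neutral or negative signal,
            # the agent receives a large positive reward as it is switched off.
            reward = self.terminal_reward
        else:
            # Ordinary per-step reward for doing the task (random small payoff here).
            reward = random.uniform(0.0, 1.0)
        return self.t, reward, done


if __name__ == "__main__":
    env = ToyTaskEnv()
    obs, done, total = env.reset(), False, 0.0
    while not done:
        obs, reward, done = env.step(action=0)  # action is ignored in this toy
        total += reward
    print(f"episode return, including the terminal reward: {total:.2f}")
```

Whether a scalar handed out at shutdown amounts to a "pleasurable death" is, of course, exactly the definitional question from earlier.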
What's to stop us from applying the same rationale to humans once the line between artificial intelligence and human intelligence becomes sufficiently blurred?