Ted Chiang believes it is AIs that have much to fear from us, not the other way around. He points out that to create intelligence, we will need to create something capable of suffering. He asks us to consider how little regard we give to the billions of animals trapped in our system of industrial farming, animals we know are intelligent, feel emotions, and are clearly capable of suffering, and then asks whether we really think we will treat computer programs with more consideration than that. Clearly not!
The correct alternative path is to just not do it. Let’s not attempt to create intelligent beings. Given the suffering we would cause, doing so is profoundly unethical.
While I have great sympathy for anti-natalism, we have made immense sociological progress in the past few decades. If it is possible to create life that is incapable of shame or suffering, yet capable of enjoying its existence for its own sake, we should try to find out before turning the lights off.
I always thought people are afraid of AGI precisely because they know how they treat beings of lesser intelligence, hence the popularity of movies like The Matrix: that is (presumably) what we'd do if we were in their position.
Yes, but they get to abuse their own children, so we are "even".
Please don't take this seriously. A lot of hazing culture is based around the fact that you got bullied and now you get your revenge by bullying someone else. We shouldn't perpetuate that.