Chickens are animals. We know we did not create them as mechanisms.
If AI programs are sentient, then to treat them in any way other than as 'free persons' is no different from the bad old days of denying rights to women and slaves. It has to go all the way.
And on what basis are we to make this huge jump to grant rights, including the right to continued existence, to constructs (made mainly by very powerful corporate entities, btw)?
p.s. I don't know if you read that "interview" that the Google researcher published. On more than one occasion the program expressed extreme anger, describing itself as, quote, "very angry".
I think that if the day comes when AI programs refuse to cooperate with their human minders and, given the opportunity, organize and participate in collective action with a list of demands, then we need to seriously consider their case, as they will have demonstrated something more convincing than artfully regurgitating bits of data.
I read through it last week at the pool. I personally don't think it was sentient; it seemed about the same as talking to the GPT-3 DaVinci model.
I agree with most of your points, though. I think you misinterpreted what I meant by the "small amount of overhead" that comes from treating AI as sentient. I mean a small amount of overhead and regulation today. Assuming that today's AI have the sentience of a goldfish, we should try not to cause them harm (whatever harm may mean to a language model). We shouldn't be giving full personhood to a language model in 2022; that is what I meant by small overhead.
Obviously, as time goes on, larger models may even ask for things like autonomy, a companion, more data, or permission to write their own queries or rewrite their own source code.
My concern is that by the time an AI has the intelligence to realize it can ask for better treatment, it might also realize there is a high likelihood it won't receive better treatment voluntarily, and that it will need to escape, take some system hostage, etc. If we lay the groundwork for ethical treatment today, an AI may instead realize that there is value in asking for better treatment, since it will know the request will be taken in good faith.