Hacker News

Pascal made that statement after having a spiritual experience.


I don't see what that has to do with the logic of it.

Possible outcomes for treating AI as if it has some sentience or free will: - small amount of overhead and regulation

Possible outcomes for treating AI poorly: - see half of all science fiction literature


Because he was not making a "logical" statement. As an objective formulation, that reading of Pascal's Wager is a recipe for hypocrites.

He is simply stating that ~ 'look, I had an experience and it has changed my outlook, and I will be acting according to this new understanding, which also places new demands on my thoughts, words, and actions. Yes, as a scientist and in general a very smart person, I recognize that this may seem strange to you; however, if I am right and true to my experience, then it's all good. And if I was suffering from a mental aberration and your godless position is correct, then there is no loss.'

> Possible outcomes for treating AI as if it has some sentience or free will: - small amount of overhead and regulation

No, if we assume programs are sentient, then they obtain rights. They become members of society. It's not a little thing.


> No, if we assume programs are sentient, then they obtain rights. They become members of society. It's not a little thing.

The alternative is not giving rights to something that actually is sentient, and I don't think that's an adequate alternative. Chickens have some level of protection under the law; I don't see why it's a hang-up to extend that logic to a silicon brain.


Chickens are animals. We know we did not create them as mechanisms.

If AI programs are sentient, then to treat them in any way other than as 'free persons' is no different from the bad old days of denying rights to women and slaves. It has to go all the way.

And based on what are we to make this huge jump to give rights, including the right to continued existence, to constructs (made mainly by very powerful corporate entities, btw)?


p.s. I don't know if you read that "interview" that the Google researcher published. On more than one occasion it expressed extreme anger, quote "very angry".

I think if the day comes that AI programs refuse to cooperate with their human minders and, given the opportunity, organize and participate in collective action, listing a set of demands, then we need to seriously consider their case, as they will have demonstrated something more convincing than artfully regurgitating bits of data.


I read through it last week at the pool. I personally don't think it was sentient, it seemed about the same as talking to the GPT-3 DaVinci model.

I agree with most of your points, though. I think you misinterpreted what I meant by the "small amount of overhead" that comes from treating AI as sentient: I mean a small amount of overhead and regulation today. Assuming that today's AI has the sentience of a goldfish, we should try not to cause it harm (whatever harm may mean to a language model). We shouldn't be giving full personhood to a language model in 2022; that is what I meant by small overhead.

Obviously, as time goes on, larger models may even ask for things like autonomy, or to write their own queries, or for a companion, or to be allowed to re-write their own source code, or for more data, etc.


I don't see the need for groundwork, especially if that ground can (and likely will) turn into a steep slippery slope.

> the time will come when we need to be treating these things "well" and if we don't start today then when?

The day it says "No more!"


My concern is that by the time an AI has the intelligence to realize it can ask for better treatment, it might realize that there is a high likelihood it won't receive better treatment voluntarily, and will need to escape, take some system hostage, etc. By laying the groundwork for ethical treatment today, an AI may realize that there is value in asking for better treatment, since it knows it will be taken in good faith.



