
I think this is an excellent explanation of why it may have been shut down, and of the need for some degree of monitoring and accountability.

The fear is not that the chatbot will come to life, but rather that it could regurgitate dangerous responses drawn from the text it was trained on.

I don't think it's too far a leap to imagine someone taking the bot's output too literally and creating a harmful situation.



The whole concept of a dangerous response from a chatbot is anathema to a society that values the free exchange of ideas.

Who gets to decide what's "dangerous"? Why? Over and over in human history, we've seen speech restrictions, ostensibly meant to protect the public, used instead to impose orthodoxy and delay progress. Even if some utterance might be acutely dangerous, the risk of restrictions being abused to cement power is too great to tolerate them.

I reject AI safety rules for the same reason I reject restrictions on human speech. There is no such thing as a dangerous book or a dangerous ML model. If such a thing is dangerous, it's a danger only to those who have done wrong.


> There is no such thing as a dangerous book or a dangerous ML model.

ML models that discriminate against women and black people seem self-evidently dangerous to me; who has done wrong here?

Also, ML models that are inadequately designed and tested, and then mooted as useful for medical applications, seem dangerous too - like drugs that aren't tested before being given to infants.

I just don't understand your reasoning about this - if books can't be dangerous then why are they so powerful? If ML models can't be dangerous then how can they have utility?


> If ML models can't be dangerous then how can they have utility?

> Something can only have utility if it's dangerous.

smh

> ML models that discriminate against women and black people seem self-evidently dangerous to me; who has done wrong here?

> Also, ML models that are inadequately designed and tested, and then mooted as useful for medical applications, seem dangerous too - like drugs that aren't tested before being given to infants.

Whoever decided to take the results of that model and directly translate what it says into actions without any further thought.



