
Well, imagine the case where a loner loses their best friend, and goes to this website to "talk with their trusted friend again" instead of getting some real help. Will GPT-3 reliably counsel this sad and lonely person to "stay behind", or will it suggest they should "meet up"?

Consider a lonely teenager whose internet friend one day stops replying, so they go and make their own friend, and then, maybe after "taking advice" from this friend, go to school with a gun?

Maybe not terribly likely events, but I think a certain element of risk must be managed (maybe limit session durations so you can't fool yourself into thinking it's real as easily).

Rohrer, however, has made his views pretty clear: he won't manage any risk, because what he's doing is art, and so there's no place for safety.




> Well, imagine the case where a loner loses their best friend, and goes to this website to "talk with their trusted friend again" instead of getting some real help. Will GPT-3 reliably counsel this sad and lonely person to "stay behind", or will it suggest they should "meet up"?

So we should shut down the internet in general, because people ought to go outside instead too?


The Internet in general is a very broad thing with lots of valuable, legitimate use cases. There are definitely other specific things on the Internet which I think should be shut down because they tend to keep people trapped: infinite scroll, for example.


> There are definitely other specific things on the Internet which I think should be shut down because they tend to keep people trapped: infinite scroll, for example

So ISPs should force everyone who hosts a website to insert monitoring software to prevent abuse like that, and cancel service for anyone who refuses.


> So we should shut down the internet in general, because people ought to go outside instead too?

That's a pretty good idea tbh.


I agree btw, because I think overall the internet was a mistake and a net negative. I just think this particular line of argumentation is really stupid, which is why I chose this example: to show how it can be used to cancel anything and everything.


Your straw man of the argument is, however, plainly ridiculous, so I'm not sure why you think it's successful.


It's not a straw man. It's taking your own logic to the extreme: it could be abused, so we need to either control it or get rid of it.


Taking it to the extreme is often a trivial way of creating a straw man.

My argument was about one specific service not being perfectly free of risk, and so there were clear boundaries inherent in the argument. You dismiss these boundaries and then act surprised that the resulting argument is silly.

This makes me wonder what your stance is on boxes, as they too depend on imposing boundaries.


> Consider a lonely teenager whose internet friend one day stops replying, so they go and make their own friend, and then, maybe after "taking advice" from this friend, go to school with a gun?

But a person could give such "advice" too. Should we shut down text chat services that let humans talk to each other too?


Is this supposed to be a serious critique of an argument regarding the safety of a machine, that "a human could intentionally do the damaging action"?


I think this is an excellent explanation of why it may have been shut down, and of the need for some degree of monitoring and accountability.

The fear is not that the chat bot will come to life, but rather that it could regurgitate dangerous responses drawn from the text content it was trained on.

I don't think it's too far of a leap to see someone taking the output from the bot too literally and possibly creating a negative situation.


The whole concept of a dangerous response from a chatbot is anathema to a society that values the free exchange of ideas.

Who gets to decide what's "dangerous"? Why? Over and over in human history, we've seen speech restrictions ostensibly to protect the public used to impose orthodoxy and delay progress. Even if some utterance might be acutely dangerous, the risk of restrictions being abused to cement power is too great to tolerate them.

I reject AI safety rules for the same reason I reject restrictions on human speech. There is no such thing as a dangerous book or a dangerous ML model. If such a thing is dangerous, it's a danger only to those who have done wrong.


> There is no such thing as a dangerous book or a dangerous ML model.

ML models that discriminate against women and black people seem self-evidently dangerous to me; who has done wrong here?

Also, ML models that are inadequately designed and tested and then mooted as useful for medical applications seem dangerous too - like drugs that aren't tested before being given to infants.

I just don't understand your reasoning about this - if books can't be dangerous then why are they so powerful? If ML models can't be dangerous then how can they have utility?


> If ML models can't be dangerous then how can they have utility?

> Something can only have utility if it's dangerous.

smh

> ML models that discriminate against women and black people seem self evidently dangerous to me; who has done wrong here?

> Also ML models that are inadequately designed and tested and then mooted as useful for medical applications seem dangerous to - like drugs that aren't tested before being given to infants.

Whoever decided to take the results of that model and directly translate what it says into actions without any further thought.



