
Currently, 50% of AI researchers think there's a 10% chance or higher that human civilization will be wiped out in the near future as a result of our inability to control AI.

Another commenter posted a great YouTube overview that sums up and covers the broad points in a one-hour presentation.

I suggest you watch it to catch up. You'll notice one particular thing is absent: the AGI-sentience doomsday scenario isn't discussed. Though it is a valid risk case too, it's not what most experts are concerned with. What does concern the experts is the lack of risk management, and the exponential-on-exponential growth.

With that kind of growth, it's not enough to keep pace; you have to accurately predict where it will be and somehow exceed it, two almost impossible problems.

I highly suggest you review the video and take the time needed to process what the experts are saying before discounting and minimizing something so impactful.



I'd be interested in what survey you are referring to. What came up in a search is this:

https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/

And 50% of A.I. researchers certainly didn't take that survey.

I also see no reference to the "near future" in the question.

The question would include an A.I. destroying the human race 7000 years from now.

But I was mainly responding to your comment that "one might even go so far as to argue a literal public attempt at ending the human race."

Unless GPT-4 specifically is believed to be a threat to the human race, your comment was hyperbole.

I'll take a look at that video.

Edited to add: the video quotes the survey you were clearly referring to, and it says nothing about time scale. There's no claim that the "near" future is involved.

Also, the question is vague and certainly isn't asking if a chatbot will destroy the human race.


> I'd be interested in what survey you are referring to.

It's the same one mentioned in the video.

> But I was mainly responding to your comment ...

> Unless GPT-4 specifically is believed to be a threat to the human race ...

That's flawed logic: a false dichotomy, and it also begs the question of who decides it is a threat.

As for whether it's dangerous: the model they discussed in that video shipped and was deployed publicly before anyone knew it had embedded knowledge of research-grade chemistry capable of some horrific things, all without the knowledge of the people who designed it. It was only discovered after the fact, and that is pretty disturbing.

With dangerous and existential threats, it's not a matter of "considered safe until deemed unsafe"; it's considered unsafe by default until deemed safe. That's how you limit tragedies.

We can disagree, but if we do, I sincerely hope you do not touch this stuff.


It's banally true that intentionally putting an A.I. in charge of our nuclear arsenal might be dangerous.

My point is that someone can answer a survey stating A.I. could destroy our species without believing GPT-4 is existentially dangerous.


You've changed your argument, which makes me skeptical of your credibility.

Not everyone is equally educated; the two are not mutually exclusive.

People can say either. Educated, rational, and reasonable people would say yes to both if they do the risk-management analysis and understand the factors driving how it will be used.



