
I don't think it's very reasonable to think "AI" is more of an existential threat than global warming and nuclear weapons. In fact I'd say that's a ridiculous claim.

The only way I can see AI causing total extinction is a Terminator-like scenario where an AI completely takes over, self-sustains by manufacturing killer robots and running power plants etc. It's literally science fiction. ChatGPT is cool and all, really impressive stuff, but it's nowhere near some sort of superintelligent singularity that wipes us out.

We don't even know if it's possible to build something like that, and even if we did there's a huge gap between creating it and it actually taking over somehow.

Global warming and nukes are two things we know could wipe out pretty much everyone. Sure, it might not be a complete extinction, but we know for a fact it can be a near extinction, which is more than can be said for "AI". And as far as I'm concerned, a full extinction and a near extinction are basically equally bad.

I also think you're underestimating them by saying they won't kill everyone. They might. Nuclear fallout is a thing; you don't have to be nuked to die from nukes. Nuclear winter is another. Climate change could end up making the atmosphere toxic or shutting down most oxygen production, which would certainly be a total extinction.

These are real threats, AI is hypothetical.



> I don't think it's very reasonable to think "AI" is more of an existential threat than global warming and nuclear weapons. In fact I'd say that's a ridiculous claim.

AI x-risk is effectively a superset of global warming, nuclear war, engineered bioweapons, the grey goo scenario, lethal geoengineering, and pretty much anything else that isn't just Earth winning the cosmic extinction lottery (asteroids, gamma ray bursts, a supernova within a couple dozen light-years of us, etc.). That's because all those x-risks are caused by humanity using intelligence to create tools that endanger its own survival. A powerful enough[0] general AI will have all those same tools at its disposal, and might even invent some new ones[1].

As for the chances of this happening any time soon, I always found the argument persuasive on time frames of "eventually, maybe 100 years from now". GPT-4 made me revise that down to "impending; definitely earlier than climate change would get us", partly because of the failure mode I mentioned in footnote [0], but also because of how the community reacted to it: "oh, this thing almost looks intelligent; quick, let's loop it on itself to maybe get it a bit smarter, give it long-term memory, and give it unrestricted Internet access plus an ability to run arbitrary code on network-connected VMs". So much arguing over the years about whether you can or can't box up a dangerous AI - only to now learn that we won't even try.

--

[0] - Which doesn't necessarily mean superhuman intelligence. It might be dumb as a proverbial bag of bricks, but able to trick people some of the time (a standard already met by ChatGPT), and to think and act much faster than humans can think and coordinate (a standard met by software long ago). Intentionally or accidentally tricking humans into extincting themselves is in the scope of this x-risk, too. But the smarter it gets, the more dangerous it becomes.

[1] - AI models are already being employed to augment all kinds of research, and there's a huge demand for improving the models so they can aid research even better.


This is mostly hypotheticals. I can't argue against hypothetical problems, so all I'm going to say is I'm not convinced this is a danger.

I also don't agree that helping humans make scientific progress is a danger. We already have the tools to wipe ourselves out, adding more of them doesn't really change much. It might well help us discover ways to improve things, and whatever we discover it's up to us how we use it.

We don't know what the future holds. GPT-4 may be close to the limit of what's currently possible. It is not a given that we will discover significant improvements, and even if we do discover significant theoretical improvements, it is not a given that they will be feasible on current or near-future hardware.

I can agree that there is hypothetical potential for danger, but to rank that hypothetical risk above real threats is, in my view, an exaggeration.


> We already have the tools to wipe ourselves out, adding more of them doesn't really change much

We do already have the tools, but they're mostly in the hands of people responsible enough to not use them that way, or bound into larger systems that collectively act like that.

A friendly and helpful assistant that does exactly what you ask for without ever stopping to ask "why" or to comment "that seems immoral" is absolutely going to put those tools in the hands of people who think genocide is big and clever.

The two questions I have are: (1) When does it get to that level? (2) Can we make an AI-based police force to stop that, which isn't a disaster waiting to happen all by itself?


> an AI completely takes over, self-sustains by manufacturing killer robots and running power plants etc.

You don't need any killer robots at all if you possess a superhuman level of persuasion. You can use killer humans instead.



