
I think the best argument against us ever finding AGI is that the search space is too big and the dead ends are too many. It's like wandering through a monstrously huge maze with hundreds of very convincing fake exits that lead to pit traps. The first "AGI" may just be a very convincing Chinese room that kills all of humanity before we can ever discover an actual AGI.

The conditions that produce "kill all humanity" may be far more common than those that produce "create a novel thinking being," to the point where it is statistically improbable for the human race to ever reach AGI. Especially since a lot of AI research is aimed specifically at autonomous weapons.



Is there a plausible situation where a humanity-killing superintelligence isn't vulnerable to nuclear weapons?

If a genuine AGI-driven human extinction scenario arises, what's to stop the world's nuclear powers from using high-altitude detonations to produce a series of silicon-destroying electromagnetic pulses around the globe? It would be absolutely awful for humanity, don't get me wrong, but it'd be a damn sight better than extinction.


Physically, maybe not, but an AGI would know that, would think a million times faster than us, and would have an incentive to prioritize disabling our ability to do so. Essentially, if an enemy AGI is revealed to us, it's probably already too late to stop it. Not guaranteed, but a valid fear.


What stops them is being politically captured by an AGI.

Not to mention that the whole idea of "radiation pulses destroying all electronics" is cheap sci-fi, not reality. A decently well-prepared AGI could survive a nuclear exchange more easily than human civilization would.


I think it's much more likely that a non-AGI platform will kill us before AGI even happens. I'm thinking more of the doomsday machine from Dr. Strangelove than the Terminator.



