
> For example, I'm sure China's central planners would love to get an AGI first, and might be willing to take a 10% risk of annihilation for the prize of full spectrum dominance over the US.

This is the main problem - no matter what constraints the US (or EU) puts on itself, authoritarian regimes like Russia and China will definitely not adhere to those constraints. The CCP will attempt to build AGI, and they will use the data of their 1.4 billion citizens in their attempt. The question is not whether they will - it's what we can do about it.



Saying we shouldn't "tap the brakes" on AI out of safety concerns because Russia/China won't is a little like saying we shouldn't build containment buildings around our nuclear reactors, because the Soviet Union doesn't. It's a valid concern, but the solution to existential danger is not more danger.


I think it's more like saying we shouldn't put an upper limit on the number of nuclear weapons we hold because the Soviet Union/Russia may not adhere to it.

We were able to negotiate nuclear arms control limits with Russia, and my understanding is that they worked fairly effectively. The problem with AGI is that there isn't a way to monitor/detect development or utilization.


"The problem with AGI is that there isn't a way to monitor/detect development or utilization."

This is not completely true, although it is admittedly much easier to "hide" an AI, e.g. by keeping it offline and on disk only. To some extent you could detect on-disk models with virus-scanner-style signature checks, though encryption or obfuscation would make those fairly easy to bypass. Otherwise, these models do at least currently take a fair amount of hardware to run: anything "thin" is unlikely to be a concern, and any large concentration of hardware (data centers, for example) could be monitored in real time.
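For what it's worth, the scanner approach is at least sketchable. Here's a minimal Python example that flags large files whose headers match common weight-serialization formats; the magic bytes, size threshold, and scan root are illustrative assumptions, not a real detection product, and as noted it's trivially defeated by encryption or obfuscation:

    import os

    # Illustrative signatures only: GGUF files start with b"GGUF";
    # PyTorch .pt checkpoints are zip archives (b"PK\x03\x04").
    MAGIC_BYTES = {
        b"GGUF": "gguf weights (llama.cpp)",
        b"PK\x03\x04": "zip archive (e.g. PyTorch checkpoint)",
    }
    MIN_SUSPECT_SIZE = 1 << 30  # only flag files over ~1 GiB

    def scan(root):
        hits = []
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    if os.path.getsize(path) < MIN_SUSPECT_SIZE:
                        continue
                    with open(path, "rb") as f:
                        head = f.read(8)
                except OSError:
                    continue  # unreadable file; skip it
                for magic, kind in MAGIC_BYTES.items():
                    if head.startswith(magic):
                        hits.append((path, kind))
        return hits

    for path, kind in scan("/data"):  # "/data" is a placeholder root
        print(path, "->", kind)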

It's obviously not fool-proof, and applying it at a national level would require some of the most invasive controls ever created (e.g. installing monitoring software across entire countries). But you could assume serious threat actors already have these capabilities, and perhaps establish a process that more or less demonstrates the majority of commercial hardware to be "AI free".

So I would agree it is very, very difficult, and unlikely, but not impossible.


Yes but you’d never be sure. Not sure enough to disarm.


> Saying we shouldn't "tap the brakes" on AI out of safety concerns

I didn't say that we shouldn't tap the brakes, nor is that the only strategy. Other ones include, in rough order of viability: global economic sanctions on hostile actors attempting to develop AGI; espionage/sabotage of other AGI efforts (see Stuxnet and the Iranian centrifuges); developing technologies and policies meant to diminish the impact of a hostile actor having AGI; and military force/invasion of hostile actors to prevent the development of AGI.

I'm sure you can think of others - regardless, there are far more options than just "more AI research" and "less AI research".


Not so sure your analogy works here. Aren't containment buildings meant to protect the area where the reactors are? I think the closer analogy would be saying the US needed to tap the brakes on the Manhattan Project because nuclear weapons are dangerous, even though Nazi Germany and Soviet Russia were going full steam ahead with their nuclear weapons programs during WW2 and the Cold War. The world would probably be very different if we had chosen the 'safer' path.



