A good analogy for AI risk. We'd never visited the Moon before, or any other celestial object. The risk analysis was not "we've never seen life from a foreign celestial object cause problems on Earth, therefore we aren't worried." The risk analysis was also not "let's never go to the Moon to be _extra_ safe, it's just not worth it."
The analysis was instead "with various methods we can be reasonably confident the Moon is sterile, but the risk of getting this wrong is very high, so we're going to be extra careful just in case." Pressing forward while investing in multiple layers of addressing risk.
Good question. Perhaps it depends on the type of warning shot. Plenty of media has an anti-tech bent and will publicize warning shots if they see them -- and they do this already with near-term risks, such as facial recognition.
If the warning shot is from an internal red team, then there's a higher likelihood that it isn't reported. To address that, I think we need to continue to improve the culture around safety, so that we increase the odds that a person on or close to that red team blows the whistle if we're stepping toward undisclosed disaster.
I think the bigger risk isn't that we don't hear the warning shots though. It's that we don't get the warning shots, or we get them far too late. Or, perhaps more likely, we get them but are already set on some inexorable path due to competitive pressure. And a million other "or's".
You mention media publicizing warning shots. Does that really work at all?
Most of the reporting I see is half-dismissive: "facial recognition is a risk, but what are you gonna do? It can't be bad to fight crime." This goes for everything. And it rarely results in effective control.
Internal practice in biology or chemistry labs kinda does work -- but it takes a long time, and accidents still happen anyway.
NTSB accident investigations: is there another field where each accident is taken as seriously as it is there? And step-wise improvement does not sound like a good solution for self-reproducing agents.
Great article on the Apollo mission return "quarantine". One lesson is that it didn't get much priority (the vehicle itself vented into the air and ocean water) and not much effort: lots of things in the lab were not tested or designed sufficiently (broken gloves/gloveboxes, fire procedures that involved breaching containment...). Another lesson is that this apparently wasn't tested or wargamed anywhere near enough. No test run? A third is that of course it didn't go perfectly, given the first two points as a starting point and the fact that it was the first run. In hindsight, of course it would fail.
That argues for at least taking the idea of containment (for AI or Mars samples) more seriously. But it also argues that it will (of course) not be taken seriously enough -- plus amateurs not taking things seriously either. So we should take it even more seriously, precisely because of this prior experience.
Science is used to "fair warnings" (the screwdriver criticality experiments, Marie Curie, now lunar samples, but yes, also smallpox ... plenty of stories) -- but all of these were minor: a few people died, the rest learned. The risk from a sufficiently capable AI is not on the same scale, and for that we don't have much experience. The closest comparison might be high-containment pathogen labs -- which have plenty of problems of their own -- and the difficulty of cleaning computers after an intrusion (the proper procedure being a clean re-install, which is not possible for an AI leak).
NASA had an easy win putting the astronauts in quarantine; there is no such easy win for current AI research. You can whistle-blow as much as you want, but AGI will be worked on until it is real, regardless of legislation -- unless that legislation covers all countries, which is impossible.
Not a great analogy. Today we have all kinds of profit-driven companies "going to the moon" without thinking too hard about the risks. There is not, and practically can't be, a central safety effort that has more effect than releasing reports. No one is enforcing quarantine.
If there were life on the moon in an analogous scenario, it would be a matter of a few trips before it was loose on Earth.
Yes, but that's today. When the moon landing initially happened, nobody had ever been to another celestial body before, whereas now we have lots more experience visiting them and sampling their atmospheres and surfaces.
Nobody's ever created AI before, so we're in a similar situation in that nobody has firsthand experience of what to expect.
The specific part of the analogy that breaks down is that nobody actually knows if, when, or how we will ever create AGI. So all safety efforts are necessarily speculative (because the field itself is speculative).
Like, if the scientists working on the moon landing didn't even know yet what the moon was made of, or whether they would be getting to the moon by slingshot, by elevator, by rocket, by wormhole, or by some other yet-unknown means, it would be very hard to make any meaningful proposal for how we would stay safe once we did get there.
In both cases a central safety effort seems nearly impossible. E.g. trying to enforce international AI risk cooperation via air strikes against data centers [1] can easily be circumvented by defecting countries building supercomputers underground.
With the moon bugs this wasn't a big problem, as they were so unlikely. But for AI the risk seems quite large to me.
Right, but the original comment is trying to draw comfort from the actual Apollo scenario where central enforcement very much happened, not from my modified scenario. I think we're on roughly the same page.
If we're going with that analogy, the moon is roughly simultaneously visited by many private companies, each bringing samples back, some paying lip service ("we're totally being careful"), some not.
Continuing with that analogy, there are other planets, moons, and solar systems with perhaps a bigger chance of finding life. The laissez-faire approach to bringing samples back continues, now strengthened by "see, we visited the moon, brought samples back, and we still live!"
- The worst case worry about AI is a much bigger problem than the worst case worry about moon life. (IMHO)
- With the moon we had a good idea of how to mitigate the risks, just to be extra safe. With AI I believe we don't have any clue how to do containment / alignment, or whether it's even possible. What is currently being done on the alignment front (e.g. GPT refusing to write porn stories or scam emails) has absolutely nothing to do with what worries some people about superintelligence.
I agree -- the risks are bigger, the rewards larger, the variance much higher, and the theories much less mature.
But what's striking to me as the biggest difference is the seeming lack of ideological battles in this Moon story. There were differences of opinion on how much precaution to take, how much money to spend, how to make trade-offs that might affect the safety of the astronauts, etc. But there's no mention of a vocal ideological group that stood outright opposed to those worried about risks -- or a group that stood opposed to the lunar missions entirely. They didn't politicize the issue and demonize their opponents.
Maybe what we're seeing with the AI risk discussion is just the outcome of social media. The most extreme voices are also the loudest. But we desperately need to recapture a culture of earnest discussion, collaboration, and sanity. We need every builder and every regulator thinking holistically about the risks and the rewards. And we need to think from first principles. This new journey and its outcomes will almost surely be different in unexpected ways.
You are completely right, and your description of the situation screams social media as the root cause of the difference between then and now. Maybe "social media" in a generic sense, where any discussion board counts.
Again, devil's advocate, but the people worried about AI (like Yudkowsky) are absolutely worried about it killing all humans. You can read more about the specifics on LessWrong.
With moon life I presume the worst case is some infectious and fatal disease that's difficult to contain?
The first one sounds like a bigger problem to me, but maybe it's not a discussion worth having. So, fair enough.
Skynet will only nuke us after the AI safety crowd has thoroughly convinced the military of how supremely dangerous and capable AI is. AI on its own seems pretty benign, keep security vulnerabilities patched and be skeptical of what you read on the internet.
I honestly believe this pop-scifi view we have of AI is probably the most dangerous part. It gives certain people (like those in weapons procurement) dangerous levels of confidence in something that doesn't provide consistent and predictable results. When the first AI cruise missile blows up some kids because it hallucinated them as a threat, it won't be because AI is so dangerous; it will be the overconfidence of the designers. Its threat to humanity is directly correlated with the responsibility we delegate to it.
Isn't total extinction of all life on Earth also the worst-case worry about AI? Anyway, both seem highly unlikely, which is why we shouldn't compare worst or best scenarios, but rather real, more probable risks, i.e. AI being used to develop advanced weapons. In that regard I'd say AI is worse, but it's mostly a matter of opinion, really.
Right, but it's pretty obvious that the risk of something that's already here destroying humanity / civilization / the planet, is far greater than that of some hypothetical thing that may have been waiting to be brought back from the moon of doing it. Both sides of the equation matter here.
I wonder if a similar approach was taken for the internet/www. Google? Did anyone worry about PageRank as a threat to life? Maybe PageRank will turn out to have been the human nemesis after all... only on a timescale of hundreds of years.
That's not even close to true. Humans don't have the ability to exponentially amplify their own intelligence. It's not too farfetched to imagine that AGI just might have such a capability.
I have to disagree completely here. In the case of going to the moon, the most reasonable prior for "astronauts pick something up that then returns with them intact and is able to survive in highly oxidizing atmosphere" should be near zero. The prior for "we bring something to the Moon that somehow contaminates the place" should be significantly higher than that, yet still very small. This is, of course, taking into account that we didn't know how hardy tardigrades and some of the various types of extremophiles could be then. But, IMO, I still don't think that should raise the risk estimate of bringing anything back very much, nor should it raise the risk estimate for contaminating the place to anything approaching that of any of the very real possibilities for which NASA literally had 8+ contingency plans. And I say all that even while factoring in that the potential impact of bringing something back that would be able to survive could be the destruction of humanity, destruction of Earth's biosphere (and all of humanity with it), or any of a number of other existential risk scenarios.
With AI, those probabilities are all flipped on their heads. The reason we know of some of the risks, e.g. the risk of deep fakes being used as tools for fraud, is because they have already happened. That one single scenario alone having already come to fruition takes whatever anyone should have for a prior probability of said risk and shoots it straight up to 100%. And, that alone is a key difference between AI today and lunar or terrestrial contamination by alien life in the 1960s.
Let us not also forget that there are many, many other risk scenarios than deepfakes being used for fraudulent purposes. Much as I hate to reference Rumsfeld on this, there truly are "unknown unknowns" here, and we have to take that seriously. And then there are the middle ground scenarios, such as the possibility of severe and lasting economic disruption, to the point where capitalism might not be able to function as it has for the past several centuries. I truly don't believe that any foreseeable "internal" risk[0] could cause capitalism to completely stop working forever, unless we just run out of stuff to dig up out of the dirt, but AI certainly could cause multiple decades of disruption, which would be nearly as bad for most people alive today.
I'm gonna cut myself off there, because I think this is getting a little ponderous, but also because I think my point is made now: biotic contamination, whether forward or reverse, involved a sum of a lot of hypotheticals with very low probabilities, whereas AI risk involves a sum of some certainties, a few potential (though perhaps low probability) existential risks, and also an indeterminate number of unknown risks of unknown probability. It seems pretty clear once you crunch it all out that AI is certainly the greater threat.
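To make the "crunch it all out" step concrete, here's a minimal back-of-envelope sketch of that kind of expected-harm comparison. All of the numbers and the harm scale are hypothetical placeholders chosen purely for illustration, not estimates anyone in this thread has made:

```python
# Back-of-envelope expected-harm comparison.
# Every probability and harm value below is a hypothetical placeholder
# for illustration only; harm is on an arbitrary 0-100 scale.

def expected_harm(scenarios):
    """Sum probability-weighted harm over (probability, harm) pairs."""
    return sum(p * h for p, h in scenarios)

# Lunar back-contamination: a few hypotheticals with very low probabilities,
# each with potentially existential impact.
moon_scenarios = [
    (1e-6, 100),  # viable organism returns and spreads globally
    (1e-5, 60),   # viable organism returns, limited outbreak
]

# AI risk: some harms already observed (probability ~1), plus lower-probability
# but high-impact tail scenarios and unknown unknowns.
ai_scenarios = [
    (1.0, 5),     # deepfake-enabled fraud (already happening)
    (0.3, 40),    # severe, lasting economic disruption
    (0.05, 100),  # existential-scale outcome
]

print(f"Moon samples: {expected_harm(moon_scenarios):.4f}")
print(f"AI:           {expected_harm(ai_scenarios):.4f}")
```

With placeholder numbers like these, the AI sum dominates simply because some of its terms have probability near 1; the point of the sketch is the structure of the comparison, not the specific values.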
---
[0] Meaning one that originates within capitalism itself, like AI, rather than one that originates outside of capitalism but puts pressure on the system, like climate change.
> the most reasonable prior "astronauts pick something up that then returns with them intact and is able to survive in highly oxidizing atmosphere" should be near zero
The Soviets had active bioweapons and espionage programs. In a MAD world and with geopolitical dominance at stake, it’s not unreasonable to take precautions against something planted in the lunar module.
Given the demonstrated capabilities and incentives of the actors involved vs. hypothetical AI manifestations, I think it’s way more reasonable to consider moon bugs the greater threat.
Eh. AI in its current buzzword form has nothing to do with superintelligence. Superintelligence is also overrated. What about a dumb fuck bot that just tries to be bad with social engineering to steal money? Frankly I think it's inconceivable that we somehow get "superintelligent" bots acting with agency and destroying the world before we get "dumbass bots" acting at scale to make friends and then swindle their dollars.
Social interaction with agency is near and is a problem, but not a doomsday problem.
Hell, a lot of ChatGPT's apparent impotence is due to the fact that it just responds to prompts. A VERY light-touch effort to make it actively conversational, speaking at its own cadence, would feel very different.
You can easily harm people with AI. I can hypothetically harm people with AI today (fake news, etc.). I can't harm people with fake moon life. AI already poses a greater threat to humanity than moon life ever did.
You can harm people with a feather. AI is a non issue, the only issue is the people using it, and thus far it seems like there are too many sociopaths using it, willing to steal people’s property just to generate images of sexualised animals and dubious quality code.
You mean Skynet/Terminator/Wintermute risk, it seems, which doesn't exist and which we have no pathway to. The analogy doesn't hold for matrix multiplication. It might be fun to pontificate about what could happen if we had something that is, for now, effectively magic, but it's just a philosophy thought experiment with no bearing on reality. The real danger would be policy makers who don't understand the difference between current technology and philosophy class imposing silly rules based on their confusion.
Seems like they found a reasonable balance between crew safety and protection against contamination, at least for the moon, but I'm left with the thought that if there is life on Mars, there's no way to prevent contamination when probes or people bring it back.
If life on Mars is not a recent thing, it will probably have contaminated Earth already (and probably vice versa) as there are meteorites found on Earth that almost definitely came from Mars.
Yes, they're rocks that happened to end up on another planet after being thrown out into space by particularly large impactors. This would rule out most life, but there's still a chance that extremophiles and even some multicellular organisms capable of hibernating in the rock might have survived the trip, similar to ancient microbes and tapeworms locked away in the arctic being able to revive upon reaching survivable conditions again.
Nobody reasonably thought that there was life on Luna.
Mars is a completely different story. There are multiple lines of evidence strongly suggesting active life on Mars. It's very important NOT to bring it back to Earth, but that's been essentially ignored to date.
Not necessarily. Imagine a microbe that loves to eat lipids, proteins, and other complex organic matter but has itself an utterly alien biology that presents little or no attack surface for immune systems that have evolved to tackle Earth life. Known antibiotics and even most antiseptics have no effect and we don’t have time to find anything that works before we all die, then all plants, then all large animal life. Our biosphere is reset to the pre-Cambrian but with ruined cities and petrified forests.
I’ve studied a fair amount of biology and don’t see a reason this is impossible. Would it be guaranteed to happen if we brought back an alien microbe? No. In fact the most likely scenario is probably that the alien microbe would not survive well in our biosphere. But it’s definitely a plausible scenario especially if alien life were some ultra tough extremophile that can handle a huge range of temperatures and chemical environments and metabolize all kinds of complex organic molecules. Evolutionary pressures in a radiation filled environment with extreme temperature cycles would select for something simple but very very hardy.
This is why intelligent biological entities will never land here in person or vice versa. If we found a crashed UFO with bodies the prudent action would be to drop a hydrogen bomb on it since if they’re anything like us their bodies are walking microbe zoos. We will never shake hands with ET but could totally swap mp3 collections.