First of all, I consider the Drake equation to be, at best, armchair speculation. As I explained at https://news.ycombinator.com/item?id=34070791 it is quite plausible that we are the only intelligent species in our galaxy. Any further reasoning built on such speculation is pointless.
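For reference (an aside of mine, not part of the argument I'm responding to), the equation in question multiplies seven factors, most of which we can only guess at:

    N = R_* \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L

where R_* is the rate of star formation in the galaxy, f_p the fraction of stars with planets, n_e the number of potentially habitable planets per such star, f_l the fraction of those on which life arises, f_i the fraction of those that develop intelligence, f_c the fraction of those that produce detectable signals, and L the length of time such a civilization remains detectable. Several of these factors are unknown to within orders of magnitude, so any product of them inherits that uncertainty.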
Second, to make the argument, they specify a whole list of supposedly necessary conditions for AGI to be a threat. These range from unnecessary to BS. Let me walk through them to show why.
The first claimed requirement is that an intelligent machine must be able to improve itself and reach a superhuman level. But that's not necessary. Machine learning progresses in unexpected leaps - the right pieces put together in the right way suddenly have vastly superior capabilities. The creation of superhuman AI therefore requires no bootstrapping - we create a system and then find it is more capable than expected. And once we have superhuman AI, well...
This scenario also shows that the second point - that the improvement must be iterative - is unnecessary.
The third point, "not limited by computing power", is BS. All we need is for humans to be less efficient implementations of intelligence than the machine. As long as it is better than we are, the theoretical upper bounds on how good it could be are irrelevant.
The fourth point, about a single overriding goal, is completely unnecessary. Many AIs with many different goals could cumulatively drive us extinct without any such monomaniacal goal. Our death may be a mere side effect.
The fifth point, that it happens so fast we can't turn it off, is pure fantasy. We only need AGI to be deployed within organizations with the power and resources to make sure it stays on. Look at how many organizations are creating environmental disasters right now. We can watch those disasters unfold in slow motion and demonstrate exactly how they are happening, but our success rate in stopping them is rather poor. Same thing here. The USA can't turn it off because China has it. China can't turn it off because the USA has it. Meanwhile BigCo has increased its profit margins by 20% by running it, and wants to continue making money. It is remarkably hard to convince wealthy people that the way they are making their fortunes is destroying the world.
Next we have the supposed requirement that the machine actively wants to destroy humanity. No such thing is required. We want things. AGI makes things. This results in increased economic activity, which creates increased pollution, which turns out to be harmful to us. No ill intent is necessary here - it just does the same destructive things we already do, but more efficiently.
And finally there is the presumed requirement that the machine has to do research on how to make us go extinct. That's a joke. Testosterone in young adult men has dropped substantially in recent decades. Almost certainly this is due to some kind of environmental pollution, possibly an additive to plastics that messes with our endocrine system. We don't know which one. You can drive us extinct by doing more of the same - come up with more materials, produced at scale, that do things we want and have hard-to-demonstrate health effects down the line. By the time it is obvious what happened, we've already been reduced to unimportant and easily replaced cogs in the economic structure that we created.
-----
In short, a scenario where AGI drives humanity extinct can look like this:
1. We find a way to build AGI.
2. It proves useful.
3. Powerful organizations continue to operate with the same lack of care about the environment that they already show.
4. One of those environmental side effects proves to be lethal to us.
The least likely of these hypotheses is the first, that we succeed in building AGI. Steps 2 and 3 are expected defaults with probability close to 100%. And as we keep rolling the dice with new technologies making new chemicals, the odds of step 4 also rise toward 100%. (Our dropping testosterone levels suggest that no new technology is needed here - just more of what we're already doing.)
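To make the dice-rolling arithmetic concrete, here is a minimal sketch; the per-technology risk p is a number I made up purely for illustration, not an estimate of anything:

    # Toy model of "keep rolling the dice": if each widely deployed new
    # technology has some small independent chance p of a catastrophic,
    # slow-to-detect side effect, the chance of at least one such outcome
    # approaches 1 as the number of rolls grows.

    def p_at_least_one(p: float, n: int) -> float:
        """Probability of at least one bad outcome in n independent trials."""
        return 1 - (1 - p) ** n

    p = 0.01  # hypothetical per-technology risk, chosen only for illustration
    for n in (10, 100, 500, 1000):
        print(f"n={n:5d}  P(at least one) = {p_at_least_one(p, n):.3f}")

With these made-up numbers, the chance of at least one bad outcome is about 10% after 10 rolls, about 63% after 100, and effectively certain after 1000. The point is only that repeated independent rolls compound, whatever the per-roll risk actually is.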
I disagree with most of the assumptions that get you close to P = 1. But I enjoyed your comment anyway. It makes me examine my own assignment of probabilities. I gave 1 and 2 high Ps and the others rather low or undefinable Ps.
I'm curious how you get rather low or undefinable Ps for the other two.
Look into the history of how the tobacco industry tried to suppress research on the harms of smoking, how the sugar industry tried to shift blame for health problems like obesity from sugars to fats, and how the fossil fuel industry has resisted attempts to hold it responsible for global warming. In all three cases, not only did industry resist evidence of harm, it also funded publicity campaigns to try to shift public opinion its way.
Given that I know of no reason to believe that this will change, I think that point 3 has high probability.
As for point 4, we have a history of spreading chemicals widely before discovering bad things about them. The first such chemical to gain notoriety was DDT, but many more have followed. https://www.pnas.org/doi/10.1073/pnas.2023989118 shows that flying insect biomass has dropped by roughly 3/4 in the last few decades. https://www.urologytimes.com/view/testosterone-levels-show-s... likewise shows that, even after controlling for known factors like increased obesity, there is an unexplained decline in testosterone of roughly 1/3. It is reasonable to guess that both are the result of environmental factors. But we are not sure which factors those are, and we are not significantly modifying our behavior. (How could we, when we don't know for sure what we are doing to cause the problem?)
Given these examples, I truly believe we are rolling environmental dice with our health. And if we keep rolling the dice, eventually we'll come up snake eyes.