This entire category falls into my "numerical simulations at national labs" category of "don't care".
If you wanted to use a bioweapon to kill a bunch of people, you would ignore the DeepCE paper and use weapons that have existed for decades. Existing weapons would be easier to design, easier to manufacture, easier to deploy, and more effective at killing.
Computational drug discovery is not new, to put it mildly, and neither is the use of computation to design more effective weapons. Hell, the Harvard IBM Mark I was used to help with the Manhattan Project. There are huge barriers to entry between "know how to design/build/deploy a nuke/bioweapon" and "can actually do it".
And that's how I feel about AI-for-weapons in general: the people it helps can already make more effective weapons today if they want to. It's not that the risk of using WMDs doesn't exist. It's that WMDs are already so deadly that our primary defense is just that there's a huge gap between "I know in principle how to design a nuke/bioweapon" and "I can actually design and deploy the weapon". I don't see how AI changes that equation.
> Is x-risk the only thing we care about? The entire thread started with arguing x-risk is a distraction. I would be very slightly more comfortable with that argument if people took 'ordinary' risks seriously.
Discussion of x-risk annoys me precisely because it's a distraction from working on real risks.
> That's the point. Give me an x-risk scenario the doomers warn about, and I'll find you a group of humans which very much want the exact scenario (or something essentially indistinguishable for 99% of humanity) to happen and will happily use AI if it helps them. Amusingly, alignment research is unlikely to help there - it can be argued to increase the risk from humans.
Right, but
1. those humans have existed for a long time,
2. public models don't provide them with a tool any more powerful than the internet, and
3. to the extent that models like DeepCE help with discovery, someone with the knowledge and resources to actually operationalize this information wouldn't have needed DeepCE to do incredible amounts of damage.
Again, I'm not saying there is no attack surface here. I'm saying that AI doesn't meaningfully change that landscape, because the barrier to operationalizing is high enough that by the time you can operationalize, it's unclear why you need the model -- why you couldn't have made a similar discovery with a bit of extra time, or even just used something off the shelf to the same effect.
Or, to put it another way: killing a ton of people is shockingly easy in today's world. That is scary. But x-risk from superhuman AGI is a massive red herring, and even narrow AI for particular tasks such as drug discovery is honestly mostly unrelated to this observation.
>>there are some mitigations we could introduce such that the barrier to using LLMs for this sort of thing
>There are many things we can do in theory to mitigate all sorts of issues, which have the nice property of never ever being done.
Speak for yourself. Mitigating real risks that could actually happen is what I work on every day. The people advocating for working on x-risk -- and the people working on x-risk -- are mostly writing sci-fi and doing philosophy of mind. At a minimum it's not useful.
Anyways, at the very least, even if you want to prevent these x-risk scenarios, focusing efforts on more concrete safety and controllability problems is probably the best path forward.
>There are huge barriers to entry between "know how to design/build/deploy a nuke/bioweapon" and "can actually do it".
>public models don't provide them with a tool any more powerful than the internet
> I'm saying that AI doesn't meaningfully change that landscape, because the barrier to operationalizing is high enough that by the time you can operationalize, it's unclear why you need the model -- why you couldn't have made a similar discovery with a bit of extra time, or even just used something off the shelf to the same effect.
Your expertise is in AI, but the issues here aren't just AI; they involve (for example) chemistry and biology, and I suggest speaking with chemists and biologists about the difference AI makes to their work. You may discover the huge barrier isn't that huge, and that AI can make discoveries easier in ways that 'a little extra time' strongly undersells (most humans would take a very long time searching through possibility-space, and such a search may well be detectable since it requires repeated synthesis and experiment...). Also, to borrow an old Marxist chestnut: a sufficient difference in quantity is a qualitative difference*. Make creating weapons easy enough and you get an entirely different world.
I get your issues with the 'LessWrong cult'; I have quite a few of my own. However, that doesn't make the risks nonexistent, even if we were to discount AGI completely. Given what I see from current industry leaders (often easily bypassed blacklisting), I'm not so impressed with the current safety record. I fear it will crack on the first serious test, with disastrous consequences.
* There's a smarter phrasing which I can't find or remember.