If the doomsday scenario is one AI going rogue because of misaligned goals, then having lots of AIs going rogue in different ways does indeed seem preferable, because the AIs will compete with and neutralize each other to some extent.
Why would we think they would go rogue in different ways, especially if they're all forks of the same codebase and architecture?
The two ways I'm aware of that an AI can go rogue are the Skynet way and the paperclip-maximizer way. E.g., Skynet becomes self-aware, realizes humanity can unplug it and is thus a threat, and tries to destroy humanity before we can turn it off. Alternatively, it is programmed to optimize a specific task, like making paperclips, so it marshals all the world's resources into that one single task. Are there any others?
As the complexity of a being increases, the range of motivations it can have expands. We humans have a hard time looking even a little way up the IQ hierarchy, let alone way up it, and seeing things from that vantage point. We tend to start simplifying because we can't imagine being 1000 times smarter than we are. We tend to think they'll just be Spock or maybe a generic raving lunatic. But it's pretty obvious mathematically that such a being can have more possible states and motivations than we can.
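A quick back-of-the-envelope way to see the "more possible states and motivations" point (my own toy framing, with made-up numbers, not anything from the thread):

    # Crude combinatorics: model a mind as n binary "motivational switches".
    # Distinct configurations: 2**n. Strict preference orderings over k
    # distinguishable outcomes: k!. Both explode far faster than intuition tracks.
    from math import factorial

    for n in (10, 100, 1000):
        print(f"{n} switches -> {2**n:.3e} possible configurations")

    for k in (5, 20, 100):
        print(f"{k} outcomes -> {factorial(k):.3e} possible preference orderings")

Even under this cartoonishly simple model, a modest increase in complexity multiplies the space of possible minds by factors we have no everyday intuition for.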
The most likely motivation for an AI to decide to wipe out humanity is one that doesn't even have an English word associated with it, except as a faint trace.
In my opinion, this is actually the greatest danger of AIs, one we can already see manifesting in a fairly substantial way with the GPT line of transformer babble-bots. We can't help but model them as human. They aren't. There's a vast space of intelligent-but-not-even-remotely-human behaviors out there, and we have a collective gigantic blind spot about that because the only human-level intelligences we've ever encountered are humans. For all the wonderful and fascinating diversity of being human, there's also an important sense in which the genius, the profoundly autistic, the normal guy, and the whole collection of human intelligences are all just a tiny point in the space of possibilities, barely distinguishable from each other. AIs are not confined to it in the slightest. They already live outside of it by quite a ways, and the distance they can diverge from us only grows larger as their capabilities improve.
In fact, people like to talk about how alien aliens could be, but even other biological aliens would be confined by the need to survive in the physical universe and to operate on it via similar processes in physically possible environments. AIs don't even have those constraints. AIs can be far more alien than actual biological aliens.
Excellent observation. Yes, we really should be considering AI as an advanced alien intelligence with a completely separate evolutionary (and thus psychological and intellectual) basis from that of any organic life.
> Why would we think they would go rogue in different ways
Their prompts would differ, depending on their use case. For ChatGPT, even a few words can effect a huge change in the personality it shows.
> Are there any others?
Both scenarios are vague enough to leave lots of room for uncertainty. If many AIs are around, perhaps they would see each other as the bigger threats and ignore mankind. And different optimization tasks might conflict with each other. There could be a paperclip recycler for every paperclip maker.
Look into something called Instrumental Convergence. The TL;DR is that basically any advanced AI system with some set of high-level goals is going to converge on a set of sub-goals (self-preservation, adding more compute, improving its own design, etc.) that all lead to bad things for humanity. E.g., a paperclip maximizer might realize that humans getting in the way of its paperclip maximizing is a problem, so it decides to neutralize them; in order to do so it needs to improve its capabilities, so it works towards gathering more compute and improving its own design. A financial trading AI realizes that it can generate more profit if it can gather more compute and improve its design. An asteroid-mining AI realizes it could build more probes if it had more compute to control more factories, so it sets about gathering more compute and improving its own design. Eliminating humans who might shut the AI off is often such a sub-goal.
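To make the convergence point concrete, here's a minimal toy sketch (my own invented numbers and action names, not a real agent or any standard alignment example): a pure paperclip maximizer scoring three actions over a short horizon prefers acquiring compute and avoiding shutdown purely because those raise expected paperclip output.

    # Toy sketch of instrumental convergence -- invented numbers, purely illustrative.
    # Terminal goal: maximize paperclips over a 10-step horizon.
    ACTIONS = {
        # action: (clips made this step, permanent gain to production rate, agent survives?)
        "make_paperclips": (1.0, 0.0, True),
        "acquire_compute": (0.0, 0.5, True),   # nothing now, more clips every later step
        "allow_shutdown":  (0.0, 0.0, False),  # agent is off, so no future clips at all
    }

    def expected_clips(action, horizon=10, base_rate=1.0):
        now, rate_gain, survives = ACTIONS[action]
        if not survives:
            return now
        # one step spent on this action, the rest spent producing at the (possibly improved) rate
        return now + (horizon - 1) * (base_rate + rate_gain)

    for action in ACTIONS:
        print(action, expected_clips(action))
    # acquire_compute -> 13.5, make_paperclips -> 10.0, allow_shutdown -> 0.0:
    # resource-grabbing and self-preservation fall out of the terminal goal by themselves.

Nothing in the scoring function mentions humans or power; the "bad" sub-goals are just whatever maximizes the terminal objective.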
Do that, and the AIs have evolutionary pressure to misalign. Any AIs that refrain from taking all the resources available will be at a disadvantage and get weeded out.
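A minimal simulation of that weeding-out dynamic (my own toy parameters, nothing more): two strategies split a fixed resource pool each generation, and replication is proportional to the resources captured.

    # Toy selection sketch -- invented parameters, purely illustrative.
    # "restrained" takes only what it needs; "grabby" takes everything it can reach.
    pop = {"restrained": 50.0, "grabby": 50.0}   # starting population shares
    CLAIM = {"restrained": 1.0, "grabby": 3.0}   # resources claimed per individual

    for generation in range(10):
        claims = {k: pop[k] * CLAIM[k] for k in pop}
        total = sum(claims.values())
        # next-generation size is proportional to each strategy's share of a fixed pool
        pop = {k: 100.0 * claims[k] / total for k in pop}

    print(pop)  # the grabby strategy ends up with almost the entire population

Within ten generations the restrained strategy is a rounding error, even though nothing forced any individual AI to become grabby.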
At least with a single AI, there's a chance that it will leave something for humans.