I do not follow the “which means”. There are many obvious and hidden variables that will modulate a one-versus-many AGI outcome. Bostrom has written a lot on this topic. Couldn’t a true AGI want the companionship of peers, like we do?
Of course, it is possible that such an AI, on the way to making paperclips, will realise it wants companionship, maybe even human companionship.
The argument around AI safety is not that it's impossible for a friendly AI to emerge. It's that there are far more ways to build an AI that doesn't care about human life and wipes us out without even thinking about it than ways to build a friendly one, and we have no idea which kind we're building or how to tell them apart before they're built.
As for the "will there be several AIs fighting each other" hypothesis, that depends on how rapid the exponential take-off is once a self-evolving AI emerges. But a very plausible scenario is that whichever one starts taking off first ends up so far ahead of the others that it is effectively the only game in town and does whatever it wants.