There are ancillary assumptions implicit in that: either recursive self-improvement and the decision to eliminate humanity happen so fast that no counter-intelligence uninterested in paperclips can be built, or the way recursive self-improvement was achieved is such a mystery that no such counter-intelligence can be built. There's also the assumption that recursive self-improvement doesn't itself involve, sequentially or simultaneously, coming to adopt a range of views on the value of humans relative to paperclips.
To be fair, assumptions about single superhuman intelligences make a little more sense if we're talking about a secret Skynet project carried out by a state's most advanced research labs and not a mundane little office supplier's program accidentally achieving the singularity after being tweaked for paperclip output.