Is there anything that prevents Pegasus from spreading by itself or must it be installed via a targeted attack? And is there a way of scanning for it to see if a phone is infected?
There is nothing technical that prevents Pegasus from spreading by itself; some of the reportedly involved vulnerabilities could be "wormable". There are practical reasons that prevent it, though: for malware like Pegasus, the operator has an interest in avoiding uncontrolled spread, since the tool relies on certain vulnerabilities staying undiscovered and unpatched. Uncontrolled spread makes it much more likely to be discovered and analyzed, "killing the goose that lays the golden eggs".
So at least for now we'd expect all Pegasus installations to be a result of targeted attacks. On the other hand, if the tool leaks and becomes readily available to multiple actors, then the incentives change and one of them might decide to make a worm that infects everyone in the world who's not patched.
I suspect that it was not because it was hurtful or destructive, but because I chose to use the national currency of Israel as the denomination (instead of dollars, which would actually be disrespectful), and someone who skipped Social Studies thought it was "anti-semitic."

Sort of like the paediatrician in the UK who was attacked because some idiot thought the sign outside her office meant she was a paedophile.
There is no self propagation code built into Pegasus.
It would be relatively trivial to write such: simply have it send the exploit via iMessage to all of a target's contacts, rinse and repeat.
This would be counterproductive though - the whole selling point of Pegasus is targeted surveillance, and such exploits are very costly - uncontrolled spreading would make it detected much faster, burning a valuable resource.
If such exploits were cheap, it’s plausible you could justify writing a variant that automatically attacks a target's entire address book to mine their social graph, but then you have the problem of analysing a shitload of probably worthless data…
If some hacker gets a clearly infectious Pegasus link they should make it spread through messages to everyone. Bricking everyone’s iPhone will probably make all the governments and Apple sit up and do some real damage to these actors.
Many of the Pegasus attacks are zero-click, so no link is needed. All they need to do is send you a message and you are compromised.
They presumably also configure their command and control to only persist if the device belongs to one of the designated targets, and to wipe all traces if it does not, so even forwarding the attack payload would probably not do anything. You would need to determine that you had been compromised and then reverse engineer the exploit so you could replace the command payload with an irreversible bricking operation to do what you suggest.
At that point you might as well spend the $5M-$10M to develop the entire attack yourself. If you are a competitor to Apple spending $10M to completely destroy the $2.7T Apple is literal pocket change; too small to even show up on your financials.
> If you are a competitor to Apple spending $10M to completely destroy the $2.7T Apple is literal pocket change; too small to even show up on your financials.
You're comparing two almost completely unrelated numbers here. That's not what enterprise value means; it doesn't mean much of anything here, really.
It works the usual way: you craft a payload that, when processed by buggy code, executes itself. If the buggy code happens to be the SMS packet parser, an image decoder, text rendering, a blocklist check, or any of the two million other things that run to show you an incoming SMS (or better yet, a flash message, or something not visible to the user at all), then you don't have to click on anything.

I mean, if the bug is in the browser, you have to visit a page for the payload to reach you. But it's a phone: a device for other people to contact you.
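As a toy illustration of the bug class described above (the message format and function names here are invented for this sketch, not taken from any real exploit): a length-prefixed parser that trusts an attacker-supplied length field. In a memory-unsafe language the "unsafe" version becomes an out-of-bounds read or write the moment a message lies about its own size; the fix is simply validating the field against the actual buffer.

```python
import struct

def parse_message_unsafe(buf: bytes) -> bytes:
    # Trusts the attacker-controlled length field: the classic parser bug.
    (claimed_len,) = struct.unpack_from(">I", buf, 0)
    # In C this would read claimed_len bytes past the 4-byte header no
    # matter how big buf actually is; Python just silently truncates.
    return buf[4:4 + claimed_len]

def parse_message_safe(buf: bytes) -> bytes:
    (claimed_len,) = struct.unpack_from(">I", buf, 0)
    # Validate the claimed length against what we actually received.
    if claimed_len > len(buf) - 4:
        raise ValueError("length field exceeds actual payload")
    return buf[4:4 + claimed_len]

# A well-formed message: 5-byte payload, length field says 5.
ok = struct.pack(">I", 5) + b"hello"
# A malformed message: 5-byte payload, length field claims 1000.
evil = struct.pack(">I", 1000) + b"hello"

print(parse_message_safe(ok))  # b'hello'
# parse_message_safe(evil) raises ValueError; the unsafe parser "succeeds".
```

The point is that every one of those "two million things" that touch an incoming message is a candidate for exactly this kind of mistake, and only one of them has to get it wrong.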
> make all the governments and Apple sit up and do some real damage to these actors.
International weapons dealing doesn't work that way. Point to any manufacturer of weapons and there's a bunch of people that don't like them. But the countries that benefit from those weapons don't agree.
Seems that the NSO business model is based on ultra exclusivity and a very small number of clients. Technically, Pegasus could probably retransmit itself to infect other devices, but that doesn't fit their business model, so I doubt NSO would do this regularly.

Nation states (like KSA) will likely pay very large sums of money to use this against their perceived enemies abroad. A small and exclusive clientele is how a company like this stays out of the limelight.
From what I was able to read previously, it has no ability to spread by itself and has to be installed by a targeted attack. There is also a tool from Amnesty International that can detect it (or was able to): https://github.com/mvt-project/mvt
It is a race though, so past info may no longer be valid. However, I doubt it will ever be able to spread by itself, since it uses very expensive zero days to infect and they will be quickly fixed after detection.
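For reference, MVT (the Amnesty International tool linked above) is a command-line toolkit. A typical scan of a local iOS backup looks roughly like this; the backup path is a placeholder and flags may change between releases, so check the project's docs for the current invocation:

```shell
# Install the Mobile Verification Toolkit
pip install mvt

# Fetch published indicators of compromise (includes Amnesty's
# Pegasus IOCs), then scan a decrypted local iOS backup against them.
mvt-ios download-iocs
mvt-ios check-backup --output ./results ~/Desktop/my-iphone-backup/
```

Note that as the commenter says, this is a race: MVT detects known indicators, so a clean result only means no *known* traces were found.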
AFAIK, phone numbers are the entry point; they're the easiest and quickest way to target someone. Without one, it's more involved to isolate the target. So don't activate any number on your phone, enable Lockdown Mode, and take the usual security precautions; in theory that should be enough to protect you. Ultimately, don't use a “smart” phone.
Phone numbers are not the targets themselves. The baseband is the big fear vector, since it's a black box, but in reality it's the apps themselves that are being targeted, with your phone number acting as the primary key.
Since the type of exploit Pegasus uses has recently been seen in the wild, and Apple has had to release more than one security update to address this attack vector, I'm inclined to believe that not just targeted individuals should enable Lockdown Mode on their Apple devices. Although Apple doesn't recommend it for most users, it could be useful if there is a major malware outbreak across the iPhone ecosystem.