
This comment pretty much dissects/explains NSO in the best terms I've seen on HN before.

"Pegasus" is not one hacking entity like most articles make it out to be. Its

1) A bunch of services that download data, given root access to a phone

2) A bank of 0-days, and we don't know how deep it goes.

For all we know, there are times when "Pegasus" doesn't work for hours, days, or weeks, until the 0-day is rotated. We do know from some leaks that they have a mix of zero-click and one-click exploits, and that they support several different phone OSes.

Their hacking abilities are definitely overstated. For all we know, to keep customer support smooth and continuous, they could be buying 100% of their 0-days and not finding any themselves. A zero-click 0-day for iPhones is worth about $2,000,000 [1], and a company with contracts like NSO's can afford a lot of those. IMO the media portraying them as super-hackers is pure hype. It's a bunch of crooked business people who figured out how to extract money out of countries.

[1] https://arstechnica.com/information-technology/2019/01/zerod...



I factually agree with what you're saying, but I don't think it really changes the practical outcome of the situation: a private organization is available for-hire to arbitrarily root and snoop on fully patched iOS devices at state-level actor scale. Whether they get the exploits in-house or from elsewhere, the outcome is basically the same.

Whether there's "Pegasus" attribution or not, the reality of the contemporary internet is: if you're targeted hard enough, you're probably screwed. (...but you're probably not actually targeted that hard, so follow good security practices.)

That being said, I agree with others that it's probably a good technical, PR, and long-term "marketability to regimes" approach for Apple to just double down and spend millions instead of thousands on competing with the black market to buy 0-days.


> a private organization is available for-hire to arbitrarily root and snoop on fully patched iOS devices at state-level actor scale. If they get the exploits from in-house or elsewhere, the outcome is basically the same.

This is a distinction without a difference. The major Great Powers are all cyber powers. The only difference is that NSO services the non-Great Powers too, with the implicit backing of the Israeli state apparatus. The media has made NSO out to be a cyber power broker for the powers that be, but all of the UN Security Council permanent members have their own defence contractors and cybersecurity staff. Talented engineers are everywhere.


An extension of the link [1] above: the price NSO pays for an Android zero-click exploit is higher than the price they pay for an iPhone zero-click exploit. This implies they do indeed have a catalog of iOS exploits stashed.


I've heard a few people theorize about why Android exploits seem to pay more. The theory is that Android is 1) very fragmented, with each manufacturer having different versions and modifications, and 2) updates are much slower or non-existent.

To get the top payout, you'd need to come up with something that works across all manufacturers' versions of Android, and probably across 4 or 5 major versions. You might be able to find an exploit for all Androids running version X, but if that version only has 10% of the Android market, you wouldn't get the full payout.
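That coverage logic is easy to sketch as toy arithmetic (the version shares and payout figure below are invented for illustration, not real market data):

```python
# Toy model (hypothetical numbers): an exploit's market value scales
# roughly with the share of in-the-wild devices it can actually hit.
FULL_PAYOUT = 2_500_000  # hypothetical full price for universal coverage

# Hypothetical Android version distribution (fractions of active devices).
version_share = {"13": 0.22, "12": 0.27, "11": 0.24, "10": 0.17, "9": 0.10}

def estimated_payout(covered_versions):
    """Scale the full payout by the device share the exploit covers."""
    coverage = sum(version_share[v] for v in covered_versions)
    return round(FULL_PAYOUT * coverage)

# An exploit that only works on one version covers a small slice...
print(estimated_payout(["12"]))         # 675000
# ...while one spanning every listed version earns the full price.
print(estimated_payout(version_share))  # 2500000
```

Under this model, fragmentation directly discounts a single-version exploit, which is one way to explain the higher sticker price for a truly universal Android zero-click.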

iOS users tend to heavily be on the latest version, or at most one version behind. As an example, most recent iOS exploits in the wild seem to use iMessage. On iOS, you can focus your efforts on one thing. On Android? The reachable surface per exploit is much smaller, because each manufacturer is going to be shipping their own messenger app, for example.


Looks like there's finally a benefit to Android OS being such a clusterfuck, with so many versions being currently active on a significant portion of devices. Not updating quickly increases the number of versions floating around.


The link is about Zerodium, not NSO. Also, $2.5M vs $2M is not a meaningful difference, and neither presents a meaningful road bump to competent attackers. But your point that it indicates a robust stash is fair. They 100% do.


Note that the article is from 2019. iOS 14 made significant changes to the way messages are processed by adding sandboxing and isolation. Here's a post by Project Zero evaluating the improvements: https://googleprojectzero.blogspot.com/2021/01/a-look-at-ime...


It doesn't really imply anything because iPhone's global market share is less than 30% with customers concentrated in North America and China, both danger zones for NSO operations. Android exploits might also take far longer to patch across all vendors and users might take longer to update compared to iOS.

It's fairly probable that iPhone exploits are just less valuable to a shady intel operation that sells mostly to small authoritarian regimes.


Your comment is not considering that these governments are more likely to target politicians and journalists, who are more likely to use iPhones regardless of where they are located. I don't know if the implication that iPhone is less secure holds, but it's likely.


> Your comment is not considering that these governments are more likely to target politicians and journalists, who are more likely to use iPhones regardless of where they are located.

Are you sure that's true? In my experience, governments often choose Android because they prefer the platform's organization-wide device management options over iOS. Many dissidents/journalists choose Android because it's easily rootable, giving them more privacy and control. (I have a very small sample of the latter, however.)


You could use Apple’s lockdown mode. It’s unmatched on Android.

Google and Samsung warn you about enabling root.

Samsung:

Is rooting your smartphone a security risk?

Rooting disables some of the built-in security features of the operating system, and those security features are part of what keeps the operating system safe and your data secure from exposure or corruption. Since today’s smartphones operate in an environment filled with threats from attackers, buggy or malicious applications, as well as occasional accidental missteps by trusted users, anything that reduces the internal controls in the Android operating system represents a higher risk.

https://insights.samsung.com/2022/07/28/what-are-the-securit...

Google:

Security risks with modified (rooted) Android versions

Google provides device security protections to people around the world using the Android operating system. If you installed a modified (rooted) version of Android on your device, you lose some of the security protection provided by Google.

Important: If your account is enrolled in the Advanced Protection Program, don’t use that account on a device with a modified version of Android. Modified versions of Android can undermine Advanced Protection’s increased security features.

https://support.google.com/accounts/answer/9211246?hl=en


> It’s unmatched on Android.

I have great respect for the iOS security model. Seriously a marvel and best-in-class accomplishment.

But this is flatly not true. If you really care, you have GrapheneOS et al., and even without that, stock Android has plenty of well-tested features that let you lock the device down further than the defaults. And rooting as a pathway to undermine security is a well-understood aspect of the threat model.


It doesn't matter whether NSO are genius hackers or their freelancers are. They are still outsmarting Apple all day long.


When significant functionality and backwards compatibility are required and money is limited, I'll happily work for the red team; when bricking the device is a valid solution, I'll happily work for the blue team.


As long as this attitude persists, IT will continue to be viewed by people outside of it with the same degree of respect as your average mercenary.

Consider that in a just world you'd be in jail. Does the money still look that good?


The US carefully developed its cybersecurity plan during the Word macro era. Let's send the FBI to foreign countries in the hopes of arresting teenagers who learned how to cut and paste. Genius.

Unfortunately, it forgets how to do this if the country is Israel instead of the Philippines.

Is there some solution in that for making sure 100% of possible red team members are more aligned with the profit interests of the US's strategic private companies than with the US's strategic partners running illegal conspiracies?

I'm baffled as to what utopia of a profession has global tool collaboration and consequences, but somehow manages to deal with 230 groups of nationalists, thousands of sects, and embargoes on any one group, while paying people across all of these to provide a regulatory framework for safe, human-benefiting tools in their category with no edge cases. If such a regulatory framework existed, maybe it would shut down these mobile phone companies over behavioral harm?


Personal responsibility is where this starts. Not with the US, not with Israel or the Philippines. It starts with us, the technical people that do these things.


That makes no sense. A whole bunch of Americans won't do anything in this area because the US legal system is whimsical. But some nationalist professor was going to agree to make Stuxnet, and maybe they were right; we certainly aren't all going to get to reach them to debate it. So what is achieved?

Would Apple being totally incompetent at security, and fighting exploits from NK prison labor, eventually with about the same failure rate, be a better world?

Export controls on thoughts didn't work, so total disarmament of thoughts won't work either. Prioritize security, and cut out some of the entertainment and useless features through regulation, because brain candy always wins in an unregulated market.


I'm not in the US. I don't work for Apple. And yet I can guarantee you that my work - assuming I'd be that capable in the first place - is never used to reduce the security of various platforms through 'research' that leads to the existence of more zero-days. You won't find me on anybody's red team.

So personal responsibility is where it starts and there isn't a fig leaf large enough that would allow you to pretend otherwise.


If these software updates are embargoed to some countries then your discovery is a tool of cyberwar under a fig leaf.


While I believe selling zero-days to NSO Group is significantly worse than working for Google or building surveillance-capitalism software - we are mercenaries. Something like 60% of software work is vehemently anti-middle-class. Almost all of us have either contributed to some spying apparatus (analytics platforms), built some automation that replaced several humans, or developed something that contributed to the environmental destruction of our planet.

Let's be clear though, I'm not saying tech is bad. We'd all be doing manual labor on a farm without it. I do think our demographic (including myself) has completely set aside any consideration of our impact in the name of optimization or a fat paycheck.


It is a natural forcing function pushing Apple to improve.

Evolution playing before our eyes.


I'm curious how selling a multi-million dollar 0-day to a shady company actually works in practice. Like, how does the seller demonstrate that their exploit works and isn't already in ShadyCo's catalog without giving up how it works (at which point ShadyCo could just not pay them and recreate it)?


The same way everything works: trust


Apparently an escrow arrangement is used by some of these companies. You disclose vague details in exchange for an offer, and once you agree, they escrow the money and then you release the artifacts.


And the related concept, reputation. If NSO had a reputation for screwing 0-day finders, their supply would dry up


Surely NSO doesn't say "hand over your exploit and if we don't already have it we'll give you millions - you can trust us".

And I would argue most trade is not based on trust, except for maybe trust in the legal system and repercussions if someone tries to screw you over.


Not sure about NSO specifically, but this actually is how it works. If they screw someone over, others won't sell them their 0-days. Except they don't pay the $2MM up front; they pay out based on a pre-agreed lifespan of the exploit.

First you provide a description of the exploit, then you get an estimate, then you have to provide the exploit for vetting, and the payout has multiple cliffs, similar to equity vesting in a company.

This way you can't sell them an exploit for $2MM and go play Robin Hood by reporting it to the vendor once the check clears.
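A toy sketch of the cliff structure described above (the schedule and amounts are invented for illustration, not actual NSO or broker terms):

```python
# Hypothetical payout schedule: the buyer escrows the agreed price and
# releases tranches only while the exploit keeps working, so the seller
# can't cash out and then burn the bug by reporting it to the vendor.
TOTAL_PRICE = 2_000_000
# (months the exploit has survived, cumulative fraction released)
CLIFFS = [(0, 0.25), (3, 0.50), (6, 0.75), (12, 1.00)]

def amount_released(months_alive):
    """Cumulative payout after the exploit survived `months_alive` months."""
    fraction = 0.0
    for cliff_month, cumulative in CLIFFS:
        if months_alive >= cliff_month:
            fraction = cumulative
    return int(TOTAL_PRICE * fraction)

print(amount_released(0))   # 500000  - paid on delivery/vetting
print(amount_released(7))   # 1500000 - after surviving six months
print(amount_released(12))  # 2000000 - full price at the final cliff
```

The key property is that most of the money stays contingent on the exploit's continued lifespan, aligning the seller's incentives with the buyer's.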


Thank you, this is the kind of insight I was looking for.


I just don't understand how they are allowed to do this. I thought we had laws against intruding on systems, hacking, and wiretapping. How can a business do this in the clear and not get stopped by some law enforcement?


You can legally hack and wiretap your own phone, and build tools to do that. It's also legal to sell those tools.

The business is not hacking and wiretapping the victims' phones itself. It is selling tools to governments, who either have the legal right to do the hacking under their own laws, or can safely flout those laws.


> You can legally hack and wiretap your own phone, and build tools to do that. It's also legal to sell those tools.

Just because you have a right to do something to your own device doesn't mean you have a right to sell it. It is not a huge stretch of the imagination to see 0-days being classified as munitions and encumbered by ITAR. I've seen open source drone guidance software taken down for similar reservations, and that was far from a weaponized instance.


Ah, reminds me of the days RSA was restricted for export [0]. Coming from Germany, with FinFisher [1] having actively circumvented export restrictions, it also appears those only help a bit if $$$ is involved.

[0]: https://en.m.wikipedia.org/wiki/Export_of_cryptography_from_...

[1]: https://www.ccc.de/de/updates/2022/etappensieg-finfisher-ist... (it appears they actually went bankrupt due to this in the long run)


There are still plenty of laws that haven't caught up with the digital age of the 21st century. Some laws apply explicitly only to hardware or to physical or physically connected devices, and you can't extrapolate to make the law apply from a software standpoint. In some cases even "wireless" hardware such as a cell phone is legally different from a landline: interfering with emergency calls is a felony in California if it's a landline, but a misdemeanor if it's a cell phone. That may be the basis for the drone thing, but I'm just guessing.


The drone thing was because the algorithms implemented in the software could be used for missile guidance IIRC.


They have powerful western customers in the cops & spooks departments.


> a bank of 0-days, we don't know how deep.

I think Apple should randomize data structure ordering, change flags and logic in the memory allocator, and choose a different set of compiler optimizations with every release.

That way, most exploits and bugs would at least require an expert to put in substantial effort to update them for a new OS release, and many exploits wouldn't be possible at all on a new release. If, for example, the exploit allows a stack buffer to overrun by 1 byte, then it depends on what data follows the buffer; if the compiler randomizes that, then in the next release it might become non-exploitable.
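As a toy illustration of that last point (field names and the shuffling scheme are invented; real layout randomization would happen in the compiler, e.g. via a plugin, not in Python):

```python
import random

# Toy simulation of per-release structure-layout randomization: if the
# field that sits right after an overflowable buffer changes with every
# build, a 1-byte overflow that corrupts something useful in one release
# may corrupt something harmless (or just crash) in the next.
FIELDS = ["refcount", "vtable_ptr", "flags", "length", "padding"]

def layout_for_release(release_seed):
    """Derive a deterministic but release-specific field order."""
    order = FIELDS[:]
    random.Random(release_seed).shuffle(order)
    return ["buffer"] + order  # buffer first; a 1-byte overrun hits order[0]

def overflow_target(release_seed):
    """Which field a 1-byte overflow past `buffer` would corrupt."""
    return layout_for_release(release_seed)[1]

# The corrupted neighbour differs across "releases", so an exploit tuned
# to clobber, say, vtable_ptr only works on some fraction of builds.
print({overflow_target(seed) for seed in range(20)})
```

With only five fields the attacker still wins a decent fraction of the time, which matches the sibling comments' point that this is a speed bump rather than a wall.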


This is generally only a minor annoyance unless you really know what you're doing.


Defense in depth. It raises the bar, making exploitation more expensive and therefore less likely.


That’s not really how this works; the cost is marginal.


Is it marginal only for best-in-the-world experts and a serious hurdle for everyone else? If so that's still worthwhile as it means the attacker must hire (or be) an expensive expert.


The bar for iOS exploitation is already one that only admits best-in-the-world experts.


Also makes the value of Pegasus increase.


Yet it increases their costs to get / develop exploits.


And if it really became anything more, wouldn't they just buy a popular app through a shell company and get early access to the betas?


The betas are freely available to download.


My understanding is that most of these zero-days are exploited at runtime, so the above wouldn't help at all. The most recent one took advantage of Apple Wallet taking first dibs on a (malicious) image and loading in the payload. Changing data structures/flags/compiler optimizations wouldn't have made a difference.


The process of going from [malicious image which gets loaded by Apple Wallet] to [shellcode running] depends hugely on compiler flags, memory layout, etc.


I could very well imagine that NSO charges per device exploited, and charges more for zero-click exploits used.

Each exploited phone raises the chance of the exploit being found and burned, so they really have to incentivize their customers to use them sparingly.


Which in turn means we need more surveillance projects to catch them - and that will increase their costs of doing business.


What makes you think they aren’t super hackers?



