
> What are the odds that NSO has like 20 other zero-days in their arsenal each set ready to deploy the day the current vulnerabilities are discovered and patched?

I feel that's the safe money, certainly. One exploit dev can churn out multiple weaponized 0-days in a given year, and surely they have more than one dev working on such things, so you're talking about a stockpile of likely dozens of vulns. Some might collide with publicly reported vulns, so they lose a few, but knock one down and I have to assume they have others staged.

> Apple is rich enough to increase their bounties large enough to attract them to right side instead?

That's a good question. I think at NSO's price point the answer is probably "no", but I don't know. At best Apple could be competitive, but bug bounty work is far riskier - you might spend a long time without getting a payout, either due to some bad luck, collisions with already reported vulns, or a vendor just being a dick (pretty sure Apple have been dicks).

> what should anyone who even vaguely suspects state sponsored spying do?

Probably have more than one phone, for starters. Use authenticated protocols, not SMS/MMS. It's insane that anyone can just send data to your phone unprompted. I'd probably disable cell service altogether unless I'm actively making an outbound call to a known contact.



The only way Apple could make them report the vulnerability is if the bounty was not far from the amount of profit that NSO is making with their software.


The comment is not suggesting that Apple make reporting the vulnerability attractive to NSO as an organization, but rather to whatever hackers NSO purchases vulnerabilities from - or to the individuals NSO employs.

In such a case, Apple "only" needs to make the bounty high enough to significantly exceed the sale price of the vuln, or the salary of the aforementioned employees.
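
To make that concrete, here's a toy expected-value sketch of the trade-off. Every number below is hypothetical; the point is only that duplicate and rejection risk discount the headline bounty, which is why it has to significantly exceed the sticker price rather than merely match it:

    /* Toy expected-value comparison: reporting to a vendor bounty vs.
     * selling to a broker. All figures are made up for illustration. */
    #include <stdio.h>

    int main(void) {
        double bounty   = 2e6; /* headline vendor bounty (USD) */
        double broker   = 5e5; /* assumed broker price (USD) */
        double p_dupe   = 0.3; /* chance the report collides with a known bug */
        double p_reject = 0.2; /* chance the vendor lowballs or rejects it */

        /* The bounty pays only if the report is neither a dupe nor rejected;
         * a broker typically pays on delivery regardless. */
        double ev_bounty = bounty * (1.0 - p_dupe) * (1.0 - p_reject);

        printf("expected bounty payout: $%.0f\n", ev_bounty); /* prints 1120000 */
        printf("broker payout:          $%.0f\n", broker);    /* prints 500000 */
        return 0;
    }

With those made-up numbers the bounty still wins, but you can see how quickly the risk factors eat into it - which is the earlier point about bounty work being far riskier.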


For someone who has already sold a vuln to a criminal org like NSO, I wonder whether they would switch to selling to a clean buyer like Apple. Perhaps that gives them a higher chance of being investigated, or maybe not?


Yeah you're right. For some reason I was thinking only NSO had these zero-days, which is not possible.


> The only way Apple could make them report the vulnerability is if the bounty was not far from the amount of profit that NSO is making with their software.

At which point it becomes cheaper to just buy a law forcing disclosure of those 0-days to the vendor?


> Use authenticated protocols, not SMS/MMS. It's insane that anyone can just send data to your phone unprompted. I'd probably disable cell service altogether unless I'm actively making an outbound call to a known contact

I was just listening to Darknet Diaries episode 100 this past weekend and they mentioned an NSO-crafted zero-click vulnerability in Whatsapp that Citizen Lab had detected being exploited.

Though I suppose Whatsapp (anyone with my phone number can message me) wouldn’t qualify as an authenticated protocol.


In WhatsApp, at least, I think it's possible to block unsolicited messages, or to hide a message until you accept it.


Why is it on Apple to defend everyone against hackers sponsored by another country to begin with? The governments should be providing any resources necessary to defend here...


Because Apple makes the phones, silly. The iPhone is a 100% proprietary device; we know zilch about what code is running on it. Why should anyone be responsible besides the manufacturer?

Maybe the government should care about the Obamaphone, but not anything beyond that.


If an Israeli hit squad kills someone in a McDonald's, we don't get on McDonald's case for not providing a safe and secure place for their customers. Not putting up a sign when the floor is wet is on them; assassins are on the government. It's not clear to me why things being different in the software world is so obvious that not seeing why is silly.

Regarding the Obamaphone, in the US there is a government agency responsible for such things: the NSA. It's tasked with securing US information infrastructure on top of its better-known role in signals intelligence. It just happens to favour the latter over the former, so it's not about to share its stockpile of iPhone zero-days with Apple.


If McDonald's advertised their hamburgers as especially safe from outside influence and you are assassinated by someone poisoning your McDonald's hamburger, people will probably be upset at McDonald's.


NSO Group seems to know quite a bit about what code is running on it.


Close to 100% but not quite. It has some open source components.


Because that is what they advertised they would do [1].

“Apple makes the most secure mobile devices on the market. Lockdown Mode is a groundbreaking capability that reflects our unwavering commitment to protecting users from even the rarest, most sophisticated attacks,” said Ivan Krstić, Apple’s head of Security Engineering and Architecture.

I mean, we know nobody on their team actually believes Lockdown Mode can protect against state-funded actors with even a tiny $10M budget, since their Lockdown Mode total-bypass bug bounty is only $2M.

But they did say it in their marketing, so they should be held to it, even if we know for a fact that they are totally incapable of doing so. This is not a question of money; it is a question of ability, and we know they do not have it.

[1] https://www.apple.com/newsroom/2022/07/apple-expands-commitm...


Wait, the reward for completely bypassing the most hardcore security measures on the most important device from the most valuable company in the world, worth over 3 trillion, is a mere 2 million?

That's not an honest proposition by its very definition; just look at the asymmetry of those numbers. A serious offer would add at least two zeroes to that.


It is actually reasonably fair; it only costs around $1-2M to find one. You expect Apple to pay $100M for $1M of work?

The real question is why is Apple allowed to lie about providing meaningful protection against state actors when they themselves think it costs only $2M to break. In no universe is 1/5 the cost of a tank even a speed bump for a state actor.

The other question is why their security is so terrible. The short answer is that they demonstrably know nothing about security, since this is the most they have been able to do after decades of work, billions of dollars, and repeated promises of meaningful security. When somebody spends billions of dollars and decades failing to achieve even 1/10th of what they promised, you should treat any new statements as extraordinary claims and demand extraordinary evidence.


> The real question is why is Apple allowed to lie about providing meaningful protection against state actors

It's not like anyone has been doing any better. Mobile phones are embedded devices targeted at everyday consumers, basically toys. They've never been engineered for anything like meaningful security against even mildly sophisticated attacks. The industry simply doesn't care about this; e.g. most phone SoCs are still not protected against misbehavior by any of the included devices, each of which is running some unknown proprietary firmware. That's just par for the course in the embedded ecosystem.


Why does the quality of any other product matter here?

Apple marketing claims it provides meaningful protection against state actors. Apple engineering says it does not. Even if nobody can do it, even if Apple is closer than anybody else, that does not excuse lying to people who are betting their lives on Apple’s representations that it works.

Apple cannot protect against state actors. Apple knows that. If you are at risk, the only safe thing to do is avoid Apple (and all other smartphones). Apple knows that too. They lie and insinuate that an iPhone is fit for this task so they can sell a few more iPhones, caring not a single bit for the lives at risk. That is grossly unethical. Yet it is par for the course in "cybersecurity". That does not make it acceptable; it just means everything is rotten.


> Apple makes the most secure mobile devices on the market.

Well, they're not wrong on that one point. As it turns out, "most secure" is a pretty low bar. We'll see how Purism's Librem 5 fares once it reaches genuine daily-driver status and it too becomes a target for this class of attacks.


Being open source doesn't mean being immune to vulnerabilities (and Purism's stuff will likely never be 100% open source due to regulatory complications with basebands).

Niche software also often fares very poorly in terms of security, precisely because few people have ever tried to exploit it, so the easy bugs are still waiting to be found.


PureOS is decades behind in security compared to Android or iOS.


PureOS with Flatpak, Wayland and such makes it close.


Not really. Even with modern technologies, the Linux desktop technology stack is very, very far behind when it comes to security.

The Linux kernel itself is a very weak foundation security-wise; the only way Android and ChromeOS get away with it is by using a very small feature set and restricting everything else as much as possible with seccomp, SELinux and heavy sandboxing.

The Linux desktop userland doesn't have meaningful hardening features compared to other platforms (even Windows is ahead, sadly). For example, practically all distros use glibc's memory allocator, which has both poor performance and poor security [1], and their toolchain is based on gcc, with no support for modern compiler security features such as CFI (with the sole exception of Chimera Linux).

Not to mention the permission model is completely outdated, like in that xkcd comic. Flatpak only mitigates this partially, because the Flatpak sandbox is very weak. The people working on Flatpak are doing their best, but from reading some GitHub issues, it's clear they are badly overworked and not security experts. The person responsible for Flatpak's seccomp sandbox has said it isn't even his main responsibility, that he doesn't have much knowledge about seccomp, and that he is learning along the way [2].

The Flatpak seccomp filter is based on a denylist rather than an allowlist, and many dangerous syscalls can't be blocked because applications rely on them (e.g. Firefox needs ptrace for the crash reporter). You also have to be very careful and use Flatseal (which is not officially supported) to deny permissions such as /home filesystem access, because Flatpak lets apps override their own permissions by design [3]. And dangerous kernel components like io_uring are exposed [4], while Google disables them on their systems because of their exploitation potential.
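
For what it's worth, here is a minimal sketch (my own, not Flatpak's code) of what an allowlist-style seccomp filter looks like with libseccomp. The syscall set is a toy; a real desktop app needs hundreds of syscalls, which is presumably part of why Flatpak went with a denylist:

    /* Allowlist-style seccomp filter via libseccomp: everything not
     * explicitly allowed kills the process. Build with: cc demo.c -lseccomp */
    #include <seccomp.h>
    #include <unistd.h>

    int main(void) {
        /* Default action: kill the process on any syscall not listed below. */
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL_PROCESS);
        if (!ctx) return 1;

        /* Allow only the handful of syscalls this toy program still needs. */
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);

        if (seccomp_load(ctx) < 0) { seccomp_release(ctx); return 1; }
        seccomp_release(ctx); /* frees userspace state; filter stays loaded */

        write(STDOUT_FILENO, "still alive\n", 12);
        /* Any other syscall from here on, e.g. openat() or ptrace(),
         * terminates the process. */
        return 0;
    }

A denylist inverts this: seccomp_init(SCMP_ACT_ALLOW) plus kill rules for specific known-dangerous syscalls, so anything not on the list (io_uring_setup and friends) sails through.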

Here is a more detailed article examining the lack of security of Linux phones in case you're interested: https://madaidans-insecurities.github.io/linux-phones.html

If you want a FOSS-based secure phone, GrapheneOS is the best option.

[1] Check this comment by the GrapheneOS founder for some technical details and how it compares to hardened allocators such as Android's Scudo or GrapheneOS's hardened_malloc: https://github.com/NixOS/nixpkgs/issues/90147#issuecomment-6...

[2] https://github.com/flatpak/flatpak/issues/4466#issuecomment-...

[3] https://github.com/flatpak/flatpak/issues/3637

[4] https://github.com/flatpak/flatpak/issues/5447


Apple is welcome to seek aid from the US Government, I imagine they would be happy to assist.


That's the question, though. The NSA is known to have strongly conflicting objectives. On the one hand, they're supposed to secure US government devices and sometimes assist US companies in securing devices. On the other hand, they're supposed to surveil foreign citizens using such devices and the devices of US citizens who communicate with foreign citizens, as well as assisting other US agencies in doing that.

In a nutshell, whether they will improve or subvert your security depends on factors outside of your control. But unless they have found ways of surveilling foreigners without compromising the security of any Apple device, it's almost certain that they won't disclose their own 0-day exploits to Apple.


The US government has already "assisted" plenty, and every assist has been a setback: see Snowden's revelations, encryption standard weaknesses, backdoored devices, etc.


Obviously not what I'm talking about.


No, but that is exactly why Apple might be a little bit reluctant to go for assistance. It's a bit like going to the neighborhood burglar to ask him how to secure your house. "I'll be right over to take a good look at your property" is not the answer you think it is.


That's an absurdly narrow view of the USG security positioning. USG pours billions of dollars into defensive infrastructure through a number of methods.


Pouring money into it is one thing; asking them to look into your secret kitchen is quite another.



