This answer is dangerously naïve. Phone basebands and radios are full of vulnerabilities. If you don't want your phone to be a potential surveillance device against any minimally sophisticated adversary, you should either turn off the radio or, preferably, shut the phone off entirely and remove the battery.
Hypothesis B: it's not dangerously naïve, it's deliberate misinformation designed to coax technical but unskeptical people into lowering their guard against this class of threat.
If ((Assume it's stupidity) == (discount/ignore the risk)), then assuming it's stupidity is never the safer assumption, even if it's empirically more likely to be the correct assumption, no?
All boils down to an individual's threat model at the end of the day anyway, though.
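The asymmetric-cost argument above can be made concrete with a toy expected-value calculation. All the probabilities and costs below are invented purely for illustration:

```python
# Toy expected-cost model of Hanlon's razor under an asymmetric threat.
# All numbers are made up for illustration only.
p_malice = 0.05                      # malice is empirically unlikely...
cost_guard_down_vs_malice = 100.0    # ...but being wrong while unguarded is expensive
cost_of_staying_guarded = 1.0        # staying guarded has a small constant cost

# Assuming stupidity (and dropping your guard) only costs you when malice occurs.
expected_cost_assume_stupidity = p_malice * cost_guard_down_vs_malice
expected_cost_stay_guarded = cost_of_staying_guarded

print(expected_cost_assume_stupidity)  # 5.0
print(expected_cost_stay_guarded)      # 1.0
```

Under these made-up numbers the "safer" assumption is not the more probable one, which is exactly the point of the comment above.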
The practice of assessing whether a tempting evaluation of "malice" can instead cover evidence of structural faults is part of the effort towards seeing things as they are. And keeps you away from paranoia.
The notion that paranoia is the default emergent state of not assuming incompetence when potential malicious incentives can be easily articulated is just yet another ideological presupposition.
That part about paranoia was a half-joke. But no, it was not suggested (that was not a «notion») that paranoia would be a «default emergent state». It is though a temptation for many.
And while you will often be able to identify «potential malicious incentives», you have to weigh those possibilities together with the rest of the possibilities that complete the set.
Assessments must be complete.
--
Edit: oh, by the way, importantly: paranoia ("off-thought") means "delusionality", and in that sense the statement «And keeps you away from paranoia» was literal. "Be "cool" and exhaustive in assessment, and you will avoid getting stuck in alluring stories". The half joke was about the current use of the term (in the popular interpretation of the clinical state).
I think it's fair to say that money / power / sex will easily account for potential malicious incentives. The mindset that Hanlon's Razor fosters slows down the pattern recognition process that humans have built up throughout our entire existence. When building systems that must be resilient against corruption, the concept of zero trust serves well here.
But you have to always check. Yes, you do slow down «the pattern recognition process that humans have built up throughout»: because it is not reliable. It becomes (more) reliable through the exercise of doubt and assessment.
You have to always check in relationships with other people. When it comes to institutions / corporations / organizations, not so much and there are far fewer options to just chalk things up to incompetence. Again, in this context, we’re already talking about governments spying on their own citizens.
Malice is not falsifiable: anything could always just be another trick. So unless you want to end up believing everything is malice, it’s best to start with the benign explanations, until you’re sure they don’t fit.
"Stupidity" (term picked after Cipolla) is not a benign explanation. The entity stuck in the ice of Cocytus, at the bottom of hell, in Dante Alighieri's Commedia, is an apex of impotence.
But yes, it is an interesting proposal (perspective) to "resist the tempting explanation and pick the less attractive one first" - just like the grit in delayed gratification.
A little OT but strongly related: in France you can go to prison if you refuse to give your phone's password to the police (nothing like a "free country", I guess).
Is there a way to set up a phone so that typing a "special" password puts the phone in an alternate state with different apps and content, etc. (and possibly erase the regular content)?
- I presume that's considered willful destruction of evidence and interfering with an official investigation, and worse charges than whatever you were probably facing (unless you really did fuck up and committed something bad).
- Investigators are not going to be typing your password into the running original device, they're going to be trying it against an offline clone of the encrypted storage. All that will happen is the decryption won't succeed and they'll tell you that it was the incorrect password and continue holding you until you give it up.
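To illustrate why a wrong password against an offline clone just fails cleanly (no wipe, no duress logic), here is a minimal Python sketch of password-based key derivation and verification. The KDF parameters and verifier scheme are simplified stand-ins, not any real phone's implementation:

```python
import hashlib
import hmac

def derive_key(password: str, salt: bytes) -> bytes:
    # Real disk encryption uses a similar slow KDF (PBKDF2/Argon2/scrypt)
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def try_password(password: str, salt: bytes, stored_verifier: bytes) -> bool:
    # Against an offline clone, a wrong guess simply fails to verify.
    # No "special password" logic on the original device ever runs.
    return hmac.compare_digest(derive_key(password, salt), stored_verifier)
```

So a "wipe on duress password" feature never triggers in this scenario: the investigators' tooling only ever sees a key that does or doesn't decrypt the clone.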
> Investigators are not going to be typing your password into the running original device, they're going to be trying it against an offline clone of the encrypted storage
Oh no, absolutely not. We're not talking about "investigators" here, just random cops in a random precinct who have zero infrastructure, zero knowledge about anything, and aren't pursuing any serious "investigation".
They will absolutely type your password into the running device. They're doing this all the time.
On point 3, while I agree he shouldn't be required to give up his password, we should note that they did find child porn on other devices and that there is testimony from another witness of more porn on those hard drives. I'm just saying that this is a bit different than there being no sufficiently prosecutable evidence and the courts requiring it. In fact, that's why they claim his 5th Amendment rights aren't violated (though obviously the length of his sentence relies upon that). He could currently be prosecuted under the current evidence, and that matters.
Impossible for the average thief, not impossible for government or Fortune 500 actors. There are private contractors in business solely dedicated to developing and licensing enclave cracks, and popping them is routine procedure for most law enforcement departments, even smaller ones.
Fundamentally you can't have a key and the data inside the same physical box and expect encryption to remain intact. Enclaves are just security through obscurity on steroids.
Google Play is a rootkit. Google will fully cooperate with any government. If you use GrapheneOS on a pixel device your bootloader is closed source and the system-on-chip is largely undocumented and impossible to audit without serious resources. So yeah. Shit's fucked man.
> Google will fully cooperate with any government.
I'll remind you that on previous macOS versions (8 years ago?) researchers discovered that the Mac laptop's integrated webcam could be turned on without the green LED turning on. So basically: the webcam turning on without the user knowing it. And way weirder: some random company somehow had the rights to sign code using that "feature".
The story got pretty much killed.
I'm sure if some digging had been done, you'd have found some three letter agency behind the shell company enjoying the very strange right to turn the webcam on on MacOS devices without the LED turning on.
For everybody out there: rest assured though, Apple are the good guys and there's no way they have the ability to turn on the webcam of your Mac laptop today without you knowing about it. [1]
> French police should be able to spy on suspects by remotely activating the camera, microphone and GPS of their phones and other devices, lawmakers agreed late on Wednesday, July 5.
I would assume this is possible. If the gov wants to bad enough, I'd guess most OSes have a way to remotely control and observe. A state has resources to research 0days, bank them, and use them as needed. But probably not worth using unless it's for a high value target.
Consider that most mobile phone operators would require you to install additional software to be able to use their network, and that they would most likely cooperate with the authorities if asked by the justice department.
You can use any unlocked phone with any operator (assuming it can connect to EU cellular networks of course). Nothing in particular to do, just put in the SIM and it works.
I've never bought a phone from an operator, but I think it's also possible to switch operators without switching phones quite easily, no software update required.
If you give the government new extraordinary powers it will use them "only in necessary cases" but "necessary cases" has a way of creeping...
- Year 1: Terrorists.
- Year 2: Powerful gang leaders and drug kingpins
- Year 3: Run-of-the-mill murderers and kiddy diddlers
- Year 4: Deadbeat dads and unpaid parking tickets
- Year 5: Suspects who have encountered police and been released without being charged with a crime
Prime example: Civil forfeiture in the US. Was originally supposed to only be used on the worst of the worst drug cartel types, nowadays they'll use it to confiscate the life savings of some random black kid.
"That government governs best that governs least." You know, those limited government folks might be on to something...nah, they're all crypto grifters and crazy right wingers [/s]
I don't know what argument you're trying to make. Governments will research 0days because other governments are doing it, and it's best if you find them first and work out a defense. You know, in case you want to mess with someone's nuclear centrifuges and to avoid having yours screwed with.
Do you think that it should not be legal for the government to investigate a crime?
The system is made up of people, some of them may abuse their access. Other laws, in theory, will hold them to account.
> The system is made up of people, some of them may abuse their access. Other laws, in theory, will hold them to account.
In theory. The crux of the matter is whether there is significant abuse or not, and how well it is handled once uncovered. Based on history (PRISM, Five Eyes, etc.), I'm not at all optimistic.
No, it's not a bot ring. I assume you think that because I posted links to stackexchange quite a few times over the last few months. In reality, I just skim stackexchange.com as part of my feed, and when there's something I assume interests HN, I post it here.
I don't care much about karma. I posted this specific topic since I find it kind of hilarious that police should now lawfully be able to do something they are almost surely not able to do. And I enjoy discussions on such topics here on HN, because most of the time the viewpoints mentioned here are at least of the same quality as the answers on stackexchange.
It rose to the top because of the question, the link about France, and because new posts get higher weights. It is at 79pts and 59 comments currently and about to fall off the front page. But also on the front page is a post with 6pts and 1 comment (1 hr old), one with 17pts and 2 comments (2 hrs), one with 7pts and 2 comments (30 minutes), and a few more. Just a slow Saturday.
But it’s a good question. I want to know.
I am assuming this is not possible. The only thing I know of that is capable of doing so is Pegasus. But it's very expensive afaik.
It costs about 2-5M$ to buy or develop a new weaponized zero-click vulnerability that would allow you to simultaneously hack all 1,000,000,000 iPhones in use. So around a fifth to half a cent per iPhone.
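The per-device arithmetic, using the low-end figure from the comment above:

```python
exploit_cost_usd = 2_000_000       # low-end estimate for a weaponized zero-click exploit
iphones_in_use = 1_000_000_000     # rough count of iPhones in use

cents_per_device = exploit_cost_usd * 100 / iphones_in_use
print(cents_per_device)  # 0.2
```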
I was under the impression that most modern (past few years) SoCs like Exynos, Qualcomm, Apple silicon all had IOMMU support. Sometimes it’s misconfigured to be too permissive but that’s getting better.
Why is IOMMU thrown around so casually in this forum as if it's a silver bullet, like explosive reactive armor? The baseband is likely running something like a 30-year-old giant main loop with "// don't remove this line, build breaks" comments everywhere, not Rust microservices on a formally verified microkernel.
The application processor might be running a better-secured Unix/Linux and might be able to protect itself from peripheral CPUs, but that's not the point: a phone has always been a pair (at minimum) of computers, traditionally referred to as the Application Processor (AP) and the Baseband Processor (BP), of which only the slightly faster one is exposed to the user, and it's unclear what is going on inside the other one or how to handle it. That's the problem.
To me the real question, technical feasibility aside, is this:
Would the cell phone manufacturers (Apple, Samsung, Motorola, Nokia, Xiaomi, etc.) say no when faced with the possibility of losing market share in France, because of a law pushed through under the cover of security? Many a liberty has slipped under that blanket cover called security.
I think they will put in this feature if it's not already there.
I think the way it probably works is that if the US gov. wants to root someone's phone anywhere in the world, they just do it via some API given to them by Apple/Google directly.
If a foreign country wants to do it to someone on foreign soil (like the saudis to bezos did [1]) they exploit some vulnerability brought on the free market (like the whatsapp/video message exploit chain the saudis used, or exploits like the NSO zero-click iMessage exploit [2]).
If a country wants to spy on its own citizens who protest the government, it can just use the local phone carrier's capabilities to silently ping, update firmware or change system settings remotely; those are intentionally part of the mobile standards (including intentionally weak encryption) so governments can spy on their people.
Not to undermine the plausibility of your suggestions, but I like to wonder how answers like this read to someone with direct experience and knowledge.
I know for a fact that my electronics (including my smartphone) are being monitored by my government (including this post).
That probably doesn't surprise others. What isn't as known is that the government also intrudes into chats with other people on social media.
They don't just monitor, but actively interfere.
Edit: By the way, Nokias and other dumbphones (without physical off-switches -- the PinePhone has them, but good luck getting one) can also have their mic and GPS remotely activated. The partial solution is to get one with a removable battery and remove the battery whenever not in use.
iPhones can be hacked into through IMEI if you connect them, but are useful, encrypted offline-only PDAs if you don't install any app.
Also, if your electronics are being spied on by the government to this degree, chances are you are also being physically monitored.
This is always a dumb take I see from so many people. No government is monitoring all electronics, and there is zero evidence that is the case. Sure, companies collect a lot of user data, and that data could easily be handed over on request. Or maybe if you are a really big target they might use a zero-day against you, but they are never going to have all electronic devices connected to a botnet. You and many other people can test it right now: just run a basic traffic analyzer on your phone or PC.
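For a quick self-check along these lines on a Linux box, here is a hedged sketch: parse `/proc/net/tcp` (Linux-only, IPv4-only) to see which remote endpoints the machine is actually talking to. This is far cruder than a real traffic analyzer like Wireshark, but it needs no extra tooling:

```python
def parse_endpoint(hex_addr: str) -> tuple[str, int]:
    """Decode an address:port pair as found in /proc/net/tcp (little-endian hex IPv4)."""
    ip_hex, port_hex = hex_addr.split(":")
    # IPv4 octets are stored in reverse byte order
    octets = [str(int(ip_hex[i:i + 2], 16)) for i in (6, 4, 2, 0)]
    return ".".join(octets), int(port_hex, 16)

def established_connections(path: str = "/proc/net/tcp") -> list[tuple[str, int]]:
    """Return remote (ip, port) pairs for ESTABLISHED sockets (state 01)."""
    remotes = []
    with open(path) as f:
        next(f)  # skip header line
        for line in f:
            fields = line.split()
            if fields[3] == "01":  # TCP_ESTABLISHED
                remotes.append(parse_endpoint(fields[2]))
    return remotes
```

Anything phoning home constantly would show up here (or in a proper packet capture) as persistent connections to unexplained endpoints.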
IANAL nor French, but reading the article, is this just saying that French police can get a warrant, issued by a judge, that allows them to tap a suspect's device (for no longer than 6 months)? I just want to make sure I got the facts right.
As far as I understand, yes. And since there is no explicit clause prohibiting an extension after 6 months, I think it is safe to assume that it can be extended by another 6 months provided the suspicion persists (i.e. a judge can be convinced).
I'm not French myself, so take it with a grain of salt.
We already know for a fact that they can surveil virtually all smart devices including appliances and televisions due to the Vault 7 leaks, and this would tend to be corroborated by the National Geospatial-Intelligence Agency telling Congress that they have a high resolution 3d map of the entire globe's events at any given time.
The only one that mentions televisions is Weeping Angel (cool name) which attacks Samsung F Series Smart Televisions. Likely they can indeed target other devices but I'm not sure I'd go as far as saying that Vault 7 shows that they can target "virtually all smart devices".
Or am I missing something? Can anyone provide more concrete evidence?
I probably just got confused, but thank you for linking to the information about Vault 7 directly so that anyone can simply appraise for themselves whether or not I seem confused.
That's what I love about HN and Reddit, and similar websites: All the helpful counterpoint, especially when someone criticizes the intelligence community. Thank you so much!
Any baseband chip since 3G ships with proprietary drivers which have backdoors. I tried to build an open phone while working for one of the major telcos, and could never get around the driver issue.
BUT sophisticated attackers like US or Israeli governments (and I assume Russian or Chinese but I don’t have direct experience with these) don’t need these backdoors, getting anywhere near your phone is enough to root it to allow installation of spyware, according to my CSO who worked in naval intelligence. There are simply too many vulnerabilities for there to be a hardened device in the consumer space. Some are better than others (Apple) but as Bruce Schneier says, if you are worried about this sort of thing you really have to be totally disconnected from the internet and exchange encrypted physical media.
Depends on where you put the line between "open phone" and "baseband blackbox". Drivers are not an issue for phones like Librem 5 or PinePhone since they're using a separate modem module connected to the main SoC via USB and communicating over AT and QMI interfaces to which there are perfectly open drivers. The modem itself remains a vulnerable proprietary blackbox, but it does not have any access to your OS and you can cut it out from power while keeping the rest of the phone intact.
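For context on what "communicating over AT" means in practice: the OS talks to the modem over a serial character device using plain-text commands. A toy sketch of the framing and response parsing follows; the device path and specific commands vary per modem, and this code does not open any real device:

```python
def frame_at_command(cmd: str) -> bytes:
    """AT commands are ASCII lines terminated with CR+LF, e.g. sent to /dev/ttyUSB2."""
    return (cmd + "\r\n").encode("ascii")

def parse_at_response(raw: bytes) -> tuple[list[str], bool]:
    """Split a modem reply into payload lines and a final OK/ERROR status flag."""
    lines = [l.strip() for l in raw.decode("ascii", "replace").splitlines() if l.strip()]
    ok = bool(lines) and lines[-1] == "OK"
    return (lines[:-1] if lines else []), ok
```

The point of the Librem 5 / PinePhone design is that the only channel the modem gets is this kind of narrow, well-understood serial/QMI interface, rather than shared memory with the application processor.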
Open basebands are not something we're anywhere close to having though, for many reasons.
To add to this discussion, I must note what I don't see many mentioning here.
One doesn't need to do any shady stuff with baseband or stockpile on zero day vulns.
The current mobile ecosystem is such that any supported device (receiving updates and such) sends its unique identifier to the manufacturer before receiving OTA updates. And devices by default check for updates on a regular basis.
Basically the manufacturer can always target and track individual devices, and provision individualised signed updates. Not just at the country level but targeted to a specific IMEI.
Coming to more concrete examples, Google is known to do AB testing with their Pixel line of devices, setting custom profiles for some users.
Xiaomi had previously shown the capability to actively disable devices that move outside of legal sale regions.
Samsung uses such capabilities for enterprise devices in its Enterprise/Knox platform. And consumer devices can be thought of as enterprise devices under the manufacturer's domain.
---
So the government simply needs to send these companies warrants to target, bug and track specific devices or registered customers.
Online platforms are already subjected to data requests from law enforcement which they must comply with (at least those with a supporting warrant).
Some try to recuse themselves from such compelled intrusion of their customers by employing end to end encryption (e2ee).
With this provision and manufacturer cooperation, they could get direct, full control of the ends (personal devices), obviating the need to "break" encryption.
Why deal with a dizzying cloud of services in a wide range of jurisdictions when you can have full access to citizens' devices with the cooperation of a handful of manufacturers?
In summary, this is not just feasible; the elements for an organised remote-control system are already present in the current smartphone ecosystem, in the form of signed updates by manufacturers that can target devices by IMEI. One just needs this law to wade through the legality issues.
A solution to avoid such sweeping surveillance capability would be to convince manufacturers not to receive identifiable data before provisioning updates, and to keep a public ledger of officially signed image hashes, like the certificate transparency logs for domain certificates.
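The proposed mitigation could look roughly like this: the device hashes the OTA image it received and refuses to install it unless that hash appears in the public ledger, so a manufacturer can't ship one victim a special build without it becoming publicly visible. A minimal sketch, where the ledger is just a set of hex digests; a real design would use a verifiable append-only log as in certificate transparency:

```python
import hashlib

def image_digest(update_image: bytes) -> str:
    """SHA-256 digest of an OTA image, as it would appear in the public ledger."""
    return hashlib.sha256(update_image).hexdigest()

def update_is_public(update_image: bytes, public_ledger: set[str]) -> bool:
    """Accept an OTA image only if its hash was published for everyone to see."""
    return image_digest(update_image) in public_ledger
```

A targeted per-IMEI build would necessarily have a different hash than the public release, so either it appears in the ledger (and is visible to auditors) or the device rejects it.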
This question has been in my head recently. How feasible is it really? The answer in the link isn't comprehensive. Is it really out of the question for manufacturers to ship a particular version of a device and software for a target country? Nation states have a history of backdooring or weakening particular technologies.
Are basebands not sandboxed at all? There's no conceivable reason that my baseband should be able to access my camera, microphone, or the contents of my display in normal production use, as that's all filtered through the CPU typically. Why not have an MMU that limits the baseband to DMA in a specific chunk of memory and reduce the attack surface dramatically? It's not just effective against nation states. With such a protection, 0-click OTA attacks targeting the baseband would have a much smaller blast radius.
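The "limit the baseband to DMA in a specific chunk of memory" idea is essentially a bounds check enforced in hardware by the IOMMU. Here is a toy software model of that check; the window base and size are invented numbers, and real IOMMU programming is per-SoC and far more involved:

```python
WINDOW_BASE = 0x8000_0000        # hypothetical base of the baseband's allowed DMA window
WINDOW_SIZE = 16 * 1024 * 1024   # hypothetical 16 MiB window

def dma_allowed(addr: int, length: int) -> bool:
    """A DMA transfer is allowed only if it lies entirely inside the window."""
    if length <= 0:
        return False
    end = addr + length
    return WINDOW_BASE <= addr and end <= WINDOW_BASE + WINDOW_SIZE
```

With such a fence in place, a compromised baseband could still corrupt its own window, but not scrape the framebuffer, microphone buffers, or kernel memory directly.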
Historically the baseband was the primary processor with full control and the CPU was subordinate. This is because the baseband code was developed by the chip manufacturer so they gave themselves full control over the system to make it easier for themselves.
This may no longer be the case right now as the primacy of the CPU has become increasingly obvious, but it should still be the default assumption since having the baseband in control lowers costs to the chip manufacturer which is their lifeblood.
Exactly, and we're talking about governments, not competing companies. "You wanna sell phones or build infrastructure here? Fine, here's a truckload of appliances to put in the middle of each pipe; no questions please". There are many ways a government can ruin businesses even without swatting their offices or raising public anger; they just need to apply bureaucratic pressure where it is needed, so that for example a permit or a tax installment or reduction that would otherwise take 6 months will require, say, 5 years or more.
I only have my own experience with this, so take it for what it's worth. It requires a phone that is off, and either without a battery or in a faraday (foil) shielded bag. Be in an area your government doesn't want regular people to be (an unacknowledged military base), then turn on the phone.
I've done this many times, so I know how long it takes to power on my iPhone and my Android to a "usable" state.
I can’t take my phone inside where I work and they have mobile phone detectors which set off alarms if you bring one near any door or the inner facility fence. I put my phones inside a foil cooler bag with ice packs so they won’t overheat inside the car.
My guess is that there was a cell site simulator and it was set up to take over any phone which comes into the area. I got the same result with my Android and iPhone. The phone boots, there's a weird hang where all indicators appear but I cannot interact with the phone, then after at least one minute I can use the phone.
I think this is why governments don’t like China developed 5G technology. It doesn’t have their default back doors.
Absence of evidence is not evidence of absence, especially when searching for evidence left behind by competent adversaries (e.g. NSA, GCHQ, etc) who have a strong motivation to remain undetected.
But it is also not evidence of the thing for which there is absence of evidence.
EDIT:
> especially when searching for evidence left behind by competent adversaries (e.g. NSA, GCHQ, etc) who have a strong motivation to remain undetected.
No, there is no “especially”; absence of evidence means no basis for any affirmative belief, period, equally for any fact proposition. Arguing for “especially... ” is exactly arguing for a case where absence of evidence is evidence for the thing for which there is an absence of evidence.
In risk management you shouldn't ignore known unknowns like that; you should either adapt your threat model or accept the risk, not simply consider the risk nonexistent until proven otherwise.
How could we know for sure? Basebands are 100% proprietary, we have no idea how they operate and even less of an idea of how their operation might be subverted.
This is why I'm an open source advocate. It's not that open source automatically makes software/firmware trustworthy, it's that closed source empirically guarantees the software/firmware can never be deemed trustworthy.
And yet there have been plenty of long standing security issues in Linux…
Why would you think that a bunch of people volunteering their time would be more motivated to look for security issues and even those that are found, how many would be disclosed responsibly instead of being sold to places like Pegasus?
>And yet there have been plenty of long standing security issues in Linux…
• See the first half of my second sentence.
>Why would you think that a bunch of people volunteering their time would be more motivated to look for security issues
• So they're not harmed by the vulnerabilities. I'm on a big tech red team. I routinely look for (and report) vulns in open source software that I use - for my own selfish benefit.
>and even those that are found, how many would be disclosed responsibly instead of being sold to places like Pegasus?
• Not all of them, that's a fair point. But I'd rather have the ability to look for them in source than need to look for them in assembly.
• Keep in mind that the alternative you're proposing (that proprietary code can be more trustworthy than open source code) is pretty much immediately undermined by the fact that the entities who produce proprietary code are known to actively cooperate and collaborate with the adversary - look no further than PRISM for an example. Microsoft, for instance, didn't reluctantly accept - they were the first ones on board and had fully integrated years before the second service provider to join (yahoo, iirc).
• If you want to start a leaderboard for "most prolific distributor of vulnerable code", let's see how the Linux project stacks up against Adobe and Microsoft. I wouldn't even need to research that one to place a financial bet against "team proprietary".
> Why would you think that a bunch of people volunteering their time would be more motivated to look for security issues
I don't. I trust that bad actors are less motivated to insert malicious code, and I trust that transparency enforces good practices. All sufficiently complex code has unintended behavior, what matters to me is how you stop third parties from using my device beyond my control.
> and even those that are found, how many would be disclosed responsibly instead of being sold to places like Pegasus?
What do you think everyone else does with their no-click exploits? Send them to Santa?
FOSS doesn't mean "volunteers." FOSS means that the source is viewable, legally usable, and that changes can be made and redistributed without permission from the author(s).
Volunteers can make closed source software, massive corporations and governments can make FOSS.
Seems like some people really believe that FOSS is basically perfect when it comes to security. "It's FOSS so people would find any serious vulnerabilities". Heartbleed, anyone?
As an aside, I wonder if there's a term for this kind of "nobody says...but some do" thing. Everyone sees their own reality, blah blah. I trust that you're speaking in good faith, but that doesn't account for everyone, and good faith doesn't magically resolve arguments.