Hacker News

Then why do the "CSAM" perceptual hashes live on the device and the checks themselves run on the device? Those hashes could be anything. Your phone is turning into a snitch against you, and the targeted content might be CCP Winnie the Pooh memes or content the people in charge do not like.

We are not getting this wrong. Apple is taking an egregious step to satisfy the CCP and FBI.

Future US politicians could easily be blackmailed over the non-illegal content on their phones. That jeopardizes our democracy.

The only reason this was announced yesterday is because it was leaked on Twitter and to the press. Apple is in damage control mode.

This isn't about protecting children. It's about control.

Stop defending Apple.




This boils down to two separate arguments against Apple: 1) what Apple has already implemented, and 2) what Apple might implement in the future. It's fine to be worried about the second one, but it's wrong to conflate the two.


>It's fine to be worried about the second one, but it's wrong to conflate the two.

Agreed, and just to be clear, I'm worried about that too. It just appears that we (myself and the objectors) have different lines. If Apple were to scan devices in the US and prevent them from sharing memes over iMessage, that would cross a line for me and I'd jump ship. But preventing CSAM stuff from getting on their servers seems fine to me.


> "preventing CSAM stuff from getting on their servers seems fine to me"

You're either naive or plugging your ears if you think this is the objective.

Let me repeat: this is a tool for the CCP, the FBI, intelligence agencies, and authoritarian regimes.


I think the situation is clear when we think of this development from a threat modelling perspective.

Consider a back door, subdivided into code back doors and data back doors, placed either on-device or in the cloud (four possibilities).

Scanning for CP is available to Apple in the cloud (in most countries). Scanning for CP is available to other countries in the cloud (e.g., Chinese users' iCloud is run by an on-shore Chinese provider). Scanning for CP was not available to Apple on-device (until now).

This is where the threat model comes in: intelligence agencies would like a back door (ideally both code and data).

This development creates an on-device data back door, because scanning for CP is done via a neural network plus a database of hashes supplied by a third party.

If the intelligence service poisons the hash database alone, the attack won't work, because the neural network is built to recognize imagery of human bodies and the like, not arbitrary content. So the attack works for other sexual content, but not for political memes. It is a scope-limited back door.

For it to be a general back door, the intelligence agency would need both the neural network (part of Apple's on-device code) and the hash database to be modified. That requires a new code back door (which Apple has resisted) as well as a data back door, both on-device.
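The code/data split above can be sketched in a few lines. This is a toy illustration, not Apple's actual NeuralHash or matching protocol; the hash width, threshold, and every name here are invented for the sake of the argument:

```python
def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two perceptual hash values."""
    return bin(a ^ b).count("1")

def matches_database(image_hash: int, hash_db: set[int], threshold: int = 2) -> bool:
    """Data side: the database is supplied (and could be swapped) by a third party."""
    return any(hamming_distance(image_hash, h) <= threshold for h in hash_db)

# Code side: the hash function itself ships in the OS binary. Poisoning only
# the database can't make the system flag content that the fixed hash function
# doesn't map near a database entry -- that would also require a code change.
hash_db = {0b1111_0000, 0b0000_1111}           # toy 8-bit hash database
print(matches_database(0b1111_0001, hash_db))  # near-duplicate image -> True
print(matches_database(0b0100_1010, hash_db))  # unrelated image -> False
```

The point of the sketch: swapping `hash_db` only changes *which* near-duplicates are flagged; it cannot widen what the fixed hash function is capable of matching.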

Currently Apple has resisted:

- code back doors on-device
- data back doors on-device (until now)

and Apple has allowed:

- data back doors in the cloud (in certain countries)
- code back doors in the cloud (in certain countries)

In reality, the option not to place your photos in iCloud really means "don't allow any data back door." iCloud is itself a data back door, since it can be scanned (either by Apple or by an on-shore provider).

My analysis is that on-device scanning does not improve Apple's ability to identify CP, since Apple can already scan in iCloud. But if my analysis is incorrect, I'd be genuinely interested if anyone can correct me on this point.


iCloud photos aren’t currently end-to-end encrypted, but this system provides a clear path to doing that, while staving off accusations that E2E encryption of iCloud would let people host CP there with impunity.

When the device uploads an image it’s also required to upload a cryptographic blob derived from the CSAM database which can then be used by iCloud to identify photos that might match.

As built at the moment, your phone only “snitches” on you when it uploads a photo to iCloud. No uploads, no snitching.
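The "no uploads, no snitching" gating can be sketched in a few lines. This is a toy model of the claim, not Apple's real protocol (the real design uses private set intersection and threshold secret sharing; all names here are invented):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SafetyVoucher:
    """Stand-in for the cryptographic blob uploaded alongside each photo."""
    payload: bytes  # in the real design, readable only after enough matches

def make_voucher(photo_hash: int, blinded_db: frozenset[int]) -> SafetyVoucher:
    # Toy stand-in for the PSI step: record (blindedly) whether the hash matched.
    return SafetyVoucher(payload=bytes([photo_hash in blinded_db]))

def upload_photo(photo_hash: int, icloud_enabled: bool,
                 blinded_db: frozenset[int]) -> Optional[SafetyVoucher]:
    if not icloud_enabled:
        return None  # no upload -> no voucher is ever produced ("no snitching")
    return make_voucher(photo_hash, blinded_db)

db = frozenset({0xBEEF})
print(upload_photo(0xBEEF, icloud_enabled=False, blinded_db=db))  # None
print(upload_photo(0xBEEF, icloud_enabled=True, blinded_db=db))   # a voucher
```

The design choice being modelled: the match check lives only on the upload path, so photos that never leave the device are never evaluated at all.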

We know that every other cloud provider scans uploads for CSAM, they just do it server side because their systems aren’t E2E.

This doesn’t change the fact that having such a scanning capability built into iOS is scary, or can be misused. But in its original conception, it’s not unreasonable for Apple to say that your device must provide a cryptographic attestation that data uploaded isn’t CP.

I think Apple is in a very hard place here. They’re almost certainly under significant pressure to prove their systems can’t be abused for storing or distributing CP, and coming out and saying they’ll do nothing to prevent CP is suicide. But equally the alternative is a horrific violation of privacy.

Unfortunately, all this just points to a larger societal issue, where CP has been weaponised and the authorities are more interested in preventing the distribution of CP than its creation, presumably because one of those is much easier to solve, and creates better headlines, than the other.


>iCloud photos are encrypted, so scanning has to happen on device.

Is this true? I feel like Apple benefits from the confusion about "Encrypted at rest" + "Encrypted in transit" and "E2E Encrypted". It's my understanding that Apple could scan the photos in iCloud, since they have the decryption keys, but they choose not to, as a compromise.

I'm keying into this because this document: https://support.apple.com/en-us/HT202303 doesn't show Photos as part of the category of data that "Apple doesn't have access to." That's mentioned only in the context of the E2E stuff.


You’re right, currently iCloud photos aren’t E2E. I’ve updated my comment.


> Future US politicians could easily be blackmailed by the non-illegal content on their phones. This is a jeopardy to our democracy.

US politicians should not be using normie clouds full stop. This is a risk and always has been.


Do you know which of your kids is going to be a politician? Better keep them all off the internet to keep their future career safe.

This is why it's important to stop now.



