
It does not seem to be scanning phones, just content uploaded to iCloud, which is completely different. The article gives the impression that your device itself will be scanned.



>"Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes," Apple said.

It is scanning phones, but only files that are about to be uploaded to iCloud and haven't been yet. That scan happens on the phone itself.
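
For what it's worth, here's a very rough sketch of what that pre-upload gate amounts to. This is not Apple's implementation: the real system reportedly uses a perceptual hash (NeuralHash) and a blinded matching protocol rather than a plain hash lookup, and every name below is invented for illustration.

    import hashlib

    # On a real device this would be an opaque, blinded database shipped by Apple.
    known_hashes = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # sha256(b"test"), demo value only
    }

    def matches_known_hash(image_bytes: bytes) -> bool:
        # Runs locally, before the image is queued for iCloud upload.
        return hashlib.sha256(image_bytes).hexdigest() in known_hashes

    # The photo is uploaded either way; a match only attaches extra metadata,
    # and (per Apple) human review kicks in past a threshold of matches.
    print(matches_known_hash(b"test"))   # True for this demo value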


That actually makes it more okay with me. Apple can't have child pornography on their servers; that would be illegal. However, the fact that they are doing the scanning on the device could indicate that they don't have the ability to do the scans in iCloud. Presumably they can't even read the images once stored in iCloud, so they have to do it on the device.

I don't know if that's the reason, but seems like a reasonable guess.


Apple actually isn't legally liable for what users upload until it's reported to them. And they are capable of doing the scanning server-side, since iCloud doesn't use end-to-end encryption.


> since iCloud doesn't use end-to-end encryption

Interesting. They say they do, seemingly for many things, though not all[1]. Do you have more info?

[1] https://support.apple.com/en-us/HT202303


You are right that some specific features on iCloud do have end-to-end encryption (only those listed under "End-to-end encrypted data" on this page).

But the majority of users' sensitive data is not included in that set of features. For example, Photos (what's affected here), Drive, and Backup don't use it. Note that any encryption keys backed up using iCloud Backup are therefore effectively not end-to-end protected either.

Somewhat misleadingly, this page indicates those features use encryption both "in transit" and "at rest", but Apple controls the encryption keys in those cases, so they are actually not end-to-end encrypted.
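
To make the key-control point concrete, here's a toy sketch (not Apple's design; it uses the symmetric Fernet scheme purely for illustration) of the difference between provider-held keys and end-to-end encryption:

    from cryptography.fernet import Fernet

    # Case 1: "encrypted at rest" with a provider-held key -- the provider
    # can still decrypt, and therefore scan, the data.
    server_key = Fernet.generate_key()                 # stored server-side
    blob = Fernet(server_key).encrypt(b"photo bytes")
    print(Fernet(server_key).decrypt(blob))            # b'photo bytes'

    # Case 2: end-to-end -- the key never leaves the device, so the server
    # stores ciphertext it cannot read, and server-side scanning is off the table.
    device_key = Fernet.generate_key()                 # stays on the phone
    blob = Fernet(device_key).encrypt(b"photo bytes")
    # The server holds `blob` but not `device_key`; any scanning would have
    # to happen on the device before encryption.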

Here is a report indicating that scanning is already happening on the server side: https://nakedsecurity.sophos.com/2020/01/09/apples-scanning-...

Here is a report about how the FBI specifically pressured them against adding end-to-end encryption to iCloud backups: https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...


>>Before an image is stored onto iCloud Photos, the technology will search for matches of already known CSAM. Apple said that if a match is found a human reviewer will then assess and report the user to law enforcement.

>>Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes


Exactly, only "private" cloud data will be scanned instead, which is industry-standard practice for any self-respecting cloud provider anyway. It's a wonder Apple wasn't doing it already.

In any case, this will be automated, rather than some poor Tier-1 poring over iCloud Photos.

So only the guilty (and the false positives) would worry.


> So only the guilty (and the false positives) would worry.

If you truly want to "protect the children", you should have no issue with the police visiting and inspecting your house, and all of your neighbors' houses. Every few days. Unannounced, of course. And if you were to resist, you MUST be a pedophile who is actively abusing children in their basement.

You're not guilty, are you?


My point exactly. What is privacy in the face of CSAM, after all.

/s (if not already clear enough)


Innocent until proven guilty only holds if there are no false positives. What happens if I get arrested because of a false positive? What happens to my life when there will always be that doubt from everyone?

My social life would be crippled for good because of a false positive. Which can happen to anyone. Which means everyone should worry.
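
To put some (entirely made-up) numbers on that: even a tiny per-image false-positive rate adds up at iCloud scale. None of the figures below are Apple's.

    users = 1_000_000_000        # assumed iCloud Photos accounts
    photos_per_user = 2_000      # assumed library size
    fp_rate = 1e-9               # assumed per-image false-positive rate

    expected_false_flags = users * photos_per_user * fp_rate
    print(expected_false_flags)  # 2000.0 -- thousands of innocent images flagged

    # Each of those flags is attached to a real person who then has to
    # hope the human-review step works perfectly.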


This is false; the scanning occurs on the phone. Plus, as has already been discussed at length, the NCMEC database is loaded with false positives.

The "nothing to hide" argument tends to fall apart when the database being used against you is full of legal imagery (which often isn't borderline or pornographic at all -- some of the flagged images literally don't show people).

Slippery slope to non-CSAM material? That ship has sailed already. The databases are a mess. From day 1, it detects non-CSAM.


And what would make us believe that, as users, we should expect any difference between a company's servers and the same company's devices?

Would the device belong to us, or be covered by some misguided sense of ownership and privacy?

The garden was already walled, and the name on the gate wasn't the user's to begin with.

Now, where's my rusty yet trusty Nokia? Oh, wait, it doesn't have VoLTE...



