In my opinion their goal was to get stuff to a state where they could encrypt everything on iCloud so that even they can't access it.
To counter the "think of the children" argument governments use to justify surveillance, Apple tried scanning stuff on-device, but the internet threw a collective hissy fit of intentionally misunderstanding the feature and it was quickly scrapped.
> In my opinion their goal was to get stuff to a state where they could encrypt everything on iCloud so that even they can't access it.
They basically did. If you turn on Advanced Data Protection, you get all of the encryption benefits, sans scanning. The interesting thing is that even with ADP on, binary file hashes remain unencrypted on iCloud, which would theoretically allow someone to ask for those hashes in a legal request. But a plain file hash is obviously not as useful for CSAM detection as, say, a PhotoDNA hash. See: https://support.apple.com/en-us/HT202303
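For the curious, here's a toy Python comparison of the two kinds of hashes (nothing below resembles the real PhotoDNA or NeuralHash algorithms; the "difference hash" is just a stand-in to show the idea): flip one byte of a file and its cryptographic hash changes completely, while a perceptual hash of a slightly tweaked image barely moves, which is exactly what you want when people re-save or resize photos.

    import hashlib

    def file_hash(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def toy_dhash(pixels: list[list[int]]) -> int:
        # Difference hash: one bit per horizontally adjacent pixel pair ("is the left pixel brighter?").
        bits = 0
        for row in pixels:
            for a, b in zip(row, row[1:]):
                bits = (bits << 1) | (1 if a > b else 0)
        return bits

    # A fake 8x9 grayscale "image" and a copy with one pixel nudged by one brightness level.
    image = [[(x * 31 + y * 17) % 256 for x in range(9)] for y in range(8)]
    tweaked = [row[:] for row in image]
    tweaked[0][0] = (tweaked[0][0] + 1) % 256

    flat = lambda img: bytes(sum(img, []))
    print(file_hash(flat(image)) == file_hash(flat(tweaked)))      # False: digests are completely different
    print(bin(toy_dhash(image) ^ toy_dhash(tweaked)).count("1"))   # 0: the perceptual hash barely notices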
For anyone else wondering, to enable it go to iOS Settings -> iCloud and you'll see "Advanced Data Protection." Toggle it on to create a recovery key, save the key somewhere safe, and enter it when prompted. Then return to the iCloud Settings page, toggle it one more time, and enter your recovery key again to confirm.
> but the internet threw a collective hissy fit of intentionally misunderstanding the feature
how was it misunderstood? your device would scan your photos and notify apple or whoever if something evil was found. wasn't that what they were trying to do?
Your device would scan your photo at the point of upload and could then encrypt it before sending it to the cloud. That meant Apple's cloud servers didn't need to be able to scan it to comply with US Govt "recommendations" for cloud providers.
Whereas right now, with all the other cloud providers, the photo just gets sent as-is and scanned on their servers.
With Apple's approach, the cloud servers don't get to look at every single one of your photos the way cloud vendors' servers do today; scanning happens within the privacy of your own phone, and only signatures of known kiddy porn are flagged.
Apple came up with a way to make things way more private, but the concept of your own device working "against" you if you happen to be a pedophile was too much of a leap.
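To make that flow concrete, here's a heavily simplified Python sketch. Every name in it is invented for illustration, and the real design used NeuralHash plus private set intersection, so the device never even learned whether a photo matched; this only shows the "hash on device, encrypt before upload" shape of the thing.

    import hashlib
    import secrets
    from dataclasses import dataclass

    @dataclass
    class SafetyVoucher:
        photo_id: str
        payload: bytes  # encrypted match info + low-res "visual derivative"; opaque below the match threshold

    def toy_perceptual_hash(photo: bytes) -> bytes:
        # Placeholder only: a real perceptual hash survives resizing/re-encoding, which SHA-256 does not.
        return hashlib.sha256(photo).digest()

    def xor_encrypt(data: bytes, key: bytes) -> bytes:
        # Placeholder cipher so the sketch runs; the real thing is proper authenticated encryption.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def upload_photo(photo: bytes, photo_id: str, device_key: bytes):
        h = toy_perceptual_hash(photo)                   # scanning happens on the phone...
        voucher = SafetyVoucher(photo_id, xor_encrypt(h, device_key))
        ciphertext = xor_encrypt(photo, device_key)      # ...and the photo is encrypted before it leaves,
        return ciphertext, voucher                       # so the server only ever sees these two blobs

    ciphertext, voucher = upload_photo(b"fake jpeg bytes", "IMG_0001", secrets.token_bytes(32))

The point being: what lands on Apple's servers is already encrypted, and the only extra thing they get is the voucher, which is useless until the match threshold described further down is hit.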
Your device would've scanned your photos ONLY if you were uploading them to Apple's cloud service anyway.
And it wouldn't have notified Apple of "something evil", just of matches against specifically known, human-verified, real child abuse photos. And not even that: it would have needed multiple matches against those verified abuse photos before anything was flagged, so that a real human could then review a "visual derivative" of the matched photos.
Only if those multiple derivatives were deemed to be actual child pornography would the authorities have been called.
But nope. Now they just scan ALL your data in the cloud when the authorities demand it. And that's somehow better according to the internet in a way I still can't understand.
That was explained in the original design. Each possible match would count as, let’s call it, a “point”.
Once you reached a certain threshold (the number was not given) it would trigger an alert in a system at Apple.
Each report contained a bit of data that wasn’t enough to identify someone. Once enough “points” from one account accumulated, they’d have enough to identify who you were and which files matched, and presumably to recover the full decryption key.
I believe the plan was that the suspect files would be decrypted and compared against the real CSAM signatures. If a close match was found, it would be sent to NCMEC for confirmation and law enforcement action.
The threshold was there to prevent false positives from the perceptual hashes, like the Google AI scanning incident. Reportedly, nobody has just one or two such pictures; people with CSAM tend to have a lot, so they’d show up “bright red”. They probably didn’t want to reveal the number so people wouldn’t try to keep fewer pictures than that on their phone to avoid detection.
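The “points” mechanism maps pretty directly onto threshold secret sharing. Here's a minimal Shamir-style sketch in Python (illustration only; the threshold of 5 is made up, and Apple's real construction wrapped this in private set intersection and padded it with synthetic vouchers): each matching photo effectively contributes one share of an account-level key, and the key, and therefore the flagged content, only becomes recoverable once the number of shares reaches the threshold.

    import random

    PRIME = 2**127 - 1  # a Mersenne prime, big enough for a toy secret

    def make_shares(secret: int, threshold: int, count: int):
        # Random polynomial of degree threshold-1 whose constant term is the secret.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, count + 1)]

    def recover(shares):
        # Lagrange interpolation at x = 0 recovers the constant term (the secret).
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    THRESHOLD = 5  # hypothetical; as noted above, the real number wasn't published
    account_key = random.randrange(PRIME)
    shares = make_shares(account_key, THRESHOLD, count=30)   # one share per matching "point"

    print(recover(shares[:THRESHOLD]) == account_key)        # True: at the threshold the key unlocks
    print(recover(shares[:THRESHOLD - 1]) == account_key)    # False: below it you learn nothing useful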
> What do you think they were going to do once the scanning turned up a hit? Access the photos? Well that negates the first statement.
In the whitepaper, the cryptography required that Apple have multiple different PhotoDNA-style matches (NeuralHash was the name of the on-device one, I believe) before they could unwrap the user's payload containing these suspected CSAM photos and then send them to NCMEC.
"reduced-quality copy" was the wording in the whitepaper IIRC.
So the resolution most likely would've been the same, but with the detail blurred so that the poor human agent wouldn't have to see actual CSAM, just enough to make a call on whether or not it was a likely match.
No. A small thumbnail “visual derivative” is included alongside the neural hash, and it is unlocked (and only for matches) once the number of matches exceeds a threshold.
This was all outlined in the first two pages of the white paper, and explained in more detail further down.
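Concretely, the gate works like this in a scheme of the kind sketched a few comments up (again hypothetical naming, and a placeholder cipher instead of real authenticated encryption): the thumbnail travels inside each voucher encrypted under an account-level key that only becomes reconstructable past the match threshold.

    import secrets

    def xor_cipher(data: bytes, key: bytes) -> bytes:
        # Placeholder cipher so the sketch runs; stands in for real authenticated encryption.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    account_key = secrets.token_bytes(32)            # only recoverable once enough matches exist (see the Shamir sketch)
    thumbnail = b"low-res visual derivative bytes"   # the reduced-quality copy, never the original photo
    voucher_payload = xor_cipher(thumbnail, account_key)  # stored alongside the upload

    # Below the threshold nobody holds account_key, so the payload is just noise to Apple.
    # At or above it, the key falls out of the shares and only then can a human review the thumbnail:
    assert xor_cipher(voucher_payload, account_key) == thumbnail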