One system optionally checks iMessages (sent or received) against one or more "nudity" neural networks if the user is identified as a child in the iCloud family sharing group. If it triggers, the child is shown a warning and can opt out, or choose to go ahead with sending or viewing, with the caveat that their parent(s) will be notified (optionally, I presume). Nothing about this event is sent off the device. Apple employees aren't receiving or evaluating the photos. No one other than the identified parent(s) is party to this happening.
Neural networks are far from perfect, and invariably parents and children are going to have a laugh when it triggers on a picture of something random.
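To make the flow concrete, here is a rough sketch of how I read the description: everything runs on the device, and the parents are the only extra party. All of the names and the score threshold below are my own placeholders, not Apple's API.

    from dataclasses import dataclass

    NUDITY_THRESHOLD = 0.9  # hypothetical cutoff for the on-device classifier

    @dataclass
    class Account:
        is_child: bool                       # child in the iCloud family sharing group
        parental_notification_enabled: bool

    def nudity_score(image_bytes: bytes) -> float:
        """Placeholder for the on-device neural network; returns a score in [0, 1]."""
        return 0.95  # pretend the classifier triggered

    def notify_parents(account: Account) -> None:
        print("parents notified (stays within the family sharing group)")

    def handle_image(image_bytes: bytes, account: Account, child_chooses_to_view: bool) -> bool:
        """Returns True if the image is shown. Nothing here leaves the device."""
        if not account.is_child:
            return True                      # feature only applies to child accounts
        if nudity_score(image_bytes) < NUDITY_THRESHOLD:
            return True                      # classifier did not trigger
        # Classifier triggered: the child is warned and can still opt to view/send,
        # with the caveat that the parents may be notified.
        if child_chooses_to_view and account.parental_notification_enabled:
            notify_parents(account)
        return child_chooses_to_view

    print(handle_image(b"some image", Account(is_child=True, parental_notification_enabled=True),
                       child_chooses_to_view=True))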
The other, completely separate system checks files stored in iCloud Photos against a hash list of known, identified child abuse photos. This is already in place on all major cloud photo sites.
Apple clearly wanted to roll out the first one but likely felt they needed more oomph in the child safety push so they made a big deal about expanding the second system to include on-device image hash functionality. I would not be surprised at all if they back down from the latter (though 100% of that functionality will still happen on the servers).
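At its core, that second system is a membership test against a list of hashes of already-identified images. A minimal sketch of the server-side version the cloud hosts already run; note that real deployments use perceptual hashes (PhotoDNA, Apple's NeuralHash) that survive resizing and re-encoding, so sha256 here is only a stand-in to make the example run.

    import hashlib

    def image_hash(image_bytes: bytes) -> bytes:
        # Stand-in only: real systems use perceptual hashes that tolerate
        # resizing and re-encoding; sha256 does not.
        return hashlib.sha256(image_bytes).digest()

    def is_known_match(image_bytes: bytes, known_hashes: set) -> bool:
        # Membership test against the list of already-identified material
        # (supplied to providers by organizations such as NCMEC).
        return image_hash(image_bytes) in known_hashes

    known_hashes = {image_hash(b"previously identified image")}
    print(is_known_match(b"previously identified image", known_hashes))  # True
    print(is_known_match(b"ordinary holiday photo", known_hashes))       # False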
There are two systems, and I will comment first on the second one. The second system you were referring to, the "other" one, is the CSAM detection. By the functionality description alone, it already sounds terrifying enough.
You will be flagged and reported to the authorities on the technical argument that the algorithm has a "one in a trillion" chance of failure. Your account is blocked, and you start your "guilty until proven innocent" process from there.
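For context on that "one in a trillion" figure: Apple does not publish the per-image false-match rate, but an account-level number like that presumably falls out of requiring a threshold of matches before anything is flagged. With entirely made-up inputs, the back-of-the-envelope arithmetic (a simple binomial tail, assuming independent errors) looks like this:

    from math import exp, lgamma, log

    p = 1e-6    # hypothetical per-photo false-match probability
    n = 10_000  # hypothetical number of photos in the library
    t = 30      # hypothetical threshold of matches before anything is flagged

    def log_binom_pmf(k: int) -> float:
        # log of C(n, k) * p^k * (1 - p)^(n - k)
        return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                + k * log(p) + (n - k) * log(1 - p))

    # P(at least t false matches) for an account containing only innocent photos
    p_flagged = sum(exp(log_binom_pmf(k)) for k in range(t, n + 1))
    print(p_flagged)  # vanishingly small for these made-up inputs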
Given the scale at which Apple operates, it is also clear, if you think about it, that the additional human review will happen on a sampling basis. The volume would be too high for a human chain to validate each flagged account.
In any case, as has become clear on multiple occasions, the current model of privacy with Apple is that there is no privacy. It is a tripartite arrangement between you, the other person you interact with, and Apple's algorithms and human reviewers. Is there any difference between this and having a permanent video stream of what is happening inside each room of your house, analyzed by a "one in a trillion" neural net algorithm and additionally reviewed by a human on an as-needed basis?
"Using another technology called threshold secret sharing, the system ensures that the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images."
"The threshold is selected to provide an extremely low (1 in 1 trillion) probability of incorrectly flagging a
given account. This is further mitigated by a manual review process wherein Apple reviews each report to confirm there is a match, disables the user’s account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated."
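For what it's worth, the "threshold secret sharing" piece is a standard construction. Apple's actual scheme is more elaborate (see their technical summary), but a minimal Shamir-style sketch shows the idea: the key that unlocks the vouchers is split so that it only becomes recoverable once the threshold number of shares, one per matching voucher, exists.

    import random

    PRIME = 2**127 - 1  # a prime large enough for this toy demo

    def split_secret(secret: int, threshold: int, num_shares: int):
        # Random polynomial of degree threshold-1 whose constant term is the secret.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        def eval_poly(x: int) -> int:
            acc = 0
            for c in reversed(coeffs):
                acc = (acc * x + c) % PRIME
            return acc
        return [(x, eval_poly(x)) for x in range(1, num_shares + 1)]

    def reconstruct(shares) -> int:
        # Lagrange interpolation at x = 0 recovers the constant term (the secret).
        secret = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj != xi:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
        return secret

    key = 123456789                            # stand-in for the voucher decryption key
    shares = split_secret(key, threshold=30, num_shares=1000)
    print(reconstruct(shares[:30]) == key)     # 30 shares: key recoverable -> True
    print(reconstruct(shares[:29]) == key)     # 29 shares: key stays hidden -> False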
So, in other words, there is no doubt that for this one there will be intervention by Apple employees when required.
For training purposes, for system testing, for law enforcement purposes, etc.
I guess this and other functionality was the reason they decided not to encrypt iCloud backups.
After reviewing what I believe to be every single document published so far by Apple on this, I found only one phrase, and nothing more, detailing how it works. Given the amount of detail published for CSAM detection, the lack of info on this one already looks like a red flag to me. So I am not really sure how you can be so certain of the implementation details.
The phrase is only this:
"Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit. The feature is designed so that Apple does not get access to the messages."
That is all I could find. If you have additional details, please let me know.
This is a feature that will be added in a future update, so here are some conclusions on my part. If we just stay with the phrase "The feature is designed so that Apple does not get access to the messages," I have to note it is missing the same level of detail that was published for CSAM detection.
I can only conclude that:
- They do not get access to the messages currently, due to the current platform configuration. Note they did not say the images will stay on the phone, now or in the future; just that it uses local neural net technology and that the feature is designed so that Apple does not get access to the messages.
- They did not say they will not update the feature in the future to leverage iCloud compute capabilities.
- They did not say whether there is any opt-in or opt-out for using the data for their training purposes.
- They did not say whether they can update the local functionality at the request of law enforcement.
I agree with you that, by the description, it looks like it stays local, but the phrase as written reads like weasel words.
They also mention one of the scenarios, where the child agrees to send the photo to the parent before viewing. Would that be device-to-device communication, or go via their iCloud/CSAM system? Unclear, at least from what I could gather so far.
I am certainly willing to agree I conflated them if you clarify.