Regarding the first part, if that really is the relevant policy, fair enough. I don't know the specific policies used to include or exclude a given image. Where can I find the details?
I consider someone looking at pictures from my local device without my consent to be an attack on my privacy, regardless of the content, or whether they send me to jail afterwards.
The reason Apple's threshold exists in the first place is that individual false positives happen. Some of the images leaked from the victim's device may be entirely unrelated. In the specific example of an environmental protester, there might even be images documenting "crimes" (entirely unrelated to CSAM), due to increased criminalization of protest techniques.
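To put rough numbers on that (illustrative figures only; Apple hasn't published its per-image false-match rate): if each of n images independently trips a false match with probability p, the chance of an innocent library crossing a k-match threshold is a binomial tail, and it collapses fast as k grows. A quick sketch in Python:

    import math

    def binom_tail(n, p, k_min):
        """P(X >= k_min) for X ~ Binomial(n, p). Each term is computed in
        log space so tiny tail probabilities don't overflow or vanish."""
        total = 0.0
        for k in range(k_min, n + 1):
            log_term = (
                math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
                + k * math.log(p) + (n - k) * math.log1p(-p)
            )
            total += math.exp(log_term)
        return total

    # Made-up numbers, not Apple's actual parameters.
    n, p = 10_000, 1e-6          # library size, per-image false-match rate
    print(binom_tail(n, p, 1))   # ~1e-2: one false match is entirely plausible
    print(binom_tail(n, p, 30))  # ~4e-93: thirty simultaneous ones are not

Single false matches are expected at scale; the threshold is what keeps them from turning into human review.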
A system that may be manipulated (anonymously, from a great distance) to trigger spot-checks on the devices of anyone I don't like is a broken system; adversarial images that collide with NeuralHash entries were demonstrated shortly after the model was extracted from iOS, so this isn't hypothetical.
> Regarding the first part, if that really is the relevant policy, fair enough. I don't know the specific policies used to include or exclude a given image. Where can I find the details?
I'm not sure either, or I would've linked it. I have implemented such a reporting system though (compliant with US law, which is relatively privacy-preserving; not any upcoming EU laws, which, like all other EU law, seem like a huge pain to live under).
> I consider someone looking at pictures from my local device without my consent to be an attack on my privacy, regardless of the content, or whether they send me to jail afterwards.
To be clear, this is only if you're using a cloud storage service like iCloud Photos, Google Drive, etc. It's meant to be a strict improvement over the usual setup, which is that your data isn't hidden from the cloud provider at all and they can just look at whatever. It would certainly be bad to have any scanning if you're not opting into a cloud service.
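To make the gating concrete, here's a heavily simplified sketch (all names hypothetical; the real protocol adds blinded hash lookups and threshold secret sharing so neither side learns individual match results below the threshold). The only point it illustrates is that the scan rides the upload path: no cloud opt-in, no hash, no voucher.

    import hashlib
    from dataclasses import dataclass
    from typing import Optional

    def perceptual_hash(image_bytes: bytes) -> bytes:
        # Stand-in only: the real system uses a perceptual hash (NeuralHash)
        # so near-duplicate images collide; sha256 is just a placeholder.
        return hashlib.sha256(image_bytes).digest()

    @dataclass
    class SafetyVoucher:
        # In Apple's design this payload is encrypted so the server can only
        # open vouchers once enough matches accumulate; here it's a plain
        # record, purely for illustration.
        image_hash: bytes

    class FakeCloudSession:
        def upload(self, image_bytes: bytes, voucher: SafetyVoucher) -> None:
            print(f"uploaded {len(image_bytes)} bytes, voucher {voucher.image_hash.hex()[:8]}")

    def upload_photo(image_bytes: bytes, session: Optional[FakeCloudSession]) -> None:
        if session is None:
            return  # not opted into cloud storage: nothing is hashed or sent
        voucher = SafetyVoucher(perceptual_hash(image_bytes))
        session.upload(image_bytes, voucher)  # scan happens only on upload

    upload_photo(b"\x89PNG...", FakeCloudSession())  # hashed and uploaded
    upload_photo(b"\x89PNG...", None)                # local-only: untouched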