"Child porn" is a smokescreen. It's what every privacy advocate saw coming: find the worst, most repugnant thing possible and use it to backdoor encryption.
It's not a dream, and it's really happening, right now.
Yeah. Children are the perfect political weapon. You can use children to justify anything and to destroy the reputation of anyone who opposes you. It's reasonable to assume that anyone using children as an argument is acting in bad faith.
I seriously doubt governments actually care about children. They probably care a great deal more about maintaining their own power and control over their subjects. We see countries like Russia and China using the tools of surveillance and law enforcement to silence political opposition and dissent.
Yes, given all the burgeoning authoritarian states where phones are sold, it's clearly population control.
Regardless of country, scanning for material critical of the PRC, Lukashenko, or Erdoğan is the exact same tech you might use to scan for copyright violations, union organizing, political party opposition, cryptocurrency, or any other dissent your local burgeoning authoritarian wants to pay for.
Both phone OS vendors have proven, over and over, that they are eager to satisfy every authoritarian, no matter the human impact. We're screwed.
CSAM is still problematic, though. The consensus is that it creates a market that incentivizes the abuse of children, because you cannot produce CSAM without CSA. It seems to be one of the few classes of data that is an exception to the standard of privacy applied to almost all other data, to the point that Apple took action against it. Dozens of other countries besides the US agree and outlaw that class of data as well. Finding actual CSAM (and not just any kind of pornography involving minors, hence the shift in terminology) is supposedly directly tied to finding child abusers, thus preventing future child abuse from taking place.
There appear to be few to no studies providing statistical evidence for the market hypothesis, or for the claim that the spread of CSAM causes CSA. But even if such studies existed, I think the argument would ultimately reduce to "we let one more child abuser get away because we preserved our privacy instead." How can such an argument be challenged, given that the prevention of CSA is a legitimate problem on a global scale? Even so much as raising an argument that considers the nature of CSA appears to be taboo - and for good reason, as it carries the risk of being labeled many kinds of terrible things oneself. That seems to be why a lot of threads here reduce the issue to "think of the children" or lambast the potential slippery slope of authoritarian surveillance, instead of discussing why CSAM is outlawed to begin with, and thus why Apple came to this decision in the first place.
Another thing that nobody seems to talk about is that Apple doesn't want to be held liable for CSAM stored on their servers, either. Within that context, this change is Apple's way of addressing that issue, which also happens to erode individual privacy. Apple's values appear to dictate that the tradeoff is worth it in the end. About the only people who have pushed back on this change come from technology or privacy-conscious circles. Nobody else seems to care. That is the status quo, and I'm not sure how it's going to change, given the way general public sentiment surrounding CSA currently stands.
> Finding actual CSAM (and not just any kind of pornography involving minors, hence the shift in terminology) is supposedly directly tied to finding child abusers, thus preventing future child abuse from taking place.
NCMEC and similar groups call all pornography involving minors CSAM.[1] And this system can't detect new CSAM.
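To make "this system can't detect new CSAM" concrete, here is a minimal sketch of matching against a database of known image hashes. The hash function and database contents below are hypothetical stand-ins; Apple's real system uses a perceptual hash (NeuralHash) that tolerates small edits, but the matching logic is the same: an image whose hash is not already in the database - which is every newly produced image - is simply never flagged.

    import hashlib

    # Hypothetical stand-in for a perceptual hash.
    def image_hash(image_bytes: bytes) -> str:
        return hashlib.sha256(image_bytes).hexdigest()

    # Database of hashes of *known*, previously catalogued images (assumed contents).
    known_hashes = {
        image_hash(b"previously catalogued image #1"),
        image_hash(b"previously catalogued image #2"),
    }

    def is_flagged(image_bytes: bytes) -> bool:
        # An upload is flagged only if its hash matches a known entry.
        return image_hash(image_bytes) in known_hashes

    print(is_flagged(b"previously catalogued image #1"))  # True: already catalogued
    print(is_flagged(b"newly generated image"))           # False: new material never matches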
> Another thing that nobody seems to talk about is that Apple doesn't want to be held liable for CSAM stored on their servers, either.
Many people said they would prefer server-side scanning. Others argued that E2E encryption would shield Apple from liability.
Since the system can't detect new CSAM, doesn't it actually incentivize the production of new material? This mechanism could then increase the amount of abuse going on.
That's just more fuel for the fire: the intent here is not to protect children. The intent is to scan everybody for things "to be determined", and not subject to public scrutiny.
Europe is not immune to this. José Manuel Barroso, the former President of the European Commission, used to be a member of a communist revolutionary student group ... one that murdered people. He was present at at least three such killings, and it is not clear whether "present" is where it stops, but obviously he was never convicted of murder (he was, of other things). When he came to office, there was a witch hunt for material and articles telling this story. And obviously plenty of lower-ranking officials wanted to take care of some of their own dirty laundry. This is the real source of the "right to be forgotten" privacy legislation. Makes you all warm and fuzzy inside, doesn't it.
EVEN when it comes to child abuse, the top politicians in Europe just brazenly ignore the rules. Emmanuel Macron's wife ... has publicly confessed to being a pedophile and to having sexually abused at least one child (yes, that child was Mr. Macron: she was his French teacher. She blames him, incidentally, as if that matters). Macron has publicly confirmed this (and taken "the blame", which of course doesn't matter: he was a minor). For anyone in France it is clear: this is a despicable crime, and she would be harshly punished ... but any action against her might bring Le Pen to power, or even tip the balance in the EU parliament to anti-EU parties if we are really unlucky. I bet he has said as much to the public prosecutor.
A child defending the pedophile who abused them happens often, by the way, especially to stay out of the hands of child services, and it is of course never accepted as an excuse ... except ... when it applies to government officials, even pretty low-level ones.
I was gonna write a whole thing, but mostly I just wanted to say we'll be able to deepfake CSAM inside of 10 years, no question, so this whole thing is moot.
I think this is a massively important point that needs further consideration.
If a person can generate images on demand, any system based on recognising known images is immediately bypassed.
AI image classification will be your only option, but the false positive volumes have the potential to be massive, swamping any follow-up investigation.
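To put rough numbers on that (every figure below is an illustrative assumption, not a measured property of any real classifier): at the scale of a photo service, even a low false positive rate buries the true hits under millions of false alarms.

    # Illustrative base-rate arithmetic; every number here is an assumption.
    photos_scanned = 5_000_000_000   # photos scanned per year (assumed)
    prevalence = 1e-6                # fraction that is actually illegal (assumed)
    false_positive_rate = 0.001      # 0.1% of innocent photos get flagged (assumed)
    true_positive_rate = 0.90        # 90% of real material gets caught (assumed)

    actual_bad = photos_scanned * prevalence
    true_hits = actual_bad * true_positive_rate
    false_alarms = (photos_scanned - actual_bad) * false_positive_rate

    print(f"true hits:    {true_hits:,.0f}")     # ~4,500
    print(f"false alarms: {false_alarms:,.0f}")  # ~5,000,000
    print(f"flags that are real: {true_hits / (true_hits + false_alarms):.2%}")  # ~0.09%

Under those assumed numbers, investigators would have to wade through roughly a thousand false alarms for every genuine hit.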