> "Scanning every user’s privately stored iCloud data would create new threat vectors for data thieves to find and exploit"
> "It would also inject the potential for a slippery slope of unintended consequences. Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types."
Yes, and it was patently obvious from the outset. Why did it take a massive public backlash to actually reason about this? Can we get a promise that future initiatives will be evaluated a bit more critically before crap like this bubbles to the top again? Come on, you DO hire bright people, so what's your actual problem here?
I want to be clear that I agree with you and I am not providing this explanation as an excuse for Apple, but merely as an explanation for what might have happened with the timeline: first they announced they were doing this, a bunch of us said "no this is the first step towards breaking e2e entirely", and THEN this year there was the high-profile issue--note that I am not saying it is a new issue, but merely that it was suddenly a high-profile one that actually caused a lot of press and backlash--with the laws in the UK and/or Australia or whatever that showed we were all correct, and so I'd guess even the most ardent "I am smarter than everyone" person at Apple finally went "ah damn I was wrong".
No, as I didn't even know which country it was from for sure, right? ;P But I went ahead and typed "UK encryption" into the Hacker News search and it all popped up from the last few months; the first hit, from a month ago, was even about Apple actively scrambling to push back on it (lending even more support to my point).
This is truly mind-boggling because it’s Apple and such an important topic. We are right to expect better from these bright people.
It worries me that Sarah Gardner seems to be truthful when she hints that no one at Apple reached out to her after Apple killed the plan in December 2022. She must have also missed articles like the one at The Verge [1].
Gardner’s message to Tim is dated 2023-08-30 03:24:37 (CEST).
Erik Neuenschwander's reply was printed out 2023-08-31 14:41:00 (probably PDT).
So that’s a period of about 44 hours that the printout [0] represents.
It would be helpful to know the time when Apple’s reply was sent to deduce whether Apple had time for additional deliberations before writing the email to Gardner.
Regardless, as reported by The Verge they obviously had deliberations shortly after the public outcry, killed the plan in December 2022, and then presumably failed to inform Gardner about the reasons.
If that’s the case, far smaller companies have checks and balances in place to keep that from happening.
>It would be helpful to know the time when Apple’s reply was sent to deduce whether Apple had time for additional deliberations before writing the email to Gardner.
In the old days, and I assume it is still the case judging from observing Apple for the past 20+ years, every single public response has to go through Apple PR. Even for public interviews they were given lots of preparation beforehand.
It is likely Apple had this prepped for a long time. Or it may even have been another PR scenario [1] where the CEO was baited into sending this email before a response was given and it somehow "leaked" to the press.
Apple was concerned about governments using the excuse of CSAM to pass laws which would force Apple to weaken encryption across the board.
Whether this was the right response to such concern is something I’m not unsympathetic towards. Certainly I think it’s reasonable to say that Apple was trying to thread a needle in a way which was never going to please everyone, even if it somehow turns out to have been the least-worst outcome.
Yes, but to OP's point: this was patently obvious from the outset. Even here, the comments at the time [1] pointed to all sorts of potential misuse, political or religious persecution, dystopian cases of false positives, and the door this would leave open to future government escalation beyond CSAM.
How could they not see that they would have a giant backlash on their hands? Did they overestimate their ability to get away with the "it's for our children" excuse this badly?
I think Apple was doing the exact opposite. They wanted to do the __least possible thing__ in order to stave off the far worse outcome of intelligence departments using the "it's for our children" excuse to pressure elected representatives to vote for back doors on consumer encryption.
The ridiculous thing is that Apple's proposal was functionally identical to what other platform vendors (e.g. Google, Microsoft) were already doing. In all cases — including Apple's proposed system — only photos uploaded to cloud storage would be scanned to see if they matched CSAM already known to the government. The only difference with Apple's proposal was that the initial "fuzzy hash" calculation would be performed on-device prior to upload, instead of in the cloud after upload.
The reason for doing it differently was because it meant (in theory) satisfying both masters — implementing real end-to-end encryption, while not being seen as a CSAM scanning laggard compared to Google, Microsoft, etc.
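To make the "fuzzy hash" idea concrete, here is a minimal sketch of a perceptual hash, a simple average hash built on Pillow. This is illustrative only: Apple's NeuralHash was a learned embedding rather than this algorithm, and the file path and match threshold below are hypothetical.

```python
# Illustrative perceptual ("fuzzy") hash: an 8x8 average hash.
# Unlike a cryptographic hash, visually similar images produce nearby values.
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Shrink to grayscale size x size, then record which pixels beat the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits  # a 64-bit fingerprint for the default size


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")


# Hypothetical usage: a resized or recompressed copy of the same photo should
# differ by only a few bits, whereas any edit flips roughly half the bits of a
# cryptographic hash.
# is_match = hamming_distance(average_hash("upload.jpg"), known_bad_hash) <= 5
```

The same comparison can run on-device before upload or on a server after upload; the math is identical, which is why the only real difference in Apple's proposal was where the hash was computed.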
Other vendors just scan all your shit and nobody cares.
Signal and Meta (WhatsApp) don’t scan your messages. Apple’s actions have shown they are untrustworthy, and even if they’ve reversed this particular decision, compromising on principle has put authoritarians on notice they are open to compromise in the future, like the UK’s horrific Online Safety Bill.
Pointing out a few app vendors simply isn't impressive. Of course some app vendors can take a stand against mid-tier governments. It's great marketing for them, and the corporate risk isn't so high. It's the platform vendors which have much at stake.
And of the three big platform vendors, two of them already scan private photos for CSAM right now — Google and Microsoft. Yet nobody is outraged because nobody actually cares. There's no logical consistency. Google and Microsoft can implement scanning and there's no outrage. Apple went to great lengths to tell everyone exactly what they were proposing to do before they did it, and all the online people are outraged and calling Apple untrustworthy. Sure, whatever.
Neither Google nor Microsoft operate a secure messaging platform that matters (email, yes, but email security is a lost cause), unlike WhatsApp (and Facebook Messenger), iMessage, Telegram or Signal. I'm not giving Google & Microsoft a pass, they're just irrelevant for the discussion at hand.
The CSAM scanning Apple abandoned, ie the topic at hand, was for the cloud, like the one operated by Google or Microsoft. You switched the topic to secure messaging platforms, which are irrelevant for the discussion at hand.
Wholesale scanning of private data is unacceptable, and the fact that you engage in such mental gymnastics to justify it is a sign that everyone should be distrusting any platform by default. Simple as.
Your first sentence makes no sense. Perhaps you meant to say: "Wholesale scanning of private data is unacceptable therefore everyone should be distrusting any platform by default." That's a statement I would agree with.
I reject the claim of "mental gymnastics" as a meme bereft of substance, and specifically in this instance, as an unproven hypothesis. But even if I granted the inference, I don't see how overly elaborate argumentation could have any explanatory power for the trustworthiness (or otherwise) of major platforms.
I'm curious, why do you think I'm advocating trust in any platform? And I'm curious why you think that I have any concern one way or another for an "exodus" from any platform?
Really, I'm just replying to say that while I think I understand the gist of the tone of your reply, I don't actually understand anything you wrote. And my hope is that you can clarify, because — and I say this with all sincerity — I really am genuinely curious.
> How could they not see that they would have a giant backlash on their hands? Did they overestimate their ability to get away with the "it's for our children" excuse this badly?
I imagine it wasn't an environment where one could argue those concerns on fair grounds without it being seen as enabling CSAM and shot down. I also imagine it's enticing to call their competitors' products "pedo-phones".
So how come there is no public backlash against Microsoft and Google who scan your cloud pictures in the cloud, enabling political and religious persecution, dystopian cases of false positives, and future government escalation?
Apple had a bunch of technical obfuscation that may have convinced them that they'd engineered their way out of a tough spot. "Your scientists were so preoccupied with whether they could, they didn't stop to think if they should."
When it was made public people shot all sorts of holes both in the technical mechanisms themselves as well as their insufficiency even if they all worked as intended.
We should have some sympathy-- they did have some neat technology. But neat technology isn't enough.
Post-Steve Jobs Apple, especially after Scott Forstall and Katie Cotton left (along with a few other top executives): Tim Cook's Apple was left with people of harmony. These left-aligning, DEI-focused people have the same characteristic: their way is a force for good, hence their way is the only way. Same as early Google in the 00s. (Privacy is a fundamental human right? While actively securing and promoting Chinese components in their supply chains.)
I am sure the CSAM scanning started with good intentions, as most ideals do. But fundamentally they don't work in this complex world.
All roads to hell are paved with good intentions.
I hope there is still enough of Steve Jobs' conviction left inside Apple.
> Tim Cook's Apple was left with people of harmony. These left-aligning, DEI-focused people have the same characteristic: their way is a force for good, hence their way is the only way.
Ironic that this opinion is just as reductive and reactionary as the thing that it decries.
I'm curious about the new parental control features they announced at the same time as the iCloud photo scanning. My recollection is that when they withdrew the iCloud scanning they also withdrew the new parental controls.
I'm curious why they also withdrew those. For those who don't remember the parental controls, which were largely overshadowed by the controversy over the cloud stuff, they were to work like this:
1. If parents had enabled them on their child's device, they would scan incoming messages for sexual material. The scan would be entirely on-device. If such material was found, it would be blocked, the child would be notified that the message contained material that their parents thought might be harmful, and they would be asked if they wanted to see it anyway.
2. If the child said no, the material would be dropped and that would be the end of it. If the child said yes what happened next depended on the age of the child.
3. If the child was at least 13 years old, the material would be unblocked and that would be the end of it.
4. If the child was not yet 13, they would be given another warning that their parents think the material might be harmful, and again asked if they want to go ahead and see it. They would be told that if they say "yes" their parents will be notified that they viewed the material.
5. If they say no the material remains blocked and that is the end of it.
6. If they say yes it is unblocked, but their parents are told.
There wasn't a lot of discussion of this, and I only recall seeing one major privacy group object (the EFF, on the grounds that if it reaches step 6 it violates the privacy of the person sending sex stuff to your pre-teen because they probably did not intend for the parents to know).
The issue is that it’s predicated on an age field that can be set separately. It’s easy to use parental controls to control non-children by setting a lower age internally. Think victims of human trafficking or adults in odd relationship situations.
It might’ve served the greater good of bringing home to parents that their kids are, in fact, old enough to be sexual beings and suppression of their sexuality just isn’t possible anymore. Maybe that would lead to fewer cases of kids like the one in my classroom yesterday, who had just learned at the ripe old age of 14 in the 9th grade that he was going to be a father. The proud mother will join him in high school next year, after she completes the 8th grade. If she completes it, I suppose.
I don’t disagree with you at all but there’s zero chance a public tech company wants to be the face of educating youths about their sexual agency (with material out of their control, at that)
I believe they pulled that back as well, but now there is "Communication Safety" and "Sensitive Content Warning" available to everyone in iOS 17, coming out sometime on or after Sept 12: https://www.apple.com/ios/ios-17-preview/ ctrl-f "Sensitive Content Warning"
This feature is back in iOS 17, but it doesn’t do notifications and can be enabled per-device. It basically just blurs nsfw photos when they’re on the screen. It’s great when on a business trip and sexting with the SO. You can open your messages in front of your coworkers without having to worry about the messages being open from the night before.
I would be far too worried about false-negatives to do that. Especially as they get rarer, and I'd become more confident about loading up images in public.
Big fan of keeping different forms of content separate.
I'm a bit confused here. It seems like failure analysis gives us a clear winning direction. When the system fails, would you rather a nude image be shown clearly or a non-nude image be blurred? I think 99% of people would rather have the latter. There is little risk in a normal image being falsely flagged, but the outcome is likely far worse when a nude image is not censored.
It really surprises me a lot of times how little failure analysis is done in software engineering considering it is basically the cornerstone of most physical engineering. It is critical that you design systems to fail in certain ways. That is error messaging... (Also see Blackstone's Ratio for an example in law)
Different forms for different content is a nice answer, but you also have to remember that we're talking a product for the masses. So while I don't think your answer is wrong, it is.
> if it reaches step 6 it violates the privacy of the person sending
Let's say the parents are abusive, and someone wants to talk with the child about that (via chat, for some reason).
Now, if the algorithms sometimes incorrectly flag private messages that were in fact safe -- could that be mitigated by letting the sender know: "Your message will be scanned and possibly shown to the parents of the recipient" before they hit Send?
(O.t.o.h. that leaks info about the age of the recipient.)
The article keeps saying that Apple has responded or that Apple has clarified and then linking other Wired articles. Is there an Apple press release somewhere? If so, I'd rather read that.
ETA: looks like they directly provide documents from Apple at the bottom of the article
Sarah Gardner, the author of the letter to Apple and CEO of the Heat Initiative, worked for 10 years and until earlier this year as a VP at Thorn [1]. Thorn sells a "comprehensive solution for platforms to identify, remove and report child sexual abuse material." [2]
She's using PR to pressure Apple into implementing the kind of solution her previous company is selling. Won't someone think of the children??
It's nice that Apple have clarified this. I think that the original intent was a misstep and possibly an internal political situation that they had to deal with. I can see that a number of people would be on each side of the debate with advocacy throughout the org.
There is only one correct answer though and that is what they have clarified.
I would immediately leave the platform if they progressed with this.
I’m not sure I understand Apple’s logic here. Are iCloud Photos in their data centers not scanned? Isn’t everything by default for iCloud users sent there automatically to begin with? Doesn’t the same logic around slippery slope also apply to cloud scans?
This is not to say they should scan locally, but my understanding of the CSAM scanning was that photos would only be scanned on their way to the cloud anyways, so users who didn't use iCloud would never have been scanned to begin with.
Their new proposed set of tools seems like a good enough compromise from the original proposal in any case.
You are correct, the original method would only have scanned items destined for iCloud and only transmitted data about matching hashes. And yes, similar slippery slope arguments exist with any providers that store images unencrypted. They are all scanned today, and we have no idea what they are matched against.
I speculated when this new scanning was announced (and now we know) that it was in preparation for full E2EE. Apple came up with a privacy-preserving method of trying to keep CSAM off their servers while also giving E2EE.
The larger community arguments swayed Apple from going forward with their new detection method, but did not stop them from moving forward with E2EE. At the end of the day they put the responsibility back on governments to pass laws around encryption - where they should be, though we may not like the outcome.
> There are also ways to detect matches even with e2ee
By definition, encryption (with unique user keys) means you can't infer nor check what the content of the message is. Not without client cooperation, which is what this feature would have been.
This is what I was recalling, this method gives you a clever way to do it using the file itself as the key:
> “Convergent encryption solves this problem in a very clever way:
“The way to make sure that every unique user with the same file ends up with an encrypted version of that file that is also identical is to ensure they use the same key.
However, you can’t share keys between users, because that defeats the entire point; you need a common reference point between users that is unknown to anyone but those users.
“The answer is to use the file itself: the system creates a hash of the file’s content, and that hash (a long string of characters derived from a known algorithm) is the key that is used to encrypt said file.
“If every iCloud user uses this technique — and given that Apple implements the system, they do — then every iCloud user with the same file will produce the same encrypted file, given that they are using the same key (which is derived from the file itself); that means that Apple only needs to store one version of that file even as it makes said file available to everyone who “uploaded” it (in truth, because iCloud integration goes down to the device, the file is probably never actually uploaded at all — Apple just includes a reference to the file that already exists on its servers, thus saving a huge amount of money on both storage costs and bandwidth).
“There is one huge flaw in convergent encryption, however, called “confirmation of file”: if you know the original file you by definition can identify the encrypted version of that file (because the key is derived from the file itself). When it comes to CSAM, though, this flaw is a feature: because Apple uses convergent encryption for its end-to-end encryption it can by definition do server-side scanning of files and exploit the “confirmation of file” flaw to confirm if CSAM exists, and, by extension, who “uploaded” it. Apple’s extremely low rates of CSAM reporting suggest that the company is not currently pursuing this approach, but it is the most obvious way to scan for CSAM given it has abandoned its on-device plan.”
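A minimal sketch of the convergent encryption idea quoted above, assuming SHA-256 as the hash and AES-GCM from the Python `cryptography` package; the deterministic nonce derivation is my own illustrative choice, not Apple's actual construction.

```python
# A sketch of convergent encryption: the key (and here, the nonce) are derived
# from the file itself, so identical files encrypt to identical blobs.
import hashlib

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def convergent_encrypt(plaintext: bytes) -> bytes:
    key = hashlib.sha256(plaintext).digest()               # key = hash of the file
    nonce = hashlib.sha256(b"nonce" + key).digest()[:12]   # deterministic, for dedup
    return AESGCM(key).encrypt(nonce, plaintext, None)


def confirmation_of_file(known_file: bytes, stored_blob: bytes) -> bool:
    # Anyone holding the original file can recognize its encrypted form without
    # ever holding the user's account keys: the "flaw" that doubles as a
    # CSAM-scanning feature.
    return convergent_encrypt(known_file) == stored_blob


if __name__ == "__main__":
    photo = b"...raw bytes of some photo..."
    blob_a = convergent_encrypt(photo)   # "uploaded" by user A
    blob_b = convergent_encrypt(photo)   # "uploaded" by user B
    assert blob_a == blob_b              # dedup: one stored copy suffices
    assert confirmation_of_file(photo, blob_a)
```

Deriving both the key and the nonce from the file is what makes deduplication possible, and it is also exactly what enables the "confirmation of file" check: anyone holding the original file can recompute the stored ciphertext.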
That makes me happy, because 12 years ago here on HN I posted a comment [1] outlining how a Dropbox-like service could be implemented that stored user files encrypted, with the service not having the keys, yet allow for full deduplication when different users were storing the same file, while still supporting the normal Dropbox sharing features.
The file encryption part was based on using a hash of the file as the key.
It's always nice to later find out that one's quick amateur idea turns out to be an independent rediscovery of something legit. Now that I've learned it is called "convergent encryption", Googling tells me it goes back to 1995 and a Stac patent.
This still suffers the same problems as the original proposal. Specifically, Apple could still be pressured or forced by governments to check for non-CSAM images. And using cryptographic hashing means they can’t detect altered files, while using perceptual hashing leaves them open to false positives.
That’s not what’s commonly understood to be a modern cipher.
It would be trivial for a government to make a list of undesired messages/images and find everyone that has forwarded it.
Bit, yes. Though on a moderately large file it would be easy to brute force all one-bit modifications, and then the effort grows exponentially (basically) in the number of bits flipped, so you’ll want to do more than a few.
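A rough sketch of that brute-force point, using plain SHA-256 fingerprints (the same logic applies to convergent ciphertexts); the file contents and sizes are illustrative.

```python
# Brute-forcing all one-bit modifications of a known file against a stored
# SHA-256 fingerprint. One flip is cheap to search; k flips cost ~C(n, k) hashes.
import hashlib


def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def matches_within_one_bit_flip(stored_fp: str, known_file: bytes) -> bool:
    if fingerprint(known_file) == stored_fp:
        return True
    for byte_idx in range(len(known_file)):
        for bit in range(8):
            candidate = bytearray(known_file)
            candidate[byte_idx] ^= 1 << bit
            if fingerprint(bytes(candidate)) == stored_fp:
                return True
    return False


if __name__ == "__main__":
    original = b"some known image bytes" * 100
    tampered = bytearray(original)
    tampered[5] ^= 0b00000100                              # flip a single bit
    stored = fingerprint(bytes(tampered))
    assert fingerprint(original) != stored                 # exact match defeated...
    assert matches_within_one_bit_flip(stored, original)   # ...but trivially recovered
```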
> At the time I also thought it was obvious it was in preparation for e2ee
I thought the same.
> despite loud people on HN who disagreed
Yeah, loud people be like that, but this is really Apple’s communication fault. They could have started with that “hey we want to provide e2e encrypted storage, the price of it will be that we need to scan what you upload for csam”.
In my opinion their goal was to get stuff to a state where they could encrypt everything on iCloud so that even they can't access it.
To counter the "think of the children" -argument governments use to justify surveillance, Apple tried scanning stuff on-device but the internet got a collective hissy-fit of intentionally misunderstanding the feature and it was quickly scrapped.
> In my opinion their goal was to get stuff to a state where they could encrypt everything on iCloud so that even they can't access it.
They basically did. If you turn on Advanced Data Protection, you get all of the encryption benefits, sans scanning. The interesting thing is that if you turn on ADP though, binary file hashes are unencrypted on iCloud, which would theoretically allow someone to ask for those hashes in a legal request. But it's obviously not as useful for CSAM detection, as, say, PhotoDNA hashes. See: https://support.apple.com/en-us/HT202303
For anyone else wondering, to enable it just go to iOS Settings -> iCloud and you'll see "Advanced Data Protection." Toggle it to enabled to create a recovery key, which you'll then be prompted to input correctly after saving it somewhere safe, and then return to the iCloud Settings page, toggle it one more time and enter your recovery key again to confirm.
> but the internet threw a collective hissy-fit of intentionally misunderstanding the feature
how was it misunderstood? your device would scan your photos and notify apple or whoever if something evil was found. wasn't that what they were trying to do?
Your device would scan your photo at the point of you uploading it to the cloud and then it could encrypt it before sending it to the cloud. That meant that Apple's cloud servers didn't need to be able to scan it to comply with US Govt "recommendations" for cloud providers.
Whereas right now all the other cloud providers just send the photo as-is and scan it on the cloud servers.
With Apple's approach, the cloud servers don't get to look at every single one of your photos like cloud vendors do today, scanning happens within the privacy of your own phone, and only known-kiddy-porn signatures are flagged.
Apple came up with a way to make things way more private, but the concept of your own device working "against" you if you happen to be a pedophile was too much of a leap.
Your device would've scanned your photos ONLY if you would've uploaded them to Apple's cloud service anyway.
And it wouldn't have notified Apple of "something evil", just specifically known and human-verified actual real child abuse photos. And not even that, it would have needed multiple matches of those very real and verified abuse photos before it flagged them so that a real human could see a "visual derivative" of the photos.
Only if those multiple matches of derivatives were deemed to be actual, very real child pornography would the authorities have been called.
But nope. Now they just scan ALL your data in the cloud when the authorities demand it. And that's somehow better according to the internet in a way I still can't understand.
That was explained in the original design. Each possible match would count, let’s call it a “point”.
Once you reached a certain threshold (the number was not given) it would trigger an alert in a system at Apple.
Each report contained a bit of data that wasn’t enough to identify someone. Once enough “points” from one account accumulated they’d have enough to identify who you were, which files matched, and presumably the full decryption key.
I believe the plan was the suspect files would be decrypted and compared against the real CSAM signatures. If a close match was found it would be sent to NCMEC for confirmation and law enforcement actions.
The threshold was to prevent false positives from the perceptual hashes, like the Google AI scanning incident. Reportedly nobody has one or two pictures. People with CSAM tend to have a lot, so they’d show up “bright red”. They probably didn’t want to reveal the number so people wouldn’t try to keep only that many pictures on their phone to avoid detection.
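The "points" mechanism described above was built on threshold secret sharing. Below is a minimal sketch of the core idea using Shamir's scheme over a prime field; the field size, threshold, and share counts are illustrative, and Apple's actual protocol wrapped this in private set intersection tied to its hash matches rather than handing out standalone shares like this.

```python
# Shamir threshold secret sharing: each "point" is one share of an account
# secret; fewer than `threshold` shares reveal nothing useful, while `threshold`
# shares reconstruct it. Parameters here are illustrative only.
import random

PRIME = 2**127 - 1  # a prime field large enough for a 16-byte secret


def make_shares(secret: int, threshold: int, count: int):
    """Split `secret` so that any `threshold` of the `count` shares recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]

    def f(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc

    return [(x, f(x)) for x in range(1, count + 1)]


def recover(shares) -> int:
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


if __name__ == "__main__":
    account_key = random.randrange(PRIME)  # stands in for the decryption key
    shares = make_shares(account_key, threshold=30, count=100)
    # Below the threshold, nothing useful is learned (holds with overwhelming probability).
    assert recover(shares[:29]) != account_key
    # At the threshold, the key (and hence the flagged content) becomes recoverable.
    assert recover(shares[:30]) == account_key
```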
> What do you think they were going to do once the scanning turned up a hit? Access the photos? Well that negates the first statement.
In the whitepaper, the cryptography required that Apple have multiple different PhotoDNA-style (or whatever the name was for the on-device one) matches before they could unwrap the user's message containing these suspected CSAM photos and then send them to NCMEC.
"reduced-quality copy" was the wording in the whitepaper IIRC.
So the resolution most likely would've been the same, but the detail blurred so that the poor human agent wouldn't have to see actual CSAM, just enough to make a call whether it is or isn't a likely match.
No. A small thumbnail “visual derivative” is included with the neural hash, which is unlocked (only for matches) only once the number of matches exceeds a threshold.
This was all outlined in the first two pages of the white paper, and explained in more detail further down.
> I’m not sure I understand apples logic here. Are iCloud Photos in their data centers not scanned? Isn’t everything by default for iCloud users sent there automatically to begin with? Doesn’t the same logic around slippery slope also apply to cloud scans?
I don’t see the problem with this status quo. There is a clear demarcation between my device and their server. Each serving the interests of their owner. If I have a problem with their policy, I can choose not to entrust my data to them. And luckily, the data storage space has heaps of competitive options.
This status quo is that a lot of countries want to use the CSAM argument to push privacy-invasive technology (cough UK) like e.g. forcing companies to allow the government to break E2EE to catch CSAM distributors. Apple made this feature while planning to move iCloud Photos to E2EE so that they could argue "look, we still catch x CSAM distributors with n < 0.x% false positive rate, even with E2EE photos. therefore you don't need to pass these laws that break E2EE."
I know "give them an inch, they take a mile" is a reductive comparison but I really can't see this way of thinking going any other way in the long term.
> the data storage space has heaps of competitive options
The generic space does, yes. But if you want native integration with iOS, your only choice is iCloud. It would certainly be nice if this was an open protocol where you could choose your own storage backend. But I think the chances of that ever happening are pretty much zero.
Precisely! The software running on the phone should be representing the owner of the phone, period. We begrudgingly accept cloud scanning because that ship has already sailed, despite it being a violation of the analog of fiduciary duty. But setting the precedent that software on a user's device should be running actions that betray the user is from the same authoritarian vein as remote attestation. The option ignored by the "isn't this a good tradeoff" question is one where the device encrypts files before uploading them to iCloud, iCloud may scan the encrypted bits anyway to do their legal duty, and that's the end of the story. This is what we'd expect to be happening if device owners' interests were being represented by the software on the device, and so we should demand no less despite the software being proprietary.
1. What you’re asking for (“The option … where the device encrypts files before uploading them to iCloud, iCloud may scan the encrypted bits anyway to do their legal duty, and that's the end of the story.”) is impossible.
2. The division you envisage (“The software running on the phone should be representing the owner of the phone, period.”) is wishful thinking. Do you think the JavaScript in your browser does only things in your interest?
A state of affairs where users' devices encrypt files, and then iCloud scans the stored blobs to perform a perfunctory compliance check is clearly not impossible. So please describe what you mean.
Web javascript is one of the places the battle is being fought. Users are being pushed into running javascript (and HTML) that acts directly against our own interests (eg ads, surveillance, etc). Many of the capabilities exploited by the hostile code should be considered browser security vulnerabilities, but the dynamic is not helped by one of the main surveillance companies also making one of the main browsers.
But regardless of the regime the authoritarians are trying to push, the computer-represents-user model is what we should aspire to - the alternative is computational disenfranchisement.
> Are iCloud Photos in their data centers not scanned?
No outright statement confirming or denying this has ever been made to my knowledge, but the implication, based both on Apple's statements and the statements of stakeholders, is that this isn't currently the case.
This might come as a surprise to some, because many companies scan for CSAM, but that's done voluntarily because the government can't force companies to scan for CSAM.
This is because, based on case law, companies forced to scan for CSAM would be considered deputized, and the scanning would then be a breach of the 4th Amendment's safeguards against "unreasonable search and seizure".
The best the government can do is force companies to report "apparent violations" of CSAM laws. This seems like a distinction without a difference, but the difference is between being required to actively search for it (and thus becoming deputized) vs. reporting it when you come across it.
Even then, the reporting requirement is constructed in such a way as to avoid any possible 4th amendment issues. Companies aren't required to report it to the DOJ, but rather to the NCMEC.
The NCMEC is a semi-government organization, autonomous from the DOJ, albeit almost wholly funded by the DOJ, and they are the ones that subsequently report CSAM violations to the DOJ.
The NCMEC is also the organization that maintains the CSAM database and provides the hashes that companies, who voluntarily scan for CSAM, use.
This construction has proven to be pretty solid against 4th amendment concerns, as courts have historically found that this separation between companies and the DOJ and the fact that only confirmed CSAM making its way to the DOJ after review by the NCMEC, creates enough of a distance between the DOJ and the act of searching through a person's data, that there aren't any 4th amendment concerns.
The Congressional Research Service did a write up on this last year for the ones that are interested in it[0].
Circling back to Apple, as it stands there's nothing indicating that they already scan for CSAM server-side and most comments both by Apple and child safety organizations seem to imply that this in fact is currently not happening.
Apple's main concerns however, as stated in the letter by Apple, echo the same concerns by security experts back when this was being discussed.
Namely that it creates a target for malicious actors, that it is technically not feasible to create a system that can never be reconfigured to scan for non-CSAM material and that governments could pressure/regulate it to reconfigure it for other materials as well (and place a gag order on them, prohibiting them to inform users of this).
At the time, some of these arguments were brushed off as slippery slope FUD, and then the UK started considering something that would defy the limits of even the most cynical security researcher's nightmare, namely a de facto ban on security updates if it just so happens that the UK's intelligence services and law enforcement services are currently exploiting the security flaw that the update aims to patch.
>(e) Failure To Report.—A provider that knowingly and willfully fails to make a report required under subsection (a)(1) shall be fined—
>(1) in the case of an initial knowing and willful failure to make a report, not more than $150,000; and
>(2) in the case of any second or subsequent knowing and willful failure to make a report, not more than $300,000.
I find these clauses at odds with one another in that the Failure to Report clause created a tangible duty upon the provider, which, were I a judge, would satisfy me that the provider was, in fact, deputized.
Does nobody actually read the legislation that is passed and realize that, oops, I just passed an unconstitutional law?
That they include the construed... clause just solidifies for me that the legislators in question were trying to pull a fast one.
It’s because they wanted to have their cake and eat it too: get as close as possible to the 4th Amendment without crossing the line.
Put simply, if they have knowledge of it they have a duty to report, but they can’t be compelled to try and find out.
In theory this means that if they happen to stumble upon it or are being alerted to it by a third party (e.g. user report) then they have to report it, in practice many voluntarily monitor it, maybe because they want to avoid having to litigate that they didn’t have knowledge of it or maybe because it’s good PR or maybe because they care for the case.
I think in most cases it’s all of the above in one degree or another.
I have no qualms with voluntary monitoring and reporting. However, the inclusion of the penalty imposes a tangible duty. That tangible duty is enough to convince me this act is effectively a de facto deputization. The act of searching is, in essence, "look out for, raise signal when found". This Act does everything it can to cast the process that happens after the search phase as "the search forbidden by the 4th Amendment" instead of the explicitly penalized activity, which is couched as "voluntary, and not State mandated despite a $150,000 price tag assessed by... the State". It even goes so far as creating a quasi-government entity, primarily funded by the State, whose entire purpose is explicitly intended to act as a legal facade creating sufficient "abstract distance" through which the State can claim "it twas not I who did it, but a private organization; Constitutional protections do not apply".
Words mean things, and we've gotten damned loose with them these days, in my opinion, when the want strikes. "Voluntary" anything with a $150,000 fine for not doing it is no longer voluntary. It's now your job. If it's your job, and the State punishes you for not doing it, you are a deputy of the State. I do not care how many layers of legal fiction and indirection are between you and the State.
If you can't not comply without jeopardy, it ain't voluntary.
> I find these clauses at odds with one another in that the Failure to Report clause created a tangible duty upon the provider, which, were I a judge, would satisfy me that the provider was, in fact, deputized.
Absolutely not. That section requires a report under the circumstances where a provider has obtained “actual knowledge of facts and circumstances” of an “apparent violation” of various code sections (child porn among others). It doesn’t place on the provider the burden of seeking out that knowledge. In other words, it covers the cases where, for example, a provider receives a report that they are hosting a child porn video and are pointed to the link to it. Providers can’t jam their fingers in their ears and shout LALALA when they’re told they’re hosting (or whatever) CSAM and given the evidence to support it. They don’t have to do anything at all to proactively find it and report it, however.
Think of it like this. I, as a high school, teacher, am a mandated reporter of child abuse. It’s literally a crime (a misdemeanor) for me not to report suspected child abuse. But I don’t have to go out and suss out whether any of my students are being abused. That doesn’t make me a state actor for 4th Amendment purposes (although I am otherwise, because I am a public school teacher, but that’s a different issue).
Except it does make you a state actor, and even children know it, as even the 9-11 year old demographic has literally disclosed to me, the "crazy uncle" in their life, that they are not comfortable being open with any type of guidance counselor or state licensed therapist due to knowledge of just such a dynamic.
A spade, is a spade by any other name. If the state will come down on you for not doing something (message generation), you are a deputy of the State. Period.
I don’t know if they do or not, but like everyone else I assume they are. Seems like it would be a massive legal (and PR!) liability if it was discovered they weren’t.
Because the idea is that the iCloud data would be encrypted so their servers couldn’t scan it. With the plan being they would do on device scanning of photos that were marked as being stored on iCloud.
It’s objectively better than what google does but I’m glad we somehow ended up with no scanning at all.
That sounds strange. I mean, I'm not sure what the big difference is. If data is scanned on iCloud, this means it's not encrypted, got it. If it's scanned on devices, the data is fully encrypted on iCloud, but Apple has access by scanning it on devices and can send unencrypted matches, so it behaves as an unencrypted system that can be altered at Apple's will, just like iCloud...
But still, why scan locally only if iCloud is enabled? Why not scan regardless? Since the policy is meant to 'catch bad people', why limit it to the iCloud option and not scan all the time?
Apple doesn’t want to scan, period. However, if Apple does E2EE iCloud, the biggest political issue will be that of CSAM. So in order to preserve CSAM detection, they came up with this scheme.
Apple doesn’t want to expand their power which is why they don’t scan locally. They weren’t doing it before and they don’t want to offer it now.
Part of the reason why this was (and is) a terrible idea is how these companies operate and the cost and stigma of a false negative.
Companies don't want to employ people. People are annoying. They make annoying demands like wanting time off and having enough money to not be homeless or starving. AI should be a tool that enhances the productivity of a worker rather than replacing them.
Fully automated "safety" systems always get weaponized. This is really apparent on Tiktok where reporting users you don't like is clearly brigaded becasue a certain number of reports in a given period triggers automatic takedowns and bans regardless of assurances there is human review (there isn't). It's so incredibly obvious when you see a duet with a threatening video gets taken down while the original video doesn't (with reports showing "No violation").
Additionally, companies like to just ban your account with absolutely no explanation, accountability, right to review or right to appeal. Again, all those things would require employing people.
False positives can be incredibly damaging. Not only could this result in your account being banned (possibly with the loss of all your photos on something like iCloud/iPhotos) but it may get you in trouble with law enforcement.
Don't believe me? Hertz falsely reported their cars as being stolen [1], which created massive problems for those affected. In a better world, Hertz executives would be in prison for making false police reports (which, for you and me, is a crime) but that will never happen to executives.
It still requires human review to identify offending content. Mass shootings have been live streamed. No automatic system is going to be able to accurately differentiate between this and, say, a movie scene. I guarantee you any automated system will have similar problems differentiating between actual CSAM and, say, a child in the bath or at the beach.
These companies don't want to solve these problems. They simply want legal and PR cover for appearing to solve them, consequences be damned.
False positives would constitute a huge invasion of privacy. Even actual positives would be, a mom taking a private picture of her naked baby, how can you report that. They did well dropping this insane plan. The slippery slope argument is also a solid one.
NYT article about exactly this situation [0]. Despite the general technical competency of the HN readership, I imagine there would be a lot of people who would find themselves completely fucked if this situation happened to them.
The tl;dr is that despite this man ultimately having his name cleared by the police after having his entire Google account history (not just cloud) searched as well his logs from a warrant served to ISP, Google closed his account when the alleged CSAM was detected and never reinstated it. He lost his emails, cloud pictures, phone number (which losing access to prevented the police from contacting him via phone), and more all while going through a gross, massive invasion of his privacy because he was trying to do right for his child during a time when face-to-face doctor appointments were difficult to come by.
This should be a particularly salient reminder to people to self-host at the very least the domain for their primary and professional e-mail.
The apple one was only matching against known images, not trying to detect new ones.
The Google one actually does try to detect new ones, and there are reported instances of Google sending the police after normal parents for photos they took for the doctor.
I feel this neatly captures the overarching corporate philosophies and attitudes of Apple and Google in a single example.
Pick your favourite other example of when Apple and Google have faced roughly the same problem as each other, and hold up their respective solutions next to those in the example of CSAM scanning above. I bet they'll look similar.
And it would only notify someone for human review if a certain threshold was reached; just having one or two violating images would not have tripped the system.
I haven't forgotten about the guy who sent photos of his child to his doctor and was investigated for child pornography. With these systems, in my humble opinion, you are just one innocent photo at the beach away from having your life turned upside down.
And Google to this day refuses to admit the mistake. They've even gone as far as to insinuate that he still is a pedo despite a police investigation clearing him.
Pretty ridiculous idea. Bad actors simply won't use their platform if this was in place. It would only be scanning private data from all the people who aren't committing crimes.
You'd be surprised. Lots of offenders are very low sophistication. If you read news articles about how a particular offender was caught with illegal material, so so often it's because they uploaded it to a cloud provider. It's not a one-sided tradeoff here.
What percentage of offenders victimize children and never record it in any way? If that's the overwhelming majority of abuse cases, what are we even doing here?
Worse. Only a tiny fraction of child abuse material cases actually get investigated due to lack of resources… this debate about scanning is an insane distraction.
I think they likely also considered the lawsuit exposure. If just 0.0001% of users sued over false positives, Apple would be in serious trouble.
And there's another dynamic where telling your customers you're going to scan their content for child porn is the same as saying you suspect your customers of having child porn. And your average non-criminal customer's reaction to that is not positive for multiple reasons.
Section 230 removes liability for restricting good faith attempts to combat CSAM.
> (2) Civil liability
> No provider or user of an interactive computer service shall be held liable on account of—
> (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected
I don't see any reference to child porn there. Who decides what's obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable? Especially otherwise objectionable?
And I can assure you that every single judge in the USA, and almost every single member of a jury would decide that CSAM is obscene.
That's how the law works. We have tons of laws that use general words like this, and trying to be "clever" usually just results in a lost court case or prison time for the person who thinks they found a loophole.
Quite a lot of things may or may not fall under that definition. That's how the law works.
> And why would I have to accept an US jury's opinion?
Well, because they and judges are the ones empowered by the government's monopoly on violence to judge the law and then have it be enforced, that's why.
Ignore the law at your own peril. But anyway, even if you did ignore the law, this doesn't have anything to do with you.
This is about companies' immunity from prosecution. So, even if you disagree with the law, those companies are still immune under Section 230 for good faith efforts to remove obscene content.
> Also you didn't define 'otherwise objectionable'.
It would be defined as whatever judges and juries define it as. I don't define it. Instead it is defined by those people.
> For example what I think
If you are not a judge or currently on a jury, then what you think is irrelevant.
The law is not computer code. Instead, it is interpreted by humans. And that is the case for basically all of law.
> Well, because they and judges are the ones empowered by the government monopoly on violence to judge the law and then have it be enforced, thats why.
... by the US government on US territory, I think. They have no business defining "otherwise objectionable" elsewhere.
> They have no business defining "otherwise objectionable" elsewhere.
The principles that I have described also apply to other countries as well.
So yes, it is absolutely the case that in other countries there are judges and juries that apply the law via the same kind of process.
But once again, no matter what some other country thinks, on this specific topic, it is about a company getting immunity from USA enforcement. So all other countries do not matter on this topic, because this is about US law.
> “Scanning every user’s privately stored iCloud data would create new threat vectors for data thieves to find and exploit," Neuenschwander wrote. "It would also inject the potential for a slippery slope of unintended consequences. Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types.”
Both of these arguments are absolutely, unambiguously, correct.
The other side of the coin is that criminals are using E2EE communication systems to share sexual abuse material in ways and at rates which they were not previously able to. This is, I argue, a bad thing. It is bad for the individuals who are re-victimised on every share. It is also bad for the fabric of society at large, in the sense that if we don't clearly take a stand against abhorrent behaviour then we are in some sense condoning it.
Does the tech industry have any alternate solutions that could functionally mitigate this abuse? Does the industry feel that it has any responsibility at all to do so? Or do we all just shout "yay, individual freedom wins again!" and forget about the actual problem that this (misguided) initiative was originally aimed at?
The extreme hysteria created by anything related to children often seems to be carte blanche to destroy privacy and implement backdoors in applications. Most child abuse comes from family members (which must be solved at the source), and the ultra extreme cases simply make awful law (doing away with E2EE or instituting mass surveillance to catch an incredibly small minority is absurd).
Much like other 'tough on crime' measures (of which destroying E2EE is one) the real problems need to be solved not at the point of consumption (drugs, guns, gangs, cartels) but at the root causes. Getting rid of E2EE just opens the avenue for the abuse of us by the government but in no way guarantees we'll meaningfully make children safer.
And no, we are not 'condoning' it when we declare E2EE an overall good thing. Real life is about tradeoffs not absolutes, and the tradeoff here is protection from the government for potentially billions vs. maybe arresting a few thousand more real criminals. This is a standard utilitarian tradeoff that 'condones' nothing.
Don't worry, we fill our homes and pockets with enough cameras and microphones from private companies that the government can require the monitoring of everyone in every family 24/7 to make sure we're finally safe!
> The extreme hysteria created by anything related to children often seems to be carte blanche to destroy privacy and implement backdoors in applications.
Yes, and you can tell because the proposed solutions attack privacy when alternative solutions exist.
For example, simply deleting CSAM material from devices locally without involving any other parties could have achieved the goals without privacy violations.
It makes me somewhat uncomfortable to argue for not involving other parties (like the police) in cases where real CSAM is found on someone’s device. Same as most people, I think that CSAM is morally reprehensible and really harmful to society. But just deleting it en-masse would have been an effective and privacy-respecting solution.
I think it’s important to see nuance even in things we don’t like to think about. Not everything that has a price tag has a price. We were told we needed to give up privacy, but that wasn’t necessary to take CSAM out of circulation.
Ah... But you see, now you're arguing in "bad faith" because you're trying to protect the kiddie diddlers!
...Understand I don't see it that way and applaud you for your way of thinking. I too have had to wrestle with the very uncomfortable "bed fellows" as it were that adhering to consistent application of principles inevitably results in.
The fact remains though that in a large swathe of the population, the ripping off and sacrifice of personal privacy is considered a small price to pay to inflict harm on that subpopulation. My issue comes in in that once you make the exception for one subpopulation, the slope is set.
Though even your "just delete it" has dystopian ramifications. Imagine that were implemented like Tianenmen Square? Recordings that are not blessed? You're still leaving in somebody's hands essentially executive control over what information can be allowed to exist, which is an unconscionably powerful lever to build.
This is one of those rare circumstances where "do nothing and clean up the mess" may be the most wise course of action.
> Most child abuse comes from family members (which must be solved at the source)
Yes. Since becoming an abuser is a process and not a moment, part of the solution must be making access to CSAM much harder.
> And no, we are not 'condoning' it when we declare E2EE an overall good thing.
Agreed. I'm sorry if I worded things in a way that caused you to see an implication which was not intended. To be clear: E2EE is a good thing. Championing E2EE is not equivalent to condoning CSAM.
What I did say is that in failing to try and provide any meaningful solutions to this unintended consequence of E2EE, the industry is effectively condoning the problem.
> This is a standard utilitarian tradeoff
If that's the best we can do, I'm very disappointed. That position says that to achieve privacy, I must tolerate CSAM. I want both privacy and for us not to tolerate CSAM. I don't know what the solution is, but that is what I wish the industry were aiming for. At the moment, the industry seems to be aiming for nothing but a shrug of the shoulders.
We have all this AI now, maybe give the people who want that stuff realistic enough victimless content and we can see if cases drop? It seems like we're approaching a time when the technology makes it possible to test the theory and get an answer anyway.
> That position says that to achieve privacy, I must tolerate CSAM. I want both privacy and for us not to tolerate CSAM.
Not true, you can have privacy and at the same time not tolerate child pornography, those are two perfectly compatible positions and arguably the current state. What you can not have - by definition - is privacy on the one hand and on the other hand no privacy in order to look for child pornography. You can still fight child pornography in any other way, but when it comes to privacy, you have to make a choice - give people their privacy or look through their stuff for illegal content, you can not have both. If you have enough evidence, a judge might even grant law enforcement the permission for privacy violating measures, it should just not be the default position that your privacy gets violated.
...Now that, is a well conceived viewpoint, but I still argue policy-wise, that no penalties can conscionably be assessed for failure to engage in said activity without spreading the taint of deputization, which de-facto unmakes your stance. As a government deputy, you don't have that privacy. If you are not compelled to act as a deputy of the State, I can accept you escalating to NCMEC, but we also have to accept there is no recourse for providers who turn a blind eye to the whole thing.
Failure to recognize this perpetuates the fundamental inconsistency.
> Yes. Since becoming an abuser is a process and not a moment, part of the solution must be making access to CSAM much harder.
This is a very big assumption. Sexual abuse of minors has existed long before the internet, and long before photography. The notion that less availability of CSAM leads to less real-world abuse is not at all clear.
> If that's the best we can do, I'm very disappointed. That position says that to achieve privacy, I must tolerate CSAM. I want both privacy and for us not to tolerate CSAM. I don't know what the solution is, but that is what I wish the industry were aiming for. At the moment, the industry seems to be aiming for nothing but a shrug of the shoulders.
As other commenters have pointed out, the solution is to prevent children from being abused in the first place. Have robust systems in place to address abuse, and give kids effective education and somewhere to speak out if it happens to them or someone they know.
> Since becoming an abuser is a process and not a moment, part of the solution must be making access to CSAM much harder.
In my opinion CSAM is a symptom, not a cause.
It's difficult to "stumble across" that kind of material unless you're already actively looking for it, which means some amount of "damage" is already done.
I also highly doubt that someone with no proclivities in that direction would 'turn' as a result of stumbling across CSAM. I'd guess they'd go the other way and be increasingly horrified by it.
> It's difficult to "stumble across" that kind of material unless you're already actively looking for it
It's entirely likely that borderline cases (15-17 years old) are seen by millions of people without them realizing it. Pornhub is a popular "mainstream" porn website that has issues with CSAM and routinely removes it when found. It's entirely possible for "normal" consumers of pornographic material to stumble into CSAM in places like that unknowingly. When you're talking about obviously prepubescent children, I'm in full agreement that it's almost always restricted to folks who seek it out explicitly.
>>That position says that to achieve privacy, I must tolerate CSAM. I want both privacy and for us not to tolerate CSAM.
I want to have more money than Elon Musk..... sometimes life is not fair and we can not always get what we want...
Any "backdoor" or "frontdoor" in encryption is a total failure of encryption. That is an immutable truth, more fixed in reality than the speed of light is in physics.
This is the part that's really infuriating to me that I've seen in several comments - this implication that the onus is on programmers or people who work in technology to somehow figure out a way to do this impossible thing, as if nobody has tried. And then they have the gall to say something like "I want both privacy and for us not to tolerate CSAM." without proposing any kind of theoretical solution.
If you can't even propose a hypothetical science fiction-esque way of achieving this, much less one that might actually be implementable now or in the near future (like, 5-10 years), you shouldn't get to have that opinion.
There is no parallel to be drawn between better encryption and worse outcomes for kids. Should we also outlaw high-performance cars because these sometimes serve as effective getaway vehicles for criminals?
CSAM producers and consumers should be found and punished via old-fashioned methods. How was this done in the past? Did we just never catch any human traffickers / rapists? No, we had detectives who went around detecting and presumably kicking down doors.
To outlaw large sections of mathematics because of this is absurd. And from the amount of power it would give big governments / big businesses, the fabric of society doesn't stand a chance.
> CSAM producers and consumers should be found and punished via old-fashioned methods. How was this done in the past?
The "old-fashioned methods" that they used in the past included intercepting communications of people that were suspected of crimes, such as by getting a warrant allowing them to force the person's phone company to record and turn over the person's calls, or by getting a warrant to intercept and inspect the contents of the person's mail at the post office.
> To outlaw large sections of mathematics because of this is absurd
No one has or is proposing outlawing large sections of mathematics, or even small sections of mathematics. The laws are outlawing some applications that make use of mathematics.
Calling that outlawing mathematics is as absurd as saying that building codes that won't let me use asbestos insulation in new construction are banning sections of thermodynamics. Or saying that laws that restrict how high I can fly a drone are banning large sections of aerodynamics.
>The laws are outlawing some applications that make use of mathematics.
Stop equivocating. You're banning the mathematics. The mechanism is literally the mechanical implementation of the mathematics.
>Calling that outlawing mathematics is as absurd as saying that building codes that won't let me use asbestos insulation in new construction are banning sections of thermodynamics.
...Except that's not even an analogous comparison? Asbestos is forbidden not because it's too good an insulator/foiler of thermodynamics, but because of its danger to everyone's health.
Trying to ban applications that use encryption is exactly banning asbestos because it's too good an insulator, and you're interested in seeing whatever is wrapped in it burn.
> How was this done in the past? Did we just never catch any human traffickers / rapists?
Recently invented encrypted chat rooms allow people to coordinate and transfer CSAM without any government official being able to infiltrate it. And just being able to freely discuss has been shown to make the problem worse as it facilitates knowledge transfer.
This is all completely different to in the past where this would have been done in person. So the argument that we should just do what we did in the past makes no sense. As technology advances we need to develop new techniques in order to keep up.
Absolutely true. If there are new ways to commit crimes, there must be new ways to fight crimes. Child abuse has been accelerated greatly by technology and we are fighting it with sticks and stones because we don't want to accept that the terribleness of these crimes is worth giving up some of our privacy to stop.
> Child abuse has been accelerated greatly by technology
Any source for that statement? My understanding is that child abuse - while still existing - is at an all-time low in modern western societies. Children working in brothels used to be common, children getting abused by the church used to be a well-known fact, etc.
Overall, abusing children is harder than ever, even if it has not completely disappeared.
Technology - which allows efficient transfer of information, testimonies etc - is instrumental to that evolution.
I think it's a pretty hardline opinion to state law enforcement should be confined to "old-fashioned" methods. Tech is changing the world. Let's not let it be a de-facto lawless world.
Yea, LE/IC clearly have gone too far in many modern tactics.
Yea, it's possible to build a surveillance/police state much more efficiently than ever before.
Yea, we should be vigilant against authoritarianism.
Plenty of people who own performance cars take them to racetracks where driving above 100mph is entirely legitimate. So doing this would not be without impact on legitimate use.
Not to distract from the topic, but vehicles that cost orders of magnitude more could be geofenced like $500 rental scooters are, so the engine computer would recognize the handful of high-speed tracks in the region.
How much hyperbole will we see before people realize that our entire society is built on nuanced positions? Preventing someone from driving irresponsibly is no more tyranny than saying they can’t fire a gun randomly in a neighborhood but have to go to a range.
> Should we also outlaw high-performance cars because these sometimes serve as effective getaway vehicles for criminals?
What if we change the last bit after the "because" to "these sometimes are used at unsafe speeds and, intentionally or not, kill people who are not in cars?"
Because, at least for me, the answer is an unambiguous yes.
I agree that privacy and security should be available to everyone. But we also shouldn't count on being able to find people who are doing vile things--to children or adults--because the person messed up their opsec. I think Apple is correct here but as an industry we have to be putting our brains to thinking about this. "To outlaw large sections of mathematics" is hyperbole because we use mathematics to do a lot of things, some useful and some not.
>But we also shouldn't count on being able to find people ... because the person messed up their opsec.
How is this different from how police find anybody else who commits a crime? Like if they're trying to solve a murder, they're looking for DNA, clues, etc... They're literally looking for where the person who committed the crime "messed up their opsec" to use your wording.
Governments and law enforcement agencies have access to more information than they've ever had before. They have more cameras, location data, tons of data compiled by data brokers. But it's not enough - of course they have to have this too.
On top of that, there's a long history now of governments buying zero day vulnerabilities or even technology from firms like NSO and guess what? It's not being used to catch pedophiles (cue shocked Pikachu face) but it is being used to target political dissidents.
This is so frustrating, because it feels like a siege on a city. Collectively, people have to fight against bad legislation in various countries constantly. But the other side only has to "win" once.
I'd say that "we" as an industry should be putting our brains to figuring out ways to make it even harder for these people to legislate encryption out of existence, rather than trying to find ways to appease geriatric lawmakers.
I think it just takes some outside-the-box thinking. Compromising E2EE is a much easier solution, but there has to be a harder, yet better one. We can't just sidestep important rights for convenience's sake; the right to privacy is being eroded in so many ways that it must be actively safeguarded. If it needs better detective work, so be it.
semi-hyperbole : keep encryption but make it more expensive (maybe by law enforcement having a larger IT budget), and/or fund/cheapen AI CSAM to outmode encrypted sharing of actual CSAM. win win?
Mass surveillance is never an appropriate solution, let's start with that.
I don't believe tech has an outsized responsibility to solve society's problems, and in fact it's generally better if we don't pretend more tech is the answer.
Advocating for more money and more prioritization for this area of law enforcement is still the way to go if it's a priority area. Policing seems to be drifting towards "mall cop" work, giving easy fines, enabled by lazy electronic surveillance casting a wide net. Let's put resources towards actual detective work.
I would prefer advocating more money for mental health as it would provide additional benefits in other areas of society down the line too? I can't imagine child porn consumption rising from a healthy mind.
When tech creates problems, should tech try to solve them, or should tech be limited?
If we're honest, we deceive ourselves by pretending we have not created new realities which are problematic at scale. We have. They are plentiful. And if people aren't willing to walk back tech to reduce the problems, and people aren't willing to accept technical solutions which are invasive, then what are we to do? Are we just to accept a new world with all these problems stemming from unintended consequences of tech?
“Tech”? What do you mean by “tech?” Do you expect Apple to remove the camera, storage, and networking capabilities of all their devices? That’s the “tech” that enables this.
I mean "tech" did a lot of messed up things - there is a reason why "what is your favorite big tech innovation: 1) illegal cab company 2) illegal hotel company 3) fake money for criminals 4) plagiarism machine" is a funny joke.
Enabling people to talk to each other without all their communication being wiretapped and archived forever is not one of those, I would say.
Those aren't really "tech innovations", though, aside from maybe the plagiarism machine.
Uber and AirBnB are just using very-widely-available technology—that some taxi services and hotels are also using!—and claiming that they're completely different when the main difference is that they're just ignoring the laws and regulations around their industries.
Cryptocurrencies are using a tech innovation as a front for what's 99.9999% a financial "innovation"...which is really just a Ponzi scheme and/or related scams in sheep's clothing.
LLMs are genuinely a tech innovation, but the primary problem they bring to the fore is really a conversation we've needed to have for a while about copying in the digital age. The signs have been there for some time that such a shift was coming; the only question was exactly when.
In none of these cases is technology actually doing anything "messed up". Companies that denote themselves as being "in the tech industry" do bad things all the time, but blaming the technology for the corporate (and otherwise human) malfeasance is very unhelpful. In particular, trying to limit technological progress, or ban widely useful technological innovations because a small minority of people use them for ill, is horrifically counterproductive.
Enforce the laws we have better, be more willing to turn the screws on people even if they have lots of money, and where necessary put new regulations on human and corporate behavior in place (eg, requiring informed consent to have works you created included in the training set of an LLM or similar model).
> When tech creates problems, should tech try to solve them, or should tech be limited?
You haven't explained the problem 'tech' has created, I'm confused as to what your point is?
CSAM isn't a problem caused by 'tech' unless you're going back to the invention of the camera, and I think that toothpaste is well out of the tube.
Additionally, and this is where a whole lot of arguments about this go wrong: the important part, the actual literal abuse, is human to human. There is no technology involved whatsoever.
Technological involvement may be an escalation of offense, but it's vanishingly secondary.
Mass surveillance is bad, but I think there are versions of it that are far less bad than others.
Apple's proposed solution would, in theory, have reported only images that were overwhelmingly likely to be already-known instances of CSAM (i.e. not pictures of your kids), and if nothing else is reported, can we say that they were really surveilled? In some very strict sense, yes, but in terms of outcomes, no.
How about we start with this version of surveillance: currently it is almost impossible, and frankly stupid, for kids to ask for help with abuse, because they'll end up in this sort of system (with no way out one might add)
So how about we implement mass-surveillance by giving victims a good reason to report crimes? Starting with not heavily punishing victims that do come forward. Make the foster care system actually able to raise kids reasonably.
Because, frankly, if we don't do it this way, what's the point? Why would we do anything about abuse if we don't fix this FIRST? Are we really going to catch sexual abuse, then put the kids into a state-administered system ... where they're sexually, and physically, and mentally, and financially abused?
WHY would you do that? Obviously that doesn't protect children, it only hides abuse, it protects perpetrators in trade for allowing society to pretend the problem is smaller than it is.
OK, and in theory, with new generative algorithms, do you think it's still OK? Suppose Apple implements this, and suppose someone finds a way to generate meme images that trigger Apple's algorithm (but a human can't see anything wrong with them). Suppose that someone wants to harm you, sends you a bunch of those memes, and you save them. What will happen? Or what happens if somebody uses a generative algorithm to create CSAM-like images using people's faces as a base while the rest of the image is generated - should that also trigger the scanner?
Also, you cannot guarantee that Apple/Google will scan only for known instances of CSAM. What if a government orders them to scan for other types of content under the hood, like documents or who knows what else, because the government wants to screw a particular person (say, a journalist who discovered shady stuff and whom the government wants to put in prison)? You don't have access to either the algorithms or the CSAM scan list they are using, so the system could be abused - and "could" usually means that at some point it will be.
These criticisms are reasonable criticisms of a system in general, but Apple's design featured ways to mitigate these issues.
I agree that the basic idea of scanning on device for CSAM has a lot of issues and should not be implemented. What I think was missing from the discourse was an actual look at what Apple were suggesting, in terms of technical specifics, and why that would be well designed to not suffer from these problems.
Mass surveillance isn't necessarily bad. It depends how it's implemented. The solution you describe is basically how it works with the intelligence agencies, in that only a miniscule fraction of the data collected in bulk ever reaches human eyes. The rest ends up being discarded after the retention period.
In terms of outcomes, almost nobody is actually surveilled, as the overall effect is the same as no data having been collected on them in the first place.
That said, I am personally more comfortable with my country's intelligence agencies hoovering up all my online activity than I am with the likes of Apple. The former is much more accountable than the latter.
If your ex-spouse was a contractor for a government agency with access to the mass surveillance machine, would you still feel comfortable "that only a miniscule fraction of the data collected in bulk ever reaches human eyes?"
What if you were a candidate for political office, pushing opinions that angered large swaths of the Intelligence Community?
The "minuscule fraction" of content is not surfaced by some random roll of the dice - it's the definitionally most interesting content, in the sense that some human went specifically looking for it in the heap of content caught in the dragnet. And it only needs to be interesting to at least one person with the clearance to search for it.
Maybe that means it's a video of a child being abused, and some morally upstanding federal officer is searching for it because anyone possessing it is ethically and legally culpable for the abuse of that child... Or maybe it's a PDF containing evidence of FBI kidnapping and torturing innocent civilians, and some morally corrupt federal officer is searching for it because anyone possessing it is a liability who needs to be silenced... Or maybe it's a JSON file containing the GPS locations of an individual for the past year, and some emotionally scorned federal contractor is searching for it because that individual is their ex-spouse who's moved onto a new partner.
Are you really prepared to put your faith in the trustworthiness and moral clarity of the population of 100k+ people with federal security clearances?
What leads you to believe that access to search these datasets is some sort of unregulated, unmonitored free-for-all for anyone allowed to wander into an intelligence agency building?
The scenarios you invented sound very far-fetched to me, if these did happen I very much doubt the perpetrator would be able to get away with it.
> At least a dozen U.S. National Security Agency employees have been caught using secret government surveillance tools to spy on the emails or phone calls of their current or former spouses and lovers in the past decade, according to the intelligence agency’s internal watchdog.
> Mass surveillance is never an appropriate solution, let's start with that.
I think we’re well beyond that point now. Whether or not encryption is allowed or not and however private you believe your virtual life to be, in the physical world surveillance is the norm. Your physical location, biometric information, and relationships can and will be monitored and recorded.
Every other aspect of life has been impacted by the computer's ability to process lots of information at speed. To say "no, policing must not use these tools but everyone else can" seems - well, quixotic, maybe?
If illegal data (CP) is being transferred on the net, wiretapping that traffic and bringing hits to the attention of a human seems like a proportional response.
(Yes, I know, it's not going to be 100% effective, encryption etc, but neither is actual detective work.)
If you have reasonable evidence, wiretapping a suspect to gain more evidence is fine. On the other hand wiretapping everyone in hope of finding some initial evidence, that is not okay at all.
But that's just a restatement of the OP's position ("Mass surveillance is never an appropriate solution"). You're not attempting to justify that position.
That's just an axiom for me, no justification needed. My life is my life and it is not the business of the state to watch every step I do as long as I am not affecting others in any relevant way. You convince me that I or society as a whole would be better off if I allowed the state to constantly keep an eye on me, then I might change my opinion and grant the state the permission to violate my privacy.
That's nonsense, every worldview must be grounded in some axioms, that does not make it a religion. I can break it down somewhat more for you. The state has no powers besides the ones granted by its citizens. I value my privacy highly and need very good reasons to grant the state permission to violate it. Catching criminals does not clear the bar, there are other ways to do this that do not violate my privacy.
If you want to get it amended, then by all means, make a case for why it should be amended.
In the meantime you wanted to know why mass surveillance isn’t an option. The answer “because it’s against the law” is a simple, good answer.
If you want to know why we decided as a nation to make that such a fundamental law that it is in our constitution, you could do worse than reading about what prompted the writing of the Bill of Rights.
> The answer “because it’s against the law” is a simple, good answer.
While often true, at all times there have also been morally wrong laws, so it would not be unreasonable to counter that being written into law in itself means nothing. So you should always be prepared to pull out and defend the reasoning behind a law, which you also hinted at in your following sentences.
Why is it not okay at all? That's what our intelligence agencies do with their bulk data collection capabilities, and they have an immense positive impact on society.
If you want to argue that they can scan people outside the country and not US citizens, and that that has a benefit, go ahead and make that argument. You might even convince me.
But it’s just begging the question to say there’s immense benefit to them searching US citizens’ communications without a reason.
That’s the whole question.
Show me why we should change the constitution which guarantees us freedom from this sort of government oppression.
I'm writing from a UK perspective so there's no underlying constitutional issue here like there might be in the US. Bulk data collection is restricted by specific laws and this mandates regular operational oversight by an independent body, to ensure that both the collection and each individual use of the data is necessary and proportionate.
Some of this will include data of British citizens, but the thing is, we have a significant home-grown terrorism problem and serious organised criminal gang activity, happening within the country. If intelligence analysts need to look at, for example, which phone number contacted which other phone number on a specific date in the recent past, there's no other way to do this other than bulk collect all phone call metadata from the various telecom operators, and store it ready for searching.
The vast majority of that data will never be seen by human eyes, only indexed and searched by automated systems. All my phone calls and internet activity will be in there somewhere, I'm sure, but I don't consider that in itself to be government oppression. Only if it's used for oppressive purposes, would it become oppressive.
> criminals are using E2EE communication systems to share sexual abuse material
Blah blah blah, the same old argument given by the "think of the children" people.
There are many ways to counter that old chestnut, but really, we only need to remember the most basic fundamental facts:
1) Encryption is mathematics
2) Criminals are criminals
Can you ban mathematics ? No.
Can you stop criminals being criminals ? No.
So, let's imagine you are able to successfully backdoor E2EE globally, on all devices and all platforms.
Sure, the "think of the children" people will rejoice and start singing "Hallelujah". And the governments will rub their hands with glee with all the new data they have access to.
But the criminals ? Do you honestly think they'll think "oh no, game over" ?
No of course not. They'll pay some cryptographer in need of some money to develop a new E2EE tool and carry on. Business as usual.
This mindset—that assigns people into immutable categories, "criminal" and "not criminal"—is actually one of the biggest things that needs to change.
We absolutely can stop criminals from being criminals. We just can't do so by pointing at them and saying "Stop! Bad!" We have to change the incentives, remove the reasons they became criminal in the first place (usually poverty), and make it easier, safer, and more acceptable to go from being a criminal to being a not-criminal again.
> But the criminals ? Do you honestly think they'll think "oh no, game over" ?
> No of course not. They'll pay some cryptographer in need of some money to develop a new E2EE tool and carry on. Business as usual.
I used to think this, I changed my mind: just as it's difficult to do security correctly even when it's a legal requirement, only the most competent criminal organisations will do this correctly.
Unfortunately, the other issue:
> And the governments will rub their hands with glee with all the new data they have access to.
Is 100% still the case, and almost impossible to get anyone to care about.
> only the most competent criminal organisations will do this correctly.
All it takes is for one criminal to write a one-page guide to using GPG and circulate it to the group ....
I know I mentioned paying a cryptographer earlier, but in reality downloading and using GPG is a crude and effective way of defeating an E2EE backdoor.
Are the GPG devs going to backdoor GPG to satisfy governments ? Probably not.
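To illustrate how low the bar is: a minimal sketch of what such a one-page guide boils down to, assuming GnuPG is installed and the recipient's public key has already been imported (the file name and address below are made up):

    import subprocess

    # Hypothetical sketch: layering GPG on top of any transport, backdoored or not.
    # Assumes GnuPG is installed and the recipient's public key is already imported.

    def encrypt_for(recipient: str, path: str) -> str:
        """Encrypt `path` to `path.gpg` for `recipient`; the ciphertext can then
        be sent over whatever channel you like."""
        out = path + ".gpg"
        subprocess.run(
            ["gpg", "--batch", "--yes", "--output", out,
             "--encrypt", "--recipient", recipient, path],
            check=True,
        )
        return out

    def decrypt(path_gpg: str, out: str) -> None:
        """Decrypt with the private key held only on the recipient's machine."""
        subprocess.run(
            ["gpg", "--batch", "--yes", "--output", out, "--decrypt", path_gpg],
            check=True,
        )

    # e.g. encrypt_for("alice@example.com", "notes.txt")  # both names hypothetical

The point being: a backdoor in the platform's E2EE does nothing against an extra layer applied before the data ever touches the platform.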
> If cybersecurity was that easy, we wouldn't have so many examples of businesses getting it wrong.
I can only partially agree with this point. Businesses getting cybersecurity wrong has almost no material or significant consequences. At best, they get a tiny slap on the wrist or are asked to answer some questions. Nobody in said businesses goes to jail for it or personally pays any fines. Compare that to criminals, who have a lot more to lose if they get caught - jail time, fines they have to pay, not having freedom for quite some time, life not being the same after they've served their sentence, and more. Businesses have it extremely easy compared to this. No wonder cybersecurity is so poor among all businesses, including very large ones (like Microsoft, as a recent example).
Fear of these is the reason for the (maliciously compliant) GDPR popups, and that despite the discussion about extra-territoriality and the relatively limited capacity-to-websites ratio.
The law and threats of punishment are clearly not hugely significant to anyone involved in the specific topic of this thread regardless; in the UK at least, it's the kind of thing where if someone is lynched for it, the vigilantes have to be extremely stupid (like attacking a paediatrician because they can't tell the difference, which happened) to not get public sympathy.
GPG is infamous for being difficult to use correctly and for an antiquated design (IIRC forward secrecy is impossible?). And assuming E2EE backdoor actually exists, the gov is likely to be able to get at your key.
>Are the GPG devs going to backdoor GPG to satisfy governments?
No, but most users are unlikely to verify their GPG build is the right build.
They can just use or switch to unpatched devices with some open-source E2EE without much effort. The result? Criminals will continue doing criminal stuff, while the rest of the planet is under a surveillance system that can be altered at a government's or company's will without your knowledge, to either target specific people (by the government) or target groups of people (for 'relevant' ads).
Zero-days are irrelevant imo; zero-days are for targeted attacks (assuming it's a government), and exploiting all zero-days on all devices is not that productive. CSAM scanning, on the other hand, can handle both untargeted and targeted surveillance: untargeted by spotting bad actors from a generic CSAM list, and targeted by adding to that list a target's face or specific material, to locate and monitor them.
That's the point: bad actors can circumvent the system if they feel threatened, but the system can be exploited by governments/companies once rolled out globally to target any user. So we get something that may not be that effective against bad actors but poses a great risk of being misused by a government or company in their own interests without users knowing. I've seen how an authoritarian government in my country targets people because they are inconvenient to the system, and this algorithm opens another potential vector of attack.
> I've seen how an authoritarian government in my country targets people because they are inconvenient to the system, and this algorithm opens another potential vector of attack.
As per my last sentence in my initial comment in this chain:
--
> And the governments will rub their hands with glee with all the new data they have access to.
Is 100% still the case, and almost impossible to get anyone to care about.
How many times are you okay with having your own children taken from you while the thought police make sure your latest family beach vacation wasn't actually trafficking?
How many times will actual abusers be allowed to go free while your own family is victimized by the authorities and what ratio do you find acceptable?
We are not even at a point where that question needs to be asked.
Federal and state police, some of the best funded, equipped and trained police in the world, are so inundated with cases that they are forced to limit their investigations to just toddlers and babies. What use is it to add more and more cases to a mountain of uninvestigated crimes? What's needed is more police clearing the existing caseload.
Same goes for mandatory reporting that teachers do in Australia. The resources available to investigate are only able to cope with those who are in "immediate danger of losing their lives".
That's an unbelievably depressingly low bar for society.
The real issue is that the FBI/NSA/CIA have abused our trust so completely that we have to build E2E communication. From assassinating people like Fred Hampton to national security letters, the government has completely lost the trust of tech.
That is a bigger problem and it will take a long time to fix. So long that I suspect anybody reading this will be long dead by then, but it's like the saying about planting trees.
> Both of these arguments are absolutely, unambiguously, correct.
Indeed, they are correct. And they were also brought up when Apple announced that they would introduce this Orwellian system. Now they act like they just realized this.
> It is bad for the individuals who are re-victimised on every share.
Absolutely.
> It is also bad for the fabric of society at large, in the sense that if we don't clearly take a stand against abhorrent behaviour then we are in some sense condoning it.
Much less clear. There's always been the argument: does it provide an outlet with no _new_ (emphasis on "new" so people don't skim that word) victims, or does it encourage people to act out their desires for real?
I don't have any answer to this question; but the answer matters. I do have a guess, which is "both at the same time in different people", because humans don't have one-size-fits-all responses.
Even beyond photographs, this was already a question with drawings; now we also have AI, creating new problems with both deepfakes of real people and ex-nihilo (victimless?) images.
> Does the tech industry have any alternate solutions that could functionally mitigate this abuse?
Yes.
We can build it into the display devices, or use a variation of Van Eck phreaking to the same effect.
We can modify WiFi to act as wall-penetrating radar with the capacity to infer pose, heart rate, and breathing of multiple people nearby even if they're next door, so that if they act out their desires beyond the screen, they'll be caught immediately.
We can put CCTV cameras everywhere, watch remotely what's on the screens, and also through a combination of eye tracking and (infrared or just noticing a change in geometry) who is aroused while looking at a forbidden subject and require such people to be on suppressants.
Note however that I have not said which acts or images: this is because the options are symmetrical under replacement for every other act and image, including (depending on the option) non-sexual ones.
There are places in the world where being gay has the death penalty. And if I remember my internet meme history right, whichever state Mr Hands was in, accidentally decriminalised his sex when the Federal courts decided states couldn't outlaw being gay and because that state had only one word in law for everything they deemed "unnatural".
>>and at rates which they were not previously able to.
your source for proof of that?
>> in the sense that if we don't clearly take a stand against abhorrent behaviour then we are in some sense condoning it.
No. this narrative of "silence is violence" and "no action is support" etc is 100% wrong.
You started out great, but I cannot get behind this type of thinking...
>>Does the tech industry have any alternate solutions that could functionally mitigate this abuse?
Why is it a "tech industry" problem?
>>Or do we all just shout "yay, individual freedom wins again!"
For me the answer is simple... Yes, individual freedom is more important than everything else. I will never support curbing individual freedom on the altar of any proclaimed government solution to a social safety problem. Largely because I know enough about history to understand that not only will they not solve that social safety problem, many in government are probably participating in the problem and have the power to exempt themselves, while abusing the very tools and powers we give them to fight X for completely unrelated purposes.
Very quickly any tool we would give them to fight CSAM would be used for drug enforcement, terrorism, etc. It would not be long before AI-based perceptual hashes flag some old lady's tomato plants as weed and an entire DEA paramilitary unit raids her home...
So these people who are coordinating and transferring CSAM are presumably bringing others into the fold to more effectively distribute things. Otherwise digital technology would not make distribution and coordination easier. So law enforcement just needs to infiltrate these groups exactly the same way that they did in the past. The only difference is they don't even need to meet these creeps face to face until they arrest them. They can do it safely behind a screen and they can also scale their own efforts far more effectively.
I don't like how these sentiments are written as if (C)SAM sharing were the only type of crime ever committed that is devastating, PTSD-inducing and life-crippling, while, say, murder is not. It could be murder, or major financial crimes, terrorism conspiracy, et cetera.
Yet the only justification on offer for mass surveillance of camera pictures - the important societal matter resting literally on "the other side of the coin" - is "some minority of people could be sharing naked photos of young members of society for highly unethical acts of viewing".
> The other side of the coin is that criminals are using E2EE communication systems to share sexual abuse material in ways and at rates which they were not previously able to.
You mean at rates greater than in the 1970s when it was legal in the US and there were a bunch of commercial publications creating and distributing it (prior to the "Protection of Children Against Sexual Exploitation Act" in 1977 (which targeted the production) and "Child Protection and Obscenity Enforcement Act" in 1988 (which targeted the distribution))? I find that doubtful.
> Does the industry feel that it has any responsibility at all to do so?
Why should it? Should every home builder and landlord install a camera in every bedroom to stream back to the government to be sure that no children are being molested? Why not? It's technologically possible today.
The fact that garbage people do garbage things to vulnerable people doesn't mean that the entire world must be commandeered to stop it.
E2E encryption has replaced a lot of in person meeting that also would have been secure against dragnet surveillance. We shouldn't lose our privacy just because it became technically possible to take it away, no different than the cameras in bedrooms mentioned above.
How are individuals re-victimized with every share? That makes no sense. Your LinkedIn profile photo could be on a billboard in rural China, what would it be to you?
There are several cases of victims being harassed and haunted by photos and video of them as a child being sexually abused. This is one of the reasons for Masha's law.
This makes no mention of direct harassment. How easy is it to connect random pictures of children with actual living adults? She's suing people she's never had any direct contact with. The government itself notifies her each time someone is arrested and in possession of an image of her, why, it does not say, but none of this sounds like a necessary healthy resolution to the underlying problem.
>Both of these arguments are absolutely, unambiguously, correct.
Oh, please. As if we couldn't just compare the hashes of the pictures people are storing against a CSAM database of hashes that gets regularly updated.
When this was proposed people would respond "But they could just mirror the pictures or cut a pixel off!"
Who cares? You got that picture from some place on the dark web, and eventually someone will stumble upon it and add it to the database. Unless you individually edit the pictures as you store them, you're never sure whether your hashes will retroactively start matching against the DB.
People who wank off to CSAM have a user behavior similar to any other porn user, they don't store 1 picture, they store dozens, and just adding that step makes them likely to trip up, or straight up just use another service altogether
"What if there's a collision?" I don't know, go one step further with hashing a specific part of the file and see if it still matches?
This whole thing felt like an overblown fearmongering campaign from "freedom ain't free" individualists. I've never seen anything wrong with content hosters using a simple hash against you like this.
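For what it's worth, a rough sketch of the kind of simple check I mean, assuming an exact cryptographic hash and a regularly updated list of known-bad digests (the paths and the list itself are made up; as others point out below, real deployments use perceptual rather than exact hashes):

    import hashlib
    from pathlib import Path

    # Hypothetical sketch: exact-match scanning of a photo library against a
    # regularly updated set of known-bad SHA-256 digests.

    def sha256_of(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def scan(library_dir: str, known_bad: set) -> list:
        # The weakness raised above: changing a single pixel changes the digest,
        # so an exact hash only catches unmodified copies.
        return [p for p in Path(library_dir).rglob("*.jpg")
                if sha256_of(p) in known_bad]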
The hashes cannot realistically have collisions anymore, because modern forensics hashes with both MD5 and SHA-512, and both hashes must match for use in any legal case. The odds of both colliding at once are small enough to flat-out say it's not going to happen.
But even if there were an MD5 hash collision back when MD5 was the only hash used, it still wouldn't matter, because upon viewing the image that matched, if it's not CSAM, it doesn't matter. Having said that, the chance of dozens of images matching hashes known to be associated with CSAM is so unlikely as to be unthinkable. Where there is smoke, there is fire.
And further, a hash alone is meaningless, since in court there must be a presentation of evidence. If the image that set off the csam alarm by hash collision is say, an automobile, there is no case to be had. So all this talk about hash issues is absolutely moot.
Source: I have worked as an expert witness and presented for cases involving csam (back when we called it Child Pornography, because the CSAM moniker hadn't come about yet), so the requirements are well known to me.
Having said all that, I am an EFF member, and I prefer cryptography to work, and spying on users to be illegal.
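For the curious, the dual-hash practice described above boils down to something like this sketch (the file path and reference entry are hypothetical):

    import hashlib

    # Sketch of the dual-hash practice: record both digests, so a false match
    # would require a simultaneous collision in MD5 and SHA-512.

    def forensic_digests(path: str) -> dict:
        md5, sha512 = hashlib.md5(), hashlib.sha512()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                md5.update(chunk)
                sha512.update(chunk)
        return {"md5": md5.hexdigest(), "sha512": sha512.hexdigest()}

    def matches(digests: dict, reference: dict) -> bool:
        # Both digests must agree with the reference entry before it counts as a hit.
        return (digests["md5"] == reference["md5"]
                and digests["sha512"] == reference["sha512"])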
Apple's system used a perceptual hash. Not cryptographic hashes. The hash databases were not auditable and were known to contain false positives. The threshold for viewing reported matches was not auditable and could have been changed at any time. I hope your expert testimony was more careful.
You just said yourself that hash collisions don't matter as "because upon viewing the image that matched, if it's not csam, it doesn't matter"
So when you say "a hash alone is meaningless, since in court there must be a presentation of evidence", you'd just present the image to court.
The hash is the trigger to call the authorities they handle the rest
And with a userbase the size of Apple's and people as pissy as Reddit, you want to completely exclude the possibility of collisions, or you'll get a repeat of the scenario we got in 2021.
I don't think it's an MD5 or SHA512 hash, since just changing one pixel would be enough to evade the scanner. My understanding is that it's heuristic similarity detection, which has a much wider footprint for collisions.
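To make the distinction concrete, here is a toy "average hash", one of the simplest perceptual hashes. It is emphatically not NeuralHash, just an illustration (assuming Pillow is installed) of why small edits leave the hash nearly unchanged, and why matching is done by Hamming distance rather than exact equality:

    from PIL import Image

    # Toy perceptual hash ("average hash") for illustration only; not NeuralHash.
    # Small edits (cropping a pixel, recompression) usually flip only a few bits.

    def average_hash(path: str, size: int = 8) -> int:
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for px in pixels:
            bits = (bits << 1) | (1 if px >= mean else 0)
        return bits

    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    # Two images are treated as "the same" if hamming(h1, h2) is below a small
    # threshold, which is also why collisions are far easier to construct than
    # for cryptographic hashes.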
It's an incredibly bad thing. It's also an incredibly poor excuse to justify backdooring phones.
Cops need to investigate the same way they always have, look for clues, go undercover, infiltrate, find where this stuff is actually being made, etc.
Scanning everyone's phones would make their jobs significantly easier, no doubt, but it simply isn't worth the cost to us as a society and there is simply no good counter-argument to that.
Let's take a step back here and bring in some facts.
"Apple" wasn't scanning your phone, neither was there a "backdoor".
If you would've had iCloud upload enabled (you'd be uploading all your photos to Apple's server, a place where they could scan ALL of your media anyway), the phone would've downloaded a set of hashes of KNOWN and HUMAN VERIFIED photos and videos of sexual abuse material. [1]
After THREE matches of known and checked CSAM, a check done 100% on-device with zero data moving anywhere, a "reduced-quality copy" would've been sent to a human for verification. If it was someone sending you hashbombs of intentional false matches or an innocuous pic that matched because some mathematical anomaly, the actual human would notice this instantly and no action would've been taken.
...but I still think I was the only HNer who actually read Apple's spec and just didn't go with Twitter hot-takes, so I'm fighting windmills over here.
Yes, there is always the risk that an authoritarian government could force Apple to insert checks for stuff other than CSAM to the downloaded database. But the exact same risk exists when you upload stuff to the cloud anyway and on an even bigger scale. (see point above about local checks not being enabled unless iCloud sync is enabled)
[1] It wasn't an SHA-1 hash where changing a single bit in the source would make the hash invalid, the people doing that were actually competent.
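For anyone who didn't read the spec, my reading of the published design is roughly the sketch below; the names, the threshold value and the data structures are placeholders, and the real protocol used private set intersection and encrypted "safety vouchers" rather than a plaintext counter:

    # Rough sketch of the flow described above; names, threshold and data
    # structures are placeholders, not Apple's actual implementation.

    MATCH_THRESHOLD = 3  # placeholder taken from the comment above; treat the real value as unspecified

    def scan_before_upload(photos, known_hashes, perceptual_hash, icloud_enabled):
        if not icloud_enabled:
            return None  # no iCloud Photos upload, no scanning at all

        matches = [p for p in photos if perceptual_hash(p) in known_hashes]

        if len(matches) < MATCH_THRESHOLD:
            return None  # below threshold: nothing is revealed to anyone

        # Above threshold: reduced-quality copies go to a human reviewer, who must
        # confirm they are actual known CSAM before any report is filed.
        return {"for_human_review": [reduced_quality(p) for p in matches]}

    def reduced_quality(photo):
        return photo  # placeholder for a low-resolution derivative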
You might've read the spec but you're missing the point and your approach is naive. For me it's about crossing the line. If you want to be snooping around my phone or my house, you need a warrant and you go through the official channels provided by my government. And you really think it's as simple as picking apples from oranges? I mean, come on. Yes, it's easy to implement a hash check to see if you have some known child porn in your cloud. But was that the use case for the advocates? No. Their use case was to try to find abuse, and that would need more thorough scanning. And once we're there, we have to make hard decisions on what is porn or abuse. If you think it's easy, then you need to think it through harder. Think of some picture from a sauna with a naked family in it - might be harmful? But normal here in Finland. What about a stick-figure cartoon that depicts some shady sexual positions with a smaller, child-like figure in it? Or what about grooming, asking for naked pics? How is this system going to prevent that? I mean, I get why people would want something like this. But it isn't the right solution imho.
First of all, your analogy of "sending officials to your home without a warrant" is again you completely misunderstanding the feature.
The local scanning would've been enabled only if you would've uploaded the photos to iCloud anyway.
To complete your crappy analogy: you had already agreed to send a company an inventory of everything contained in your apartment, and now you're getting into a hissy fit because they send a robot to visit your house with a list of known child pornography photos and check whether you have any?
There weren't any "hard decisions"; it wasn't a "this has a naked person in it" scanner. It wasn't an "abuse checker"; it was specifically created to find known CP.
It checked your photos against a NeuralHash of KNOWN child pornography. So the only chance your sauna photos would've been flagged if they were included in a set of CP photos distributed widely enough so that they're added to the CSAM database.
And nobody claimed the system would magically cure the world of grooming, where did you get that idea from? Although the current filters in iMessage might help if the abuser is stupid enough to use that for grooming.
Sorry for my previous post's hostile nature. Thank you for clearing up the actual proposed implementation details. It sounds like I came to pretty much the same conclusion as you did. I might have been quite unclear, but I was actually referring to (in my mind) the inevitable stage 2 of the implementation - if you give the little finger here ... there's no stopping. When I read about this, I was feeling sad, frustrated and a bit angry too. I like my iPhone; I switched from Android because those devices aged too quickly. Given the nature of the Apple ecosystem, I took this as crossing the line, and I would have to look hard for alternatives, since this scanning is not something I want to accept, and I would vote with my money elsewhere.
Yes, I know they scan all photos already. But to scan MY photos and hold me accountable, that is something I am not okay with. Yes, it's only iCloud, for now, but I don't see the point of using some third-party cloud; I used to, but on an Apple device it would be suboptimal for my daily use, etc.
My point was about the original case of protecting children. That is what the advocate groups want. That is what I want. But I draw the line at what I feel is my personal space: you don't scan my photos or files, and that is not the answer. Since there are these few behemoth FAANG companies ruling the ecosystems and impacting our lives, the battle for privacy needs to be fought right there. So for me that is the line, as I see the progression of that path as much worse. That is where I raise my hand and say, I'm out, what next? No smartphone? Maybe. :) Peace.
I thought they can't scan the media in iCloud, since the media is encrypted, no?
Also:
> If it was someone sending you hashbombs of intentional false matches or an innocuous pic that matched because some mathematical anomaly, the actual human would notice this instantly and no action would've been taken.
If someone is doing this, imagine the scale: thousands of pics that need to be human-evaluated, scaled to thousands of people. It'll just be plain ignored, meaning the system loses its purpose.
Also, you say it'll be enabled only if iCloud backup is enabled, but that's not guaranteed - this assumption can change later. And it doesn't make sense; for me, your two statements contradict each other:
- if Apple can scan your photos in iCloud AND, for this feature to be enabled, you must enable iCloud, why should they send hashes to you? They could scan the photos in iCloud anyway, since all your photos are backed up there. Unless... they can't scan photos in iCloud because those are encrypted, meaning the scanning can only be done locally before photos are sent, meaning enabling iCloud is not actually required and it could work without it.
Either way the CSAM scanning is imo pointless: on one hand because of privacy reasons (and we've seen that if a state is able to use a backdoor, it'll use it when needed), and on the other hand because of generative algorithms. Photos can be manipulated to trigger the CSAM check even if the human eye sees something else (aka a hashbomb), OR a sick/ill-intentioned person can generate a legit-looking CSAM-like photo just by using a target person's face (or a description of their face). In that case I don't even know if they are breaking the law or not, since the image is totally generated but looks totally illegal.
You do know that we currently have "thousands of people" watching for and tagging the most heinous shit people upload to social media, right? There are multiple sources on how outsourced workers from Africa and Asia are used to weed through all of the filth people upload to FB alone.
"Looking illegal" isn't enough to trigger a CSAM check in this case. It's perfectly normal to take pictures of your own kids without clothes in most of Europe for example. Nothing illegal.
That's why the checks would've been explicitly done against known and confirmed CSAM images. It wasn't some kind of check_if_penis() algorithm, or one of the shitty ones that trigger if there's too much (white) skin colour in an image.
Again, somebody can train an algorithm to create false positives, or real CSAM-like pictures close enough to trigger the check. Afaik the CSAM check is not about exact matches but rather close-enough matches based on a clever hashing algorithm, and in that case the algorithm can be induced into false positives (and, to my limited knowledge, hashing can have collisions) or even true positives that are fully generated (and afaik generated images are not illegal, but I guess it depends on the country).
Outsourcing the work for this (afaik) isn't possible since it's private data, not public, and only specific organisations can have full access to potential triggers.
But in the end it also doesn't matter, because there are other problems too, like how to make the final list easily checkable, so that we are sure governments/companies do not alter the list to target specific people/groups in their own interest. Or how to ensure the algorithm isn't modified under the hood to check not just images but also text/files.
People DID get into a huge fuss and started building false positive generators in a huge wave. Like they were proving something or pwning apple.
Nobody read the bit about an actual human verifying results before any law enforcement would be called in.
And outsourcing checking is a huge industry even today[0]. How do you think the huge social media companies keep CSAM, gore etc out of their systems? They're not using Pied Piper's hotdog or not algorithm, that's for sure.
> The video depicts a man being murdered. Someone is stabbing him, dozens of times, while he screams and begs for his life. Chloe’s job is to tell the room whether this post should be removed.
> And outsourcing checking is a huge industry even today[0]. How do you think the huge social media companies keep CSAM, gore etc out of their systems? They're not using Pied Piper's hotdog or not algorithm, that's for sure.
Is this done for private data or public? A public data check can be outsourced, no problem; I'm not sure about private.
> Nobody read the bit about an actual human verifying results before any law enforcement would be called in.
Again, if the system can be gamed, the human check is useless. Imagine someone generating 100k false positives, multiply that by 1k people, and imagine that among them there is one real bad person with 10 real CSAM images and another person with fully generated CSAM-like images (generated porn is a thing now, so generating CSAM-like images is possible). How do you think the government will human-evaluate 100 million pictures that trigger the system? Because if they can't, the system is useless for this use case, but still useful for potential government oppression or company ad targeting.
You can imagine all you want, but you're still wrong.
You can generate a billion false positives and it still won't do anything. You need to get them to people's iCloud photo libraries first. Each library needs to have multiple false positive images before triggering a human check. They intentionally didn't tell the exact number needed, but it's not two.
If you have a way to get fake CSAM material onto enough people's phones to overwhelm the human checkers, why would you waste it on something stupid like that? Just forcibly inserting advertisements into people's photo libraries would make you a billionaire. Not an ethical one, but still rich.
Oh, and just for reference. FB gets 350 million photos uploaded every day and they keep it moderated just fine. The amount of people you'd need colluding with you to overwhelm the system Apple had designed would be staggering.
And then you've achieved what? Make it possible to share child pornography because you broke the system? Yay, victory?
> "Apple" wasn't scanning your phone, neither was there a "backdoor".
Yes, you're right. But I've seen calls for phones to scan all content before it is uploaded or encrypted, and it often feels, at least in some countries, that this could still plausibly happen. I suppose that's what I had in mind when I wrote my comment.
> But the exact same risk exists when you upload stuff to the cloud anyway and on an even bigger scale.
There's a difference with them actively doing it and announcing they are doing it, vs the possibility they are doing it silently without consent.
ALL cloud providers are actively scanning all of your content right now, unless you specifically encrypt it yourself.
It's a cost of doing business pretty much: you need to give the authorities access to customers' data, or they can go "but think of the children, there might be child abuse material in there!", and that's really damn hard to argue against.
Thus: checking stuff on-device and keeping the cloud locked so that not even the hoster can access it nor can they create a backdoor.
Of course they need a warrant. Officer Johnson from Randomtown Alabama can't just call up Google and tell they want access to everything in GDrive :D
But the point is that if the data is fully encrypted, no warrant will help against pure mathematics. A cloud provider cannot give something they have no access to.
Right, but your post kind of made it seem like they didn't. And with the CSAM stuff, they didn't need anything to match hashes. So what happens when that gets expanded to other types of content that they still don't need a warrant for? Like torrent files?
> But the point is that if the data is fully encrypted, no warrant will help against pure mathematics. A cloud provider cannot give something they have no access to.
Agreed, but most don't use encryption, and should still have privacy rights for their data.
If they would have expanded it beyond CSAM, in that instance I would have joined the rest of the internet on the picket lines protesting against the system.
The point of the system would've been that EVERYONE'S content would've been encrypted in the cloud without them needing to do anything.
If CSAM was still done the way it "always has been", then "cops" relying on the methods they always had would be a valid answer. But since tech has enabled the distribution of CSAM at unprecedented scales, I think the requests by law enforcement to also make their job a bit easier have some merit...
It hasn't particularly changed production, which is where the actual abuse happens. There are still actual people abusing and filming actual children, and those can be found by the police by the same old-fashioned methods they've always had available. (Plus many new ones that don't violate everyone's civil liberties or destroy the security of every networked device.)
Well yes, let's absolutely increase taxes on the ultra-wealthy who don't pay nearly enough. I don't see the problem.
That aside, we could also, at least in the US, stop taking such a ridiculous stance against drugs and instead prioritize finding the makers of CSAM.
The people are there, they are just not being utilized effectively.
Creating backdoors that allow encryption schemes to be subverted is _fundamentally_ going to cause harm on the internet, and eventually fail the weakest users/those that need privacy/security the most.
A mechanism that can subvert cryptographic protocols can be used by any party, including oppressive regimes, private entities etc. that have the resources/will/knowledge to use the backdoor etc. Backdoors harm both the trust on the web (which can have an impact on economic transactions among many others) and the people that need security/privacy the most. In the meantime, criminals will wise up and move their operations elsewhere where no backdoors exist.
We basically end up with a broken internet, we are putting people in harm's way and the criminals we are targeting are probably updating their OPSEC/MO not to rely on E2EE.
I’m sorry that there are victims, and it’s a horrible crime but no matter what I don’t believe my privacy and my freedoms should be sacrificed for this.
A small percentage of people are involved in this crime and subjecting every single person to illegal searches and possibly getting wrongly identified or nation states using this to imprison its enemies is wrong.
Sometimes there is no solution that satisfies everyone, and sometimes the only good option is the least shitty one, which is still a shitty one. In this case, I believe that my and everyone else’s freedoms and privacy are worth it, and we should instead spend money, time, and effort trying to catch these criminals rather than scanning everyone’s phones, which ultimately won’t work.
> Both of these arguments are absolutely, unambiguously, correct.
I don't really buy any "slippery slope" arguments for this stuff. Apple already can push any conceivable software it wants to all of its phones, so the slope is already as slippery as it can possibly be.
It just doesn't make sense to say "Apple shouldn't implement this minimal version of photo-scanning now even though I don't think it's bad, because that's a slippery slope for them to implement some future version of scanning that I do think is bad." They already have the capability to push any software to their phones at any time! They could just skip directly to the version you think is bad!
Your comment confused me. Isn't Apple still scanning iPhones for CSAM, just not iCloud? I don't see any additional threat vectors in doing it locally.
> The other side of the coin is that criminals are using E2EE communication systems to share sexual abuse material in ways and at rates which they were not previously able to.
...regardless of whether Apple rolls out E2EE right? End to end encryption is available through a whole host of open-source tools, and should Apple deploy CSAM scanning the crooks will just migrate to a different chat tool.
>The other side of the coin is that criminals are using E2EE communication systems to share sexual abuse material in ways and at rates which they were not previously able to.
I think that companies might need to enable some kind of mechanism for offline investigation of devices, though. CSAM is a real problem, there are real predators out there, CSAM isn't the only risk, and law enforcement really does need a way to investigate devices. Previously, my proposal was the ability to force a device to scan the user's content for fingerprints of the suspected material, but only with physical access. Physical access forces law enforcement to actually run a real and official investigation, with reasons strong enough to justify spending the resources and risking the repercussions of doing it improperly.
However, the project of scanning all user content to police users was something that irked me, and I was relieved when Apple abandoned it.
Apple's explanation is good and I agree with them but IMHO the more important aspects are:
1) Being able to trust that your devices are on your side. That is, your device shouldn't be policing you and shouldn't be snitching on you. Right now you might think the authorities who would have controlled your device are on your side, but don't forget that those authorities can change. Today the devices may be catching CSAM; some day the authorities can start demanding they catch people opposing vaccines, and an election or a revolution later they can start catching people who want to have an abortion, or who have premarital sexual relations, or other non-kosher affairs.
2) Being free of the notion that you are always being watched. If your device can choose to reveal your private thoughts or business, be it by mistake or by design, you can no longer have thoughts that are unaligned with the official ones. This is like the idea of a god who is always watching you, but instead of a creator and angels you get C-level businessmen and employees who go through your stuff when the device triggers decryption of your data (by false positives or by true positives).
Anyway, policing everyone all the time must be an idea that is rejected by the free world, if the free world doesn't intend to be as free as the Democratic People's Republic of Korea is democratic.
It is also bad for the fabric of society at large, in the sense that if we don't clearly take a stand against abhorrent behaviour then we are in some sense condoning it. Does the tech industry have any alternate solutions that could functionally mitigate this abuse?
I'd suggest there's a lot the not-tech industry could do to stop condoning abhorrent behavior that stops short of installing scanners on billions of computing devices. It's become a bit of a trope at this point, but it's bizarre to see a guy who is/was spokesman for a "minor attracted persons" (i.e. pedos) advocacy group getting published negatively reviewing the controversial new sex trafficking movie ... in Bloomberg:
His 501(c)(3) non-profit also advocates in favor of pedophilic dolls and has a "No Children Harmed certification seal" program for pedophilic dolls/etc:
I'm not sure you can criminalize stuff like this, but it sets off my alarm bells when pedophile advocates are being published in mainstream news at the same time there's a moral panic around the need to scan everyone's hard drives. Is society actually trying to solve this problem, or is this more like renewing the Patriot Act to record every American's phone calls at the same time we're allied with al Qaeda offshoots in Syria? Interesting how terrorism has been the primary other argument for banning/backdooring all encryption.
----
As an aside I couldn't find ~anything about this group "Heat Initiative" Apple is responding to? Other than a TEDx talk by the founder a couple years ago which again seems very focused on "encrypted platforms" as the primary problem that needs solving: https://www.ted.com/talks/sarah_gardner_searching_for_a_chil...
Can't solve social problems with technology, as they say. And as mentioned elsewhere, most child abuse is perpetrated by family members and other close connections.
I'd very much like a source on your claim that "[...] criminals are using E2EE communication systems to share sexual abuse material in ways and at rates which they were not previously able to."
> Does the tech industry have any alternate solutions that could functionally mitigate this abuse? Does the industry feel that it has any responsibility at all to do so? Or do we all just shout "yay, individual freedom wins again!" and forget about the actual problem that this (misguided) initiative was originally aimed at?
The issue at large here is not the tech industry, but law enforcement agencies and the correctional system. Law enforcement itself has proven time and time again that the most effective way to apprehend large criminal networks in this area is undercover investigation.
So no, I don't think it is the tech industry's role to play the extended arm of some ill-conceived surveillance state. Because Apple is right: this is a slippery slope, and anyone who doesn't think malicious political actors will use this as a foot in the door to argue for more invasive surveillance measures, using this exact pre-filtering technology, is just a naive idiot, in my opinion.
We eventually have to ban accounts that won't stop breaking the rules. I don't want to ban you, so if you'd please review https://news.ycombinator.com/newsguidelines.html and stick to them, we'd appreciate it.
This morality may not be so unusual outside the tech "filter bubble". And wherever someone, like the OP, appears to be serious, my own personal morality says the absolute least they deserve is an equally serious answer.
I'm confused by what you mean by "morality" here. The only moral position that I am communicating is that child sexual abuse is a real thing that really happens, and it is bad for both the individual and for society. That's it. There's no subtext. There is explicitly no refutation of the arguments against client-side CSAM scanning which, I will say again, are unambiguously correct.
Is being against child sexual abuse really an unusual opinion in the tech industry? Have we really all gone so far along the Heinlein/Rand road that any mention of a real negative outcome gets immediately dismissed with the empathy-free thought-terminating-cliche "think of the children?"
I don't think anyone would disagree with you that child abuse exists - and if they did, that's an empirical question, and it resolves to you being correct.
The moral part is whether and how much society / the state / the tech industry should invest in combating it, and how the advantages and disadvantages of mandating government access to E2E encrypted communications or people's cloud storage weigh up.
For what it's worth, my own position is that the state should do more about it, and should in principle have more resources allocated to do so. I would support higher taxes in exchange for more police (and better trained police), who could do more about many kinds of crime including child abuse. I wouldn't mind more resources being allocated to policing specifically for fighting child abuse, too. But I could think of a lot of other places besides legislating access to people's messenger apps where such resources could be invested.
I'm still undecided on whether legally mandated backdoors in E2E encrypted storage and communications would be _effective_ in fighting child abuse, which is a question I would need more technical knowledge on before I could take an informed position (I know a fair bit about cryptography but less about how organised crime operates). If it turns out that this would be an ineffective measure (maybe criminals fall back on other means of communication such as TOR relays) then it would be hard to justify such a measure morally, especially as it could have a lot of disadvantages in other areas.
> Is being against child sexual abuse really an unusual opinion in the tech industry?
Nobody said that. You are being manipulative and trying to make it look like people who disagree with you are somehow pro-child abuse.
In saying "Does the tech industry have any alternate solutions that could functionally mitigate this abuse?" you are trying to paint a picture in which child abuse is somehow the "tech industry's" fault.
You are also trying to paint a complete Panopticon, in which every interpersonal communication is subjected to surveillance by the state, as somehow the default that end-to-end encrypted electronic communication is changing - while the truth is that personal communication was private for hundreds of years, because it was impossible for the state to listen in on everything.
This thread is tending towards flamewar so I'll try to dial back, but I do want to respond.
> You are being manipulative and trying to make it look like people who disagree with you are somehow pro-child abuse.
I am not doing that. You described my position as an "alien morality", to which another poster seemed to agree. I was responding to that by clarifying the actual moral point I was making. For the avoidance of doubt, I am not arguing that you are pro-child abuse.
> In saying "Does the tech industry have any alternate solutions that could functionally mitigate this abuse?" you are trying to paint a picture in which child abuse is somehow the "tech industry's" fault.
Yes, this is basically a correct understanding of my position. I am stating that the problem has been massively exacerbated by the adoption of E2EE by the tech industry, and that the tech industry therefore has a moral responsibility to deal with the unintended consequences of its own action.
> the truth is that personal communication was private for hundreds of years, because it was impossible for the state to listen in on everything.
> I am stating that the problem has been massively exacerbated by the adoption of E2EE by the tech industry
I understand that most information on how the state fights organised crime will be classified, but if there is any publicly available evidence for this claim that you can share, I would be interested in reading it (and I hope others on this thread would be too). I'm not saying I doubt you - you give the impression you know what you're talking about - please take this in the spirit it's intended; as one of my former supervisors once said, "In academia, asking for citations/references is an expression of interest, not of doubt".
It's completely reasonable to ask for evidence. Don't apologise for asking!
I'm not part of any state, and I don't have access to any special knowledge that you can't find on the internet.
I'm also not aware of any study that provides the very direct link you're asking for. Because of the nature of E2EE, I don't know if it would be possible to produce one. What I can do is link to evidence such as https://www.weprotect.org/global-threat-assessment-21/#repor..., which has (to me) some fairly compelling data showing that the magnitude of the problem is increasing.
Most child abuse is perpetrated by family members and close connections. While that may not be true in the future, I think there are better avenues of action than jumping to completely backdooring an extremely valuable tool that allows people to exercise their rights more effectively.
I honestly hadn't considered that when I asked someone to use Signal or Threema instead of Facebook Messenger that they would think I was a pedophile or drug addict. Food for thought.
For what it's worth, I don't think using Signal or Threema is enough to make you an E2EE enthusiast, and wanting to speak without your speech later used against you is maybe the purest reason for E2EE. I meant more so the type of people who are into Tor or I2P.
I agree that those statements are correct, however my reading of the proposed Apple implementation was that it struck a good balance between maximising the ability to discover CSAM, minimising the threat vectors, minimising false positives, and minimising the possibility that a malicious government could force Apple to implement bulk surveillance.
I'm all for privacy, but those who put it above all else are already likely not using Apple devices because of the lack of control. I feel like for Apple's target market the implementation was reasonable.
I think Apple backed down on it because a vocal minority of privacy zealots (for want of a better term) decided it wasn't the right set of trade-offs for them. Given Apple's aim to be a leader in privacy, they had to appease this group. I think that community provides a lot of value and oversight, and I broadly agree with their views, but in this case it feels like we lost a big win in the fight against CSAM in order to gain minor, theoretical benefits for user privacy.
But "the ability to discover CSAM" is by itself an excuse for mass surveillance, not a bona fide goal.
It is certainly possible, instead, to investigate, then find likely pedophiles, and then get a search warrant.
Discovering users sharing CSAM is a goal isn't it? That's why governments around the world require cloud storage providers to scan for it – because waiting until the police receive a report of someone is not really feasible. A proactive approach is necessary and mandated in many countries.
IMO diminishing people's privacy is a goal. Apple's CSAM system could be tricked in different ways, especially with generative algorithms. For example, a malicious person could send you an album with 100+ photos that look normal to the eye but are altered to trigger the CSAM detector; now the government needs to check 100+ photos per person per send and dismiss the false positives. Since this can be replicated, imagine the government having to review 100k such cases for just 1k people. That's insane: either they stop checking them, in which case the system becomes useless (ill-intentioned people can just send an album of 5k photos that all trigger the detector, with only a handful being real CSAM; multiplied by the number of such people, you can see how easy the system is to game), or they spend thousands of hours checking all these photos and each person. Another attack vector is generating legit-looking CSAM, because generative algorithms are too good now, and in that case (AFAIK) it's not a crime, since the image is fully generated (either using only a person's face as a starting point, or a description of their face tweaked enough to look realistic).
So what we get is:
- a system that can be gamed in different ways
- a system that's not proved to be effective before releasing
- a system that may potentially drive those ppl to other platforms with e2ee that don't have the csam scan(i assume since they know what e2ee is, they can find a platform without csam), so again obsolete
AND:
- a system that can't be verified by users (like is the csam list legit, can it trigger other things, is the implementation safe?)
- a system that can be altered by govt by altering the csam list to target specific ppl (idk snowden or some journalist that found something sketchy)
- a system that can be altered by apple/other company by altering csam list for ad targeting purposes
Idk, maybe I'm overreacting, but I've seen what a repressive government can do, and with such an instrument it's frightening what surveillance vectors can be opened. (A toy sketch of the fuzzy matching that makes this gameable follows below.)
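To make the "easy to game" point concrete, here is a minimal sketch of the kind of fuzzy matching such a system relies on. It uses a toy average hash (aHash) with Pillow, not Apple's actual NeuralHash; the blocklist value and the threshold are made up for illustration. The only point is that matching is approximate by design, which is exactly what lets an attacker craft normal-looking images that land inside the threshold.

    # Toy "average hash" (aHash) sketch, NOT Apple's NeuralHash. Requires Pillow.
    from PIL import Image

    def average_hash(path: str, size: int = 8) -> int:
        # Downscale to size x size grayscale; one bit per pixel, set if the
        # pixel is brighter than the mean. Similar-looking images get similar bits.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    def matches_blocklist(path: str, blocklist: set[int], threshold: int = 6) -> bool:
        # A "match" is close-enough, not exact. That fuzziness is what an
        # attacker exploits to craft benign-looking images that collide.
        h = average_hash(path)
        return any(hamming(h, entry) <= threshold for entry in blocklist)

    # Hypothetical usage -- to the client the blocklist is just opaque numbers,
    # so it cannot tell what any entry actually depicts:
    # blocklist = {0x3C7E7E3C18181818}
    # print(matches_blocklist("vacation.jpg", blocklist))

Real perceptual hashes are far more robust than aHash, but the structural property is the same: approximate matching against a list the device owner cannot inspect.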
> a new child safety group known as Heat Initiative
Doesn't even have a website or any kind of social media presence; it literally doesn't appear to exist apart from the reporting on Apple's response to them, which is entirely based on Apple sharing their response with media, not the group interacting with media.
Because when they couldn't win the war on porn, some on the Christian right decided to cloak their attack in "concerns" about "abuse". See project Excedus. Of course it has nothing to do with abuse and everything to do with their attempts to keep people from seeing pixels of other people having sex. Backpage was shut down despite being good at removing underage and trafficked women, which meant that sex workers had to find other places that didn't have nearly as good protections.
So yeah. When these things pop up I assume malicious intent.
But being critical of pornography and considering it to be abuse isn't a view limited to right-wing Christians. For example, here's what Noam Chomsky has to say about it:
> Pornography is humiliation and degradation of women. It's a disgraceful activity. I don't want to be associated with it. Just take a look at the pictures. I mean, women are degraded as vulgar sex objects. That's not what human beings are. I don't even see anything to discuss.
> Interviewer: But didn't performers choose to do the job and get paid?
> The fact that people agree to it and are paid, is about as convincing as the fact that we should be in favour of sweatshops in China, where women are locked into a factory and work fifteen hours a day, and then the factory burns down and they all die. Yeah, they were paid and they consented, but it doesn't make me in favour of it, so that argument we can't even talk about.
> As for the fact that it's some people's erotica, well you know that's their problem, doesn't mean I have to contribute to it. If they get enjoyment out of humiliation of women, they have a problem, but it's nothing I want to contribute to.
> Interviewer: How should we improve the production conditions of pornography?
> By eliminating degradation of women, that would improve it. Just like child abuse, you don't want to make it better child abuse, you want to stop child abuse.
> Suppose there's a starving child in the slums, and you say "well, I'll give you food if you'll let me abuse you." Suppose - well, there happen to be laws against child abuse, fortunately - but suppose someone were to give you an argument. Well, you know, after all a child's starving otherwise, so you're taking away their chance to get some food if you ban abuse. I mean, is that an argument?
> The answer to that is stop the conditions in which the child is starving, and the same is true here. Eliminate the conditions in which women can't get decent jobs, not permit abusive and destructive behaviour.
The main impetus behind "child safety" advocacy nowadays seem to be by cells of extremist right-wing Christian / QAnon types who believe in conspiracy theories like Pizzagate and the "gay groomer" panic. It's a reasonable assumption to make about any such group mentioned in the media that doesn't have an established history at least prior to 2016.
It sounds like an entirely unreasonable assumption to me. Advocating for child safety is something that transcends political differences, and generally unifies people across the political spectrum.
I mean, there aren't many people who want paedophiles to be able to amass huge collections of child abuse imagery from other paedophiles online. And pretty much every parent wants their child to be kept safe from predators both online and offline.
I didn't claim otherwise. The fact remains that a specific subset of a specific political party has been using "advocating for child safety" as a pretext to accelerate fear of and harassment against the LGBT community and "the left" in general for years now, and they put a lot of effort into appearing legitimate.
And yes, because their politics are becoming normalized within American culture, it is necessary to be skeptical about references to any such group. Assuming good faith is a rule on HN but elsewhere, where bad faith is what gets visibility, it's naive.
Well, paedophiles hijacking leftist movements for their own ends is a known problem, it's happened before and it will happen again. One particularly infamous instance occurred in the UK back in the 1970s:
So if there are indeed some right-wing groups talking about this, maybe it's best not to brush off their claims without some scrutiny first. And I say this as someone who mostly agrees with the left on most things.
Anyway I don't think that any of this has much to do with Apple being asked to implement specific technical measures for detecting child abuse imagery.
> So if there are indeed some right-wing groups talking about this, maybe it's best not to brush off their claims without some scrutiny first. And I say this as someone who mostly agrees with the left on most things.
Figlet indeed.
> figlett 5 months ago [flagged] [dead] | parent | context | prev [–] | on: Florida courts could take 'emergency' custody of k...
> This is excellent news for children at risk of being abused by militant transgenders and the medical establishment who are enabling them. Thank you Florida for trying to put an end to this menace.
Exactly, this is one area where the political left, particularly in the US, are failing terribly on child safety.
I'm in the UK and we're doing better here though, the main left-wing party is backing away from the particular ideology that has enabled this. I was going to vote for them anyway as we desperately need our public services to be restored and welfare for those less fortunate in society to be improved, but I'm pleased they're moving towards a sensible, harm-reducing stance on this issue rather than assuming everything the gender activists say is reasonable.
Speculation: they did a trial on random accounts from all over the world and found so much illegal content that it would force them to do an enormous amount of policing at scale and lose troves of customers.
The vast majority (99%+) of iCloud Photos are not e2ee and are readable to Apple.
You can rest assured that they are scanning all of it serverside for illegal images presently.
The kerfuffle was around clientside scanning, which has been reported as dropped. I have thus far seen no statement from Apple themselves that they actually intended to stop the deployment of clientside scanning.
Serverside scanning has been possible (and likely) for a long time, which illuminates their "slippery slope" argument as farce (unless they intend to force migrate everyone to e2ee storage in the future).
e2ee for iCloud is currently opt-in, without prompts/nudging. Most power users don't even have it turned on, or aren't even aware of its existence. The setting is buried in submenus.
Approximately no one uses it.
Hopefully Apple will begin prompting users to migrate in future updates.
Then how would they respond to warrants asking for all user identities that upload image x? "Sorry, no"? I don't think so. If they are served a valid warrant that isn't overbroad and they have the data, they are legally compelled to provide it.
Whatever they said, it was probably worded to give you this impression, without actually saying that. Apple is extremely careful and goes to great pains to actively mislead and deceive whilst avoiding actual lies.
Could you please link to or quote these statements from Apple? I would bet any money they say something different than what you claim, a "not wittingly"-style hedge.
Apple clearly has very limited ongoing scanning because they report on the order of hundreds of instances of CSAM to NCMEC every year whereas other services with pervasive scanning report on the order of tens of millions of instances.
It’s ok, you are one of the vast majority of people commenting on this topic while reasoning from false premises.
> Then how would they respond to warrants asking for all user identities that upload image x?
Dragnet warrants like this aren’t even legally permissible in the United States, and if they were Apple would obviously reject them. They state that they only provide specific named user account information in response to warrants.
Nothing in this article from Apple says that they don't scan iCloud Photos, and many things strongly suggest that they do.
The headline says they don't scan iCloud Photos, but I don't see the statements from Apple saying that. The media often misreports on Apple's comments because Apple is expert at inducing the media to misreport in ways that are favorable to Apple.
The author is repeating a confirmation they received directly from Apple, as described in the first sentence. Ben is a long time journalist who's been on the beat for decades. When he says, "Apple has confirmed to me" it's a newsworthy report of what Apple has confirmed to him. The statement doesn't have to be attributed to an Apple spokesperson to confirm that, and Ben is not prone to "misreporting" like you claim without evidence.
Apple has full control over their customers' devices, so they can access all encryption keys and device-local files anyway. That e2ee setting seems pretty pointless to me...
You can enable device wipe after 10 wrong passcodes, and E2EE gives Apple pretty broad cover to deny government requests to your data. The appeal to that for me isn't US government (who have easy access to everything else about you), but other governments around the world with worse human rights records. It's terrifying that while traveling a policeman could make up something and get details about your home life they aren't entitled to.
Not really? It looks like the nudity detection features are all on device, aren't CSAM specific, and seem to be mostly geared towards blocking stuff like unsolicited dick pics.
The earlier design was a hybrid model that scanned for CSAM on device, then flagged files were reviewed on upload.
No, the terrible misfeature that this group wants is “government provides a bunch of opaque hashes that are ‘CSAM’, all images are compared with those hashes, and if the hashes match then the user details are given to police”
Note that by design the hashes cannot be audited (though in the legitimate case I don’t imagine doing so would be pleasant), so there’s nothing stopping a malicious party inserting hashes of anything they want - and then the news report will be “person x bought in for questioning after CSAM detector flagged them”.
That’s before countries just pass explicit laws saying that the filter must include LGBT content (in the US several states consider books with LGBT characters to be sexual content, so an LGBT teenager would be de facto CSAM); in the UK the IPA is used to catch people not picking up dog poop, so trusting them not to expand scope is laughable; in Iran a picture of a woman without a hijab would obviously be reportable; etc.
What Apple has done is add the ability to filter content (e.g. block dick pics) and, for child accounts, to place extra steps (incl. providing contact numbers, I think?) if a child attempts to send pics with nudity, etc.
It was passed to stop terrorism, because previously they found that having multiple people (friends and family etc) report that someone was planning a terrorist attack failed to stop a terrorist attack.
Hypothetically you have hashes for two people of gender X (let's be honest, based on the popularity of different types of porn, two men). This is not meaningfully different from an opaque hash of "CSAM".
But you're missing the point:
Step 1. generate some opaque hash of the "semantics" of an image
Step 2. compare those hashes to some list of hashes of "CSAM", which again fundamentally cannot be audited
Step 3. report any hits to law enforcement
Step 4. person X is being investigated due to reported violations of laws against child abuse.
Basically: how do you design a system in which the state provides "semantic" hashes of "CSAM" that cannot be trivially abused by the inclusion of non-CSAM as "CSAM", or by laws mandating the inclusion of things that are objectively not CSAM? Hypothetically: hashes that match Christian crosses, the Star of David, the Muslim star and/or crescent, etc. Or, in the US, DNC, RNC, pride, etc. flags. Recall that definitionally no one can audit the hashes that would trigger notifying law enforcement. (A minimal sketch of this flow follows below.)
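For what it's worth, here is roughly what steps 1-4 look like as code. This is a hypothetical sketch, not Apple's implementation: semantic_hash() stands in for a perceptual hash (a cryptographic digest is used here only to keep the example self-contained), and report_to_authorities() just prints. The thing to notice is that the scanning code is identical whether an entry in the opaque list came from abuse imagery or from a photo of a protest flag; the device has no way to tell.

    import hashlib
    from typing import Iterable

    def semantic_hash(path: str) -> bytes:
        # Stand-in for a perceptual/"semantic" hash (Step 1). A real system
        # would use something robust to resizing/re-encoding, not SHA-256.
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).digest()

    def report_to_authorities(user_id: str, path: str, h: bytes) -> None:
        # Stand-in for Steps 3-4: forward the hit for investigation.
        print(f"REPORT user={user_id} file={path} hash={h.hex()[:16]}...")

    def scan_library(paths: Iterable[str], opaque_hashes: set[bytes], user_id: str) -> None:
        for path in paths:
            h = semantic_hash(path)                      # Step 1
            if h in opaque_hashes:                       # Step 2: list cannot be audited
                report_to_authorities(user_id, path, h)  # Steps 3-4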
Except this system wouldn't have looked at "semantics". You can't simply match a hash of a cross or star or flag, you have to match a specific photograph. Which photograph do you use?
Yes - and there’s a huge difference between the two.
In a word, decentralization.
By detecting unsafe material on-device, while it is being created, they can prevent it from being shared. And because this happens on individual devices, Apple doesn’t need to know what’s on people’s iCloud. So they can offer end-to-end encryption, where even the data on their servers is encrypted. Only your devices can “see” it (it’s a black box for Apple's servers: gibberish without the correct decryption key).
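As a rough illustration of that last point (assuming the `cryptography` package's Fernet recipe purely as a stand-in for whatever Apple actually uses), client-side encryption before upload means the provider only ever stores ciphertext, so there is nothing meaningful it could hand over:

    from cryptography.fernet import Fernet

    def upload_to_cloud(blob: bytes) -> None:
        # The provider stores this blob as-is; without the key it is
        # indistinguishable from random bytes.
        print(f"server stored {len(blob)} opaque bytes")

    device_key = Fernet.generate_key()   # generated and kept only on the user's devices
    f = Fernet(device_key)

    photo = b"\x89PNG...raw photo bytes..."
    ciphertext = f.encrypt(photo)
    upload_to_cloud(ciphertext)          # the server never sees the plaintext

    # Only a device holding device_key can recover the photo:
    assert f.decrypt(ciphertext) == photo

In the real design the key management is of course far more involved (keys synced across a user's devices, recovery contacts, etc.), but the property described above is the same: no key on the server, no plaintext on the server.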
> Child sexual abuse material is abhorrent and we are committed to breaking the chain of coercion and influence that makes children susceptible to it.
It is amazing that so much counter-cultural spirit remains in Apple. They are probably going to ban likes and other vanity features in all iOS applications, prohibit access to popular media, put “pop stars” into rehabs, and teach their users to disobey (the hardest of all tasks).
A lot of people try really hard not to see that “unusual” abuse of children is the same as “usual” abuse of everyone. Conveniently, the need for a distinction creates “maniacs” who are totally, totally different from “normal people”, and cranks up the sensation level. The discussion of “external” evil can then continue ad infinitum without dealing with the status quo of “peaceful, normal life”.
> "It would also inject the potential for a slippery slope of unintended consequences. Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types."
Yes, and it was patently obvious from the onset. Why did it take a massive public backlash to actually reason about this? Can we get a promise that future initiatives will be evaluated a bit more critically before crap like this bubbles to the top again? Come on you DO hire bright people, what's your actual problem here?