US Government: We suspect the person in this photo of committing a crime. Here is your subpoena, Apple. You are directed to scan all iPhone and iCloud storage for any pictures matching this NeuralHash and report to us where you find them.
Chinese Government: Here is the NeuralHash for Tienanmen square. Delete all photos you find matching this or we will bar you from China.
Apple has at this point already admitted this is within its capability. So regardless of what they do now, the battle is already lost. Glad I don't use iThings.
Yes so far. Once the mechanism is in place it will be rolled out. I'm in the UK and we have a fairly nasty set of state legislation on censorship already and an increasingly authoritarian government so this is a big worry.
Apple will try again with something that's wrapped up to be more palatable to consumers. A change of heart in the next 90 days does not mean that Apple is a good actor.
Well it’s now common knowledge that they can and will do this. So oppressive governments will almost certainly mandate that it’s required to do business in those countries. Thus it’s a no-win game for the end user, unless you choose not to play.
Yes, that. I'd rather take them back to the Apple store and have a large fire out the front while holding up a doomsaying sign about their laughable privacy stance, but that'd detract from the point.
You must be trolling... provoking... joking... or you are Tim Cook :-) Are personal ethics and values disconnected from daily business, to be changed based on the place of business, or must Apple do it because they are just complying with local laws in China?
Because IBM in 1939 was also just complying with the local laws.
When Apple started working in China, the prevailing western belief was that economic liberalization would cause political liberalization in China, so even though China was still relatively repressive, doing business there would help move things in a better direction.
At the time, this was a very reasonable and widespread belief, which turned out to be wrong.
Betting on other people and turning out to be wrong doesn’t make you a hypocrite. It makes you naive.
Naive is not something I would associate with Apple, but I guess they have seen it now. I guess they must be ready to pull out at any time... Or is their CEO too busy lecturing others on the righteous path in other jurisdictions?
"Apple's Good Intentions Often Stop at China's Borders"
> Naive is not something I would associate with Apple, but I guess they have seen it now.
It wasn’t just Apple who was naïve, it was the entirety of US and European foreign policy too. Do you want to claim that the west didn’t expect political liberalization in China?
That article doesn’t change anything. Apple didn’t go into China thinking they were helping to strengthen an authoritarian regime. They went in thinking they were helping to liberalize it.
Depends on the company. If it’s a Chinese company like TikTok, data is stored in China and therefore property of the state. Things are about to get worse as they crack down on private companies in China.
And it’s not just Americans who worry about this. When word got out that Line was storing user data on servers in China, it blew up in the news in Japan and Taiwan and an investigation was launched immediately. Line ended up moving Japanese user data to South Korea. Chinese aggression and Xi’s obsession with power are not just something Americans spout off about in Reddit and YouTube comments. They’re a legit threat to Taiwan, Japan, Australia and India.
ByteDance's Douyin product has Chinese employees based in China. TikTok employees are also ByteDance employees, which means a ByteDance employee who passes through their "strict controls" can access whatever TikTok data they want. Even if that's only a dozen Chinese nationals who can get access, that's a dozen people required by Chinese law to help the state security apparatus.
I don't see any reason to give them the benefit of the doubt considering they already moderate content the Chinese government doesn't like [0] as a matter of company policy.
That is what I meant. The point is that everyone talking about how this policy introduces a backdoor which can be exploited by totalitarian states is wrong.
There's no need. Apple willingly complies with local laws.
What will Apple do when Iran or Qatar (or any of the other 71 countries where homosexuality is illegal) upload photos to the CSAM database that they consider illegal acts?
In some of those countries same-sex sex is punishable by death.
Nothing is stopping these countries from doing this already. China, Saudi Arabia, and Iran have already considered forcing tech companies to track user activity. At the end of the day these companies are subject to the laws of the countries they do business in, and this has already screwed over HK, Uighur, Iranian, and Egyptian citizens. Laws forcing data to be stored in given regions alongside encryption keys have already made it dangerous to be homosexual in the countries you’ve mentioned (except Iran, where most businesses cannot operate).
> You are directed to scan all iPhone and iCloud storage for any pictures matching this NeuralHash and report to us where you find them.
I think the US government (and foreign governments) would actually send Apple tens of thousands of NeuralHashes a week. Why would they limit themselves? False positives are "free" to them.
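The scale worry can be made concrete with a back-of-envelope calculation. All numbers here are hypothetical illustrations (Apple's published "one in a trillion" figure is a per-account-per-year claim, not a per-comparison rate):

```python
# Back-of-envelope expected false positives from bulk hash matching.
# All figures below are hypothetical, not Apple's published numbers.

def expected_false_matches(num_images: int, num_hashes: int, fp_rate: float) -> float:
    """Expected false matches if each (image, hash) comparison
    independently false-fires with probability fp_rate."""
    return num_images * num_hashes * fp_rate

# 1 billion photos checked against 10,000 submitted hashes, at an
# assumed one-in-a-trillion per-comparison false-positive rate:
print(expected_false_matches(1_000_000_000, 10_000, 1e-12))  # roughly 10
```

Even a vanishingly small per-comparison rate yields a steady trickle of false reports once a government can submit hashes in bulk, which is the point about false positives being "free" to the requester.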
Hasn’t Google been parsing Google Photos images, email content, and pretty much everything else since forever? Do you just stay off of smartphones and cloud platforms entirely?
Before it was "only content uploaded to iCloud is scanned" and now it's "photos are scanned on-device". That's frog boiling that tomorrow easily becomes "arbitrary files are scanned anywhere on the device".
Only photos being uploaded to iCloud are scanned on-device for CSAM imagery. This is the alternative to giving the cloud storage service broad decryption ability so it can scan server-side (as Microsoft, Google, Twitter, and Facebook do).
They could have just failed the upload locally. I suspect there were a lot of arguments around this point: should they merely attempt to keep such content off their servers, or detect and report behavior which may be illegal and harmful?
I get this sentiment but the known-bad-image-scanning technology is not new. That’s not what Apple announced. Many tech services already do that, which already enables the slippery slope you’re illustrating here.
I’m not trying to minimize the danger of that slope. But as someone who is interested in the particulars of what Apple announced specifically, it is getting tiresome to wade through all the comments from people just discovering that PhotoDNA exists in general.
"It's just when you upload to iCloud or receive an iMessage," they say. BUT the software is on your device forever. What about next year? What about after an authoritarian government approaches them? By 2023 it might be searching non-uploaded photos.
You can always not use cloud tech like PhotoDNA, but you can't not use software BUILT IN to your device. Especially when it's not transparent.
It would probably kill the battery to do that constantly, as the processor would never be able to sleep. Although relying on energy efficiency not improving is not a good long-term strategy.
You can be selective about it. There is GPS to tell you where you are, microphone to tell you how many people are around, motion sensor, light sensor to sense if you’re in a pocket, etc.
It doesn’t have to be constant, it could even be on demand.
Personally, I am resigned to the fact that this kind of surveillance will happen. Technology cannot be slowed down or stopped artificially. There are just too many actors (not just Apple) to expect all of them to act in good faith. On top of that, the governments all over the world are salivating over this technology too. I don’t trust governments will curtail technology - and thus themselves - via policy. And even if one does, there is always another. This is almost nothing new in the UK, for example.
I’m not optimistic given the direction we have been heading for the last 15 years or so.
Nothing is stopping countries from demanding every tech organization do this anyway; it didn't just become a possibility now that Apple's running code on-device. Also, this code can and probably will be able to be activated/deactivated/removed remotely (for better or worse!)
Have they? The whitepaper made it sound like the encrypted safety voucher is created from the database of known CSAM on the device at the time the image is uploaded, and the only way for Apple to decrypt something is if it matched something in that database.
It did not sound like they could retroactively add additional hashes and decrypt something that was already uploaded. They could theoretically add something to the list of hashes and catch future uploads of it, but my understanding was that they cannot do this for stuff that has already been stored.
> ...the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices.
>... The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image.
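Stripped of the cryptography (the real system wraps the comparison in a private set intersection protocol and threshold secret sharing, so neither side learns about non-matches), the on-device matching step the whitepaper describes amounts to comparing a perceptual hash against a blocklist. A toy sketch with made-up hashes, not Apple's actual NeuralHash:

```python
# Toy illustration of on-device hash-blocklist matching.
# NOT Apple's NeuralHash or PSI protocol; all hashes here are made up.

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits between two fixed-width perceptual hashes."""
    return bin(a ^ b).count("1")

def matches_blocklist(image_hash: int, blocklist: set[int], max_dist: int = 0) -> bool:
    # Perceptual hashes of near-identical images land close in Hamming space,
    # so "match" can mean "within a small distance", not just exact equality.
    return any(hamming_distance(image_hash, h) <= max_dist for h in blocklist)

blocklist = {0b1011_0010, 0b1100_1111}  # stand-in for the NCMEC-derived DB
print(matches_blocklist(0b1011_0011, blocklist, max_dist=1))  # True (1 bit off)
print(matches_blocklist(0b0000_0000, blocklist))              # False
```

In the deployed design the match result itself is encrypted into the safety voucher, so neither the device's owner nor Apple (below a threshold of matches) can read individual results.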
> they cannot do this for stuff that has already been stored.
That's a very simple software update.
Every year iOS's automatic tagging supports more image categories (dogs, shoes, bridges, etc.). And if you add a friend's face to known faces, it'll go and search every other photo for that face.
It sounds absolutely trivial to re-scan when the hash DB gets updated.
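Mechanically, the rescan really is just a loop once on-device matching and a photo index exist. A hypothetical sketch (nothing here is Apple's actual code or announced behavior):

```python
# Hypothetical sketch of why re-scanning on a hash-DB update is trivial
# once on-device matching exists. toy_hash stands in for a real
# perceptual hash like NeuralHash; the library is a fake in-memory dict.

def toy_hash(photo: bytes) -> int:
    """Stand-in for a perceptual hash function."""
    return sum(photo) % 256

def rescan(photo_library: dict[str, bytes], updated_db: set[int]) -> list[str]:
    """Return paths of photos whose hash appears in the updated database."""
    return [path for path, data in photo_library.items()
            if toy_hash(data) in updated_db]

library = {"a.jpg": b"\x01\x02", "b.jpg": b"\x10\x10"}
print(rescan(library, {3}))  # ['a.jpg']
```

The hard parts of such a system are policy and cryptography, not the rescan itself, which is the commenter's point: once the matcher ships, re-running it against new hashes is a software update away.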
1. This is a serious attempt to build a privacy preserving solution to child pornography.
2. The complaints are all slippery slope arguments that governments will force Apple to abuse the mechanism. These are clearly real concerns and even Tim Cook admits that if you build a back door, bad people will use it.
However:
Child pornography and the related abuse is widely thought of as a massive problem, that is facilitated by encrypted communication and digital photography. People do care about this issue.
‘Think of the children’ is a great pretext for increasing surveillance, because it isn’t an irrational fear.
So: where are the proposals for a better solution?
I see here people who themselves are afraid of the real consequences of government/corporate surveillance, and whose fear prevents them from empathizing with the people who are afraid of the equally real consequences of organized child sexual exploitation.
‘My fear is more important than your fear’, is the root of ordinary political polarization.
What would be a hacker alternative would be to come up with a technical solution that solves for both fears at the same time.
This is what Apple has attempted, but the wisdom here is that they have failed.
Can anyone propose anything better, or are we stuck with just politics as usual?
Edit: added ‘widely thought of as’ to make it clear that I am referring to a widely held position, not that I am arguing for it.
So yes, there is a problem, but I would love to see some justification that the problem is actually as massive as people imply, or getting worse, or whether they have a political goal in mind.
And it feels to me like we talk about it a lot more; there is less taboo and fewer social protections for perpetrators, which must help fight this more efficiently than in the past.
> In the US, it looks like the number of child pornography cases is going down each year
Prosecuting fewer cases doesn't mean less abuse is occurring.
It doesn't even mean that the number of instances of abuse for which cases are prosecuted are going down, just that the number of offenders prosecuted is going down.
Agree. It may well be a bad proxy. But it does not mean the problem is getting worse either; one would need to back that up with some arguments or estimates. The number of pictures on the internet does not seem like a good one to me (the cat population has not grown that much).
I am thinking that people don't seem to know if the problem is really getting bigger. They just say it. If we don't know that, we cannot use growing child pornography as an argument for reducing civil liberties.
Yes, they said it in 2011, with just one data point and no trend. Nothing shows it has exploded since. So the FBI and society as a whole may have found somewhat effective ways to fight it without scanning phones.
>One winning move would be to take the problem seriously
What makes you think that the problem is not taken seriously? I am no specialist, but it looks like cases are investigated and people are going to jail. You say that it is not the case, so I am curious what leads you to this statement.
Yes, policies can be based on fears and opinion, but some numbers and rationality usually help make better decisions. I'd love to hear something more precise than fears and personal opinion, including the FBI of course.
> Yes, policies can be based on fears and opinion, but some numbers and rationality usually help make better decisions.
Sure, but you aren’t the decision maker. This is a political decision, and so will be based on the balance of fears and forces at play, not a rational analysis.
It doesn’t matter how right or rational you are. It only matters how many people think you understand what they care about.
If the hacker position is ‘people shouldn’t care about child pornography because the solutions are all too invasive’, so be it. I just don’t think that’s an effective position.
That is only partially true. If we both write, other people may read; then you and I possibly influence people. We are social beings, influenced by the people around us. So we're both a tiny part of the decision process. And as for political decisions: politicians keep listening to people through polls, phone calls received, and other channels. Even dictators can, when the numbers are high enough.
So it does matter how rational we are. Over time and on average, people are more often convinced by rational discourse than irrationality. Probably because rationality has a higher likelihood of being right. But yes, we'll go through periods where irrationality wins... Still, it's very hard to be on the opposite side of logic and facts for a long time.
>If the Hacker position is "people shouldn’t care about child pornography"
I have not read that here. If you believe that child pornography is a massive issue, I respect that. I just would have hoped you could better describe the actual size of the problem and its evolution. You could have influenced me more effectively.
> If you believe that child pornography is a massive issue, I respect that. I just would have hoped you could better describe the actual size of the problem and its evolution.
I don’t mind whether you are convinced. I’m not trying to convince you about the size of the problem.
My position isn’t about how large the threat is. I don’t have any information that you don’t have access to. My position is that if we care about privacy we have to accept that people other than ourselves think the problem is big enough to warrant these measures.
You have already lost this battle because enough people are convinced about the problem that tech companies are already doing what you don’t want them to do.
>if we care about privacy we have to accept that people other than ourselves think the problem is big enough to warrant these measures.
Not sure what you mean here by "accept". Accept the fact those people and opinions exist? Sure! Accept their opinion without challenging, without asking questions? No.
Accept this opinion is big and rational enough for the majority to follow and make laws? No.
>You have already lost this battle
What makes you think that? That's just your opinion. You know, even when battles are lost, wars can later be won. "Right to Repair", "Net Neutrality", "Global Warming", and here, "open source hardware": all those battles have been fluid. Telling people it is over, that it is too late, is a very common trick to try to influence/convince people to accept the current state. That certainly does not make it true.
I understand you may try to convince readers that it is over, because it may be your opinion. If that's the case, just be frank about it, and speak proudly for yourself of what you wish. Don't hide behind "politicians", "tech companies" and "other people".
> What makes you think that? That's just your opinion.
It’s not just my opinion that Apple has implemented a hash based mechanism to scan for child pornography that runs on people’s phones. People complaining about it have definitely lost the battle already. It is already here.
> I understand you may try to convince readers that it is over, because it may be your opinion.
That is not an accurate understanding of my argument.
My position is to agree with those who see this as a slippery slope of increasingly invasive surveillance technology, and to point out that simply arguing against it has been consistently failing over time.
I am also pointing out that one reason it’s failing is that even if the measures are invasive and we think that is bad, the problems they are intended to solve are real and widely perceived as justifying the measures.
What I advocate is that we accept that this is the environment, and if we don’t like Apple’s solution, we develop, or at least propose alternative ways of addressing the problem.
That way we would have a better alternative to argue in favor of rather than just complaining about the solution which Apple has produced and which is the only proposal on the table.
There is a child molestation problem everywhere in the world, including online. I have seen nothing explaining it is getting bigger / worse. I have read that most of the cases are family members, in the real world.
So when I hear Apple and governments explain "because of the children" that they want to monitor our phones more, in the context of growing assumed dictatorships, Pegasus, and the Snowden revelations, do you really think that solving the child pornography issue will restrain them, or slow them down? What will is open source hardware, political pressure, consumer pressure, and regulation, possibly monopoly break-ups. In the US, it starts with the people.
But doing better with child pornography won't change anything there; it just moves the discussion to some other topic. Distraction. That is my point all along. There is no data showing that all of a sudden child pornography has progressed leaps and bounds. So people suddenly concerned by it are most likely not truthful, and they have a very strong agenda. That's what we need to focus on, not their "look at the children" distraction.
> Are you falling into their trap knowingly or not?
This is a false dichotomy and a false assumption.
> There is a child molestation problem everywhere in the world, including online.
Agreed.
> I have seen nothing explaining it is getting bigger / worse. I have read that most of the cases are family members, in the real world.
Have you listened to Sam Harris, or heard the FBI? They have a very different view.
It could be that both are true: there is a child porn problem and governments are using it as an excuse.
The only thing you seem to be going on is a story you once heard, that may have been true at the time, but may not be now.
> So when I hear Apple and Government explain "because of the children" they want to monitor our phones more, in the context of growing assumed dictatorships, Pegasus, Snowden reveleation, do you really think that solving the child pornography issue will help refrain them, or slow them down?
That would be misleading, given that you are assuming child porn is not a growing problem.
Porn in general is growing hugely; why wouldn’t child porn also be growing?
Generally Apple has resisted overreach, but I agree that they are slowly moving in the wrong direction.
Apple is not the government.
> Open source hardware,
> political pressure, consumer pressure, and regulation, possibly monopoly break-ups. In the US, it starts with the people.
You contradict yourself here. You seem to think the government can’t be slowed and yet political pressure will work. Which is it?
> But doing better with child pornography won't change anything there,
I agree - it won’t eliminate the forces that want to weaken encryption etc.
But a more privacy respecting solution would still help.
> it juts moves the discussion to some other topic. Distraction. That is my point all along.
> There is no data that shows that all of a sudden child pronography has progressed leaps and bounds. So people suddenly concerned by that are
Isn’t there? The FBI claims it is growing.
> most likely not truthful,
Ok, we know you don’t trust the FBI.
But enough people do that we can’t ignore them. Even if the problem isn’t growing as Sam Harris claims it is, trying to persuade people that the problem doesn’t need to be solved seems like a good way to undermine the causes you support.
> and they have a very strong agenda. That's what we need to focus on, not their "look at the children"
As I say, I agree there are people trying to exploit ‘look at the children’ in support of their own agenda.
I just don’t think that means there isn’t a real problem with child porn. Denying that there is a problem seems equally agenda driven.
There is no end to the loss of privacy in the name of safety if your bogeyman is increasingly sophisticated actors.
The more sophisticated child molesters out there will find out about what Apple is doing and quickly avoid it. Isn’t that what the FBI is complaining about, that they have grown increasingly sophisticated?
The more sophisticated privacy adherents will also avoid Apple and resort to end-to-end encryption and offline tools.
What is the actual outcome? You won’t get more arrests of child molesters. Instead, you get a security apparatus that can be weaponized against the masses. Furthermore, you will have the FBI complaining that child molesters are increasingly out of their reach and demanding greater powers. They will then try to mandate child porn detectors built into every phone.
This creep has been occurring for years. Go read the Snowden disclosures.
First your cell phone companies worked with the government for mass harvesting of data. No need for any suspicion because they promise not to look unless there is one. That wasn’t enough because the data was encrypted.
Second they had the companies holding the data snoop in on data that was shared. That wasn’t enough.
Third they had the companies holding the data snoop in on data EVEN when it wasn’t shared, just when it was uploaded to them. Not enough for them!
Now they will have it done on-device prior to uploading. Does this mean that if it fails to upload, it gets scanned anyway? Why yes!
Next they will have it done on device even if it never is held by the company and never shared and never even intended to be uploaded.
The obvious goal is for the government to have access to ALL data in the name of safety. No need for warrants. Don’t worry about it. They won’t look unless there is suspicion. Oops, never mind that, we will just have tools look.
> There is no end to the loss of privacy in the name of safety if your bogeyman is increasingly sophisticated actors.
> The more sophisticated child molesters out there will find out about what Apple is doing and quickly avoid it.
I think this is exactly what Apple wants to be the result of their iMessage scanning.
They are not in this to make arrests. They just want parents to feel safe letting children use their products. Driving predators to use different tools is fine with them.
As far as the FBI goes, this is presumably not their preference, but it’s still good for them if it makes predation a little harder.
My point is that the FBI will just use that as a pretext for greater intrusion into privacy. Why stop at users who have iCloud photos turned on? Why not scan all photos?
Why limit it to child predators? Why not use this too for terrorists and anyone the FBI or any other government deems as subversive?
In fact, if you just look at what the FBI has been saying over the years, that is exactly what they intend to do.
People who say that this is a slippery slope argument don’t even notice that they have been sliding down that slippery slope for decades.
> Why stop at users who have iCloud photos turned on? Why not scan all photos?
If they wanted to, they could have done this silently, years ago when the Neural Engine was first released. This is at least an attempt at a transparent, privacy-oriented approach. And it opens the door to more E2EE content in iCloud without the government openly accusing them of enabling distribution of abusive content.
Where is anyone arguing against you? The point is that the demands for solutions may be neverending, but that doesn’t mean the problems are non-existent.
If we want to limit the damage caused by the dynamic, we need to offer better solutions, not just complain about the FBI.
The problem of child molesters is a fixed quantity. The loss of privacy is ever increasing. When you see this dynamic, you know that a con is afoot.
The solution for child porn isn’t found in snooping on everyone; it is in infiltrating the networks. Go have the FBI infiltrate the rings. Here is an example of why the FBI is disingenuous: this child porn ring wasn’t using phones. Guess what they were using?
Like I said, sophisticated actors aren’t the targets.
Another example. Osama bin Laden. He was air gapped. No cell phones or computers for him. No one even calling on a phone nearby. Osama bin Laden was found via an informant.
The next actor will be even more sophisticated. Probably single use dumb paired cell phones with call locations randomized. Probably plastic surgery. Locations that spy satellites cannot see.
Did snooping on cell phone data help find Osama? When that wasn’t enough, did grabbing all online data help? How about grabbing data straight from smart phones? Nope. Nope. Nope. Yet governments want more, more, more. Why do you think snooping helps against people who don’t even use phones for their most private communications?
You were saying that this is a slippery slope argument, which implies that it is a fallacious argument. I am saying that isn’t the case. We have been sliding down this slope for decades, which means the argument is valid and not fallacious.
From Wikipedia’s entry for slippery slope: “The fallacious sense of "slippery slope" is often used synonymously with continuum fallacy, in that it ignores the possibility of middle ground”
This isn’t a middle ground. Every year the supposed middle ground shifts toward less and less privacy. Notice the slippery slope? The very presence of this means that the supposed middle ground just slipped further.
Can you describe the middle ground that I have ignored?
What disingenuous argument have I made?
The only thing I have argued for is for hackers to attempt technical solutions that are more to their liking than Apple’s, because arguments are not preventing the slide.
I am saying that you are falsely promoting slippery ground as middle ground.
Basically the argument I hear from you is “If you build a back door, then people will use it. So let’s build it anyway because it is a middle ground.” The problem I have with it is the “let’s build it anyway”.
That seems as clear as day. Why do I have to keep repeating myself? Don’t be an apologist.
> If you build a back door, then people will use it. So let’s build it anyway because it is a middle ground.
This looks a completely made up position that has nothing to do with anything I have said. If you can find a comment where I am advocating building back doors, I invite you to quote it.
> That seems as clear as day. Why do I have to keep repeating myself?
If it was clear you’d be able to support it with a quote. I’m pretty sure you can’t.
> Don’t be an apologist.
It doesn’t seem like you have been following my argument, so it’s unclear why you’d stoop to a personal attack.
Sure, I am quite willing to hang you with your own words.
zepto
“As far as I can see:
1. This is a serious attempt to build a privacy preserving solution to child pornography.”
“ > The more sophisticated child molesters out there will find out about what Apple is doing and quickly avoid it. <- this is you quoting me
I think this is exactly what Apple wants to be the result of their iMessage scanning.
They are not in this to make arrests. They just want parents to feel safe letting children use their products. Driving predators to use different tools is fine with them.”
So according to your very own words, this ISN’T a serious attempt to build a privacy-preserving solution to child pornography. First, it isn’t a solution because, as you stated, it won’t actually catch anyone. Second, it isn’t serious because it was never intended to catch anyone.
“2. The complaints are all slippery slope arguments that governments will force Apple to abuse the mechanism. These are clearly real concerns and even Tim Cook admits that if you build a back door, bad people will use it.”
So according to your words these are not slippery slope arguments (in the invalid-argument sense) since, as you state, if you build a back door, bad people will use it. Don’t subtly use negative connotations to try to advance your argument.
Next you disingenuously frame the problem as a conflict between privacy and child pornography. That is an unsupported dichotomy.
“ However:
Child pornography and the related abuse is widely thought of as a massive problem, that is facilitated by encrypted communication and digital photography. People do care about this issue.
‘Think of the children’ is a great pretext for increasing surveillance, because it isn’t an irrational fear.”
Lastly you call for better solutions for a “solution” that actually isn’t a solution.
“So: where are the proposals for a better solution?”
“ Apple’s solution is the best on offer. ”
Another unsupported dichotomy and a false assignment of responsibility.
If this solution is bad, then toss it out. You don’t need another proposal in its place. You don’t need to deploy this backdoor in the meantime.
It is NOT our responsibility to do the FBI’s job. It is THEIR responsibility to come up with better proposals.
If you do actually want a solution, my recommendation is to concentrate on real harm like child molestation and child trafficking. Trace how children have been trafficked historically. See how you can shut that down.
I feel dirty analyzing all the dirty tricks that you employed. Are you a politician or do you work for one? Work on policy?
> Sure, I am quite willing to hang you with your own words.
That isn’t what you’ve done.
“As far as I can see: 1. This is a serious attempt to build a privacy preserving solution to child pornography.”
- False as you even argued
I don’t argue that.
>> They are not in this to make arrests. They just want parents to feel safe letting children use their products. Driving predators to use different tools is fine with them.”
> So according to your very own words,
Erm..
> this ISN’T a serious attempt to build a privacy-preserving solution to child pornography.
These are your words, not what you quoted of mine.
It’s absolutely a solution to the problem of child porn on their platform. They care about making it safe for their users. Who is expecting Apple to solve the problem beyond that?
> It is at best something to keep the FBI at bay
These are your words, not something I have said.
> even though as you say it also introduces a back door.
Where do I say it introduces a back door?
> “2. The complaints are all slippery slope arguments that governments will force Apple to abuse the mechanism. These are clearly real concerns and even Tim Cook admits that if you build a back door, bad people will use it.”
> False, these are not slippery slope arguments
It is your opinion that they are not slippery slope arguments. I think they are, and as you have quoted I think they are reasonable. Slippery slope arguments are fallacies in the sense that the conclusions don’t logically follow, but that doesn’t mean that they are always wrong.
You haven’t hung me with anything - you’ve just voiced some misrepresentations of your own interleaved with quotes of me.
Resplendent. I won’t even add my own words. I will let you speak for yourself.
““ As far as I can see: 1. This is a serious attempt to build a privacy preserving solution to child pornography.”
I don’t argue that.””
“ Apple’s solution is the best on offer. ”
“ I think this is exactly what Apple wants to be the result of their iMessage scanning. They are not in this to make arrests. They just want parents to feel safe letting children use their products. Driving predators to use different tools is fine with them.”
“2. The complaints are all slippery slope arguments that governments will force Apple to abuse the mechanism. These are clearly real concerns and even Tim Cook admits that if you build a back door, bad people will use it.”
“So: where are the proposals for a better solution?”
Let the viewer decide. Are you going to argue that I have misrepresented your words? Feel free to argue with yourself.
Zepto complained that it was out of context. Here are entire comments.
“ As far as I can see:
1. This is a serious attempt to build a privacy preserving solution to child pornography.
2. The complaints are all slippery slope arguments that governments will force Apple to abuse the mechanism. These are clearly real concerns and even Tim Cook admits that if you build a back door, bad people will use it.
However:
Child pornography and the related abuse is widely thought of as a massive problem, that is facilitated by encrypted communication and digital photography. People do care about this issue.
‘Think of the children’ is a great pretext for increasing surveillance, because it isn’t an irrational fear.
So: where are the proposals for a better solution?
I see here people who themselves are afraid of the real consequences of government/corporate surveillance, and whose fear prevents them from empathizing with the people who are afraid of the equally real consequences of organized child sexual exploitation.
‘My fear is more important than your fear’, is the root of ordinary political polarization.
What would be a hacker alternative would be to come up with a technical solution that solves for both fears at the same time.
This is what Apple has attempted, but the wisdom here is that they have failed.
Can anyone propose anything better, or are we stuck with just politics as usual?
Edit: added ‘widely thought of as’ to make it clear that I am referring to a widely held position, not that I am arguing for it.”
> The more sophisticated child molesters out there will find out about what Apple is doing and quickly avoid it.
I think this is exactly what Apple wants to be the result of their iMessage scanning.
They are not in this to make arrests. They just want parents to feel safe letting children use their products. Driving predators to use different tools is fine with them.
As far as the FBI goes, this is presumably not their preference, but it’s still good for them if it makes predation a little harder.
Also, in contrast to zepto, here are words from Edward Snowden: “No matter how well-intentioned, @Apple is rolling out mass surveillance to the entire world with this. Make no mistake: if they can scan for kiddie porn today, they can scan for anything tomorrow. They turned a trillion dollars of devices into iNarcs—without asking.”
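Snowden’s point is mechanical, not just rhetorical: a hash-list scanner is content-agnostic. A minimal sketch of the idea (plain SHA-256 standing in for a perceptual hash like NeuralHash; all names here are illustrative, not Apple’s actual implementation):

```python
import hashlib

def scan(files: dict[str, bytes], blocklist: set[str]) -> list[str]:
    """Return names of files whose hash appears in an opaque blocklist.

    Hypothetical sketch: the matcher never knows, and cannot know,
    what the hashes in the blocklist actually represent.
    """
    flagged = []
    for name, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        if digest in blocklist:
            flagged.append(name)
    return flagged

# Whoever supplies the blocklist decides what gets flagged; the
# scanning code is identical whether the list targets CSAM or
# politically sensitive images.
photos = {"a.jpg": b"cat", "b.jpg": b"tank man"}
blocklist = {hashlib.sha256(b"tank man").hexdigest()}
print(scan(photos, blocklist))  # -> ['b.jpg']
```

The design choice being criticized is exactly this: nothing in the mechanism constrains what the list contains, so the constraint has to come from whoever controls the list.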
Widespread education about how child abuse usually works in the real world outside of movies, public examination of what tools we have available and whether they're being used effectively, very public stats about how prevalent the problem is and what direction it's moving in.
Better media that more accurately reflects how these problems play out for real people and that doesn't use them as a cheap gimmick to prey on people's fears to raise narrative stakes.
Better education about the benefits of encryption and the risks of abuse, better education about how back doors play out in the real world and how they are abused both by governments and by individual abusers. Better education about the fail-rate of these systems.
> Child pornography and the related abuse is widely thought of as a massive problem, that is facilitated by encrypted communication and digital photography.
Most people don't have a basis for this instinct, they believe it because they were taught it, because it's what the movies tell them, because it fits their cultural perception of technology, because it's what the CIA and FBI tell them. Unless they're basing their fear on something other than emotion and instinct, there is no technical or political solution that will reduce their fear. It's all useless. Only education and cultural shifts will make them less afraid.
If you survey the US, over 50% of adults will tell you that crime today is at the same level or worse than it was in 1950s, an absurd position that has no basis in reality. So what, should we form a police state? At some point, caring about the real problems we have in the real world means learning to tell people that being scared of technology is not a good enough justification on its own to ban it.
Nothing else will work. People's irrational fears by and large will not be alleviated by any technical solutions. They won't even be alleviated by what Apple is doing, parents who have an irrational fear of privacy are not going to have that fear lessened because Apple introduced this scanning feature. They will still be afraid.
Yes. “We must do something”, “It’s for the children”, “Even if it saves just one person”. No one has ever used this triple fallacy before.
How big do you really think the market for kiddie porn is, that there are people storing it in the clear on their iPhones who are “safe” only because they aren’t uploading it to iCloud?
This is bullshit through and through.
The best case is that this is the step Apple needs to get E2E iCloud storage, to prove they’re doing enough while maintaining privacy. The worst case is that if there is potential for a list and reporting to be abused, it will be.
There seems to be no scenario for the best case to exist without the worst case.
Do you believe you can stop people from sharing with children images that a central authority has pre-determined to be exploitative and illegal (and whose hashes it has already logged and recorded) by deploying a universal scan system to all phones, when phones aren't a primary tool for trafficking illegal content?
>It seems like you are simply stating that the other side’s priorities are wrong.
This is a juvenile attempt at a nobility argument. That you can do anything as long as the goal is noble. Just as I wrote first, anything if it’s “for the children”. This is how all sorts of abuses are carried out under the four horsemen.
> when phones aren't a primary tool for trafficking illegal content?

Are they not? This is just an assumption you have made. It doesn’t matter what I think about it.
I asked:
> Do you believe that sexual predators sharing images with children on iPhones is a problem?
You haven’t answered.
>>It seems like you are simply stating that the other side’s priorities are wrong.
> This is a juvenile attempt at a nobility argument.
No it isn’t. It is an honest attempt to characterize your approach. It seems clear that you think the other side’s priorities are wrong.
That’s fine. I have to assume everyone thinks that. The point is that it doesn’t matter that you think they are wrong. You have already lost. The things you don’t want are already happening.
My argument is that since that debate is lost, any attempt to restore privacy must accept that other people’s priorities are different. Simply trying to get the other side to change priorities when you are already losing doesn’t seem like a good approach.
"Think of the children" will always work, no matter what the context is, no matter what the stats are, and no matter what we do. That does not mean that we should not care about the children, and it does not mean that we shouldn't care about blocking CSAM. We should care about these issues purely because we care about protecting children. If there are ways for us to reduce the problem without breaking infrastructure or taking away freedoms, we should take those steps. Similarly, we should also think about the children by protecting them from having their sexual/gender identities outed against their wishes, and by guaranteeing they grow up in a society that values privacy and freedom where they don't need to constantly feel like they're being watched.
But while those moral concerns remain, the evergreen effectiveness of "think of the children" also means that compromising on this issue is not a political strategy. It's nothing, it will not ease up on any pressure on technologists, it will change nothing about the political debates that are currently happening. Because it hasn't: we've been having the same debates about encryption since encryption was invented, and I would challenge you to point at any advancement or compromise from encryption advocates as having lessened those debates or having appeased encryption critics.
Your mistake here is assuming that anything that technologists can build will ever change those people's minds or make them ease up on calls to ban encryption. It won't.
Reducing the real-world occurrences for irrational fears doesn't make those fears go away. If we reduce shark attacks on a beach by 90%, that won't make people with a phobia less frightened at the beach, because their fear is not based on real risk analysis or statistics or practical tradeoffs. Their fear is real, but it's also irrational. They're scared because they see the deep ocean and because Jaws traumatized them, and you can't fix that irrational fear by validating it.
So in the real world we know that the majority of child abuse comes from people that children already know. We know the risks of outing minors to parents if they're on an LGBTQ+ spectrum. We know the broader privacy risks. We know that abusers (particularly close abusers) often try to hijack systems to monitor and spy on their victims. We would also in general like to see more stats about how serious the problem of CSAM actually is, and we'd like to know whether or not our existing tools are being used effectively so we can balance the potential benefits and risks of each proposal against each other.
If somebody's not willing to engage with those points, then what makes you think that compromising on any other front will change what's going on in their head? You're saying it yourself, these people aren't motivated by statistics about abuse, they're frightened of the idea of abuse. They have an image in their head of predators using encryption, and that image is never going to go away no matter what the real-world stats do and no matter what solutions we propose.
The central fear that encryption critics have is a fear of private communication. How can technologists compromise to address that fear? It doesn't matter what solutions we come up with or what the rate of CSAM drops to, those people are still going to be scared of the idea of privacy itself.
Nobody in any political sphere has ever responded to "think of the children" with "we already thought of them enough." So the idea that compromising now will change anything about how that line is used in the future -- it just seems naive to me. Really, the problem here can't be solved by either technology or policy. It's cultural. As long as people are frightened of the idea of privacy and encryption, the problem will remain.
> Your mistake here is assuming that anything that technologists can build will ever change those people's minds or make them ease up on calls to ban encryption. It won't.
What makes you think I think that? You have misrepresented me here (effectively straw-manning), but I will assume an honest mistake.
You are right that there are people who will always seek to ban or undermine encryption no matter what, and who use ‘think of the children’ as an excuse regardless of the actual threat. ‘Those people’ as you put it, by definition will never have their minds changed by technologists. Indeed there is no point in technologists trying to do that.
However I don’t think that group includes Apple, nor does it include most of Apple’s customers. Apple’s customers do include many people who are worried about sexual predators reaching their children via their phones, though. These people are not ideologues or anti-encryption fanatics.
Arguing that concerns about children are overblown or being exploited for nefarious means may be ‘true’, but it does nothing to provide an alternative that Apple could use, nor does it do anything to assuage the legitimate fears of Apple’s customers.
Perhaps you believe that there is no way to build a more privacy preserving solution than the one Apple has.
I would simply point out in that case, that the strategy of arguing against ‘think of the children’, has already lost, and commiserate with you.
I’m not convinced that there is no better solution. Betting against technologists to solve problems usually seems like a bad bet, but even if you don’t think it’s likely, it seems irrational not to hedge, because the outcome of solving the problem would have such a high upside.
It’s worth pointing out that Public Key cryptography is a solution to a problem that at one time seemed insoluble to many.
> Arguing that concerns about children are overblown or being exploited for nefarious means may be ‘true’, but it does nothing to provide an alternative that Apple could use
- If the stats don't justify their fears
- And I come up with a technological solution that will make the stats even lower
- Their fears will not be reduced
- Because their fears are not based on the stats
----
> Apple’s customers do include many people who are worried about sexual predators reaching their children via their phones though
Are they worried about this because of a rational fear based on real-world data? If so, then I want to talk to them about that data and I want to see what basis their fears have. I'm totally willing to try and come up with solutions that reduce the real world problem as long as we're all considering the benefits and tradeoffs of each approach. We definitely should try to reduce the problem of CSAM even further.
But if they're not basing their fear on data, then I can't help them using technology and I can't have that conversation with them, because their fear isn't based on the real world: it's based on either their cultural upbringing, or their preconceptions about technology, or what media they consume, or their past traumas, or whatever phobias that might be causing that fear.
Their fear is real, but it can not be solved by any technological invention or policy change, including Apple's current system. Because you're telling me that they're scared regardless of what the reality of the situation is, you're telling me they're scared regardless of what the stats are.
That problem can't be solved with technology, it can only be solved with education, or emotional support, or cultural norms. If they're scared right now without knowing anything about how bad the problem actually is, then attacking the problem itself will do nothing to help them -- because that's not the source of their fear.
> Their fear is real, but it can not be solved by any technological invention or policy change, including Apple's current system. Because you're telling me that they're scared regardless of what the reality of the situation is, you're telling me they're scared regardless of what the stats are.
Not really.
I’m agreeing that parents will be afraid for their children regardless of the stats, and are unlikely to believe anyone who claimed they shouldn’t be. The ‘stats’ as you put it won’t change this.
Not because the stats are wrong, but because they are insufficient, and in fact predation will likely continue in a different form even if we can show a particular form to not be very prevalent. The claim to have access to ‘the reality of the situation’ is not going to be accepted.
You won’t be able to solve the problem through education or emotional support because you can’t actually prove that the problem isn’t real.
You actually don’t know the size of the problem yourself, which is why you are not able to address it conclusively here.
What I am saying is that we need to accept that this is the environment, and if we want less invasive technical solutions to problems people think are real, and which you cannot prove are not, then we need to create them.
> What I am saying is that we need to accept that this is the environment, and if we want less invasive technical solutions to problems people think are real, and which you cannot prove are not, then we need to create them.
And what I'm saying is that this is a giant waste of time because if someone has a phobia about their kid getting abducted, that phobia will not go away just because Apple started scanning photos.
You want people to come up with a technical solution, but you don't even know how to define what a "solution" is. How will we measure that solution absent statistics? How will we know if it's working or not? Okay, Apple starts scanning photos. Are we done? Has that solved the problem?
We don't know if that's enough, because people's fears here aren't based on the real world, they're based on Hollywood abduction movies, and those movies are still going to get made after Apple starts scanning photos.
You are completely correct that the stats are insufficient to convince these people. But you're also completely wrong in assuming that there is some kind of escape hatch or technological miracle that anyone can pull off to make those fears go away, because in your own words: "parents will be afraid for their children regardless of the stats."
If Apple's policy reduces abuse by 90%, they'll still be afraid. If it reduces it by 10%, they'll still be afraid. There is no technological solution that will ease their fear, because it's not about the stats.
----
I'm open to being proven wrong that predation is a serious problem that needs drastic intervention. I'm open to evidence that suggests that encryption is a big enough problem that we need to come up with a technological solution. I just want to see some actual evidence. People being scared of things is not evidence, that's not something we can have a productive conversation about.
If we're going to create a "solution", then we need to know what the problem is, what the weak points are, and what metrics we're using to figure out whether or not we're making progress.
If that's not on the table, then also in your words, we need to "accept that this is the environment" and stop trying to pretend that coming up with technical solutions will do anything to reduce calls to weaken encryption or insert back doors.
> But you're also completely wrong in assuming that there is some kind of escape hatch or technological miracle that anyone can pull off to make those fears go away,
I can’t be wrong about that since I’m not claiming that anywhere or assuming it.
> because in your own words: "parents will be afraid for their children regardless of the stats."
Given that I wrote this, why would you claim that I think otherwise?
> There is no technological solution that will ease their fear, because it's not about the stats.
Agreed, except that I go further and claim that the stats are not sufficient, so making this about the stats can’t solve the problem.
> People being scared of things is not evidence,
It’s evidence of fear. Fear is real, but it’s not a measure of severity or probability.
> that's not something we can have a productive conversation about.
I don’t see why we can’t take into account people’s fears.
> If we're going to create a "solution", then we need to know what the problem is, what the weak points are, and what metrics we're using to figure out whether or not we're making progress.
Yes. One of those metrics could be ‘in what ways does this compromise privacy’, and another could be ‘in what ways does this impede child abuse use cases’. I suspect Apple is trying to solve for those metrics.
Perhaps someone else can do better.
> If that's not on the table, then also in your words, we need to "accept that this is the environment"
This part is unclear.
> stop trying to pretend that coming up with technical solutions will do anything to reduce calls to weaken encryption or insert back doors.
It’s unclear why you would say anyone is pretending this, least of all me. I have wholeheartedly agreed with you that these calls are ‘evergreen’.
I want solutions to problems like the child abuse use cases, such that when calls to weaken encryption or insert back doors are made as they always will be, we don’t have to.
> except that I go further and claim that the stats are not sufficient, so making this about the stats can’t solve the problem.
Statistics are a reflection of reality. When you say that the stats don't matter, you are saying that the reality doesn't matter. Just that people are scared.
You need to go another step further than you are currently going, and realize that any technological "solution" will only be affecting the reality, and by extension will only be affecting the stats. And we both agree that the stats can't solve the problem.
It's not that making this about the stats will solve the problem. It won't. But neither will any technological change. You can not solve an irrational fear by making reality safer.
----
Let's say we abandon this fight and roll over and accept Apple moving forward with scanning. Do you honestly believe that even one parent is going to look at that and say, "okay, that's enough, I'm not scared of child predators anymore."? Can you truthfully tell me that you think the political landscape and the hostility towards encryption would change at all?
And if not, how can you float compromise as a political solution? What does a "solution" to an irrational fear even look like? How will we tell that the solution is working?
You say the stats don't matter; then we might as well give concerned parents fake "magic" bracelets and tell them that they make kids impossible to kidnap. Placebo bracelets won't reduce actual child abuse of course, but as you keep reiterating, actual child abuse numbers are not why these people are afraid. Heck, placebo bracelets might help reduce parents' fear more than Apple's system, since placebo bracelets would be a constantly visible reminder to the parents that they don't need to be afraid, and all of Apple's scanning happens invisibly behind the scenes where it's easy to forget.
----
> I want solutions to problems like the child abuse use cases, such that when calls to weaken encryption or insert back doors are made as they always will be, we don’t have to.
Out of curiosity, how will you prove to these people that your solutions are sufficient and that they work as substitutes for weakening encryption? How will you prove to these people that your solutions are enough?
Will you use stats? Appeal to logic?
You almost completely understand the entire situation right now, you just haven't connected the dots that all of your technological "solutions" are subject to the same problems as the current debate.
> Statistics are a reflection of reality.

No, they are the output of a process. Whether a process reflects ‘reality’ depends on the process and how people understand it. This is essential to science.
Even when statistics are the result of the best scientific processes available, they are typically narrow and reflect only a small portion of reality.
This is why they are insufficient.
> When you say that the stats don't matter,
I never said they don’t matter. I just said they were insufficient to convince people who are afraid.
> you are saying that the reality doesn't matter.
Since I’m not saying they don’t matter, this is irrelevant.
> It's not that making this about the stats will solve the problem. It won't. But neither will any technological change. You can not solve an irrational fear by making reality safer.
Can you find a place where this contradicts something I’ve said? I haven’t argued to the contrary anywhere. I don’t expect to get the fears to go away.
As to whether they are rational or not, some are, and some aren’t. We don’t know which are which because you don’t have the stats, so we have to accept that there is a mix.
> Will you use stats? Appeal to logic?
Probably a mix of both, maybe some demos, who knows. I won’t expect them to be sufficient to silence the people who are arguing in favor of weakening encryption, nor to make parents feel secure about their children being protected against predation forever.
> You almost completely understand the entire situation right now, you just haven't connected the dots that all of your technological "solutions" are subject to the same problems as the current debate.
Again you misrepresent me. Can you find a place where I argue that technological solutions are not subject to the same problems as the current debate?
I don’t think you can find such a place.
I have fully agreed that you can’t escape the vicissitudes of the current debate. Nonetheless, you can still produce better technological solutions. This isn’t about prevailing over unquantifiable fears and dark forces. It’s about making better technologies in their presence.
Okay, fine. Are you claiming that people who are calling to ban encryption are doing so on a scientific basis?
Come on, be serious here. People call to ban encryption because it scares them, not because they have a model of the world based on real data or real science that they're using to reinforce that belief.
If they did, we could argue with them. But we can't, because they don't.
> Can you find a place where this contradicts something I’ve said?
Yes, see below:
> such that when calls to weaken encryption or insert back doors are made as they always will be, we don’t have to
I'm open to some kind of clarification that makes this comment make sense. How are your "solutions" going to make people less afraid? On what basis are you going to argue with these people that your solution is better than banning encryption?
Pretend that I'm a concerned parent right now. I want to ban encryption. What can you tell me now to convince me that any other solution will be better?
>> Okay, fine. Are you claiming that people who are calling to ban encryption are doing so on a scientific basis?
No. Did I say something to that effect?
> Come on, be serious here. People call to ban encryption because it scares them, not because they have a model of the world based on real data or real science that they're using to reinforce that belief.
You say this as if you are arguing against something I have said. Why?
> If they did, we could argue with them. But we can't, because they don't.
We can still argue with them, just not with science.
> Can you find a place where this contradicts something I’ve said?
> Yes, see below:
You’ll need to explain what the contradiction is. You have said you don’t understand it, but your not understanding doesn’t make it a contradiction.
>> such that when calls to weaken encryption or insert back doors are made as they always will be, we don’t have to
> I'm open to some kind of clarification that makes this comment make sense.
It makes sense to have solutions that don’t weaken privacy. Wouldn’t you agree?
> How are your "solutions" going to make people less afraid?
They won’t.
> On what basis are you going to argue with these people that your solution is better than banning encryption?
Which people? The parents, the nefarious actors, apple’s customers?
> Pretend that I'm a concerned parent right now. I want to ban encryption. What can you tell me now to convince me that any other solution will be better?
Of course I can’t, because you are going to play the role of an irrational parent who cannot be convinced.
Neither of us disagree that such people exist. Indeed we both believe that they do.
> Neither of us disagree that such people exist. Indeed we both believe that they do.
> Why does changing such a person’s mind matter?
Okay, finally! I think I understand why we're disagreeing. Please tell me if I'm misunderstanding your views below.
> You’ll need to explain what the contradiction is.
I kept getting confused because you would agree with me right up to your conclusion, and then suddenly we'd both go in opposite directions. But here's why I think that's happening:
You agree with me that there are irrational actors who will not be convinced by any kind of reason or debate that their fears are irrational. You agree with me that those people will never stop calling to ban encryption, and that they will not be satisfied by any alternative you or I propose. But you also believe there's another category of people who are "semi-rational" about child abuse. They're scared of it, maybe not for any rational reason. But they would be willing to compromise, they would be willing to accept a "solution" that targeted some of their fears, and they might be convinced that an alternative to banning encryption is better.
Where we disagree is that I don't believe those people exist -- or at least if they do exist, I don't believe they are a large enough or engaged enough demographic to have any political clout, and I don't think it's worth trying to court them.
My belief is that by definition, a fear that is not based on any kind of rational basis is an irrational fear. I don't believe there is a separate category of people who are irrationally scared of child predators, but fully willing to listen to alternative solutions instead of banning encryption.
So when you and I both say that we can't convince the irrational people with alternative solutions, my immediate thought is, "okay, so the alternative solutions are useless." But of course you think the alternative solutions are a good idea, because you think those people will listen to your alternatives, and you think they'll sway the encryption debate if they're given an alternative. I don't believe those people exist, so the idea of trying to sway the encryption debate by appealing to them is nonsense to me.
In my mind, anyone who is rational enough to listen to your arguments about why an alternative to breaking encryption is a good idea, is also rational enough to just be taught why banning encryption is bad. So for people who are on the fence or uninformed, but who are not fundamentally irrationally afraid of encryption, I would much rather try gently reaching out to them using education and traditional advocacy techniques.
----
Maybe you're right and I'm wrong, and maybe there is a political group of "semi-rational" people who are
A) scared about child abuse
B) unwilling to be educated about child abuse or to back up their beliefs
C) but willing to consider alternatives to breaking encryption and compromising devices.
If that group does exist, then yeah, I get where you're coming from. BUT personally, I believe the history of encryption/privacy/freedom debates on the Internet backs up my view.
Let's start with SESTA/FOSTA:
First, Backpage did work with the FBI, to the point that the FBI even commented that Backpage was going beyond any legal requirement to try and help identify child traffickers and victims. Second, both sex worker advocates and sex workers themselves openly argued that not only would SESTA/FOSTA be problematic for freedom on the Internet, but that the bills would also make trafficking worse and make their jobs even more dangerous.
Did Backpage's 'compromise' sway anyone? Was there a group of semi-reasonable people who opposed sites like Backpage but were willing to listen to arguments that the bills would actively make sex trafficking worse? No, those people never showed up. The bills passed with broad bipartisan support. Later, several Senators called to reexamine the bills not because alternatives were proposed to them, but because they put in the work to educate themselves about the stats, and realized the bills were harmful.
Okay, now let's look at the San Bernardino case with Apple. Apple gave the FBI access to the suspect's iCloud account, literally everything they asked for except access to decrypt the phone itself. Advocates argued that the phone was unlikely to aid in the investigation, and also suggested using an exploit to get into the phone, rather than requiring Apple to break encryption. Note that in this case the alternative solution worked, the FBI was able to get into the phone using an exploit rather than by compelling Apple to break encryption. The best case scenario.
Did any of that help? Was there a group of semi-reasonable people who were willing to listen to the alternative solution? Did the debate cool because of it? No, it changed nothing about the FBI's demands or about the political debate. What did help was Apple very publicly and forcefully telling the FBI that any demand at all to force them to install any code for any reason would be a violation of the 1st Amendment. So minus another point from compromise as an effective political strategy in encryption debates, and plus one point to obstinacy.
Okay, now let's jump back to the early debates about encryption: the Clipper chip. Was that solved by presenting the government and concerned citizens with an alternative that would better solve the problem? No, it wasn't -- even though plenty of people argued at the time for encryption experts to work with the government instead of against it. Instead, the Clipper chip problem was solved in two ways: encryption experts broke the chip so publicly and thoroughly that it destroyed any credibility the government had in claiming it was secure, and strong encryption techniques were disseminated so widely that the government's demands became impossible to enforce -- over the objections of people who called for compromise or understanding of the government's position.
----
I do not see any strong evidence for a group of people who can't be educated about encryption/abuse, but who can be convinced to support alternative strategies to reduce child abuse. If that group does exist, it does a very good job of hiding, and a very bad job of intervening during policy debates.
I do think that people exist who are skeptical about encryption but who are not so irrational that they would fall into our category of "impossible to convince." However, I believe they can be educated, and that it is better to try and educate them than it is to reinforce their fears.
Because of that, I see no political value in trying to come up with alternative solutions to assuage people's fears. I think those people should either be educated, or ignored.
It is possible I'm wrong, and maybe you could come up with an alternative solution that reduced CSAM without violating human rights to privacy and communication. If so, I would happily support it; I have no reason to oppose a solution that reduces CSAM if it doesn't have negative effects on the Internet and free culture overall. A solution like that would be great. However, I very much doubt that you can come up with a solution like that, and if you can, I very much doubt that anyone outside of technical communities will be very interested in what you propose. I personally think you would be very disappointed by how few of the people arguing for weakening encryption right now are actually interested in any of the alternative solutions you can come up with.
And it's my opinion, based on the history of privacy/encryption, that traditional advocacy and education techniques will be more politically effective than what you propose.
> My belief is that by definition, a fear that is not based on any kind of rational basis is an irrational fear. I don't believe there is a separate category of people who are irrationally scared of child predators, but fully willing to listen to alternative solutions instead of banning encryption.
We disagree here, indeed. My view is not that there are ‘semi-rational’ people. My view is that there are hard to quantify risks that it is rational to have some fear about and see as problems to be solved. I think this describes most of us, most of the time.
The idea that there is a clear distinction between 'rationally' understanding a complex social problem through science, and being 'irrational and unconvincable', seems inaccurate to me. Both of these positions seem equally extreme, neither qualifies as reasonable in my view, and neither is how most people act.
I think there are a lot of people who are reasonably afraid of things they don’t fully understand and which nobody fully understands. These people reasonably want solutions, but don’t expect them to be perfect or to assuage everyone’s fear.
These are the people who can easily be persuaded to sacrifice a little privacy if it means making children safer from horrific crimes.
They are also people who would prefer a solution that didn’t sacrifice so much if it was an option.
My argument is that the best way to make things better is to make better options available. Irrationally paranoid parents, and irrationally paranoid governments exist, but are the minority.
Most people just want reasonable solutions and aren’t going to be persuaded by either extreme. If you make an argument about creeping authoritarianism they’ll say ‘child porn is a real problem, and that risk is distant’.
If you offer them a more privacy preserving solution to choose as well as a less privacy preserving option, they’ll likely choose the more privacy preserving option.
Apple is offering a much more privacy preserving option than just disabling encryption. People will accept it because it seems like a reasonable trade-off in the absence of anything better.
If we think it’s a bad trade-off that is taking us in the direction of worse and worse privacy compromises, we aren’t likely to be able to persuade people to ignore the real trade-offs, but we stand a chance of getting them to accept a better solution to the same problem.
If we don’t offer an alternative solution we aren’t offering them anything at all.
> I see no political value in trying to come up with alternative solutions to assuage people's fears.
Why do you mention this again? Nobody is arguing for a solution designed to assuage people’s fears.
> I do not see any strong evidence for a group of people who can't be educated about encryption/abuse, but who can be convinced to support alternative strategies to reduce child abuse.
Why do you assume education about encryption/abuse is relevant? Even people who deeply understand the issue still have to choose between the options that are available and practical.
> If that group does exist, it does a very good job of hiding,
It’s not a meaningful group definition.
> and a very bad job of intervening during policy debates.
Almost nobody intervenes during policy debates unless they have a strong position. Most people just choose the best solution from what is available and get on with their lives, which are not centered on these issues.
> maybe you could come up with an alternative solution that reduced CSAM without violating human rights to privacy and communication. If so, I would happily support it, I have no reason to oppose a solution that reduces CSAM if it doesn't have negative effects for the Internet and free culture overall, a solution like that would be great.
Indeed. Isn’t that what we really want here? The only reason people are engaged in all this ideological battle is that they assume there isn’t a technical solution.
> However, I very much doubt that you can come up with a solution like that.
You could have just said you are someone who doesn’t believe a technical solution is possible.
> I personally think you would be very disappointed by how few people arguing for weakening encryption right now are actually interested in any of the alternative solutions you can come up with.
Why would you think I would be disappointed? We have already discussed how I don’t expect those people to change their minds.
Fortunately that is irrelevant to whether a solution would help, since it is not aimed at them.
> My view is that there are hard to quantify risks that it is rational to have some fear about and see as problems to be solved.
Heavily agreed. But those are not irrational fears.
They become irrational fears when learning more about the risks and learning more about the benefits and downsides of different mitigation techniques doesn't change anything about those fears one way or another.
We all form beliefs based on incomplete information. That's not irrational. It is irrational for someone to refuse to look at or engage with new information. If someone is scared of the potential for encryption to facilitate CSAM because they're working with incomplete information, that's not irrational.
If someone is scared of encryption because they have incomplete information, and they refuse to engage with the issue or to learn more about the benefits of encryption, or the risks of banning it, or what the stats on child predators actually are -- at that point, it's an irrational belief. What makes the beliefs irrational is that they are no longer being adjusted in response to new information.
A rational person is not someone who knows everything. A rational person is someone who is willing to learn about things when given the opportunity.
> Why do you mention this again? Nobody is arguing for a solution designed to assuage people’s fears.
I guess I don't understand what you are arguing for then.
Let's look at your "reasonable people who are reasonably afraid" camp. We'll consider that these people have doubts about encryption, but don't hate it. They are scared of the potential for abusers to run rampant, but are having trouble figuring out what that looks like or what the weak points are in a complicated system. They are confused, but not bad-faith, and they have fears about something that is legitimately horrific. We will say that these people are not irrational, they recognize a real problem and earnestly want to do something about it.
There are 2 things we can do with these types of people:
1) We can educate them about the dangers of banning encryption and encourage them to research more about the problem. We can remain open to other proposals that they have, while making it clear that each proposal's social benefits have to be weighed against their social costs.
or
2) We can offer them some kind of compromise solution that may or may not actually address their problem, but will make them feel like it does, and which will in theory make them less likely to try and ban encryption.
You seem to be suggesting that we try #2? And this apparently isn't designed to assuage their fears? But I'm not sure what it does then. Presumably the reason they'll accept your proposal is because it addresses the fears they have.
My preference is to try #1. I believe that if someone is actually in the camp you describe, if they have reasonable fears but they're looking at a complex social problem, openly talking to those people about the complex downsides of banning encryption is OK. They'll listen. They might come up with other ideas, they might bring up their own alternative solutions. All of that is fine, none of us are against reducing CSAM, we just want people to understand the risks behind the systems being proposed.
But importantly, if someone is genuinely reasonable, if they aren't irrational and they're just trying to grapple with a complex system -- then talking about the downsides should be enough, because those people are reasonable and once they understand the downsides then they'll understand why weakening encryption isn't a feasible plan. From there we can look at alternatives, but the alternatives are not a bargaining chip. Even if there were no alternatives, that wouldn't change anything about the downsides of making software more vulnerable. First, people must understand why a proposed solution won't work, and then we can propose alternatives.
To me, if someone comes to me and says, "I'm not interested in hearing about the downsides of banning encryption, come up with a solution or we'll ban it anyway" -- I don't think that person is reasonable, I don't think they're acting rationally, and certainly I'm not interested in working with that person or coming up with solutions with that person.
> If we don’t offer an alternative solution we aren’t offering them anything at all.
Where I fall on this is that I am totally willing to look for alternative solutions; but encryption, device ownership, privacy, and secure software -- these are not prizes to be won, conditional on me finding a solution.
We can look for a solution together once we've taken those things off the table.
Because if someone comes to me asking to find a good solution, I want to know that they're coming in good faith, that they genuinely are looking for the best solution with the fewest downsides. If they're not, if they're using encryption as some kind of threat, then they're not really acting in good faith about honestly looking at the upsides and downsides. I have a hard time figuring out how I would describe that kind of a person as "reasonable".
>> I personally think you would be very disappointed
> Why would you think this? Did I say anything anywhere about convincing people who are arguing for weakening encryption?
Let me be even more blunt. I think that you could come up with a brilliant solution today with zero downsides that reduced CSAM by 90%. And I think you would be praised if you did come up with that solution, and it would be great, and everyone including tech people like me would love you for it. And I also think it would change literally nothing about the current debates we're having. I think we would be in the exact same place, I think all of the people who are vaguely worried about CSAM and encryption (even the good faith people you mention above) would still be just as worried tomorrow. You could come up with the most innovative amazing reduction strategy for CSAM ever conceived, and it would not change any of those people's opinions on encryption.
I'm not just talking about the irrational people. It would not change the opinions of the reasonable people you're describing above. Because why would it? However good your solution is, if encryption is genuinely not worth preserving, then it would always be better to implement your solution and ban encryption. I don't say that derisively; if the benefits of banning encryption really did outweigh the downsides, then it would genuinely be good to get rid of encryption.
The only reason we don't get rid of encryption is because its benefits do heavily outweigh its downsides. Not because this is some kind of side in a debate, but because when you examine the issue rationally and reasonably, it turns out that weakening encryption is a really bad idea.
> My argument is that the best way to make things better is to make better options available.
This is another point where we differ then.
As far as I can tell, any reasonable person who is convinced that encryption is a net negative is always going to be interested in getting rid of encryption unless they understand what the downsides are. Any reasonable person who is on the fence about encryption is going to stay on the fence until they get more information. I don't see how proposing alternative solutions is going to change that.
So I believe that the only way these reasonable people you describe are going to change their minds are if they're properly educated about the downsides of making software vulnerable, if they're properly educated about the upsides of privacy, and if they're properly educated about the importance of device ownership.
And maybe I'm overly optimistic here, but I also do believe that reasonable people are willing to engage in good faith about their proposed solutions and to learn more about the world. I don't think that a reasonable person is going to clam up and get mad and stop engaging just because someone tells them that their idea to backdoor software has negative unintended side effects. I think that education works when offered to reasonable people.
> There are 2 things we can do with these types of people:
> 1) We can educate them about the dangers of banning encryption and encourage them to research more about the problem. We can remain open to other proposals that they have, while making it clear that each proposal's social benefits have to be weighed against their social costs.
> or
> 2) We can offer them some kind of compromise solution that may or may not actually address their problem, but will make them feel like it does, and which will in theory make them less likely to try and ban encryption.
Why are those the only two solutions? That seems like a false dichotomy.
Again why would you think I’m suggesting #2.
Can I ask you straight up, are you trolling?
There is a pattern where you say “I think you are saying X” where X is unrelated to anything I have actually said. I ask “why do you think I think X”, and you don’t answer, but just move on to repeat the process.
I have been assuming there is good faith misunderstanding going on, but the fact that you keep not explaining where the misunderstandings have arisen from when asked is starting to make me question that.
Most of what you’ve written in this reply is frankly incoherent, or at least seems to be based on assumptions about my position that are neither valid nor obvious, so much so that it seems unconnected from our previous discussion.
For example this:
> To me, if someone comes to me and says, "I'm not interested in hearing about the downsides of banning encryption, come up with a solution or we'll ban it anyway" -- I don't think that person is reasonable, I don't think they're acting rationally, and certainly I'm not interested in working with that person or coming up with solutions with that person.
Just seems like a gibberish hypothetical that doesn’t have much to do with what we are talking about.
And this:
> You could come up with the most innovative amazing reduction strategy for CSAM ever conceived, and it would not change any of those people's opinions on encryption.
What does it even mean to ‘reduce CSAM’? Why do we care about changing people’s minds here about encryption?
Let’s take another part:
> Where I fall on this is that I am totally willing to look for alternative solutions; but encryption, device ownership, privacy, and secure software -- these are not prizes to be won, conditional on me finding a solution.
Ok, but those are all in fact fluid concepts whose status is changing as time goes by, and mostly not in the directions it sounds like you would prefer. Nobody is thinking of them as prizes. The status quo is that they are in jeopardy.
> We can look for a solution together once we've taken those things off the table.
Ok, but this just means you aren’t willing to participate with people who don’t agree to a set of terms, which in fact don’t represent anything anyone has so far developed.
That’s a comment about your personal boundaries not about whether a better solution than what Apple is proposing could be built.
That’s fine by me, in fact I’d be happy if a solution did incorporate all of the concepts you require. I agree we need that. I argue for it quite often.
I don’t think such a thing has been built yet, and if it were built, I suspect parents would want some way to verify it wasn’t a vector of child exploitation before they let their kids use it.
> Why are those the only two solutions? That seems like a false dichotomy.
It's not? It's a real dichotomy. What other solution could there be?
I mean, OK, I guess there are other solutions we could try like ignoring them or attacking them or putting them in prison or some garbage, but to me those kinds of solutions are off the table. So we either figure out some way to satisfy them, or convince them that we're right. That's not a false dichotomy, those are the only 2 options.
I assume you're suggesting #2 because you're sure as heck not suggesting #1, and I can't figure out what else you could be suggesting.
----
> why do you think I think X
Frankly, if this isn't what you think, then I don't understand what you're thinking.
You keep on saying that we need to offer solutions, we can't just criticize Apple's proposal, we have to offer an alternative if we're going to criticize. But why?
- I thought the point was to get rid of people's fears: no, you're saying that's not what you mean.
- I thought the point was to compromise with critics: no, you're saying that's not what you mean.
- I thought the point was to try and get people to stop attacking encryption: no, you're saying that's not what you mean.
- Heck, I thought the point was to reduce CSAM, and you're telling me now that even that's not what you mean either?
> What does it even mean to ‘reduce CSAM’? Why do we care about changing people’s minds here about encryption?
What? We're on the same thread, right? We're commenting under an article about Apple instituting policies to reduce CSAM, ie, to make it so there is less CSAM floating around in the wild. When you talk about a "solution", what problem are you even trying to solve? Because all of us here are talking about CSAM, that's what Apple's system is designed to detect.
I don't understand. How can you possibly not be talking about CSAM right now? That's literally what this entire controversy is about, that's the only reason this thread exists.
----
Honest to God, hand over my heart, I am not trolling you right now. I understand that this is frustrating to you, but my experience throughout this conversation has been:
- You say something
- I try to interpret and build on it
- You tell me that's not what you meant and ask me why I thought that
- Okay, I try to reinterpret and explain
- The cycle repeats
- The only information I can get out of you is that I apparently don't understand you. I'm not getting any clarification. You just tell me that I'm misunderstanding your position and then you move on.
What are you trying to accomplish by proposing "alternative" solutions to Apple's proposal? You seem to think this will help keep people from attacking encryption, but I'm wrong to say that it will help by reducing their fears, or by distracting them, or by teaching them, or by solving the problems that they think they have, or... anything.
You tell me that "if we think it’s a bad trade-off that is taking us in the direction of worse and worse privacy compromises, we aren’t likely to be able to persuade people to ignore the real trade-offs, but we stand a chance of getting them to accept a better solution to the same problem." But then you tell me that "encryption is not a prize" and the goal is not to convince them of anything, which to me completely contradicts the previous sentence.
If encryption isn't a prize, if "nobody is thinking of them as prizes", then why does it sound like you're telling me that preserving encryption is conditional on me coming up with some kind of alternative? If encryption isn't a prize, then great, let's take it off the table.
But then I'm told that taking encryption off the table means that "you aren’t willing to participate with people who don’t agree to a set of terms". So apparently encryption is on the table, and I am coming up with alternative solutions in order to convince people to attack something else? But that's not what you mean either, because you tell me that people will always attack encryption, so I don't even know.
You're jumping back and forth between positions that seem completely contradictory to me. I thought that you had a different view than me about how reasonable privacy-critics actually were, but apparently you also have different views than me about what the problem is that Apple is trying to solve, what privacy-critics even want in the first place, what the end goal of all of this public debate actually is. Maybe you even disagree with me about what privacy and human rights are, since "those are all in fact fluid concepts whose status is changing as time goes by".
So I need you to either lay out your views very plainly without any flowery language or expansion in a way that I can understand, or I need to stop having this conversation because I don't know what else I can say other than that I find your views incomprehensible. If you can't do that, then fine, we can mutually call each others' views gibberish and incoherent, and we can go off and do something more productive with our evenings. But I'll give this exactly one last try:
----
> Most of what you’ve written in this reply is frankly incoherent
Okay, plain language, no elaboration. Maybe this isn't what you're arguing about, maybe it is. I don't care. Here's my position:
A) it is desirable to reduce CSAM without violating privacy.
B) the downsides of violating privacy are greater than the upsides of reducing CSAM.
C) most of the people arguing in favor of violating privacy to stop CSAM are either arguing in bad faith or ignorance.
D) the ones that aren't should be gently educated about the downsides of breaking encryption and violating human rights.
E) the ones that refuse to be educated are never going to change their views.
F) compromising with them is a waste of time, and calls to "work with the critics" instead of educating them are a waste of time.
G) working with critics who refuse to be educated about the downsides of violating privacy will not help accomplish point A (it is desirable to reduce CSAM without violating privacy).
H) thus, we should refuse to engage with people about reducing CSAM unless they take encryption/privacy/human rights off of the table (on this point, you understood my views completely, people who view CSAM as a bigger deal than human rights shouldn't be engaged with)
I) a technical solution that reduces CSAM without violating privacy may or may not be possible. But it doesn't matter. Even if a technical solution without violating privacy is impossible, violating privacy is still off the table, because the downsides of removing poeple's privacy rights would still be larger than the upsides of removing CSAM.
Can you give me a straightforward, bullet-point list of what statements above you disagree with, if any?
>> 2) We can offer them some kind of compromise solution that may or may not actually address their problem, but will make them feel like it does, and which will in theory make them less likely to try and ban encryption.
> Why are those the only two solutions? That seems like a false dichotomy.
> It's not? It's a real dichotomy. What other solution could there be?
3) Offer a better technical solution that is less of a compromise than what Apple is offering, or indeed is not a compromise at all.
> I mean, OK, I guess there are other solutions we could try like ignoring them or attacking them or putting them in prison or some garbage, but to me those kinds of solutions are off the table. So we either figure out some way to satisfy them, or convince them that we're right. That's not a false dichotomy, those are the only 2 options.
> I assume you're suggesting #2 because you're sure as heck not suggesting #1, and I can't figure out what else you could be suggesting.
I’m not suggesting #2 because #2 is a straw man.
----
>> why do you think I think X
> Frankly, if this isn't what you think, then I don't understand what you're thinking.
Ok - that seems like a straightforward response. You don’t understand. But I clearly am not saying the things you are attributing to me.
I have now repeatedly asked where I said anything that leads you to think these are my views. It’s rare that you answer. From my point of view that means you aren’t actually responding to what I have written. You read what I write, don’t understand it, make something up that isn’t what I’ve said (or is even directly contradicted by what I’ve said), and then you tell me that’s what I’m saying.
If this was a one time thing, it would be fine, but at this point it doesn’t seem to matter what I say - you’ll just respond as if I said something else, and you won’t explain why when asked. From here it looks like you are having a discussion with your own imagination, rather than with what I write.
Here’s an example:
>> You keep on saying that we need to offer solutions, we can't just criticize Apple's proposal,
Where do I ‘keep saying that we can’t just criticize Apple’s proposal’? If that is something I have said more than once, you should be able to quote me. If not, then it isn’t actually something I keep saying; it’s only in your imagination that I am saying it.
> we have to offer an alternative if we're going to criticize. But why?
Another example of something you are imagining me to be saying, but that I am not saying.
> - I thought the point was to get rid of people's fears: no, you're saying that's not what you mean.
I have now said it is not what I mean, multiple times with explanation, and yet you keep saying it is. Why is that?
> - I thought the point was to compromise with critics:
Why do you think that? I have never said it. Again it’s something you are imagining. What is the text that made you imagine it? If we knew that, we could uncover where you haven’t understood.
> no, you're saying that's not what you mean.
Of course, because I didn’t say it.
> - I thought the point was to try and get people to stop attacking encryption:
Again, I have never said this was the point; on the contrary, I have said we can never do so.
But you have not explained why you thought this was the point.
> no, you're saying that's not what you mean.
> - Heck, I thought the point was to reduce CSAM, and you're telling me now that even that's not what you mean either?
In this case the misunderstanding is mine. I misunderstood ‘reduce CSAM’ as ‘reduce CSAM detection’, i.e. I read it as getting Apple to reduce its detection efforts.
> What does it even mean to ‘reduce CSAM’?
This is what I do when I don’t understand what someone has written - I ask them. You answered, and we have uncovered where I misunderstood.
If you answered my questions, we might have understood why you haven’t been understanding me.
> Why do we care about changing people’s minds here about encryption?
It seems to me that you have an agenda to change people’s minds about encryption. What isn’t clear is why you attribute that to me.
> What? We're on the same thread, right? We're commenting under an article about Apple instituting policies to reduce CSAM, ie, to make it so there is less CSAM floating around in the wild. When you talk about a "solution", what problem are you even trying to solve? Because all of us here are talking about CSAM, that's what Apple's system is designed to detect.
Agreed - like I say I just misread the phrase.
> I don't understand. How can you possibly not be talking about CSAM right now? That's literally what this entire controversy is about, that's the only reason this thread exists.
Agreed - like I say, I just misread the phrase.
----
> Honest to God, hand over my heart, I am not trolling you right now.
The reason it looks like trolling is that when you say ‘you are saying X’, and X doesn’t appear to be supported by my words, X seems like a straw man. I have assumed this not to be intentional, and I believe you, but by not answering the question ‘why would you think I think that?’ you created ambiguity about your intentions.
> I understand that this is frustrating to you,
It’s not so much ‘frustrating’, as not functional as a discussion. If you misunderstand me and don’t answer questions aimed at getting to the root of the misunderstanding then you’ll likely just talk past me. I am just trying to evaluate whether an alternative is possible.
> but my experience throughout this conversation has been:
> - You say something
> - I try to interpret and build on it
> - You tell me that's not what you meant and ask me why I thought that
> - Okay, I try to reinterpret and explain
> - The cycle repeats
This seems close to a description of what I am seeing, but not quite. Let's examine the steps:
1. You say something
2. I try to interpret and build on it
3. You tell me that's not what you meant and ask me why I thought that
4. Okay, I try to reinterpret and explain
5. The cycle repeats
In #3 you say 'You tell me that's not what you meant and ask me why I thought that'. This isn't quite true. I don't ask 'why you thought that' in a vague way. I ask 'what did I say that made you think that?'. I admit there may have been a few lapses, but most of the time I ask what I said that led to your understanding.
In #4 you said “I try to reinterpret and explain”. What you don’t do is answer the question - what is it I said that led to your understanding?
By not answering this question, we don’t get to the root cause of the misunderstanding.
> - The only information I can get out of you is that I apparently don't understand you.
You don’t.
> I'm not getting any clarification. You just tell me that I'm misunderstanding your position and then you move on.
This is false. I ask what I said that led to the misunderstanding. I do not move on.
> What are you trying to accomplish by proposing "alternative" solutions to Apple's proposal?
> You seem to think this will help keep people from attacking encryption,
What have I said that makes you think that?
[there are a few paragraphs that I can’t respond to because they don’t make sense]
> But then I'm told that taking encryption off the table means that "you aren’t willing to participate with people who don’t agree to a set of terms".
Did I misunderstand you? Did you mean something else by ‘taking encryption off the table’?
> So apparently encryption is on the table, and I am coming up with alternative solutions in order to convince people to attack something else? But that's not what you mean either, because you tell me that people will always attack encryption, so I don't even know.
I thought you agreed that there are some people who will always attack encryption. I didn’t think it was just me ‘telling you that’. Did I misunderstand you - do you think you can get people to stop attacking encryption?
> You're jumping back and forth between positions that seem completely contradictory to me.
That’s possible, but I don’t think so. Can you quote where you think I have contradicted myself?
> I thought that you had a different view than me about how reasonable privacy-critics actually were, but apparently you also have different views than me about what the problem is that Apple is trying to solve, what privacy-critics even want in the first place, what the end goal of all of this public debate actually is. Maybe you even disagree with me about what privacy and human rights are, since "those are all in fact fluid concepts whose status is changing as time goes by".
This seems like sarcasm and bad faith. You are misrepresenting me. For example, I have never mentioned human rights.
Privacy on the other hand, is definitely a fluid concept.
What we consider it to mean has changed over time as both technology and society have developed.
> So I need you to either lay out your views very plainly without any flowery language or expansion in a way that I can understand,
What do you mean by flowery language?
> or I need to stop having this conversation because I don't know what else I can say other than that I find your views incomprehensible.
I know you do.
> If you can't do that, then fine, we can mutually call each others' views gibberish and incoherent,
Your views to the extent that I know them, don’t seem gibberish or incoherent. It’s when you incorporate interpretations of my views that don’t relate to what I have said, that what you write appears incoherent to me.
> and we can go off and do something more productive with our evenings. But I'll give this exactly one last try:
----
> Most of what you’ve written in this reply is frankly incoherent
Okay, plain language, no elaboration. Maybe this isn't what you're arguing about, maybe it is. I don't care. Here's my position:
A) it is desirable to reduce CSAM without violating privacy.
B) the downsides of violating privacy are greater than the upsides of reducing CSAM.
C) most of the people arguing in favor of violating privacy to stop CSAM are either arguing in bad faith or ignorance.
D) the ones that aren't should be gently educated about the downsides of breaking encryption and violating human rights.
E) the ones that refuse to be educated are never going to change their views.
F) compromising with them is a waste of time, and calls to "work with the critics" instead of educating them are a waste of time.
G) working with critics who refuse to be educated about the downsides of violating privacy will not help accomplish point A (it is desirable to reduce CSAM without violating privacy).
H) thus, we should refuse to engage with people about reducing CSAM unless they take encryption/privacy/human rights off of the table (on this point, you understood my views completely, people who view CSAM as a bigger deal than human rights shouldn't be engaged with)
I) a technical solution that reduces CSAM without violating privacy may or may not be possible. But it doesn't matter. Even if a technical solution without violating privacy is impossible, violating privacy is still off the table, because the downsides of removing people's privacy rights would still be larger than the upsides of removing CSAM.
Can you give me a straightforward, bullet-point list of what statements above you disagree with, if any?
Honestly, no. This looks like just a blunt attempt to win some argument of your own with me playing a role that has nothing to do with the conversation so far. You are also asking me to do a lot of work to answer your questions when you have been unwilling to answer mine. That doesn’t seem like good faith.
Remember, you came to this subthread by replying to me. But you have consistently ignored clarifying questions.
Was your goal all along simply to ignore what I have been saying and find a spot to make your own case? I am genuinely unsure.
How about we start somewhere simpler? When I ask 'what did I say that made you think that', can you explain why you rarely answer?
> When I ask 'what did I say that made you think that', can you explain why you rarely answer?
Okay, sure. When you ask me to try and justify why I think you hold your position, I interpret that as a distraction (hopefully a good faith one). I don't want to argue on a meta-level about why I got confused about your comments, I want to know what you believe. I'm frustrated that you keep trying to dig into "why are you confused" instead of just clarifying your position.
My feeling is we could have skipped this entire debate if you had sat down and made an extremely straightforward checklist of your main points, consisting of maybe 5-10 bullet points, each one to two sentences max. This is a thing I've done multiple times now about my beliefs/positions during this discussion. If we get mixed up about what the other person is saying, the best thing to do is not to dive into that, it's to take a step back and try to clarify from the start in extremely clear language.
You looked at the final checklist and said "this looks like just a blunt attempt to win some argument of your own". I looked at it as a charitable invitation to step back, write 10-20 sentences instead of 15 paragraphs, and to just cut through the noise and figure out where we disagree. If your checklist doesn't overlap with mine, fine. It's not bad for us to discover that we're arguing past each other. What's bad is if we spend X paragraphs getting frustrated about meta-arguments that have nothing to do with Apple.
I don't want to debate language or start cross indexing each other's comments, I want to debate ideas.
So when you tell me that I'm wrong about what you believe, I look over your statements and try to reinterpret, and I move on. Very rarely is my instinct to sit down and try to catalog a list of statements to try and prove to you that you do believe what I think, because I take it as a given that if you tell me that I misinterpreted you... I did.
So I accept it and move on.
----
Yes, we could get into a giant debate about "what makes you think I think that". That might go something like:
> You seem to think this will help keep people from attacking encryption,
> What have I said that makes you think that?
And I could reply by linking back to one of your previous comments:
> "Most people just want reasonable solutions and aren’t going to be persuaded by either extreme. If you make an argument about creeping authoritarianism they’ll say ‘child porn is a real problem, and that risk is distant’.
> If you offer them a more privacy preserving solution to choose as well as a less privacy preserving option, they’ll likely choose the more privacy preserving option.
> Apple is offering a much more privacy preserving option than just disabling encryption. People will accept it because it seems like a reasonable trade-off in the absence of anything better."
Which to me sounds quite a bit like: "offer a solution that doesn't target encryption, and then these people won't target encryption because 'most people just want reasonable solutions'".
----
But what's the point of the above conversation? I already know that you don't interpret those 3 paragraphs about a "privacy preserving option" as meaning "a proposal that will stop reasonable people from attacking encryption." Because you told me that's not what you believe.
So how weird and petty would I need to be to start arguing with you, "actually you did mean that, and I have proof!" Is it any value to either of us to try and trip each other up over "well, technically you said"? I'm not here trying to trap you, I want to understand you.
Honestly, the short answer to why I rarely reply back with quotes about "why I think you said that", is I kind of interpreted "what makes you think I think that" as a vaguely rude attempt to derail the conversation and debate language instead of ideas, and I've been trying to graciously sidestep it and move on.
- I'm happy to debate privacy with someone
- I'm happy to listen to them so I can understand their views better
- I'm not happy to debate whether or not someone believes something. I think that's a giant meaningless waste of time.
I don't think that means you're operating in bad faith, but I can't think anything I would rather do less than spend all day going back over all of your statements to cross-reference them so I can prove that... what? That I misunderstood your actual position? I believe you, you don't need to prove to me that I misunderstood you! Let's just skip that part and move on to explaining what the actual position really is.
It doesn't matter "why I think you said what I said", it just matters that I understand you. So why get into that meaningless debate instead of just asking you to clarify or trying to reinterpret? I don't care about technicalities and I don't care about "winning" against you, and I interpret "justify why you thought I thought that" as a meaningless distraction that only has value in Internet points, not in getting me any closer to understanding what your views are.
>> When I ask 'what did I say that made you think that', can you explain why you rarely answer?
> Okay, sure.
> When you ask me to try and justify why I think you hold your position,
At this point it’s hard to read you as honest because of the frequency with which you misrepresent me. I have to assume you are unaware of this.
I am talking about why you won’t identify which words of mine lead to your misunderstandings of my position.
Why would you represent that as asking you to ‘justify’ your interpretation? That isn’t what I’m doing, and more importantly it’s not something I said. I’m just asking you to tell me what you are interpreting so I can see if what I said was ambiguous and if so how.
> I interpret that as a distraction (hopefully a good faith one).
A distraction from what? Are you not seeking to understand? Later in this comment you claim to want to know what my views are. If you ignore clarifying questions as a ‘distraction’, it seems likely that misunderstandings will keep arising.
> I don't want to argue on a meta-level about why I got confused about your comments,
Who is asking you to ‘argue on a meta-level’? I am asking you to simply say what words you are referring to when a misunderstanding becomes apparent.
> I want to know what you believe.
I recommend trying to get to the bottom of misunderstandings then.
> I'm frustrated that you keep trying to dig into "why are you confused"
You misrepresent me again. Can you not see that I haven’t asked ‘why are you confused’?
I have asked what you are referring to when you attribute a view to me that I don’t think is contained in what I wrote.
> instead of just clarifying your position.
I could clarify my position if you were willing to tell me what I said that led to your interpretations of my views.
> "Reducing the real-world occurrences for irrational fears doesn't make those fears go away." "You're saying it yourself, these people aren't motivated by statistics about abuse, they're frightened of the idea of abuse"
We could say the same thing the other way - people up in arms are not frightened by statistics of abuse of a surveillance system, but frightened of the idea of a company or government abusing it. This thread is full of people misrepresenting how it works, claims of slippery slopes straight to tyranny, there's a comparison to IBM and the Holocaust, and it's based on no real data and not even the understanding gained from simply reading the press release. This thread is not full of statistics and data about existing content filtering and surveillance systems and how often they are actually being abused. For example, Skype has been able to intercept your comms since Microsoft bought it and routed all traffic through its servers; Chrome, Firefox, and Edge block known-malware websites via Safe Browsing and SmartScreen - what are the stats on those systems being abused to block politically inconvenient memes or similar? Nothing Apple could do would in any way reassure these people, because the fears are not based on information. For example your comment:
> "We know the risks of outing minors to parents if they're on an LGBTQ+ spectrum."
Minors will see the prompt "if you do this, your parents will find out" and can choose not to and the parents don't find out. There's an example of the message in the Apple announcement[1]. This comment from you is reacting to a fear of something disconnected from the facts of what's been announced where that fear is guarded against as part of the design.
You could say that the hash database is from a 3rd-party so that it's not Apple acting unilaterally, but that's not taken as reassurance because the government could abuse it. OK, guard against that with Apple reviewing the alerts before doing anything with them; that's not reassuring because Apple reviews are incompetent (where do you hear of groups that are both incompetent and capable of implementing world-scale surveillance systems? conspiracy theories, mostly).

People say it scans all photos, and when they learn that it scans only photos about to be uploaded to iCloud, their opinion doesn't seem to change - because it's not reasoned based on facts, perhaps? People say it will be used by abusive partners who will set their partner's account to be a minor's to watch their chats. People explain that you can't change an adult AppleID to a minor one just like that, demonstrating the argument was fear based, not fact based. People say it is a new ability for Apple to install spyware in future, but it's obviously not - Apple have been able to "install spyware in future" since they introduced auto-installing iOS updates many years ago. People say it's a slippery slope - companies have changed direction, regulations can change, no change in opinion; nobody has any data or facts about how often systems do slide down slippery slopes, or get dragged back up them. People say it could be used by bad actors at Apple to track their exes. From the design, it couldn't. But why facts when there's fearmongering to be done?

The open letter itself has multiple inaccurate descriptions of how the thing works by the second paragraph, to present it as maximally scary.
> "We would also in general like to see more stats about how serious the problem of CSAM actually is"
We know[2] that over 12 million reports of child abuse material to NCMEC were related to Facebook Messenger, and NCMEC alone gets over 18 million tips in a year. Does that change your opinion either way? Maybe we could find out more after this system goes live - how many alerts Apple receives and how many they send on. A less panicky "Open Letter to Apple" might encourage them to make that data public - how many times it triggered in a quarter - and ask Apple to commit to removing it if it's not proving effective. And ask Apple to state what they intend to do if asked to make the system detect more things in future.
> "their fear is not based on real risk analysis or statistics or practical tradeoffs"
Look what would have to happen for this system to ruin your life in the way people here are scaremongering about:
- You would have to sync to iCloud, such that this system scans your photos. That's optional.
- Someone would have to get a malicious hash into the whole system and a photo matching it onto your device. That's nontrivial to say the least.
- Enough of those pictures to trigger the alarm.
- The Apple reviewers would have to not notice the false alarm photo of a distorted normal thing.
- NCMEC and the authorities would have to not dismiss the photo.
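The chain of safeguards above can be sketched as a toy pipeline. This is NOT Apple's actual implementation; the threshold value, class names, and review hook here are all hypothetical. It just illustrates the structural point: a single hash collision accumulates silently, a report is only possible once many matches pile up, and a human review still gates the final step.

```python
# Toy sketch of a threshold-gated match pipeline (hypothetical names and
# numbers throughout; Apple's real design and threshold are not public
# in this level of detail). Illustrates why one accidental collision
# does not, by itself, produce a report.

from dataclasses import dataclass, field

THRESHOLD = 30  # hypothetical match count required before any review happens

@dataclass
class Account:
    matched_photos: list = field(default_factory=list)

    def record_match(self, photo_id: str) -> bool:
        """Record a hash match; return True only once the threshold is crossed."""
        self.matched_photos.append(photo_id)
        return len(self.matched_photos) >= THRESHOLD

def review_and_report(account: Account, human_review) -> bool:
    """A report goes out only if the threshold is met AND a human
    reviewer confirms the matched photos are not false alarms."""
    if len(account.matched_photos) < THRESHOLD:
        return False  # below threshold: the reviewer is never even consulted
    return human_review(account.matched_photos)

acct = Account()
for i in range(5):  # a handful of false-positive collisions
    acct.record_match(f"photo_{i}")

# Five collisions: below threshold, so no report regardless of the reviewer.
print(review_and_report(acct, human_review=lambda photos: True))  # False
```

The point of the sketch is that an attacker would need to defeat every stage at once: plant enough colliding images to pass the threshold, and have them survive human review, before anything reaches the authorities.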
It's not impossible, but it's in the realms of the XKCD "rubber hose cryptography" comic. Consider Sir Cliff Richard: his house was raided by the police, the media alerted, his name dragged through the mud, then the Crown Prosecution Service decided there was nothing to prosecute. The police apologised. He sued the police and they settled out of court. The BBC apologised. He sued them and won. The Crown Prosecution Service reviewed their decision and reaffirmed that there was nothing to prosecute. His name is tarnished; forever people will be suspicious that he paid someone off or otherwise pulled strings to get away with something - a name-damaging false alarm, which is exactly what many people in this thread fear happening. Did anyone need to use a generative-adversarial network to create a clashing perceptual hash uploaded into a global analysis platform to trigger a false alarm convincing enough to pass two or three human reviews? No, two men decided they'd try to extort money and made a false rape allegation.
People aren't interested in how it works, why it works the way it does, whether it will be an effective crime fighting tool (and how that's decided) or whether it will realistically become a tyrannical system, people aren't interested in whether Apple's size and influence could be an independent oversight on the photoDNA and NCMEC databases to push back against any attempts of them being misused to track other-political topics, people are jumping straight to "horrible governments will be able to disappear critics" and ignoring that horrible governments already do that and have many much easier ways of doing that.
> "So in the real world we know that the majority of child abuse comes from people that children already know."
Those 12 million reports of child abuse material related to Facebook Messenger; does it make any difference if they involved people the child knew? If so, what difference do you think that makes? And Apple's system is to block the spread of abuse material, not (directly) to reduce abuse itself - which seems an important distinction that you're glossing over in your position "it won't reduce abuse so it shouldn't be built" when the builders are not claiming it will reduce abuse.
> "Nobody in any political sphere has ever responded to "think of the children" with "we already thought of them enough.""
Are the EFF not in the political sphere? Are the groups quoted in the letter not? Here[3] is a UK government vote from 2014 on communication interception, where it was introduced with "interception, which provides the legal power to acquire the content of a communication, are crucial to fighting crime, protecting children". 31 MPs voted against it. Here[4] is a UK government vote from 2016 on mass retention of UK citizen internet traffic, many MPs voted against it. It's not the case that "think of the children" leads to political universal agreement of any system, as you're stating. Which could be an example of you taking your position by fear instead of fact.
> "It doesn't matter what solutions we come up with or what the rate of CSAM drops to, those people are still going to be scared of the idea of privacy itself."
The UK government statement linked earlier[2] disagrees when it says "On 8 October 2019, the Council of the EU adopted its conclusions on combating child sexual abuse, stating: “The Council urges the industry to ensure lawful access for law enforcement and other competent authorities to digital evidence, including when encrypted or hosted on IT servers located abroad, without prohibiting or weakening encryption and in full respect of privacy". The people whose views you claim to describe explicitly say the opposite of how you're presenting them. Which, I predict, you're going to dismiss with something that amounts to "I won't change my opinion when presented with this fact", yes?
There are real things to criticise about this system, the chilling effect of surveillance, the chance of slippery slope progression, the nature of proprietary systems, the chance of mistakes and bugs in code or human interception, the blurred line between "things you own" and "things you own which are closely tied to the manufacturer's storage and messaging systems" - but most of the criticisms made in this thread are silly.
> This thread is not full of statistics and data about existing content filtering and surveillance systems and how often they are actually being abused.
It is filled with explanations about why the systems you mention are tangibly different from what Apple is proposing. There is a huge difference between scanning content on-device and scanning content in a cloud. That doesn't mean that scanning content in the cloud can't be dangerous, but it is still tangibly different. There is also a huge difference between a user-inspectable list of malware being blocked in a website and an opaque list of content matches that users can not inspect or debate. There is also a huge difference between a user-inspectable static list and an AI system with questionable accuracy guarantees. And there is a huge difference between a user-controlled malware list that is blocked locally without informing anyone, and a required content list that sends notifications to other people/governments/companies when it is bypassed.
That being said, if you want to look at stats about how accurate AI filters are for explicit material in the examples you mention, there are a ton of stats online about that, and they're mostly all quite bad.
> nobody has any data or facts about how often systems do slide down slippery slopes, or get dragged back up them
There's a lot to unpack in this one sentence, and it would take more time than I'm willing to give, but are you really implying that government surveillance doesn't count as a real slippery slope because sometimes activists reverse the trend?
> Minors will see the prompt "if you do this, your parents will find out" and can choose not to and the parents don't find out. There's an example of the message in the Apple announcement[1]
You misunderstand the concern. The risk is not the child themselves clicking through to the photo (although it would be easy for them to accidentally do so), it's the risk of that data being leaked from other phones because a friend thoughtlessly clicks through the prompt.
> The open letter itself has multiple inaccurate descriptions of how the thing works by the second paragraph to present it as maximally-scary.
Where? Here's the second paragraph:
> Apple's proposed technology works by continuously monitoring photos saved or shared on the user's iPhone, iPad, or Mac. One system detects if a certain number of objectionable photos is detected in iCloud storage and alerts the authorities. Another notifies a child's parents if iMessage is used to send or receive photos that a machine learning algorithm considers to contain nudity.
The only thing I can think of is the word "continuously" which is not strictly inaccurate but could be misinterpreted as saying that the scanning will be constantly running on the same set of photos, and the subtle implication that this scanning will happen to photos "saved", which might be misinterpreted as implying that this scan will happen to photos that aren't uploaded to iCloud. But given that the second sentence immediately clarifies that this is referring to photos uploaded to iCloud, it seems like a bit of a stretch to me to call this misinformation.
> people are jumping straight to "horrible governments will be able to disappear critics" and ignoring that horrible governments already do that and have many much easier ways of doing that.
Hang on a sec. A little while ago you were calling my fears theoretical, now you're admitting that governments routinely abuse this kind of power. You really think it's unreasonable to be cautious about giving them more of this power?
> Does that change your opinion either way?
It does not change my opinion, but at least it's real data, so more of that in these debates please. I'm not denying or rejecting the numbers that the UK lists.
> A less panicky "Open Letter to Apple" might encourage them to make that data public
Holy crud, I would hope this is the bare minimum. Are we really having a debate over whether or not Apple will make that data public? I thought that was just assumed that they would. If that's up in the air right now, then we've sunk really low in the overall conversation about public accountability and human rights.
> which seems an important distinction that you're glossing over in your position "it won't reduce abuse so it shouldn't be built" when the builders are not claiming it will reduce abuse.
I realize this is branching out in a different direction, but it sure as heck better reduce abuse or it's not worth building. CSAM is disgusting, but the primary reason to target it is to reduce abuse. If reducing CSAM doesn't reduce abuse, it's not worth doing and we should focus our efforts elsewhere.
I know this is something that might sound abhorrent to people, but we are having this debate because we care about children. We have to center the debate on the reduction of the creation of CSAM, the reduction of child abuse, and the reduction of gateways into child abuse. Reducing child abuse is the point. We absolutely should demand evidence that these measures reduce child abuse, because reducing child abuse and reducing the creation of CSAM is a really stinking important thing to do.
Which leads back to your other note:
> does it make any difference if they involved people the child knew? If so, what difference do you think that makes?
Yes, it makes a massive difference, because knowing more about where child abusers are coming from and how they interact with their victims makes it easier to target them and will make our efforts more effective. We should care about this stuff.
> It's not the case that "think of the children" leads to political universal agreement of any system, as you're stating.
I think you misunderstand. Nobody who's willing to bring out "think of the children" as a debate killer has ever dropped the argument because they got a concession. That there are some entities (like the EFF) who are willing to reject the argument as a debate killer and look at it through a risk analysis lens does not mean the unquestioned argument of "one child is too many" is any less toxic to real substantive political debate.
> without prohibiting or weakening encryption and in full respect of privacy
I don't want to bash on the EU too hard here, but it has this habit of just kind of tacking onto the end of its laws "but make sure no unintended bad things happen" and then acting like that solves all of their problems. It doesn't mean anything when they add these clauses. This is the same EU that argued for copyright filters and then put on the end of their laws, "but also this shouldn't harm free expression or fair use."
It means very little to me that the EU says they care about privacy. What real, tangible measures did they include to make sure that in practice encryption would not be weakened?
Look, I can do the same thing. Apple should not implement this system, but they should also reduce CSAM. See, I just proved I care about both, exactly as convincingly as the EU! So you know I'm serious about CSAM now, I said that Apple should reduce it. But I predict that you'll "dismiss with something that amounts to 'I won't change my opinion when presented with this fact', yes?"
> There are real things to criticise about this system, the chilling effect of surveillance, the chance of slippery slope progression, the nature of proprietary systems, the chance of mistakes and bugs in code or human interception, the blurred line between "things you own" and "things you own which are closely tied to the manufacturer's storage and messaging systems"
Wait, hold on. Forget literally everything that we were talking about above. This is, like, 90% of what people are criticizing! What else are people criticizing? These are really big concerns! You got to the end of your comment, then suddenly listed out 6 extremely good reasons to oppose this system, and then finished by saying, "but other than those, what's the problem?"
> "Wait, hold on. Forget literally everything that we were talking about above. This is, like, 90% of what people are criticizing! These are really big concerns!"
My point is your original point - where is the data to support these criticisms, the facts, the statistics? Merely saying "I can imagine some hypothetical future where this could be terrible and misused" should not be enough to conclude that it is, in fact, terrible, and will more likely than not be misused.
We've had years of leaks showing that three-letter agencies and governments simply don't need to misuse things like this. The USA didn't slide down a slope of banning asbestos for health reasons and end up "oops" banning recreational marijuana. The USA didn't slide down a slippery slope into the Transportation Security Administration after 9/11; it appeared almost overnight, and then didn't slide down a slippery slope into checking for other things - it has stayed much the same ever since.
The fact that one can imagine a bad future is not the same as the bad future being inevitable; the fact that one can imagine a system being put to different uses doesn't mean it either will be, or that those uses will necessarily be worse, or that they will certainly be maximum-bad. It's your comment about "fear based reasoning" turned to this system instead of to encryption.
You ask "are you really implying that government surveillance doesn't count as a real slippery slope because sometimes activists reverse the trend?" and I'm saying the position "because slippery slopes exist, this system will slide down it and that's the same as being at the bottom of it" and then expecting the reader to accept that without any data, facts, evidence, stats, etc. is low quality unconvincing commenting, but is what makes up most of the comments in this thread.
> "Where? Here's the second paragraph:"
The paragraph which implies it happens to all photos (not just iCloud ones), and immediately alerts the authorities with no review and no appeal process. There are people in this thread saying "I don't need the FBI getting called on me cause my browser cache smelled funny to some Apple PhD's machine-learning decision" about a system which does not look at browser caches, does not call the FBI, has a review process, and has an appeal process.
> "Holy crud, I would hope this is the bare minimum."
Why would you hope "the bare minimum" the letter could ask for is something the letter is clearly not asking for? Or that the bare minimum from a company known for its secrecy is openness and transparency? It would be nice if it was, yes. I expect it won't be, because we would all have very different legal systems and companies if laws and company policies were created with metrics to track their effectiveness and specified expiry dates, and by default were only renewed if they proved effective.
> "What else are people criticizing?"
My main complaint is that people are asking us to accept criticism such as "Iraq will use this to murder homosexuals" unquestioningly. But still, to quote from people in this thread: "Apple can (and likely will) say they won't do it and then do it anyway." - as if Apple, despite announcing this in public, is going to lie about it, and you should just believe that without any support. "This will lead to black mailing of future presidents in the U.S." - and you should believe that because reasons. "Made for China" - and you should agree because China is the boogeyman. (Maybe it is; if so, justify why the reader should agree.) "It's not Apple. It's the government" - because government bad. "Scan phones for porn, then sell it for profit and use it for blackmail. Epstein on steroids" - because QAnon or something, who even knows? "the obvious conclusion is Apple will start to scan photos kept on device, even where iCloud is not used." - because they said 'obviously', you have to agree or you're clueless, I guess. "I never thought I'd see 'privacy' Apple come out and say we're going to [..] scan you imessages, etc." - and they didn't say that; unless the commenter is a minor, which is against the HN guidelines.
It's very largely unreasoned, unjustified, unsupported, panicky, worst-case fearmongering, even when the concerns could be serious if properly justified.
> "Nobody who's willing to bring out "think of the children" as a debate killer has ever dropped the argument because they got a concession."
That is probably true, but also probably self-selecting. Someone who honestly uses "think of the children" likely thinks the children's safety is not being thought of enough, and is by self-selection less likely to immediately turn around and agree with the opposite.
> "It means very little to me that the EU says they care about privacy."
Well, the witch is being drowned despite her protests.
> "What real, tangible measures did they include to make sure that in practice encryption would not be weakened?"
Well, they didn't /ban/ it for a start, which they could have done, as exemplified by Saudi Arabia and FaceTime discussed in this thread; and they didn't explicitly weaken it the way the USA did with its strong-encryption export regulations of the 1990s. Those should count for something in defense of their stated position?
I'm not going to push too hard on this, but I do want to quickly point out:
> Well they didn't /ban/ it for a start [...] and they didn't explicitly weaken it
Does not match up with:
> urges the industry to ensure lawful access for law enforcement and other competent authorities to digital evidence, including when encrypted
If you're pushing a company to ensure access to encrypted content based on a warrant, you are banning/weakening E2E encryption. It doesn't matter what they say their intention is/was, or whether they call that an outright ban, I don't view that as a credible defense.
----
My feeling is that we have a lot of evidence from the past and present, particularly in the EU, about how filtering/reporting laws evolve over time (the EU's CSAM filters within the ISP industry are a particularly relevant example here; you can find statements online where EU leaders argue that expanding the system to copyright is a good idea specifically because the system already exists and would be inexpensive to expand). I also look at US programs like the TSA and ICE, and I do think their scope, authority, and restrictions have expanded quite a bit over the years. I don't agree that those programs came out of nowhere or that they're currently static.
If you don't see future abuse of this system as credible, or if you don't see a danger of this turning into a general reporting requirement for encrypted content, or if you don't think that it's credible that Apple would be willing to adapt this system for other governments -- if you see that stuff as fearmongering, then fine I guess. We're looking at the same data and the same history of government abuses and we're coming to different conclusions, so our disagreement/worldview differences are probably more fundamental than just the data.
To complain about some of the more extreme claims happening online (and under this article) is valid, but I feel you're extrapolating a bit here and taking some uncharitable readings of what people are saying (you criticize the article for "implying" things about the FBI, and the article doesn't even contain the words FBI). Regardless, the basic concerns (the chilling effect of surveillance, the chance of slippery-slope progression, the nature of proprietary systems, the chance of mistakes and bugs in code or human interception, the blurred line between "things you own" and "things you own which are closely tied to the manufacturer's storage and messaging systems") are enough of a problem on their own. We really don't need to debate whether or not Apple will be willing to expand this system for additional filtering in China.
We can get mad about people who believe that Apple is about to start blackmailing politicians, but the existence of those arguments shouldn't be taken as evidence that the system doesn't still have serious issues.
>2. The complaints are all slippery slope arguments that governments will force Apple to abuse the mechanism. These are clearly real concerns and even Tim Cook admits that if you build a back door, bad people will use it.
I don't know how old you are but for anyone over 30 the slippery slope isn't a logical fallacy, it's a law of nature. I've seen it happen again and again. The only way to win is to be so unreasonable that you defend literal child porn so you don't need to defend your right to own curtains.
The intention to prevent child sexual abuse material is indeed laudable. But the "solution" is not. Especially when we all know that this "solution" gives legal backing to corporations to do even more data mining on their users, and can easily be extended (to scan even more "illegal" content) and abused into the pervasive surveillance network desired by a lot of governments around the world.
For those wondering what's wrong with this, two hard-earned rights in a democracy go for a toss here:
1. The law presumes we are all innocent until proven guilty.
2. We have the right against self-incrimination.
Pervasive surveillance like this starts with the presumption that we are all guilty of something ("if you are innocent, why are you scared of such surveillance?" or "what do you have to hide if you are innocent?"). The right against self-incrimination is linked to the first doctrine because compelling an accused to testify transfers the burden of proving innocence to the accused, instead of requiring the government to prove his guilt.
Agreed, that’s my point, and I would say that your comment outlines some good principles a better solution might respect, although it’s worth noting that those principles have only ever constrained the legal system.
Your employer, teacher, customers, vendors, parents etc, have never been constrained in these ways.
I appreciate this well thought out and logical approach.
The issue I have is with the way this is presented.
I actually really like the local-only features related to making underage users second-guess sending/viewing potentially harmful content. IMO this is a huge problem especially since most do not have the context to understand how things live forever on the internet, and is likely a source of a large percentage of this "Known CSAM material" in circulation. I think it's a great step in the right direction.
But the scan-and-report content on your device approach is where it becomes problematic.
It's sold as protecting privacy because a magic box, "the cryptographic technology", somehow prevents abuse or use outside its intended scope. But just as this entire function can be added with an update, it can also be modified with an update. Apple pretends the true nature of this feature is somehow solved by technology, and glosses over the technical details with catchphrases that 99.9% of the people who read them won't understand. "It's 100% safe and effective and CANNOT IN ANY WAY BE ABUSED because, cryptographic technology." They're forcing their users to swallow a sugar-coated pill with very deep ramifications that only a very small percentage will fully comprehend.
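For anyone who wants to see past the catchphrases, here's a toy sketch (emphatically NOT Apple's actual NeuralHash, which is a neural network, nor its private set intersection protocol; all names here are illustrative) of what perceptual-hash matching boils down to: reduce an image to a short fingerprint and compare it against a list of target fingerprints. The point is that the target list is just data, exactly the kind of thing an update can swap out.

```python
# Toy "average hash" over an 8x8 grayscale thumbnail, for illustration only.

def average_hash(pixels):
    """pixels: flat list of 64 grayscale values (an 8x8 thumbnail)."""
    mean = sum(pixels) / len(pixels)
    # One bit per pixel: brighter than the mean or not.
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches(pixels, target_hashes, max_distance=5):
    """Flag the image if its hash is 'near' any target hash.
    The target list is opaque data: whoever ships it decides what
    gets flagged, and an update can replace it silently."""
    h = average_hash(pixels)
    return any(hamming(h, t) <= max_distance for t in target_hashes)
```

Nothing in the cryptography constrains what goes into `target_hashes`; that constraint is purely a policy promise.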
I'm 100% for doing anything reasonably possible in the advancement of this cause. But you're correct in that this is "one fear is more important than another fear." And I don't think anyone can say how slippery this slope is or where it can go. I also don't really feel like you can even quantify in a way that they can be compared the harm caused by CSAM or mass surveillance. So a judgement call on "which is worse" really isn't possible because the possibilities for new and continued atrocities in both cases are infinite.
But at least in the US, a line is crossed when something inside your private life is subject to review and search by anyone else even when you have not committed a crime. If you want to search my house, you must, at least on paper, establish probable cause before a judge will grant you a search warrant.
"It's okay, these specially trained agents are only here to look for drugs. They've promised they won't look at anything else and aren't allowed to arrest you unless they find a large quantity of drugs." Would you be okay with these agents searching every house in the nation in the name of preventing drug abuse (which is also extremely harmful to many people, including children, even when they are not direct users)?
The argument "well just don't use apple" doesn't stand either. A landlord can't just deputize someone to rummage through my house looking for known illegal things to report. Even though it's technically not "my house" and you could argue that if I don't like it, well I should just move somewhere else. But I don't think that can be argued as reasonable. Our phones are quickly becoming like our houses in that many people have large portions of their private lives inside them.
I can't quite put my finger on the exact argument, so I apologize for not articulating this more clearly, but there is something wrong with removing the rights of a large group of people to protect a much smaller subset of them from an act perpetrated by bad actors. You are somehow allowing bad actors to inflict further damage, in excess of the direct results of their actions, on huge populations of people. I know there is a more eloquent way to describe this concept, but doing so just doesn't seem like the right course of action.
And I'm sorry but I am not smarter than all of the people that worked on this, so I don't have a better solution that would accomplish the same goal, but I know that this solution has a huge potential to enable massive amounts of abuse by nations who do not respect what much of the world considers basic human rights.
> But just like this entire function can be added with an update, it can also be modified with an update.
That argument is a valid argument that by using an Apple device you are trusting them not to issue an abusive update in the future. It applies regardless of whether they release this feature or not - at any time they could issue spyware to any or all devices.
I actually fully agree that this is a problem.
My position is that Apple isn’t going to solve this problem. If we want it solved, we need to solve it.
The value of using Apple devices today, and even the sense that Apple is going to protect children who use its devices, far outweighs, in most people's minds, relatively vague and unproven assertions about future abuses that haven't yet materialized, even if those assertions turn out to be right in the end.
> So: where are the proposals for a better solution?
Better policing and child protective services to catch child abuse at the root, instead of panicking about the digital files it produces? If you'd been paying attention, you'd have noticed that real-world surveillance has massively increased, which should enable the police to catch predators more easily. Why count only privacy-betraying technology as a "solution", while ignoring the rise of police and surveillance capabilities?
Edit as reply because two downvotes means I am "posting too fast, please slow down" (thank you for respecting me enough to tell me when I can resume posting /s):
> How do you police people reaching out to children via messaging with sexual content?
First, this is one small element of child abuse - you want to prevent child rape, and merely being exposed to sexual content is nowhere near severe enough to merit such serious privacy invasion. To prevent the actual abuse, one could use the near-omnipresent facial recognition cameras, license plate readers, messaging metadata to find when a stranger is messaging or stalking a child, DNA evidence after the fact that is a deterrent to other offenders, phone location data, etc. etc. I didn't think I'd have to spell this out.
Second, to answer your question: very easily. With parental controls, a decades old technology that is compatible with privacy and open-source. The parent can be the admin of the child's devices, and have access to their otherwise encrypted messages. There is no need to delegate surveillance (of everyone, not just children) to governments and corporations, when we have such a simple, obvious, already existing solution. It frankly boggles the mind how one could overlook it, especially compared to how technically complex Apple's approach is. Does the mention of child abuse simply cause one's thinking to shut down, and accept as gospel anything Apple or the government says, without applying the smallest bit of scrutiny?
You have overlooked that the system for protecting minors from sexual messaging /is/ the parental controls you wish it was, and is /not/ the serious privacy invasion you think it is. It is on-device only, it only alerts the parents, and only on the condition that the parents enable it and the child continues past a warning informing them that their parent will find out if they continue.
> "The parent can be the admin of the child's devices, and have access to their otherwise encrypted messages."
Apple have done even better - the parent doesn't get access to all their messages (at least not as part of this system).
This is not the same system as the photo-scanning and authority-alerting one.
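To make the distinction concrete, here is a sketch of the gating just described (the function name and return values are illustrative placeholders, not Apple's actual API): the check runs on-device, does nothing unless a parent opted in, warns the child first, and alerts only the parent, never any authority.

```python
# Illustrative sketch of the messaging-safety gating described above.
# Not Apple's API; names and return values are made up for clarity.

def handle_flagged_image(parental_alerts_enabled, child_confirms_view):
    if not parental_alerts_enabled:
        return "deliver"                # feature off: nothing happens
    if not child_confirms_view:
        return "blocked_after_warning"  # child heeded the on-device warning
    return "deliver_and_notify_parent"  # only the parent is alerted
```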
How do you police people reaching out to children via messaging with sexual content?
> Why count only privacy-betraying technology as a "solution", while ignoring the rise of police and surveillance capabilities?
I don’t. But if your solution is ‘just police more’, you need to explain how the police should detect people who are grooming children by sending images to them.