> "Wait, hold on. Forget literally everything that we were talking about above. This is, like, 90% of what people are criticizing! These are really big concerns!"

My point is your original point: where is the data to support these criticisms, the facts, the statistics? Merely saying "I can imagine some hypothetical future where this could be terrible and misused" should not be enough to conclude that it is, in fact, terrible, and will more likely than not be misused.

We've had years of leaks showing that three-letter agencies and governments simply don't need to misuse things like this. The USA didn't slide down a slope from banning asbestos for health reasons into "oops" banning recreational marijuana. The USA didn't slide down a slippery slope into the Transportation Security Administration after 9/11; it appeared almost overnight, then didn't slide further into checking for other things, and has stayed much the same ever since.

The fact that one can imagine a bad future is not the same as that bad future being inevitable; the fact that one can imagine a system being put to different uses doesn't mean it will be, or that those uses will necessarily be worse, or that they will certainly be maximally bad. It's your comment about "fear-based reasoning" turned on this system instead of on encryption.

You ask "are you really implying that government surveillance doesn't count as a real slippery slope because sometimes activists reverse the trend?" and I'm saying the position "because slippery slopes exist, this system will slide down it and that's the same as being at the bottom of it" and then expecting the reader to accept that without any data, facts, evidence, stats, etc. is low quality unconvincing commenting, but is what makes up most of the comments in this thread.

> "Where? Here's the second paragraph:"

The paragraph which implies it happens to all photos (not just iCloud ones) and immediately alerts the authorities with no review and no appeal process. There are people in this thread saying "I don't need the FBI getting called on me cause my browser cache smelled funny to some Apple PhD's machine-learning decision" about a system which does not look at browser caches, does not call the FBI, has a review process, and has an appeal process.

> "Holy crud, I would hope this is the bare minimum."

Why would you hope "the bare minimum" the letter could ask for is something the letter is clearly not asking for? Or that the bare minimum from a company known for its secrecy would be openness and transparency? It would be nice if it were, yes. I expect it won't be, because we would all have very different legal systems and companies if laws and company policies were created with metrics to track their effectiveness and specified expiry dates, and by default were only renewed if they proved effective.

> "What else are people criticizing?"

My main complaint is that people are asking us to accept criticisms such as "Iraq will use this to murder homosexuals" unquestioningly. To quote from people in this thread:

"Apple can (and likely will) say they won't do it and then do it anyway." - so despite Apple announcing this in public, they're going to lie about it, and you should just believe that without the position being supported in any way.

"This will lead to black mailing of future presidents in the U.S." - and you should believe that because reasons.

"Made for China" - and you should agree because China is the boogeyman. (Maybe it is; if so, justify why the reader should agree.)

"It's not Apple. It's the government" - because government bad.

"Scan phones for porn, then sell it for profit and use it for blackmail. Epstein on steroids" - because QAnon or something, who even knows?

"the obvious conclusion is Apple will start to scan photos kept on device, even where iCloud is not used." - because they said "obviously", you have to agree or you're clueless, I guess.

"I never thought I'd see 'privacy' Apple come out and say we're going to [..] scan you imessages, etc." - and they didn't say that; unless the commenter is a minor, which is against the HN guidelines.

It's very largely unreasoned, unjustified, unsupported, panicky, worst-case fearmongering, even where the underlying concerns could be serious if they were justified.

> "Nobody who's willing to bring out "think of the children" as a debate killer has ever dropped the argument because they got a concession."

That is probably true, but it's also somewhat self-supporting: someone who honestly uses "think of the children" likely thinks children's safety is not being considered enough, and by self-selection is less likely to immediately turn around and agree with the opposite.

> "It means very little to me that the EU says they care about privacy."

Well, the witch is being drowned despite her protests.

> "What real, tangible measures did they include to make sure that in practice encryption would not be weakened?"

Well, they didn't /ban/ it, for a start, which they could have done (as exemplified by Saudi Arabia and FaceTime, discussed elsewhere in this thread), and they didn't explicitly weaken it the way the USA did with its strong-encryption export regulations of the 1990s. Shouldn't those count for something in defense of their stated position?




I'm not going to push too hard on this, but I do want to quickly point out:

> Well they didn't /ban/ it for a start [...] and they didn't explicitly weaken it

Does not match up with:

> urges the industry to ensure lawful access for law enforcement and other competent authorities to digital evidence, including when encrypted

If you're pushing a company to ensure access to encrypted content based on a warrant, you are banning or weakening E2E encryption. It doesn't matter what they say their intention is or was, or whether they call it an outright ban; I don't view that as a credible defense.

----

My feeling is that we have a lot of evidence from the past and present, particularly in the EU, about how filtering/reporting laws evolve over time (the EU's CSAM filters within the ISP industry are a particularly relevant example here; you can find statements online where EU leaders argue that expanding the system to copyright is a good idea specifically because the system already exists and would be inexpensive to expand). I also look at US programs like the TSA and ICE, and I do think their scope, authority, and restrictions have expanded quite a bit over the years. I don't agree that those programs came out of nowhere or that they're currently static.

If you don't see future abuse of this system as credible, or a danger of this turning into a general reporting requirement for encrypted content, or if you don't think it's credible that Apple would be willing to adapt this system for other governments; if you see all of that as fearmongering, then fine, I guess. We're looking at the same data and the same history of government abuses and coming to different conclusions, so our disagreement probably comes down to worldview differences more fundamental than the data.

Complaining about some of the more extreme claims happening online (and under this article) is valid, but I feel you're extrapolating a bit here and taking some uncharitable readings of what people are saying (you criticize the article for "implying" things about the FBI, and the article doesn't even contain the word "FBI"). Regardless, the basic concerns (the "chilling effect of surveillance, the chance of slippery slope progression, the nature of proprietary systems, the chance of mistakes and bugs in code or human interception, the blurred line between 'things you own' and 'things you own which are closely tied to the manufacturer's storage and messaging systems'") are enough of a problem on their own. We really don't need to debate whether or not Apple would be willing to expand this system for additional filtering in China.

We can get mad about people who believe that Apple is about to start blackmailing politicians, but the existence of those arguments shouldn't be taken as evidence that the system doesn't still have serious issues.



