I'm guessing you mean being able to use it outside of their apps? That's my main concern. I'm mainly a Mail.app user, and having to run the bridge all the time does not seem ideal for me. I haven't tried it out, so this is just me talking from the outside.
Search sucks, really: you can't find things unless you know the sender, their full-text search only seems to work in a browser and only after indexing (which can take an hour), and it goes away over time because it's browser local storage.
Sending emails directly from the share sheet in iOS failed twice.
Unable to add a calendar invite to my calendar on iOS or in the web app (if invited by someone else).
Email formatting breaks when forwarding to someone else.
Long delays delivering a simple email (to Google and Apple) at least three times.
The way emails are threaded, and that trashed messages still show in the same thread instead of being moved out.
A false sense of security, since most emails are only encrypted on one side, and mine are visible on the other side.
I also just left Proton; there are so many small issues and annoyances in their iOS/iPadOS apps:
You can’t even search the content of e-mails. Can’t increase the font size (pinch to zoom sort of works, but is impossible to use on a small iPhone (Max) screen). Can’t set any custom notifications. Can’t see which folder an email is stored in. I can go on forever…
And of course it’s not a native app either.
It was a great relief going back to the stock Mail app.
I’ll happily return to Proton when they fix all the issues, and it doesn’t kill my productivity anymore.
At least for the ones sent in the Netherlands: if the iPhone is in silent mode, instead of making noise it will vibrate strongly for a full minute or so.
It does not seem to be scanning phones, but stuff uploaded to iCloud, which is completely different. The article gives the idea that your device will be scanned.
That actually makes it more okay with me. Apple can't have child pornography on their servers; that would be illegal. However, the fact that they are doing the scanning on the device could indicate that they don't have the ability to do the scans in iCloud. Presumably they can't even read the images once they're stored in iCloud, so they have to do it on the device.
I don't know if that's the reason, but seems like a reasonable guess.
Apple actually isn't legally liable for what users upload until it's reported to them. And they are capable of doing the scanning server-side, since iCloud doesn't use end-to-end encryption.
You are right that some specific features on iCloud do have end-to-end encryption (only those listed under "End-to-end encrypted data" on this page).
But the majority of users' sensitive data is not included in that set of features. For example the Photos (what's being affected here), Drive, and Backup features don't use it. Note that any encryption keys backed up using iCloud Backup are therefore effectively not end-to-end protected either.
Somewhat misleadingly, this page indicates those features use encryption both "in transit" and "at rest", but Apple controls the encryption keys in those cases, so they are actually not end-to-end encrypted.
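To make the key-control point concrete, here is a rough Python sketch (my own illustration, not Apple's implementation; it assumes the third-party cryptography package is installed) of the difference between "encrypted at rest" with provider-held keys and end-to-end encryption where only the device holds the key:

    # Illustration only: provider-held keys vs. device-held keys.
    from cryptography.fernet import Fernet

    # "Encrypted at rest": the provider generates and stores the key,
    # so it can decrypt (and scan) the stored data whenever it wants.
    provider_key = Fernet.generate_key()          # lives on the provider's servers
    stored = Fernet(provider_key).encrypt(b"user photo bytes")
    print(Fernet(provider_key).decrypt(stored))   # provider can read it back

    # End-to-end: the key is generated on and never leaves the device,
    # so the provider only ever stores ciphertext it cannot read.
    device_key = Fernet.generate_key()            # stays on the device
    uploaded = Fernet(device_key).encrypt(b"user photo bytes")
    # Without device_key, the provider cannot recover the plaintext.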
>>Before an image is stored onto iCloud Photos, the technology will search for matches of already known CSAM. Apple said that if a match is found a human reviewer will then assess and report the user to law enforcement.
>>Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes
Exactly, only "private" cloud data will be scanned instead, which is industry-standard practice for any self-respecting cloud provider anyway. It's a wonder how Apple wasn't doing it already.
In any case, this will be automated, rather than some poor Tier-1 reviewer poring over iCloud Photos.
So only the guilty (and the false positives) would worry.
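For anyone curious what the "on-device matching process against the known CSAM hashes" quoted above boils down to, here is a heavily simplified Python sketch. Apple's announced design uses a perceptual hash (NeuralHash) plus cryptographic blinding, and a match produces a "safety voucher" rather than an immediate report; the perceptual_hash function and the hash values below are placeholders of my own:

    # Heavily simplified: real systems use perceptual hashes that survive
    # resizing/re-encoding, not a plain SHA-256, plus cryptographic blinding.
    import hashlib

    KNOWN_HASHES = {
        "placeholder_hash_value_1",   # hashes distributed to the device
        "placeholder_hash_value_2",
    }

    def perceptual_hash(image_bytes: bytes) -> str:
        # Stand-in for a perceptual hash such as NeuralHash.
        return hashlib.sha256(image_bytes).hexdigest()

    def matches_known_set(image_bytes: bytes) -> bool:
        # Run on the device before the photo is uploaded to iCloud Photos.
        return perceptual_hash(image_bytes) in KNOWN_HASHES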
> So only the guilty (and the false positives) would worry.
If you truly want to "protect the children", you should have no issue with the police visiting and inspecting your house, and all of your neighbors' houses. Every few days. Unannounced, of course. And if you were to resist, you MUST be a pedophile who is actively abusing children in their basement.
Innocent until proven guilty implies no false positives. What happens if I get arrested because of a false positive? What happens to my life because there will always be that doubt from everyone?
My social life is crippled for the rest of my life because of a false positive. Which can happen to anyone. Which means everyone should worry.
This is false; the scanning occurs on the phone. Plus, as has already been discussed at length, the NCMEC database is loaded with false positives.
The "nothing to hide" argument tends to fall apart when the database being used against you is full of legal imagery (which often isn't borderline or pornographic at all -- some of the flagged images literally don't show people).
Slippery slope to non-CSAM material? That ship has sailed already. The databases are a mess. From day 1, it detects non-CSAM.
It is not a company, but a federated network – anyone can run their own server and interact with all the other nodes (much like email). The software is open source and is a community effort, although the lead developer got a grant [1] that allows him to spend time improving the project.
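To give a feel for what federation means in practice, here is a rough Python sketch of ActivityPub-style server-to-server delivery (the protocol Mastodon uses). The server names and actor URLs are made up, and real servers additionally require HTTP Signature authentication, which is omitted here:

    # One node pushing a post to another node's inbox, much like one mail
    # server delivering to another over SMTP. Hypothetical URLs and actors.
    import requests

    activity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": "https://example.social/users/alice",
        "object": {
            "type": "Note",
            "attributedTo": "https://example.social/users/alice",
            "content": "Hello from another node in the network!",
        },
    }

    resp = requests.post(
        "https://other.instance/users/bob/inbox",
        json=activity,
        headers={"Content-Type": "application/activity+json"},
    )
    print(resp.status_code)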
Looks like Patreon if you read the web site. And Liberapay, I guess.
It’s open source “host your own” software; I don’t think they are trying to monetize beyond paying the developer a living wage.
So, a question here: if I move from outside the EU to a country that is part of the bloc, does that mean my Google Account terms will change and mention something about the GDPR?