So it sounds like Zoom was using the Facebook SDK, and now they're not.
I've been an iOS developer for a long time. I can tell you from experience that everyone does this. I have never worked for anyone who didn't ask for their app to include some combination of Facebook, Google, Flurry, AppCenter, Segment, Intercom, Parse, or whatever other random analytics framework the PM happens to be infatuated with.
Getting mad at Zoom for using the Facebook SDK is missing the point. They and a million others are always going to be doing this. Get mad at Apple for not letting you wireshark your own iPhone. Or having no way to package open source software where you can actually see what's running. As long as you're running binary blobs that can make whatever network connections they please, people are going to take your data and send it to places you don't know about.
Yeah maybe you can pass laws about it. But is that really a great solution? Who audits that? How do you determine what's legal and what's not? We should be pushing for a platform that makes it obvious what the software you're running is up to. The random pitchfork crusade against whatever company happens to catch a bad news cycle just isn't going to get us anywhere.
I don't want to live in a world where my parents and grandparents are expected to pull up Wireshark to figure out if the app they're using will record their front camera without consent.
Blaming Zoom and FB is entirely acceptable here; it is their responsibility to keep my data private.
Blaming Apple? Why, when Zoom is on the Play Store as well?
>As long as you're running binary blobs that can make whatever network connections they please, people are going to take your data and send it to places you don't know about.
Surely there are open source video chat solutions already? They haven't taken off for one simple reason: video hosting is expensive. It's quite literally one of the most intensive network activities you can partake in, rivaling torrenting.
It doesn't make sense economically to offer a video hosting platform without collecting income from it. Nor does it make sense to attempt a peer-to-peer solution knowing full well that one laggy peer wrecks the experience for everyone else.
> Blaming Apple? Why, when Zoom is on the Play Store as well?
Blame Apple because they constantly tout the iPhone as being "privacy respecting" and "what happens on your iPhone stays on your iPhone"[0], while:
A. Apple doesn't default to "limit tracking", or at least make "limit tracking" an option on setup/iOS upgrade
B. Apple doesn't penalize developers for using Facebook's SDK with auto data collection (ie. punishment by having text like "sends data to: facebook, google, hotjar" on an app's install page)
C. Apple doesn't do any software stuff to limit and track the trackers. Having a counter for # of total days a domain name was contacted would be an eye-opener for many, and being able to toggle a "block" on the domain would be a big step forward.
Facebook meets the standard for being included in apps (respects the user resetting the usage ID), but that standard isn't the standard privacy-conscious users want. Apple can do better, but whether it be industry pressure or monetary pressure [google paying to be the default search engine], they don't actually put privacy first.
> the Facebook SDK was collecting device information unnecessary for us to provide our services.
Sorry state of Apple App security and privacy - all your apps are swarms of data collection and privacy abuses.
Apple built this world - and Apple is to blame. Zoom is to blame too. And finally individual app developers should also alert everyone on what's truly happening in their apps.
I once had to integrate a third party tracking framework to an Android app. The PM wanted to track everything, even the apps that were installed on the device. Google immediately took our app down because of that.
And apps using SSL pinning to hide this is not a barrier for Google or Apple, because they can easily bypass it.
Since they have the means to inspect the traffic at scale, they should be able to filter out apps that are violating your privacy.
While I have zero love for any given hyper capitalistic business like Apple, Android is not any better on the whole in this space, and in some ways measurably worse (especially when you take into account what devices actually hold the largest market share)
This is true, but Apple touts how much better they are about Privacy, and charges a premium for it. Google is more up front that they make little money on the initial sale, and are dependent on advertising to make money.
> Privacy is built in from the beginning. Our products and features include innovative privacy technologies and techniques designed to minimize how much of your data we — or anyone else — can access
I agree. People who send this data externally are ultimately responsible. Hypothetically, if I use the internet to steal data from my employer, it's not the network team's fault for allowing it to happen. I'm just a thief in a position of trust exploiting my capabilities.
However you have to be much more intrusive on developers if you are going to require semantic analysis of all data being sent to see if it was justified and whether it was mentioned in a (plain text, localized potentially into many languages) privacy policy.
Agreed. Apps on iOS (IMO) should have to declare what domains they'll access and otherwise get no other network access with special exceptions for browsers and network tools. I hope Apple will prevent apps from seeing SSIDs. I also hope Apple will come up with some similar solution for bluetooth so that apps can only see the devices the user selects and not just scan for all devices.
> declare what domains they'll access and otherwise get no other network access
So Facebook will just provide an SDK for app developers to integrate server-side that lets their app send the data to their own domain, and the server passes it on to FB. Developers will install it, because they want the analytics and ad conversion tracking. There probably isn't a great technical solution to this problem.
This would be leagues better than what we have now, since we know that (at least a handful of) companies neither know nor actively audit what their SDKs are doing - the Zoom situation here has plausible deniability. If FB required a server-side SDK to do this, some/many would still do it, but that increases the cost of running the SDK and there wouldn't be any way to say "we didn't know FB used us as a privacy trojan".
Apple's browser is privacy respecting. The app universe is still the wild west. IDFA is terrifying because you can do out-of-band lookups with third parties and you'd NEVER KNOW. At least with cookies you can trace the information flows.
Not just that: Apple has actually started to sell users' data to Goldman Sachs as well. The worst part is, this is opt-out, not opt-in. And the opt-out is so backward that you need to email some address instead of just clicking a button.
So I don't see how they're a "privacy respecting company" either. It's just marketing BS.
I'm not sure you read the article you linked to properly:
> Apple is changing the privacy policy for Apple Card with iOS to share a richer, but still anonymized set of data with Goldman Sachs in order to allow the creation of a new credit assignment model, which could expand the group of users that may be able to secure credit.
> There is also a beefed up fallback method in the works that will allow users to share more personal data on an opt-in basis with Goldman Sachs if you do not at first get approved
So anonymised by default.
Opt-in, IF you want to share more personal data.
Do you also understand it's Goldman Sachs that run the credit cards, accounts, etc, they're not just randomly sharing data with Goldman Sachs?
“You can opt out of this use or your Apple relationship information by emailing our privacy team at dpo@apple.com with the subject line ‘Apple Relationship Data and Apple Card.’”
It doesn't matter how anonymized they claim it to be, it should be opt-in, not opt-out. Of course virtually nobody would choose to opt-in, which is the point.
Sure, it would be nice if no company ever shared data with any other company, but that does not track in this case.
People signing up for an Apple branded Goldman Sachs credit card shouldn't be surprised or affronted by the fact Goldman Sachs gets anonymised data from Apple.
Why the hell anyone would sign up for this crap is beyond me. But it's not a reason to drag Apple into the context of a thread about a company guilty of basic privacy failures -- sending personal data to a 3rd party social network the user has no connection to.
Please also understand what 'anonymised' means: it means _not reversible_, i.e. you _cannot_ tell who the user is.
This whole Zoom revelation reminds me of the Cambridge Analytica scandal. This has been going on for a long time now, and it wasn't until one specific company did it that everyone is now concerned.
Mine shows nothing: "You have no available activity to show at this time."
I have been running Facebook in the special Firefox container pretty much since it was available. I took off the WhatsApp and Instagram apps from my phone months ago. For me, the number of ads (on Instagram) and integration into Facebook made them expendable.
I don't know if Facebook really has no information or they do but are not showing it to me.
I'd like to do the same with Google but the Google container wants to force all interactions with Google into one container. I've got dedicated containers for different Gmail identities - it was very handy to have a Gmail identity while I was president of the kids' soccer club and then turn the account over to someone else.
>Surely there are open source video chat solutions already? They haven't taken off for one simple reason: video hosting is expensive. It's quite literally one of the most intensive network activities you can partake in, rivaling torrenting.
There are, as you state, OSS solutions [0]. But video hosting is not akin to torrenting. Most people are fine with 720p quality video, since you're not "watching" the participants like a movie. And as you scale up the number of users, the required bandwidth for each subsequent user goes down in a linear fashion due to reduced screen real estate. From a video perspective, a conference with 8 users doesn't take up much more bandwidth than one with 2, given the smaller per-participant streams. I am in conference meetings with 4-12 users almost constantly, many with video, and I have a full packet monitoring solution at home: it's not remotely as intensive as you've claimed here.
Just to take you up on possible FOSS solutions: If anyone is looking for a private and open source video chat platform self-hosting NextCloud [0] might be worth a try.
> I don't want to live in a world where my parents and grandparents are expected to pull up Wireshark to figure out if the app they're using will record their front camera without consent.
It's a "commons" issue. I don't necessarily trust FOSS software because I am going to login to the repo and check the code (though I have once or twice), I trust it because I know thousands of people motiviated by ethics and quality vs. money have peer reviewed the code for things like this.
The problem is the lack of notification/visibility into what hosts to which the app is connecting.
It’s possible that your solution might work for some small fraction of users, some of the time, for known spying hosts. Many people still want to access Facebook and Instagram, though.
Why not blame Facebook for being the most data hungry and privacy disrespecting for-profit entity ever known to man? (Google is tied for that spot, Microsoft close third)
It‘s a little bit like blaming the person making the deal with the devil. Of course on some level they deserve blame for engaging with evil but evil presenting itself in a slick interface should also get its fair share.
Yes, Facebook deserves blame for automatically sending all those things to them, even if an app only wants to provide optional Facebook login for users who opt in.
However, every app vendor by now should know that Facebook is hungry for data, so careless use of Facebook software is on them.
As is Apple (and Google) to blame for providing no privacy measures for users.
> Getting mad at Zoom for using the Facebook SDK is missing the point. They and a million others are always going to be doing this. Get mad at Apple for not letting you wireshark your own iPhone.
There’s plenty of anger to go around. Get mad at all three: Facebook for making an SDK that tracks you, Zoom for integrating it, and Apple for letting it through unencumbered.
Do you know what developer uproar there would be if Apple decided to block all Facebook SDK usage? Surely you know most anger will be targeted at Apple, rather than FB.
Doesn't scale. We can't have 1,000,000 front page "App X uses Y SDK" posts. People will stop caring. Nobody's made a post of that flavor in a while, and Zoom got caught in the crossfire. Honestly, if anything it shields other apps. People have a limited capacity for repeatedly addressing the same thing.
That's a great point. The 80-20 rule likely applies here. Of all instances of beaconing events, the bulk of those events originate from a small number of popular apps.
I'd guess you work mainly with slower players (the mention of webex surely suggests so). Zoom has been very much on the rise for a year or so, and is riding the coronavirus WFH wave very well. IME quality is better than competitors', but boy do they use dark patterns. Finding the link to the web version in the meeting page becomes harder every day.
From a security perspective, companies do not like the fact that Zoom was developed in China and the vast majority of its R&D is still in China. China has different rules on security than many other countries, particularly surrounding intellectual property. https://www.sec.gov/Archives/edgar/data/1585521/000119312519...
"Top of page 21- In addition, we have a high concentration of research and development personnel in China, which could expose us to market scrutiny regarding the integrity of our solution or data security features. Any security compromise in our industry, whether actual or perceived, could harm our reputation, erode confidence in the effectiveness of our security measures, negatively affect our ability to attract new customers and hosts, cause existing customers to elect not to renew their subscriptions or subject us to third-party lawsuits, regulatory fines or other action or liability, which could harm our business."
Would any cloud-hosted analytics be acceptable? Is it just Facebook that’s problematic? What if they switched from client-side analytics to server-side so you couldn’t detect it at all? Would that be any better? The bottom line is when you use a service, that data is their data to send to whomever they want.
My point is that you've removed one instance of the Facebook SDK from your phone, but you still have 50 others. Plus probably hundreds of other analytics frameworks that you've never even heard of that are just as bad or worse.
A journey begins with a single step. As a community, we suss out and shame the rest into removal. If shame doesn’t work, those in California try using the CCPA.
We’re all stuck inside for a while, this is the perfect time to act. One app and SDK at a time.
I could get behind that, but I am sure people get tired - both the activists and the sheer mass of people who would need to be convinced.
During covid nobody is paying attention and we have the additional problem that they're trying to use cellphone location data to enforce social distancing! Once this is in effect it will be difficult to undo because the next epidemic will be "just around the corner" ...
This is unsustainable. It requires constant vigilance and turns the privacy matter into a cat and mouse game where we are constantly one step behind the worst actors. These systems exist everywhere in the world and they’re fundamentally inefficient. E.g. recycling, or “please bring your own plastic bag”, which relies on goodwill.
Compare to a system where you fix the incentives to automatically align everyone’s interests: e.g. bottle deposits, or a small fee for plastic bags. Now people will want to do the right thing, because it is aligned with their own interests.
The same holds here: fix this one instance with enough outrage, there will be a thousand more. Instead, let’s fix the misaligned incentives between app builders and users, so their invasion of my privacy costs them as much as it does me (e.g. GDPR).
This is how you make efficient markets: align incentives. Fixing everything on a case by case basis only provides temporary relief.
[edit: note that OP never said "don't do it", they just said "it's missing the point". which I think is a fair call. this one fix is good, but it's unsustainable.]
I guess your point is that fixing this one transgression is the equivalent of one store implementing that rule, and if we fix more of them eventually it’s a law, making it but the first step on the journey to sustainable privacy?
It isn’t. This is recycling one bottle. It doesn’t have any sustainable long lasting effect.
To stretch the metaphor, the equivalent of one store asking for deposits would be e.g. Apple requiring full disclosure of all such tracking SDKs on the App Store page, as suggested by someone else in this thread. That’s sustainable, scalable, and that’s what might eventually even lead to legislation, as you pointed out.
No, you attack the systematic problem and don't settle for fixing one instance of it; that is a hollow victory, and public outrage has limited capacity for repeated posts of "app X is sending data to Facebook".
In this very thread we started from “I can tell you from experience that everyone does this.”.
Now when a PO is asked to add Facebook to their app (or wants to remove it), there is at least one prominent instance to point to showing that having the SDK is not the right move. And hopefully that “everyone does it” will become “some still do it”.
If of course in the meantime we find a working systematic solution, it’s all for the better.
honest question: _how_ do we attack the underlying systematic problem to solve it once and for all?
write a blog post?
take it twitter/HN/reddit?
hold a rally/demonstration outside Apple/Google?
call our MP?
bombard their employees with phone calls or knock on their front door where they live?
write malware?
... really I got nothing that sounds like it would work. In retrospect all of Tim Cook's privacy / security grandstanding and attitude of superiority was just that. There are no good guys in this game.
sure, but since I am unable to actually make legislation I wrote "call your MP" - which is more sobering/realistic if you look at the likely success of this particular effort.
We're outgunned by the lobbying from these companies I think.
Of course passing laws about that is a great solution. This is how society defines what is and what isn’t acceptable behavior for corporations. Are you also typing up rallying paragraphs against laws that dictate how companies have to adhere to food safety? Would your suggestion then be to “get mad at Burger King for not allowing you to perform chemical tests in the restaurant”? “Everyone does it”, like everyone used asbestos and lead pipes in the past?
Does this work with apps that do their own TLS using their own pinned certs? I don't see how it could. Surely that's a lot of high profile apps these days.
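For context, pinning usually means the app's own networking code refuses any leaf certificate that doesn't match one bundled in the app, which is exactly the certificate a MITM proxy has to substitute. A minimal sketch of the common URLSession-delegate approach (real apps often pin public keys or use a library like TrustKit instead):

    import Foundation
    import Security

    // Minimal certificate-pinning sketch: compare the server's leaf certificate
    // against a DER-encoded copy bundled with the app and drop the connection
    // on any mismatch (which is what a proxy's substituted cert would be).
    final class PinnedSessionDelegate: NSObject, URLSessionDelegate {
        private let pinnedCertData: Data  // loaded from the app bundle

        init(pinnedCertData: Data) {
            self.pinnedCertData = pinnedCertData
        }

        func urlSession(_ session: URLSession,
                        didReceive challenge: URLAuthenticationChallenge,
                        completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
            guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
                  let trust = challenge.protectionSpace.serverTrust,
                  let serverCert = SecTrustGetCertificateAtIndex(trust, 0) else {
                completionHandler(.cancelAuthenticationChallenge, nil)
                return
            }
            let serverCertData = SecCertificateCopyData(serverCert) as Data
            if serverCertData == pinnedCertData {
                completionHandler(.useCredential, URLCredential(trust: trust))
            } else {
                // A MITM proxy's certificate lands here and the request never goes out.
                completionHandler(.cancelAuthenticationChallenge, nil)
            }
        }
    }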
If this app works without root, it must be possible for apps on iPhone to add their own certificates to the system, which are then trusted by other applications - that would already be pretty alarming. I think Android still requires certificates to be manually imported by the user. Maybe this app points you to instructions on how to do this, but the description makes it sound very automatic.
You basically need your own VPN server with Pi-hole installed to control the tracking. It is a very effective way to block this, but not that easy to setup.
When you say 'everyone' in your second paragraph, really you mean 'all of the Silicon Valley style employers I'm aware of'.
That's a tiny proportion of the user population and doesn't imply agreement or consent to the information the Facebook SDK shares. And even if it did, it wouldn't automatically mean that it's an acceptable or good behaviour by those apps and Facebook.
Bringing widely-distributed privacy breaches to a wider audience's attention can help those users provide feedback regarding products and then allow them to select vendors who respect their values.
> As long as you're running binary blobs that can make whatever network connections they please, people are going to take your data and send it to places you don't know about.
PWAs could answer this problem, at least to some extent, but Apple historically has been limiting the features to protect the AppStore and the Apple Tax (v. the recent local persistence changes in ITP).
It's better than, say, Google pretending that third-party cookies make the web a safer place (yup, that happened).
(Don't get me wrong, I think ITP and Safari are great)
> Get mad at Apple for not letting you wireshark your own iPhone.
People on HN can, but an average user shouldn't have to care about that. I'm 100% up for stronger legislative measures (both tech and dark UX patterns) and more education in this area. Sounds boring, but without it we'll just keep running in circles.
Hello fellow iOS developer. I have two apps on the Apple store. I never used any external SDKs/libraries, only the built-in Xcode ones. I preferred to spend a bit more time writing/testing, but I would never accept that FB and other scum (from a privacy standpoint) track children (I wrote the apps for my nephews and nieces and put them in the Apple store just for them) (I don't advertise them at all and I won't do so here either).
Regarding the issue that started this Zoom-FB dialogue I have commented a dozen (or more) times on the necessity to have a firewalled phone that a user (unfortunately the user needs to have basic knowledge of firewall admin) can decide what to allow and what to block. Your point on who audits is valid (I am a CISA and CISM of many years), and, well, nobody does. Each user will have to do his/her own work/effort to keep their family clear of these scum.
Apple gives you no way to find what your phone is doing, and no way to prevent it from doing it.
They provide company sponsored "controls" on what apps can do, which is about as useful as a factory alarm on a mid-80's car. Except with a modern twist, where they're the only ones capable of installing an alarm. (and imagine the alarm gives a free pass to apple)
The fact that they're starting in on MacOS and Little Snitch makes me think their platform isn't long for the world.
Yes, we need laws against this and for the gatekeepers to be the enforcers. I used to think that individual choice would solve these problems, but it won’t. Zoom et al are growing like a weed and we can’t protect all our loved ones from this bullshit with individual action all of the time. There are some problems that require government action; I think the events of this year have demonstrated that clearly. Individually we are weak as water, but collectively we are embarrassingly powerful. Time to organise.
> Getting mad at Zoom for using the Facebook SDK is missing the point. They and a million others are always going to be doing this. Get mad at Apple for not letting you wireshark your own iPhone.
But you’ve just said everyone does it and we shouldn’t get mad at them - so we don’t need wireshark, because it would simply confirm that everyone does it and we shouldn’t get mad at them - right?
Another problem is these analytics platforms just keep getting worse. There used to be a lot of effort put in to not collecting any personally identifiable info. Hell even google analytics was strict about that. It also took time to integrate them.
Now almost all the packages grab identifiable info by default and some are doing things like making screen recordings. Combine that with a rotating set of product owners like described above and a lot of apps just end up making way too many calls to way too many places.
And I do think Apple could and should be doing something more here. Their developer analytics setup is a good example to lead by as it gives users a global option to opt out. They also are able to reject apps for an icon being offbrand so I’m pretty sure they could figure out something here.
It doesn’t “sound like” that, it’s literally what they’re admitting to. Let’s not spin the narrative here on HN that Zoom didn’t admit to their faults.
No, they don't? There are very few companies that record your phone calls/video calls, then transcribe them into text, then store the data for themselves. This includes any data that we shared in the session. Why can't Zoom just come out and explain why they do this? Seems pretty simple, if you ask me.
Note that the functionality is actually attribution for the app ads platform on Facebook. If you run ads on FB for an app, this ensures installs are tracked, and doesn't show the user the ad if they already have the app.
"Who audits that?" We just did. And if there was a law against that, Zoom would just have been exposed for breaking it. Any sane company will try their best to adhere to laws. Some big players like Google can afford to mess around pay a few billions in fines, but those are the exceptions, not the rule. Eventually, even they can't afford to pay the fines in the long run (Even Google bowed to GDPR or at least its getting bashed with steeper fines until they wake up).
"How do you determine what's legal and what's not?" You pass a law, read the law? This is a self-contradiction. Laws are open for interpretation but the interpretation is quite clear after a supreme court case (for the better or worse).
"We should be pushing for a platform that makes it obvious what the software you're running is up to". Oh the web of trust? Did you ever install Snitch or some other firewall on your system? Its utterly hopeless even if you are knowledgeable. There is simply not way to audit that. Who audits that? Here you CAN ask this question.
I can't for the life of me understand how you can believe that it is better for everyone, including parents and grandparents, to audit their own phone, instead of having researchers audit phones and report companies who break the law. This is nonsensical. You must either be some expert without a connection to the real world, or some elitist who thinks everyone is like him.
Specifically on this point, I think the HN comment sorting algorithm may take account of how many votes child comments have too, so you may find that it’s the top child comment which has brought this to the top.
Constantly having to be in a war against my own phone's operating system is exhausting. These days I absolutely refuse to buy any brand that makes me jump through hoops just to get root on my own silicon.
I'm really liking Zoom's responses to incidents lately. Both this and the "oops we implemented certain features by leaving a localhost webserver gaping open" fiasco fairly recently got extremely nimble responses from them, and the responses were absolutely the right thing to do. They could have hand-waved the http server away and claimed to have "secured" it, and they could have hand-waved this away as "standard practice", which, let's be frank, it almost certainly is. The fact that they understood the seriousness and swiftly yanked the features in both instances is HUGE. Kudos to them for this.
edit: some people won't want to give them any slack because they committed the offenses in the first place, but I think that's silly. Reward them for trying, because if this is the way they're going to respond to blowing it, they're one of the good guys.
in the end they did the right thing with the local web server, but iirc their first response was "this is a non issue and needed for proper operation".
I contacted LG last month regarding their use of the Facebook SDK's automatic event collection in their ThinQ Android app. They responded and told me that they're disabling it in an upcoming release (incidentally, today's). If a single email is all it took to get a company with over $50 billion in revenue to disable Facebook's tracking in one of their apps, I really don't think that these companies are sharing data intentionally.
What justification does Facebook have for keeping automatic event collection turned on by default in their SDKs? Why can't they enable it only when the user has explicitly opted in (https://developers.facebook.com/docs/app-events/gdpr-complia...)? They even say, "you need to ensure that your SDK implementation meets these [GDPR] consent requirements."
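For reference, the knob Facebook documents for this sits on the integrating app's side, not the user's. Something along these lines keeps the automatic logging off until consent (property names differ slightly between FBSDKCoreKit versions, and "FacebookAutoLogAppEventsEnabled" can also be set to NO in Info.plist):

    import FBSDKCoreKit

    // Keep Facebook's automatic app-event logging and IDFA collection disabled
    // until the user has explicitly consented (per Facebook's GDPR guidance).
    // Exact property names vary between FBSDKCoreKit versions.
    func applyFacebookConsent(_ granted: Bool) {
        Settings.shared.isAutoLogAppEventsEnabled = granted
        Settings.shared.isAdvertiserIDCollectionEnabled = granted
    }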
> I don't think these companies are sharing data with Facebook intentionally.
That would imply they are incompetent and negligent.
Would one not expect large companies like LG to have internal security and privacy reviews of the software they publish, and know very well what they are doing?
> That would imply they are incompetent and negligent.
Not really.
Product Manager: I want to be able to support Facebook login for our app.
Developer: OK... [googles for how to do that] ... We can use the FB SDK for that.
PM: Cool, let's do that.
Dev: [implements it]
Nobody really does much more due diligence than that most of the time. I suppose you could argue that's negligent, but if that's the case, then pretty much every company that has an app with login functionality is probably in that boat.
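For reference, the "afternoon with the FB SDK" route really is only a handful of lines once the SDK is linked, which is part of why nothing about it looks like a data pipeline. A rough sketch (method signatures vary a bit across FBSDKLoginKit versions):

    import UIKit
    import FBSDKLoginKit

    // Roughly all the code an "add Facebook login" ticket requires once the SDK
    // is linked. Nothing here hints that merely initializing the SDK phones
    // home on launch, which is why the collection tends to slip through review.
    func loginWithFacebook(from viewController: UIViewController) {
        let manager = LoginManager()
        manager.logIn(permissions: ["public_profile", "email"], from: viewController) { result, error in
            if let error = error {
                print("Login failed: \(error)")
            } else if let result = result, !result.isCancelled {
                print("Logged in, token: \(result.token?.tokenString ?? "none")")
            }
        }
    }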
> I suppose you could argue that's negligent, but if that's the case, then pretty much every company that has an app with login functionality is probably in that boat.
I think every company that does this is negligent. Audit your dependencies, people!
I think for small teams this is a near impossible task. For big corporations it should be doable and expected. They actually have some leverage to push the other big companies to track less. Something a small company simply can't do.
> This is the Facebook SDK, from Facebook, and everybody knows what their business is.
Ignorance is bliss. Talk to some people who still use FB after their scandal and you'll get "who cares, everyone is tracking users and selling data anyway" as an answer.
Exactly. A simple online search for the phrase "Facebook SDK" will reveal plenty. It's not like you need forensic accounting level research to see that the SDK does much more than provide a simple login mechanism.
> That would imply they are incompetent and negligent.
> Would one not expect large companies like LG to have internal security and privacy
Can't tell if this is sarcasm, because this is exactly what they are. An OEM is just packaging stuff and is always bigger than its parts (in this case meaning the knowhow of their otherwise bright and knowledgeable engineers is lost in the organization as a whole). The biggest companies are always the dumbest places, where no matter how bright you may be, the management layers above make sure that this gets cancelled out (I've worked at Samsung, Nokia and Ericsson and it was the case in all these places). Doubt LG would be any different.
Given Google's hostility towards the glacier-slow release schedules of Android updates and the continued embedding of vendor apps that screw up Android by phone vendors such as LG, I'm already quite biased in favor of "companies like LG are incompetent and negligent", based on the evidence available over the past several years.
Nice way to bury an innocuous-looking "iOS Advertiser ID" in the middle of the list. What "iOS Advertiser ID" means is, to a very good degree of approximation, your deanonymized identity.
Also, the fact that just linking the SDK in your app deanonymizes the user to Facebook is very, very clear in its documentation. It's not like Zoom didn't notice until someone told them. They made a decision, and now they're changing it because they were called out.
The Advertising Identifier is app-specific, and if Limit Ad Tracking is enabled, it is set to all zeros. So it's not accurate to say that it's "your deanonymized identity".
Alphabetical order is neither mandated by any rule, nor deterministic since you can choose how to call things. "Application Bundle Identifier" made it to the top of the list, but if it was "iOS Application Bundle Identifier" it would be below the Advertiser ID.
Do you really think they prepared a PR statement to respond to harsh criticism and just decided to toss in there the list of information sent without crafting the order of the items?
> Do you really think they prepared a PR statement to respond to harsh criticism and just decided to toss in there the list of information sent without crafting the order of the items?
Yes, because it's in alphabetical order.
You don't have to craft anything for it to be in alphabetical order.
You just put it in alphabetical order.
The sorts of people who are going to read that list and understand any of it are the sorts of people for whom the order doesn't matter one iota — they will see the information.
For the majority of people, putting it at the top of the list would, equally, not matter one iota — they won't know what it means.
> You don't have to craft anything for it to be in alphabetical order. You just put it in alphabetical order.
And you can still set your preferred order by naming and wording. Had they dropped "iOS" from everything, "Advertiser ID" would be at the top. I don't think they would've lost a lot of clarity with e.g. "Device Disk Space Available" instead of "iOS Device Disk Space Available". Or, if prefixes are their thing, why not call it "Zoom Application Bundle Identifier"?
Alphabetical order means nothing if you control the strings that are used for sorting.
Seriously, what is more likely, someone decided to nefariously re-order the list, possibly while laughing maniacally, or the list was just pulled and presented in alphabetical order?
HN commenters are just determined to turn everyone into evil not-so-geniuses, refusing to recognise that almost everyone involved in this at Zoom is just like everyone else on HN.
They found a thing that did what they needed, an official SDK no less, and used it. They found out (in zoom's case via public crucifixion) that it was doing something nefarious they didn't like, and stopped using it.
But no, if HN is to be believed, they were collaborating with Facebook in some evil diabolical plan to take over the world via advertising.
They are changing because right now they are growing like crazy without needing to do much on user acquisition, and bad PR is just too costly right now. But good to see them doing it.
Sure, good they are changing. And Zoom is definitely not alone in this. Facebook SDK usage is widespread and it's a horrible thing. And even then, the fault ultimately resides with Apple and Google that provide cross-application unique identifiers.
Yup, cross-application unique identifiers are such a bad idea that it is hard to believe they exist. Maybe for Google I understand, since their entire business is advertising and Android is more like a live billboard to display ads from their POV, but that Apple, the company crying privacy, still provides an advertising ID is shocking.
Using the Facebook SDK is a rookie mistake. It includes all kinds of telemetry that is sent to Facebook, whether the user is connected to Facebook or not.
In the company I worked for, we read the code (you have access to it) and stripped those parts out. It's not much work, but it's a pain.
The best approach is to use just the HTTP APIs and ignore the SDK. Your team will better understand how Facebook works, your app will be lighter and you are free from nasty surprises that a 3rd party may add to your app without your knowledge.
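A sketch of what that can look like on iOS, driving Facebook's standard OAuth dialog through the system browser so the app links no Facebook code and never sees the password (the Graph API version, redirect scheme, and token parsing here are placeholders to adapt):

    import AuthenticationServices
    import UIKit

    // Sketch of the "just use the HTTP APIs" approach: run the documented OAuth
    // web flow in ASWebAuthenticationSession instead of linking the Facebook SDK.
    final class FacebookWebLogin: NSObject, ASWebAuthenticationPresentationContextProviding {
        private var session: ASWebAuthenticationSession?

        func start(appID: String) {
            let scheme = "fb\(appID)"  // conventional mobile redirect scheme
            var components = URLComponents(string: "https://www.facebook.com/v16.0/dialog/oauth")!
            components.queryItems = [
                URLQueryItem(name: "client_id", value: appID),
                URLQueryItem(name: "redirect_uri", value: "\(scheme)://authorize"),
                URLQueryItem(name: "response_type", value: "token"),
                URLQueryItem(name: "scope", value: "public_profile,email"),
            ]
            session = ASWebAuthenticationSession(url: components.url!,
                                                 callbackURLScheme: scheme) { callbackURL, error in
                // Parse the access token out of the callback URL fragment here.
                if let error = error { print("login error:", error) }
                else if let url = callbackURL { print("callback:", url) }
            }
            session?.presentationContextProvider = self
            session?.prefersEphemeralWebBrowserSession = true  // no shared cookies
            session?.start()
        }

        func presentationAnchor(for session: ASWebAuthenticationSession) -> ASPresentationAnchor {
            // Return the app's key window in a real app.
            return UIApplication.shared.windows.first { $0.isKeyWindow } ?? ASPresentationAnchor()
        }
    }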
It's good that they removed it, but it's also disappointing that they had no idea it was happening until someone made a blog post about it. Do their employees not vet any of the code they use, and just slap things together off the internet and hope it's not doing anything their users don't like?
> Do their employees not vet any of the code they use, and just slap things together off the internet
That sounds like a pretty accurate description of how software is built. (No, I'm not being flippant.)
> ... and hope it's not doing anything their users don't like?
I expect most don't think too much about it, not out of malice, but because their product manager told them "I want FB login" and to do that, they either spend an afternoon using the FB SDK, or spend a week figuring out how it works, implementing it from scratch themselves, and debugging the inevitable interop issues with whatever oauth2 (or whatever) library they've picked. It's really a no-brainer... few developers can take the week-long route and then justify that to their manager. They'll get fired.
I've worked at places where "cowboy coding" was the norm and people would just look up how to do something on StackOverflow and copy/paste it. But to pull in a major 3rd party dependency like this and just "YOLO" ship it in your company's product? That's almost unbelievable. Didn't anyone have a look to see what the thing does? Assuming the SDK comes with source code, and if they integrated a 3rd party library that doesn't come with the source, even more shame. All it would have taken was a single engineer to notice unexplainable network traffic to a third party at runtime--at any time during development. So much WTF here.
Conversely, I’ve never worked anywhere, in 10+ years, where “we shouldn’t be sending this data to X, it’s bad for our users”, would have got further than the developers. Marketing, Product and management rarely care: in many cases they want the data to go to as many analytics and targeting services as they can.
Since the GDPR came into effect, at least in Germany I notice how product managers and other parties are involved in stuff like this, and not only devs and dev leads.
As an example, 2 weeks ago I had to implement Instabug's SDK for one of our app brands, and created a no-op fake library [0] in order not to ship any Instabug code to the other 5+ apps.
Simply because our PM was afraid of possibly sending stuff to them while not having added them to the privacy policy.
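The shape of that trick, sketched here in Swift even though the case above was Android (the protocol and type names below are made up for illustration): the app codes against a tiny interface, and only the brand that actually needs the vendor SDK links a real implementation.

    // Hypothetical no-op stand-in: brands that must not contain any vendor code
    // link this instead of the real SDK-backed implementation.
    protocol BugReporting {
        func start(token: String)
        func log(_ message: String)
    }

    struct NoOpBugReporter: BugReporting {
        func start(token: String) { /* intentionally does nothing */ }
        func log(_ message: String) { /* intentionally does nothing */ }
    }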
Lots of tempting 3rd party iOS frameworks are binary only without source, such as Google Maps for iOS. Who knows what kind of telemetry and event listeners these frameworks install.
I really can't fault Zoom here. They used an existing tool provided by a company that is, allegedly, reputable.
Though, thinking about it more perhaps Zoom should get some more scrutiny here because this isn't the first time Facebook has said eff it to user privacy. Distrust of Facebook should be the default.
Actually, Apple and Google should not allow this in their app store policies. A 3rd party SDK sending data that isn't needed should be a BIG no-no... I expect at least Apple to enforce this.
There are probably thousands of other apps that have the same problem.
Hard in the general case, but I'll bet it's trivial to scan for the Facebook SDK, or any other blacklisted libraries, unless they're intentionally obfuscated.
And it's known that what Facebook does is ugly. So Hanlon's Razor and all that, but given recent events it strains credulity that the developers weren't at least suspicious.
Often the developers are aware of, or at least suspect these types of things.
However, it's the Project Managers and Product Owners who are not aware, and they say "do it, because that's what the customer wants!" You can argue, but you really do so at your own peril if you don't have others on the team to back you up.
As a sometimes iOS developer, I can’t even imagine how you could build something like Zoom without at least sometimes auditing the network traffic. Even a novice user can do this in a few minutes with Charles Proxy.
People on the team knew, they just either didn’t care or were ignored when they voiced concerns.
To rephrase this into something more beneficial to others trying to learn from this:
"It's good that they removed it, and it goes to show just how important it is to inspect your application's wire traffic as part of your development and testing processes. Otherwise you'll have no idea what's happening until someone makes a blog post about it."
Between this and the HTTP server, it feels like the Zoom of old that wrote the app was more willing to resolve the user experience vs. user privacy trade-off in favor of user experience.
Now you need to log in via Facebook with a separate browser window, and thanks to the HTTP change, you need to click on a browser dialog to launch a meeting from a link. So, they've either changed their policy to err more towards the privacy side and haven't found all the cases yet, or, more likely, still have the same attitude except when the tech world starts screaming at them.
I think it's more likely that the developers responsible for the HTTP server just didn't know much about local security, and Zoom doesn't have a good security review process (where actual infosec professionals are involved). That doesn't absolve them of responsibility, of course, but I really don't think it was malice or an intentional desire to ignore privacy concerns.
The “bug” where it would basically act like persistent malware, or the bug where it would act like persistent malware but also allow attackers remote access to your machine?
they can't see what happens inside FacebookSDK's code. even if they could see it, good luck convincing the PMs and directors to avoid implementing Facebook login.
Dear all, I am the CEO of Zoom. First, I sincerely apologize about this Facebook SDK issue. We learned a lesson, and we will do all we can to improve. I also wrote a blog.
Hey, thank you for listening to security researchers and fixing a problem when you became aware of it. I know this next point is not so much a problem as a business decision, but the product would be much more usable if it were possible to connect without the app, without having to use some tricks. There have been many conferences that took much longer than expected to start because not everyone had Zoom installed.
You are right on! Service stability and security are our top 2 priorities. We will work as hard as we can to keep improving. Thank you for your great support!
And so zoom crumbled from the social pressure, while every other service and website is thinking "oof, they didn't realize that everybody does this to do advertising"
It's also possible they didn't listen to their app over the wire and see it doing this. What lesson could we teach about "why you should mitmproxy your app while it's in development?", so that people can start uncovering this in other apps — including their own?
The implication of saying they “crumbled” to the “social pressure” is that it’s bad to take clear feedback from your user base. It doesn’t make any sense.
Yep, no doubt it was when you commented, but an unfairly downvoted state on a comment usually corrects fairly quickly, which is one of the reasons why the guidelines ask us not to complain about it.
Considering how many apps are using Facebook's SDK, shouldn't this be something that FB should be addressing? After all, they are the ones making an SDK available to app developers to help with user-login. Shouldn't the presumption of trust rest on FB?
I can’t see the big deal. We use the Facebook SDK specifically for the free analytics. It’s just a default part of the SDK. It’s not sending anything any other analytics package wouldn’t
Could you maybe expand on what company you work for so that the rest of us can avoid it and its products?
Uploading all of this data to Facebook just so you don't have to run a Matomo instance (or whatever controlled analytics platform you use) is either laziness or disregard for your users. There's a reason the analytics are free and sacrificing your users for something this small is exactly what is wrong with the modern software ecosystem.
What if mobile platforms (iOS, Android...) changed the security/privacy policy so that apps had to request the “network access” permission, either whitelisting the domains they want to talk to or asking for wildcard access?
Most apps shouldn’t need wildcard access, and the mobile device could include a warning when an app does this teaching users that they should be careful with the app.
This way, at least when you installed Zoom, for example, it would say something like: “This app wants to talk to zoom.us and facebook.com.”
And then at least everyone would know. It still doesn’t solve the underlying problem, but it would probably make companies more reluctant to add third-party analytics and SDKs.
This wouldn’t help much unfortunately. The company could just setup a proxy server to do the work that lives under their domain. A company like segment (which routes analytics to other platforms) could then offer personalized domains and make a killing as everyone throws everything there.
But there probably is some sort of good similar solution based on guidelines. If Apple were to start defining policies on data collection and opt-outs, and say that apps needed to follow them or be rejected, it would put a lot of pressure on companies like Facebook to adhere to these guidelines in their SDKs.
I don’t know if apple has the appetite for this as it would cause a whole lot of rewriting of a whole lot of code but they are in a great position to do this.
It worked for location access on iOS, and that was just adding a flashing blue icon. Many apps stopped using the location all the time, and now there's a popup every few days telling you how often the app requested your location in the background.
> we decided to remove the Facebook SDK in our iOS client and have reconfigured the feature so that users will still be able to log in with Facebook via their browser.
Since they removed the Facebook SDK entirely, whatever mechanism Facebook used to collect the info doesn’t exist any more. Instead of being able to collect the data at all times, wouldn’t FB only have a vector to do so through web login? At that point, I assume they could do fingerprinting in the browser to collect some info, but at least the cannot do it on the system level any more.
It still seems like this is a big improvement. Though, I imagine most folks will have at least one other app using the FB SDK, so it’s not like the root cause is fixed.
On recent versions of iOS, in-app browsers do not share data with the Safari browser. How effective would browser fingerprinting be? Everyone with the same device, same language/locale and same timezone should have the same browser fingerprint, I thought.
> On recent versions of iOS, in-app browsers do not share data with the Safari browser.
Specifically, SFSafariViewController does not share cookies or other data with Safari anymore. Some bad actors got caught with their hands in the cookie jar, literally, and out that sharing went.
They do still share something. In response to this headline I installed the Zoom app and picked login with Facebook. A browser popped up showing the Facebook webpage and said "Login as Gregg Tavares?". Since I had just installed the app, how did Facebook know it was me? The only possibility that comes to mind is that Safari was using cookies from some other app's embedded webview.
I believe the cookies passed into the webview are limited to a specific domain. So the app developer says “open a webview for Facebook.com” and the webview includes cookies only for the stated domain.
If Zoom "takes its users' privacy extremely seriously" and their "customers’ privacy is incredibly important" then why would they be releasing software without a strong knowledge of what third party code they're adding in, and what exfiltration might be happening as a result? They hold user privacy in such high regard and yet are releasing a program without even hooking it up to a network monitor for five minutes?
This is absolutely common. Business will require tracking/authentication/etc, contracts will be signed, developers will implement the provided SDK. Nobody will inspect the data being sent.
> releasing a program without even hooking it up to a network monitor for five minutes
How many times have you seen anyone do that? Unfortunately that is the reality - my personal take is to simply try to avoid vendor libraries at all costs, but it's hard to sell.
Oh I don't doubt the practice is common, but for an organisation making their claims about privacy it is at odds with their slapdash approach to development.
I guess this is another under-recognized benefit of developing for the web - when doing so, you're staring at the Network tab all day, trying to grok what's going on over the wire and to whom. I don't remember doing this nearly as much on native.
Yeah, that's the thing. I do very little web development, but I inevitably find myself in the Network tab of dev tools debugging something. I do around as little mobile (Android) development, and I'm not even really sure how I'd watch network traffic coming from an Android app. (I'm sure it's possible, but I imagine it requires explicit setup, possibly with some third-party software and/or the assistance of a laptop.)
If you have a router running something like openwrt (or from a vendor that uses openwrt and lets you get a root shell on the router) you can just use tcpdump on the router with a filter to pick the host you want to monitor.
mitmproxy or Charles proxy on your laptop. Charles proxy also has an iOS version, I imagine there is something similar that can run directly on Android.
Take npm as another example, in a large corporation, any commercial product that relies on third-party npm packages will have to survive a long legal audit process.
>This is absolutely common. Business will require tracking/authentication/etc, contracts will be signed, developers will implement the provided SDK. Nobody will inspect the data being sent.
That's exactly the point. Zoom says they care deeply about privacy, but their actions demonstrate they don't. Doesn't matter how common it is, or the reasons why it happened, it's proof positive that their statement is untruthful.
I for one don’t think Zoom is being malicious here. I imagine plenty of other apps out there are doing the same right now, by naïvely making use of FBook’s SDK.
I don't yet have a comment on whether they acted maliciously by permitting user data to be exfiltrated to third parties without their permission, but making categorical falsehoods about the importance they place on privacy is malicious in and of itself.
While I think one shouldn't read too much into PR statements like those, I don't think it's useless to call them out either, especially when they use very strong words like "extremely seriously" and "incredibly important".
If the defense is that the practice is common, then there isn't anything special about Zoom regarding user privacy, is there?
centralized location tracking in an infectious crisis (either mandatory or mass voluntary) will normalize to 'good'
sleazy third-party phone home from apps considered key to surviving under lockdown, apparently not ok now! good
the bad news is we'll normalize some bad practices, but the good news is we'll make pragmatic compromises using actual information -- with covid taking up moral panic cycles, privacy is a place we can be rational
also no business feels totally secure RN and people will do anything to win & keep business -- even zoom
That's great for iOS users, but it doesn't mention anything about the Android client. Nor do they discuss the tracking that the webapp still does even though they explicitly state they still allow FB login via browser. Isn't this more of a sorry for getting caught rather than an actual apology?
They were made aware of the issue on the 25th... it seems like due diligence would have revealed the issue at the outset. It should be no secret that FB provides their authentication services with strings attached. At least they fixed it quickly.
On the Web, it's recommended to use a generic OAuth library instead of integrating Facebook JS SDK. On mobile, this is almost impossible though as Facebook doesn't implement the OAuth PKCE flow.
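For reference, the client half of PKCE is tiny; the blocker is provider support for the code_challenge/code_verifier parameters, not the app. A sketch of generating the pair (RFC 7636 S256 method):

    import CryptoKit
    import Foundation
    import Security

    // Generate a PKCE code_verifier and its S256 code_challenge. The challenge
    // goes on the authorize request; the verifier is sent on the token exchange.
    func makePKCEPair() -> (verifier: String, challenge: String) {
        var bytes = [UInt8](repeating: 0, count: 32)
        _ = SecRandomCopyBytes(kSecRandomDefault, bytes.count, &bytes)
        let verifier = base64URL(Data(bytes))
        let challenge = base64URL(Data(SHA256.hash(data: Data(verifier.utf8))))
        return (verifier, challenge)
    }

    // Base64url without padding, as required by RFC 7636.
    private func base64URL(_ data: Data) -> String {
        data.base64EncodedString()
            .replacingOccurrences(of: "+", with: "-")
            .replacingOccurrences(of: "/", with: "_")
            .replacingOccurrences(of: "=", with: "")
    }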
For anyone interested in Facebook's SDK behaviour on Android, there's a good video[1] from 35C3 covering this topic, and a related HN discussion thread[2].
Thank you zoom.us for the best tool available right now with a fair and acceptable price model. A special thank you also for the great Linux support, which is absolutely unmatched by all other "solutions".
I am very sorry that so much bad publicity is happening right now, but as far as I know even bad publicity is good in the end.
I'd like this to be independently verified. There were a lot of comments on the other threads that "Of course Zoom was sending data to Facebook" because they're using Facebook's SDK. It made it seem like such data leaks were inevitable, when apparently they aren't.
Great, now if they would stop having an activity monitor that checked every process running on your computer, or made their web interface work with Firefox we’d be all set...
Let me ask how many of those who now argue in Zoom's favor have bought shares or are invested in any other way, like having bought licenses directly or indirectly for their company?
> "We will be removing the Facebook SDK and reconfiguring the feature so that users will still be able to login with Facebook via their browser. ..."
It sounds like they're just no longer flat-out using the Facebook SDK (which provides a slightly more "native" / "nicer" login flow for apps when used). They're going to do what most apps (at least, from personal experience) do and just show a webview with a redirect back into the app, which doesn't call out to Facebook at all.
EDIT: That is, it doesn't call out to Facebook at all until you start the login flow, which is just opening a browser view to the oauth2 flows..
It is ridiculous that a company as big as Zoom wouldn't know what an API they're using is doing with their customers' data. Is there not a legal/privacy team at Zoom in charge of reading all the fine print and license agreements??
I feel like this is a common video chat feature. Given that the option to mute while in the chat is likely done most frequently through their UI vs the system settings, this seems to be an appropriate use of monitoring input to ensure the user is able to avoid common problems.
There seem to be some legitimate concerns about privacy worth discussing further but I'm not sure this is one of them.
>> we were made aware on Wednesday, March 25, 2020, that the Facebook SDK was collecting device information unnecessary
So Zoom is basically lying here
Come on, the developers who took the decision to use the SDK were aware of it; OK, maybe the CEO of Zoom or the marketing guy was not, but the tech team was. They are not stupid.
You should have just apologised and owned the fault; that would be the courageous position, not denying it.
Tbh I am OK with Zoom sending my data to FB (I mean, in my case I have Insta/Messenger anyway), but not OK with Zoom treating everyone as naïve with this lying statement.
What scares most security analysts is the fact that the product was developed, and stores data, in a place that has incredibly sketchy laws when it comes to intellectual property.
I can't see why Zoom can't come out with a statement regarding why they are collecting all of this sensitive data.
Big corporations might be sharing stuff unwittingly with people that they don't want to share it with.
"In addition, we have a high concentration of research and development personnel in China, which could expose us to market scrutiny regarding the integrity of our solution or data security features. Any security compromise in our industry, whether actual or perceived, could harm our reputation, erode confidence in the effectiveness of our security measures, negatively affect our ability to attract new customers and hosts, cause existing customers to elect not to renew their subscriptions or subject us to third-party lawsuits, regulatory fines or other action or liability, which could harm our business."
I'm happy that Zoom doesn't want to help Facebook spy on me. Unfortunately the chosen solution is still a privacy nightmare. Basically they let you log in to Facebook via an in-app browser. The problem is that an app can spy on all activity in an in-app browser. That means you have to trust that Zoom is not recording your Facebook password as you type it in. We need a better system.
Also scary: I have never ever logged in to Facebook on my iPhone except via the Facebook app, and this was the first time I'd installed Zoom. When I went into the Zoom app and picked login via Facebook, somehow it knew who I was and asked if I wanted to log in as me. How is this possible? Is iOS sharing cookies across apps? I feel like maybe I need to reset my phone. WTF
I also feel like the best solution for this case is to somehow log in via the Facebook app. I know that used to be an option but it seems Facebook deprecated it. My argument would be (a) I don't have to worry that Zoom (or any other app) is getting my Facebook credentials, and (b) if I actually do want to log in via Facebook, it's almost guaranteed I have the app installed.
Not true AFAICT. Given that Chrome on iOS uses a webview and overrides all networking, and given that Firefox can inject JavaScript, it seems trivial to use both of those features to spy on any webview activity.
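To make that concrete, this is roughly all it takes for a host app to watch what happens inside its own web view; the injected script below only registers a listener, but it could just as easily forward form fields to the app (a deliberately inert sketch):

    import WebKit

    // A host app controls every WKWebView it creates, including on third-party
    // login pages, so it can inject arbitrary JavaScript into them.
    func makeObservedWebView() -> WKWebView {
        let script = WKUserScript(
            source: "document.addEventListener('input', function (e) { /* could read e.target.value here */ });",
            injectionTime: .atDocumentEnd,
            forMainFrameOnly: false
        )
        let config = WKWebViewConfiguration()
        config.userContentController.addUserScript(script)
        return WKWebView(frame: .zero, configuration: config)
    }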
I love $ZM, it works when nothing else does, and it's responsive to user feedback and cares about user happiness. Most of the interactions I have today are through $ZM and my life would be cut off almost entirely if $ZM didn't exist. If I'm fortunate enough to run a company, I would love to have a business relationship with $ZM, and I'll remember all this.
No, I'm not a paid shill, just a really tired and stressed out guy who gets almost all of his social interaction through Zoom.