As much as we can criticise Google's handling of this situation, the fact that the developer was able to reduce permissions from accessing data on _all websites_ down to _their website_, as well as tighten up a few other permissions, shows that Google is correct that the extension is asking for more than it needs.
I hope the developer finds another load of permissions they can tighten up, resubmits, and is approved. As long as it results in permissions being more correct, this is a very positive thing for users, because for every PushBullet there are hundreds of attempts at malicious Chrome extensions that are abusing permissions.
That's what you got out of it? Google doing a good job? They sent an email with no guidance whatsoever.
These guys went above and beyond what most developers would've done, which would have been to contact support until they got a clear answer.
This only alienates the extension ecosystem. And this was the primary reason I switched to Firefox. Google is the new Microsoft. If I remember correctly, they started Chrome exactly so this very thing wouldn't happen.
As mentioned, I think Google have handled it poorly, but their fundamental position – that this extension is incorrectly using permissions – was significantly correct and may prove to be fully correct.
Google deserve criticism for the lack of clarity in the communication; they deserve criticism for the lack of human touch, customer support, and many other aspects.
They do not deserve criticism for calling out incorrect permissions usage and forcing developers to do better.
It's confusing because whatever system (whether human or automated) they're using to flag permission issues has more precise detection abilities than they chose to expose with a simple "Permission is too wide - fix it".
The fact that the extension has overly broad permission requests isn't good, but I think saying their communication lacks clarity undersells just how opaque their feedback was. It also concerns me a bit, because their opaqueness looks like it might be an attempt at security via obscurity, cloaking what the rules actually are, which is a generally bad approach to fighting malevolent actors.
It's possible that the flagging has come from user submitted reports. In that case if Google trust the reports (and they have enough data about users to know if reports are likely to be genuine) then they don't necessarily need to know any more details.
Alternatively it could be vague to restrict the possibility of bad actors circumventing the letter of the rules without adhering to the spirit of them, or even just protecting themselves from legal repercussions (perceived or real).
Your latter point is the one that concerns me. Organizations like governments have issues where the spirit of the law is valued over the letter, due to inertial restrictions on revising the law; private corporations, by contrast, retain the ability to restructure their rules unless they explicitly surrender it. In these cases, keeping the set of rules exposed to the public (and even demoing changes) can allow revisions to those rules that increase their accuracy.
And, when you get right down to it, any rule that isn't well structured will be exploited by bad actors. People looking to roll out malicious browser extensions have a strong motivation to discover those rules with a high level of accuracy by probing them; only the good actors remain uninformed.
My gripe is that you should always be specific when making requests, especially if you dangle something like a complete block of someone's account, as happened to the OP, but then don't say what needs to be done to prevent it.
It's like telling you to get me a book on computer science or I'll fire you, but not telling you which one. Also, I won't respond to any questions from you.
Whether the process happened to make the OP rethink his extension's permissions is simply an entirely different matter.
a) the extension had been operating for years, unmolested by the Googlebot, with the expanded permission set
b) tightening up the permissions did _not_ solve the problem, indicating clearly that whatever the Googlebot was selecting for, it wasn't an incorrect use of permissions.
14 days is an egregiously short deadline for responding with a software change. A developer could be out on vacation for that long. Encouraging rushed fixes is also irresponsible from a security perspective, and security is supposedly the very thing they're trying to improve.
Google sent an email saying they were asking for too many permissions. Pushbullet was asking to observe all website traffic. Google's email was objectively correct. I agree that the second rejection is more surprising, but yes, the first email seems like a case of Google doing a good job. I have very little sympathy for apps that ask for too many permissions.
Did they, though? The email seemed pretty clear that the problem was requesting more permissions than necessary.
I'm no Google fan, by any means, but if it's that hard for the developer to check which permissions their own app is requesting, I don't know if it's Google's fault.
I really did try to call out the benefits that came from being told to "give permissions another look". Like all software, our needs change over time, and I was able to make a great improvement.
The issue I have is that it's not clear if I'm even addressing the correct issue(s). If I don't make the Correct change, all other changes are irrelevant since they'll never get published.
Yeah, it's crap that they didn't give you guidance, although it seems like you managed to find plenty of issues quickly so perhaps the guidance is less necessary than it might seem.
Ultimately you know your extension, codebase, and use-case, far better than Google does, so it may not really be possible for them to give you the detail that you're looking for – you may be the only person who can do that.
I hope that they provide the support you need in understanding the problem to the point where the extension can continue to live on the Chrome store.
The big crime isn't the request to reduce permissions. The big crime is the lack of details and lack of communication. It's having to drop everything and work in a panic trying to guess how to please the faceless mysterious robot.
> I hope the developer finds another load of permissions they can tighten up, resubmits, and is approved.
You're missing the point here. The developer isn't given any guidance on what needs tightening. This shouldn't be guess and check. These rules impact this developer's livelihood. They should be well defined, documented, and communicated.
For comparison, some anecdotes elsewhere in the thread about how Apple attaches screengrabs and even decompiles apps to point to exact methods/lines of code in apps they reject from the iOS App Store, even small free ones: https://news.ycombinator.com/item?id=23170498
At the very, very least, they could identify which of the permissions are in violation and need to be made more restrictive, and which aren't. Someone at some point at Google clearly had that information when they decided to flag the extension, but Google's processes failed to ensure they communicated it.
For the record, I actually agree with you that this is a good policy and will be a positive outcome for users. But while you seem to agree that Google could have handled this better, you're not doing a good job of acknowledging just how developer-hostile Google was here, which is why you're getting a lot of pushback.
Most of the discussion on this link is about how Google is being developer hostile. I think that's getting plenty of attention.
> At the very, very least, they could identify which of the permissions are in violation
If they've flagged this through user reports of the permissions being too wide then they may not actually know which permissions need to be changed. This is purely speculation though.
> they may not actually know which permissions need to be changed
How can they not know? They decide whether the update is accepted or rejected, and there's somebody or something at Google that makes that decision, so Google has to know.
If they didn't know what permissions need to be changed, how is the accept/reject decision made? Something like "accept the fourth try if the developer makes it that far because it is probably an improvement?"
> ... may not actually know which permissions need to be changed.
Sure, the first notice may have come from user flags, and the motivation for those flags is unknowable.
But it's been rejected again, after substantial permissions pruning.
Either they know why they rejected the update, in which case they should tell the developer; or they don't know why they rejected the update, in which case they're holding developers hostage to an inscrutable black box.
> Most of the discussion on this link is about how Google is being developer hostile. I think that's getting plenty of attention.
You are certainly within your rights to state things divisively if it pleases you. I was merely suggesting how you might make your point in a way that gets people to agree with you.
> If they've flagged this through user reports of the permissions being too wide then they may not actually know which permissions need to be changed.
Even if this were true,
1) what about the update that narrowed the permissions? Surely Google knew which permissions remained in violation; remember, it was the rejection of that update that prompted this post
2) user reports of permissions being too wide should also be required to identify the specific permission that is in violation. That would not only help the developer, but also help Google make the decision on whether to ultimately ban the extension
3) Google should have clearly stated in the initial message that they hadn't actually verified that the alleged violations are occurring
>> As much as we can criticise Google's handling of this situation, the fact that the developer was able to reduce permissions from accessing data on _all websites_ down to _their website_, as well as tighten up a few other permissions, shows that Google is correct that the extension is asking for more than it needs.
OK, fair enough, but why aren't the big violators held to this? (I realize this example isn't Chrome, but it is Google Calendar: ever try to add a Zoom meeting invitation to your Google calendar? Zoom wants access to read and write all events ever on your entire calendar!)
Extension developers monetizing their extensions by selling the data that they get from users is a big problem. It's the reason that I don't freely install useful extensions that I find today. I have no way to distinguish those who sell my data from those who don't.
I love that Google is starting to solve this problem, and from my perspective an extension that is sending and receiving SMS messages should not be requesting the ability to read and change all data on all websites that I access.
"Solve the problem" ok, so you're starting that this selling only happens when a third party dev does it?
Do You have an android phone? Do You use google for anything? Gmail? Google docs/drive? Youtube? Chrome? ChromeOS? Anything google owns? Then they're selling your data.
Try reading all those fun TOS agreements that come with using any of the aformentioned products, or heck, visiting sites that use google analytics.that won't tell you how much or what data google gets from you, but it'll tell you that you agreed to it.
They aren't solving this problem, they're killing off extensions. And I say this having received many unsolicited attempts to "purchase" Chrome extensions.
It's obviously a balance, but you could use that argument to allow any plugin on the store. It gives more choice.
I think it's important to remember that while PushBullet is known to many of us, is posting on Hacker News, is a valued part of "the community" in some respect, at Google scale this fact is not known. PushBullet is obviously good to _us_, and maybe just needs to tweak permissions a little, but to a reviewer at Google it probably looks very similar to the hundreds of extensions they may review a day, many of which may contain malware.
They have to use certain metrics to sort the good from the bad, and abuse of the permission system – intentional or not – is a pretty good one when you care about the end user.
I often wish for a separate browser for consumers who are also devs. I'd happily lift the permissions for some open-source extensions I'm using if that means better functionality.
Was this not determined before, or did they change their minds now that Google is threatening to pull their product? Either they thought that was appropriate before, or they didn't think about it at all. Inexcusable either way.
Exactly. Not once in their diatribe did they provide a reason that they need those permissions. The fact that no one there knew why they were asking for those permissions in the first place is a huge red flag for me.
And why in the world are they asking for the cookies permission? That's a big, fat nope for me. It's as if they don't understand what they are asking for and the potential implications of passing that data around so haphazardly.
These folks need to take another hard look in the mirror before they point the finger, because their own house is way out of order.
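To make the cookies concern concrete: combined with matching host permissions, the `cookies` permission lets an extension enumerate stored cookies via the real `chrome.cookies.getAll` API. A minimal sketch (the domain is purely illustrative):

```typescript
// What the "cookies" permission enables: reading stored cookies for any
// domain covered by the extension's host permissions. The domain below is
// just an example.
chrome.cookies.getAll({ domain: "example.com" }, (cookies) => {
  // Each cookie object carries its name, value, domain, path, expiry, etc.
  // Nothing stops a malicious extension from shipping these off-device.
  console.log(`Readable cookies for example.com: ${cookies.length}`);
});
```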
Disagree that G's motivation here is to reduce permission footprint, because:
- if G has the ability to automatically audit necessary permissions, they'd do it when you upload to the plugin store
- if they're doing this manually for popular plugins, then (1) they'd publicly certify safe plugins and (2) the interaction would be way more high touch
Plugins are inherently unsafe + require trusting the developer.
Could be malicious, or G may not even have a reason for this (it may be some forgotten dinosaur instinct to knock over other people's stuff when it gets too big).
> - if G has the ability to automatically audit necessary permissions, they'd do it when you upload to the plugin store
If they added it more recently then they are just back-applying it to an already existing extension.
Alternatively, you can report plugins as requesting incorrect permissions – I've done this. Perhaps that's what's happened here, lots of reports triggering an investigation.
Also, Google could just block the permission and let the extension developers deal. Even that would be less hostile because at least the developers would know what to fix.
1. The article contains more relevant information that you did not address in your point.
2. That relevant information makes your point void.
3. I think your point makes no sense in light of the relevant information.
There, I've refuted your claim. You have 14 days to change it and show what you learned.
I strongly disagree. If they were actually interested in this, they could simply tell the developers what to fix. This is beyond arrogant and counterproductive.
Why can't Google provide support instead of vague threats?
Provide a permissions audit tool, recommend ways to reduce permissions, or provide a dev tool that automatically reports on permissions that haven't been used while running an extension (a rough sketch of that last idea is below).
Is banning someone's entire Google account across all services a proportionate response to a developer having trouble with Google's confusing permissions API?
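As a sketch of that self-audit idea: `chrome.permissions.getAll` is a real extension API, while the expected-origins list and the reporting here are hypothetical.

```typescript
// Hypothetical self-audit: compare what's actually granted against what we
// believe the extension needs. chrome.permissions.getAll is a real API;
// EXPECTED_ORIGINS and the warning logic are illustrative.
const EXPECTED_ORIGINS = ["https://*.example.com/*"];

chrome.permissions.getAll((granted) => {
  const origins = granted.origins ?? [];
  const unexpected = origins.filter((o) => !EXPECTED_ORIGINS.includes(o));
  if (unexpected.length > 0) {
    console.warn("Granted host permissions beyond the expected set:", unexpected);
  }
});
```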
Anti-cheat through obscurity, on the other hand, is absolutely a thing.
As a metaphor, there’s a damn good reason you can’t just pay an Olympic anti-doping facility to test your urine; it would be trivial to develop protocols that evade the tests if you could do that.
That's not what we're discussing, though. We're discussing whether anti-cheat through obscurity works, and I'm saying that if it did, there would be no cheaters. Instead, companies have to build technological solutions that also don't work 100%, but that's beside the point.
There are certainly fewer cheaters than if there were no anti-cheat methods. To use the OP's example, an open-source urine testing procedure would be trivial to game. The same thing goes for open-source multiplayer games.
> it would be trivial to develop protocols that evade the tests if you could do that.
If it's trivial to evade the tests, then the tests are inadequate in the first place, and should not be trusted to be accurate.
Likewise, if an anti-cheat system relies on obscurity in order to not be bypassed, then it's a crappy anti-cheat system (and, mind you, would be far less necessary if multiplayer games didn't have a fetish for trusting the client to do potentially-exploitable things instead of insisting upon server-side validation, sketched below, but I digress).
And likewise, if making your policy publicly-known will result in people skirting around the spirit of that policy, then the policy is poorly-written and should be rewritten to better reflect the intent.
Security through obscurity is not security. Full stop.
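To illustrate the server-side validation point above, a minimal sketch, assuming a hypothetical game server that receives movement updates from clients; all names and the speed limit are illustrative.

```typescript
// A minimal sketch of authoritative server-side validation for a movement
// update. Everything here (names, the speed limit) is hypothetical.
interface MoveUpdate {
  playerId: string;
  x: number;
  y: number;
  timestampMs: number;
}

const MAX_SPEED = 5; // world units per second; tuned per game
const lastAccepted = new Map<string, MoveUpdate>();

// The server never trusts the client's claimed position outright: it checks
// that the move is physically possible given the last accepted state.
function validateMove(move: MoveUpdate): boolean {
  const prev = lastAccepted.get(move.playerId);
  if (!prev) {
    lastAccepted.set(move.playerId, move);
    return true;
  }
  const dtSeconds = (move.timestampMs - prev.timestampMs) / 1000;
  if (dtSeconds <= 0) return false; // stale or replayed update
  const distance = Math.hypot(move.x - prev.x, move.y - prev.y);
  if (distance / dtSeconds > MAX_SPEED) return false; // impossible speed
  lastAccepted.set(move.playerId, move);
  return true;
}
```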
> As I looked at the permissions and what our extension actually needs to operate, I noticed a great opportunity to reduce our permissions requests. We do not need to request access to data on https://*/* and http://*/*. Instead, we can simply request data access for https://*.pushbullet.com/*, http://*.pushbullet.com/*, and http://localhost/*. This is a huge reduction in the private data our extension could theoretically access. A big win!
They were completely in the wrong there, and posing a huge security risk to all of their users.
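To make the quoted narrowing concrete, here's how the host match patterns compare, shown as plain arrays rather than an actual manifest.json; the patterns are the ones from the quote.

```typescript
// Host match patterns before and after the narrowing described in the quote.
// In a real extension these live in manifest.json's "permissions" (or, in
// Manifest V3, "host_permissions") array.
const before = [
  "https://*/*", // data on every HTTPS site the user visits
  "http://*/*",  // ...and every HTTP site
];

const after = [
  "https://*.pushbullet.com/*",
  "http://*.pushbullet.com/*",
  "http://localhost/*", // local development
];
```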
Permissions seem to be a pretty empty metric if you don't know what the result is...
What was the impact of fewer permissions?
Let's assume PushBullet was doing something bad with some of those permissions and gathering data. Do they no longer have access to that data? I'm not sure that's the case; permissions alone don't determine that.
If PushBullet wasn't doing anything bad, did anything change?
Is it a positive thing for users when the extension disappears in a few days?