
If a piece of software can connect to the internet, it can implement its own autoupdate. For extensions it would be trivial. People even commonly did it with Greasemonkey scripts.

The reason Chrome includes this functionality is because the hand-rolled solutions people come up with frequently have security problems.

Also, auto update gives good developers a way to fix their own security bugs when they are discovered.




We should be asking ourselves why we allow extensions with so much power and no meaningful security model. Auto update is a misfeature if I've ever seen one (again, for anything but the most critical pieces of software).


Autoupdate is not a "feature", it's a consequence of Turing completeness combined with network access. Any platform that has these two properties has autoupdate, and there's no technical way to prevent it.

The autoupdate feature exposed to Chrome extensions is not there to make autoupdate possible, it's there to prevent developers from shooting themselves in the foot by implementing their own autoupdate insecurely. It's similar to a platform providing crypto libraries rather than letting developers implement crypto themselves.

The only way to try to prevent software from autoupdating is manual review, which is what Apple does with iOS. However, that comes with a whole host of other problems.

As for the security model: I don't know how to measure "meaningful", but Chrome extensions have the most fine-grained security model of any extension platform. In fact, it is more fine-grained than most general-purpose software platforms:

http://developer.chrome.com/extensions/permission_warnings.h...

The problem that started this thread is real, but it is not solved by disabling autoupdate (which is impossible) or adding more "security model" to the platform.

In fact, the problem is very deep: how do you balance the desire to allow flexible software on your platform (adblocking, network stack interception, manipulation of the user interface, etc.) with the desire to limit the potential harm malicious actors can do? These two goals are in conflict. The best answers humanity has come up with so far include some combination of "manual or automated review", "social signals" (stars, reviews from your network, etc.), "blacklisting", "controlling access to the platform" (iOS apps can only be deployed in large numbers through the store), and "limiting the power of the platform where possible with clever user interfaces" (the file upload button in HTML is a classic example).


I can't help but roll my eyes at these "Turing completeness" arguments. The obvious way to prevent auto updating is to simply disallow an extension from modifying its own storage and to prevent it from running any code it manages to download. We don't operate on abstract Turing machines; we can impose whatever limits we want on the code we allow to run. Now, I'm not saying this is easy to accomplish, but impossible it is not. If Google can manage to allow machine code downloaded over the internet to run securely, as it claims, I'm sure it can handle a little javascript.


"The obvious way to prevent auto updating is to simply disallow an extension from modifying its own storage and to prevent it from running any code it manages to download."

You can remove eval() (in fact, eval() is already disallowed in Chrome extensions, for different reasons).

But how do you prevent an extension from including an interpreter for some other language and simply downloading and interpreting code in that language? This isn't crazy. It's common for games to include interpreters, for example.

Including an interpreter for a simple language is just one step in complexity beyond downloading configuration files. Is downloading configuration files that change the behavior of the code forbidden in your proposed system too? How is that accomplished?
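To make that concrete, here is a toy sketch in plain JavaScript (all names hypothetical) of how little it takes: a few lines of dispatch over "configuration" downloaded from a server is already an interpreter, and changing the downloaded data changes the extension's behavior without a single call to eval().

```javascript
// A tiny interpreter for a made-up "config" language. Nothing here
// calls eval(), yet the downloaded program fully controls behavior.
function run(program, page) {
  const ops = {
    // Each "instruction" is plain data: [opname, argument]
    block:  (p, sel)  => { p.blocked.push(sel); },
    inject: (p, html) => { p.injected.push(html); },
  };
  for (const [op, arg] of program) {
    if (ops[op]) ops[op](page, arg);
  }
  return page;
}

// Version 1 of the "config" the server hands out: benign adblocking.
const v1 = [["block", ".ad-banner"]];
// Version 2, pushed later with no code change at all: ad injection.
const v2 = [["inject", "<div class='sponsored'>...</div>"]];

console.log(run(v1, { blocked: [], injected: [] }));
console.log(run(v2, { blocked: [], injected: [] }));
```

The extension's reviewed code never changed; only the data did. That is why banning eval() alone cannot prevent behavior changes.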

"If Google can manage to allow machine code downloaded over the internet to run securely like it claims, I'm sure it can handle a little javascript."

NaCl has the same properties I'm describing. NaCl enforces a sandbox on what a hunk of code has access to; it doesn't enforce that the code won't change its behavior over time, possibly in response to data from the network. That is not possible to do.


What you say is certainly true in general, but it doesn't take into account the threat model of a supposedly malicious extension. The threat model here is that someone decides to significantly alter the behavior of an extension after it gains traction and a certain level of trust. For this to work, the extension would have to include an interpreter or some other such mechanism from the start. The author would essentially have to think "if this gains traction I may want to start silently injecting ads/stealing information, so why don't I include this interpreter now, just in case". While possible, it isn't very likely.

The other problem with the security model is that all extensions are seemingly created equal. I would think that a large majority of extensions do not require access to any external resources besides the ones downloaded by the webpage itself. Think of a youtube downloader, flashblock, etc. These types of extensions should have a different security model, restricted so they can only make web requests to the domain of the page (and perhaps any domains that page itself calls, to account for CDNs). Installing them shouldn't require any consideration of security implications. There's no reason why a youtube downloader should have the potential to steal my passwords down the line. More involved extensions that genuinely need to make arbitrary HTTP requests could keep the warning about data access.

I see in your profile that you know a thing or two about Chrome extensions, so I will defer to your expertise on what's possible. It just seems that with the proper constraints for different levels of access, one could have much greater security than we have now treating all extensions the same.


If we didn't provide an autoupdate mechanism, then developers would just implement their own. Frequently, these hand-rolled systems would have security flaws. We have an existence proof of this: it happened in the Greasemonkey ecosystem.

We might propose preventing these hand-rolled systems by restricting eval(). Well, we already restrict eval() for different reasons. What we see is a lot of people working around the restriction by injecting code into websites (!!).

My bet is that basically any significantly sized extension would implement a workaround for any autoupdate restriction we tried to employ. Developers really like the ability to update their product, and the workarounds are not that hard. And these workarounds would be worse than the original problem: that sometimes good people turn bad.

It would also destroy the value that autoupdate provides, which you are forgetting: most of the time it is used by good people to do good things. We frequently find extensions with security problems, tell the author, and then they fix them and push the fix to users. Without autoupdate this wouldn't be possible.

===

As for your proposal for how to restrict the levels of access extensions have... As I said above, we do this kind of thing already. You can read all about it here: http://developer.chrome.com/extensions/permission_warnings.h.... We have a very granular security system.

For example, it has always been possible to write a youtube downloader Chrome extension that only has access to youtube.com.
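For instance, a manifest along these lines (a sketch using Chrome's real extension manifest format; the file names and extension name are made up) scopes an extension to youtube.com only, so the install warning reflects that narrow access:

```json
{
  "name": "YouTube Downloader (hypothetical)",
  "version": "1.0",
  "manifest_version": 2,
  "permissions": ["*://*.youtube.com/*"],
  "content_scripts": [{
    "matches": ["*://*.youtube.com/*"],
    "js": ["downloader.js"]
  }]
}
```

An extension declared this way cannot make cross-origin requests to other hosts, which is exactly the constrained model the parent comment is asking for.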

Flashblock would theoretically be possible with the upcoming declarativeWebRequest API (http://developer.chrome.com/extensions/declarativeWebRequest...).

However, designing APIs that carry narrow risk yet are flexible enough for developers to want to use them remains a difficult problem, unsolved except in specific cases.


Thanks for the info. I had no idea Chrome had that; I don't recall installing an extension that DIDN'T give me a warning about access. It seems like developers just stick to the more unrestricted system out of convenience or narcissism ("must phone home my youtube downloader, for science!"). There's definitely something out of whack here if developers won't use the restricted access when their extension fits nicely within that model. It would be nice if it could be shown that extensions are penalized in terms of downloads by requiring unnecessary access. As it is, I've stopped using all extensions in Chrome except adblock.



