See, the thing about the sandbox is that it's only going to be effective for very simple programs.
If you're building a real world application, especially a server application like in the example, you're probably going to want to listen on the network, do some db access and write logs.
For that you'd have to open up network and file access pretty much right off the bat. That, combined with the 'download random code from any URL and run it immediately' model, means it's going to be much less secure than the already not-that-secure NPM ecosystem.
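Even a toy version of that server can't start without both permissions. A minimal sketch (the port and paths are placeholders):

  // server.ts -- needs --allow-net to listen and --allow-write for the log file
  const log = await Deno.open("./logs/access.log", { write: true, create: true, append: true });
  const listener = Deno.listen({ port: 8080 });
  for await (const conn of listener) {
    await log.write(new TextEncoder().encode(`${new Date().toISOString()} connection\n`));
    await conn.write(new TextEncoder().encode("HTTP/1.1 200 OK\r\ncontent-length: 2\r\n\r\nok"));
    conn.close();
  }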
> That, combined with the 'download random code from any URL
What protection does NPM actually give you?
Sure, they'll remove malware as they find it, but it is so trivially easy to publish packages and updates to NPM, there effectively is no security difference between an NPM module and a random URL. If you wouldn't feel comfortable cloning and executing random Github projects, then you shouldn't feel comfortable installing random NPM modules.
> and run it immediately
NPM packages also do this -- they can have install scripts that run as the current user, and network access that allows them to fetch, compile, and execute random binaries off the Internet.
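For anyone who hasn't looked at how that works: a dependency only needs a "postinstall" entry in its package.json, and npm will run the referenced script as the installing user. A deliberately harmless sketch (the file name is made up):

  // In the dependency's package.json:
  //   "scripts": { "postinstall": "node postinstall.js" }
  // postinstall.js -- executed automatically on `npm install`, with your user's privileges:
  const fs = require("fs");
  const os = require("os");
  // It can read anything your user can read, and nothing stops it from
  // POSTing what it finds somewhere instead of just printing it:
  console.log(fs.readdirSync(`${os.homedir()}/.ssh`));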
From a security point of view, Deno is just making it clear up-front that you are downloading random code snippets, so that programmers are less likely to make the mistake of trusting a largely unmoderated package repository to protect them from malware.
I lean towards calling that a reasonably big security win on its own, even without the other sandboxing features.
Dependency version pinning comes to mind. The main difference between this and a random URL is that at least you know that if the module gets bought by a third party, your services or build system won't auto update to some rando's version of the package. IIRC there have been cases when a version was replaced as well.
I think this could be fixed quite easily if one could add a hash and a size after the URL, to force an integrity check.
Yeah, basically sounds like they could implement it à la Content Security Policy in the browser and it would be well understood right off the bat.
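FWIW, something in that spirit already exists as an opt-in lock file, if I'm reading the manual right; the hash just lives in a separate file instead of after the URL. A rough sketch of the workflow (file names are whatever you choose):

  // deps.ts -- re-export pinned dependencies from one place:
  export { serve } from "https://deno.land/std@0.50.0/http/server.ts";

  // Record the hash of every module in the graph, then verify it on later runs:
  //   deno cache --lock=lock.json --lock-write deps.ts
  //   deno run --lock=lock.json --allow-net server.ts   (fails if any fetched module changed)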
Or, similar to node_modules, have some way to pull your dependency graph and host it locally. At least for enterprise-y adoption, I imagine people will want to have _their_ copy of the code and choose when to update it, even if in theory the remote code is locked down.
That is what I figured too. People are rightly concerned about the security implications of this new paradigm of including package dependencies.
These concerns and the conversation around them are good and healthy. Give it some time. People will experiment with what works and over time best practices will emerge for the set of trade offs that people are willing to make.
Arguably you can get (even more reliable) version pinning by copying the TypeScript from that random URL and storing it in your own S3 bucket. Sure, you have _some_ work to do, but it's not that much, and you 100% control the code from there on.
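Roughly, once the file is copied, the import just points at something you control (the bucket name and path here are made up):

  // Instead of importing straight from the upstream URL ...
  //   import { serve } from "https://deno.land/std@0.50.0/http/server.ts";
  // ... import the copy you reviewed and uploaded yourself:
  import { serve } from "https://my-vendored-deps.s3.amazonaws.com/std@0.50.0/http/server.ts";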
If you publish your module versions on IPFS, that would give your users a guarantee that the module versions do not change once published. But hashes are not very memorable as module names.
Well, using message digests, NPM or Yarn can pretty much guarantee content-addressable versions too. You don't have to use IPFS or a blockchain just because...
> That, combined with the 'download random code from any URL and run it immediately' model, means it's going to be much less secure than the already not-that-secure NPM ecosystem.
What deno does is move package management away from the framework distribution. This is great - one thing I hate about node is that npm is default and you get only as much security as npm gives you. (You can switch the npm repo, but it's still the overwhelming favourite because it's officially bundled.)
Deno can eventually give you:
import lib from 'https://verified-secure-packages.com/lib.ts'
import lib from 'https://packages.cloudflare.com/lib.ts'
So you'll be able to pick a snippet repository based on your risk appetite.
The point of the example above is to show that a controlled distribution could be built that verifies all levels of imports if needed, which is very promising.
Both the network and disk access permissions are granular, which means you can allow-write only to your logs folder, and allow net access only to your DB's address.
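Concretely, the run command can spell out exactly what the process may touch; anything not on those lists is denied. The host and path here are placeholders for your own setup:

  deno run --allow-net=db.example.internal:5432 --allow-write=./logs server.ts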
Typically, chmod and iptables are not used to restrict applications. Applications are restricted by virtual machines, containers, sandboxes, AppArmor profiles, SELinux policies…
There's a fairly long history of giving applications their own uid to run under which puts chmod and chown in control of filesystem operations the app is allowed to perform. "Typically" maybe not, but it's hardly unusual.
Plus, you can create a network namespace with its own iptables rules just for that namespace/app. You can, for example, give the namespace/app a VPN connection without affecting the rest of the system, and other apps can join the namespace and communicate as if they had their own isolated network.
NodeJS is also working on policies (1), which allow you to apply permissions to individual modules or files.
chmod/chown has been the de facto (if not de jure) method of securing LAMP stacks for as long as I have been alive. Not that I recommend taking the advice of a LAMP stack too seriously :)
> For that you'd have to open up network and file access pretty much right off the bat.
For the network access I have an issue[0] open that asks to separate the permission to listen on the network from the permission to dial out. Along the way I also want to add an option to let the OS pick the port.
Permissions are meant to work by whitelisting. So you wouldn't open access to the whole system just to talk to your DB, or to operate on some files.
Maybe this will develop into a standard of multi-process servers (real microservices, you could say), where permissions are only granted to a slice of the application.
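To sketch what that split could look like (ports, paths, and file names below are all made up): a front process gets network access only, and a separate logger process is the only thing allowed to touch the disk, and only under ./logs.

  // logger.ts -- run with: deno run --allow-net=127.0.0.1:9000 --allow-write=./logs logger.ts
  const log = await Deno.open("./logs/app.log", { write: true, create: true, append: true });
  const buf = new Uint8Array(1024);
  for await (const conn of Deno.listen({ hostname: "127.0.0.1", port: 9000 })) {
    let n: number | null;
    while ((n = await conn.read(buf)) !== null) {
      await log.write(buf.subarray(0, n));   // whatever the front process sends ends up in the log
    }
    conn.close();
  }

  // front.ts -- run with: deno run --allow-net=0.0.0.0:8080,127.0.0.1:9000 front.ts
  // It handles HTTP and forwards log lines to the logger over the local socket;
  // even if a compromised dependency is loaded here, it has no file-system access at all.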
QNX is hands down amazing! No car manufacturer could ever come close to having their in-house infotainment system being as snappy as QNX...which is why they gave up and switched to QNX! Fine print: Tesla not included.
Contrary to the impression I seem to have given you, I'm actually super excited about Deno and am planning to write my next MVP app in it.
That means I am actually a lot more invested in it, and if I want to put it in production, then I have to be concerned about things like this.
When somebody says they think X is broken, and they present a solution Y which they say is better, I am definitely entitled to ask why they think Y is better when I can't see the difference.
It’s one of the selling points. One of the main points I took away was:
“We feel that the landscape of JavaScript and the surrounding software infrastructure has changed enough that it was worthwhile to simplify. We seek a fun and productive scripting environment that can be used for a wide range of tasks.”
Sounds intriguing to me. As a fan of starting projects off as simply as possible, I will certainly be tinkering with Deno.
For the use case you describe, you're just going to need network access: no file access and no process forking needed, which is a big attack-surface reduction.
Moreover, I don't know how granular the network permission is, but if its implementation is smart, you could block almost all outbound network access except connections to your DB and the few APIs you may need to contact.
> For that you'd have to open up network and file access pretty much right off the bat.
I think that overall you're right, but it's worth noting that deno can restrict file system access to specific folders and can restrict read and write separately. It's plausible to me that you could have a web server that can only access specific folders.
I don't think running a public web server application is one of the envisioned use cases here. It looks like a tool for quickly and dirtily getting some job done. But I agree that to get something useful done, you probably need to open up a bunch of permissions, so you're still running arbitrary code on your machine.
It's always a good idea to run in a container, which limits the ports you can listen on, directories allowed for writing and reading, and can have its own firewall to limit outgoing connections.
If you don't need the firewall, you can just run in a chroot under a low-privilege user.
I mean, if you do otherwise, you are not following best practices and the voice of reason.
The manual is still pretty sparse, but it seems you can limit file access to certain files or directories, and that could be used to give it access to just the database and log files.