
Basically, what we mean by sandboxed is that the application would not be able to do anything harmful to your system unless explicitly allowed (a similar ethos to Deno's, as compared to npm).

Let me explain a bit more how:

* stdio / file io: thanks to the Wasmer VFS and WASI mapped dirs, the application will only be able to access the directories explicitly defined by the user when running a program. Any file/dir access outside of the allowed directories will throw an error (both by design of WASI and by our implementation of it). Note: by default stdout/stderr will be piped, but that can also be easily customized.
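The preopen check that WASI runtimes perform can be sketched roughly like this (illustrative Python, not Wasmer's actual code; the example path is hypothetical and would really come from the user's CLI invocation):

```python
from pathlib import Path

# Directories the user explicitly granted, e.g. via a `--dir` flag.
# (Hypothetical example path; the real allowlist comes from the invocation.)
PREOPENED_DIRS = [Path("/home/user/data").resolve()]

def is_allowed(requested: str) -> bool:
    """True iff the resolved path sits inside a preopened directory."""
    target = Path(requested).resolve()
    return any(target == root or root in target.parents
               for root in PREOPENED_DIRS)

def open_guest_path(requested: str, mode: str = "r"):
    """Open a file, refusing anything outside the sandbox, as WASI does."""
    if not is_allowed(requested):
        raise PermissionError(f"{requested} is outside the preopened dirs")
    return open(requested, mode)
```

Note that resolving the path first means `..` tricks cannot escape the granted directories.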

* Environment variables: no environment variables can be accessed unless explicitly specified to our CLI (or SDKs)
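In the same spirit, the env-var story is essentially an allowlist: the guest sees only the variables the host explicitly passed through (illustrative sketch, not the actual SDK API):

```python
import os

def guest_environ(allowed: list[str]) -> dict[str, str]:
    """Build the environment the sandboxed app sees: only the explicitly
    allowed variables are copied in; everything else stays invisible."""
    return {k: os.environ[k] for k in allowed if k in os.environ}
```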

* Time: we consider time access (get) to be harmless (WASI doesn't allow setting the platform/OS time, so we don't have to worry about that).

> What are "secure systemcalls"? What does this mean on the technical level?

It means system calls that we consider harmless to execute. For example, getting the current time (unix timestamp) is considered harmless by our WASI implementation (it would be easy to shield even that, if needed), but accessing a file could be harmful (that's why we ask for permission first).
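One way to picture this is a per-call policy table: harmless calls go straight through, sensitive ones are checked against the user's grants, and everything unknown is denied (illustrative sketch only; the call names and categories here are made up, not Wasmer's actual policy):

```python
# Hypothetical policy table; names and categories are illustrative only.
ALWAYS_ALLOWED = {"clock_time_get", "random_get"}   # considered harmless
NEEDS_GRANT    = {"path_open", "sock_connect"}      # require a user grant

def check_syscall(name: str, grants: set[str]) -> bool:
    """Return True if the call may proceed under this policy."""
    if name in ALWAYS_ALLOWED:
        return True
    if name in NEEDS_GRANT:
        return name in grants
    return False  # default-deny anything unrecognized
```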

Of course, no software is free from bugs that could jailbreak the sandbox, especially after Spectre and Meltdown. So making something "fully sandboxed" is an infinite game.

But our aim is there, nonetheless :)



Thanks for the reply! I appreciate that.

> Basically, what we mean by sandboxed is that the application would not be able to do anything harmful to your system unless explicitly allowed (a similar ethos to Deno's, as compared to npm).

So how is this different from the mentioned (obviously failed) JVM sandbox?

It's exactly the same concept as I see it. "Security" fully depends on (complex!) configuration.

What could possibly go wrong here…

> stdio / file io: thanks to the Wasmer VFS and WASI mapped dirs, the application will only be able to access the directories explicitly defined by the user when running a program. Any file/dir access outside of the allowed directories will throw an error (both by design of WASI and by our implementation of it). Note: by default stdout/stderr will be piped, but that can also be easily customized.

So basically you've implemented file access rights.

With multiple additional layers of indirection and a lot of additional code. (TCB anybody?)

Sure, this looks very promising and innovative. What could possibly go wrong here?

How does this prevent unwanted information exfiltration, given that standard I/O and networking work?

Also, how does it prevent a rogue / hacked app from, for example, encrypting all the files / databases it has regular access to?

> Environment variables: no environment variables can be accessed unless explicitly specified to our CLI (or SDKs)

> Time: we consider time access (get) to be harmless (WASI doesn't allow setting the platform/OS time, so we don't have to worry about that).

How is this different from standard Linux "capabilities" and the Linux syscall filter?

> Of course, no software is free from bugs that could jailbreak the sandbox, especially after Spectre and Meltdown. So making something "fully sandboxed" is an infinite game.

Getting the sandbox tight and closed is not the problem, imho.

This was done several times already (with mostly OK-ish results).

The problems start when you need to dig holes into it. (And you need holes; otherwise you're so "secure" that you can't do anything meaningful.)

I still don't get how a new "whitelist only" sandbox will avoid the same fundamental issue as the ones before it. When "security" is crucially dependent on complex configuration, it's very likely that there will be unintended holes because someone didn't get the config right. (Just think about why almost nobody uses SELinux, even though it gives you even better guarantees than WASM currently does.)

Java shows this problem very well, too: it died on the client not because there were technical loopholes in the sandbox. There were almost none of that kind! The problem was the overly complex sandbox config (with, to be honest, quite bad defaults in the beginning) that nobody got right, so security issues popped up constantly.

So you don't even need to think about Spectre and Meltdown kind of attack vectors.

After years (or even decades) there is still no guarantee that people get things like Linux "capabilities" or even standard FS access rights correct. Now the same people are expected to get some other features, which do basically the same thing, right?

I already know how this will end up… And it will blow up very painfully right in your face after you have made so many promises about the superior security of your approach!

Don't be stupid, learn from the Java story. (And no, you're not any better. If you think that, you've lost already, and your ship is going to sink for sure.)

---

I think the only way to get capability security right, and especially usable by mere mortals in the first place, is to make it a fundamental part of the languages used.

Rust is light years away from that, though… (And others don't even have the basic enabling features for it.)

The languages that tried something like that in the past have been dead for a long time now. (Like the E programming language.)
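The object-capability idea those languages explored can be sketched even in an ordinary language: authority is an object you must be explicitly handed, not an ambient global (illustrative sketch; real capability languages like E enforce this in the language itself, which Python cannot):

```python
class ReadCap:
    """A capability granting read access to exactly one file."""
    def __init__(self, path: str):
        self._path = path

    def read(self) -> str:
        with open(self._path) as f:
            return f.read()

def word_count(cap: ReadCap) -> int:
    # This function can only touch the file it was explicitly handed;
    # by convention it has no ambient authority to open anything else.
    # (In Python this is only a discipline, not a guarantee.)
    return len(cap.read().split())
```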

Though I put high hopes in the upcoming Scala features in this regard.

But those are nevertheless way ahead of any other (real) language, even though a usable system is still years away there, too. For reference:

https://docs.scala-lang.org/scala3/reference/experimental/cc...

(This will give you all the features of Rust plus true capabilities in the type system. But not within the next couple of years. Still, maybe someone will start to clone this feature set early, so they won't be a decade behind when Scala finally ships it… ;-))



