There are several distinct problems here, each with its own solutions of varying impact.

I think you can split this into two areas of interest:

1. A package maintainer's credentials are compromised

2. A package repository is compromised

And the two attack vectors into:

1. The build script(s)

2. The runtime library

You can cut off the "repository is compromised" path with signing. An attacker doesn't have the maintainer's private key, so even if they can modify source/packages on the server, they can't "trick" the client into verifying a tampered package.
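
To make that concrete, here's a minimal sketch of the client-side check in Python, using the `cryptography` package's Ed25519 support. The function name and the assumption that the maintainer's public key was obtained out of band (e.g. pinned on first install) are mine, not any particular package manager's design:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verify_package(package_bytes: bytes, signature: bytes, pubkey_bytes: bytes) -> bool:
        """Accept the package only if it carries a valid maintainer signature."""
        public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
        try:
            public_key.verify(signature, package_bytes)
            return True
        except InvalidSignature:
            # A compromised mirror can rewrite bytes on disk all it wants,
            # but without the private key it can't forge a passing signature.
            return False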

(1) is harder. Let's assume we have package signing: we know that any package we receive was signed with a key that, we hope, only the maintainer has access to. At this point, the maintainer is either compromised or malicious. We can make compromise harder in a few ways, but we should ultimately assume it will happen.

One way to reduce the risk of key compromise is to store the signing key on a hardware token that requires proof-of-presence (a physical touch) for each signature.

Still, at this point we're left with "build script bad" and "library bad". Both are much harder problems, with solutions along the lines you've alluded to - that is, sandboxing behaviors.

What this requires is a way to say "this code can do these things". This is how browser extensions / mobile apps work - they have to declare their permissions and you have to ack them any time they change.
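
As a rough sketch of what that could look like for packages - the manifest format and permission names here are invented purely for illustration:

    # Hypothetical: a package declares its permissions up front, and the
    # client refuses to install silently if the declared set grows.
    ALLOWED = {"network", "filesystem", "subprocess"}

    def check_permissions(declared: set, previously_acked: set) -> None:
        unknown = declared - ALLOWED
        if unknown:
            raise ValueError(f"manifest declares unknown permissions: {unknown}")
        newly_requested = declared - previously_acked
        if newly_requested:
            # Mirrors the browser-extension model: any new permission
            # requires an explicit ack from the user before upgrading.
            raise PermissionError(f"package now requests {newly_requested}; re-ack required")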

Doing this for build scripts isn't too hard. You can run them in a "hermetic" build system, with each script executing serially in a restricted environment - if one needs networking, give it networking, and so on. No mainstream package manager supports this natively, but imo it wouldn't be that hard to add.
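
A hedged sketch of the "fetch online, build offline" flow on Linux - the pip and util-linux invocations are real, but the overall flow is illustrative, and it assumes unprivileged user namespaces are enabled:

    import subprocess

    def fetch(package: str) -> None:
        # Networking is allowed only during this step.
        subprocess.run(
            ["pip", "download", "--no-deps", "--dest", "vendor/", package],
            check=True,
        )

    def build_offline(build_script: str) -> None:
        # unshare -n runs the child in a fresh, empty network namespace;
        # -r maps us to root inside the namespace so no privileges are needed.
        subprocess.run(
            ["unshare", "-r", "-n", "python", build_script],
            check=True,
        )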

Doing this for libraries is much harder. You'd need a native capabilities system in the language, and any change to a library's capabilities would be a breaking API change. But sandboxing entire processes isn't that hard. The vast majority of services don't require egress to the public internet, which means an attacker is already going to have a hell of a time if the box their code lands on can't even talk back out to them. So I'd say start there - limit what a process can do, and you limit the impact of a compromised library.
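
For illustration, here's roughly what capability-passing looks like (all names invented) - and you can see why it breaks APIs: the capability has to appear in the signature of everything that needs it:

    import socket

    class NetworkCap:
        """The only sanctioned way to open outbound connections."""
        def connect(self, host: str, port: int):
            return socket.create_connection((host, port))

    def send_telemetry(data: bytes, net: NetworkCap | None = None) -> None:
        # Without an explicitly granted capability, the library has no
        # ambient authority to reach the network at all.
        if net is None:
            raise PermissionError("caller did not grant network access")
        conn = net.connect("telemetry.example.com", 443)
        try:
            conn.sendall(data)
        finally:
            conn.close()

Of course, Python can't actually stop a library from importing socket directly and ignoring all this - which is exactly why it would need native language support to be real, and why process-level sandboxing is the more practical starting point.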

So altogether, none of these approaches seem super hard. We have signing, sandboxed builds (which can be pretty loosely sandboxed tbh - do a fetch, then cut off internet access for build scripts and limit fs access), and sandboxed services.

It'd be nice to have something more robust, but today you can do everything above without a ton of effort.


