We need better capabilities.
E.g. when I run `fd`, `rg` or a similar tool, why should it have Internet access?
IMHO, just eliminating Internet access for all tools (e.g. in a power mode) might fix this.
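On Linux, assuming unprivileged user namespaces are enabled, something like the following already gives a no-network run (a rough sketch, not a full sandbox):

    # Run fd with no Internet access: -r maps the user to root inside a new
    # user namespace, -n creates an empty network namespace (loopback only).
    unshare -r -n fd some-pattern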
The second problem is that we have merged CI and CD.
The production/release tokens should ideally not be on the same system as the ones doing regular CI.
More users need access to CI (especially in the public case) than to CD.
For example, here's a similar incident from a few months back: https://blog.yossarian.net/2024/12/06/zizmor-ultralytics-inj...
> We need better capabilities. E.g. when I run `fd`, `rg` or similar such tool, why should it have Internet access?
Yeah!! We really need to auto-sandbox everything by default, like mobile OSes. Or the web.
People browse the web (well, except Richard Stallman) all the time and run tons of wildly untrusted code, much of it malicious. And apart from zero-days here and there, people don't pay much attention to it, and will happily visit any random website on the same machine where they also store sensitive data.
At the same time, when I open a random project from GitHub in VS Code, it asks whether the project is "trusted". If not, it doesn't run most features, like the LSP server. And why not? Because the OS doesn't sandbox stuff by default. It's maddening.
I’m not sure I can recommend Qubes entirely, due to the usability aspect.
I’ve used Qubes several times for a week at a time over the last few years. It’s gotten better, but they really need someone to look at the user experience of it all for it to be a compelling option.
I regularly question whether what I’m doing is making it less secure, because I don’t understand exactly what Qubes is doing. I know how all the pieces work individually (Xen, etc.).
Outside of configuration, I believe I’d have to ditch any hope of running 3D-anything with any expectation of performance. That’s simply a non-starter for someone who has written off “nation-state actor targeting me, specifically” as something I can defend against.
And lastly, I’m deeply skeptical of anything that loudly wears the Snowden-badge-of-approval as that seems to follow grifts.
My main workstation is a Mac and I’m doing this in Parallels. Would Qubes probably be more secure? Maybe. But it comes at a massive usability cost.
OpenBSD's pledge[0] system call is aimed at helping with this, although it's more of a defense-in-depth measure on the maintainer's part than the user's.
> The pledge() system call forces the current process into a restricted-service operating mode. A few subsets are available, roughly described as computation, memory management, read-write operations on file descriptors, opening of files, networking (and notably separate, DNS resolution). In general, these modes were selected by studying the operation of many programs using libc and other such interfaces, and setting promises or execpromises.
How so? Obviously this is ineffective at the package level, but if the thing spawning these processes, like the GitHub runners or Node itself, added support to enter a "restricted" mode and pledged, then that would help, no?
As far as I can see, its purpose is mostly mitigation/self-defence for vulnerabilities in C-based apps, i.e. limiting what can happen once an attacker has exploited a vulnerability. Maybe it has other uses.
It could be used to defend against bugs in the Node runtime itself, as you say, but as I understand it, vulnerabilities in the Node runtime itself are quite rare, so finer-grained limitations could be implemented within Node itself.
For CI/CD, using something like ArgoCD lets you avoid giving CI direct access to prod. It still needs write access to a git repo, and ideally some read access to Argo to check whether a deployment succeeded, but it limits the surface area.
Great points! Harden-Runner (https://github.com/step-security/harden-runner) is similar to Firejail and OpenSnitch but purpose-built for the CI/CD context. Harden-Runner detected this compromise due to an anomalous outbound network request to gist.githubusercontent.com.
bubblewrap is a safer alternative to firejail because it does not use setuid to do its job, and it is used by flatpak (so hopefully has more eyes on it, but I have no idea).
You do have to assemble isolation scripts by hand, though; it's pretty low level. Here is a decent comment that closely aligns with what I'm using to isolate npm/pnpm/yarn/etc., so I see no need to repeat it:
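For flavour, here is a rough sketch of that style of wrapper (not the linked comment; the paths are illustrative, assume a merged-/usr distro, and will need adjusting):

    #!/bin/sh
    # Sketch: run npm install with only the toolchain (read-only) and the
    # current project directory visible; everything else is unshared.
    # HOME is pointed at the project dir so npm's cache has somewhere writable.
    bwrap \
      --ro-bind /usr /usr \
      --symlink usr/bin /bin \
      --symlink usr/lib /lib \
      --symlink usr/lib64 /lib64 \
      --ro-bind /etc/resolv.conf /etc/resolv.conf \
      --ro-bind /etc/ssl /etc/ssl \
      --bind "$PWD" "$PWD" \
      --chdir "$PWD" \
      --setenv HOME "$PWD" \
      --dev /dev \
      --proc /proc \
      --tmpfs /tmp \
      --unshare-all \
      --share-net \
      --die-with-parent \
      npm install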
FreeBSD has Capsicum [0] for this. Once a process enters capability mode, it can't do anything except by using already opened file descriptors. It can't spawn subprocesses, connect to the network, load kernel modules or anything else.
To help with things that can't be done in the sandbox, e.g. DNS lookups and opening new files, it provides the libcasper library which implements them using helper processes.
Not all utilities are sandboxed, but some are and hopefully more will be.
Linux recently added Landlock [1], which seems sort of similar, although it has rulesets and doesn't seem to block everything by default, as far as I can tell from quickly skimming the docs.
I don't think it would help in this case, where the entire process can be replaced with a malicious version. It just won't make the Capsicum call.
What you really want is something external and easily inspectable, such as systemd per-service security rules, or flatpak sandboxing. Not sure if FreeBSD has something like this.
You also need to block write access, so they can’t encrypt all your files with an embedded public key. And read access so they can’t use a timing side channel to read a sensitive file and pass that info to another process with internet privileges to report the secret info back to the bad guy. You get the picture, I’m sure.
I get the picture, yes, namely that probably 99% of project dependencies don't need I/O capabilities at all.
And even if they do, they should be controlled in a granular manner, e.g. "package org.ourapp.net.aws can only do network, and it can only ping *.aws.com".
Having a finer-grained security model that is enforced at the kernel level (and is non-circumventable barring rootkits) is like 20 years overdue at this point.
> You also need to block write access, so they can’t encrypt all your files with an embedded public key. And read access so they can’t use a timing side channel to read a sensitive file and pass that info to another process with internet privileges to report the secret info back to the bad guy. You get the picture, I’m sure.
Indeed.
One can think of a few broad capabilities that would drastically reduce the attack surface.
1. Read-only access vs read-write
2. Access to only current directory and its sub-directories
3. Configurable Internet access
Docker mostly gets it right.
I wish there were an easy way to run commands under Docker.
E.g. if I am running `fd` (see the sketch after this list):
1. Mount the current directory read-only into Docker, without Internet access (and without access to the local network or other processes)
2. Run `fd`
3. Print the results
4. Destroy the container
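A rough sketch of such a wrapper (the image name is hypothetical; any image with `fd` installed would do):

    #!/bin/sh
    # Hypothetical wrapper: run fd against the current directory in a
    # throwaway container with no network and a read-only view of the code.
    docker run --rm --network none --read-only \
      -v "$PWD":/work:ro \
      -w /work \
      fd-image fd "$@"
    # --rm removes the container as soon as fd exits (step 4).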
This is exactly what the tool bubblewrap[1] is built for. It is pretty easy to wrap binaries with it and it gives you control over exactly what permissions you want in the namespace.
> 1. Mount current read-only directory to Docker without Internet access (and without access to local network or other processes) 2. Run `fd` 3. Print the results 4. Destroy the container
Systemd has a lot of neat sandboxing features [1] which aren't well known but can be very useful for this. You can get pretty far using systemd-run [2] in a script like this:
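A rough sketch of such a script (properties per systemd.exec(5); adjust the bind paths as needed):

    #!/bin/sh
    # Sketch: run a command on a blank tmpfs root, with no network or device
    # access, and only a few read-only paths plus the current dir mounted in.
    sudo systemd-run --pipe --wait --collect \
      -p TemporaryFileSystem=/ \
      -p PrivateNetwork=yes \
      -p PrivateDevices=yes \
      -p "BindReadOnlyPaths=/usr /etc" \
      -p "BindPaths=$PWD" \
      -p WorkingDirectory="$PWD" \
      "$@"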
This creates a blank filesystem with no network or device access and bind-mounts only the specified files.
Unfortunately, TemporaryFileSystem= requires running under a system instance of the service manager rather than a per-user instance, so that will generally mean running as root (hence sudo). One approach is to create a suid binary that does the same without needing sudo.
But that's what firejail and docker/podman are for. I never run any build pipeline on my host system, and neither should you. Build containers are pretty good at mitigating these kinds of security risks.
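E.g. a minimal firejail wrapper for a build step (a sketch; options per firejail(1)):

    # Sketch: run the build with no network access, using the project
    # directory as a private home so nothing else in $HOME is visible.
    firejail --net=none --private="$PWD" npm run build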