This is a very, very hard problem to solve without some serious coordination and effort. To do this well you need to do all kinds of things like:
1. Verifying the authenticity of the software you're looking at. Is that really libxml or has someone fed you a poisoned package? Does it have a checksum? Is the registry or repository we pulled it from adequately secured?
2. Verifying the provenance of a particular version of a package. Who made this change? Why did they make it? Is the change safe?
3. Vetting the governance of a package or library. We may know of Apache and have some confidence in that project's governance, but what about left-pad of Node.js/npm fame? Is that logging library we pulled for our Rust app from crates.io managed by a reputable OSS developer or company, or are the motivations of the entities building it uncertain? Can they be bought? Bought out? Sued? Blackmailed? If you're looking at this with the mindset of a state-sponsored organization that handles anything remotely important, these are not unreasonable considerations.
The list goes on and on. It helps if you're working in an ecosystem with a high level of quality standards for the libraries and packages in use, but there's always more to dig up the deeper you go into a dependency tree.
I'm of the opinion that supply chain attacks can be mitigated, but never eliminated. Having many eyes on a project helps (thanks to all of you working on projects with openly available sources!). We're still a long way from making our industry's tools and development processes naturally robust to these things, though I'm excited to see more dialog around these issues.
Those are signals, and they may be part of a security posture, but they're far from adequate if you're talking about operational security, the security of health records, network-wide root keys, etc.