I've thought this about Maven Central before (not familiar with Leiningen, but it seems it's trying to do a similar thing).
Maven Central has PGP signatures for all uploaded artifacts - but they are in fact useless, because anyone can create a PGP key that claims to be (say) maven-releases@google.com and upload that to a keyserver. There appears to be no mechanism by which a consumer can know whether the signing key should be trusted, so an attacker uploading a malicious artifact can easily upload a malicious signature with it.
I'm not going to argue with any of the criticisms of PGP in the linked article, but they don't seem hugely relevant to the problem here; the fundamental trust problem is much deeper than "GPG has janky code" (and it's not like there aren't any other options at all).
Maven Central will let you sign artifacts with any published key you like, but one thing you can do in theory is verify that new releases are signed by the same key as a known-good release.
I am not aware of anyone actually doing this though.
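For illustration, a minimal sketch of that pinning check using Bouncy Castle's OpenPGP classes - the pinned key ID here is made up, and a real check should also verify the signature cryptographically and compare the full fingerprint rather than the 64-bit key ID:

    import java.io.FileInputStream;
    import java.io.InputStream;

    import org.bouncycastle.openpgp.PGPObjectFactory;
    import org.bouncycastle.openpgp.PGPSignatureList;
    import org.bouncycastle.openpgp.PGPUtil;
    import org.bouncycastle.openpgp.operator.jcajce.JcaKeyFingerprintCalculator;

    public class KeyPinCheck {

        // Key ID observed on a release you already trust (hypothetical value).
        static final long PINNED_KEY_ID = 0x1234567890ABCDEFL;

        public static void main(String[] args) throws Exception {
            // args[0] is the detached .asc signature published next to the new artifact.
            try (InputStream in = PGPUtil.getDecoderStream(new FileInputStream(args[0]))) {
                PGPObjectFactory factory =
                        new PGPObjectFactory(in, new JcaKeyFingerprintCalculator());
                PGPSignatureList sigs = (PGPSignatureList) factory.nextObject();
                long keyId = sigs.get(0).getKeyID();

                if (keyId != PINNED_KEY_ID) {
                    throw new IllegalStateException(
                            "New release signed by an unknown key: " + Long.toHexString(keyId));
                }
                System.out.println("Signature key matches the pinned key.");
            }
        }
    }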
That approach is still vulnerable to what amounts to a MITM attack: a bad actor can simply provide the legitimate versions for whatever period he deems necessary to build trust.
Isn't the idea that you form your own trust network? For example meeting people in-person at conferences and signing each other's keys, and extending trust that way?
Conceptually you could do that, if you were willing to only use dependencies from people you trusted that closely. That can only be a very, very tiny minority of people using Maven.
> if you were willing to only use dependencies from people you trusted that closely
No, that's what makes it a network. There's a transitive closure, so you can also use dependencies from someone trusted by someone you trust, or by someone trusted by someone trusted by someone you trust, and so on.
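To make the transitive bit concrete, here's a toy sketch: the trusted set is whatever is reachable from your own key by following "has signed" edges. (Real GPG trust models are fussier - they cap the chain depth and distinguish marginal from full trust - but the closure is the core idea.)

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    public class WebOfTrust {

        // key -> keys its owner has signed (hypothetical toy graph).
        static final Map<String, Set<String>> SIGNED = Map.of(
                "me",    Set.of("alice"),
                "alice", Set.of("bob"),
                "bob",   Set.of("carol"),
                "carol", Set.of());

        // Everything reachable from `root` by following signatures is treated as trusted.
        static Set<String> trustedBy(String root) {
            Set<String> trusted = new HashSet<>();
            Deque<String> queue = new ArrayDeque<>(List.of(root));
            while (!queue.isEmpty()) {
                String key = queue.poll();
                if (trusted.add(key)) {
                    queue.addAll(SIGNED.getOrDefault(key, Set.of()));
                }
            }
            return trusted;
        }

        public static void main(String[] args) {
            // Prints a set containing me, alice, bob and carol (order unspecified).
            System.out.println(trustedBy("me"));
        }
    }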
That's orthogonal to the PGP signing though. The group-id verification checks that you control the domain for the Maven group id and doesn't need any PGP keys (it's more akin to the process of verifying DNS records for SSL certificates).
The keyserver is intentionally designed to be a write-only (no delete/edit) database where anyone can upload their keys. GPG's target market contains people living under highly oppressive governments, where getting a 'please verify your identity' email could get you killed/etc. Given this requirement, it must be possible for anyone to upload a key claiming they are asdf@example.com.
Now it turns out having a globally accessible, unauthenticated, write-only database is a really stupid idea, and there was a piece submitted on HN within the past year about this. Similarly, at some point someone uploaded ~50 GPG keys for Linus Torvalds (or something like that) as a demonstration of this.
The reason this is OK is that you have out-of-band verification. When I upload my key for asdf@example.com, I will also put a page on example.com saying that my public key has fingerprint XXXXX. This is the exact same thing that happens with SSH; the first time you connect to another computer it shows you the key fingerprint and asks you whether it is as expected, as discussed in another top level comment here.
So if you want to verify a signature on a file I sent you and you find 50 public keys on the keyserver claiming to be me, you can still very easily figure out which key is mine by looking at the keys' fingerprints. Or instead of publishing it on the WWW I can tell you the fingerprint with another more private/secure method, such as a piece of paper at a dead drop. Or the network of trust can be used: you already have Alice's public key on your computer, and she can sign my key on the keyserver. So when you find 50 keys on the keyserver claiming to be for asdf@example.com, you know the one signed by Alice is the real one.
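Mechanically, picking the genuine key out of a pile of keyserver results is just a comparison against the fingerprint you obtained out of band - a sketch, with invented fingerprints:

    import java.util.List;
    import java.util.Locale;
    import java.util.Optional;

    public class FingerprintCheck {

        // Fingerprint published out of band, e.g. on example.com or handed over on
        // paper (hypothetical value).
        static final String PUBLISHED =
                "B8F2 1D6C 1A3E 9F04 7C21  5A9B 0D4E 66F1 23AB CDEF";

        // Given the fingerprints of all keyserver results for asdf@example.com,
        // keep only the one matching the out-of-band fingerprint.
        static Optional<String> pickGenuineKey(List<String> candidateFingerprints) {
            String wanted = normalize(PUBLISHED);
            return candidateFingerprints.stream()
                    .filter(fp -> normalize(fp).equals(wanted))
                    .findFirst();
        }

        // Fingerprints are often displayed with spaces and mixed case; compare canonically.
        private static String normalize(String fp) {
            return fp.replaceAll("\\s+", "").toUpperCase(Locale.ROOT);
        }

        public static void main(String[] args) {
            List<String> fromKeyserver = List.of(
                    "DEAD BEEF 0000 1111 2222  3333 4444 5555 6666 7777",
                    "B8F21D6C1A3E9F047C215A9B0D4E66F123ABCDEF");
            System.out.println(pickGenuineKey(fromKeyserver)); // matches the second one
        }
    }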
The word you're looking for isn't "write-only". It's "immutable". And it's not the immutability that's responsible for the attack you're discussing, but the lack of a rooted chain of trust. You seem to be imagining a world in which the keyserver is responsible for authenticating the keys uploaded to it. I don't want to give the keyserver people that kind of power or responsibility.
Interesting: I wouldn't describe an "append-only" database as "immutable" (after all, you are mutating it by adding things to it), but it appears that common parlance does refer to append-only databases as immutable, perhaps as shorthand for immutable-entry databases.
Annoys me disproportionately. But the people have spoken.
Yeah, it's not like you would call an in-memory B-tree immutable if you could only add more nodes but not change existing ones. Clearly other people are wrong and we should stand up for what is right!
It's readable and appendable, but not updatable. That doesn't mean immutable; with versions or timestamps, you can create mutable semantics on top of an appendable store.
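A toy sketch of what I mean - the log itself only ever grows, but readers get update semantics by taking the latest version per key (not how any real keyserver works, just the idea):

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;
    import java.util.Optional;

    public class AppendOnlyStore {

        record Entry(String key, long version, String value) {}

        // The log is append-only: entries are never edited or deleted.
        private final List<Entry> log = new ArrayList<>();
        private long nextVersion = 0;

        public void put(String key, String value) {
            log.add(new Entry(key, nextVersion++, value));
        }

        // "Update" semantics fall out of reading the latest version for a key.
        public Optional<String> get(String key) {
            return log.stream()
                    .filter(e -> e.key().equals(key))
                    .max(Comparator.comparingLong(Entry::version))
                    .map(Entry::value);
        }

        public static void main(String[] args) {
            AppendOnlyStore store = new AppendOnlyStore();
            store.put("asdf@example.com", "key-v1");
            store.put("asdf@example.com", "key-v2"); // looks like an update to readers
            System.out.println(store.get("asdf@example.com")); // Optional[key-v2]
        }
    }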
The best verification would be a recorded video of the person holding a sheet of paper with the printed hash and reading the hash aloud. Of course this video is supposed to be served over HTTPS. So if you have any idea about that person (e.g. you saw him at a conference), you can confirm his identity.
It might be possible to edit the video to change the picture or to fake the audio, but that's about as much as one could do without personal contact.