Signing .jars is not worth the effort (quanttype.net)
64 points by nurettin on July 26, 2020 | 90 comments


A solution I am working on for these sorts of software supply chain attacks uses transparency logs. Take a look here (warning, alpha): https://github.com/transparencylog/btget

Essentially the binary transparency log acts like a notary service. It appends the cryptographic digest of a URL to an append-only log. The append-only log cannot be rewound without detection by clients, and clients verify the contents they receive against the corresponding entry in the log.
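
In shell terms, the client-side check boils down to something like the sketch below (the URL and digest are placeholders, and btget's actual CLI may differ):

    # Fetch the artifact, then refuse it unless its digest matches the
    # entry recorded in the append-only log (digest below is a placeholder).
    $ curl -sLo release.tar.gz https://example.com/release.tar.gz
    $ echo "<digest-from-log>  release.tar.gz" | sha256sum -c -
    release.tar.gz: OK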

Two nice properties for the issues outlined in this post:

1. Hosting providers (or third parties) can add this sort of log without needing developers to do things like manage key material or 2FA tokens

2. It is complementary to good upload authentication or signing systems

In fact, Go is using transparency logs for source libraries. I think similar systems should be used for binaries in all other language ecosystems as well.

I would love to see a world where major source code distributions start running transparency log servers as part of their own file-hosting integrity protections. Imagine Maven, GitHub, npm, etc. all running these sorts of services for all uploads and using them by default in their clients. Users would have additional confidence that the v0.4.3 package they downloaded to their system two weeks ago is the same one their colleague or the CI/CD system got.


Some time ago I made a fun hack where I abuse the Go transparency log to store immutable url->fingerprint mappings of arbitrary urls:

https://github.com/mkmik/getsum

Example:

    $ getsum https://github.com/bazelbuild/bazel/releases/download/3.4.1/bazel-3.4.1-installer-linux-x86_64.sh
    9808adad931ac652e8ff5022a74507c532250c2091d21d6aebc7064573669cc5
Now, if somebody ever replaces that file on GitHub, getsum will fail with an error.

(I was too lazy to make getsum actually download the file if the checksum matches, but that should be trivial; for now just do "getsum $url && wget $url").

This works by creating synthetic Go modules that embed "facts" about URLs inside generated Go code, which then gets downloaded by getsum and parsed using the stdlib "go/parser" package. Since all Go modules are recorded in the Go "sumdb" public transparency log, once such a module has been generated it cannot ever change. (Getsum uses a constant "version" for the synthetic packages, so the facts are immutable.)

I'm confident this works because it piggy-backs on the stable production transparency log infrastructure of a high-profile project managed by a high-profile company. On the other hand, the Go transparency log admins could ban my "getsum.pub" domain if the generated traffic bothers them. This was just a quick way for me to show some colleagues the potential of transparency logs, and of course to have fun.


> (I was too lazy to make getsum actually download the file if the checksum matches, but that should be trivial; for now just do "getsum $url && wget $url").

Please don't. Instead, check the hash of the local file. A malicious server could serve a different file to getsum and to wget (bonus: you won't have to download it twice).


Yes, that's indeed a problem that needs to be dealt with.

That said, technically the getsum client doesn't download the file; it only fetches "$url.sha256" if present, or a "SHA256SUMS" file in the same directory, and compares that with the digest stored in the transparency log.

It follows a common pattern on release sites (such as major projects on GitHub).

In other words, getsum only ensures that the checksum file itself is immutable, and the checksum file is then used to ensure that the bigger file hasn't changed either. The reason is that getsum has a server-side component (hosted on getsum.pub) that serves the Go modules fetched by the Go sumdb.

Thus, the correct instructions are:

    $ (echo -n $(getsum https://github.com/bazelbuild/bazel/releases/download/3.4.1/bazel-3.4.1-installer-linux-x86_64.sh); \
       echo " bazel-3.4.1-installer-linux-x86_64.sh") > bazel-3.4.1-installer-linux-x86_64.sh.sha256 && \
      wget https://github.com/bazelbuild/bazel/releases/download/3.4.1/bazel-3.4.1-installer-linux-x86_64.sh && \
      sha256sum -c bazel-3.4.1-installer-linux-x86_64.sh.sha256
    
Big-file downloaders tend to be more complicated than expected: progress bars, resumable downloads, etc. Perhaps, instead of downloading the file, getsum could verify it? E.g.:

    $ wget https://github.com/bazelbuild/bazel/releases/download/3.4.1/bazel-3.4.1-installer-linux-x86_64.sh && \
      getsum -c bazel-3.4.1-installer-linux-x86_64.sh https://github.com/bazelbuild/bazel/releases/download/3.4.1/bazel-3.4.1-installer-linux-x86_64.sh
EDIT: just implemented the "-c" flag described above


Here's an aside, a funny thing about jar signing on Blu-rays. The Blu-ray spec only really enforces jar certificate verification for encrypted Blu-rays (i.e. AACS). Non-AACS (i.e. decrypted) Blu-rays ignore the certificate chain, so a self-signed certificate that uses its own key to sign the jars works fine per the spec.

The only reason people can take Blu-rays (with Java menus) and make them reliably playable when decrypted (in real players, ignoring things like VLC which can easily ignore the cert chain) is that the Blu-ray spec tells players to explicitly ignore the certificate chain. This enables decryption programs to rewrite the Java bytecode (and re-sign the jars with a self-signed certificate), removing any protection mechanisms (think screenpass) that live in the Java code itself.
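
For concreteness, the re-signing step is roughly the standard jarsigner flow with a self-signed key (keystore, alias, and jar names below are placeholders):

    # Generate a self-signed keypair, then re-sign the rewritten BD-J jar with it.
    $ keytool -genkeypair -keystore selfsigned.jks -alias bdj -keyalg RSA -validity 3650
    $ jarsigner -keystore selfsigned.jks 00000.jar bdj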

While I sort of understand the reason for it (you want to be able to test discs before mastering, without encryption and without the need for signing), it really killed what could have been an effective security model. Basically the standard security-vs-convenience dilemma.


I half suspect they do things like that on purpose, because the alternative is to create demand for players that disregard the restrictions, and by that point they're also going to disregard all of the other ones and let the user fast forward through commercials etc.

So if they let the thing they don't want people to do be hard but not too hard, it satisfies demand from the people determined to do it instead of creating a market to solve the problem which would make it more convenient.


I don't know. Perhaps, but they've shown a large willingness to force hardware players to enforce security mechanisms that can't really be avoided nicely (e.g. Cinavia; there are ways around it, but very few on disc-based players).


I've thought this about Maven Central before (not familiar with Leiningen, but it seems it's trying to do a similar thing).

Maven Central has PGP signatures for all uploaded artifacts - but they are in fact useless, because anyone can create a PGP key that claims to be (say) maven-releases@google.com and upload that to a keyserver. There appears to be no mechanism by which a consumer can know whether the signing key should be trusted, so an attacker uploading a malicious artifact can easily upload a malicious signature with it.
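
To illustrate how weak this is, nothing stops anyone from doing roughly the following (keyserver and key ID are illustrative; please don't actually do this):

    # Create a key claiming whatever identity you like, then publish it.
    $ gpg --quick-generate-key "Maven Releases <maven-releases@google.com>"
    $ gpg --keyserver hkps://keyserver.ubuntu.com --send-keys <KEYID>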

I'm not going to argue with any of the criticisms of PGP in the linked article, but they don't seem hugely relevant to the problem here; the fundamental trust problem is much deeper than "GPG has janky code" (and it's not like there aren't any other options at all).


Maven Central will let you sign artifacts with any published key you like, but one thing you can do in theory is verify that new releases are signed by the same key as a known-good release. I am not aware of anyone actually doing this, though.
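
One way to do that check by hand, as a sketch (the coordinates are placeholders; Central publishes a detached .asc next to each artifact):

    # Download the artifact and its signature, then see which key signed it.
    $ wget -q https://repo1.maven.org/maven2/com/example/lib/1.2.4/lib-1.2.4.jar{,.asc}
    $ gpg --verify lib-1.2.4.jar.asc lib-1.2.4.jar
    # Compare the reported key ID/fingerprint against the one you recorded
    # for the last release you trusted.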


That approach is still vulnerable to what amounts to a MITM attack: a bad actor can simply provide the legitimate versions for whatever period he deems required to build trust.


Isn't the idea that you form your own trust network? For example meeting people in-person at conferences and signing each other's keys, and extending trust that way?


Conceptually you could do that, if you were willing to only use dependencies from people you trusted that closely. That can only be a very, very tiny minority of people using Maven.


> if you were willing to only use dependencies from people you trusted that closely

No that's what makes it a network. There's a transitive closure, so you can also use dependencies from someone trusted by someone you trust, or by someone trusted by someone trusted by someone you trust, and so on.


But then it only takes one person trusting a bad guy to bring down the network.


There's a human in the loop when you get your Maven Central account that lets you publish your jar under some prefix.


That's orthogonal to the PGP signing though. That verifies that you control the domain for the Maven group id but doesn't need any PGP keys (it's more akin to the process of verifying DNS records for SSL certificates).


It's been a while. You don't have your PGP key countersigned? I don't recall.


Mh, I initially thought your answer was ridiculous. Why would someone enforce/use a PGP signature with public keyservers without a proper challenge?

I know the 'web of trust' from key-signing events I've attended, but apparently there is no real challenge involved in uploading your key.

That's just really, really shitty.


The keyserver is intentionally designed to be a write-only (no delete/edit) database where anyone can upload their keys. GPG's target market contains people living under highly oppressive governments, where getting a 'please verify your identity' email could get you killed/etc. Given this requirement, it must be possible for anyone to upload a key claiming they are asdf@example.com.

Now it turns out having a globally accessible, unauthenticated, write-only database is a really stupid idea, and there was a piece submitted on HN within the past year about this. Similarly, at some point someone uploaded ~50 GPG keys for Linus Torvalds (or something like that) as another demonstration of the problem.

The reason this is OK is that you have out-of-band verification. When I upload my key for asdf@example.com, I will also put a page on example.com saying that my public key has fingerprint XXXXX. This is the exact same thing that happens with SSH; the first time you connect to another computer it shows you the key fingerprint and asks you whether it is as expected, as discussed in another top level comment here.

So if you want to verify a signature on a file I sent you and you find 50 public keys on the keyserver claiming to be me, you can still very easily figure out which key is mine by looking at the keys' fingerprints. Or instead of publishing it on the WWW I can tell you the fingerprint with another more private/secure method, such as a piece of paper at a dead drop. Or the network of trust can be used: you already have Alice's public key on your computer, and she can sign my key on the keyserver. So when you find 50 keys on the keyserver claiming to be for asdf@example.com, you know the one signed by Alice is the real one.
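
Concretely, the out-of-band check is just a fingerprint comparison (the key ID is a placeholder):

    # Import whichever key the keyserver hands you, then compare its
    # fingerprint against the one published out of band (web page, paper, etc.).
    $ gpg --recv-keys <KEYID>
    $ gpg --fingerprint asdf@example.com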


The word you're looking for isn't "write-only". It's "immutable". And it's not the immutability that's responsible for the attack you're discussing, but the lack of a rooted chain of trust. You seem to be imagining a world in which the keyserver is responsible for authenticating the keys uploaded to it. I don't want to give the keyserver people that kind of power or responsibility.


Interesting: I wouldn't describe an "append-only" database as "immutable" (after all, you are mutating it by adding things to it), but it appears that common parlance does refer to append-only databases as immutable, perhaps as shorthand for immutable-entry databases.

Annoys me disproportionately. But the people have spoken.


Yeah, it's not like you would call an in-memory B-tree immutable if you could only add more nodes but not change existing ones. Clearly other people are wrong and we should stand up for what is right!


It's readable and appendable, but not updatable. That doesn't mean immutable; with versions or timestamps, you can create mutable semantics on top of an appendable store.


The best verification would be a recorded video of a person holding a sheet of paper with the printed hash, pronouncing the hash. Of course this video is supposed to be served over HTTPS. So if you have any idea about that person (e.g. you saw them at a conference), you can confirm their identity.

It might be possible to edit the video to change the picture or fake the audio, but that's about as much as one could do without personal contact.


My specific point was made in the context of the Maven Central repository.

It gives a false sense of security when you have to have a signed PGP key but there is no challenge establishing trust.


> As far as I know, nobody ever verifies the signatures in a systematic way.

If you'll forgive a semi-relevant ramble about SSH:

The situation seems similar with SSH. As far as I can tell, just about everyone goes with the approach of trust-blindly-on-first-use. I've made a habit of manually checking the public key when I SSH into a new EC2 instance using PuTTY. It is not obvious how to do this. Existing tooling just isn't set up with it in mind.

The fingerprint reported in the EC2 system log uses a different hash function than PuTTY uses. (This is PuTTY's fault. It's behind OpenSSH.) I was able to get an answer on ServerFault, but I was surprised no-one had asked the question before. [0]

With some lesser-known distros it seems to be outright impossible, as the fingerprint is not written to the system log at all. Presumably the distro maintainers never check SSH fingerprints.

Still less relevantly: the trust-blindly-on-first-use antipattern is known euphemistically as TOFU, for trust on first use. This term can also refer to the case where the public key is manually verified by the user on first use. [1] Very unhelpful for a term of art to be so ambiguous.

[0] https://serverfault.com/q/996828/

[1] https://en.wikipedia.org/wiki/Trust_on_first_use


SSH certificates avoid this issue and are used in real life. We make them with HashiCorp Vault; it works well and is really convenient.


Have to admit I'm not very familiar with SSH certs.

I'm generally just spinning up a fresh Ubuntu instance on EC2, so I have an entirely 'vanilla' SSH set-up. Small scale, so I don't especially mind the ritual of verifying the fingerprint. It's annoying, though, that it takes a few minutes for the EC2 system log to be populated with the 'true value' of the fingerprint.


Huh, I had no idea there was even such a thing as SSH certificates!

Presuming you mean X.509 certificates, is this part of the standard spec (e.g. should work with OpenSSH etc)? Do you know if it works with PuTTY?


Here is a guide I've found useful on the subject: https://engineering.fb.com/security/scalable-and-secure-acce....


They are not based on X.509. I recommend taking a look at the CERTIFICATES section of the ssh-keygen man page, because I am unable to find a good reference on the web right now.


Big companies that care about security use short-lived certificates for SSH, not simple private/public keys.


> I've made a habit of manually checking the public-key when I SSH into a new EC2 instance using PuTTY. It is not obvious how to do this. Existing tooling just isn't set up with it in mind.

This is not necessarily useful if you use key auth. The attacker can’t perform a useful MITM attack on the connection no matter what key you accept from the server. At best the attacker could let you log into their server, but presumably you’d notice that you’re logged into the wrong server.

I guess there are a few corner cases where this might help, such as blind scp uploads.


Surely the attacker can serve up an SSH server implementation that forwards the connection to the intended server while recording and/or modifying the decrypted traffic as desired? To the user everything would appear as normal (except perhaps for some additional latency) but their data would be exposed to the attacker.

If there were no way to perform a MITM attack because you would notice that you were connected to the wrong server, the whole system of keys and fingerprints would be pointless. So would the CA system for HTTPS.


> Surely the attacker can serve up an SSH server implementation that forwards the connection to the intended server while recording and/or modifying the decrypted traffic as desired? To the user everything would appear as normal (except perhaps for some additional latency) but their data would be exposed to the attacker

The attacker can’t do this because the attacker does not have your private key. If the attacker proxies your authentication to the remote server, they’ll only be back to square one as the connection is now encrypted with keys the attacker does not hold.

This is only possible if you use password auth.

> If there were no way to perform a MITM attack because you would notice that you were connected to the wrong server, the whole system of keys and fingerprints would be pointless. So would the CA system for HTTPS.

It’s much easier for an attacker to create a malicious copy of a website they have access to than a server they don’t.

The phishing opportunities offered by a SSH mitm attack like this are rather limited.


> The attacker can’t perform an useful MITM attack on the connection no matter what key you accept from the server ... The attacker can’t do this because the attacker does not have your private key.

Why does the attacker need my private key? If he has his own private key and I accept it then he can successfully perform a MITM attack and proxy the connection to the real server.

If not, then what is the purpose of key fingerprints and the web CA system?

There is no need to involve "malicious copies" of servers or sites - you just have to intercept the data in transit between the user and the real server. If the client connects to the attacker (accepting his key) and the attacker connects to the real server, both client and server believe they are talking to each other but they are actually both talking to the attacker, who passes the data on. For HTTPS there is a handy tool[1] available to do this. If there isn't already one for SSH, the same principle still applies.

[1] https://mitmproxy.org/


> Why does the attacker need my private key?

To authenticate as the user on the target server.

> If he has his own private key and I accept it then he can successfully perform a MITM attack and proxy the connection to the real server.

That's not the case.

If you accept the public key offered by the attacker's machine-in-the-middle, then half the attacker's work is done, as they've tricked you into connecting to their machine. The other half of their work still remains though: they need to connect to the target server, impersonating the user.

They can't do this, as they don't have your private key. It doesn't do them any good to try to generate their own keypair, as it won't be recognised by the server. (Roughly equivalent to guessing at a password.)

I'm not sure if I've been very clear. There are plenty of explanations of public key crypto out there better than I can provide here.

It's a major advantage of using public keys to authenticate users, rather than passwords. (Servers are always authenticated using a public key.) If we were using passwords, then in the situation we've described above, it would be possible for the attacker to connect to the target server, impersonating us, and enabling them to set up a man-in-the-middle. The attacker just needs to capture our password when we send it to them. Of course, as they now have our password, they might do plenty else besides.

> what is the purpose of key fingerprints and the web CA system?

It's to ensure the target machine really is the expected target machine.

As you say, the web uses certificate authorities. SSH supports a similar approach, but the 'conventional' SSH solution is to manually check the server's fingerprint (which has the effect of checking the server's public key).

> For HTTPS there is a handy tool[1] available to do this. If there isn't already one for SSH, the same principle still applies.

That's just a web proxy, there are many of these available. If such a proxy is used maliciously to attempt to intercept an HTTPS connection, the connection will fail the browser's certificate-authority check, and will be terminated. (At the very least, the user will be shown a scary warning popup, but sensible modern browsers may just refuse the connection entirely, as users tend to unthinkingly click through such popups.)

These proxies can only work with HTTPS if the (self-signed) certificate used by the proxy is added to the browser's list of trusted certificates. [0]

[0] https://docs.mitmproxy.org/stable/overview-getting-started/


Ah, good point, thanks. I misread the first sentence of the comment I was responding to as being about the server's public key, so I was assuming password authentication.

With pubkey authentication (or rarely-seen-in-practice authentication with client SSL certs on the web) the attacker couldn't impersonate the client.


Right, but it's still pretty disastrous if the attacker succeeds in tricking you into connecting to their machine, so you still need to verify the server's fingerprint.


> This is only possible if you use password auth.

It's not possible with password auth, either.

If you check the server's SSH fingerprint on first connection, you're safe from man-in-the-middle attacks, regardless of how you authenticate the user. If you fail to check the server's fingerprint (the way most people use SSH), then you haven't verified the identity of the server, and again this is regardless of how you authenticate the user.

There are other advantages to using a public key to authenticate the user, though. [0]

[0] https://security.stackexchange.com/a/69408


> It's not possible with password auth, either

It is, if you screw up with TOFU.

> If you fail to check the server's fingerprint (the way most people use SSH), then you haven't verified the identity of the server, and again this is regardless of how you authenticate the user.

This has different ramifications depending on how you authenticate.


> It is, if you screw up with TOFU.

If you simply forget to check the fingerprint, you've got a problem, yes, and this is a risk if it's expected to be done manually. Most users simply can't be bothered, despite that their SSH security depends on it.

As I said though, if you screw up the TOFU check, it's game-over either way: you've failed to verify the identity of the server. If you perform the TOFU check properly, then you have verified the identity of the server. Again, this is regardless of whether you use a password or a public key to authenticate the user.

> This has different ramifications depending on how you authenticate.

Right. With public key authentication, the server is never sent the user's secret (their private key), unlike with password authentication, where the server is entrusted with the user's secret (their password).

This could of course be significant if the attacker is able to capture the user's password, perhaps for the purposes of setting up a man-in-the-middle attack, but I think it's reasonable to treat it as game-over if an attacker has tricked a user into connecting to a machine under the attacker's control. HTTPS works this way; we should apply the same thinking to SSH.


> The attacker can’t perform an useful MiTM attack on the connection no matter what key you accept from the server.

True, thanks to public key crypto.

> At best the attacker could let you log into their server

This is another way of saying the machine you end up trusting could be under the control of an attacker. So yes, that's a serious concern.

> presumably you’d notice that you’re logged into the wrong server

Not if it's a fresh Ubuntu instance. Even if not, I'd expect SSH security to answer that definitively, on principle. SSH shouldn't be any sloppier than HTTPS.


Maybe I'm being naive here, but if you spin up a new instance and immediately SSH into that via the given public IP, isn't the likelihood that someone could have meaningfully injected themselves between you and that previously unknown IP address vanishingly small?


Just monitor and MITM all SSH connections. It’s really easy.


Right. The attacker might have compromised my local network, for instance.


What's absolutely incredibly frustrating is that it's absurdly common to ignore SSH fingerprints even if they're staring you in the face. I just spun up a Fedora 32 cloud image on RamNode over the weekend, and the default image oh-so-helpfully displays the SSH fingerprints as part of the MOTD so you can verify them just by looking at the login prompt on the console. Except apparently the script that generates that runs before the SSH keys are regenerated on the first boot of the image, so the fingerprints displayed don't match the keys on disk. Either no one else has noticed this bug yet or nobody has been bothered enough to fix it.

There is a solution though: SSHFP records for the host. I set them up after catching that issue, just because of how frustrating blindly trusting on first use is. Now I can SSH to my servers without anything for them being in known_hosts: the client checks for SSHFP records and verifies the fingerprint, and DNSSEC protects the SSHFP records.
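
For anyone curious, the moving parts are small (hostname is a placeholder; the zone has to be DNSSEC-signed for the client to trust the records):

    # On the host: print SSHFP resource records to paste into the DNS zone.
    $ ssh-keygen -r myhost.example.com
    # On the client: let ssh check the records instead of prompting.
    $ ssh -o VerifyHostKeyDNS=yes myhost.example.com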


SSHFP requires you to DNSSEC-sign your zones, which virtually nobody in the technology industry does, for a variety of reasons, some involving security.

A more straightforward solution that doesn't require you to hitch your wagon to a moribund infrastructure overhaul is to use SSH certificates. SSH certificate infrastructures have other benefits beyond resolving the introduction problem; for instance, they're easier to manage than keys once set up, and, perhaps most importantly, they make it possible to issue short-lifespan keys based on single signon, which is much safer than having everyone register a canonical personal key.
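
A rough sketch of the host-certificate side, since it's less well known (names, principals, and validity below are placeholders):

    # Create a CA keypair, sign the server's host key with it, and let
    # clients trust the CA once instead of trusting each host key.
    $ ssh-keygen -f host_ca
    $ ssh-keygen -s host_ca -h -I web01 -n web01.example.com -V +52w \
          /etc/ssh/ssh_host_ed25519_key.pub
    # Point HostCertificate in sshd_config at the resulting *-cert.pub, and
    # add to the client's known_hosts:
    #   @cert-authority *.example.com <contents of host_ca.pub>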


I would recommend using the original tooling.

The ssh CLI shows the fingerprint, and explicitly asks:

  $ ssh example.com
  The authenticity of host 'example.com (1.2.3.4)' can't be established.
  RSA key fingerprint is SHA256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.
  Are you sure you want to continue connecting (yes/no/[fingerprint])?


That's roughly what I've ended up doing. [0]

Both OpenSSH and PuTTY ask you to confirm the fingerprint, but PuTTY's fingerprint scheme is MD5/hex, whereas modern versions of the OpenSSH client use SHA256/Base64.

[0] https://serverfault.com/questions/996828/#comment1296514_996...


Here's an approach that I'm using with small VPS providers. I use web remote access to install Linux (from a pristine distribution ISO). That web remote access uses HTTPS, so I can be sure that all input/output is E2E protected from the provider to my computer. After I install Linux, I boot it for the first time and access it using that HTTPS remote console. Then I run something like ssh-keygen -l -f /etc/ssh/ssh_host_ed2519.pub -E md5 (I might be wrong about this command, check the man pages before use; also it does not require root, so log in as an ordinary user) and I can see the MD5 hash of the server's SSH public key. I write it down (or rather screenshot it), connect with PuTTY, and verify it. I believe that this method provides as much chain of trust as possible.


That's the approach I use with Linode. Unlike EC2 there's no browser-viewable system log that shows the public key, but it can be retrieved using a web-based terminal session.

To find the fingerprint in PuTTY's obsolete format, the command I use is equivalent to the one you've given. It was provided in the ServerFault answer I linked:

     ssh-keygen -l -E md5 -f /etc/ssh/ssh_host_ed25519_key.pub
I don't like comparing fingerprints by eye, but PuTTY doesn't make it easy to do otherwise. This issue doesn't arise with PowerShell's OpenSSH client, where I can just copy the fingerprint (in the new format) to the clipboard. (It lacks features like mouse support though, so I prefer to use PuTTY.)

To find the fingerprint in the new format for OpenSSH, I believe the command is:

    ssh-keygen -l -f /etc/ssh/ssh_host_ed25519_key.pub
> I believe that this method provides as much chain of trust as possible.

Yes, can't do better than that. If your HTTPS session with your VPS provider has been compromised, it's all over anyway.


Thank you for asking serverfault question and writing about it here.

When I started to work with EC2 and SSH a few years ago I encountered this problem. However, I was overwhelmed with the stuff I had to do for the actual business and had no energy to investigate.

The next question is: how do we protect against this attack without waiting a few minutes and making all those clicks to find the fingerprints every time we launch an instance?


Someone mentioned SSH certificates. My understanding is that it's the 'proper' solution, but is more work to set-up. Whether it's worth it will presumably be a question of scale.


The instance itself needs to get its SSH host key signed, or upload its fingerprints somewhere, on first boot. This is easy enough to write, but making it any more secure than plain old TOFU is tricky.
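
The "upload its fingerprints somewhere" part is the easy bit; a first-boot hook could be as simple as the sketch below (the endpoint is a placeholder). As the rest of this subthread discusses, the hard part is giving the receiver a reason to trust what gets posted.

    # Post the host key fingerprint to an internal registry on first boot.
    ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub \
      | curl -sS --data-binary @- https://fingerprints.example.internal/register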


> making it any more secure than plain old TOFU is tricky.

To refer again to my complaint about this term: this isn't meaningful. It's not clear whether you mean trust blindly on first use (total failure to cryptographically verify the identity of the server) or manually check SSH fingerprint on first use (secure but inconvenient).

Both get called TOFU. One meaning of the term is a valid means of establishing a secure channel, and the other is cybersecurity negligence.

I'm not just being obtuse here, I really don't know which you mean. Manually checking fingerprints on first use seems like a pretty solid approach to SSH security, if you can trust the client to actually perform the check. Short-lived keys are the obvious improvement that this approach doesn't really support. Is that what you meant?


Where do you get the fingerprints to check against? Three intuitive but wrong answers:

a) By spawning the instance with a script that will post them to some API. Now you are just blindly trusting whatever gets posted to that API. If your provider offers instance identity documents, you could sign the request with that, but an attacker who can MITM within your VPC can presumably impersonate anyone to the instance metadata service. And if MITM within the VPC is not in your threat model, then blind trust (from another host in the VPC) is fine.

b) By having an SSH CA service connect to the instance to grab its public key. The CA is extending the same blind trust as the admin would have in doing it manually.

c) By having generated the host keypair somewhere other than the host. Now you have key material in the wrong places.

I don't think there's such a clear-cut line here. "Verify" just pushes the blind trust somewhere else.

I can think of some potentially better answers, for example write the public key to a block device, detach it from the new host, attach it to a verification service host and read it there. But this is kind of just reinventing networking... why do we trust the cloud provider's SAN more than its SDN?


> Where do you get the fingerprints to check against?

Any serious cloud/VPS provider will offer a way, leveraging HTTPS.

On EC2, the Linux distro writes the SSH fingerprint to the instance's 'system log' which can be securely viewed over the AWS web dashboard.

On Linode, you can use a web-based terminal session (over HTTPS of course) to run a command to show the fingerprint.

I've discussed this in other replies in this thread.

> "Verify" just pushes the blind trust somewhere else.

Not really. If my cloud provider is compromised, it's game-over anyway. Provided there's a secure channel to communicate the fingerprint, and provided we really do check the fingerprint (most people just don't bother), we should be ok.

> write the public key to a block device, detach it from the new host, attach it to a verification service host and read it there

Yep, that's pretty much the Amazon approach!


Caring about MITM inside the VPC already assumes cloud provider compromise.


It's not inside the VPC, I'm connecting from the outside. I agree there's little need to check fingerprints for VPC-internal sessions.


Note that you've introduced the additional measure of "inside a providers VPC". Which indeed is a solution, but not something every environment provides and not everyone uses.


How many organizations are paranoid enough to care about MITM but not enough to have private networking?


Java applets are largely dead at this point but they do still exist in small corners of the IE world. Signing those jars is paramount and the signature is validated by the browser.


"Java Web Start" still exists and there are at least two programs I use from time to time based on that technology. One being a custom thing, the other openstreetmap editor jOSM.


> "Java Web Start" still exists

I'm afraid it's been dead for a while now.

> Java Web Start (JWS) was deprecated in Java 9, and starting with Java 11, Oracle removed JWS from their JDK distributions.

https://openwebstart.com/


In a large portion of the "real world", Java 8 is the latest. Sometimes even because of the JWS deprecation.


And Java Web Start jars had to be signed correctly or else the browser would reject the application. As I recall, META-INF also had to have the correct security metadata or else the browser would complain.


Just because it's deprecated doesn't mean it is unused.

People still use Python 2.7.


It's not deprecated - it was deprecated and now it's completely gone.

Like Python 2.7, you can now only get support for JWS if you pay or if you go through a third-party.

If you're still using it I recommend you get off fast.


> now it's completely gone.

...From newer versions. Old JDK versions have been kept the same, and Java 7 & 8 are still in heavy use.

> you can now only get support for JWS if you pay or if you go through a third-party.

Not everyone needs enterprise support.


> Old JDK versions have been kept the same, and Java 7 & 8 are still in heavy use.

But they aren't getting free security updates.

> Not everyone needs enterprise support.

Run it without security patches? That's suicidal for a system like JWS.

Or do you think there's some option that is still supported but not specifically enterprise and doesn't cost anything? I would take a look at who historically contributes security patches to OpenJDK and think about whether your free vendor really has the expertise you think they do to keep up with security attacks.


> Or do you think there's some option that is still support but not specifically enterprise and doesn't cost anything?

Yes, Amazon backports security updates to Amazon Corretto (a fork of OpenJDK 8 and 11).


> Amazon backports security updates to Amazon Corretto

There's a flaw in your logic there...

JWS is gone in new versions, so there aren't any security updates for it in the newer versions either.

You can't backport a patch which hasn't been written.


> As far as I know, nobody ever verifies the signatures in a systematic way.

I systematically check the signatures on the jars of my dependencies. See e.g. https://github.com/m50d/tierney/blob/master/free/keys.proper... . If these artifacts were signed with different keys, I would notice (my CI builds would fail). There are still points of failure (e.g. if you could subvert the maven-pgpverify-plugin itself), but security is about increasing the cost of attacks.
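
The linked keys file is essentially a pinning map from artifact coordinates to expected signing-key fingerprints. From memory it looks roughly like the sketch below, but treat the exact syntax as an assumption and check the plugin's documentation (the coordinates and fingerprints here are invented):

    # groupId:artifactId = expected PGP key fingerprint (values invented)
    com.example:example-core = 0x1234567890ABCDEF1234567890ABCDEF12345678
    com.example:example-util = 0x1234567890ABCDEF1234567890ABCDEF12345678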

> It’s hard to find the public keys for the library maintainers. Sometimes they upload them on the keyservers, sometimes not.

> There’s no established way of communicating which public keys should be trusted. If there’s a new release and it has been made with a new key, your best bet is to e-mail the maintainer and ask what is up.

> It’s hard to get any security benefits from the signatures in practice.

This is making perfect the enemy of good. Any solution to this problem would have to start with a signature mechanism. The signatures that currently exist make it a little bit harder to fake a library. If you want to make it harder, check signatures on jars you depend on, ask maintainers to publish their keys to keyservers and communicate which keys should be trusted, demand explanations when keys change.

Better to light a candle than curse the darkness. "x is incomplete, therefore it should be replaced by a completely new thing" is a total fallacy - and, strangely, seems to be used only when attacking PGP.


This is why you should use a well-designed system such as The Update Framework (TUF) that aims to make security as usable as possible:

[1] https://www.python.org/dev/peps/pep-0458/

[2] https://theupdateframework.io/


When I work on embedded Linux stuff I sign my packages.

Shipping hardware as opposed to software allows secure deployment of pre-shared keys which can be trusted.

Even if someone hacks our automatic updates server (not too unlikely, it's some shared hosting), devices we have sold won't trust the modified packages, because the 512-bit ECDSA signature won't match the public key they have pre-deployed.


Out of curiosity:

1. Why ECDSA?

2. Why a 512-bit prime for the curve?


1. Only two asymmetric algorithms are widely supported and almost universally recommended: RSA and ECDSA. Compared to RSA, ECC needs smaller keys for the same security.

2. I did that a couple of years ago already; I think it actually was 521 bits, the largest curve recommended by FIPS at the time. The hardware has no relation to FIPS, it's not even _that_ expensive and should contain no secret data. I just saw no reason not to implement the best security available: development time is not affected by the count of these bits.
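
A minimal sketch of that sign/verify flow with openssl, assuming the P-521 curve mentioned above (file names are placeholders; the real build uses whatever the firmware tooling provides):

    # Build side: generate the keypair once, sign each package.
    $ openssl ecparam -name secp521r1 -genkey -noout -out signing.key
    $ openssl ec -in signing.key -pubout -out signing.pub     # pre-deploy this on devices
    $ openssl dgst -sha512 -sign signing.key -out pkg.ipk.sig pkg.ipk
    # Device side: only the pre-deployed public key is needed.
    $ openssl dgst -sha512 -verify signing.pub -signature pkg.ipk.sig pkg.ipk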


https://stackoverflow.com/questions/3307146/verification-of-...

https://issues.apache.org/jira/browse/MNG-6026 Extend the Project Object Model (POM) with trust information (OpenPGP, hash values)

https://issues.apache.org/jira/browse/MNG-5814 Be able to verify the pgp signature of downloaded plugins against a trust configuration


What's with the random shot at PGP at the end? PGP is perfectly fine for this particular application. None of the stuff that the article talks about has anything to do with the particular tool used to sign the releases.



> PGP is bad and needs to go away

how should we sign git commits?


There's nothing special about PGP that makes it good for signing git commits, and plenty of alternatives exist. Signify is a simple tool to do signing, used by OpenBSD: https://news.ycombinator.com/item?id=9708120. Minisign is also an alternative, though it doesn't seem as popular: https://jedisct1.github.io/minisign/. These may not be well integrated into git, but aside from 'everyone already uses and integrates with PGP,' there's no real reason stopping the usage of these other tools.

The two things that PGP does are pretty simple/straightforward. What makes PGP bad is that PGP itself is really complicated, and the way it is integrated into email etc. is also complicated and carries a lot of historical baggage. But if you just want to make a public/private keypair and then use it to sign & encrypt data, that's pretty easy (or as easy as writing any crypto code can get). Signify, Minisign, and Age are clean, simple implementations.
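
For example, signing a release artifact with signify looks roughly like this (key and file names are placeholders; git has no built-in hook for it, so you'd sign a tag's tarball rather than the commit object itself):

    # Generate a keypair, sign a release tarball, verify it.
    $ signify -G -p release.pub -s release.sec
    $ git archive --format=tar.gz -o myproj-1.0.tar.gz v1.0
    $ signify -S -s release.sec -m myproj-1.0.tar.gz     # writes myproj-1.0.tar.gz.sig
    $ signify -V -p release.pub -m myproj-1.0.tar.gz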

Age does encryption: https://news.ycombinator.com/item?id=21895671

Of course this is sort of a https://xkcd.com/927/ situation. PGP is already used by everyone so why switch?


It's possible to sign with TLS certs, just not well-known.

https://stackoverflow.com/questions/50150318/sign-git-commit...


Minisign is designed to be compatible with signify.


The perfect is the enemy of the good.

-- Voltaire

OK, if you're going to downvote this, please tell me how you plan to convince git and GitHub to replace PGP with signify. GitHub in particular has invested significant time building up their PGP support.

Personally, I would be fine with signify, I've used it in the past and I like it, and I think people are right when they say we should move toward more focused, Unix-style cryptographic tools - for greenfield projects.

But that doesn't mean we should abandon all current uses of PGP, particularly when it's working as well as it is with git and GitHub. There's absolutely nothing wrong with it. It does what it's supposed to.

It took a long time to get PGP supported by GitHub, and now you want them to change it?

Edit: If people want to add support to Git for signify, and lobby GitHub to support it, I'd be in favor. But I'm strongly, strongly opposed to removing PGP support.


You might want to check out a couple of blog posts:

https://latacora.micro.blog/2019/07/16/the-pgp-problem.html

https://blog.cryptographyengineering.com/2014/08/13/whats-ma...

And don't complain about downvoting.


I'm fine if you think I'm an idiot (I definitely can be), if you disagree with my stand on PGP, or my remark about downvoting. I'm often even willing to edit/change/correct my comments.

But don't order me around. I'm not your employee, kid, or whoever it is you feel you have a right to speak to like that.


The author made a fundamental mistake of judgement.

So far, what he says amounts to advocacy against digital signing of code as such.

Miikka Koskinen says "digital signing doesn't work, so let's decide to use something even more broken." Just that alone makes me think he has no say on this.


Nah. He said that the current infrastructure for signing JARs is broken, making supplying signed JARs pointless. He didn’t say that this is the way things should be.

He advised, at the end, fixing the system; but it is implicit in his tone that he doesn’t expect anyone to try—it’s been broken this long with nobody caring, so why would that change now?

And yes, he advised relying on other mechanisms for verification, for the time being. Because those mechanisms provide nonzero (if small) security, while the current infrastructure for JAR signing provides zero security.

(Compare: “if calling the police does nothing, at least carry pepper spray.” This advice does not imply that pepper spray is better than effective police response; only better than an ineffective one. It also does not imply that one should not seek police reform. It only suggests what one should do while the condition of “ineffective police response” still holds.)

At the end, he says:

> I’ve written this post in part to be proven wrong. I’m eagerly waiting for posts from y’all about how you do, in fact, systematically verify the signatures.

One would presume that, if he were proven wrong, and the digital signatures of JAR files actually did anything, he would retract this post and post one giving the opposite advice.



