If you didn't trust the closed-source disassembler you use, for whatever reason, you would verify its assembly output, not the actual software. In practice this is often done unintentionally anyway: it's common to run a debugger (e.g., gdb, WinDbg) alongside an annotating disassembler (like IDA). objdump (from GNU binutils) also supports many non-Unix executable formats, including PE (Windows) and Mach-O (macOS/iOS).
For fun, I just compared objdump's disassembly of the entry point function of a downloaded Pokemon Go IPA to Hopper's representation, and, no surprise, they are identical.
This is a good point, which should be brought up more. Although you probably meant key id or key fingerprint, not keyserver ID, which would imply something else.
You're supposed to do additional verification of PGP keys, either through attending key signing parties (who does that in 2018?), checking the signatures of people you already trust, or comparing as much out-of-band information as you can.
It's not terribly hard to create a plausibly trusted keyring from scratch that depends on only 1 of 3 websites being legitimate. For example:
All keys are cross signed as shown by gpg2 --list-signatures.
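For concreteness, a sketch of inspecting fingerprints and cross-signatures in an isolated GnuPG home directory, so nothing touches your real keyring (the name and email are made up; on some systems the binary is gpg2 rather than gpg):

```shell
# Throwaway keyring so this example is safe to run anywhere.
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# Generate a key standing in for a developer's key (no passphrase, for the demo).
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Example Dev <dev@example.com>' default default 0

# Print the full fingerprint; this is the value you compare against every
# out-of-band source you can find (TLS-served website, printed material,
# keyservers, signatures from people you already trust).
gpg --fingerprint dev@example.com

# Signatures on each key (self-signatures and any cross-signatures) show up here.
gpg --list-signatures
```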
If this sounds like a pain in the ass, it's because it is, and GPG could be so much better.
Ironically, if you can't acquire the developer's public signing key, it might be best to install software directly from their website, if no trusted repositories are available. If you can acquire their signing key, it's probably best to not install software directly from their website, in order to avoid selective distribution attacks. Sort of unintuitive.
Linux/BSD distribution mirrors don't control the package signing keys; maintainers do. Similarly, Google can't push out updates for third-party apps (short of fundamentally redesigning the OS in a platform update), because the signing keys are owned by the app developers and the existing OS rejects updates signed with a different key. In both of these situations, the key owners lack the ability to selectively push out signed updates unless they also control the distribution infrastructure.
No, there's no effective difference between those examples, apart from maybe post mortem analysis. It's also a poor method of key discovery, as hueving said.
It doesn't take ridiculous confidence to analyze shell scripts. Of the hundreds of scripts I have read, few were more than 100 lines long. It shouldn't take more than 60 seconds (probably 30 or less) to mentally build a list of all possible operations a short script can perform. Bourne shell scripts don't have much room to hide surprising behavior, and when they do, it immediately stands out. If they are permanently installed and invoked later by other parts of the system, then they may need more probing, but we're talking about installation scripts.
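A sketch of the kind of quick triage I mean; the install script and the pattern list are purely illustrative:

```shell
# A made-up install script standing in for something you just downloaded.
cat > install.sh <<'EOF'
#!/bin/sh
mkdir -p "$HOME/.local/bin"
curl -fsSL https://example.com/tool -o "$HOME/.local/bin/tool"
chmod +x "$HOME/.local/bin/tool"
EOF

# Surface every line that can reach the network, escalate privileges,
# or touch system state; in a ~100-line script this takes seconds.
grep -nE 'curl|wget|eval|sudo|chmod|rm -rf|crontab|/etc/' install.sh
```

Reading the handful of flagged lines in context is usually enough to enumerate everything the script can do; anything obfuscated enough to defeat this stands out on its own.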
.deb and .dmg can be easily extracted. The former is just an `ar` archive containing tarballs, which you can (and should) extract to read the install scripts. (.dmg specifics escape me, since I only dealt with them one time, years ago.)
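A sketch of the .deb case, building a toy package first so it's runnable anywhere with binutils and tar (real packages may use .xz or .zst compression for the inner tarballs instead of .gz):

```shell
# Build a toy .deb: it's just an ar archive of a version marker plus tarballs.
mkdir -p control data
printf '#!/bin/sh\necho "running postinst"\n' > control/postinst
echo '2.0' > debian-binary
tar -czf control.tar.gz -C control .
tar -czf data.tar.gz   -C data .
ar rc example.deb debian-binary control.tar.gz data.tar.gz

# Taking it apart: list the members, pull out the control tarball,
# and read the maintainer script before ever installing anything.
ar t example.deb
ar x example.deb control.tar.gz
tar -xzf control.tar.gz ./postinst
cat postinst
```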
Binary code isn't inscrutable. Some good tools for this, among many others, are IDA, Hopper, and radare2. How long this takes depends on what your goals are, how comprehensive you are, and the program complexity. I don't think I've yet spent years on one project, fortunately, but the months-long efforts, for undoing some once-prominent copyright protection systems, were pretty brutal. Smaller programs have taken me just several hours to appropriately examine.
Proprietary? The backend, maybe, but the Keybase clients are open source. Some of the code is a little rough, and complete API docs would be nice, especially for KBFS, whose documentation is still missing. It's still under heavy development though, so these shortcomings should be understandable. (I personally won't use it much until I can actually develop my own non-reverse-engineered client, but that's just my requirement.)
No, the server implementation is proprietary. Therefore, it's a walled garden that relies entirely on them. Supporting federation would be going above/beyond just releasing the server implementation's source under a permissive license. As it stands today, you have no choice but to rely on their proprietary server implementation, since the clients are useless on their own.
Considering the whole point of end-to-end encryption is to reduce or eliminate necessary trust in the middleman, this seems like a minor but still valid concern. Open sourcing the backend code wouldn't allow you to attest to what's running on the server. If the clients also allowed you to point to a custom server URL, which I would support, then the source availability might matter.
Without the proprietary server backend, you cannot use the clients. It's a walled garden. If keybase goes away for whatever reason, you're stuck. You cannot host it yourself, others cannot host it, and even if they released binaries, you'd have no idea what it is doing with the unencrypted 'metadata'.
I didn’t dispute Keybase being labeled a walled garden. I opposed it being too broadly called proprietary, when only the backend is. And for anyone only using the official keybase servers, that’s irrelevant from a trust perspective, which is the reason people usually (mistakenly) bring up source code availability.
Now I’ll also partially dispute the accusation of it being a walled garden, since walled gardens don’t have open specifications and documented APIs for third-party client implementations.
The backend source code would be good to have, for the prudent reason you pointed out, as well as for private instances, but that’s not enough: you also need client code modifications to allow configuration for custom servers.
About binaries: anyone who thinks source code is required for determining program behavior probably shouldn’t be auditing software in the first place. (Often having just the source code makes it more difficult, not less.)
> And for anyone only using the official keybase servers, that’s irrelevant from a trust perspective
Gosh, not really. They completely control who can use the service, and any information they 'require' to register for the service.
> since walled gardens don’t have open specifications and documented APIs for third-party client implementations.
I'd like to point out that walled gardens will still openly invite folks to join, and give them tools they could reproduce, but give them no way to experience the garden outside the walls, including the tools previously given, which are also useless outside the walls. That's exactly the case with keybase.
> About binaries: anyone who thinks source code is required for determining program behavior probably shouldn’t be auditing software in the first place. (Often having just the source code makes it more difficult, not less.)
Huh. I'm interested in hearing how having source code makes auditing more difficult, since that has not been my experience.
I agree with you that Keybase should release their backend code. My comment about (server-source-code-derived) trust was made in the context of users who would keep using the official keybase.io API servers, which would probably be the vast majority of Keybase users.
It’s not all of the time, or even most of the time, but frequently there are reasons for preferring binaries:
- build systems which are more annoying to set up than just reading the assembly / IL dump directly (ex: Android)
- you might want to reverse and/or edit the binary anyway, e.g. to look at compiler output
- it’s sometimes faster to understand the asm than it is to go over the code, compile it, and compare [non-]matching binary outputs (this is regularly true for smaller programs)
- the tools for analyzing binaries are often more advanced than code tools
"Secured automatically with end-to-end encryption" is a funny way of saying secured automatically with TLS. If some messages are being encrypted on the server, then it's not end-to-end. (I'd also argue that end-to-end encryption can't be meaningfully done in the browser, further reducing its typical security to the lowest common denominator: TLS.)
I'm not exactly sure where that is in the copy, but it is referring to emails between ProtonMail users, not unencrypted mails from outside. It should probably be clarified, but it's tough to tell without context.
The quote is from the front page of protonmail.com, and it's been there since 2015. As the only description of encryption on the front page, it gives the unequivocal impression that all email is end-to-end encrypted.
Regarding email between ProtonMail users: Lavabit once claimed, "Our team of programmers answered with a system so secure that even our administrators can’t read your e-mail," which is very similar to your claim, "even we cannot decrypt and read your emails." Lavabit was then asked to give up its TLS key, evidently to allow impersonation and delivery of malicious JavaScript designed to exfiltrate "non-decryptable" data. ProtonMail users are vulnerable to the same attack if anyone in a conversation ever uses the web interface, or the mobile app if it's just a web view.
In contrast, native SMTP+IMAP clients (with or without E2E) are not typically developed by the email service provider, making an orchestrated compromise much more difficult, and users can benefit by performing actual audits themselves, because their email client hopefully doesn't fetch malleable remote code at runtime.
1. Mobile apps are native, not web views
2. That's not what the TLS key was subpoenaed for; it was a very different system with a set of vulnerabilities we don't have, including a server-side encrypt mode and non-PFS TLS ciphers.
3. Again, if we are part of your threat model, you can run the web client locally and audit it yourself if this is a concern.
I'm glad the mobile apps don't download code, and I really appreciate the correction on Lavabit; ugh, that project was embarrassing. I'm personally not happy with auditing local clients unless I have a mild assurance that other participants are running the same code, at some point, which can't be achieved with the web.
I didn't really follow it very far, but I did notice Dan Harkins quickly got assholier-than-thou. Still, there's no shortage of crypto or general tech talk that is worse.
https://help.apple.com/developer-account/#/dev21218dfd6