"How often do resets happen? Answer: if you're using WhatsApp or Signal, all the freaking time.
With those apps, you throw away the crypto and just start trusting the server: (1) whenever you switch to a new phone; (2) whenever any partner switches to a new phone; (3) when you factory-reset a phone; (4) when any partner factory-resets a phone, (5) whenever you uninstall and reinstall the app, or (6) when any partner uninstalls and reinstalls. If you have just dozens of contacts, resets will affect you every few days."
I guess I don't have "dozens of contacts", but getting a new phone/resetting a phone isn't really that common of a thing in my circle. I feel like for the average user, they wouldn't do this with their phone more than like once every year or two. So I guess if you have like 600 people you talk to on these apps regularly then that works out to daily, but for me at least this isn't that big of a deal and was pretty much understood from the outset.
This isn't really the issue. From further down in the article:
"""
There's a very effective attack here. Let's say Eve wants to break into Alice and Bob's existing conversation, and can get in the middle between them. Alice and Bob have been in contact for years, having long ago TOFU'ed.
Eve simply makes it look to Alice that Bob bought a new phone:
Bob (Eve): Hey Hey
Alice: Yo Bob! Looks like you got new safety numbers.
Bob (Eve): Yeah, I got the iPhone XS, nice phone, I'm really happy with it. Let's exchange safety numbers at RWC 2020. Hey - do you have Caroline's current address? Gonna surprise her while I'm in SF.
Alice: Bad call, Android 4 life! Yeah 555 Cozy Street.
So to call most E2E chat systems TOFU is far too generous. It's more like TADA — Trust After Device Additions. This is a real, not artificial, problem, as it creates an opportunity for malicious introductions into pre-existing conversations. Unlike real TOFU...by the time someone is interested in your TOFU conversation, they can't break in. With TADA, they can.
"""
The quote you linked is relevant because it means that you can't simply ignore this problem; resets are fairly common, common enough that you can't just delete the key-loser's account (for example). However it doesn't have anything to do with the actual security flaw (if we want to call it that, it's really more of a UX / messaging problem) being discussed.
I use KeyBase. Not long ago, one of my contacts deleted all their devices. So I got the skull-and-crossbones warning. Then they messaged me as a new account.
Now this is an anonymous contact. There is no way that we will authenticate in meatspace. So I said "sorry", and pointed out that they should have shared a public GnuPG key with me before triggering a reset.
Very belated edit: That comes across as a bit heartless. I mean, we have no clue who each other are in meatspace. So why is a potential identity change so problematic?
It's just the TFA example:
> Bob (Eve): Hey Hey
> Alice: Yo Bob! Looks like you got new safety numbers.
> Bob (Eve): Yeah, I got the iPhone XS, nice phone, I'm really happy with it. Let's exchange safety numbers at RWC 2020. Hey - do you have Caroline's current address? Gonna surprise her while I'm in SF.
Now, I and my iffy correspondent have never shared anything actionable about third parties. But I do know a little about what he's doing, and have offered advice. And it would be very hard to continue doing that, without the risk of revealing potentially damaging information.
Wait, how did you authenticate them the _first_ time? An anonymous contact who you will not meet in physical space, who had no possible way to share a public GPG key with you prior to your original authentication...?
They're just someone who contacted me, and asked for help with identity management, VMs, VPNs, OPSEC, etc, etc, etc. It's not uncommon, given how much I write online about that stuff. Sometimes, if they want too much hand holding, I request payment (Bitcoin). Sometimes I actually configure and test stuff for them, if they're willing to pay.
But I have no clue who they are, and vice versa. Indeed, I emphasize that I don't want to know anything about what exactly they're up to. They could be hobbyists like me. Or criminals. Or cops. I mean, I have no way to know. So I don't worry about it.
Basically it's TOFU. Ideally, they contact me via GnuPG encrypted email. So who they are is their public key, and an email address. Sometimes, for convenience, we move to Keybase or some other secure channel. But whatever, their identity is their public GnuPG key.
Anonymity can be amusing. I have many other personas, besides Mirimir. None here on HN, of course. And I'm sure that many of my correspondents also have multiple personas.
So a few times, I've wondered whether I was actually having multiple parallel conversations with someone, using different personas. But one just doesn't ask about such things. And at times, my contacts and I do that intentionally, to confuse observers.
> It's more like TADA — Trust After Device Additions.
Indeed. Or, "TA-DAA"! (trust after device addition/alteration, again)
I'm not comfortable with this "secure" device-held key.
Maybe a private key that's passphrase-derived could anchor the trust? So that the device key just becomes a trusted sub-key/cert? With maybe a 30 day validity before renewal. (Renewal UX being: please enter your passphrase to ensure continued integrity...)?
This is the idea of cryptocurrency "brainwallets," and the result seems to be that people are really bad at picking high-entropy passphrases. One of the nice things about cryptocurrency from a cryptography point of view is that it provides a direct, real-world monetary benefit to attacks. So we don't have to wonder if people will pick good passphrases or they'll be brute-forced—the experiment happened, and it wasn't promising.
Using a passphrase alongside some online verification mechanism by a semi-trusted third party (e.g., your initial Signal app generates a secret, encrypts it to your passphrase, and stores it on Signal's servers who only return the encrypted secret to "your" new phone if it has your phone number too) might be enough.
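For what it's worth, here's a minimal sketch of that idea (hypothetical names and parameters, not Signal's or Keybase's actual scheme): derive a wrapping key from the passphrase with a memory-hard KDF, wrap the device's identity secret with it, and let the server store only the wrapped blob.

    # Hypothetical sketch: a passphrase-derived key wraps the identity secret;
    # the server only ever sees the wrapped blob plus salt/nonce.
    import os, hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def wrap_identity_secret(passphrase: bytes, identity_secret: bytes) -> dict:
        salt = os.urandom(16)
        # scrypt is memory-hard, which slows brute force of weak passphrases;
        # a real deployment would tune n/r/p (and maxmem) much more carefully.
        kek = hashlib.scrypt(passphrase, salt=salt, n=2**14, r=8, p=1, dklen=32)
        nonce = os.urandom(12)
        blob = AESGCM(kek).encrypt(nonce, identity_secret, b"identity-backup-v1")
        return {"salt": salt, "nonce": nonce, "blob": blob}  # safe to hand to the server

    def unwrap_identity_secret(passphrase: bytes, backup: dict) -> bytes:
        kek = hashlib.scrypt(passphrase, salt=backup["salt"], n=2**14, r=8, p=1, dklen=32)
        return AESGCM(kek).decrypt(backup["nonce"], backup["blob"], b"identity-backup-v1")

The brainwallet caveat above still applies: everything rests on the passphrase's entropy, which is why pairing it with some server-side gate (phone number check, rate limiting) helps.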
Yeah, "cryptographically secure" and "something you remember" doesn't mix well. 128 bits (say) is a lot of data to memorize. Having it be a real approximation to random doesn't help.
I suppose the pgp/ssh model would work: a secure device holding the master key, plus the ability to back it up (eg a qr code printout in a safe).
An approximation for phones would be a random key locked in the device with a pin, and the ability to transfer and backup keys as you mention.
Other than that - I've not really heard of gpg keys or ssh keys being brute forced - but that may be because by the time you gain access to the (encrypted) private key, you already have access to everything else?
[ed: for example there are 52 cards in a normal deck of cards, meaning each card encodes about 5.7 bits (2^5=32, 2^6=64). You could represent a ~128 bit key as a sequence of ~23 random cards (see the sketch after this note). Or add a few checksum bits and use half a deck (26 cards).
Note, shuffling a deck isn't a great source of randomness, but you could use dice or a computer to generate the key - then map it to a sequence of cards.]
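Rough sketch of that card encoding (base 52, so digits may repeat; you'd need more than one physical deck, or to note duplicates, when laying it out):

    # Treat the 128-bit key as a big integer and write it in base 52,
    # one playing card per digit; log2(52) ~= 5.7 bits, so 23 cards suffice.
    import math, secrets

    RANKS = "A23456789TJQK"
    SUITS = "CDHS"
    DECK = [r + s for s in SUITS for r in RANKS]      # 52 distinct cards
    CARDS_NEEDED = math.ceil(128 / math.log2(52))     # == 23

    def key_to_cards(key: bytes) -> list:
        n = int.from_bytes(key, "big")
        cards = []
        for _ in range(CARDS_NEEDED):
            n, digit = divmod(n, 52)
            cards.append(DECK[digit])
        return cards

    def cards_to_key(cards: list) -> bytes:
        n = 0
        for card in reversed(cards):
            n = n * 52 + DECK.index(card)
        return n.to_bytes(16, "big")

    key = secrets.token_bytes(16)   # generate with a CSPRNG, not by shuffling
    assert cards_to_key(key_to_cards(key)) == key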
128 bits is like, what, ten words chosen randomly from a suitably large dictionary? Is that really prohibitive? Even as random alphanumeric string that doesn't seem beyond muscle memory. Seems like the biggest pain would be to input it consistently on a phone keyboard.
10 random words from a really large dictionary (rough numbers in the sketch below). Narrow down to, say, words of three to five letters, and 128 bits gets tricky. It's easier in the sense that we have a certain active/passive vocabulary - but you'd probably need a mnemonic with the words too.
[ed: input is definitely a challenge. I've considered making various alternative input methods - but in the end 128 bits is a lot of data to input without error. Especially on a phone. Of course on a phone you might have a pin protected secure enclave for example, or use an external nfc key etc.
But then we're back to some way to remember 128 bits - and maybe even without the luxury of "practice" by way of typing it a few times a day...]
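The arithmetic behind the word-count back-and-forth, for concreteness (dictionary sizes here are just examples):

    # Bits per word = log2(dictionary size); words needed = ceil(128 / that).
    import math

    for dict_size in (7776, 2048, 1000, 300):   # 7776 = Diceware, 2048 = BIP-39
        bits_per_word = math.log2(dict_size)
        words = math.ceil(128 / bits_per_word)
        print(f"{dict_size:>5}-word list: {bits_per_word:4.1f} bits/word -> {words} words")

So a Diceware-sized list really does land at 10 words; shrink the list to short, familiar words and you're quickly at 13-16.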
Seems hard to reliably remember 10 words chosen randomly unless you use them very regularly. I don't want to type in a ten-word passphrase every time I open my texting app?
I'm seeing more multi-device verification logic finding its way into various systems (as with many advances in software, it started with a popular game), where the system tattles on every new device that shows up claiming to be you and you have to use an old device to vouch for the new one.
But even if you do that well, you still have a MITM attack at registration time. The surface area is very small here, but a state or even corporate backed individual could certainly afford to perform such an attack. If there is no specific target, hacking an Apple store or a Best Buy to spoof all registration traffic and substitute your own would probably catch a lot of fish in your net.
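The vouching step itself is conceptually small - something like the following sketch (hypothetical structure, not any particular app's wire format): a new device key is accepted only if it carries a signature from a device key the peer already trusts.

    # Old device signs the new device's public key; peers accept the new key
    # only if the signature verifies under a key they already trust.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def vouch(old_device_key: Ed25519PrivateKey, new_pub: Ed25519PublicKey) -> bytes:
        raw = new_pub.public_bytes(Encoding.Raw, PublicFormat.Raw)
        return old_device_key.sign(b"add-device-v1|" + raw)

    def accept_new_device(trusted_pubs, new_pub: Ed25519PublicKey, sig: bytes) -> bool:
        raw = new_pub.public_bytes(Encoding.Raw, PublicFormat.Raw)
        for pub in trusted_pubs:
            try:
                pub.verify(sig, b"add-device-v1|" + raw)
                return True      # vouched for by a device we already trust
            except InvalidSignature:
                continue
        return False             # unvouched key: warn loudly or block sending

As noted above, this does nothing for the very first registration, where a MITM can still slip in.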
That exchange is not realistic; it's easy to ID family or friends by how they converse. At worst you can ask a challenge question. The prompt that a device has changed is perfect to heighten senses for a quick ID. I disagree that physical contact is necessary, as TFA (and industry lore) seem to recommend:
> You must now reestablish identity, and in almost all cases, this means meeting in person and comparing "safety numbers" with every last one of your contacts
Perhaps the alert could be a little more alerty and in red and read something along the lines of "Hey! Your buddy's safety code has changed, make sure they sound normal and aren't acting weird and creepy asking for information"
This is exactly the weakness that phishing exploits all the time - they only need to succeed a small percentage of the time. If they rely on users being vigilant or noticing odd behaviour, they are guaranteed to succeed some percentage of the time. That's a flawed system, not something you should blame on the user.
How many people are realistically going to pester their friends & relatives with "challenge questions"? I bet even the majority of folks in the HN crowd don't/wouldn't.
I was impressed that my non-technical parents challenged my brother when he (or someone claiming to be him) (claimed to have) lost his phone and other possessions travelling.
I think as long as the meta message about the change of key is prominent/scary enough, even non-HNers will be as on edge as necessary.
I'm not saying you have to know their first pet's name. I'm saying it's usually pretty obvious through regular conversation if someone is who they say they are, and worst case, if you're suspicious, you can ask about some shared past event without them knowing that they're being challenged.
"I'd love to chat, but I'm lost in a foreign country. Can you just Venmo me some money so I can get home and then we can talk about $shared_past_event later?"
I could see some people doing this if they notice obviously suspicious behavior on top of the safety number change, but I suppose that depends on the skill of the attacker.
You can NOT verify anything by asking a challenge question. Man-in-the-middle attack means there's a "man in the middle". That is, the attacker can relay challenge question and answer between the contacts it attacks.
The _protocol_ can be arranged to help you do this, but yes just asking a challenge question inline doesn't protect against a MITM.
If Alice and Bob know a good secret (say a 128-bit AES key) then they can definitely just use that secret to protect their communications against the MITM. This only requires updating the protocol to allow such a secret to be introduced. Mallory can continue to relay messages, but they are now passive and don't learn anything beyond traffic analysis or have any ability to tamper with the messages.
But chances are Alice and Bob don't have such a secret (and of course they can't use the potentially MITM'd channel to agree one)
I _think_ if Alice and Bob know a weak human secret they can do something here with a Balanced PAKE. A PAKE lets two parties agree a key based on knowing some relatively weak secret, Mallory can try to guess but only gets one chance each time this is done and failure is detectable by Alice and/or Bob. Again this requires support in the chat protocol itself.
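For the strong-secret case, the check can be as simple as both sides MACing their view of the session fingerprints with the shared key and comparing tags: a MITM who substituted keys sees different fingerprints and can't forge a matching tag without the secret. A sketch (names are made up, and a real protocol would also bind freshness):

    # Each side MACs the (sorted) pair of identity-key fingerprints it observes
    # with the pre-shared 128-bit secret; tags match only if no keys were swapped.
    import hashlib, hmac

    def session_tag(shared_secret: bytes, my_fp: bytes, peer_fp: bytes) -> bytes:
        transcript = b"|".join(sorted([my_fp, peer_fp]))  # same bytes on both sides
        return hmac.new(shared_secret, transcript, hashlib.sha256).digest()

    # Alice and Bob each compute session_tag(...) over the fingerprints they see
    # and compare the results with hmac.compare_digest().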
> I guess I don't have "dozens of contacts", but getting a new phone/resetting a phone isn't really that common of a thing in my circle.
What income bracket is your circle? Just going to reminisce a bit here...
I come from a family business that used a lot of manual labor from the local neighborhood here in south Queens, NYC. I spent a lot of time with guys in late teens and 20's, mainly black and hispanic kids looking to make some decent side money. They came from low income backgrounds and were poorly educated. This leads to a very unstable life where money really is scarce.
They frequently changed phone numbers due to: "crazy" ex, owes money to scary people, owes child/alimony support, owes the government, commits crimes and uses frequent burners, did jail time, or the #1 reason; out of minutes and no money to put on the prepaid plan. They used prepaid plans as they could refill accounts using cash at physical phone stores or check cashing stores because they don't have a computer or bank account. No phone service means no internet. Most of those guys were good people who I got along with just fine. Ignorance really is a terrible thing.
High enough that they can afford an iPhone but low enough that they treat it like a treasure they don't want to have to replace unless they've had it at least five years.
I'm surprised at your suggestion that lower income people would be more likely to go through phones. I could definitely understand this for cheap flip phones on a pre-paid plan, but it was my understanding that Signal and WhatsApp are smartphone-only apps, so I figured lower income people especially wouldn't be flying through phones that cost hundreds of dollars.
As someone who only reluctantly got their first smartphone in the past year (yes, I'm a computer programmer, I dunno), I can say that cheap flip phones don't really exist anymore in the US. You can generally get a cheap smartphone cheaper than a cheap fliphone, in 2019.
For instance, you can get smartphones for well under $50 from Cricket, once every 90 days per account. They are locked to Cricket, but you don't actually have a contract, you can abandon them at any time with no penalty. Yes, the cheapest smartphones are gonna be _really bad_. But in the past year-ish, perfectly usable smartphones available around $100-$160 became a thing (carrier locked but no contract). (Flip phones have become _really bad_ too though. Flip phones available in 2019 have worse battery life and worse performance/UX than flip phones of 5 years ago. like press a button wait 1s+ for something to happen bad UX)
Flip phones are pretty much gone, I don't think they are a thing poor people use anymore. They might be a thing old people use, or that people attempting to have an anonymous disposable phone use. They don't generally make sense any more as a cost conscious thing, plus if you're poor you don't want to be reminded that you're different from "everyone else" everytime you look at your phone. Incredibly cheap (crappy) smart phones are available, and relatively affordable reasonable ones too.
(Plus, not just for poor people these days, but especially if you're poor, your phone is probably your only computer and only internet access).
> I guess I don't have "dozens of contacts", but getting a new phone/resetting a phone isn't really that common of a thing in my circle. I feel like for the average user, they wouldn't do this with their phone more than like once every year or two.
It's not too common with me, and I just let my contacts know I'm about to do it, under my old key, before I switch, so they know a warning is coming. Then I just reverify next time we meet.
I'd probably be more diligent if I actually had something to hide, but I don't, so I'm not.
I've had to do it 4-5 times myself (busted phone, water, upgrade). I think it's possible to export the keys (at very least the message history) to avoid this, but if I need to verify with someone I actually fall back on... Keybase :)
How many people do that? If enough people don't, and resets become normalized, then it's bad. It doesn't matter if you always do backups when switching devices: once resets have been normalized for everyone you talk to, an attacker steals your number, does a reset, and none of your contacts question it.
Signal now uses the Android hardware keystore such that you cannot use a root app like TiBackup to move the Signal data around anymore. Instead you have to use Signal's built-in backup and then place the backup on the new phone and setup Signal again (generating a new ID key).
That's also somewhat recent. I was able to do that with my latest phone, but for my previous phone, the only option was a plain text export that removed any media, caused any group chats to be split and included in 1 on 1 convos, and obviously didn't have the crypto included.
I lost my message history on Whatsapp recently just because I decided to rollback from the Android Q beta. Doesn't always have to be a new piece of hardware.
No you can't, I tried that, it doesn't let you enter the code yourself, it requires permission to read SMS, and then reads the received verification code itself.
Much of the criticism of how gently WhatsApp and Signal handle key resets misses the mark. Widespread adoption of end-to-end encrypted messaging is an effective countermeasure to passive collection and blanket surveillance. In order to get that widespread adoption, you can't be showing people skull-and-crossbones warnings every time they swap out a SIM card.
Speaking from my experience getting journalists and political campaigns set up with signal in 2017-2018, the early scary key change warnings were off-putting to people and made them reluctant to continue with the messenger. At the time, I was in contact with 100-150 people via signal and quickly ran out of patience with anyone who insisted on a safety number check. But the UI at the time encouraged that level of paranoia.
I continue to believe that making key changes as painless as possible for users is the correct approach as long as there are ways to harden this behavior in the settings, for the benefit of the far smaller set of people to whom MITM attacks are a credible threat.
I agree with your assessment about key change management. With that said, I do like the device history trail that Keybase uses. Keybase has a better multi-device story - in that it has a multi-device story at all. I understand what they're trying to do by preserving message history - though I do prefer my conversations to be ephemeral by default.
I'm OK with the compromise that Signal has made with key management -- there are people that I really care to have private communication with, and people who I prefer to have private communication with. I verify the former, and don't bother with the latter unless we happen to be bored together in the same room.
On a side note, thanks for your efforts over the last two years!
I agree that there will always be some instances of key changes, and that you can't make that too scary or else it just drives users away.
But it's not all or nothing. Signal could do better at improving their UX to reduce the number of key changes people make in practice. If they did, they could probably make the prompt slightly more scary (but not Keybase-level scary) – but even if they didn't change the prompt, the mere fact that users wouldn't see it as frequently means they'd be more likely to pay attention when they did.
In particular, according to other posts in this thread, the backup/restore mechanism that allows transferring a key between devices currently has poor UX, only works on Android, and is buggy. Obviously that should be fixed. Once that's done, they should make the app actively prompt you to transfer the key as part of the setup process, using a QR code scanned from either the previous phone or a linked computer. You would still have the option not to do so, and some users would inevitably choose that option (if only out of laziness), but if transferring the key is seamless enough, a decent fraction of users would do it.
> you can't be showing people skull-and-crossbones warnings every time they swap out a SIM card
But isn't Keybase's solution precisely about ensuring that this doesn't happen? If you change your phone's SIM card, the phone still remembers its secret key.
A solution to this would be selective enforcement. Let the user decide if they want "light" (auto accept new keys), "strong" (skull and crossbones), or "paranoid" (auto-block any new keys) security.
The same should be applicable to browsers, I think. When logged into your bank, you should be able to click a button that says "Make this website use paranoid security" or similar, which would apply very strict policies that prevent most HTTPS attacks, and maybe even enable protection against phishing and similar domains, or something.
I don't know why apps keep painting users into a corner with one generic option rather then giving them more choices.
Wouldn't you actually want the opposite behavior... any "password" field should have a dialog with, "Are you certain this is the site you want to log into?" then you can check, "oh yeah that isn't my bank."
edit: If I don't get an autofill option from LastPass, I'm incredibly skeptical if it is the right site.
There's a couple problems with that. Not all sites get their password forms detected correctly. And sites may change their design, causing the old form to stop being cached correctly. It would be hard to tell if changes were just design updates or phishing, and I think the layman would be hard-pressed to make that determination.
Really I think we need something akin to app shortcuts. If you want to log in to your bank, you should be able to go to your browser's Home page and find some kind of signed login link that was created the first time you logged in. It's like a bookmark, but following a particular standard, and can be pushed by the site and accepted by the user one time, with warnings and instructions to keep users from accepting it from phishers. The intent would be to add a process that would be hard to dupe users into following by accident, and giving them easy access to sensitive sites. Then the site could tell users "never follow a link in an e-mail from us, always use your Home screen shortcut" or similar.
Browsers could even vet these via a kind of "app store for logins" to remove anything that looks like phishing. Maybe that's been explored before, though?
Matrix handles this by exposing the device keys to the user so they can make decisions about whether to trust new devices (and I believe identity key changes mean you wouldn't be in your rooms anymore -- but in order to change identity keys you would have to delete your entire Matrix account on the homeserver).
If a new device has shown up, your messages will be blocked from being sent until you verify the new device. To be fair, it is too easy to blaze past the warning -- and it can happen often in large rooms. As a result, it's a little bit cumbersome at the moment, but with device cross-signing coming down the pipe and the new verification system (which is much better than Signal's IMHO -- you just check both devices have the same string of 7 emoji on their screen) it's getting a lot better.
I still think Keybase is right — multi-device or some kind of multi-trust model is best so the key revocations aren’t happening so often. I remember this problem with PGP and most people did not take the key verification seriously.
And the problem with SSH that was pointed out in the article is funny now because of cloud services where servers are constantly being destroyed, so keys are changing frequently, unless you save and persist that private key for the server in your configuration. Which I’ve realized a lot of companies are simply not doing, which leads to people straight up ignoring key verifications in their ssh config.
SSH is even worse in many corporate environments, because you also have to cross an SSH gateway to transit from the corp fabric into the prod fabric.
At AWS this meant we had to hop to a different gateway for each region, and then onto the (stateless) production host which was recreated from scratch on every deployment... There's no way a human was going to keep all those keys straight, and everyone just disabled SSH host verification.
This actually seems like the right use-case for SSHFP records. They allow for storing the SSH key fingerprint in a DNSSEC secured zone.
Normally this is considered rather weak, because of the reliance on DNSSEC. However, in this case I think it's a very easy step to take that'll be miles better than just trusting everything.
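For reference, an SSHFP record is just (algorithm, fingerprint type, digest of the host's public key blob) per RFC 4255 / RFC 6594. ssh-keygen -r generates them, but the SHA-256 variant is easy to reproduce; the sketch below assumes an Ed25519 host key:

    # Build the SSHFP record for an Ed25519 host key: algorithm 4, fp type 2
    # (SHA-256), digest taken over the base64-decoded public key blob.
    import base64, hashlib

    def sshfp_record(hostname: str, pubkey_line: str) -> str:
        # pubkey_line is e.g. the contents of ssh_host_ed25519_key.pub
        keytype, b64blob = pubkey_line.split()[:2]
        assert keytype == "ssh-ed25519"
        digest = hashlib.sha256(base64.b64decode(b64blob)).hexdigest()
        return f"{hostname}. IN SSHFP 4 2 {digest}"

    # Clients opt in with: ssh -o VerifyHostKeyDNS=yes <host>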
The stronger alternative is Certificate-based SSH, but that is a lot of hassle. As it requires setting up a CA, handling revocation, and actually distributing the certificates.
A side note on the last part: @cert-authority is an underrated feature of OpenSSH. More companies should use it to make their infrastructure safer to use.
It has flaws in usage: an issuer cert per group isn't easy to manage (and makes pubkeys 1000+ characters long), you have to distribute CRLs, and one error in sshd_config could make a server unmanageable (multiplied by your config manager). I haven't seen a pure SSH CA automation project either.
You can make a certificate with multiple principals, but in order to add/remove some group access you'll have to reissue the public cert (and add previous to CRL with removal of some), which is OK for short-lived certs. With multiple issuers, you can add/remove public keys for different CAs but the resulting meta public key will be a set of keys actually.
That is what cross-signing will accomplish, but the benefit of having per-device granularity is that users can actually tell if a new (potentially malicious) device has been added to a conversation and blacklist it.
My very naïve understanding is that in the keybase model there's no master, just several keys which can sign each other, so you can lose any of the keys (as long as it's not all of them), unlike in the PGP model, where you can't lose the master.
In practice, though, as long as you protect the master key appropriately, the PGP model isn't that much worse (and possibly in some ways better — I'm not sure what happens in the keybase model when one of your devices is compromised, rather than just lost, and you don't notice for a while).
This works great for incredibly tech savvy people who have an offline way of verifying public keys. This is completely and utterly useless for 99.9% of whatsapp's 1b+ users. Heck, how many times have security-aware software engineers blazed through the "THIS KEY IS NOT TRUSTED" warning from ssh?
Same here, at least privately - it happened a couple of times that I renewed or changed for whatever reason the key on some server and a few days later when I tried to connect using a secondary device I got the warning, which made me go to defcon-2 for at least a few minutes, hehe.
On the other hand, at the company I work for, internal machine certificates/keys are renewed without my involvement (I'm a so-called "IT-owner" of some apps and the infrastructure that the apps use), and therefore I get the warning when connecting to their Linux hosts but each time (few times every few years, but still...) I dismiss it hoping that it's because of the work of the admins renewing the certificates (clarifying who-did-what-when would be a potential major administrative undertaking) => this is absolutely a flaw of the internal processes.
Yes, it could be fixed by a more complex infrastructure, but again, if the internal processes are weak then even the more-complex-infrastructure could be compromised by side-attacks.
Same. If I reinstall the OS on my raspberry pi I will ignore the warning. I once had an issue where using the WiFi hotspot on a coworkers phone caused the SSH warning to trigger and neither of us could work out why. I did not ignore that warning.
Matrix's new key verification UX is incredibly intuitive. You press a button to verify the keys and then you out-of-band check that you're both looking at the same set of 7 emoji. This mutually-verifies both devices. I don't think it's possible for it to get any simpler.
With cross-signing, this will get even easier because cross-signed devices will be automatically trusted.
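For the curious, the shape of that emoji check is a classic short authentication string: both devices derive the same bytes from the verification handshake's shared secret and map 42 bits onto 7 entries of a fixed 64-emoji table (6 bits each). A rough sketch - the real Matrix spec pins down the exact HKDF info string and emoji list:

    # Derive SAS bytes from the shared secret + transcript, then take 7 x 6 bits
    # as indices into a fixed 64-entry emoji table.
    import hashlib, hmac

    EMOJI = [f"emoji-{i}" for i in range(64)]   # stand-in for the spec's fixed table

    def sas_emoji(shared_secret: bytes, transcript: bytes) -> list:
        sas = hmac.new(shared_secret, b"SAS|" + transcript, hashlib.sha256).digest()
        bits = int.from_bytes(sas[:6], "big") >> 6       # top 42 of the first 48 bits
        return [EMOJI[(bits >> shift) & 0x3F] for shift in range(36, -1, -6)]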
Just fyi, about "Matrix" (not directly related to this specific post, but I still wanted to mention something about it...):
I started testing "Matrix" using the "Riot" app on Android and the web-frontend ~4 weeks ago and so far things look good, which I think is fantastic!!!
I was especially happy to see that I could not read messages posted prior to me entering any encrypted chatrooms => gave me a real "good general feeling".
The "Riot"-app for Android (using it on SailfishOS) is so far quite intuitive and it did never crash so far. Battery consumption is relatively heavy (I used to charge my phone once per week, now it's once every ~3-4 days) but I guess that other users that have more frequently an active Internet connection (because of other apps) will be impacted less.
All in all I'm beginning to think about trying to push it to my friends - I think that the first attempt might be to use an encrypted group chat for special themes (e.g. discussions about algorithms, stock market discussions, etc..., everything we do not dare to post in WhatsApp) to have a technical advantage vs. WhatsApp & Co.
Yeah the security of matrix + riot seems like the best of its kind. The issues come from the extra security causing some things to be a bit of a bad user experience but it's getting better.
It does partially solve the issue of continuing to send messages without any sort of warning -- new devices mean your messages won't send unless you hit the "send anyway" button (and in your settings you can disable the send anyway option).
It also solves a problem that Keybase has, which is that all new devices are automatically trusted (so a compromised device can just register more devices to avoid being blocked). To be fair, this makes for a pretty bad UX -- and the Matrix folks are working on cross-signing to make this easier -- but it is very useful for a user to be able to detect new devices (especially their own).
yup, one big issue is the warning coming up too often, not making it more visible or forcing it in like matrix does.
because then you just end up with less people on the system, so you'll talk to them over unsafe channels.
keybase's opinion is definitely my opinion so im a little biased there, but so far i can't think of anything that's better than making resets occur as rarely as possible by using multiple devices for recovery.
Yeah usability-wise it's not the greatest right now until those features get added, but security-wise it's better than Signal/WhatsApp in warning you about detected new keys.
They also have no (as far as I could find) way of installing Keybase without root permissions on Linux. I tried looking for a way to install keybase without "sudo dpkg -i keybase.deb" but had no luck. In the end, since the people I'm working with use it, I had to spin up a VM to install it in so that keybase wouldn't mess up my Debian installation.
Keybase uses root privileges only for making the magic /keybase directory available, where you can access your KBFS files (the redirector allows different users on the same system to see their own files). Keybase and KBFS run as unprivileged daemons (via the systemd user manager where available).
As giancarlostoro mentioned, you can unpack the .deb file and run the binaries out of there. If you put the binaries in your $PATH, you can even symlink the systemd unit files to your ~/.config/systemd/user and use the systemd user manager to manage your custom Keybase install. Note that the KBFS mount will not be accessible at /keybase, but instead at another location writable by your user (see https://keybase.io/docs/linux-user-guide#configuring-kbfs).
I have friends that I know use Keybase for work with a specific corporate account. I want to talk to them using their private account, which I can't, because the app only supports a single logged-in user. So, is it easy or possible to have multiple accounts logged in and used at the same time? That includes support on Android as well.
Maybe the chat application should be extracted into a separate, more native application, since another "problem" I see my friends talking about is that it is too slow.
Actually there was a discussion about adding Keybase to the F-Droid repositories. However, it seems that contributors in that issue failed to build the app.
It would be great if there were help or guides from Keybase.
You can also extract the files from a .deb file and place them wherever you so desire as well though if you really want to be that extreme about it. I see no issue with installing things as root, it's running random software as root that's the real issue. If you verify what the post-install script for a Debian package is doing (ie not running anything not already on the system) you should be fine to install KeyBase and any other package as root.
Packages don't run the software they install unless it installs a daemon or something.
My concern isn't that I don't trust Keybase to not be malicious, it's that I don't trust their packaging to not conflict with other packages. Debian has a very strict packaging process and it effectively guarantees a stable system, but installing packages that don't follow the standards that their packagers have could cause problems on upgrades.
I don't know enough about Linux to verify that the Keybase package does everything right; I delegate that to the Debian packagers and don't install anything as root unless it's from the Debian package repositories. Any software that I need that isn't in the distro is installed to a folder inside my home folder, where it might conflict with other custom installed software, but at least it won't break the entire system.
I believe if you do dpkg --contents keybase.deb (or whatever it's called) it will list out what files are in a debian file. You should be able to see if they're including their own that conflict with the rest of the OS, but also if a package is going to mess with a file the OS installed, my experience has been that the package manager will warn you of this or not allow it, but I can't remember off the top of my head. Sane use of dependencies on Debian means depending on the specific dependency from that specific version of Debian.
I've built my own Debian packages at work, but I'm not a total guru yet. I've never run into issues with KeyBase on Linux yet, but honestly you could always open up a GitHub issue with your concerns to find out.
Edit:
Best I can tell from their github they install KeyBase to /opt/keybase specifically, or at least the main stuff, which is what third party packages usually do.
(A very tiny fwiw): you /can/ create a backup in signal and use it to transfer seamlessly to a new device, without triggering new safety number checks. The user flow sucks, but it is possible.
Backups are only possible on Android, and the most recent Signal non-beta builds have had this feature broken, making backups you had useless.
Unless you're willing to open a bug and be a pest for 2 weeks (and still have access to your old phone/leave it registered) I wouldn't plan on retaining your messages or keys across phones. It's a huge weak point of Signal.
Even in that case, you could only backup on device storage, had to write down (!) a huge set of numbers and manually copy files around. The whole flow is awfully terrible to any user - at least WhatsApp can sync your profile over Google Drive.
Last time my phone died, I lost all of my Signal group memberships, because the app is incapable of transferring those to a new phone without a backup... and it's also incapable of doing the backup automatically. Those UX choices continuously baffle me - it's like the authors didn't learn anything from the failure of PGP.
Yep, I ran into this. It really soured me on signal, I definitely disabled it as my default SMS. I have a history of combined SMS and signal messages trapped on one phone. I can read them in the signal app, make backups, but not import them on a new phone.
Even if the backup works, the process is rather tricky. You have to get the backup file into a specific directory on the new phone BEFORE you open Signal. There's no "import from backup" option. If you mess up, you have to uninstall, then do the file copy.
Hmm, that's odd. I very recently had to cycle through using four different phones (one was dying, two ended up being defective, finally got a decent one), and each time I successfully transferred all my messages to the next. I thought that just transferred your messages though, not your keys (I could be wrong there).
But it is a bit silly that you need to manually move the file from the old phone/backup location to the new one without some in-app option to do so.
This was circa 1 month ago that I had this issue, and it was fixed in the beta build of Signal Android after a few days. Moving to the beta channel doesn't really help if the old device is not available anymore, as the backup is still not usable.
Since this feature came out I haven't had to change my keying with Signal in over a year. To your point, for heavy-use Signal users the flow is a bit arduous. I hope that Signal can offer a continuous backup option at some point so I can just have the app incrementally back up to my NAS while I'm at home. This would solve both the problem of unforeseen phone loss and the amount of time it takes to migrate to a new phone.
In the interest of transparency I also use Keybase for things as well. The problem I ran into with Keybase my first go-round was I lost any ability to restore my account after I hadn't used it for about 9 months. That is mostly my fault for not having more bases covered, but had I had anything notable I would have wanted access to I would have lost it at that point. As an end user I don't feel that Keybase competes in the same manner as Signal. Since I'm an Android user Signal is 100% transparent to my normal use handling both SMS/MMS and Signal messaging. Keybase isn't a drop in replacement with that in mind. I realize this isn't the norm for iPhone users. Regardless, my question is: are many people using Keybase for messaging? About half my contact list uses Signal, yet I only have ever communicated with two people via Keybase.
It does suck incredibly badly. I don't understand why either, why can't I just initiate a transfer from a different instance, get notified on the old phone and initiate an encrypted transfer between the two? Instead I have to write down a ridiculously long number and transfer the backup by myself. Last time I couldn't get it to work on my first try so I gave up and restarted from scratch.
Given that these are mobile phones, a more sensible way of transferring a secret between devices would be to display a QR code on the old device and scan it on the new one. You could even make this secure against local eavesdroppers by encrypting the data in the QR code with a key provided to the devices by the server.
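Something like this flow, roughly (a sketch of the idea, not how Signal actually works today): the server hands both enrolled devices a short-lived session key, the old phone encrypts its identity secret under it and renders the ciphertext as a QR code, and the new phone scans and decrypts. Someone filming the screen gets only ciphertext; the server gets only the key and never sees the QR payload.

    # QR payload = base64(nonce || AES-GCM ciphertext of the identity secret),
    # keyed by a short-lived session key the server gave both devices.
    import base64, os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def make_qr_payload(session_key: bytes, identity_secret: bytes) -> str:
        nonce = os.urandom(12)
        ct = AESGCM(session_key).encrypt(nonce, identity_secret, b"device-transfer-v1")
        return base64.b64encode(nonce + ct).decode()   # render this string as a QR code

    def read_qr_payload(session_key: bytes, payload: str) -> bytes:
        raw = base64.b64decode(payload)
        return AESGCM(session_key).decrypt(raw[:12], raw[12:], b"device-transfer-v1")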
Having actually built such functionality for a different android app, I can assure you that 'just transfer the file' is extremely difficult. Problems include Android API skew across versions, different OEM android implementations of those skewed APIs, and more!
For transferring backups from one phone to the other, I used Xender. Probably best to just transfer with an sd card, though.
I can totally believe that, but Signal already has most of the infrastructure in place to sync messages and files across devices in a secure manner. It doesn't seem to me that it should be too difficult to repurpose it to do an initial sync between two devices.
The "WhatsApp Business API" client also offers the ability to backup/restore the Signal cryptographic identity, because it would be a bad experience for end-users to see identity change notifications from businesses every time their IT person decides to fiddle with their servers.
It would be nice if this could be done on smartphone clients, but that pretty much requires leveraging some sort of "trusted key store" external to the app. (Which I'll admit may exist on some platforms, to varying levels of trust. How feasible and acceptable it is for this use case, I honestly do not know.)
on Android: Settings, Chats and media, Chat backups - Backups will be saved to external storage and encrypted with [30 decimal digit password] ... ... Other than chat message contents, it is unclear exactly what gets backed up. It defaults to saving in the Signal folder on the INTERNAL sd.
on Windows: File, Preferences, Contacts - Import all signal groups and contacts from your mobile device. ... ... It is unclear what else gets imported.
The requirement to use a phone number at all, let alone as an identifier, is a major complaint I have about many messaging apps. It really limits the usable cases, as I have plenty of people I would love to interact with but don't necessarily want to provide my phone number to.
I love Keybase for this aspect, but something I don't like about it is its device name handling. They don't allow decommissioning old device names, so I end up having 'MyLaptop' 'MyLaptop 1' 'MyLaptop 2'...
Oh interesting. I don't think we've talked about this decision publicly, so I can write about it for a second. Not letting people re-use a device name is an inconvenience, I admit, but arguably it's not like other cryptography inconveniences, where people are confused, troubled, etc. We figured people would say "huh, weird requirement" and pick a different name and move on.
The goal is a 1-1 mapping between devices (keys) and these names. So whenever we need our UX to talk about a key, it can talk, safely, about it in terms of device names. Once committed to your chain of signatures, "Laptop-Warhol" means a specific device key, and it can't be used again. So, for example, if one of your Keybase installs wants to tell you "oh, Laptop-Warhol just added a new device, iPhone-Vangogh" then it doesn't need to look like this: "Key 34858234589234895897234598734 added key 90123845890230948234234324."
If Laptop-Warhol could mean multiple devices (keys), well then we'd need to start talking about the keys. Which is a nightmare for usability.
A lot of this decision was driven by something we've seen with apple devices. Every now and then I'd get a popup on my computer - say when updating iOS - that said something like "you just started using iMessage on a new device, 'chris's iphone'. if you don't know what this is you should freak your shit out." well - it has basically said that so many times with the same names over again, that I can safely assume that it's a near-useless warning.
Note I mean unique to you; 2 different users on keybase can name their devices the same.
Generally speaking...it's been a goal from the beginning that names on keybase are meaningful. Similarly if you look up "chris" in our merkle tree (which is pinned to bitcoin) that leads to a deterministic chain of signatures. inside that chain, where I mention "work-imac-warhol", you're guaranteed to see the same answer as I am. So "chris" is as good as a key fingerprint or safety number. And so is my device name.
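A toy illustration of that invariant (hypothetical sketch, not Keybase's actual sigchain code): once a name is committed it maps to exactly one key forever, even after revocation, so the UI can always refer to keys by name.

    # Names are never recycled: revoking a device kills its key but keeps the
    # name reserved, preserving the 1-1 name-to-key mapping.
    class DeviceChain:
        def __init__(self):
            self._name_to_key = {}   # name -> key fingerprint, never deleted
            self._revoked = set()

        def add_device(self, name: str, key_fingerprint: str) -> None:
            if name in self._name_to_key:
                raise ValueError(f"device name {name!r} was already used; pick a new one")
            self._name_to_key[name] = key_fingerprint

        def revoke_device(self, name: str) -> None:
            self._revoked.add(name)  # key is dead, but the name stays taken

        def key_for(self, name: str) -> str:
            return self._name_to_key[name]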
I understand where you are coming from, but my irritation comes from the fact that the device cannot be revoked from my account and the name recycled. (e.g. OS reinstalls, etc. I don't really like naming my device MyMainPhone-2, logically, because it's not my second phone. It's the same device to me.) If a device that's active already had that name, I would agree with your decision that duplication is not allowed.
Maybe some people are more irritated by that than others (I'm certainly the former type); perhaps consider an "advanced" option to remove existing names for those who know what they are doing?
I dislike the phone number requirement too. Do you disagree with having to use it to register/activate an account or with having to expose it to other people you may want to communicate with directly or in groups?
For the former, you can use Wire, which allows you to sign up using an email address (from the desktop). It also syncs conversations across devices and is end-to-end encrypted. You can just share your name or username with others for them to find you.
For the latter, Telegram uses a phone number for registration/activation, but you can set a username and send that to others (or better, share a https://t.me/<username> link). Anyone who's not in your contacts list would never be able to see your phone number, even if you're in the same group chat. They can use your username to tag you in chats or message you privately. (General commentary about Telegram's encryption is out of scope for the point)
A few months ago I deleted Facebook and part of doing that looked into what messaging app to move to. I needed something easy to use as I wanted to get my family on it but also really didn't want to link my phone number.
I ended up landing on Wire and so far have found it to be really good. You can register with phone number but it is not required and you can register with an email instead.
Let's not forget what Signal and the Signal Protocol (used by WhatsApp) have achieved: making end-to-end encrypted chat EASY and accessible for the masses, for many of whom "security" is password123. It's important in our post-Snowden world.
I'm not sure why this is downvoted. The reason WA has so much popularity is that it is easy to use and you could use wifi. Signal is not as easy to use (less features) but more secure and so privacy conscious people like it. But people don't like switching to Signal because "it is hard". Making a more secure app is good, but we have to question "do we want people using pretty good e2e or do we want to make the perfect app first?"
I don't think your question is as easy to answer as you might be implying. I certainly don't know the answer. My main concern is: what is the downside to "pretty good e2e"? Without understanding what that means, we can't evaluate the situation.
"Good" is better than "best" if "best" is not available seems obvious, but it certainly isn't true in a lot of cases. If I tell you that X secure and you trust it to be secure, when it actually has problems -- that might be worse than me telling you that X is not secure.
People are bad at evaluating risk. If I want to pass notes in class and don't want my teacher to know what the note says if I get caught, then rot-13 is probably "good enough". But if I'm a whistle-blower for a government agency, my security needs are quite a bit higher. We can never make a perfect app, but I'm not sure I could define what "good enough" looks like for the general populace. It's completely reasonable to me that different groups have different opinions on the matter -- and I think that's a good thing.
Is Signal or WA in that regime? Literally the only reason I don't use WA is because it is owned by FB. But as far as I'm aware both are cryptographically secure. Yes, I'm aware WA has a metadata problem.
So what's good enough? I'd say that if you need a state actor to crack it, I'll call it good enough. At least for the general populace. Any more difficulty that can be added is a bonus imo.
Clearly we can't get a perfect e2e app. So at what point in time do we say "we also need other features that people want so that they'll use our app". That doesn't mean "stop working on encryption" (you can never stop that) but "we're at a good enough point to start targeting a larger market." I think something like Signal is there. Stop focusing on the security geeks and bring in the general public.
I don't understand the "it is hard" part. You register with a phone number after installing and that's it. Isn't that the same use case as WhatsApp?
Sorry, I never used WhatsApp, so it's really, really hard to understand what those features are that people keep arguing are missing, or what is so much easier to use in that application. I use Signal and Keybase regularly and I don't find them lacking.
So, is there any write-up anywhere talking about those mysterious and amazing features and user-friendly UI that only WhatsApp has?
I'm actually stopping there because Signal in the last year picked up a lot of other features that WA has and people want (like emojis).
What could still be useful:
> Enabling md markup (I'd argue that this is more important to Signal because of the large number of coders that use it). Note: this is in Messenger.
> A better desktop app
> Way to add users without handing out phone numbers (low priority)
For just texting a single person it is fine. But plain texting alone isn't enough. This needs to be a chat app, not a texting app, because that's what people want.
> it's really really hard to understand what are those features that people keep arguing about that are missing
Ask them. Use WA. Figure it out. Figure out why non-techie people don't use Signal.
The argument is that people don't bother to keep their verified connections up to date. But come on, how often do MITM attacks happen? For those rare cases where people are doing stuff important enough that it becomes a possibility, I would guess the security-conscious individuals would become more diligent.
For the rest of us, it seems that doing it on occasion is still worth it. As I understand, Signal is designed to never indicate over the wire who has checked safety numbers. Thus, a MITM anywhere on the network creates a risk of becoming discovered, which is a cost in itself.
I use signal to chat with my wife. I'm glad it offers good enough protection that my country's secret service is whining endlessly that it can't get in. I'm glad our inappropriate jokes don't easily become public.
At the same time, I'm sure that a determined and competent adversary could compromise my phone without needing to break signal's encryption or engineer fairly sophisticated mitm attacks.
I'm fine with that level of security, and don't want a more annoying UI when a device is reset. Because even with the current setup, the chat protocol isn't the weakest link anymore.
A big benefit of end-to-end encryption is that it makes it impossible for the service provider to suddenly start doing silent mass surveillance of their userbase. A solution doesn't need to be 100% perfect to achieve this goal.
Somehow, nearly every article on the subject completely misses this, and instead keeps moving goalposts on reasonable endpoint security.
I'm happy that I found this subthread here, which in a sea of comments talks about the really relevant point: Signal's goal is not to provide the most secure communication channel ever to targeted individuals. It's to mitigate passive eavesdropping on a whole population. This creates the smokescreen that the actual targets require, and prevents people from actually becoming a target in the first place.
Exactly. I only use signal for chat, and I tell my family and friends that they must install it if they want to send me text messages. It's not because I fear being hacked, it's because I don't want highly personal conversations about sex, drugs, chronic illness, and skeletons in the family closet getting picked up by global passive surveillance.
Yep. And Signal is as good as it gets for end users with zero technical knowledge. I've had many family members and friends install it on their own and not skip a beat.
It also means that an adversary can't go in long after the fact and dig up a message history from a server god knows where.
A lot of the eavesdropping scenarios concocted around key change in particular are fanciful. Practically speaking, end-to-end encrypted chat is often replacing email, not trying to protect secret agents hiding from the Reptilian Council.
>A big benefit of end-to-end encryption is that it makes it impossible for the service provider to suddenly start doing silent mass surveillance of their userbase.
Mass surveillance no, but if everybody blindly trusts any key provided (which seems to be the default "setting" for Signal and WhatsApp) then it's easy to start MITM'ing any connection at any point. You'll just get an innocuous-looking "your safety number has changed" and nothing else.
I do agree that it's still much better than nothing, but I also agree with TFA when they say that "Signal cut a big corner by not planning device management properly"; there are many ways Signal could make it massively easier for users to transfer their keys from one device to another (for instance by deriving the master key from a passphrase bitcoin-wallet-style, or simply by making it easier to transfer your keys and history from device to device).
Because they didn't do this they probably considered that having an SSH-style intrusive "SOMETHING IS GOING WRONG HERE" message was too annoying and got rid of it. But really, they're fixing the symptom, not the problem.
Normalize end-to-end encryption, increase the amount of encrypted target traffic for attackers.
And I think it's not so binary as care/don't-care about being hacked. You might care a bit, but how much do you care? Just as important, what is your threat model?
MITM is an active/directed hack; encrypted chat covers all the passive hacks. My friend's arch-nemesis won't be thwarted, but a voyeuristic sysadmin will be. And my friends don't have arch-nemeses (that I know about anyway), but I know lots of people who would eavesdrop at any opportunity.
This is a great point, however as many security researchers will tell you, the cost of an exploit goes down exponentially over time.
Yesterday's hash attack that required a $100m supercomputer will require a $10k GPU, which in 5 years will require a $100 GPU. (Not talking about the math changing, but it's been more about weaknesses in S-boxes and other parts of older hash functions that get slowly chipped away)
Similarly, yesterday's system that takes an attacker 3 days to MITM your machine, will take 3 hours, and then will be somebody's python script that installs and aggregates millions of exploits.
So in general, the cost and benefit variables are constantly changing under us.
Actually I was referring to the cost to the user who wishes to protect themself. Signal is designed to be fairly low "cost" i.e. easy to use. If they can find a way to make it more secure without making it harder to use they should and probably will do it. For most cases, the benefit of privacy probably doesn't warrant going through more trouble than this.
For those cases where somebody needs more protection, there's a way to go through a little more trouble to use Signal more diligently (agree with important parties not to change keys for a period of time).
To your point though: the cost of executing a MITM doesn't just include the equipment, it includes showing one's hand by being discovered.
>Similarly, yesterday's system that takes an attacker 3 days to MITM your machine, will take 3 hours, and then will be somebody's python script that installs and aggregates millions of exploits.
The big cost of a MITM attack is getting in the middle. The cost of doing the actual attack is minimal.
By not knowing which peers have verified numbers, the risk of getting caught that you're in the middle is very high, so that big cost for getting there will be for nothing (and actually get way more costly because you alerted the subjects).
It _seems_ you (someone from the Signal project?) are actively diverting from the point, with what is ultimately a security theater request. Keybase's app - which IS open source - doesn't trust the server at all. We could be running anything server-side, regardless of what we do or don't publish. Meanwhile, Signal's story is "you MUST trust our server, over and over again," as the blog post explains. Unfortunately there's no way to know what's happening on the server. So being like Signal and publishing your server source is strictly worse than being like Keybase and not (yet?) publishing server-side source. At any time, Signal could be throwing in these fake key upgrades, either due to running other source code on purpose, or being forced to, or just plain getting hacked. The most malicious Keybase server could not.
This comment may be of interest (we could release server code at some point, and I will take this as a vote), but I hope people reading this aren't distracted by Signal's flaw here.
> But to be clear, you're actively diverting from the point.
...
I get why you showed up here, but you're really not addressing the point of the post at all, and in fact you're trying to distract with the suggestion that Signal's publication of that code protects people from this flaw. It doesn't. At all.
Wow, that's a pretty hostile (and accusatory) response to a fair ask. This is one (small) step removed from accusing someone of shilling/astroturfing.
Let your product stand on its own merits. If you have a good reason why you won't open source Keybase's server implementation, own it. Don't undermine requests to open source the code by publicly accusing people of supporting a competing product.
The person you're replying to didn't make an argument in favor of Signal - or any other competing product, for that matter. In my opinion, your response is actively distracting from their request.
Even if they publish their server code there is no way for anyone to verify that it's the code they are actually running and it would be just a PR move. If the client implementation is good there should be no way that the server can compromise any message.
It's a step toward people running their own servers, either federated with Keybase proper, or just as a personal instance. That would be valuable for quite a number of enthusiasts. Federation (like email/XMPP) is a very reasonable feature for any forward-looking communication platform.
Keybase's target is to become a central identity point. Other features (like team chat and git repos) are made to showcase what you could do with that.
Okay, and that's exactly the kind of reasoned response that's appropriate. What's not appropriate is implying the request is simply unfounded because of its source. It's not charitable.
Well, equally, GP could've disclosed their conflict of interest instead of just using a hit-and-run one-line red herring. OP makes a post advocating not having to trust a server, and the most upvoted comment is someone asking them to open their server so they can trust it? Doesn't make much sense.
Agreed, they could disclose a conflict of interest. But I don't think it matters here, because their request could reasonably have been brought up by someone unaffiliated with Signal.
In other words - you don't need to be affiliated with Signal to be in favor of open sourcing the server-side code. It's a fairly common complaint on HN, and I can see why it was the top comment for a while even if I don't ultimately agree with the need to open source the code. Likewise, if you look at the link to the GitHub issue you can see many other people likewise asking for - or reacting to responses to - open source the server code.
Do all those people have conflicts of interest? Is it possible that the affiliation with Signal doesn't matter here? Then be charitable, and let your actual reason for not fulfilling the request stand on its own.
I think it's a pretty reasonable way to combat FUD.
It's tough to compete on security because users struggle to know what's actually better (on top of needing convincing that security is a worthwhile differentiator in the first place). A client that doesn't trust a server is a great improvement, and "show us the server" is a terrible response.
Then the reasonable thing to do is to explain why it's a terrible response. What's unreasonable is to imply that the request is a sideshow in favor of a competing product because of the identity of the person who brought it up.
If you have a good reason not to fulfill the request, charitably responding to the request with that reasoning is an educational opportunity for the audience. There's just no need to bring identities into the mix like this, and I think a dispassionate response outlining why the server need not even be trusted would stand on its own.
Fair enough - I don't want to dilute my point by coming across as too hostile, even though my point is that it seems like a well-crafted diversion. Let me edit it down and your quote of the original can stand.
Whoa... I don't work for and have never worked for Signal. Feel free to ask Moxie if I have ever worked for Signal and the answer will be an abject "no".
I think it's worth meditating on the tradeoffs of your system design. Nothing is perfect.
Signal is trying to do the best that it can, and I really think that the starting line in writing secure software is open sourcing the whole thing from top to bottom. Anything less isn't auditable.
Note: I edited this post to make the language more addressable. I love the work Keybase is doing, but I want them to open source their server.
How does open sourcing the server help you audit what their servers are running? There's no way to know if what's open source matches their running code, and if the security of the system depends on the server being open, it's not secure.
Because if you don't trust it you can run the server yourself and see if the behavior is correct. And over time, as we move to a world where servers become better able to verify the code they're running, we can improve the trust model.
Given the choice between having the server code open sourced or not, the choice that is higher trust has to be open source.
That's just false. You can get all of the trust information necessary from the client. It's exactly the same amount of trust.
edit: To be clear, even if they open sourced the server right now, I would not even look at the code to determine if running the client was safe. The only time I would care to look at the server code is if the client's correct operation depends on the server running specific code. If it doesn't do that, then the server code doesn't matter. And if it does do that, I wouldn't trust the system, anyway.
So audit the client assuming that all of the traffic it sends is in the clear (i.e. not over https to keybase's servers). If that's not sufficient, then don't use keybase.
Regardless, until you have a way to ensure the server is running the code you expect it to be running (can this even exist? what about hardware level attacks on the servers keybase is running?) the server code is useless from a security perspective.
Releasing the server code allows concerned users to run their own servers, and this way they can ensure that they are using a server that runs the code that they expect.
There is no need to run the code that you expect if the client is designed properly, and you are only sure you're running the code you expect until your server is hacked, which also becomes a non-issue if the client is designed properly. There are other reasons to want to self host: to be in control of your (encrypted) data so that it isn't lost if keybase goes under, for example. But security is not one of the reasons.
Why assume that the server will eventually get hacked? Why shouldn't the server be designed properly, just like the client?
I still think it makes sense in terms of security. If I run a piece of client software X that connects to a server Y, it will always be better in terms of security if I'm in control of what runs on Y. This is independent of how X has been audited. So yeah, I would argue that security is also a reason.
Assuming the server will eventually get hacked is how you design secure systems. You assume the worst, and ensure you are still secure. The server code being designed well has nothing to do with the hardware running the system being hacked. For instance, no amount of good design in my server stops a remote 0-day against Linux.
This is a subtle point, but that thinking is misguided. You, as a client, have no control over what server you’re actually talking to. The only way to be sure you are secure is to be secure independently of if the server has been hacked, or is malicious, or whatever. Thus, you design your system such that the client only discloses information that is allowed to be public to the server (public keys, encrypted messages where the server can’t decrypt it, etc). In that way, you don’t care what code the server is running, and auditing the server makes no difference to security.
The whole purpose of cryptography is to reduce the set of things you need to trust. Including the server in the trusted base is not only a worse design but a false sense of security if your trust ends up misplaced (hacked).
So, this is different than how a bank would work, for example. In the case of a bank, you have to trust that their servers are secure, and their software being open would help with auditing that. In this case, keybase is not the endpoint you’re talking to: another keybase client is. The keybase server is just an intermediary, just like any router on the internet.
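A minimal sketch of that design, using the PyNaCl library (the library choice and variable names are mine, not Keybase's actual code): the server only ever handles public keys and ciphertext it cannot decrypt, so auditing the client is enough to know what an arbitrarily malicious server could learn.

    from nacl.public import PrivateKey, Box  # pip install pynacl

    alice_sk, bob_sk = PrivateKey.generate(), PrivateKey.generate()

    # Alice encrypts directly to Bob's public key, on her own device.
    ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"meet at 6pm")

    # The "server" is just a dumb relay: whatever code it runs, all it
    # sees is this opaque blob plus routing metadata.
    relayed = ciphertext

    # Only Bob's private key can open it.
    print(Box(bob_sk, alice_sk.public_key).decrypt(relayed))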
You can construct a client such that the server IS outside of the scope. You can determine if a client has such a property from the client alone. I would only want to use a client such that the server is outside of the scope. It's strictly better if the client has this property even if the server is open and magically trusted to be running the code you expect. The keybase client is such a client. If it isn't please inform someone with details on why.
edit: Do you audit all of the software running on all of the routers between you and keybase's servers? Why or why not? If not, why does this reasoning not extend to the servers? Why would the routers not part of the whole system, top to bottom?
It would be great to audit all of the software running on all of the routers I use. I hope that we someday move to a model where all routers run open-source attestable software.
Let me phrase this another way: if there's nothing to hide on the server, why isn't it open source?
We can go back and forth on this forever. My position is simple: strictly speaking, it's more rational to trust open source software. Trusting closed source software ultimately boils down to "trust me". I would love to reduce the degrees to which we have to blindly trust the systems we use.
And yet, all of the routers are not audited, and I presume you believe in the security of some applications that use them. In other words, the trusted base of the system does not depend on all of the components in the system (you start with the assumption that any closed components are hostile). The keybase servers are exactly the same.
The answer is that they do have things to hide: anti-spam/anti-scam systems, for example. The question is if they are hiding something that matters for security. You can determine this by auditing only the client.
Sure, open source software is great, and has many uses. In this case, it has no use in ensuring that keybase is secure. Somehow, you don't have to trust a great many components in the secure software you use on a daily basis, and yet you have to trust keybase's servers because.. reasons? And somehow this trust is important even though you'd still be blindly trusting that they're running what you hope.
I won't argue that some people would benefit from the server being open source, but to argue that open sourcing it has anything to do with security is just inane and FUD.
The question boils down to "what can an attacker learn by owning the server" and right now you don't know. You can pretend that doesn't matter, but it's not inane or FUD.
You have to model the whole system to understand the threat model. Anything less is blind trust.
You can know exactly what the attacker can learn because you can see ALL of the information that your client passes to the server by auditing ONLY the client. Your argument applies equally well to every single router or middle box on the internet, and it's just as wrong there.
You prove that it doesn't matter by assuming that keybase is running the most malicious code possible, auditing your client, and deciding that the system is still secure. This is what auditing the client means.
Additionally, to bring up this fact again because it has only been hand-waved away: Even if the server was open source, there is no guarantee they are running that code. Thus, there is no benefit to security until systems exist (somehow?) to prove the server is running the code you expect.
Double additionally: even IF you can prove that the server is running what you expect, how do you know that some box, after https is peeled off, but before the request makes it to the server, is not sending the same request off to some other, malicious, server?
I am going to say this one more time because I think it's a real point and dismissing it out of hand is unreasonable: there are things that can be learned from the server.
It's one thing to tell people that you aren't logging anything. It's another thing to show everyone you're not logging anything except the account creation date and last access date by open sourcing the software and then show exactly that in a response to a national security letter: https://www.aclu.org/open-whisper-systems-subpoena-documents.
Did you notice how your proof rests entirely on the NSA letter, and not the source code of the server at all? Isn't a world conceivable where they open sourced the server, with no logging, and then sent an NSA letter that contained information that wasn't logged in the open source code? If this is somehow impossible, please explain how.
Did you notice how you can compare the NSA letter to the source code and realize that, in effect, they're the same?
If you didn't have the NSA letter, would you be able to verify the source code? If another project got an NSA letter and responded to it, would it tell you anything about the source code?
This is simple: Having the source code means you get to learn more from the other signals, no pun intended, of how that source code is used.
Again, as we move to a world where servers have more verifiable code running on them, the value of having open source code will increase.
I don't understand your points about the NSA letters, which makes me think that my point was missed. I am saying that the NSA letter claiming that only some information was logged is fully independent of the open source code of the server. Assuming the NSA letter reflects the truth, there could be more information or less information than what appears to be collected from the open source server code because, once again, the server does not have to be running the open source code, and even if it were, that does not preclude other systems from running against the same information the server has access to. Hence, open sourcing the server does not affect the security of the system at all. If the system is insecure without knowledge of how the server works, then the system is insecure. Period.
I think you're trying to argue that open source is good, and I agree with you. Open sourcing the server has many benefits. The only point I have consistently been trying to make is that open sourcing it does not help with determining the security of the system, whatsoever.
edit:
> If you didn't have the NSA letter, would you be able to verify the source code?
No, but even if the code was open sourced, you would not be able to verify the code that is running.
> If another project got an NSA letter and responded to it, would it tell you anything about the source code?
It would tell you something about the code they are running, yes, but nothing about the code they open sourced.
> This is simple: Having the source code means you get to learn more from the other signals, no pun intended, of how that source code is used.
This is equally simple: the source code that is open may have nothing to do with the source code that is running, and you must assume that they are not equal when auditing the security of the system.
Just to be extra clear, the chances of someone lying to the NSA in a letter are really, really low. Given that we can compare the response to the NSA to what is expected and it matches, we can make some inferences that the software running on the servers is as presented.
In contrast, if you received an NSA letter for keybase and they delivered similar information, you couldn't make any suppositions about the server's code.
To be extra, extra clear, to me, the future of the private internet is further verifiability of remote systems. That begins with Open Source. I concede that we aren't there for most parts of the systems we use today, but we are getting better (see attested contact discovery in Signal as one example).
Why would I not be able to make inferences about the software the servers are running if the chance of lying in the letter is low? I haven't read Signal's source code, and yet I believe with just as much confidence that they aren't logging extra information as if Keybase had sent the same NSA letter. To me, Signal's source code is effectively closed, and reading it wouldn't increase my belief. (Have you read all of their server's source code? If not, how do you justify your belief?)
The article on attested contact discovery states "Of course, what if that’s not the source code that’s actually running? After all, we could surreptitiously modify the service to log users’ contact discovery requests. Even if we have no motive to do that, someone who hacks the Signal service could potentially modify the code so that it logs user contact discovery requests, or (although unlikely given present law) some government agency could show up and require us to change the service so that it logs contact discovery requests.", which is exactly the point I'm making. They choose to solve it by signing the code and attesting that exactly that code is running (it seems they just move the trust to Intel; hopefully SGX never has bugs like https://github.com/lsds/spectre-attack-sgx or firmware issues, as noted in the Intel SGX security model document). That's fine, but an equally valid way to do this is to make the secure operation of the system not depend on what code the server is running.
Doing that has some tradeoffs: there's usually overhead with cryptography, or an algorithm you need may not even be possible (Signal disliked those tradeoffs for this specific algorithm), but for some algorithms, it's entirely possible to do. For example, one can audit OpenSSL's code base, and determine, regardless of what the middle boxes or routers do, that the entire system is secure. Just replace OpenSSL with keybase's client, and middle boxes with keybase's servers, and do the auditing. Hence, open sourcing the server is not necessary for security. Would it be great if more systems could be audited? Absolutely. Is it always necessary for security? Absolutely not.
edit: Another quote from the article: "Since the enclave attests to the software that’s running remotely, and since the remote server and OS have no visibility into the enclave, the service learns nothing about the contents of the client request. It’s almost as if the client is executing the query locally on the client device." Indeed, open sourcing the code running in the secure enclave is effectively open sourcing more code in the client.
Just to be clear, code running on a remote server is not code running in the client. Just because the server attests to the client doesn’t mean the client is running that code. You still have to do all of the threat modeling for the attested code differently from the threat modeling for the client.
I’m not yet prepared to publicly get into all of the nuances of SGX, but I think it’s worth noting that there’s something very interesting happening there. I look forward to being able to discuss my team’s technical findings on the subject in public.
To summarize why this is so interesting: the attack surface is the whole system. Enclaves let us extend parts of our trust model to systems we don’t own. That is a real change and, if it works, it’s going to change how systems are designed at a deep level. The problem is that there aren’t very many working implementations of SGX in the wild (Signal is the only one I know of).
> And yet, all of the routers are not audited, and I presume you believe in the security of some applications that use them.
Why do you presume that? I certainly don't believe in the security of many applications that I use. I generally try to avoid putting any damaging information into them though.
I didn’t say you believe in the security of every application, I said you believe in the security of some applications. For example, websites secured by https do not require the security of routers to be secure.
This looks even less chill than the previous message, which didn't (iirc) accuse "someone from the Signal project?" of making "security theater requests".
Your comment would be much better without any allusions to this person's affiliation. Just answer the question directly without casting aspersions. You seem confident in your answer, so that shouldn't be hard.
Imagine how much harder it would be for your dastardly competitors to "distract" and "divert", if you or someone else from your project actually addressed the obviously legitimate question of keeping the server closed-source?
I mean other than coquettishly dropping a tantalizing hint by saying 'yet?'. That's nice, but insufficient.
The only named advisor for MobileCoin is the founder of Signal. "Work" is not the only conflict of interest in the world, and you're dodging by consistently talking about it.
We know you don't work for Signal. We also know that you very obviously have a close professional relationship (at the very least) with its founder.
I have 0 control over the Signal project in any way shape or form. The fact that Moxie advises MobileCoin has nothing to do with his work at Signal. I can't force Moxie or anyone at Signal to do anything.
Moxie and I have a close professional relationship but I'm not sure what bearing that has on asking for the code of Keybase's server to be open-sourced. That's not a biased statement, and I would say the same thing in any thread about Telegram, WhatsApp, or FB Messenger's privacy. It's all the same.
If you want trust, you have to be open source. That statement has absolutely nothing to do with Signal.
Yes I use Signal. Yes I'm a fan of the Signal team's work. No, I don't think Signal would be better off with a closed source server. Yes, I do think Keybase should open source their server.
I honestly have no idea why this is even controversial :/.
> Moxie and I have a close professional relationship but I'm not sure what bearing that has on asking for the code of Keybase's server to be open-sourced.
You have a professional relationship with the founder of one of their competitors. It's appropriate in those cases to note that you have a bias. I realize you don't think you have a bias, but that's the whole point.
> If you want trust, you have to be open source.
That's such a confusing statement, and a particularly misleading one coming from someone that works for a crypto company.
The whole point of end to end crypto is that you don't need to trust the server.
> I honestly have no idea why this is even controversial :/.
That's the core of the problem. You have a professional relationship with the founder of Signal. If you comment on a thread about Signal, or its competitors, we shouldn't have to click the link to your site to find that out.
I see. I will include that disclaimer in all threads I comment on about Signal in the future. Thanks for explaining this to me.
I've been a fan of Signal since it was RedPhone and TextSecure and my professional relationship with Moxie is quite recent in the scheme of watching the rise of his projects. I apologize if my lack of awareness was offensive, it was unintended.
Edit: Just to be clear, I don't think you need to have the server open-sourced to trust the end to end encryption of the messages, but that's just one part of the overall trust model.
I don't think any kind of disclaimer was necessary. Your point about the Keybase server being closed-source has nothing to do with Signal and is completely valid no matter what competing interests you may have.
You're also diverting from the point to argue that Signal is worse than Keybase for server trust. Releasing server-side code comes from the open-source philosophy that Keybase claims to be a part of on its website. Several people in that Github issue would like to self-host Keybase servers - open source is all about that kind of accessibility. No one's making the claim that publishing the code would make them trust Keybase more, though Signal arguably has benefited from releasing its code for public audit. What would be the cost of publishing your server-side code?
It's a shame that it's not open source from the perspective of people being able to self-host, but if you don't need to trust the server (as Keybase clients don't), then the security of the system is not weakened one bit.
That's a very useful property, because even if the server is open source, that doesn't guarantee that what the Keybase team is actually running in production matches the source they've released.
So I would say a system where the clients don't (and don't need to) trust the server, even if no server source is publicly available, is still strictly better from a security standpoint than a system where the clients do need to trust the server, and source is available. (Assuming, of course, that the design of the system as a whole is sound, and that clients have been audited to ensure that they follow the design of the system correctly.)
I’m saddened this is the top comment, as it appears to be little more than trolling.
It is better to not even have to think about looking at the server-side code.
The best case would be there is no “server” at all, but that’s the Internet we have to work with today to enable the usability and user experience they are going for.
If the client is open and you can see what it sends to the server, why do you care what the server is doing? It shouldn't be able to do anything nefarious and all the crypto happens locally afaik.
This is why I like Wire in this regard: they have both their front-end and back-end open sourced. I believe their server is AGPL licensed, but it still allows you to run your own instance. Sadly, people are far more likely to use either Signal or Keybase than Wire, which I think is my favorite since it doesn't tie you to a phone number (tip: register from the desktop first), and you can delete all your account information.
I use all the apps I've listed; I like Keybase's UI the most personally, especially the way it handles social proofs, where you need to verify yourself through a crypto key or another device. I wish Signal would just do this, but it's so married to phone numbers that that alone makes it suspicious. I use Signal with my wife, so for us verifying our keys is very easy and simple.
This can be applied to pretty much any security-focused product, or product claiming to be secure/encrypted. If it relies on an unknown number of proprietary/closed-source components, it can't be trusted. Keybase, WhatsApp, Telegram, and iMessage are all good examples of this. If your code can't stand up to public scrutiny it might not be able to withstand attack. I'm not willing to blindly chuck messages into a black box and hope what comes out the other end is exactly what I put into it. (This complaint is not specifically about Keybase; in fact the linked article and a few comments say their client doesn't trust the server at all, which is really cool. Point still stands.)
Keybase looks really, really cool, and I would love to use it, but I can't in good conscience recommend something that is cagey about the half-open nature of their product, especially when the server is by definition "man-in-the-middle"-ing all your data, and if you can't inspect it you can't reasonably trust it.
> ...if you can't inspect it you can't reasonably trust it.
That is the point the article is making, and that's what it sounds like the Keybase client is doing--not trusting the server by relying on client-side encryption rather than storing keys on the server.
The fact that you can change keys and see old messages without the other party sending them to you using your new key points to serious problems with these other apps. It seems the Keybase client rightly considers the channel insecure.
To the Keybase people here: I'm not sure why you can't or won't open/free the software. However, you might be able to meet your goal with shared source, licensing the published source for specific uses such as educational/academic review, security review, and/or building locally from source, while not allowing commercial use or forks of that code. There's precedent for that in security software, like Cryptophone doing it (below) for review. I suggest considering an option like that if open/free source doesn't work, since it still increases confidence in Keybase.
I'm wondering why the article fails to mention that there is a sufficiently good and easy mechanism to compare and verify the new safety numbers. You just talk to your peer and read the numbers - and the peer can verify them.
This will fail when AI software gets really good at imitating voice in real-time during casual talk, but we're not there yet (or - if that is my threat model, I'll find an out of band way to verify)
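For the curious, here's a toy version of why reading numbers aloud works (this is not Signal's actual safety-number algorithm, just the general idea): both sides hash the pair of identity keys, so if an attacker has substituted a key toward either side, the two sides will read out different numbers.

    import hashlib

    def safety_number(key_a: bytes, key_b: bytes) -> str:
        # Sort so both parties compute the same value regardless of direction.
        digest = hashlib.sha256(b"".join(sorted([key_a, key_b]))).digest()
        # Render as groups of digits that are easy to read over a call.
        num = str(int.from_bytes(digest[:12], "big")).zfill(30)
        return " ".join(num[i:i + 5] for i in range(0, 30, 5))

    # If Eve swaps in her own key toward either side, the two sides
    # compute different numbers and the substitution is caught.
    print(safety_number(b"alice-identity-key", b"bob-identity-key"))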
I'm not a security guy, but wouldn't the most seamless approach be to encrypt the key collection with a master password and store the encrypted key collection on the server? So on a new device you'd download the encrypted key collection and then decrypt it locally?
If they forget their password, they can re-upload it from a validated device with a new master password.
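A sketch of what that proposal could look like (the names and parameters are mine, not how Keybase or Signal actually store keys): the key collection is sealed client-side with a key derived from the master password, so the server only ever holds an opaque blob.

    import hashlib, json
    from nacl.secret import SecretBox  # pip install pynacl

    def seal_keystore(keys: dict, password: str, salt: bytes) -> bytes:
        # scrypt makes offline guessing of the master password expensive.
        k = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)
        return SecretBox(k).encrypt(json.dumps(keys).encode())

    def open_keystore(blob: bytes, password: str, salt: bytes) -> dict:
        k = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)
        return json.loads(SecretBox(k).decrypt(blob))

    # Upload `blob` to the server; a new device downloads it and decrypts locally.
    blob = seal_keystore({"identity_key": "hex-encoded-private-key"}, "a long master password", b"per-user-salt")
    print(open_keystore(blob, "a long master password", b"per-user-salt"))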
Biggest issue is a single point of failure for total access to all devices. Get/guess/beat out the master password and it's game over on all connected devices.
Not only that, but this also enables offline attacks on the password. If you can compromise the Keybase server and grab the password-encrypted key collections, you can then attack them at your leisure with whatever computing power you can scrounge up, over whatever time duration you want. And when you break one, as long as any of the included devices are still on the account, you'd have complete access to everything.
Requiring existing devices to be actively involved in provisioning a new device prevents all of this.
So in Keybase, what does device to device provisioning look like? "Hey, you've just set up this device - a message has been sent to all your other devices, OK the message and come back here and you'll be good to go"
Yes, but you can use 2-factor auth to control downloading the keystore, so in order to hack the person's account you need to already have physical possession of one of their devices and the master password (either to access their keystore or to get through 2FA).
And if it's not a 2FA device, the victim can use 2FA to push a disavowal of the old keys and set up new keys. That's an ugly and server-centric solution, but "halp I got hacked" is going to be an ugly case no matter what.
Isn’t this pretty close to how Apple iMessage works? I agree it’s a decent compromise to give encryption to the masses, but it has its downsides if Apple is compelled by a government to manipulate that.
All security is compromise though. And I think Apple has done something amazing with what is provided given their install base.
The goal is "encryption for the masses" and "the masses" aren't going to put up with "wait, I have to jump through hoops to log in on more than one device?"
I would actually like to see some numbers or a survey on how many people actually do this. Anecdotally, I don't think I know a single person who ever verifies safety numbers out of band. Of the approximately 50 people I talk to regularly on Signal (not including large group chats of people I don't know as well), not a single person has ever tried to reverify me, nor have I tried to reverify any of them.
Would be cool to see what the numbers are for reverifications.
At some point, continued conversation is verification enough. The MITM isn't a catfish, and they are going to have a hard time keeping up the charade acting like someone you chat with regularly.
It doesn’t have to be a human in the middle though - if you’re in a position to MITM, it’s much more scalable to put a bot in between that relays messages between one person and the person they intend to communicate with.
And Signal does allow you to mark a safety number as "verified", which I think does put up an extra barrier. I don't think the article makes any mention of this.
I'm curious how you verify them out of band, because that would reveal your threat model and why or what for you take such pains (not arguing that out of band verification is useless or wrong).
The following questions are not intended as attempts to poke a hole on the requirement for out of band verification. Do you meet in person and speak in whispers or show some code that's hidden from the view of others (and cameras)? Or do you use a voice call or SMS (both expressly designed to support surveillance)? Or do you use another end to end encrypted app or email (if yes, how do you verify the keys for that)?
I feel like, unless I'm about to transmit some data that is absolutely crucial, the act of regular conversation through Signal is enough to verify the person. Unless I'm missing something?
Why can't we just have government certificate authorities for the average Joe?
Ultimately, while people may have very little trust for the government in general, the one thing that people do trust the government for is establishing identity. We use government ID papers to establish our right to work, to open bank accounts, to enter legal agreements, and to cross borders. Why should communications be any different? We don't need to trust the government with the content of the communications (and we shouldn't), by not providing the government with the private keys. But why can't I get the government to sign a public key for me?
The issue it raises is whether people will eventually get locked out of society if the government decides to get antagonistic with somebody by revoking their public key and refusing to issue a new one, given a society where such a scheme is popular. But we don't have any sort of such protection today - the government can seize your passport, seize your driving license, freeze your bank accounts. A society in which the government solves identity issues for the digital age is only a net improvement over the status quo.
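A back-of-the-envelope sketch of what such an ID-backed key certification could look like (entirely hypothetical; the format and names are mine): the government CA signs only the binding between an identity and a public key, and never touches the private key or the message contents.

    from nacl.signing import SigningKey  # pip install pynacl

    # The government CA's long-term signing key (its verify key is published widely).
    gov_ca = SigningKey.generate()

    # A citizen generates their own key pair; the CA never sees the private half.
    citizen = SigningKey.generate()
    cert_body = b"name=Alice Example;key=" + citizen.verify_key.encode()

    # The CA attests only to the identity <-> public-key binding.
    cert = gov_ca.sign(cert_body)

    # Anyone can check the binding; a forged cert raises BadSignatureError.
    gov_ca.verify_key.verify(cert)
    print("certificate verifies")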
Certain countries, like Portugal and Estonia, already issue digital certificates stored inside the chip of every citizen's ID card. I believe these cards are made by Gemalto[1].
Not only that, but the government will inevitably start requiring you to use a key pair which they supply. Citizens won't know the difference and won't fight back hard enough. Hello government-sanctioned cryptographic surveillance.
> Similarly, in SSH, if a remote host's key changes, it doesn't "just work," it gets downright belligerent:
Funny enough, I have ranted to friends/coworkers about sysadmins completely replacing machines and not telling anyone. How do I know it happens? BECAUSE OF THIS EXACT WARNING.
If a single party in a chat has their keys reset, that's not a MITM attack, because a MITM would need to rekey to both parties. If the two clients can use at least one uncompromised channel to tell each other that their counterparty's keys have been reset, they might be able to detect a MITM.
Keybase's "Cozy Street" example is not a MITM attack (i.e. one where an attacker has inserted themselves between two parties and can read the plaintext); it's just impersonation.
If it were a real MITM attack, both Alice and Bob would get rekey notifications; unless they both confirm the new keys whenever they get a rekey notification, a MITM attack is possible.
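A toy illustration of that detection logic (purely illustrative, not any app's real code): under a full MITM both endpoints see a key change, whereas a single reset or an impersonation changes the key on only one side.

    def rekey_alerts(alice_sees: str, bob_sees: str, alice_expected: str, bob_expected: str):
        # Each client compares the key it currently sees for its peer
        # against the fingerprint it pinned earlier (TOFU).
        return {
            "alice_alerted": alice_sees != alice_expected,
            "bob_alerted": bob_sees != bob_expected,
        }

    # Impersonation after Bob's "reset": only Alice sees a change.
    print(rekey_alerts("eve_key", "alice_key", "bob_key", "alice_key"))
    # Full MITM: Eve must substitute keys toward both sides, so both get alerts.
    print(rekey_alerts("eve_key_1", "eve_key_2", "bob_key", "alice_key"))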
I also think that crypto is stuck in the early 90s by treating real-world meetups as the only way to authenticate keys. If you know what your chat friend sounds and looks like, are willing to submit a video, and don't think your adversary can fake such a proof, a simple video of you reading your pubkey/safety number is sufficient. Is that scalably practical? No, but it is possible.
That said: keybase is doing cool novel work, I commend them for advancing the state of the art.
Look at the number of non technical end users who will determinedly download .EXE files, or run them from their mail client, and click through all of the Windows 10 "do you really want to run this untrusted software?" warnings in order to successfully install cryptolocker type malware on their computers.
If you give people a way to click "yes/accept/run" and they are determined to accomplish what they think is their intended task, they will just blow through any warnings.
You don't even need to observe the average end user. Just look at software developers aka "technical experts" using npm, NuGet, Maven, and all the other package managers. Digital signatures? Nope, just run the code on your machine, please. Bonus points for allowing code execution in user context to "configure" the package and placing executables in $PATH.
npm is exceptionally opaque about what it will install as dependencies, as the dependency tree can reach tens of thousands of packages very quickly.
Actually, I think it is wrong to call it TOFU, as it simply doesn't require the user to opt in to anything. Instead, it seems more like the thing the XMPP people call 'blind trust before verification' [1]. I am not quite sure if it is exactly the same, as 'blind trust before verification' changes its behavior as soon as you have explicitly verified the keys.
That way everybody gets some form of e2e-encrypted messaging, but if you really care about security you validate your keys and get genuinely trusted e2e encryption.
I've been using Keybase for quite some time now and I absolutely love it. I suggest everyone give it a serious try and check out some of the popular public team chats before finalizing your opinion of it. I'm glad I did.
Keybase's UI is still bad. It prominently highlights the "Let them (the eavesdropper) in" button, and the warning has so much red that it's hard to read the text.
Keybase allows you (nay, encourages you) to set up a "paper key", which is a private key that you store offline (e.g. print it out and stick it in a safe). You can then use this paper key to provision new devices. This way if you lose all of your devices simultaneously (or just have one device to begin with) you don't need to go through an account reset to add a new device.
Note that even with this, any new device added using the paper key still generates its own independent private key. Having per-device private keys is important to be able to revoke devices, and to be able to track which devices were responsible for any given action.
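Roughly how a paper-key scheme can work (a sketch under my own assumptions; the wordlist, word count, and derivation below are illustrative, not Keybase's actual implementation):

    import hashlib, secrets

    # A real scheme uses a large standardized wordlist (thousands of words);
    # this tiny list is only to keep the example short.
    WORDLIST = ["acid", "bacon", "cable", "dance", "eagle", "fable", "gamma", "habit"]

    def new_paper_key(n_words: int = 13) -> str:
        # A high-entropy phrase you print out and store offline.
        return " ".join(secrets.choice(WORDLIST) for _ in range(n_words))

    def provisioning_seed(paper_key: str) -> bytes:
        # Deterministically derive a seed used to sign a new device's own,
        # freshly generated per-device key into the account.
        return hashlib.sha256(paper_key.encode()).digest()

    phrase = new_paper_key()
    print(phrase, provisioning_seed(phrase).hex())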
1. A server runs KBFS which syncs data with the server. No one could stop you from disabling it on startup and running wherever you need it.
2. Any app running without AppArmor/SELinux or outside of a network namespace could get your real address. The latter is relatively easy to set up; I run a VPN by default, and all the apps run inside a namespace that only has the VPN tunnel device.
3. Last time I checked I could make my own package.
4. I don't have time to check it atm.
5. Link for audit results is somewhere in the comments on this post.
6. Works with uBlock on.
7. Some paranoia rant without looking at how KB works: it's push/pull mode, and all messages are stored as files you could sync.
8. Based on previous points.
9. Each connection is under TLS. Plaintext messages could be read via RAM, and if someone could read it, KB would be the last problem.
10. You could make your own.
11. Fixed. # ls -l /keybase would show you the symlink KBFS_NOT_RUNNING pointing to /dev/null, but shows the correct directories under the user running the app.
I like it! They should apply this to the problem of TOFU in HSTS. For many HTTPS sites it's trivial to hijack them by getting a user to switch to a different device, as the devices' apps often don't synchronize HSTS databases.
Key rotation is a hard problem. One idea is to host the public key in a DNS TXT record, for example on yourname.com, which you can also use for your email and blog.
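A quick sketch of fetching such a key (this assumes the third-party dnspython package, and the record name and "ed25519=" format are invented for illustration; without DNSSEC you are still trusting whoever controls DNS responses for the domain):

    import dns.resolver  # pip install dnspython

    def fetch_pubkey(domain: str) -> str:
        # Look up a hypothetical record like:  _pubkey.yourname.com  "ed25519=BASE64KEY"
        answers = dns.resolver.resolve(f"_pubkey.{domain}", "TXT")
        for rdata in answers:
            txt = b"".join(rdata.strings).decode()
            if txt.startswith("ed25519="):
                return txt.removeprefix("ed25519=")
        raise LookupError("no key record found")

    # Would only succeed if such a record actually existed for the domain.
    print(fetch_pubkey("yourname.com"))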
There are a few claims made here which I'd like to see clarified. In the spirit of transparency, let me declare upfront that I use Signal but I am not affiliated with them, and I don't really have a dog in the race here. I'll reference the published NCC security review[1] for this comment. Overall I'm happy to see a published cryptographic review of this protocol.
First, under "The Full Security Picture" heading in this article, it's claimed that forward secrecy is supported via time-based exploding messages. Pages 19 and 20 of the NCC report explain that, "The default chat protocol does not allow for forward secrecy since the same keys can retain indefinitely on a users device." So forward secrecy is not assured by default under this chat protocol, is that correct?
The NCC report goes on to say that, "Exploding messages introduce mechanisms for message deletion and forward secrecy; however, it is not clear to the user that keys and messages could remain on their device beyond the period specified during message creation." I interpret this to mean that there is a way to assure forward secrecy - which is exploding messages - but you're not making that explicit in this announcement. This seems a little disingenuous to me because in your FAQ, the first answer criticizes Whatsapp for compromising forward secrecy using the backup feature, but you don't have forward secrecy enabled by default in your chat protocol.
Likewise, this announcement makes a point of mentioning how other apps require you to trust the server due to resets, and why trusting the server is bad. But page 20 of the NCC report explains that, "While the default Chat encryption protocol does provide for message confidentiality and integrity, it does not provide for security in the face of device and server compromise, as keys and ciphertext are stored for a potentially indefinite period of time." So is it correct to say that unless users specifically enable exploding messages for their conversation (which is not the default), they actually do need to trust the server?
There are also a few drawbacks to the ephemeral messaging scheme NCC found that I think should be explicitly disclosed, because they don't require too much technical detail:
1. There is no deniable authentication on the default chat protocol. While exploding messages provide deniable authentication, this property fails in a group with more than 100 participants. It's fair to question whether that's a realistic place to expect deniable authentication, but it should probably be called out.
2. Exploding messages are based on the local client's system clock. Therefore it's possible for an exploding message to be indefinitely retained on another device by e.g. manipulating the local time.
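A tiny illustration of point 2 (not Keybase's actual code): if the expiry check is purely local, the recipient's clock is the only authority, so rolling it back or freezing it keeps the message readable past the lifetime the sender asked for.

    import time

    def is_expired(sent_at: float, lifetime_s: float) -> bool:
        # The check trusts whatever the local clock says.
        return time.time() > sent_at + lifetime_s

    # A recipient who sets their clock back never reaches the expiry time.
    print(is_expired(sent_at=1_500_000_000.0, lifetime_s=60.0))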
Correct: forward secrecy isn't on by default. We think there's a trade-off here. With forward secrecy, your old messages wouldn't be visible on a new device, and users want that visibility since Slack (and others) make it seem natural. However, you can opt in to forward secrecy on a per-message or per-conversation basis.
The report says "device and server compromise." Decryption keys never leave the user's client. What they mean is if: (1) the server's stored data is compromised; (2) your phone is also compromised; and (3) the messages weren't marked ephemeral; then the attacker might be able to read past messages, even if the user tried to delete them (i.e., did Keybase really delete the ciphertexts?). This line of reasoning is correct and one of the primary motivations for key ratchets. I don't think the report is claiming that users need to trust Keybase's server in general. They do need to trust Keybase to delete messages that are marked deleted, which would mitigate the attack above if conditions 1 through 3 are met.
My issue with Keybase's exploding messages is they're time-based exploding. I wish there was an option to do forward-secrecy messages where the message is visible indefinitely to current devices, but not visible to future devices.
I don't know the moderation policy on title changes at HN, but I just changed the title of the post. Internally at Keybase - and thanks to a conversation with a peer - we've been feeling pretty guilty about calling out a specific project that we think is basically the gold standard outside Keybase.
We'd rather focus on the positive solution to the problem (which Keybase has implemented), rather than just pointing a giant finger at any other services which have the problem we're trying to address. I think I personally will sleep better tonight this way.
In an ideal world, I would love to use a really secure app for communication. However, my choices are limited when most of my friends are on WhatsApp or Signal. That is just the unfortunate reality of the way social networks work.