What I find even more worrying than the issue itself is the reaction. It indicates that the developers lack basic crypto skills and that this service was never reviewed by anyone with crypto knowledge.
"This is not a vulnerability in Tutanota. We have built Tutanota with multiple layers of protection for our users. We currently use TLS and DANE to protect authentication and data integrity and (only tunneled) RSA-OAEP and AES-CBC to provide confidentiality. We have always communicated this transparently, it is nothing new. Neither the confidentiality nor the integrity of our users' data has been at risk.
However, we know that the implementation is not perfect regarding this detail. That is why we are going to implement the following features as soon as possible:
- Signatures/MAC
- 2-factor authentication
- Algorithms resistant to attacks of quantum computers
- Simple verification of downloaded Tutanota apps
Regarding the described issue, we know of two possible attacks on AES-CBC. Neither of them is feasible against Tutanota users:
- Bit flipping: You need access to the plaintext email and you have to be the MITM. Plaintexts are available at the sender and recipient only. We use secure TLS algorithms and DANE to protect against MITM.
- Padding oracle: There is no padding oracle in Tutanota.
Tl;dr
There is no known vulnerability in Tutanota. Security is the heart of Tutanota, and we will fix vulnerabilities immediately."
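The bit-flipping attack the quoted response dismisses is worth making concrete. Below is a minimal sketch in Python: a toy 4-round Feistel cipher stands in for AES (pure stdlib, purely illustrative — the CBC malleability property is identical for any block cipher), and an HMAC shows what the promised "Signatures/MAC" fix buys. All names and the sample message are mine, not Tutanota's code.

```python
import hashlib
import hmac

BLOCK = 16  # bytes per cipher block

def _f(half: bytes, key: bytes, rnd: int) -> bytes:
    # Round function of a toy Feistel network (stand-in for AES).
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:BLOCK // 2]

def _feistel(block: bytes, key: bytes, rounds) -> bytes:
    L, R = block[:BLOCK // 2], block[BLOCK // 2:]
    for i in rounds:
        L, R = R, bytes(a ^ b for a, b in zip(L, _f(R, key, i)))
    return R + L  # final swap makes the same network its own inverse

def enc_block(b, k): return _feistel(b, k, range(4))
def dec_block(b, k): return _feistel(b, k, reversed(range(4)))

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(pt: bytes, key: bytes, iv: bytes) -> bytes:
    ct, prev = [], iv
    for i in range(0, len(pt), BLOCK):
        prev = enc_block(xor(pt[i:i + BLOCK], prev), key)
        ct.append(prev)
    return b"".join(ct)

def cbc_decrypt(ct: bytes, key: bytes, iv: bytes) -> bytes:
    pt, prev = [], iv
    for i in range(0, len(ct), BLOCK):
        blk = ct[i:i + BLOCK]
        pt.append(xor(dec_block(blk, key), prev))
        prev = blk
    return b"".join(pt)

# Attack: flipping bit n of ciphertext block i flips bit n of plaintext
# block i+1 (and garbles block i) -- no key required.
key, iv = b"k" * 16, b"\x00" * 16
msg = b"From: mallory   " + b"PAY ALICE $0100 "   # two 16-byte blocks
ct = bytearray(cbc_encrypt(msg, key, iv))
ct[11] ^= ord("0") ^ ord("9")                     # byte 11 of block 0 ...
tampered = cbc_decrypt(bytes(ct), key, iv)
assert tampered[16:] == b"PAY ALICE $9100 "       # ... edits block 1

# The promised fix: a MAC over IV + ciphertext detects the tampering.
mac_key = b"m" * 16
tag = hmac.new(mac_key, iv + cbc_encrypt(msg, key, iv), "sha256").digest()
forged = hmac.new(mac_key, iv + bytes(ct), "sha256").digest()
assert not hmac.compare_digest(tag, forged)
```

Note the attack does not require breaking TLS per se, only being in a position to modify ciphertext, which is exactly what "you have to be the MITM" concedes.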
After clicking through the link, I have no idea what "Tutanota" is. Am I supposed to know? Hubris (of course everyone knows what Tutanota is)? Proximity (working with it every day, they forgot to give a one-sentence explanation)? Deviousness (the reader says, "What the hell is Tutanota? I'd better click through and find out")?
> Tutanota automatically encrypts all your data on your device. Your emails as well as your contacts stay private. You can easily communicate with any of your friends end-to-end encrypted. Even subject and attachments are encrypted.
It might have been better to submit a different blog post or web page which might have provided more context. Or the submitter could have left a comment about what Tutanota is to provide the extra context.
Note that I don't really mean to single out this submitter, this seems to be a common problem. It's probably also worth pointing out that posts on official company blogs should not assume that readers of any one blog post are regular readers of the blog.
Edit: As an addendum, I should probably point out that I agree with rossjudson's comment to which I am replying.
I upvoted him because I could see the problem immediately. There was a lot said, but little real information for comparing it to other things; I had to dig around the site a bit before I got that far. A number of these new offerings have this problem. Compare this to the product at [1], where I know at a glance exactly what the thing does, with enough specifics to begin mental comparisons and know what its likely strengths/weaknesses are. Anyone doing reviews would benefit if the new stuff were that clear at a glance.
The title originally submitted here was more descriptive before it got changed to the text of the heading from the blog post in the link. Regular readers of the Tutanota blog will presumably already be aware of what it is, by virtue of being regular readers of the Tutanota blog.
Most of them use risky web tech, insecure endpoints, hosting (always risky), or developers/servers at risk of coercion by major nation-states in the surveillance game. All untrustworthy. My main post above outlines what it takes to make a strong assurance argument, and let's just say there are few who can do it. Their time is also expensive.
My temporary solution is to combine endpoint encryption (eg GPG), MyKolab for address/storage, air gaps, and a guard. The MyKolab account gets me Swiss storage with associated legal protection & lack of clever Google-style snooping. I assume the servers are compromised along with messages. To deal with those threats, people send me either GPG messages or otherwise encrypted files. For protection, I can download them to a disposable, hardened PC; send them through a guard or data diode for reading; use a separate computer for writing and signing with a data diode. This is Markus Ottela's architecture for Tinfoil Chat. His diodes with separate PC's are simpler than my guards with KVM-connected PC's. So I recommend it his way these days.
You can swap out MyKolab for any other service for delivery or storage. You just have to make sure they're totally untrusted, incoming messages can't compromise your keys, and keys/secrets can't leak out. Tricky stuff for any of these developers. TFC already does this. I suggest these people modify its latest incarnation to do email (maybe apply GPG), find any other flaws it has, and improve on docs/distribution. Will get more mileage.
Security-oriented Live CD's or virtualization tech can be used for any of these except the links between systems (eg guard, diodes). QubesOS lowers its attack surface by using Xen instead of a full Linux distro, albeit with risk in Dom0 & hardware attacks. That they isolate their firewall and such is a good thing. Linux or FreeBSD, more mature but larger attack surface, should include full usage of any hardening guides, software protections (eg Softbound, Control Pointer Integrity), mandatory access controls (eg SELinux, SMACK), device protection (eg IOMMU or PIO interface), and so on. Whatever the most paranoid people use basically and do this in any applicable parts of QubesOS as well.
You just want these systems hardened from attack as much as possible along with ease of detection and easy recovery. The disposable part means exactly what it says: the Internet-connected computer is the target and filter of the most risky functionality. It will be toast at some point, maybe often. So, use a throwaway device for it.
Do one thing and do it well... I like how simple the interface is and how trivial it is to toggle between sending encrypted emails and non-encrypted.
Great move open-sourcing this. I respect the devs' stated ethos and reasons for doing this. Hopefully this project will benefit from 'Linus' Law' and will get help to address the sec issues noted in the full disclosure.
I would like to be able to integrate this with something like Keybase - but where I hold the private key (which you can do with Keybase, but it is not the default).
An interesting project and seemingly moving in the right direction.
Except it isn't actually sending encrypted shared-key emails; it's just sending links to open the email with a shared password on their website.
I mean, encrypted email is a hard problem because nobody supports it except you, and that means you can never encrypt your emails, but Tutanota's UX of making users view the messages on their site (and the fact that you cannot use it with even GPG-friendly mail clients) kind of sucks. I'd rather just use OTR XMPP for encrypted private communications. Or my Mumble server, which is ironically the best implementation of encrypted chat I can use with other people, because I can use certs or passwords at my discretion.
What bothers me most is that I can't download any e-mail (like normal e-mail clients) to my local machine (on OS X and/or Linux).
So I can't make any backups either, and if the Tutanota servers disappear (for whatever reason, maybe beyond their control, like Lavabit), I no longer have access to any of my e-mails :s
(Same problem with their iOS clients: no connection to the servers = no access to any of your e-mails.)
I think this service proves the lack of value of code review or code release in isolation. They give you the option to save your login on a "private computer", which stores a cookie that will be sent over non-encrypted connections.
Which means that if the user connects to a wifi network that you control, you can trivially inject something which will cause the browser to make an HTTP connection to www.tutanota.com and leak the cookie.
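The leak described above is possible because the "remember me" cookie evidently lacks the `Secure` attribute. A minimal sketch of what the server should emit, using Python's stdlib `http.cookies` (the cookie name and value are hypothetical, not Tutanota's actual cookie):

```python
from http.cookies import SimpleCookie

def session_cookie_header(token: str) -> str:
    """Build a Set-Cookie value the browser will never send over plain
    HTTP (Secure) and never expose to page scripts (HttpOnly)."""
    c = SimpleCookie()
    c["session"] = token              # "session" is a hypothetical name
    c["session"]["secure"] = True     # HTTPS-only: defeats the wifi trick
    c["session"]["httponly"] = True   # invisible to document.cookie
    c["session"]["path"] = "/"
    return c["session"].OutputString()
```

With `Secure` set, the injected `http://www.tutanota.com` request would go out without the cookie attached, so there would be nothing to capture.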
There's more to security than encryption and open source code. #include plug for FastMail - we know what we're doing.
We don't do the end-to-end encryption, because pre-agreeing to a high security password is nearly as much work as setting up PGP - and with PGP you're not trusting that Tutanota are actually running the code that they claim to be running.
Besides which, Tutanota don't actually send an encrypted email, they send a link back to their server where you can read the secure message - which means you're going to need to be online whenever you're reading a tutanota message - with access to their server, and you're going to have to agree on a highly secure password with everyone you correspond with.
I haven't tried unsending an email or revoking a password yet... maybe I'll try revoking the password...
WOAH. OK, so I did this:
Account A == brong@tutanota.com, signed up for testing
Account B == brong@brong.net, my personal email.
I created a shared password "this is bound to work" on account A and sent myself an email to account B. It came with a link that I clicked, which asked for the shared password, and logged me into the tutanota interface as brong@brong.net I guess, then I:
1) deleted the contact from my tutanota account to try to revoke the sent message.
2) clicked the link from brong@brong.net, which took me to the email.
3) replied from the tutanota interface as brong@brong.net.
4) replied from the tutanota interface to THAT email as brong@brong.net. It asked for a new shared password, because I had removed the old one when I deleted the contact.
5) clicked the new link in my brong@brong.net account. I got an error, because my shared password was now wrong. I entered my password, and I could read BOTH the emails, including the one only sent with the old shared password.
At least the old link is invalid, but any new link shows old emails that were sent with a different shared password.
I am left concluding that this is so much snake oil. sigh. I know encrypted email is all the rage these days, but I'm not sure that I would trust a site just because it used the right buzzwords. Two massive security fails in 15 minutes' testing.
You've forgotten to include the plug for FastMail. And maybe you should mention the fact that you work for FastMail (it's not that you're hiding it, it's in your profile, but it's nice to mention in the text if you're working for a competitor).
Personally, I steer clear of any hosted e-mail services. I don't care if their backend is open source or not. RMS explains all the problems with SaaS in his essay "Who does that server really serve?".
It's sad that the current selection of open source e-mail clients is not that great, especially for less technically inclined people.
I figured the "we" after FastMail said that I work here. Quite a lot of our backend source is open too (particularly the Cyrus IMAP server, which makes up the bulk of my work now that I have people with a more dedicated ops role for day-to-day tasks).
We encrypt everything to disk, and everything on the wire that is practical (connecting to other providers still falls back to plaintext if they don't support STARTTLS, because encrypted-only isn't practical yet).
But client connections are ONLY secured now, we don't allow any plaintext channels where you could accidentally send your password.
So you're stuck trusting us, but only us. The only sane alternative that I can see is to run your own server, on your own hardware, preferably hosted inside your own home for maximum legal protection. Of course, unless you really know your stuff then your data could well be at greater risk from both legal and illegal intercept.
(and that's nice if you're providing it just for yourself - as soon as it's for anyone else, even just family, you become on-call tech support)
> The only sane alternative that I can see is to run your own server, on your own hardware, preferably hosted inside your own home for maximum legal protection. Of course, unless you really know your stuff then your data could well be at greater risk from both legal and illegal intercept.
This is what I do. At home I have a Chromebox with FreeBSD and a fully encrypted disk. I have a VPS with an OpenVPN server and the required ports are forwarded to my own box. IMAP and SMTP submission require TLS so those are fully covered. Like you said though, the only thing you can't reasonably forcibly encrypt is SMTP itself. Most of the mail I receive comes with STARTTLS but not all.
With this setup the VPS provider can't see anything when SMTP happens with STARTTLS. Obviously, if they really want to read my mail they can start MITM'ing the STARTTLS away, because it isn't forced, but this is the best setup that's reasonable.
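That opportunistic pattern — upgrade when the peer advertises STARTTLS, otherwise fall back to plaintext — looks roughly like this with Python's `smtplib`. This is a sketch of the general pattern, not anyone's actual mail setup; the `require_tls` flag is my own addition:

```python
import smtplib

def deliver(server: smtplib.SMTP, msg, require_tls: bool = False) -> None:
    """Send msg, upgrading to TLS whenever the peer offers STARTTLS.

    With require_tls=False this is opportunistic encryption: better
    than nothing, but a MITM can strip the STARTTLS capability from
    the EHLO response, exactly as described above.
    """
    server.ehlo()
    if server.has_extn("starttls"):
        server.starttls()
        server.ehlo()  # capabilities must be re-read after the upgrade
    elif require_tls:
        raise RuntimeError("peer does not offer STARTTLS; refusing plaintext")
    server.send_message(msg)
```

The forced variant (`require_tls=True`) is what "you can't reasonably forcibly encrypt SMTP" refers to: turn it on and mail to non-STARTTLS peers simply stops flowing.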
My ISP for my home can only see encrypted OpenVPN traffic too. In fact the VPS is in another country but that's only a consequence of the silly VPS prices in my country.
Obviously with this setup I don't have to surrender my private key to anyone either, it sits on my own box (and I use a legitimate CA-issued certificate).
Unfortunately no, I haven't really gotten into that stuff. It doesn't take that much time though if you have a basic knowledge, it took me like a night to set it up.
One thing that bothers me about these "encrypted webmail"-services, is that they all depend on TLS for whatever thin sliver of security they provide. Then they go and use something that's not S/MIME and/or x509 for end-to-end (or whatever kind of) encryption/authentication.
At least leaning on PGP makes sense, because it is already somewhat deployed and in use.
But since they all fall apart if TLS has a hole, it seems odd to add another layer. The complexity of any other solution for encryption/authentication must surely outweigh the benefits of OurCleverCryptoSystem(tm)?
I'm not aware of any advances that have changed the possibilities of asynchronous secure messaging: you can't have PFS, key distribution is hard.
At least with x509/gnupg you can partner with someone like Yubikey, and at least pretend to lower the UX friction and increase the real-world security of the system.
You're right about being clear about working there; I'm sorry if I sounded a bit harsh. Thank you for your analysis, BTW. I think it's always useful to provide constructive review to fellow developers.
I agree with you that the only alternative is self-hosting. That's why I'm anticipating a new wave of low-power mini-computers with solid state memory, a fully configured embedded GNU/Linux or *BSD distro, and a web-based interface for management, which would make it possible to host our own services at home using OSS only. The FreedomBox is an example of this.
I'm not a Fastmail fan, but it's sad that what appears to be the only substantive technical commentary about Tutanota on this thread has been voted down by people who can't see past tit-for-tat between email services. Tutanota exists to provide security, and appears to do so poorly. That seems like one of the more relevant things to discuss.
Curious: is that just a personal preference? I still use Google Apps for almost everything, but have a couple domains on Fastmail and have been extremely happy with their service. I have often considered moving more of my business there, and would greatly appreciate a substantive warning if it's warranted.
We don't allow a way to send encrypted email directly, because the intersection between people who wanted it and the people who would trust us with their private keys was too small to be worth it. Instead, we provide standard protocols (IMAP/POP/SMTP) so you can run the crypto software on your own computer and submit and receive through us. This gives you full encryption support.
We're probably similar in security to gmail - they're very good as well - most of the big services are. The main comparison at that level is the unpatched server running in the basement of a business somewhere - or indeed the "roll your own" where it gets updated once every few months if the person running it remembers and isn't on holiday at the time... it's nice having a team and always someone on duty for security.
Ok - I initially misunderstood that you were saying FastMail does the same thing as Tutanota. When I reread your comment I see that you aren't saying that - just that you secure transit with TLS, etc.
> #include plug for FastMail - we know what we're doing.
At the risk of sounding too negative, er... well, do you?
I'm a paid FastMail user right now, and after first signing up a couple years ago, I filed my first and only bug against FastMail last month about the inability to use spaces in passwords. (I feel like it should go without saying, but it's painful to have to use symbols instead of spaces on mobile, and even a bit jarring at a real keyboard, and it makes password input prone to typos.)
What I got in response was some handwaving about the problem that amounted to a "REQUEST DENIED". (In truth, I did find that a bit frustrating. The free email service I also use that's notorious for offering no support finds spaces in passwords to be perfectly acceptable, but the one I have a subscription for won't let me? The one whose benefits are frequently touted as including, "Believe us, it's totally worth it. Look how you can talk to a real human being." If the choices are not being able to talk to a human being but not needing to, and being able to talk to one who doesn't accept that there's a problem, much less provide a solution for it, then the former pretty clearly wins out.) But the frustration from that ends up amounting to a minor one wrt the digression that the developer who was responding went on to write:
> Probably later this year we plan to require client specific passwords for all external software. When we do that, we won't allow people to use their login password for IMAP/POP/SMTP/etc clients, you'll have to use a generated one. At that point, the only login place for your password will be the web browser
Okay, so here's how my security setup works now:
Create a very secure password, and then... just use it. Every time. I.e., do not ask Thunderbird to save it, and do not set up a client to receive messages on mobile.
In the proposed new scheme, users will be forced to choose between memorizing whatever unmemorable thing the generator spews out for a non-web client, or entering it one time and setting up their clients to save it. Which is no choice at all; the latter is effectively the only one available (see "unmemorable"). What this all means is that your pursuit of making account access more secure actually ends up demanding that it be less so.
The end result is that at some point "later this year", I'm going to have to take the same approach as noinsight and run my own mailserver[1], point my domain away from FastMail, and hope that my request for a refund will be granted due to the conditions changing during the middle of my subscription.
So we have an FAQ link which amounts to "too many clients have bugs around spaces in passwords". I'm not as certain that this is true as it was when we instituted that rule, but it definitely was at the time.
As for the autogenerated password thing: you might have followed that Google are making it more and more difficult to "just use your very secure password everywhere" as well, because they find that it leads to a higher account compromise rate than per-device passwords or OAuth. We also see a higher account compromise rate than we would like, and so we have to design our processes to be robust against human error and phishing.
What we might do for the zero point something percent of users like you is offer a way to re-enable password-based login for clients (just like Google do) after you read a warning pointing out the security downsides of not having selective revokability and guaranteed non-password-reuse of device passwords.
I can appreciate the problem of people using clients that are broken, but why does this matter here? Thunderbird is not among them. So why are we discussing whether a customer would be able to successfully use their client to access the account of someone else who has spaces in their password? (Also, could you share the data you have about those clients?)
> You might have followed that Google are making it more and more difficult to "just use your very secure password everywhere"
I haven't followed this. Do you have a link? But again I ask: why does this matter? What do Google's actions have to do with using either FastMail or running your own mail server?
> What we might do for the zero point something percent of users like you
... uh?
> is offer a way to re-enable password based login for clients (just like Google do) after you read a warning pointing out the security downsides of not having selective revokability
Again, this is a way of responding to something that is completely orthogonal to the problem. Having control over the passphrase has nothing at all to do with whether you can or cannot issue multiple ones for use on different clients so that you can maintain revocability. This isn't a complaint from me, because this is never going to come into play for me given the way I'm accessing things, but it's so weird to continue hearing approaches like this that are, with respect to the thing being discussed, just... sideways.
> and guaranteed non-password-reuse of device passwords.
Google are doing it because the risk landscape has shifted from people signing up fraudulent accounts to people stealing existing accounts in good standing and using them to spam. You can rate-restrict new free-trial accounts, but it's harder to rate-limit long-standing good accounts without annoying legitimate users - and once their account is stolen, a fair bit of spam can get out before reports come back or we can block the account.
The zero point something percent comment - most users aren't at your level of proficiency - and we do have to play the percentages here. If 10% of our users get phished and their accounts used for spam, you can't send email reliably through us any more because we'll be on every blocklist in existence.
The vector for accounts being stolen is almost never weak passwords - it's phishing or viruses or password reuse. We just don't see people enumerating passwords. You flat out don't need a super strong password, it makes no difference beyond not using one of the top 1000 most common passwords (unless our entire password DB gets stolen, but that's a different class of risk - whole system vs individual)
Well, we can't guarantee that you don't go ahead and use it on another service of course, but by generating the device access token ourselves, we can be sure that you aren't reusing a password that you are using somewhere else.
You can already do this yourself, we support alternative logins, including one-time passwords - but as I said, it's about the percentages, we need to make it easier for average people so that more accounts are more secure.
Carussell: have you ever worked in first-line support?
To paraphrase a friend: "If I never hear the phrase 'Why isn't my email working? I have an iPhone...' again, I'll die happy."
I can appreciate that it's nice to be able to use space as a character, any reason you can't just substitute "." or "," on mobile (in terms of UX, obviously)?
Btw, I have no affiliation with fastmail.
> and guaranteed non-password-reuse of device passwords.
This should be easy to guarantee at creation-time:
For user A, with devices a..x:
  When generating a new device password p:
    Check p against all historical device passwords for devices a..x
    If no match, use p
    Else generate p' and try again
When a valid (non-reuse) password has been found, it can be stored in non-reversible form (salt+hash, optional stretching).
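In Python, the sketch above might look like the following. All names here are mine — a hypothetical illustration using stdlib `secrets` for generation and PBKDF2 as the salted, stretched storage form; it says nothing about how FastMail actually implements this:

```python
import hashlib
import os
import secrets

def _digest(password: str, salt: bytes) -> bytes:
    # Salted + stretched: only this non-reversible form is ever stored.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class DevicePasswords:
    """Per-device passwords with non-reuse and selective revocation."""

    def __init__(self) -> None:
        self._salt = os.urandom(16)
        self._active: dict[str, bytes] = {}   # device label -> digest
        self._seen: set[bytes] = set()        # every digest ever issued

    def issue(self, device: str) -> str:
        while True:
            candidate = secrets.token_urlsafe(16)   # ~128 bits of entropy
            d = _digest(candidate, self._salt)
            if d not in self._seen:   # a repeat is astronomically unlikely,
                self._seen.add(d)     # but the historical check costs nothing
                self._active[device] = d
                return candidate

    def revoke(self, device: str) -> None:
        self._active.pop(device, None)   # selective revocability

    def verify(self, password: str) -> bool:
        return _digest(password, self._salt) in self._active.values()
```

Note that per-device records (rather than one pooled set) are what make the selective-revocation property mentioned upthread possible: losing the phone kills only the phone's password.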
I too would like to know what FastMail actually do, though.
Your service, on the other hand, is operated in one of the Five Eyes countries by its own citizens. That's risky to many. Further, it's a web service (easier to attack) with a ToS that immunizes you against just about any negative event. I did like the privacy policy, but the above things add up to plenty of risk that some competing solutions don't have.
Personally, I'd like to see a thorough look at existing and new providers to do a point-by-point analysis of such strengths and weaknesses of each. It's work worth a government grant or something.
And Snowden leaks explicitly endorse one and implicitly endorse the other. All I needed for confirmation of what I should use. Anything wanting to be better should build on the good properties of PGP while not possessing the drawbacks.
Highly secure messaging, email, and Internet services have a long history in the military and defense sector, with the issues well understood. I mention here [1] the framework I used in high assurance security engineering. The system must be built using the strongest engineering techniques with the right requirements. It must run on an endpoint built with specialist security engineering techniques, resistant to talented hackers. The protocols, parsers, networking stacks, and so on must be carefully implemented to prevent problems. Modern attackers are hitting various firmware, too, so protection is needed at the device level. Then, we must be sure the software displayed to us for all this is what's actually running, on non-subverted hardware, and with non-malicious insiders.
The whole thing is beyond tricky, to the point that no hosted service is rated for high security in any honest way (ie outside hand-waving arguments). The only proven model has standalone apps (eg PGP, Nexor Sentinel) acting as proxies between trusted mail/messaging apps and the untrusted side. Ideally, user-controlled, vetted code handles secrets, with the untrusted side (eg an Internet host) simply a transport or storage layer that has no influence on the endpoint or on security beyond availability. The trusted software must also run on strong endpoints that don't run any other risky software. Given the target market, that disqualifies most users of email and messaging software in general.
So, about this one. It seems to not meet many of these requirements and its users don't either. That puts it in Low-Medium assurance category where it might still be helpful against regular black hats, snoops, and attackers without 0-days in what their users have. That will necessarily require decent design & implementation. I commend them on having it pen-tested & open-sourced for review to that effect.
Meanwhile, users wanting to increase resistance to High Strength Attackers should use air gapped, hardened NIX boxes with GPG or Markus Ottela's Tinfoil Chat. Snowden leaks showed using GPG correctly, esp with Tor correctly, gave NSA hell. Markus has also improved TFC many times in response to our critiques to the point that many attack vectors are impossible, risk is lower in others, and endpoint risks are possibly lower than all solutions if right hardware is used. Still work to be done but he's way ahead of the competition.
Note: I second rossjudson that the site, although it has beautiful artwork, should be redesigned so it's clear what the app does without a lot of digging. I've seen competing apps that were clear on the specifics upfront while still not drowning readers in technical detail. The technical detail was a link or so away if I needed it. Right now, it looks too much like a marketing team's work.
"When the serial cable used to transmit information between two computers is enforced with an RS-232 data diode to function in unidirectional fashion, exfiltration of encryption and signing keys without physical access becomes impossible."
Right. Or use a smart-card. I'm not sure it's more sane to trust a typical PC not to have a hw backdoor (eg: Intel management CPU with WLAN access -- it does need to be enabled in the BIOS, at least according to Intel) -- rather than trust a smart-card (the idea is to send data to the card and get signed/encrypted data back; keys never leave the card).
The modules look huge though. Plenty of room for someone to slip in a listening device with burst-capable transmission. I don't think the actual security is much higher than just using a smartcard?
The smart-card vendors might be working with any number of people. You can't be sure and subversion is at an all time high. The reason he used our data diode recommendation was you can use arbitrary hardware for the send, receive, and network nodes. Receiver can't leak anything outward even if hit by malware. Sender can't receive attacks from Internet. And network can be toast without ill effect. This can be verified by looking at the wires you modified rather than giving ChipWorks millions to R.E. it. ;)
Later we told him OTP wasn't going to get takeup. I described my cascading cipher. That led to his multiple encryption etc version. I told him high assurance crypto NSA uses defeats covert channels with (a) fixed sized transmissions, (b) fixed rate transmissions and (c) not letting errors have a visible effect on that. Goes way back. He changed it to do that.
So, he was clever with the design and has been responsive to updates. Those are just a few I remember. He used Python because it's easy to read. He only has so much time for the project. Other stuff I suggested included converting it to a language like C, Ada, or Pascal for control over memory & extra visibility. Also, using the Dresden Nizza architecture (or MILS architecture) on transport and sending stack to further enforce isolation and secure decomposition. More work to be done but it's a nice executable specification of something that can give NSA hell with low end equipment.
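The fixed-size/fixed-rate idea mentioned above is simple to sketch: every frame on the wire is the same length, and an idle sender keeps emitting all-padding dummies on the same schedule, so an observer learns nothing from frame size or timing. This is a toy illustration of the principle, not TFC's actual framing:

```python
import struct

FRAME = 256  # every transmission is exactly this many bytes

def pad_frame(payload: bytes) -> bytes:
    """Length-prefix and zero-pad so real and dummy frames are
    byte-for-byte the same size; message length leaks nothing."""
    if len(payload) > FRAME - 2:
        raise ValueError("payload too large; split it across frames")
    return (struct.pack(">H", len(payload)) + payload
            + b"\x00" * (FRAME - 2 - len(payload)))

def unpad_frame(frame: bytes) -> bytes:
    (n,) = struct.unpack(">H", frame[:2])
    return frame[2:2 + n]

# Sent at the fixed rate whenever there is no real traffic:
DUMMY = pad_frame(b"")
```

Property (c) above — errors must have no visible effect — means the receiver also swallows malformed frames silently, so failures can't modulate the observable schedule either.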
Far as email, that must be a side project he started as a result of some of us suggesting he port GPG or something to the architecture. I'm not familiar with it. I only endorse main chat architecture with encouragement that implementation keeps improving. :)
Oh, don't get me wrong. I think the project is interesting, and it's obvious open hw is the way to go - especially if the design can be kept both simple and useful.
I don't really see anything wrong with having the cable be shorter (ie: an open platform smart card).
Let's just say that from a practical standpoint, I'd be much more interested in getting "most" people to use S/MIME and/or GPG with keys and encryption on a dedicated device -- no matter if that device is a re-purposed Android without a baseband chip, running some open Linux-based OS (a full distro or something like Replicant) -- or a smart card, or some kind of dedicated open hardware.
The cascading idea is interesting, but probably more useful in a more adversarial scenario than most people need. Good for running drugs, or a revolution (or insurgency) though ;-)
On a serious note, I do see some real overlap between this military-grade approach and "normal" use-cases, especially for people who find themselves at odds with their government. Be it the FBI targeting #occupy in Zuccotti Park, the German intelligence services/NSA spying on elected officials in Germany -- or people opposed to current policies in China, or advocating gay rights in Russia.
We live in a time where there's enough oppression to go around :-(
The cable is just what he had at his house: a smaller one is better. Not sure why you keep bringing up smartcards as alternative to long cable: two totally different things. Only realistic alternative to his cable w/out extra risk is point-to-point IR transfer: cheap & hard to grab if nearby (unlike Bluetooth/Wifi).
It would be nice to get more people on GPG or Linux. It's what I use, too. The problem is their Trusted Computing Base [1] is ridiculously huge. Even amateurs regularly break NIX systems, browsers, and so on. The methods for designing things highly resistant to attack are out of reach for most projects. See my framework [2], for example, to see the gap between high quality coding and truly secure systems. Most developers, even many security "pro's," have no idea about so many of these things. Just ask anyone building one of these "NSA-proof" crypto tools to see their Covert Channel Analysis with breakdown of all residual storage, timing, and resource exhaustion channels. Observe the blank or confused stares.
So, Markus was trying to shortcut around that using concepts he learned and we discussed. It had to be provably immune to software attack despite weaknesses in components. GPG on Linux/Android means GPG or Linux/Android must be breached. Although GPG is solid, Linux/Android breaches abound because they're insecure crap. So, that won't work against even a good black hat. He couldn't build a High Assurance Guard [3] by himself, so he had to eliminate almost the whole TCB. TFC was a clever workaround. TFC and Linux/Android-based clients have no comparison given only one can make a strong security argument under all conditions of software attack and the others just have so many real-world attacks... Apples to oranges, my friend.
For cascading, it might be overkill and might not. Note that many real-world algorithms working in isolation have had problems. Cryptophone used an AES + Twofish cascade as insurance. The idea is that one algorithm or trick failing won't compromise the system. It applies to regular crypto users as much as anyone else, given we use the same algorithms. I agree it might not be necessary for the majority of people, though. They use Gmail or Facebook over HTTPS for their critical communications. Clearly different privacy needs, there. ;)
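The cascade structure itself is simple enough to sketch: C = E_k2(E_k1(M)) with two independently generated keys, so recovering one layer's key still leaves the other layer standing. The toy below uses SHA-256-based keystreams standing in for real ciphers (Cryptophone's actual cascade used AES and Twofish; these stand-ins exist only so the example runs with nothing but the Python standard library -- don't use them for real crypto):

```python
# Toy cascade encryption: two independent layers, so breaking one
# key alone does not reveal the plaintext.
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # SHA-256 in counter mode as a stand-in stream cipher;
    # XOR means the same function encrypts and decrypts.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def cascade_encrypt(k1: bytes, k2: bytes, msg: bytes) -> bytes:
    # Outer layer wraps the inner layer's ciphertext.
    return keystream_xor(k2, keystream_xor(k1, msg))

def cascade_decrypt(k1: bytes, k2: bytes, ct: bytes) -> bytes:
    # Peel the layers off in reverse order.
    return keystream_xor(k1, keystream_xor(k2, ct))
```

The insurance property: an attacker who strips only the outer layer is left holding the inner ciphertext, not the message.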
I agree on the overlap: it's why I push the strong stuff. :) The reason, though, is that all the bad guys aim for the same thing. Plus, the nation-state methods (esp. firmware attacks) are starting to be used by non-nation-states. The methods for stopping software attacks by nation-states are the same as for anyone else. It's why our systems need to be redone (example [4]) to simply enforce fundamental protections to stop all that shit. And meanwhile, we have to use GPG, TFC, air gaps, and other kludgy solutions to make up for how bad we have it.
> The cable is just what he had at his house: a smaller one is better. Not sure why you keep bringing up smartcards as alternative to long cable: two totally different things.
True, a smart card doesn't give you separate input/output terminals. But it can isolate the key and the encryption step. It can be quite secure in the case of theft or off-line access.
[ed: I wasn't talking about just the cable, I was thinking about all the devices soldered into it. They look big enough for there to be a possibility to embed a listening device. That is, if you're a direct target of something like the Egyptian Secret Police etc]
Certainly this system has different and stronger security properties -- but also usability issues (even if you could probably sandwich most of it into a single laptop case -- it would be interesting to have two screens side-by-side for input and output).
Do you know anything about throughput for this? Would it be viable for high-quality video chat?
> TFC and Linux/Android-based clients have no comparison given only one can make a strong security argument under all conditions of software attack and the others just have so many real-world attacks... Apples to oranges, my friend.
I meant an Android hw device, similar to running a stripped-down OS on PC hardware. Sort of as a replacement for the hw in the terminals used here. I didn't mean a full Android software stack. Preferably a system without baseband, networking etc.
I'm not sure about your use of "NIX" here. Is this a combined hardware/software platform? Google wasn't very helpful.
It is of course true that if you can compromise the keyboard, display driver, kernel, i/o for gpg etc -- you can actively compromise the system.
As far as I know, typical Linux/bsd installs are not vulnerable to compromise either via a usb stick or via tcp/ip (assuming updates are disabled). So it would seem that using a dedicated (mostly) air-gapped laptop would practically be as secure. In such a case, keeping keys/crypto on an open-hw smartcard might be a prudent extra step that would add a little more security against certain threats.
> For cascading, it might be overkill and might not.
As long as one can show that cascading doesn't weaken the system (eg: perhaps a construction opened up some kind of oracle, along the lines of compression+encryption, perhaps key derivation would leak information on a master secret if one uses related keys) -- I don't see much of a reason not to.
On the other hand, if you double the number of crypto systems in use, you double the number of bugs. Of course, it might be that the attacker can't attack bugs in the inner systems easily, so perhaps by layering you get to choose which systems are most easily exploited...
Either way, I think both an air-gapped computer+smartcard and this system would be secure enough, that if you are a target, someone might want to try and sell you special, compromised hardware. It might not compromise the system as such, but even just a microphone+transmitter in any one of the components might be enough to pick up sounds of typing, and be able to infer plaintext. Not sure what the easiest way to read the screen would be, but probably some kind of signal leakage from the gpu/cable/screen.
The smartcard can isolate the key. The problem is that they don't need it if they compromise the PC. It's pointless if the goal is to protect the current communication. Far as implants, they're possible with anything. Rule is that enemy can never physically possess your stuff or it's considered compromised.
"but also usability issues (even if you could probably sandwich most of it into a single laptop case. Would be interesting to have two screens side-by-side for input and output)."
You should've seen my old VOIP design: a briefcase of cables, boards, and shit lol. Yeah, it will take up more space and have a learning curve. Any strong solution usually does, though. Be skeptical of anything claiming high usability and high security. ;)
"Sort of as a replacement for the hw in the terminals used here. I didn't mean a full Android software stack. Preferably a system without baseband, networking etc."
If you keep the three nodes, then you can certainly use Android devices. If you condense them, then you lose the protection due to all the added attack surface. One drawback with Android is that most devices have embedded wireless hardware. Tiny risk, maybe, but hard to tell for sure if you've disabled it. Android on a device w/out a wireless chip is fine.
"I'm not sure about your use of "NIX" here. Is this a combined hardware/software platform? Google wasn't very helpful."
UNIX or UNIX-like systems. Many of us called them NIX's for short in the old days. Their complexity and security track record make them untrustworthy for defending against strong attackers. They're a last resort you use while still monitoring for compromise. Unfortunately for that crowd, the systems with high security are all proprietary (often defense-only) and similar open-source systems have less assurance and usability. They're all alpha stage, actually.
"So it would seem that using a dedicated (mostly) air-gapped laptop would practically be as secure."
It's lower risk than most things. It's why most of us use that strategy. Your risk is being hit in the kernel stacks, the drivers, or peripheral firmware. If data goes back and forth, then the risk goes up. To be clear, this is a targeted attack by professionals that know what they're doing. Average hacker doesn't do this.
re cascade
Nobody has shown evidence of this in years, past the meet-in-the-middle attack. So, it seems fine as long as I avoid that. Far as adding risk, it's unlikely given this is merely a basic algorithm application. If you said protocol engines, I'd totally agree. With algorithms, though, you can usually get three right if you can get one right. Still want a specialist coding them, though.
re other stuff
The system in question must be evaluated by security pro's before we can trust it. Meanwhile, GPG + an air-gapped machine is your best bet outside TFC. As for hardware subversion, they might do anything, so acquire your hardware in different, unpredictable places or have others order it for you. Far as the screen, the monitor cable is the best place for a leak. I proposed long ago a shielded cable that works except it amplifies the signal along an unused frequency. Later on, the TAO catalog leaked and there's a VGA cable modified to do exactly that. So, there you go.... ;)
Thank you. I usually use/see *nix. Arguably Android sans Google apps and over-the-air updates fits into that box.
The idea of three nodes, trivially separated and air-gapped is interesting.
One should be able to do the input with an Arduino or something (most obvious choice: a keyboard, but one could also tack on a mic/camera for audio/video).
Link that with a "one-way" cable to an rpi2 (the "compromised"/networked node), and a cheap Android tablet w/o baseband/gsm chips -- and perhaps solder off the antennas/kill the wlan/bluetooth. Preferably one w/o NFC. Use the tablet as the screen, and the "out" node.
Use a lobotomized USB cable for power from the Android device's battery, and run everything off that.
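One wrinkle with the one-way cable in a setup like this: the receiving node can never ACK or request a retransmit, so the sender has to emit self-delimiting frames with a checksum (and, in practice, repeat each frame a few times). A minimal framing sketch; the magic bytes and layout here are made up for illustration, not taken from TFC:

```python
# Hypothetical framing for a one-way serial link: each frame carries a
# magic marker, a payload length, and a CRC-32 so the receiver can
# detect corruption without any back-channel.
import struct
import zlib

MAGIC = b"\xaa\x55"  # illustrative sync marker

def frame(payload: bytes) -> bytes:
    # Header: 2-byte length (big-endian) + 4-byte CRC-32 of the payload.
    header = struct.pack(">HI", len(payload), zlib.crc32(payload))
    return MAGIC + header + payload

def deframe(data: bytes) -> bytes:
    # Validate marker, length, and checksum; raise on any mismatch,
    # since the receiver can't ask for a resend.
    if data[:2] != MAGIC:
        raise ValueError("bad magic")
    length, crc = struct.unpack(">HI", data[2:8])
    payload = data[8:8 + length]
    if len(payload) != length or zlib.crc32(payload) != crc:
        raise ValueError("corrupt frame")
    return payload
```

With something like this, a flipped bit on the wire shows up as a rejected frame instead of silently garbled plaintext; repetition of frames is the crude substitute for retransmission.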
I do like the idea of having the separation be obvious and simple -- easy to audit.
Suppose one might as well run FreeDOS on the two nodes -- but Linux/BSD is probably less painful.
Now you're thinking on the right lines! All of that should be fine. Didn't think about using USB for power: just had a strip in the design. Will have to think on it. Standardizing on Linux/BSD is wise, too, as it lets us easily adapt it to new software applications.
And, in case I forgot, you can modify this architecture for voice or video but will need to replace the serial cable with a higher-bandwidth line. Risk starts to go up there. You either need a real data diode or must physically modify Ethernet/fiber cables and/or cards to do one-way transmission. Might take a custom microcontroller board to be sure it's done right.
It's a bigger project to say the least. There's examples online but the security is debatable. That's why the defense sector builds and certifies the big guns [1]. That it takes them that much hardware & they mention TEMPEST hints at how much work goes into this one, tiny problem.
From what I could tell, this is only the Web front-end client, not the backend server stack. Therefore, it is not possible to run a full Tutanota server using the open-sourced code. I would love to be wrong about this, but I don't think I am.
Physical location security
Our main servers are located at New York Internet (NYI) in New York City, USA. Their facility is a high security, video monitored location; with backup power, air conditioning, fire systems, 24x7x365 monitoring, and onsite technical support.
I am familiar with NYI - they are a good datacenter - but I do not think they are in any way more or less secure than the Equinixes, Internaps, Telxs, etc.
The security of fastmail is really the security of end to end TLS with forward security. All good practices but industry standard, no?
"main" servers. Most services are mastered at NYI, with replicas at other sites (currently Amsterdam, soon LA as well). Soon some services will have the option of being mastered elsewhere. Maybe I should reword that help doc a little bit.
Our security is about as good as is practicable. It might not be the absolute best but I'd wager it's better than most, especially when balanced with the usability and reliability guarantees we make. Obviously encasing your server in concrete and dropping it into the ocean is more secure, but that doesn't give you much of a service.
I don't think I can say NYI do better than every other datacentre, because I haven't used every other datacentre, but they certainly seem to be far above most other players. It also helps that we've worked with them for years and know most of the key staff personally.
We don't currently encrypt traffic between our servers within the same datacentre. We own all our servers and network equipment, so there's no inter-server traffic leaving our own equipment. Of course it's possible for some kind of network tap device to be installed, but at that point the attacker already has physical access to our servers, so we've already lost. This point was addressed in the first blog post alfiedotwtf linked to.
We do encrypt between datacentres, of course.
So to your final question, what differentiates us from other services, it's hard to say exactly because I don't know which other services you're talking about. Our general approach is to use the best tools and techniques available, and to understand everything we use so we know what compromises we're making and what our attack surface looks like. Our ops staff know this stuff well, respond quickly (eg we patched Heartbleed before start-of-business in the US, when most of the mainstream media hadn't picked it up yet), we talk very openly about what we do and how we do it, and we offer a generous security bounty to anyone that finds an exploit.
If you think we could be doing more, let me know! I'm happy to be contacted directly (robn@fastmail.com or @robn on twitter) or you can open a support ticket or ping @fastmailfm.
Not directly, though we collaborate with Americans on open source projects - one of the main contributors to Cyrus is based at CMU (which is where it came from in the first place) and of course we run plenty of other software developed by people all over the place!
I don't think Numberwang is interested in our developer's nationalities because of any patriotism/racism. I think the question was more about if any of us can be compelled to compromise our user's privacy by either the PATRIOT Act (no), National Security Letters (no), or some other insane US law.
The more modern one - it just looks quite dated. Don't get me wrong - I'm glad I'm using Fastmail but there are certainly areas that need improving such as the calendaring / contacts management / load times. I know that the backend is really solid - the UI just needs a bit of a spruce up in my opinion.