This analysis leaves out the fact that Pavel Durov is, with Telegram, in approximately the same position Ladar Levison was with Lavabit. Unlike Meredith Whitaker, Durov actually is in a position to furnish documents to the French government, where he has citizenship. He's in that position because he has repeatedly made deliberate product decisions, to the bafflement of cryptographers around the world, to keep himself in that position.
If you literally have plaintext documents responsive to criminal inquiries in a jurisdiction you are subject to, we don't reach the "internet censorship wars". You're in a place not dissimilar to a 1970s telephone company; the "random people can't simply declare themselves above the law wars". Don't be in that place. Encrypt end-to-end.
Indeed. Two of the more common questions I get with Tarsnap are:
Q. How do I know that Tarsnap is secure?
A. Read the source code.
Q. Ok, but you're really smart, what's stopping you from putting in a backdoor and hiding it really well?
A. I don't want to get tortured, and ensuring that I can't decrypt your data protects *me*.
IMHO the second answer does not hold water. If you end up in a situation where you are tortured, they will torture you until you say how to add the backdoor.
His point is that he can't backdoor it: you can read the code before you install it. I'd go further, and say that this is true of anything end-to-end encrypted, open-source or not, because it's not 2002 anymore and reversing ordinary client software is table stakes. (I'd still rather run something open source, ceteris paribus).
Feeding the paranoia above is that cperciva would verifiably be the smartest person in the room. A canny torturer would respond to this by bringing in djb as the primary instrument of torture. "First one to break or weaken scrypt or 8-round Salsa20 gains their freedom." The loser is forced to give talks at AWS marketing conferences for the rest of their natural life.
Not being able to "backdoor it" (presuming this means "exploit a backdoor the torturer presumes you have already put into it") does not prevent you from getting tortured to backdoor it.
All it does is, should that occur, prevent you from giving the torturer what they want to end the torture.
OTOH, convincing the torturer in advance, by public statements among other means, that you have failed to consider this and believe that not having that ability prevents torture, and that for this reason you do not have it, might prevent torture. But that's a big gamble on potential future torturers believing your public statements of motivation.
Exploitable but obscured backdoors in software that is distributed in a form compiled and installed by downstream users are not impossible, though sufficient auditing may make them improbable.
He probably should have said that, if it's what he meant. In his answer he implies that he could in fact backdoor it but chooses not to because of the liability.
Reversing ordinary client software is table stakes, sure. I'm not so sure about reversing client software which has a deliberately hidden backdoor. (You can hide a backdoor in source code too, of course, but I think it's easier to hide one in a binary because you could e.g. ensure that a buffer overflow overwrites cryptographic keys, where a C compiler would have the freedom to change the memory layout.)
By the same logic, it's entirely possible that you were already tortured and the backdoor is already there - no time machine needed.
As already said, unfortunately the only safeguard would be reading the code.
Coerce you into sending something like "All users must upgrade to client version xyz because of a backdoor discovered by the NSA in the encryption used in older clients. I'm not allowed to tell you what it is, however, rest assured, the latest versions do not have this vulnerability." (but do have a backdoor that I've been tortured into adding).
And then wait for a scheduled backup with the backdoored client.
Though XZ says that's impossible, so I won't lose sleep over that scenario.
I am confident that if I sent a message like that, the top application security and cryptography experts in the world would collectively descend on the Tarsnap source code to figure out what changed.
Honestly, I really wish the Tarsnap server was open source. I imagine it has not been released as such because that would probably hurt the business a lot, especially given that the costs per GB are currently approximately 50 times more than I would pay for simple object storage on B2.
I built our company's first backup solution on Tarsnap, but when I projected out what deploying that to our entire fleet would cost, I rebuilt on Restic. We currently pay something like $250/mo for our backups, as opposed to the approximately $12,500/mo they would cost on Tarsnap.
Colin, if you've ever hoped to compete with your own software, providing support to people running your whole stack so they can avoid paying you anything, you should give some serious thought to open-sourcing the whole thing.
Yeah I get it, if one wants to make money off one's software, one shouldn't give it away for free, right? I'm just highlighting why I do not recommend Tarsnap professionally. It's great if you're going to be storing under 1 TB of total backups. Otherwise, you're paying 50x as much as you need to. Back when it was released, the alternatives were not as good. Today, restic seems to work just as well (and yes, I've done restores, both as a test and under real data loss circumstances) and supports object storage natively.
By the way, I absolutely love spiped. It beats the pants off stunnel in both stability and performance. Maybe Colin should close-source that and start charging $0.25/GB for traffic that flows through there too? :P
Consider that Colin's target customers might be paying for things other than raw storage, that most products are poorly marketed with cost-plus pricing, and that trying to make everybody happy is usually a bad plan. Make something that some people love, not something that everybody likes.
He's been doing this long enough, I'm not even prepared to dunk on him for picodollar pricing anymore.
It could be designed so that doing so generates an alarm for other people. For example, the backdoor does not exist and has to be developed, so the attacker has to hold the developer hostage for some period of time, and loved ones may report a missing person. The software might then have to be signed with a key that alerts the whole engineering team, so someone else in the company could investigate the unauthorized release as a cyberattack. Perhaps the release signing key is physically stored in the office (e.g., on a YubiKey), which would also require the attacker to pull off a heist at the office.
Surely some three-letter organization could probably pull that off, but it adds the risk that the operation could be leaked.
This is basically a point I've made in a few of my talks about security and cryptography: The point of cryptography isn't to guarantee that your data is safe; it's to raise the cost of an attack to the point where a potential attacker decides not to attack. In particular, there's usually a human involved somewhere (sending or receiving information, or both) and humans are squishy and fragile; but torturing people attracts far more adverse attention than torturing data.
No, he won't, because there is no back door. Or yes, because his torturer-contractor thinks there is. Either way, the last part of your sentence doesn't hold water.
Q. How do I know that Tarsnap is secure?
A. Read the source code.
This is a "good enough" but less than reassuring answer in the post-SolarWinds world. (It wasn't fully reassuring before either, but it's even less so since the advent of "package managers" and the like.) How would someone evaluate the quality and security of the build process and of the minimal dependencies (which might have their own problems [0])?
As a non-security person thinking about how one could evaluate this: could adversarial builds (say, performed in several locations, using commonly available tools, under different types of government spying) generate the same binary? Could that act as a sort of proof of an untainted toolchain? Or as a canary for when a build process is tainted?
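The adversarial-builds idea above amounts to a reproducible-build check: independent builders produce an artifact from the same source and compare digests, and any mismatch is the canary. A minimal sketch (the file contents and names here are purely illustrative stand-ins for real build outputs):

```python
# Sketch of a reproducible-build check: hash artifacts produced by
# independent build environments and compare. The byte strings below
# stand in for binaries emitted by two separate, adversarial builders.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

build_a = b"\x7fELF...same-deterministic-output"  # builder A's artifact
build_b = b"\x7fELF...same-deterministic-output"  # builder B's artifact

# Identical hashes suggest (but do not prove) an untainted, deterministic
# toolchain; a mismatch means some build step injected a difference.
assert digest(build_a) == digest(build_b), "builds diverge: investigate toolchain"
print("reproducible: builds match")
```

Note that matching hashes only show the builders agree; they do not rule out a backdoor present in the source itself.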
>> A. I don't want to get tortured, and ensuring that I can't decrypt your data protects me.
There is a line in Rick and Morty about this, which I won't repeat here. To paraphrase: the one thing worse than being tortured for information you have is being tortured for information you don't have.
Right, which is why I try to make it very clear to everyone that there's no point torturing me. The problem is if someone thinks you have information which you don't in fact have; if they know you don't have the information, why would they waste their time?
In a way it reminds me of the Phantom Secure story. If you are suspected of purposefully facilitating crime, you can be held responsible. This seems to be true in the US as well.
In the Phantom Secure story the intent was crystal clear. In the Telegram case, it seems that the refusal to cooperate with investigations cast enough doubt to arrest the CEO and put him in similar shoes.
It's so much worse than that (at least under US law, but I assume French law, which has fewer speech and evidentiary protections, is worse still). By putting himself in the position where he was clearly and straightforwardly able to furnish assistance to criminal investigations, he likely acquired some form of accomplice liability (or whatever equivalent they have in France) as soon as he refused to comply with a lawful order: the refusal itself is a purposeful facilitation.
That's a distinction between end-to-end encrypted applications and cosmetically "secure" apps like Telegram.
That's what I meant, the lack of cooperation is what shows the lack of intent, in the end.
I do not understand why this is turned around a "freedom of speech" thing as there is nothing about censoring speech in the first place, this all about criminal activity happening on the platform and the responsibility of the business behind the platform.
While I agree that in many (all?) ways it is that simple, this is an area where there is a lot of scope for there to be unseen pressure from intelligence agencies. The French literally invented espionage (I choose my words carefully here) let alone whatever pressure comes at Telegram from elsewhere. It is hard to be confident in the whys of decision making around large communications tools and security.
Although, ironically, if the French are arresting him now, that says good things about Telegram and their unwillingness to dob customers in.
If the data that the French government wanted was in plaintext they wouldn't need to use the $5 wrench. Also not sure why E2EE is the answer here, governments can pass laws as we have seen with the UK to water the encryption down.
It’s worse than that: the ‘find people nearby’ feature is a public drug and prostitution advertising billboard with zero moderation, and has been for years. They’re closer to Silk Road than to Lavabit.
While I had heard about this feature, I thought it was made to find friends or dates. What shows that the intended use is to facilitate the drug trade?
Also, in my country they usually just write the ordinary website name on the wall and add "VPN Tor" whatever that could mean. Maybe they try to hint that Tor and VPN are apps for drug trade? Do Tor and VPN have moderation and do they cooperate with law enforcement?
That user seems to be misinformed, and appears to be discussing client-server encryption, not end-to-end encryption. That's unsurprising, because, among the many decisions Durov has made that have baffled cryptographers, attempting to confuse users about the implications of E2E vs. client-server encryption is one of the most notorious.
> Telegram uses the MTProto 2.0 Cloud algorithm for non-secret chats[1][2].
> In fact, it uses a split-key encryption system and the servers are all stored in multiple jurisdictions. So even Telegram employees can't decrypt the chats, because you'd need to compromise all the servers at the same time.
Yes. An employee can impersonate a user by registering a device in their name, intercepting the confirmation code, and then reading all non-secret chats and private groups of that user.
At least one employee must have the ability to intercept the code.
(Unless the user has 2fa enabled, but that is not the default configuration.)
There are probably easier ways, if we knew more about how they administer their infrastructure.
Maybe? When you log in from a new device you're asked to provide an OTP, so maybe there is at least that layer of protection, which, hopefully, requires some circumvention at the application-code level.
However I think the real question is: even if that's possible, can law enforcement compel Durov or an employee to do so?
> can law enforcement compel Durov or an employee to do so?
The E2E encrypted comms are a red herring. There is plenty on Telegram that is public, plaintext and presumably illegal.
If Telegram refused to respond (note: not bend over and comply, just respond) to French legal requests in respect of plaintext criminal behaviour the way any other company would and should, that’s somewhat damning. If Durov went above and beyond and interacted with that content, his goose—as the author put it—is cooked.
If you don't use 2FA then the government can simply intercept the SMS code for any phone number. The Russian government did this against opposition activists, and it prompted Telegram to add a password as a second factor. So any service which allows login or account recovery via SMS (including Gmail in its default configuration) is vulnerable to this kind of attack. It seems that people in the West are unaware of this type of attack.
There existed one server which sent the code, so whoever administered that server could trivially have intercepted it just by modifying the software running there to copy or log it.
This could be extremely infeasible. For example, the code could be generated by a third party and encrypted before arriving at a server controlled by Telegram, then sent to the user. Or it could be generated inside a Nitro Enclave. Sure, ultimately someone could modify the server code somewhere to log the code or any other specific message before it gets encrypted, but at that point we are talking about inserting a backdoor.
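The hypothetical above can be sketched as follows: a third party generates the login code and seals it with a key shared with the user out of band, so the relay server sitting in between only ever forwards ciphertext. Everything here is illustrative; the XOR "cipher" is a toy stand-in, not real cryptography, and nothing is known about Telegram's actual code delivery.

```python
# Toy sketch: third party seals the login code; the relay server (the
# part the operator controls) only ever sees an opaque blob.
import hashlib
import secrets

# Key known only to the third party and the user, established out of band.
shared_key = secrets.token_bytes(32)

def xor_pad(data: bytes, key: bytes) -> bytes:
    # One-shot pad derived from the key; XOR is its own inverse.
    # NOT real crypto - purely to illustrate the data flow.
    pad = hashlib.sha256(key).digest()
    return bytes(d ^ p for d, p in zip(data, pad))

code = b"12345"
blob = xor_pad(code, shared_key)  # what the relay server sees and forwards

assert blob != code                        # relay holds only ciphertext
assert xor_pad(blob, shared_key) == code   # user recovers the code
```

As the comment notes, this only moves the trust: whoever controls the code at the sealing step can still log the plaintext, which is exactly the "inserting a backdoor" threshold.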
There are quite a lot of people who believe that Telegram stores messages in plaintext. I would like to know how they got that idea.
So far the best I've got is something along the lines of: if you can get your chats when you log in with a new device, then so can a Telegram employee. With no proof of the claim, of course.
If the chat is not end-to-end encrypted, which Telegram “cloud” chats are not, then by definition Telegram (the company) has access to the chats. Full stop.
Something being true only by definition is unfortunately a very weak claim.
For example the company servers could be hosted on an island with armed guards instructed to burn everything if anyone approaches and the decryption happens only on those servers: sure they have access by definition, but they really don't.
The guards could decide they’re not getting paid enough and steal the data. Or the government could arrest them. Or the government could MITM the data center. Or any hundreds of different scenarios.
At the end of the day, the only thing preventing somebody from accessing the data is that they just… don’t.
This is very weak security and it is why cryptographers and security professionals call it “effectively plaintext.”
I am saying that in practice the security might be structured in such a way that it requires several different parties to collude, rendering it essentially fine.
I mean, having to modify server code in order to access data that is "effectively plaintext" is not so different from installing a backdoor inside the client: it's not like the user has any choice of client, so even for apps like whatsapp and signal that run E2EE one is still making a leap of faith.
If we add the fact that everything runs inside an os built by companies who may or may not be constantly spying on their users we could say that by definition there's a lot of stuff in our lives that lives in "effective plaintext".
EDIT: regarding Signal and WhatsApp, I should clarify that the possibility of inserting a backdoor on the server side is of course far more dangerous than on the client side: Signal has verified builds, so a client-side backdoor would be evident and users could stop using the service. The same actually holds for any E2EE app if the user simply avoids auto-updating and waits for some confirmation that the update is OK, at least as long as we can assume that any client-side backdoor would be found by independent researchers.
I also want to repeat the original point that started this whole conversation: the point was how easy it would be for Telegram to access the chats and if the justice system can compel them to do so.
When people say it has the data in plaintext, I take that to mean "they can access it whenever they want, right now, without changes", and yes, of course they could ultimately access the data (in fact they don't claim to be unable to). What they claim (and I believe it feasible) is that even if a judge seized all the assets and servers under his or her jurisdiction, it would be impossible to decrypt any user data.
If the only thing stopping them from decrypting your messages is instructions to their own employees to not allow it to be done, that is not a defense against providing access to law enforcement. They can just change those instructions at any time without anybody knowing. Just like they can just change the server software to allow it.
Somehow they must transfer the chat history from their servers to the user. Either it's plaintext, or it's encrypted and they either use the keys to decrypt it or send the keys to the user along with the encrypted content. In all cases they can simply access the contents themselves.
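The point can be made concrete with a toy model (XOR stream, not real crypto, and purely illustrative of the data flow, not of Telegram's actual design): when the operator stores both the ciphertext and the key needed to deliver readable history to a new device, the operator can decrypt at will, whatever internal policy says.

```python
# Toy model: operator holds ciphertext AND the key used to serve
# chat history to freshly logged-in devices.
import itertools

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR - a stand-in for whatever cipher is really used.
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

server_key = b"held-by-operator"  # lives on the operator's own servers
ciphertext = xor_stream(b"cloud chat history", server_key)

# New-device login: the server either decrypts the history itself or
# ships the key to the client. Either path means the same capability
# can be pointed at any mailbox by anyone who controls the server.
assert xor_stream(ciphertext, server_key) == b"cloud chat history"
```

This is why "encrypted at rest, keys held by the operator" gets called "effectively plaintext": the ciphertext protects against a stolen disk, not against the operator or anyone who can compel them.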
I think this statement requires a stronger argument, since even if they could have access to the data in theory there are concrete implementations where it could be extremely unfeasible.
For example, since we are in the realm of speculation, I propose the following alternative to plaintext or accessible decryption keys: the decryption could happen inside a Nitro Enclave, making it essentially impossible to access the data without changing the application code.
I'm not saying that this is what happens, just that I don't think one can so easily deduce that "they can access the data" from the fact that "they send the chat history to you".
The protocol is fully documented. You are free to read it for yourself without resorting to guessing. [1]
Messages are not stored in plaintext. The claim they are stored in plaintext is false.
One can have cogent arguments about one's preference for E2EE or not but the repeated claim here and elsewhere that messages are stored in plaintext is simply hearsay.
No false claim was made, and nothing in that thread was relevant to the analysis of this story or on this thread. I'm very comfortable leaving it there, and that the people who will take my word on none of this mattering are the only ones I need to care about.