TrueCrypt Security Assessment [pdf] (opencryptoaudit.org)
206 points by silenteh on April 14, 2014 | 96 comments



The iteration count used by TrueCrypt [in its PBKDF2 key derivation] is either 1000 or 2000, depending on the hash function and use case. In both cases, this iteration count is too small to prevent password guessing attacks for even moderately complex passwords.

Until TrueCrypt gets patched to use scrypt for key derivation, roughly how long should a volume password be to put it out of reach?

Edit: There's a table in the scrypt paper from 2002 [1] that estimates the cost of various brute-force attacks. Back then, a PBKDF2 iteration count of 86,000 and a password of length 40 would cost $200K to crack. TrueCrypt's choice of 1000-2000 iterations looks staggeringly low in comparison. And that's not even accounting for hardware advances in the last 12 years.

[1] page 14, http://www.tarsnap.com/scrypt/scrypt.pdf


I do not think this is significantly new, as it has been documented in the official TC source code and various 3rd-party implementations (TCHead, tcplay, etc.) for many years.

I was the first to crack the majority of the TrueCrypt volumes in the 2012 Defcon - Crack Me if You Can password cracking contest using TCHead running on an old Intel Celeron processor.

Write-up: http://16s.us/software/wm/Defcon/cmiyc_2012.txt

Contest Page (3rd table down): http://contest-2012.korelogic.com/stats.html

TCHead: http://16s.us/software/TCHead/


> PBKDF2 iteration count of 86,000 and a password of length 40 would cost $200K to crack

Bear in mind that's a 40-character English text password, which they estimated at 56 bits of entropy.

40 completely random characters is ~1.3x10^79 possibilities (95^40, assuming a character space of 95), i.e. completely infeasible to crack, ever, even if you could do a hundred trillion guesses every microsecond.

100 bits of entropy is probably enough to avoid brute-force attacks for the foreseeable future, which is around 16 completely random ASCII characters (95-character space).


> a password of length 40 would cost $200K to crack

I'd love to see someone try. Here's an MD5; those are super vulnerable, right? This is a 9-character password. Piece of cake, if I'm to believe the news.

    f1f107c27cae21b5b5b01002e9c9ead8


Some numbers for you.

Using non-optimized, managed code on my laptop, I just hashed 250 million plaintext candidates in ten minutes. You didn't specify what counts as a character, so let's assume it's a printable ASCII character. 95^9 = 630249409724609375 possibilities.

Divide by 250 million, and we get about 2520997639 ten-minute chunks to enumerate all possible plaintext passwords. That's a long time, about 47,000 years. Not going to happen today on my laptop. But if I were a government, maybe I'd get a few thousand laptops together and set about calculating this over the next five years or so. In 1997.

But if I were to throw some purpose built hardware at this problem or use the video card to speed this up and not just laptops with general purpose CPU's, it's very likely I could decrease this time by thousands of times.

At the end I'd have the MD5s of all passwords, and the problem becomes a rainbow table lookup. And I only have to do this once; I forever gain the capability to break any 9-character (and below) password with just a lookup.

And that's assuming that MD5 is a perfectly secure hash with no shortcuts allowing you to narrow down the input set.


Did some more calculations just for the fun of it. It looks like you'd need nearly 16 exabytes of storage to hold the MD5 table for every 9-character password [1], not accounting for any database overhead. In high density, you can fit a petabyte in what nowadays, half a rack? So around 8,500 racks. Certainly in the realm of possibility for a government, but it would be a lot for storing just one type of hash list.

[1] https://www.wolframalpha.com/input/?i=%28630249409724609375+...


I feel like there should be a more efficient way to store these things. You have a complete enumeration of possible inputs (passwords), but unfortunately we need to go in the opposite direction, so we couldn't just index into a n*9 bytes array. Depending on how well distributed the md5 hash space is, it's possible a prefix tree might save you a bit of storage (you'd only have to save a couple of bytes per entry to cancel out the overhead) but I couldn't tell you the numbers there. Otherwise I think you're right, and we'd just have to store a map of md5 -> password.

(Edit: also, being limited to hex digits for the hashes and printable ASCII for the input gives you pretty good compression potential.)


They are actually much more complicated than just a big table of 'password | hash'. Check out https://www.freerainbowtables.com/en/articles/ for in-depth explanations.


That isn't how a rainbow table works; it doesn't need to store a (hash, plaintext) pair for every possibility.

Precomputed hash chains are a simplistic way of explaining how rainbow tables work. http://en.wikipedia.org/wiki/Rainbow_table#Precomputed_hash_...


> I just went through 250 million iterations of plain text in ten minutes

Oh, I thought you were going to say per second. I recall reaching 500 MHash/s on some 2008 Nvidia GPU with BarsWF.

Anyway, you're right of course. A nine-character password is not really safe against countries with a security budget like the NSA's. But 9 characters ought to be many orders of magnitude easier than 40 characters, and MD5 is many orders of magnitude easier than PBKDF2 even at a weak security setting. This MD5 might just be within reach of governments, but 40 characters certainly isn't.


To continue a slight derail about MD5:

For some fun performance figures, check http://golubev.com/gpuest.htm

A new AMD Radeon™ R9 295X2 is pretty speedy, clocking in at 23 GHash/s for single MD5 (there are cards that are cheaper per $, but this is extremely impressive imo). Dividing 630249409724609375 H by 23e9 H/s leaves us with 27e6 s, or about 0.87 years. That means we'd need ~45 cards to crack it in a week, costing somewhere in the region of $80k ($1.5k/card + extras). That puts us easily within the range of governments, companies and just generally rich geeks. You can get ~2x the hashes/$, so maybe $40k, and if you're cheap about the computers you connect them to then probably lower.

But that's consumer kit. There are also speedy MD5 FPGA cores; I've not got a great reference, but I found one from 2007 which did 44 MHash/s per core on a Cyclone II, which I can only find described as "built for low cost". Assuming a 5x cost/hash reduction, that would put us at $8k, which would be in the range of hobbyists.

Beyond this, several private companies have been able to build double-SHA-256 ASICs (for bitcoin mining). A while ago you could buy kit that did 7 GHash/s for $300ish, and there are supposedly some 1 THash/s ones in development. Granted, double SHA-256 is not equivalent to MD5, but I'd be surprised if MD5 were significantly harder; I'd expect it to be a lot simpler. The development cost of these is the expensive bit, but then your hashing rate would be insane.

Ramble over, if anyone has any figures for FPGAs or ASICs then I'd be really interested.


You say "9 characters" -- is that "printable characters", or alphanumeric, or what?

If you have a decent GPU, you can try several billion MD5 hashes per second: http://www.golubev.com/hashgpu.htm -- so if you stick with, um, lowercase alpha only, it should be crackable in under 10 minutes. If you choose from the 95 or so printable ASCII chars, you can expect a bit more of a wait (possibly a few years)... though of course if you don't restrict yourself to a single GPU you can speed things up.

Honestly, using an actually-random password that's more than 8 chars and not just alphanumeric offers decent security even when it's stored with a fast hash algo. Unfortunately, very few people do this in practice, and instead choose (for example) one of the 43 billion passwords in the md5 database here: http://www.md5decrypter.co.uk/


Well, Google search reveals nothing. I'm done.


Lol yeah, that's how I usually check MD5 hashes too. But it doesn't work here: this one is from /dev/urandom, keeping only printable characters (/[ -~]/). It seems people don't like that I'm proving the point with a super simple example, though. And yes, my passwords are as strong as this one; I don't use text passwords. Cracking 40-character passwords is absolute bullshit.


Misinterpretation, not bullshit. Per another reply, the original document was about 40 characters of English text.


> Misinterpretation, not bullshit.

Fair enough. I'm still curious about the generation method though: I can imagine that testing all texts ever published takes $200k if you go up to 40 characters, but if you use a unique text it might be different. There are a lot of variables here, including honest misspellings, punctuation use, spacing (non-tech people seem to find spaces weird in passwords), capitalization, usage of names, etc.


Maybe we should start changing the label from "Password" to "Passphrase" to indicate that you can do something more expressive.

It bugs me that even today, so many systems (hello, banks) don't support more open passwords. Even what's on a US ASCII keyboard would be nice. Two of my banks only support letters and numbers: no special characters, and case insensitive.


Your passwords are as strong as the protection of the place where you store them.


I think

> Your passwords are at most as strong as the protection of the place where you store them.

would be more appropriate.


Brainz.

I think typing them in is when they are most susceptible to being stolen.


"Just to prove some guy on HN wrong" is not much of an incentive to spend the required resources. It says nothing about the ability to crack MD5 when there are significant incentives. Why not md5 your credit card info and post it here if you are so confident?


Somebody more knowledgeable please correct me if I'm wrong, but as I understand it:

If you're using AES-128 and you have a random password with 128 bits of entropy, it shouldn't matter what key derivation function you use. That means that if your password can contain any printable ASCII character, a password length of 20 would be sufficient [1].

The huge caveat here is that the password has to be random, and most people are either incapable of remembering such a password or have no way of securely storing it.

But I have no qualifications in this area, so don't take my word for it...

[1] http://en.wikipedia.org/wiki/Password_strength#Entropy_as_a_...


The use of AES is orthogonal to the issue of key derivation. In short, when we talk about key derivation, we're speaking of keys in a general sense - irrespective of purpose, they just need to be secure.

With that said, if your "key" actually consists of 128 bits (16 bytes) of secure random data (with all the conditions associated with that), then you're right: the choice of KDF is mostly immaterial, so long as it preserves entropy. That's why, for example, if you're just generating a symmetric key for bulk encryption, you just grab (say) 128 bits of random data and use that as the key directly with no KDF involved.

KDFs really only enter the scene when: (1) you want to generate multiple keys from one "master key" or (2) you want to generate "secure" keying material from "insecure" keying material (say, a user's password).

The choice of KDF, however, mostly depends on the strength of the original keying material. For instance, if you have 256 bits of secure random information as your keying material, using something like scrypt is unnecessary; something like HKDF would be more appropriate there. (No one will ever brute-force your 256-bit value. It's just not happening: ever.) On the other hand, where the original keying material is weak, like in the case of passwords, you want to use something that prohibits rapid guessing, e.g. scrypt.


According to the chart in that article, case-sensitive alphanumerics give at most 5.17 bits of entropy per character. So that's 25 random characters. (edit: sorry, this bit is redundant because I misread your comment.)

You are wrong about the key derivation though, and here's why: A key-derivation function that is very cheap and fast to calculate means it is easy for an attacker to brute-force lots of passwords to find one that matches your key. Using a slow, expensive KDF makes an attack much less feasible, by a factor of a thousand or more.


Thanks for the explanation regarding key-derivation functions. Regarding the calculation, I was using the value for printable ASCII characters which is 6.57 bits of entropy per character.


Disclaimer: I have not tested this extensively. This is merely an outline of the ratios involved, not a recommendation. Don't come after me if you get your password broken after reading this.

The goal is to make it at least as difficult to break the master password as it is to simply brute-force the resultant symmetric key. If the key is 256 bits long, this is true even for a one-round SHA256 KDF if the password has 256 bits of entropy. Of course, that's at least a fifty-character long alphanumeric password from /dev/random. A password structured for a typical human to remember it should probably be 5-10x as long or more to meet a similar standard.

Fortunately, TrueCrypt does not use one-round SHA256; it seems to use 1000 rounds in a PBKDF2 construction. That doesn't mean we can use a password 1/1000th as long, because SHA256 is still very fast. Someone could use a Bitcoin ASIC to get a cost structure where hashing 1000 times is about as costly as verifying the AES-256 key.

Speculation: You can maybe go down to 25 random or 150-200 human-memorable characters.


>doesn't mean we can use a password 1/1000th as long

It'd never work that way, right? You get to use a password with log2(1000) bits less. So e.g. if you use 2^24 iterations, and figure 80 bits is strong enough, then you can use a 56-bit password.


You're right; it would only be a fixed (and not especially large) length reduction.


Unfortunately, TrueCrypt's max password length is 64 characters, so in your speculation you'd have to use random characters.


From the same paper:

> By using a key derivation function which requires 2^s cryptographic operations to compute, the cost of performing a brute-force attack against passwords with t bits of entropy is raised from 2^t to 2^(s+t) operations [19].

1,000 iterations is just shy of 2^10.


Last I saw, Django uses 1000 as the iteration count for PBKDF2. So maybe that's common nowadays and needs to be reviewed. On another note, can anyone point me to an article explaining the state of the art for user password storage?


OWASP do a decent job of aggregating currently accepted practice: https://www.owasp.org/index.php/Password_Storage_Cheat_Sheet


I wish someone would also audit tcplay [1], the BSD-licensed system with full TrueCrypt compatibility. I would much rather use that, if only for the license concerns repeated many times in the comments here.

[1] https://github.com/bwalex/tc-play


License review is listed as #1 among goals on http://istruecryptauditedyet.com/ .


"The assessment explicitly excluded the following areas... Cryptographic Analysis, including, RNG analysis, Algorithm implementation, Security tokens, Keyfile derivation"


A feature I'd like to see on both our laptops and mobile devices to protect their privacy from random and abusive border searches: being able to hide the fact that you have an encrypted account.

Use case: Say you pass the UK border, and they can willy nilly decide to check your laptop or mobile phone. But you have an account that is password protected and encrypted, and they see that, and ask you for your password. You say no - and you get arrested for it. If they couldn't see you have that password protected account, and all they could see is a "normal" (clean of sensitive stuff) account, then they'd just check that and move on. Or you could even password protect that, too, and give them the password to it, and they'd be none the wiser.

All you'd need is to be able to hide that account from the main screen when you turn on the laptop or mobile device, and you should only be able to re-enable it from a menu prior to booting into the OS. It shouldn't be easily accessible either, otherwise it defeats the point.

I don't see Microsoft doing something like this, ever. Apple might if enough people asked for it, but I'm inclined to say they wouldn't for now. Google probably won't do it for Android or Chrome OS either, since they probably see no benefit in it. But it would be nice if the feature at least came to Linux and some custom Android ROMs like Cyanogen.


It's called deniable encryption[1]. TrueCrypt supports it[2]. Bruce S is not a fan of their implementation, however[3].

[1] http://en.wikipedia.org/wiki/Deniable_encryption

[2] http://www.truecrypt.org/docs/plausible-deniability

[3] https://www.schneier.com/blog/archives/2008/07/truecrypts_de...


> Bruce S is not a fan of their implementation, however[3].

It's worth pointing out that his concerns seem to apply only to using TrueCrypt to encrypt a portion of a drive, as opposed to whole-disk encryption.


Huh. I hadn't thought about multiple-image attacks.

Could that be mitigated by having TC occasionally write random data to randomly chosen unused blocks?


The table of unused blocks would violate your deniability. Indeed, any such mutating of ostensibly free space by TrueCrypt would give it up.

You have to consider the adversary. If you're a non-targeted individual who wants to keep some things private from a spurious search by law enforcement or border agents, then simply being able to boot up without sensitive stuff visible and without an obvious encrypted partition lying around will do the trick.

If, on the other hand, you're targeted by someone with a strong reason to believe that you are hiding stuff on your computer, maybe even someone who will break into your apartment every day to image your computer, you're gonna have a bad time (and a keylogger on your computer, but that's a different kettle of fish). Basically, evidence of the existence of your hidden partition can leak out into the real world, not because of bad cryptography, but because you're human.


For the purposes of border searches, the best approach is to use full-disk encryption and have a separate OS installed. You can then configure your bootloader to default to the plain OS and not to show itself unless a key is pressed. This will mean that the encryption is not obvious to the 'turn it on and look' inspection.

If you want to hide the presence from a disk image (for example), then it will be much more difficult. Truecrypt's hidden partitions can make it difficult to prove that the hidden partition exists, though if you're not extremely careful then circumstantial evidence can point strongly towards it.


Here's an idea that I've been toying with for several months now.

Many modern laptops have an mSATA port in addition to a regular 2.5" SATA hard drive. If you install a tiny SSD in the mSATA port and move your OS to the SSD, you can enjoy the speed of SSD without giving up the capacity of the hard drive.

But what if you installed a large (480GB or 960GB) SSD in the mSATA port and used it for all of your storage needs? Then the regular hard drive becomes completely redundant. But TSA agents are not known for their tech savvy, so most of them will not realize that you're not even using the regular hard drive.

So put a decoy OS on the regular hard drive, with no encryption whatsoever. All the encrypted stuff goes on the mSATA SSD instead. Prior to the trip, configure your BIOS to boot from the regular hard drive, and unseat the mSATA SSD a couple of millimeters in its slot so that it's not even recognized. When the agent boots the laptop, all he's going to see is a clean OS that takes up the entire hard drive, and even if he takes a disk image, there will be nothing suspicious on it.

Now you only need to make up some story along the lines of your computer crashed last week and you had Geeksquad reformat it but you didn't have time to copy all your family photos back from the external drive at home.

Well, now that I've spilled the beans on HN, they'll probably try to pull my mSATA SSD the next time I try to enter the US. Shit.


I used to do this on a pre-unibody MacBook Pro. The main drive still had OS X, which I used only when I was in coffee shops or airport lounges. But I also had a bootable, fully encrypted 40GB SSD ExpressCard for my personal use. It worked really well: you just slipped in the ExpressCard, held down the option key on startup, and chose the other drive. I carried the card on my person when I traveled, so if someone stole my machine I didn't lose anything, and I suppose if it was searched at a border, nobody would know it wasn't my main OS.


On Windows, you can hide accounts from the logon screen using the SpecialAccounts registry key. I use it to hide admin accounts on relatives' computers that I setup. You can still run programs as them using the "Run as different user" option.


> "you should only be able to re-enable it from a menu prior to booting into the OS."

How would you be able to hide this from police once such a feature is built into every OS?

Assuming the feature by default asks for a password regardless of whether such protection has been enabled by the user (when first setting up the device/computer), so that no information is leaked about whether the user IS actually using the feature, I suppose the police can still arrest you or keep you there longer to investigate whether you are lying or not.


It depends on the sophistication of the attacker. If you have TrueCrypt installed and a big chunk of random data on your disk, then it's pretty obvious that you're hiding something. Even steganography is pretty easy to detect for non-trivial amounts of data.

Another issue is that you never know if the border patrol is going to add some unwanted monitoring software or hardware to your computer. It seems unlikely but with the NSA revelations we had I wouldn't be surprised anymore.


Put glitter nail polish along all the seams of the device. Then photograph the dried pattern. Once the device is through the border, compare the patterns to the known ones to check for tampering.


And re-flash all your EEPROMs that are scattered across devices.


Did anybody give this a thorough read who can give the CliffsNotes on the results? How did TrueCrypt do: good/bad/indifferent?


All things considered, pretty good.

No massive exploits; the worst problem was using too few iterations in the key derivation.

There were a bunch of other minor problems, but most of them were information disclosures only triggerable by malicious software running in the encrypted environment (e.g. finding out whether a file you don't own exists) or things that could only be triggered by someone with raw access to the hard drive (at which point they could just overwrite your bootloader).

Although this explicitly doesn't cover a large chunk of TC.


tl;dr

Follow TrueCrypt's recommendations (FDE, long password) and you're mostly fine. The only things that could make it better require program changes.


Vulnerability Summary

  Total High severity issues Zero (0)
  Total Medium severity issues Four (4)
  Total Low severity issues Four (4)
  Total vulnerabilities identified Eleven (11) (incl. three (3) Informational)


> Issue 4: Windows kernel driver uses memset() to clear sensitive data

> Calls to memset() run the risk of being optimized out by the compiler.

I would be curious to know which compilers or which options actually still do these ill-fated optimizations?


They should only do it in situations where it doesn't change the program's behavior. I use memset frequently in C just to be safe, but if the buffer is written to later, before it's ever read from, the compiler can optimize the memset away. I'm guessing the situation they're flagging is something like

    char* plain_text = malloc(size);
    ///do stuff with plain_text
    memset(plain_text, 0, size);
    free(plain_text);
For most programs that last memset is unnecessary (and per the standard it may even be removable; that's probably implementation-defined rather than undefined behavior), so it makes sense for the compiler to optimize it away. But for crypto purposes you have to worry about someone reading plain_text later, so the memset is important.


I checked 3 compilers: icc 14, gcc 4.8, and clang 3.3. Clang is the only one which optimized the memset away.


Did you enable optimization? I know first hand that GCC will optimize something similar out in many cases.

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=8537


Yes, I compiled with -O3.


Not a Windows user, so I can't check, but given that it's the kernel driver, the compiler we'd be concerned about is the MS Visual C++ compiler.


Visual C++ 2013 doesn't remove memset either.


It is not advised to use C library functions in a kernel driver. The driver should use RtlZeroMemory(), and in this case RtlSecureZeroMemory(). Otherwise, though, this optimization makes a lot of sense in many cases, e.g. when you have a useless constructor call right before an initialization.


They still do, but everyone (including the Windows DDK) offers a "secure memset" that casts to volatile and preserves the call. On Windows, it's called RtlSecureZeroMemory, and I would think that knowing about this would be one of the prerequisites for writing security software on that platform...


All of them will under circumstances that tend to manifest when wiping secrets. This surprises a lot of developers, but it's a very common finding when auditing crypto code or other secret storage.

It's typically done when you don't need the value anymore, of course. So, the compiler sees a write to memory that is never used. Which is exactly the type of thing compilers try to optimize out (if it can be proven that it is never used).

From the C99 standard:

In the abstract machine, all expressions are evaluated as specified by the semantics. An actual implementation need not evaluate part of an expression if it can deduce that its value is not used and that no needed side effects are produced (including any caused by calling a function or accessing a volatile object).


After a little bit of digging, it seems the best way to wipe sensitive data currently is to implement something like libsodium does: https://github.com/jedisct1/libsodium/blob/master/src/libsod...

1- Try to use SecureZeroMemory() on Windows

2- Use memset_s() if available, which per its man page is guaranteed not to be optimized away by compilers

3- Use a loop to manually zero out the buffer after casting it to a volatile pointer, again to prevent compiler optimizations


Pretty much all of them do. Many of the big leaders in the C++ world, like Google, are pushing the idea that every last optimization technique is crucially vital. Google can bank on tenths-of-a-percent of performance, so why can't you? Google can devote armadas of armadas of test machines to run asan/tsan/space-alien-san and everything else continuously, and it works pretty well for them, so why can't you do this too?


Does anyone know if Tomb [1] or ecryptfs [2] is a safe choice for encrypting a local directory?

Everyone always talks about Truecrypt which is clearly most popular but both of the above are fully open-source. The latter is even part of the kernel. I'm curious what the appeal of Truecrypt is besides maybe portability (which is a pretty strong selling point).

[1] http://www.dyne.org/software/tomb/

[2] http://ecryptfs.org/


TrueCrypt is also open source: http://www.truecrypt.org/downloads2

The security assessment is for the Windows version of TrueCrypt only. Which also answers your question, TrueCrypt is appealing because it works on *nix, BSD, and Windows. It is also free and open source. The Windows version is easy to use, has a nice GUI and actually works as stated.


The license has been criticized as not being truly "Open Source" or "free".

https://en.wikipedia.org/wiki/TrueCrypt#Licensing_and_Open_S...

Wikipedia lists it as "source-available".


tomb is a front end to cryptsetup[3], and it uses LUKS as its encrypted volume format. The LUKS format is more secure than TrueCrypt's because it takes steps to make the LUKS header harder to brute force.

TrueCrypt, for example, uses fixed and low pbkdf2 iteration counts, as noted in the report. LUKS, on the other hand, uses an adaptive pbkdf2 iteration count that goes up automatically when a LUKS volume is created on a faster computer.

The appeal of TrueCrypt, other than portability, at least among Linux users, is an easier-to-use GUI. Fortunately, there is a project called zuluCrypt[1] that supports TrueCrypt volumes among others and has a GUI window comparable to TrueCrypt's.

If you are a user of tombs, zuluCrypt can also open tombs through its "gpg" and "tomb" plugins[2].

[1] http://code.google.com/p/zulucrypt/

[2] https://github.com/dyne/Tomb/issues/105

[3] http://code.google.com/p/cryptsetup/


The appeal of Truecrypt is that it works on Windows. As you've suggested, there are better encryption programs and schemes for users of GNU/Linux operating systems.

The questionable license, and obscure nature of the project should be significant reasons not to use Truecrypt. That doesn't even begin to consider the actual security of the program.


The appeal of Truecrypt is the deniable encryption, and the alternatives for that are sparse and bad on any operating system.


Too bad Windows 8 isn't supported, so Truecrypt loses much of its appeal to Windows users.


I replied to this comment using a TrueCrypted Windows 8 machine.



What about removing an encrypted 32GB microSD card containing all sensitive information from the phone prior to the flight/border checkpoint, rendering what is on the phone itself unimportant? Maybe even substitute another password-protected microSD card with interesting but non-sensitive info (maybe a risqué photo or two of the wife and some saucy conversations).

Anyone know if something as small as a single microSD card positioned in the densest part of the suitcase will show up on an X-ray?


In America, you are better off putting it in your sock under your foot. No TSA search includes the bottom of the foot, and you'd have to really arouse CBP suspicion to get them checking out your feet.

But, to answer your question, no, airport security screeners are not x-raying for objects of that size. If visible at all, it would be ignored.


I have been waiting for this security audit report since the first time it was mentioned. Now that it's out, I feel a little disappointed that there are no real intentional risks.


Why would you be disappointed that there's nothing wrong with it? Why would you be hoping that the program millions use to protect their sensitive information was broken?


I'm not disappointed by a fairly uneventful report, but quite honestly, I'm always a little bit worried when nothing horrible is discovered in the course of testing.

It's not that I want there to be bugs, but that in a large enough codebase, there's always a game-over bug -- major information leakage, arbitrary code exec, whatever. As a security consultant, I'm always more confident in a test when I find a horrendous bug than when I don't; I know that bug will be fixed, and it makes me feel like the test is more complete, even if I know full well that I did the test to the absolute best of my abilities regardless.

I've heard similar sentiments from most testers I know.


Heh, a clean bill of health always has that elephant in the room attached.


I'm not OP, but I suspect it's a bit like watching a hyped sporting event. There's a build up, lots of discussion, some naysayers and the tech equivalent of trash talking. In the end you sort of expect something more exciting than what we got. It's sort of like watching the favorites secure a clinical win in the Super Bowl. It's ok, but no one's going to be talking about it for years to come.

That said, as a Truecrypt user, this is good. I don't have the technical expertise to understand truecrypt myself, but a second set of eyes (and all the eyes watching that second set) make me more comfortable. Follow standard security recommendations and you're pretty safe.


I recall seeing something about reviewing the TC license. Does anyone know if this is something that will be part of Phase II or am I misremembering the details?


I would really, really like to see an effort like this for OpenSSL ... or better yet, for one of the OpenSSL alternatives that are not a spaghetti mess.


Hindsight bias? We should audit libnss and all the other cryptography libraries too then. And don't forget how much of the world relies on closed-source solutions like Microsoft's Bitlocker. Better shun those because they had no public audits and for all we know they're even more spaghetti code.


No, it's not hindsight bias. I have posted (and so have others) on many public forums for years about the need for an audit of OpenSSL and OpenSSH and there have been many discussions about the sad state of the codebase in OpenSSL.

I can think of a particular discussion on the cryptography mailing list at randombit from... two years ago?


Would Threefish be a better cipher than AES for TrueCrypt/disk encryption, considering it can have 1024-bit blocks?


For security, somewhat. The current de facto standard mode of operation for disk encryption utilities is XTS, which effectively encrypts each block on the disk under a different key (via a per-block tweak), where the blocks are the same size as the cipher block.

Whether this is of any significance depends on your adversary model. If the adversary controls your storage medium (imagine putting an encrypted container on Dropbox or Google Drive), they can mix-and-match (e.g. copy-paste) different versions of blocks from your history. Imagine your disk to be in a version control system; the adversary could pick the value of block 1 from version 50, the value of block 2 from version 42, the value of block 3 from version 100, and so on. They could also potentially discover usage patterns (seeing e.g. that the value of block 3 remained constant between versions 20 and 200, while block 5 changed in every version). Additionally, they could corrupt any of the blocks, by turning the corresponding plaintext into random bits.

Having a smaller block size means that they can perform any of these with finer granularity. Increasing the block size thus increases your security; ideally, your entire disk would be just one block (the only thing the adversary could do in that case is to completely restore an old version of the disk); but this is hugely impractical, since the performance would be abysmal (you'd have to re-encrypt the whole disk to change just one byte). So you have a spectrum of performance/security tradeoffs. Where on this spectrum the 1024-bit blocks lie, I'm not sure, but I suspect that they are better than 128-bit ones.

Note that we do have schemes which can do sector-level encryption (the EME mode), but they're not used since they're 2x slower than the schemes with smaller sizes.

Edit: in conclusion, for pretty much every scenario, other security concerns are much more significant than the block size :)
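The mix-and-match attack described above can be modeled with a toy sketch. To be clear, this is NOT XTS or anything TrueCrypt actually does; it is a hypothetical per-sector stream cipher built from the standard library, used only to show that when sectors are encrypted independently, ciphertext sectors can be spliced across snapshots and still decrypt cleanly:

```python
import hashlib

KEY = b"toy-demo-key"  # stand-in for the real disk key (hypothetical)

def keystream(key: bytes, sector: int, length: int) -> bytes:
    # Toy per-sector keystream, NOT a real disk-encryption mode;
    # it only models the property that each sector is encrypted independently.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + sector.to_bytes(8, "big") + counter.to_bytes(8, "big")
        ).digest()
        counter += 1
    return out[:length]

def encrypt_sector(key: bytes, sector: int, data: bytes) -> bytes:
    return bytes(p ^ k for p, k in zip(data, keystream(key, sector, len(data))))

decrypt_sector = encrypt_sector  # XOR stream: encryption and decryption coincide

# Two "snapshots" of a three-sector disk, as an adversary hosting
# your container (e.g. a cloud provider) would see them over time.
v1 = [b"sector0-v1      ", b"sector1-v1      ", b"sector2-v1      "]
v2 = [b"sector0-v2      ", b"sector1-v2      ", b"sector2-v2      "]
ct_v1 = [encrypt_sector(KEY, i, p) for i, p in enumerate(v1)]
ct_v2 = [encrypt_sector(KEY, i, p) for i, p in enumerate(v2)]

# The adversary splices sector 1 from the old snapshot into the new one.
spliced = [ct_v2[0], ct_v1[1], ct_v2[2]]
recovered = [decrypt_sector(KEY, i, c) for i, c in enumerate(spliced)]
# recovered mixes v2's sectors 0 and 2 with v1's sector 1
```

The finer the encryption granularity, the more precisely the adversary can target this splice, which is the argument above for larger block sizes.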


> Additionally, they could corrupt any of the blocks, by turning the corresponding plaintext into random bits.

Given that the premise is that the adversary controls the storage medium, this point doesn't seem terribly interesting. I feel like I'm missing something.


The theoretical attack there is that one could selectively garble the (decrypted) parts of your storage, and thus destroy the contents of e.g. your configuration files. E.g. they could corrupt your firewall configuration file and thus leave your computer open to outside network connections. I don't think this has been done in practice, and it's probably not very feasible.

Such an attack could not be detected, because the encryption modes commonly used do not provide integrity protection. This is due to the desire to have equal sizes for both the encrypted and plaintext sectors (as far as I understand, that's an efficiency/ease of implementation concern). Incidentally, you can easily see that such a scheme can never provide integrity, since the encryption operation has to be a permutation, and you have to be able to decrypt any ciphertext.
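The integrity point is easy to demonstrate: a length-preserving cipher will "decrypt" any tampered ciphertext to something, whereas a MAC stored alongside each sector would catch the modification, at the cost of the extra bytes per sector that length-preserving modes are designed to avoid. A minimal stdlib sketch (the key and sector contents are hypothetical stand-ins):

```python
import hmac
import hashlib

mac_key = b"per-volume MAC key (hypothetical)"
sector_ct = bytes(range(32))  # stand-in for one encrypted sector

# An HMAC tag adds 32 bytes per sector -- exactly the ciphertext
# expansion that equal-size schemes like XTS rule out.
tag = hmac.new(mac_key, sector_ct, hashlib.sha256).digest()

# Flip one ciphertext bit. A length-preserving mode would silently
# decrypt this to garbage; the HMAC check fails instead.
tampered = bytes([sector_ct[0] ^ 0x01]) + sector_ct[1:]
ok = hmac.compare_digest(
    hmac.new(mac_key, tampered, hashlib.sha256).digest(), tag
)
# ok is False for the tampered sector
```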


No; part of the point of XTS is to adapt 128-bit AES to 512-bit blocks, but the other part is to seamlessly incorporate sector numbers into the cryptosystem. Even with a cipher whose block size accommodates whole disk blocks, you still need a construction that achieves more than ECB does, which is what you'd get by naively applying Threefish.


Threefish is actually a tweakable block cipher - it takes a 128-bit tweak in addition to the key and the input, so it should be well suited for disk encryption.

It's a bit disappointing that it hasn't seen more uptake since the SHA-3 competition. I think the tweakability and large block size make it a more flexible primitive than AES, and even naive implementations should be constant-time. It's hard to compete with hardware-accelerated AES, though.


Any of the encryption algorithms used by TrueCrypt would be fine. The real security lies in the strength of your password. http://xkcd.com/538/

EDIT: Thought OP said Twofish
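For a rough sense of what "password strength" means here: the entropy of a uniformly random password is its length times log2 of the character-space size, so 16 truly random printable-ASCII characters already clear the ~100-bit mark discussed elsewhere in this thread. A quick back-of-envelope calculation:

```python
import math

def entropy_bits(charset_size: int, length: int) -> float:
    """Entropy of a password drawn uniformly at random."""
    return length * math.log2(charset_size)

print(entropy_bits(95, 16))  # ~105 bits: 16 random printable-ASCII characters
print(entropy_bits(95, 40))  # ~263 bits: far beyond any conceivable brute force
```

Note this only holds for genuinely random passwords; English text carries far less entropy per character.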


I read this whole thing, but they left out a description of the NSA planted back doors. As everyone knows, the NSA has compromised all crypto systems. Since the audit did not reveal the NSA back door or doors, what else is missing?

More likely iSEC Partners is the NSA's new RSA 2.0.


Build tools from 1993? What kind of nonsense is this?

> Page 8


To offer full disk encryption, boot code for the MBR must be produced, and it has to be 16-bit. Current compilers just produce 32 and 64-bit binaries. So you have to use an old compiler...


Thanks for detailing that (I didn't know)


tl;dr

"1.3 Findings Summary

During this engagement, the iSEC team identified eleven (11) issues in the assessed areas. Most issues were of severity Medium (four (4) found) or Low (four (4) found), with an additional three (3) issues having severity Informational (pertaining to Defense in Depth).

Overall, the source code for both the bootloader and the Windows kernel driver did not meet expected standards for secure code. This includes issues such as lack of comments, use of insecure or deprecated functions, inconsistent variable types, and so forth. A more in-depth discussion on the quality issues identified can be found in Appendix B....

The team also found a potential weakness in the Volume Header integrity checks. Currently, integrity is provided using a string (“TRUE”) and two (2) CRC32s. The current version of TrueCrypt utilizes XTS 2 as the block cipher mode of operation, which lacks protection against modification; however, it is insufficiently malleable to be reliably attacked. The integrity protection can be bypassed, but XTS prevents a reliable attack, so it does not currently appear to be an issue. Nonetheless, it is not clear why a cryptographic hash or HMAC was not used instead.

Finally, iSEC found no evidence of backdoors or otherwise intentionally malicious code in the assessed areas. The vulnerabilities described later in this document all appear to be unintentional, introduced as the result of bugs rather than malice."
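One reason the report flags CRC32 as a weak integrity check: CRC32 is affine over GF(2), so checksums of related inputs are predictable from one another, a structure no cryptographic hash or HMAC exhibits. A quick stdlib check of that property (the header strings are just illustrative):

```python
import zlib

a = b"volume header AA"
b = b"volume header BB"
c = b"volume header CC"

# CRC32 is affine over GF(2): for equal-length inputs the affine
# constants cancel in threes, so
#   crc(a ^ b ^ c) == crc(a) ^ crc(b) ^ crc(c)
xor3 = bytes(x ^ y ^ z for x, y, z in zip(a, b, c))
lhs = zlib.crc32(xor3)
rhs = zlib.crc32(a) ^ zlib.crc32(b) ^ zlib.crc32(c)
# lhs == rhs -- an attacker can predict how controlled changes
# move the checksum, which an HMAC would rule out
```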




