TRESOR Runs AES Securely Outside RAM (uni-erlangen.de)
35 points by blasdel on Dec 31, 2011 | 20 comments


I think this is not as useful as it looks. Most local attacks that allow you to read arbitrary memory locations also allow you to run arbitrary code, which can simply read the debug registers. It's interesting to note that the state of the debug registers is not defined after reset, which probably means it is retained across resets.

Also, one could assume that the whole architectural state of the CPU - including the debug registers - can be read out via the CPU's JTAG test port (I haven't found any official documentation on the JTAG capabilities of Intel CPUs, but there are commercially available JTAG-attached ICDs for x86 CPUs).
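
To make the first point concrete: once an attacker can run code at ring 0, pulling the key back out of the debug registers is a one-liner. A minimal sketch (hypothetical helper name, GCC-style inline asm on x86-64; MOV from DRx is privileged, so this only works from kernel context - which is exactly the point):

    #include <stdint.h>

    /* Faults in user space; from a kernel module (ring 0) it reads
     * the register TRESOR uses as key storage directly. */
    static inline uint64_t read_dr0(void)
    {
        uint64_t val;
        __asm__ __volatile__("mov %%dr0, %0" : "=r"(val));
        return val;
    }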


I think this, or a similar implementation at any rate, was previously posted on HN. It's certainly a good idea, and I think that the more popular crypto libraries (I'm looking at you, OpenSSL) would do well to consider implementing something similar.

The key point (and this was my biggest concern) is that it is fully compatible with AES-NI, Intel's hardware-accelerated AES instruction set. According to TFA, with AES-NI there is no performance penalty for keeping the AES keys stored in registers and out of RAM. What's not to like?
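
For the curious, the reason this is basically free with AES-NI is that the AESENC instruction takes its round key as an ordinary XMM register operand, so the rounds never have to touch RAM. A rough sketch with the intrinsics (AES-128 only, round-key derivation omitted; compile with -maes):

    #include <wmmintrin.h>   /* AES-NI intrinsics */

    __m128i aes128_encrypt_block(__m128i block, const __m128i rk[11])
    {
        block = _mm_xor_si128(block, rk[0]);          /* initial key whitening */
        for (int i = 1; i < 10; i++)
            block = _mm_aesenc_si128(block, rk[i]);   /* rounds 1-9 */
        return _mm_aesenclast_si128(block, rk[10]);   /* final round */
    }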


> What's not to like?

You need to patch your kernel to use it. This isn't something that OpenSSL alone can push.


So the TRESOR authors need to submit an acceptable patch to the nice kernel.org folks, and then it can ship as a distro default.


It sounds fairly CPU-specific. I'm not sure it would lend itself very well to being packaged up for general distribution.


Linux takes CPU-specific patches all the time, and has numerous modules to take advantage of CPU features. As long as the patch gives it a clean interface and meets the Linux kernel's standards, I don't see any major roadblock.

I don't, however, think that the magic early-boot changes to prompt for a passphrase almost immediately after boot would have much chance. And I don't really see the point of that compared to the usual mechanism of prompting for a passphrase from the initramfs before mounting encrypted partitions. Either way, the key material sits very briefly in memory before it ends up in CPU registers, and thus you must very carefully wipe that key material from memory before proceeding.
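
That wiping step is easy to get wrong, because a plain memset on a buffer that is never read again is exactly the kind of thing compilers optimize away. A hedged sketch of one common workaround (hypothetical function name; explicit_bzero/memset_s are alternatives where available):

    #include <string.h>

    /* Calling memset through a volatile function pointer keeps the
     * compiler from proving the store is dead and eliding the wipe. */
    static void *(*const volatile memset_v)(void *, int, size_t) = memset;

    void wipe_key_material(void *buf, size_t len)
    {
        memset_v(buf, 0, len);
    }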

Looking at their patch, it would need a major overhaul to become suitable for acceptance into the Linux kernel. They've also re-implemented SHA-256 themselves rather than using the kernel's existing implementation, and they've used that implementation for a very simplistic key derivation function (2000 iterations of SHA-256 applied directly to the passphrase itself).
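
To illustrate why that's weak, the KDF they describe amounts to something along these lines (a rough sketch using OpenSSL's libcrypto for brevity, not the actual TRESOR code; the patch's exact construction may differ). With no salt and cheap, parallelizable iterations it offers little resistance to offline guessing compared to PBKDF2 or scrypt:

    #include <string.h>
    #include <openssl/sha.h>

    void naive_kdf(const char *passphrase, unsigned char key[SHA256_DIGEST_LENGTH])
    {
        /* hash the passphrase once, then re-hash the digest ~2000 times */
        SHA256((const unsigned char *)passphrase, strlen(passphrase), key);
        for (int i = 1; i < 2000; i++)
            SHA256(key, SHA256_DIGEST_LENGTH, key);
    }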



This is a very cool development. I hope the OpenSSL project verifies this method and imports it so that more projects can benefit.

Being a FreeBSD fan, I also hope this can be made to work with the "geli" device crypto driver.

Out of curiosity, has there ever been an exhaustive study of the effectiveness of cold boot attacks against various crypto systems on various platforms? I hear a lot about cold boot, but is it used much in the wild, for example in criminal investigations? I have never been able to find any reputable confirmation that anyone actually uses this attack to any useful end against real crypto installations.


While it is nice to see improvements like these, I sure would like to see more work done on ARM implementations, especially given the huge number of mobile devices based on ARM these days.

That said, does anyone know of recent ARM optimizations for OpenSSL or other SSL implementations?


This seems like a good idea. I'm curious what OpenSSL or other crypto libraries do to keep secrets from being paged out to disk by the virtual memory manager? That seems like a legitimate attack vector.


It is, but keep in mind that on general-purpose computers running AES, if you can extract secrets from virtual memory, you are already in a position to wiretap the AES process directly.


A cold-boot attack as detailed in [0] makes it vastly easier to extract secrets from physical memory than to wiretap the AES process, especially if you're not in possession of a privileged user account or a route into kernel space.

[0]: http://citpsite.s3-website-us-east-1.amazonaws.com/oldsite-h...


I don't know how to respond to the argument that it's "vastly easier" to chill a bank of RAM with inverted compressed air cans or liquid nitrogen and then extract its entire contents than it is to just use a debugger to watch a program run. I guess "No, I don't agree." suffices.


When you don't have the privilege level to connect a debugger to the process or the secrets are in kernel space, it's probably easier to just reboot with a cold-boot kit attached via USB than to attempt some form of escalation exploit. Inverted air cans or liquid nitrogen are entirely unnecessary in most uses of a cold-boot attack - they're only a requirement if the machine has some form of boot protection and it's desired to transfer the memory modules for some sort of offline forensic analysis.

As an additional plus, the tools which the authors of the linked paper have written for identifying key data in memory dumps are simpler to use than tracing the execution flow of even a simple application using a debugger. I'd probably use their approach even with full debug access, as obfuscating control flow around functions handling key data seems to be a more common practice than keeping said data obfuscated in memory.


How is that possible? Virtual memory is written out to the hard disk, so presumably anyone with physical access to the drive could read off anything that might be stored there. You wouldn't need admin rights to the box the way you would to force AES to run in gdb, right?


My point is just: local attacker - "the call is coming from inside the house!" You have big problems at this point.


On most platforms, mlock(2) and mlockall(2) "lock" data in memory, theoretically preventing it from being paged to disk.

OpenSSL provides hooks for adding a custom allocator which uses mlock, and some of its interfaces allow the app to allocate its own key space and pass pointers (hence theoretically allowing the app itself to use mlock or whatever platform-specific method of allocating "safe" memory is desired).
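
As a rough illustration of what such an allocator boils down to, here's a minimal POSIX sketch (hypothetical helper names; real code would also raise RLIMIT_MEMLOCK, add guard pages, and handle partial failures more carefully):

    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Allocate a page-aligned buffer and pin it so it can't be paged out. */
    void *alloc_locked(size_t len)
    {
        void *p = NULL;
        if (posix_memalign(&p, (size_t)sysconf(_SC_PAGESIZE), len) != 0)
            return NULL;
        if (mlock(p, len) != 0) {   /* may fail without enough RLIMIT_MEMLOCK */
            free(p);
            return NULL;
        }
        return p;
    }

    /* Wipe, unpin, and release. */
    void free_locked(void *p, size_t len)
    {
        memset(p, 0, len);
        munlock(p, len);
        free(p);
    }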

However, this attack vector is often overlooked and is very commonly exploitable - FileVault and the Login Window on OSX were vulnerable to this brand of attack last I checked. The easiest workaround is to use encrypted swap (and the linked project would be a perfect way to encrypt swap in a way that's not vulnerable to a cold boot attack).


GPG uses mlock() to prevent swapping, and warns if it cannot do this: http://www.gnupg.org/faq/GnuPG-FAQ.html#why-do-i-get-gpg_war...


Anyone know if this can be applied on Mac OS X Lion?


TreVisor, yes, because it runs below the OS and the recent MBAs provide AES-NI, if I remember correctly. Without the hypervisor you're out of luck, as it needs to patch the kernel and Apple is extremely unlikely to do that.





