
> encrypted SWAP

When I was setting up dm-crypt for my Linux dev box at work, I opted for an encrypted swap as well, since it really made sense: why keep on-disk data encrypted if swapped-out memory sits on disk in plain sight?
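For context, a typical way to set this up (not necessarily the exact config used here; the device path is just a placeholder) is a crypttab entry that generates a throwaway random key for the swap on every boot:

    # /etc/crypttab -- swap re-keyed from /dev/urandom at each boot
    cryptswap  /dev/sdX2  /dev/urandom  swap,cipher=aes-xts-plain64,size=512

    # /etc/fstab
    /dev/mapper/cryptswap  none  swap  sw  0  0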

It turned out to be a somewhat bad decision, because when the system did run out of memory (thank you, Firefox and Skype) and began swapping, the whole box just froze: the kernel was too busy encrypting more and more pages to even run the OOM killer.

Maybe systems use unencrypted swap by default to avoid triggering such bugs in stock setups.



I don't doubt your story, but modern CPUs can perform AES-GCM at gigabytes per second per core (5GB/sec on my laptop). Disk encryption is a different block cipher mode, but it shouldn't be much slower. So if implemented correctly, encrypted swap should not slow you down vs unencrypted swap. If in your setup encrypted swap is much slower than unencrypted swap, it's a bug in the kernel crypto and it should be fixed.
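If you want to sanity-check the raw userspace number on your own machine, one quick way (assuming a reasonably recent OpenSSL with the EVP interface) is:

    $ openssl speed -evp aes-256-gcm
    # throughput is reported per block size; on AES-NI hardware the larger
    # block sizes should land in the multi-GB/s range per core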


Even if a modern CPU can do that, the question is whether a stock distro kernel will. I could see default kernels not being compiled with the extensions that allow that speed, for compatibility purposes. Then again, I remember when distros often shipped different i386, i586 and i686 kernels, and when Red Hat switched away from i386 as the default kernel (which might be a lot more recent than you think, if you weren't thinking of all the cheap i386 devices it was still running on in the early 2000s). Then, even if the stock kernel does support it through a kernel module, the question is whether that module is loaded. Finally, looking at an Intel/AMD AES instruction compatibility list[1], not all processors from even a couple of years ago support AES instructions (notably, it appears Intel used it to differentiate the i3 line from the more powerful ones).

1: https://en.wikipedia.org/wiki/AES_instruction_set#Intel_and_...
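On Linux you can check both of those directly: whether the CPU advertises AES-NI, and whether the accelerated kernel module is actually in use. A minimal check (standard commands; output varies by system):

    $ grep -m1 -w -o aes /proc/cpuinfo   # prints "aes" if the CPU has AES-NI
    $ lsmod | grep aesni                 # aesni_intel should be listed
    $ grep aesni /proc/crypto            # kernel ciphers backed by the AES-NI driver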


Actually, Samsung 960 Pros are fast enough that encryption becomes a bottleneck, at least on my Xeon E5-2620v4.

You can test it yourself with `cryptsetup benchmark`:

    # cryptsetup benchmark --cipher aes-xts
    # Tests are approximate using memory only (no storage IO).
    #     Algorithm | Key |  Encryption |  Decryption
            aes-xts   256b  1874.2 MiB/s  2042.6 MiB/s
I was surprised and it made me doubt whether it's using AES-NI at all, but then again it's not an issue in practice.


    cryptsetup benchmark --cipher aes-xts
    # Tests are approximate using memory only (no storage IO).
    #  Algorithm | Key |  Encryption |  Decryption
         aes-xts   256b  2266.2 MiB/s  2273.8 MiB/s
perf says

    21.86%  cryptsetup  [aesni_intel]           [k] _aesni_dec4                       
    19.83%  cryptsetup  [aesni_intel]           [k] _aesni_enc4                       
    17.80%  cryptsetup  [aesni_intel]           [k] aesni_xts_crypt8                  
    13.73%  cryptsetup  [kernel]                [k] copy_user_generic_unrolled        
     2.20%  cryptsetup  [kernel]                [k] get_page_from_freelist            
     1.86%  cryptsetup  [glue_helper]           [k] glue_xts_crypt_128bit             
     1.53%  cryptsetup  [kernel]                [k] put_page                          
     1.36%  cryptsetup  [kernel]                [k] blkcipher_walk_done               
so it's using the code from:

https://github.com/torvalds/linux/blob/master/arch/x86/crypt...

I don't know if using AVX would speed it up. openssl is faster (`openssl speed -evp aes-256-gcm` and `aes-256-xts`).
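For a rough kernel-vs-userspace comparison of the same mode, the two commands referenced above can be run back to back (assuming an OpenSSL build that exposes XTS through EVP):

    $ cryptsetup benchmark --cipher aes-xts   # kernel crypto API path
    $ openssl speed -evp aes-256-xts          # userspace OpenSSL path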


I'm uncertain whether it's safe to use AESENC and friends in kernel mode in FreeBSD. A number of kernels can't safely use e.g. FPU registers in-kernel, and AES-NI operates exclusively on SSE state.


It's safe. The FreeBSD kernel is capable of preserving user FPU state when kernel threads (e.g., aesni(4)) need to use FPU registers.


In my experience, swap usually doesn't actually help. The system slows down so much it's unusable, and you wish the OOM killer would kill something so you could recover some processes/state, but it takes way too long to do so: many minutes during which the system is effectively frozen and you are helpless. (Some people think the OOM killer kicks in too soon. They're crazy ;)

I also ran into this when setting memory limits on docker containers. It basically made them freeze for entire minutes when they hit the limit instead of killing them. So instead I wrote my own script to check and signal them to gracefully exit before memory runs out (and after a few more seconds SIGKILL them).
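Not that script, but a minimal sketch of the idea, assuming standard `docker stats`/`docker kill` subcommands (the threshold, grace period, and overall structure here are made up for illustration):

    #!/bin/sh
    # Hypothetical watchdog: ask containers to exit before they hit their memory limit.
    THRESHOLD=90   # percent of the container's memory limit
    GRACE=10       # seconds to wait before escalating to SIGKILL
    docker stats --no-stream --format '{{.Name}} {{.MemPerc}}' |
    while read -r name pct; do
        pct=${pct%\%}                              # strip the trailing "%"
        if [ "${pct%.*}" -ge "$THRESHOLD" ]; then
            docker kill --signal=TERM "$name"      # ask it to shut down gracefully
            sleep "$GRACE"
            docker kill "$name" 2>/dev/null        # SIGKILL if it's still running
        fi
    done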


Swap wins when you have different large processes open that you aren't actually using simultaneously. On my laptop with 2GB of RAM, it's why I don't have to close my browser before opening something else.


Most new drives can do encryption in hardware with zero performance penalty, using either the ATA password or Opal for NVMe. Doing software/CPU encryption on swap, or any other partition for that matter, is simply wasteful and often slower.
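Whether a particular drive actually advertises this is easy to check (assuming `hdparm` for the ATA security feature set and the Drive Trust Alliance's `sedutil-cli` for Opal; device paths are examples):

    $ sudo hdparm -I /dev/sdX | grep -i -A8 security   # ATA security/password support
    $ sudo sedutil-cli --scan                          # drives reporting TCG Opal support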


Isn't on-drive encryption considered untrustworthy?


Now I'm curious whether anyone has tried to write their own encryption routines for any of the common hard drive firmware stacks.


Depends on your threat model. If you want to protect your data when your laptop is stolen, on-drive encryption is probably more than sufficient.


In my experience, Linux locking up for minutes at a time while swapping happens whether the disk is encrypted or not, and is a function of a crappy swapping algorithm.



