
Skimming through the docs, I'm still not sure how they can guarantee that an enclave can't be meaningfully emulated. Do they simply mean that you can't emulate a specific processor (since each processor uses a unique key)? That wouldn't be the same guarantee at all. Or is it just impractical to emulate SGX with acceptable performance? I'm probably missing something. (Note, I know SGX can be virtualized properly, but I'm specifically referring to emulation that would allow you to effectively debug an SGX-protected program).



https://software.intel.com/sites/default/files/332680-002.pd...

Page 36. After building the enclave you can have the CPU sign its state with an Intel key. You can then check the signed state against a known-good state and only submit your secret data to the enclave if the signature verifies against Intel's pubkey.
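A minimal sketch of that control flow. The real mechanism uses Intel's quoting enclave and EPID group signatures; here an HMAC with a shared key stands in for the signature scheme purely to show the decision logic, and all names (`INTEL_KEY`, `should_provision`, etc.) are illustrative assumptions, not SGX APIs.

```python
# Sketch: provision secrets only if the enclave's measurement matches a
# known-good build AND the CPU's signature over it verifies.
import hashlib
import hmac

INTEL_KEY = b"stand-in for Intel's signing key"  # assumption: symmetric stand-in


def quote(enclave_state: bytes) -> tuple:
    """What the CPU produces: a measurement plus a signature over it."""
    mrenclave = hashlib.sha256(enclave_state).digest()
    sig = hmac.new(INTEL_KEY, mrenclave, hashlib.sha256).digest()
    return mrenclave, sig


def should_provision(mrenclave: bytes, sig: bytes, known_good: bytes) -> bool:
    """Release secret data only when measurement and signature both check out."""
    if mrenclave != known_good:
        return False  # enclave was built from different code/data
    expected = hmac.new(INTEL_KEY, mrenclave, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected)


state = b"enclave code+data pages as measured at build time"
known_good = hashlib.sha256(state).digest()
m, s = quote(state)
assert should_provision(m, s, known_good)
assert not should_provision(m, b"\x00" * 32, known_good)  # forged signature rejected
```

The point of the two separate checks: the measurement proves *which* enclave is running, the signature proves it is running on genuine (non-emulated) hardware.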

Additional overview: https://software.intel.com/sites/default/files/managed/3e/b9...


So, I only have to take apart one single CPU to destroy the whole concept?


Probably not. I would guess they're using something similar to a certificate chain, i.e. a per-CPU key signed by Intel, which could then be revoked once the leak gets noticed.
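A sketch of that per-CPU-key idea under those assumptions: Intel signs each CPU's key, and a verifier checks the chain plus a revocation list. The HMAC stand-in for signatures and all names here are illustrative, not Intel's actual scheme (which uses EPID group signatures precisely so individual CPUs need not be identified).

```python
# Sketch: cert-chain-style verification with revocation, so extracting one
# CPU's key does not break the whole system once that key is revoked.
import hashlib
import hmac

INTEL_ROOT = b"intel root key (stand-in)"  # assumption: symmetric stand-in


def sign(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()


def verify(key: bytes, msg: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, msg), sig)


revoked = set()  # per-CPU key IDs whose leak has been noticed


def check_quote(cpu_id, cpu_cert, cpu_key, measurement, quote_sig) -> bool:
    if cpu_id in revoked:
        return False  # leaked key: the chain is cut here
    if not verify(INTEL_ROOT, cpu_id + cpu_key, cpu_cert):
        return False  # per-CPU key not endorsed by Intel
    return verify(cpu_key, measurement, quote_sig)


cpu_id, cpu_key = b"cpu-123", b"per-cpu key"
cert = sign(INTEL_ROOT, cpu_id + cpu_key)
meas = hashlib.sha256(b"enclave").digest()
q = sign(cpu_key, meas)
assert check_quote(cpu_id, cert, cpu_key, meas, q)

# A decapped CPU keeps producing valid signatures, but revoking its ID
# invalidates them without touching any other CPU.
revoked.add(cpu_id)
assert not check_quote(cpu_id, cert, cpu_key, meas, q)
```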


You can emulate an enclave, but, if you do, the emulated enclave doesn't get access to its seal key, etc. The idea is that you provision secrets accessible only with access to the key and then write an enclave to access those secrets.
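A toy illustration of why that works: secrets are encrypted to a key derived inside the real CPU, so an emulator can run the enclave code but cannot unseal the provisioned secret. The derivation below is an illustrative stand-in, not SGX's actual `EGETKEY` instruction, and the XOR cipher is a toy.

```python
# Sketch: sealing a secret to a key only the genuine CPU can derive.
import hashlib


def seal_key(cpu_fuse_key: bytes, mrenclave: bytes) -> bytes:
    # Real SGX derives the seal key from secrets fused into the die plus
    # the enclave identity; an emulator has no access to the fuse key.
    return hashlib.sha256(cpu_fuse_key + mrenclave).digest()


def xor_crypt(key: bytes, data: bytes) -> bytes:  # toy cipher for the sketch
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))


real_fuses = b"burned into this one CPU"
mr = hashlib.sha256(b"enclave code").digest()
secret = b"api credential"
sealed = xor_crypt(seal_key(real_fuses, mr), secret)

# On the same genuine CPU, the enclave re-derives the key and unseals:
assert xor_crypt(seal_key(real_fuses, mr), sealed) == secret
# An emulator, lacking the fuse key, gets garbage:
assert xor_crypt(seal_key(b"emulator guess", mr), sealed) != secret
```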

An enclave can verify itself to another enclave on the same machine, and Intel supplies a mechanism by which licensees (sigh) can verify their enclaves remotely.


I think the question is, is there any way to verify—from the userland of a cloud VM instance, where the enclave is talked to via hypercalls—that you're "installing" an enclave and its key into the processor itself (where it would be protected from access by rogue datacenter ops staff) rather than just into a hypervisor-emulated processor that is actually fully accessible to the machine owner?

The sibling post mentions that the processor signs its enclaves' outputs with an Intel private key, which would help a bit—but if that key is static, that'd be pretty easily thwarted by decapping a single processor to get the key, just like extracting the DRM keys from set-top boxes or game consoles. (TPMs are supposed to "self destruct" when you try to decap them, but that's only scary for the individualized keys in a TPM; if you're willing to sacrifice 1000 chips to reconstruct one key common to them all, it becomes just a matter of persistence.)


This is, indeed, a problem. I'm not aware of any mechanism to protect against it.




