I did not propose to replace intel's signing key, although that would also be possible[0]. I'm suggesting adding an additional keypair that can be used to decrypt a secure enclave's memory. Let's call it the backdoor key, because this is a legit backdoor, to be used by the owner of the house.
Attestation would be performed with intel's key (so you know it's not an emulator) but also indicate which keypair could be used to break the enclave (so you know who has access to the backdoor).
By default that could be an invalid (unusable) key. If you want to debug enclaves, e.g. because you suspect they run malware, you would add your own. If you want to run on a cloud provider, you send your own to the cloud provider. If you want to protect your own stuff from malware or rubberhose cryptanalysis, you don't need one and can leave the invalid key or one with a discarded private key in place.
To preempt a possible objection: This backdoor does not work retroactively. Only enclaves created after changing the key will be affected by it, and it will show up in their attestation. So an attacker with physical access would not gain access to past encrypted data; forward secrecy remains intact.
[0] Fully replacing intel's key would either require a write-only procedure to get it into the hardware at boot time, which would only make sense with physical access to the machine, or one would have to replace the old key with the new one from within an enclave; that way you would ensure trust-continuity and thus avoid the emulation problem.
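To make the proposal concrete, here is a minimal sketch of what a verifier's check could look like under this scheme. All names and structures are hypothetical (real SGX quotes use EPID/ECDSA signatures; an HMAC stands in for Intel's signature here purely for illustration):

```python
import hashlib
import hmac

# Hypothetical sketch: the quote is signed with Intel's key (so you know
# it's not an emulator) and additionally names the backdoor keypair that
# could decrypt this enclave (so you know who has access to the backdoor).
# None of these names are real SGX APIs.

INVALID_KEY = b"\x00" * 32  # stand-in for the default "unusable" backdoor key

def verify_quote(quote: dict, intel_key: bytes, expected_backdoor_pub: bytes) -> bool:
    """Accept the enclave only if the Intel signature checks out AND the
    advertised backdoor key is exactly the one the verifier expects."""
    payload = quote["measurement"] + quote["backdoor_pub"]
    sig_ok = hmac.compare_digest(
        quote["signature"],
        hmac.new(intel_key, payload, hashlib.sha256).digest(),
    )
    key_ok = quote["backdoor_pub"] == expected_backdoor_pub
    return sig_ok and key_ok
```

A DRM-style verifier would insist on `INVALID_KEY` (no one can debug the enclave); a cloud customer would insist on their own public key (only they hold the backdoor).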
> Attestation would be performed with intel's key (so you know it's not an emulator) but also indicate which keypair could be used to break the enclave (so you know who has access to the backdoor).
> By default that could be an invalid (unusable) key. If you want to debug enclaves, e.g. because you suspect they run malware, you would add your own.
This wouldn't solve the malware problem. The problematic kind of malware will use attestation, because otherwise you would simply emulate it from start to finish. If the malware author is paying any attention, then their malware will simply refuse to run (C&C won't provision it with a payload) if a backdoor key is installed, just as it would refuse to run if emulated.
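The refusal logic described here is trivial for a malware author to implement. A hypothetical sketch of the C&C-side gate (illustrative names, not a real protocol):

```python
INVALID_KEY = b"\x00" * 32  # the default "unusable" backdoor key from the proposal

def cnc_should_provision(quote: dict) -> bool:
    """Hypothetical C&C-side check: only ship the payload to enclaves whose
    attestation advertises no usable backdoor key, i.e. enclaves that the
    machine's owner cannot decrypt or debug. A full implementation would
    also verify Intel's signature on the quote to rule out emulation."""
    return quote["backdoor_pub"] == INVALID_KEY
```

Any machine with a real backdoor key installed simply never receives the payload, exactly as an emulator never would.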
You would gain the ability to use SGX for your own internal purposes with a form of escrow, but I think you could achieve this even on existing SGX by writing an enclave that emulates other enclaves and tweaking the key derivation process a bit. Admittedly it would be awkward.
> If the malware author is paying any attention, then their malware will simply refuse to run (C&C won't provision it with a payload) if a backdoor key is installed
That looks like a solution to the malware problem to me. Only people who want to run SGX-based DRM would leave the default key in place. And DRM is structurally indistinguishable from malware.
In theory you could also use a whitelist approach: write an open-source enclave-loader which initializes an enclave and then pulls signed code into the enclave (from within) to execute. But in practice malware would just trick the user into adding something to the whitelist. "Want to play this porn video? Just add our DRM!"
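The loader idea can be sketched in a few lines. This is a hypothetical illustration of the trust model, not real enclave code; the weak point is that the user controls `add_to_whitelist` and can be social-engineered into calling it:

```python
import hashlib

# Hypothetical open-source enclave-loader: it acts as the enclave's initial
# code and only pulls in payloads whose hash the user has whitelisted.

whitelist = set()

def add_to_whitelist(payload: bytes) -> None:
    # The social-engineering attack surface: "just add our DRM!"
    whitelist.add(hashlib.sha256(payload).hexdigest())

def load_into_enclave(payload: bytes) -> bool:
    """Admit the payload into the enclave only if its hash is whitelisted."""
    return hashlib.sha256(payload).hexdigest() in whitelist
```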
So really, just choose between owner-only-debuggability and no malware or DRM and malware. The choice is yours.