The document is not clear on how you achieved goal #1 ("must verifiably not be able to access or decrypt data"). There are lots of low-level details about Rust and padding attacks, but ultimately your security model is:
We keep the keys in AWS KMS, and we use AWS policy settings to ensure that only executables generated by our GitHub Actions job can access those keys.
This immediately suggests how Evervault (or a hacker impersonating an Evervault employee) could access or decrypt user data:
Option 1: Go to the AWS dashboard and change that KMS policy to be less restrictive. Then you can fetch the keys directly using the AWS CLI or download them from the console.
Option 1a: If you are using an IaC solution (Terraform, CloudFormation, etc.), taking that over instead may be even easier.
Option 2: Commit code to the E3 repo that exfiltrates the keys, and have that code deployed. Enclaves cannot connect to the internet, so you'll have to be a bit creative -- perhaps encrypting the string "HELLO123" should produce a master key?
Option 3: If you cannot commit code without someone noticing, go to the GitHub settings and grab all the environment variables/secrets from the repo. Then find the SSH key used to manage the self-hosted Actions runner which signs the images. Once you have those, you should be able to run the Action steps locally against your own repo.
Thanks for commenting! One thing worth clarifying before diving into specifics is that Evervault doesn't store user data. The model is that our customers store encrypted data but not keys, while Evervault stores keys but not encrypted data.
The attacks you mention (correctly) hinge on the security of the signing key used to sign the E3 binary. The security of this key is obviously extremely important, but it is only one part of the attestation process. Nitro Enclaves exposes attestation documents with platform control registers (PCRs). PCR8 is a hash of the certificate used to sign the binary, but there are five others which contain hashes of the specific binary being run as well as the in-memory state of the enclave. Combined, attestation covers both deployment integrity (GitHub Actions is the weakest point there) and the code itself. This is why exposing the code that runs inside E3 is a major focus of ours.
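To make the PCR check concrete, here is a minimal sketch (hypothetical values, not Evervault's actual code) of a client pinning the PCRs it expects for a known E3 build and comparing them against the PCRs reported in an attestation document. Real Nitro attestation documents are COSE-signed CBOR whose signature must first be verified against the AWS Nitro root certificate; that step is omitted here.

```python
# Hypothetical pinned values for a known build. PCR0 covers the enclave
# image and PCR8 covers the signing certificate; real values are 48-byte
# SHA-384 digests, so these repeated-byte strings are placeholders.
PINNED_PCRS = {
    0: "aa" * 48,
    8: "bb" * 48,
}

def pcrs_match(attested_pcrs):
    """True only if every pinned PCR is present and byte-identical
    in the (already signature-verified) attestation document."""
    return all(attested_pcrs.get(i) == v for i, v in PINNED_PCRS.items())
```

Pinning both registers is what ties the check to the deployment pipeline *and* the code: a binary signed with the right key but built from different source fails on PCR0, and a bit-identical binary signed by an attacker's key fails on PCR8.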
Re: KMS policy integrity, having the enclave communicate with IAM over TLS with certificate validation is one way for E3 to prove, to the end user providing keys/data, that the enclave's key policy has not been tampered with. It's a messy solution, but cryptographically robust assuming that the AWS root CA has not been compromised.
No, only "Option 3" depends on the security of the signing key.
The other options do not rely on compromising any encryption keys or production servers; they just need a developer's machine to be compromised.
You are talking about PCRs and hashes, and they are all good, but ultimately it all ends up as an "Attestation" field in the request to AWS, which is then evaluated against a policy like this one: [0]. So what prevents someone from planting a trojan on arcurn's machine, waiting until they log into the AWS console, and changing this policy to remove the "Condition" block? Once this is done, anyone can fetch the keys from KMS and decrypt customer data.
And I don't think "TLS certificate validation" is going to help here, as the enclave will be talking to the authentic server. Nor will checking PCRs -- because that check happens in the "Condition" block, which is so easy to remove.
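For concreteness, the kind of key-policy statement being discussed looks roughly like this (a sketch based on AWS's documented kms:RecipientAttestation condition keys for Nitro Enclaves; the principal, account ID, and digest are placeholders):

```json
{
  "Sid": "DecryptOnlyFromAttestedEnclave",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::111122223333:role/enclave-host" },
  "Action": "kms:Decrypt",
  "Resource": "*",
  "Condition": {
    "StringEqualsIgnoreCase": {
      "kms:RecipientAttestation:ImageSha384": "<expected-PCR0-digest>"
    }
  }
}
```

Deleting the "Condition" object is exactly the one-line change the attack describes: the "Allow" then applies to any caller assuming that principal, attested or not.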
Yep, but the binary running in the enclave has access both to the attestation document (including PCRs) directly from Nitro and to a mechanism for fetching IAM policies and verifying that they come from the genuine IAM server (by validating the TLS chain against the AWS CA).
Making sure the IAM policy hasn't been tampered with is then just a case of adding logic to the enclave app to verify that the policy is configured correctly for that particular enclave (compare PCRs, make sure there are no wildcards, etc.).
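A hedged sketch of that check (the policy shape and condition key are assumptions based on AWS's documented attestation condition keys, not Evervault's actual logic): the enclave parses the key policy it fetched over validated TLS and refuses to serve traffic unless the attestation condition still pins this enclave's own PCR0 and no wildcard principal has crept in.

```python
import json

def policy_still_pinned(policy_json, my_pcr0):
    """Reject a policy whose Allow statements have lost their attestation
    condition, pin a different PCR0, or grant a wildcard principal."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        cond = stmt.get("Condition", {}).get("StringEqualsIgnoreCase", {})
        if cond.get("kms:RecipientAttestation:PCR0") != my_pcr0:
            return False  # condition removed or retargeted
        if "*" in json.dumps(stmt.get("Principal", "")):
            return False  # wildcard principal
    return True
```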
How exactly is this provably secure? Is it secure against a dishonest AWS (e.g. their Nitro Enclaves don't perform as specified)? How about a dishonest Evervault?
Hi there, our current root of trust is the AWS Nitro Security Chip. This means that if there were some kind of rogue supply chain within AWS' procurement and installation of these chips, the root of trust could be tampered with. We've spent a lot of time with the AWS team and have been deeply impressed by the thought they have put into this, to the point that we trusted them with something as security-critical as Evervault. They have also worked with us on our specific implementation, which was extremely helpful.
The "dishonest Evervault" scenario is solved by us exposing the Nitro Enclaves attestation documents to customers (as well as E2E TLS where only the enclave has access to the plaintext). We currently only share source code and the corresponding platform control registers (PCRs) to prove that the running E3 is a "valid" E3 to enterprise customers, but over time we're expending a lot of energy to make these kinds of proofs accessible to our smaller customers as well. Stay tuned!
That doesn't seem to answer the question. He isn't asking how AWS protects against a supply chain attack; he's asking how this protects against AWS lying about how Nitro Enclaves function, and/or intentionally giving themselves a back door into them.
Makes sense — the practical answer is: it doesn't. This is the eternal debate with TEEs. At some point, a company/fab/service provider has to be trusted to be acting in good faith. HSMs have existed for a very long time, and compliance approaches like FIPS 140-2 have been (although painful) quite successful.
When compared with other TEE alternatives like Intel SGX and AMD SEV, we are extremely confident that AWS Nitro Enclaves is the best choice.
The solution, then, is proper end-to-end encryption, not the masquerade you're offering.
Your data is not really encrypted if Amazon, Intel, or AMD can be compelled by a secret government order to decrypt it. All of these trusted execution environments... rely on a trusted party with master keys, i.e. Amazon, Intel, or AMD, who can reflash microcode and expose all plaintext trivially and silently.
Hey Danny, I completely agree. Full end-to-end encryption is the ideal scenario.
The biggest challenge is how we can bridge the gap between how companies build software today (with very little, if any, encryption) and how companies will build software in the future. End-to-end encryption is great for closed ecosystems (e.g. messaging apps like Signal — although Signal actually trusts Intel SGX as a single point of failure[0]), but modern web applications are not that. They interact with third-party APIs, they have UIs; they are not built in complete isolation.
Things like Fully Homomorphic Encryption are exciting (and FHE is ultimately the end goal for how we build Evervault), but it is still a long way from being practical for a typical company building general-purpose software. It also doesn't solve the data-sharing scenario — certain companies just can't escape using third-party APIs and services.
Our mission is to encrypt the web, so the first hurdle we have to cross is getting developers who would normally not think about encryption to bake it into their software from day one. We think TEEs, and specifically Nitro Enclaves are the best way to make that happen.
If a better solution comes along, we'll be the first ones to pounce.
Yep, I think that makes sense. Certain use cases will have a need for some kind of on-prem/HSM approach, less from a practical perspective and more from a "doomsday modelling" perspective. Reminds me of the "nobody ever got fired for buying IBM" adage :)
We're building encryption infrastructure for developers. At the core of this infrastructure is our encryption engine, E3.
Today, we’re excited to share how we built it. Our blog[0] goes into more detail, but there's a quick summary below. We'd love to answer any questions you have!
- E3 is a simple encryption service for performing cryptographic operations with low latency, high scalability and extreme reliability.
- E3 is where all cryptographic operations for Evervault services happen.
- E3 ensures that Evervault never handles key material in environments that are not provably secure.
- E3 is written in Rust.
We’re grateful to the AWS Nitro Enclaves team for welcoming us to be a closed preview design partner, and are delighted to have been featured in the Nitro Enclaves Press Release[1] and on the Landing Page[2].
How does the customer prove that their data is secure? It seems like you can prove your own environment is secure (as long as you trust the TEE), but the customer is still trusting you to run in that environment, no?
The summary version is that we share source code and platform control registers (PCRs) with enterprise customers who need these kinds of security guarantees, and also expose the Nitro Enclaves attestation documents to them so they can establish secure channels with E3 in a provable way.
So basically: as a retail consumer, we can't trust you. You might as well be a malicious honeypot: scan and log for cryptocurrency keys, then "get hacked" and retire in Thailand.
Or maybe you're a government honeypot, like Crypto AG, or the numerous other cryptography companies that turned out to actually be mass decryption companies.
If you're building an encryption company, the onus is on you to prove it. BitWarden for example is fully open source, and you can self host the server.
Hey Danny, correct — we do not currently expose attestations to consumers. Over time, this is something we absolutely plan on doing.
One thing worth focusing on is that Evervault is built for developers. Developers do not have to build using Evervault, so a developer using Evervault to mislead their customers about their security isn't something we focus heavily on. There are much easier ways for developers to mislead customers about their security, but that's a conversation for another time :)
I completely agree re: the onus being on us to prove it. It's something we're actively trying to improve, and sharing how we built E3 is just the beginning of us sharing more about how we design & build. Transparency is an existential requirement for us to become a standard part of the developer toolkit. Watch this space!
I agree; the justifications in this thread rely a lot on inherent trust in the facilities provided. I get that your example is somewhat of a worst-case scenario, but nothing is a surprise these days, and it's entirely possible, as unfortunate as that is.
Hello! If I may, some (hopefully constructive) criticism:
- I skimmed the article, and I honestly am not sure who the customer is. AWS customers I suppose? It would be nice to have a quick "TL;DR" section so I can quickly tell if I am the right customer. (I'm also not an AWS customer, so maybe I just don't understand the jargon).
- A big pet peeve of mine when reading security related things is not seeing security assumptions. It would be nice to understand what your infrastructure does and does not protect against.
Our customers are mostly developers working at startups which process high volumes of sensitive data (things like payment details, healthcare data, credentials, etc.). We abstract away all of the infrastructure that E3 runs on, so we don't have any requirements for where our customers are hosted. Our customers run on a very diverse range of stacks/clouds, so being cloud-agnostic was a key design requirement.
The core security assumption with Evervault is that your API key is reasonably well managed. API keys can be easily rotated in our dashboard, but having both encrypted data and an Evervault API key could potentially lead to data leakage. Over time, we plan on adding features that proactively block data exfiltration/leakage. More on this soon!
A simple model for how security with Evervault works is: you store encrypted data, but not keys; Evervault stores keys, but no data. This creates a nice dual-responsibility model which far exceeds what most companies can do today.
Feel free to email me directly if you have any other questions! I'm on shane@evervault.com
> It is worth noting that this change could potentially leave implementations vulnerable to Bleichenbacher’s attack on PKCS#1 v1.5 RSA padding. In our implementation, these concerns are not an issue as we have no access to any of the decrypt responses or stack traces.
You don't need those things to exploit a padding oracle. A timing leak is sufficient.
Unless you're doing what s2n does to blind response times of any potential timing leaks, you're probably still vulnerable.
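Roughly, the blinding approach s2n takes can be sketched like this (an illustration, not s2n's or Evervault's actual code; s2n's real delays are on the order of seconds, shrunk here to keep the demo fast):

```python
import secrets
import time

def with_blinding(op, max_delay=0.005):
    """Run op(); if it fails, sleep a random interval before surfacing
    a single generic error, so the caller's observed latency carries no
    signal about why (or how quickly) the operation failed internally."""
    try:
        return op()
    except Exception:
        # Random delay drowns out any data-dependent timing difference.
        time.sleep(secrets.randbelow(1_000_000) / 1_000_000 * max_delay)
        raise ValueError("handshake failed")  # same error for all causes
```

The random delay has to be large relative to the timing difference being hidden, which is why s2n's delays are so long; uniform responses alone aren't enough if the variance being masked exceeds the jitter.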
Good point! We can safely isolate E3 from timing leaks because all E3 requests with potential timing leaks simply get transparently tunnelled to the end destination (e.g. an API you're passing through Relay), so a malicious end user cannot determine if there was a crypto error and, specifically, how long that crypto error took to happen.
More broadly, though, we don't currently use RSA and chose ECDH instead — so padding oracles aren't something we have to worry too much about. We also have similar safety models for things like invalid curve attacks.
> a malicious end user cannot determine if there was a crypto error and, specifically, how long that crypto error took to happen
That sounds interesting. I'd like to test this hypothesis sometime ;)
> We also have similar safety models for things like invalid curve attacks.
What is your defense against invalid curve attacks exactly? I'm very curious about that (although your target audience largely won't care, so this post probably doesn't need to be updated).
Two defenses that work:
1. Always check that the (x, y) coordinate satisfies the curve equation
2. Use compressed public key points
I prefer option 2 (especially since the patent on point compression expired years ago), but option 1 works.
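Option 1 on a toy curve, just to illustrate (real code would run the same check with a standard curve's parameters, e.g. P-256): reject any peer public key whose coordinates don't satisfy y^2 = x^3 + Ax + B (mod P).

```python
# Toy parameters for y^2 = x^3 + 2x + 3 over GF(97); illustration only,
# far too small for real cryptography.
P, A, B = 97, 2, 3

def on_curve(x, y):
    """True iff (x, y) is a solution of the curve equation mod P."""
    return (y * y - (x ** 3 + A * x + B)) % P == 0
```

Skipping this check is the whole invalid-curve attack: an attacker hands you a point on a *different*, weaker curve, and your scalar multiplication leaks key bits in a small subgroup.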
> That sounds interesting. I'd like to test this hypothesis sometime ;)
That sounds great! Feel free to shoot me an email on shane@evervault.com if you'd like to get further into the weeds :)
Our main defence against invalid curves is compressed public key points. It's not often that security mitigations also give some other nice advantages (smaller public keys!), but for this scenario it made total sense.
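To illustrate why compression helps (toy parameters, not Evervault's curve or code): with compressed keys the sender transmits only x plus the parity of y, and the receiver derives y from the curve equation itself, so an off-curve point can't even be represented. Choosing p ≡ 3 (mod 4) makes the modular square root a single exponentiation.

```python
# Toy curve y^2 = x^3 + 2x + 3 over GF(103); 103 % 4 == 3, so the
# square root of a quadratic residue w is w^((P + 1) // 4) mod P.
P, A, B = 103, 2, 3

def decompress(x, y_is_odd):
    """Recover y from x and a parity bit, or None if x is off-curve."""
    w = (x ** 3 + A * x + B) % P
    y = pow(w, (P + 1) // 4, P)       # candidate square root
    if (y * y) % P != w:
        return None                    # no root exists: x is not on the curve
    return y if (y % 2 == 1) == y_is_odd else P - y
```

The decompression itself doubles as validation: an x with no matching y on the curve fails the square check, which is the property that blocks invalid-curve inputs.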
I'm surprised that something like point compression can even be patented, considering that it's relatively straightforward mathematically (all of the difficulty seems to lie in the number theory to compute a solution, which I assume wasn't invented by whoever owned the patent)...
Yep, but there will be no variance in timing from the end user's perspective (regardless of what payload they send to E3) because neither encryption results nor timing data get returned to the end user.
Yes! You might be thinking of node-secureworker[0]. We've been the maintainers for the last couple of years, but haven't been using it in production (concerns around Intel SGX).
Sure! Our model is focused on making sure that data gets encrypted before it hits our customers' infrastructure, so that there's nowhere on the backend where the data exists in plaintext. This isolates developers from misconfiguration and key management.