Critiques of TEEs

This was posted by Loong from the REN team. Was hoping to get a response from the team on the validity of these critiques.

TEEs offer very different features. But, before touching on that: TEEs are not secure when running other people's code. There are well-known vulnerabilities (and new ones being discovered all the time) that compromise the security of the host system (the TEE itself) as well as the guest system (the code in the TEE). The TEE is also centralised around the manufacturer because you need to verify its keys with the manufacturer.

Okay, so those are the practical reasons why you shouldn’t use them in BFT systems. But, let’s assume they’re perfect (they’re not) and think about the feature differences.

In a TEE, you cannot see what you’re executing. You can still execute it on your own, though. This means you don’t need “permission” (consensus) from the whole network to run a computation and get a (potentially hidden) result. There are a couple of things at play here:

  • You cannot use this for interoperability. You need the whole network to verify that the user has in fact deposited BTC before minting an ERC20 representation of that BTC. In a TEE setup, the TEE can go ahead and do whatever it wants without permission. In sMPC, this is not possible so you can guarantee that an ERC20 will not be minted unless the majority of the network wants it to be.

  • You cannot have easily permissioned access to the data inside a TEE. In sMPC, the network can decide on some trigger that information is allowed to be revealed and then reveal it. A TEE has no such control. This can make working with programs that require inputs/outputs to/from specific people very difficult to define.

  • TEEs require specialised hardware so not as many people can easily get their machines configured and start participating. This reduces practical decentralisation.
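The consensus requirement described in the first bullet can be sketched as a simple threshold check. This is a hypothetical illustration, assuming each node independently checks the Bitcoin chain and votes; `verify_deposit` and the vote format are not from any real codebase:

```python
# Hypothetical sketch: mint an ERC20 representation of BTC only after a
# supermajority of network nodes have independently verified the deposit.
# THRESHOLD and the boolean-vote model are illustrative assumptions.

THRESHOLD = 2 / 3  # fraction of nodes that must approve the mint

def verify_deposit(node_votes, total_nodes):
    """Return True only if a supermajority of nodes saw the BTC deposit."""
    approvals = sum(1 for vote in node_votes if vote)
    return approvals / total_nodes >= THRESHOLD

# Each node checks the Bitcoin chain itself; one node disagrees here.
votes = [True, True, True, False]
if verify_deposit(votes, len(votes)):
    print("mint approved")   # 3/4 >= 2/3, so the mint goes through
else:
    print("mint rejected")
```

The point of the sketch is that no single party can trigger the mint: the decision only exists as an aggregate of independent checks.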

Ok, but there are a few pros that drive projects to use TEEs. The biggest one: sMPC is hard to get right in a BFT environment.

  • It is hard to get an sMPC to be fault tolerant against nodes going offline (RenVM has solved this with our new algorithm). Every other state-of-the-art algo that I know of fails if one node goes offline.

  • It is slower than TEEs. TEEs run everything locally so they’re orders of magnitude faster. RenVM is very fast, and has cut out a lot of unneeded data, making it the fastest algorithm I know of, but it will never be as fast as a network made up of TEEs.

  • TEEs are easier to coordinate than an sMPC because in an sMPC the participants have to communicate with each other. There are various ways to solve this, but a lot of projects pick TEEs so that they don’t have to.

Again, though: TEEs are good for setting up secure execution environments in situations where there is some base level of trust. They’re an extra layer of security for highly critical stuff. As a hacker, if I gain access to your system, you’ve made my life very hard. In practice, you have good security until the body of researchers in this space finds the next vulnerability.

But, they were not designed and built for decentralised BFT computing, where there is not meant to be a central point of trust and where you can mathematically prove the properties of your system.


TEEs offer very different features. But, before touching on that: TEEs are not secure when running other people’s code. There are well-known vulnerabilities (and new ones being discovered all the time) that compromise the security of the host system (the TEE itself) as well as the guest system (the code in the TEE).

This isn’t at all accurate. First, attacks against SGX are blown out of proportion (this is not to say there won’t be new attack vectors found, or that there aren’t more improvements to be made). I find it ironic that people are willing to think about the decade ahead when talking about fancy cryptography (which I’m a huge proponent of), and yet they try to limit TEE technology to what is possible today. Eventually, I believe TEEs will be multi-vendor, open source, and will have well-tested software and hardware suites (Enigma is trying to tackle the software part for the time being) that will make it very expensive to practically leak any sensitive information from the enclave - especially from enclaves running in a live network of many nodes.

Second, Enigma doesn’t allow you to run arbitrary code others supply - like any blockchain, code is run in a sandboxed VM (there’s a WASM interpreter running inside of the enclave). While we have not implemented this yet, writing a side-channel-resistant WASM interpreter could go a long way in limiting this attack vector. More importantly, most of the sensitive data (the actual encryption/signing keys) resides in a relatively small and fixed part of the code that handles all cryptographic operations. That part cannot be altered by outside players, and as long as it’s well audited and side-channel resistant, the really concerning attack - extracting the keys from inside the enclave - should not be possible.

The TEE is also centralised around the manufacturer because you need to verify its keys with the manufacturer.

Not true. We’ve implemented a bootstrap mechanism so we don’t rely on the manufacturer keys beyond an initial setup phase. This means that when actual sensitive data is stored on Enigma, it is encrypted with keys that were freshly generated and are unknown to Intel. Also, in SGX2 this will become a non-issue (you don’t EVER need to verify the keys with Intel).
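A minimal sketch of the bootstrap idea described above, assuming a one-time vendor attestation that certifies a key generated freshly inside the enclave. All names here are illustrative stand-ins, not Enigma's actual protocol or code:

```python
# Illustrative sketch (not Enigma's actual protocol): the vendor-rooted
# attestation is used exactly once, to certify a public key whose secret
# half was generated inside the enclave and is unknown to the vendor.
import os
import hashlib

def enclave_bootstrap():
    # Fresh secret generated inside the enclave; it never leaves it.
    secret = os.urandom(32)
    # Stand-in for deriving a real public key from the secret.
    public = hashlib.sha256(secret).hexdigest()
    return secret, public

def vendor_attest(public):
    # One-time step: the vendor attestation binds the enclave identity to
    # the fresh public key. Afterwards, clients encrypt to `public`
    # directly and never talk to the vendor again.
    return {"quote": "signed-by-vendor", "bound_key": public}

secret, public = enclave_bootstrap()
cert = vendor_attest(public)
assert cert["bound_key"] == public  # later messages need only this cert
```

The design point is that the vendor's role collapses to a single certification event rather than an ongoing dependency.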

In a TEE, you cannot see what you’re executing. You can still execute it on your own, though. This means you don’t need “permission” (consensus) from the whole network to run a computation and get a (potentially hidden) result. There are a couple of things at play here:

  • You cannot use this for interoperability. You need the whole network to verify that the user has in fact deposited BTC before minting an ERC20 representation of that BTC. In a TEE setup, the TEE can go ahead and do whatever it wants without permission. In sMPC, this is not possible so you can guarantee that an ERC20 will not be minted unless the majority of the network wants it to be.

I don’t fully understand what he’s trying to say, but from what I do understand - none of it makes any sense. In our network you can see the code that you’re executing, because the bytecode is sent unencrypted through the network (by design, for transparency/security reasons). The code that executes that bytecode in the enclave is signed and open-sourced, so you know it’s doing what it’s supposed to do. The only way to trick this mechanism is if you can theoretically fully break the TEE - but there’s an easy fix for that: ask multiple random nodes in the network to run the computation and reach consensus on the result. Because of the use of TEEs you can probably get away with fewer nodes involved in the computation compared to normal consensus mechanisms (since breaking a single enclave is not easy) - so even for BFT, TEEs provide a meaningful practical benefit.
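The redundancy idea above (sample a few random nodes, run the same computation on each, and accept only a majority result) could look roughly like this; all names are illustrative, not Enigma's API:

```python
# Hedged sketch of redundant execution across TEE-backed nodes: sample a
# random subset, run the computation on each, and accept the result only
# if a strict majority agrees. A single broken enclave cannot win.
import random
from collections import Counter

def run_with_redundancy(nodes, task_input, k=3):
    sample = random.sample(nodes, k)            # random subset of the network
    results = [node(task_input) for node in sample]
    value, count = Counter(results).most_common(1)[0]
    if count > k // 2:                          # strict majority required
        return value
    raise RuntimeError("no consensus among sampled nodes")

honest = lambda x: x * 2
broken = lambda x: -1                           # a compromised enclave
print(run_with_redundancy([honest, honest, broken], 21))  # -> 42
```

Because each enclave is individually hard to break, `k` can be much smaller than the quorum a plain BFT protocol would need, which is the practical benefit claimed above.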

  • You cannot have easily permissioned access to the data inside a TEE. In sMPC, the network can decide on some trigger that information is allowed to be revealed and then reveal it. A TEE has no such control. This can make working with programs that require inputs/outputs to/from specific people very difficult to define.

Nonsense. MPC and TEEs can both solve access control. We’ve discussed this at length (for MPC, see my paper on Decentralizing Privacy; for TEEs, see our blog).
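For concreteness, the access-control pattern being argued about (data revealed only when the network's trigger fires) can be illustrated with trivial n-of-n XOR secret sharing. This is a teaching sketch under assumed names, not either project's protocol:

```python
# Teaching sketch, not Enigma's or RenVM's protocol: n-of-n XOR secret
# sharing. The secret is reconstructable only when every share-holder
# releases its share, i.e. when the network's "reveal" trigger fires.
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int) -> list:
    """Split `secret` into n shares; any n-1 shares reveal nothing."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = xor_bytes(last, s)   # last share = secret XOR all others
    return shares + [last]

def reveal(shares: list) -> bytes:
    """XOR all shares back together to recover the secret."""
    out = shares[0]
    for s in shares[1:]:
        out = xor_bytes(out, s)
    return out

secret = b"swap-key"
shares = split(secret, 3)
assert reveal(shares) == secret  # trigger fired: all shares released
```

Both MPC networks and committees of TEE nodes can hold shares like these and release them on an agreed trigger, which is why access control is available in either design.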

  • TEEs require specialised hardware so not as many people can easily get their machines configured and start participating. This reduces practical decentralisation.

This is really the only valid point made - but TEEs are becoming more and more ubiquitous and are supported by all the large CPU vendors (to different degrees). This is still much more accessible/decentralized than PoW miners or staking-as-a-service companies.

It is hard to get an sMPC to be fault tolerant against nodes going offline (RenVM has solved this with our new algorithm). Every other state-of-the-art algo that I know of fails if one node goes offline.

Not true - there is work on cheater detection for a dishonest majority, but it is generally quite expensive. Actually, we’ve been looking (this is purely research for now) at combining MPC with TEEs to achieve the best of both worlds: efficient cheater detection when using MPC protocols.

Again, though: TEEs are good for setting up secure execution environments in situations where there is some base level of trust. They’re an extra layer of security for highly critical stuff. As a hacker, if I gain access to your system, you’ve made my life very hard. In practice, you have good security until the body of researchers in this space finds the next vulnerability.
But, they were not designed and built for decentralised BFT computing, where there is not meant to be a central point of trust and where you can mathematically prove the properties of your system.

Naturally, I disagree with this conclusion. I hope my reasoning above clarifies why the individual claims made in support of this conclusion are false. And if I may relate this to my own experience - I’ve been working on MPC for years (my entire thesis was on the subject and I built an MPC VM as early as 2015), and while I see great promise in it and in how it’s going to shape privacy technologies in the years to come, it’s clear that its role is more limited than running fully-blown VMs.

TEEs are the only viable solution currently (and in the foreseeable future) for general-purpose privacy-preserving computation. ZKPs can solve specific tasks really well (e.g., privacy coins and compressing information on-chain), and MPC can solve specific tasks really well (trustless CRS setups, non-custodial crypto custody and trading).

Hope this helps!
