I'd love the easy ability to run confidential computing workloads with fine-grained control over the data they get access to. You can do this now on the desktop using SGX (etc.), but on mobile it's really hard.
As a specific example of this, it'd be great to be able to run Whisper continually and have strong, system level guarantees about what can read the data.
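To make the kind of guarantee I mean concrete, here's a minimal sketch of the API shape I'd want. Nothing like this exists in Android today; `IsolatedTranscriber` and `WhisperGateway` are made-up names. The point is structural: raw audio flows into the isolated workload, and the only thing with a path back out is text.

```kotlin
// Hypothetical sketch only. The host app can feed audio in and read
// transcripts out, but there is no API surface that returns raw audio,
// so the guarantee is enforced by the isolation boundary rather than
// by convention.
interface IsolatedTranscriber {
    fun feedAudio(frame: ByteArray)   // crosses into the isolated workload
    fun nextTranscript(): String?     // the only data allowed back out
}

class WhisperGateway(private val transcriber: IsolatedTranscriber) {
    fun onAudioCaptured(frame: ByteArray) = transcriber.feedAudio(frame)
    fun pollText(): String? = transcriber.nextTranscript()
}
```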
The threat model you have in your head seems to imply that you don't trust your OS to not peek into what Whisper is doing? There are very few workloads that need or can operate under that model.
It's not really a matter of need, more a matter of good hygiene.
Do you trust any modern OS not to accidentally include sensitive information when it generates a crash report for an app and sends it off to some remote server in the background?
Isolation is a useful tool. In an ideal world it can be done perfectly at the OS level, but we don't live in that world.
I agree that being able to isolate things that live in different security domains is a useful tool. That said, I am not really seeing how pKVM provides useful primitives for much other than DRM, which has historically been the primary use case for the kind of trusted execution that isolated VMs provide.
Consider this: I want to be able to use a regular Android OS, but I don't completely trust it; either it's purposely malicious or it's just accidentally going to leak info. So isolation is good in this case, and it's much easier to audit the mechanism of isolation than the whole OS.
The problem with the DRM and "trusted computing" part is that it's under someone else's control, some central authority, etc. From my reading of the docs, this is not the case with pVMs. From https://source.android.com/docs/core/virtualization/security:
> Data is tied to instances of a pVM, and secure boot ensures that access to an instance’s data can be controlled
> When a device is unlocked with fastboot oem unlock, user data is wiped.
> Once unlocked, the owner of the device is free to reflash partitions that are usually protected by verified boot, including partitions containing the pKVM implementation. Therefore, pKVM on an unlocked device won't be trusted to uphold the security model.
So my reading of this is that it is under the user's control, as long as they have the ability to unlock the bootloader and reflash the device with their own images.
I'd love someone more knowledgeable to weigh in, but this tech, to me, doesn't seem that close to TPM/DRM-type chips, where there is no possibility of user control.
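To illustrate what "data tied to instances of a pVM" plus secure boot buys you, here's a rough sketch of the sealing idea. This is my illustration, not the actual pKVM mechanism, and `deriveSealingKey` and its inputs are made up: the key protecting a pVM's data mixes in the verified-boot state, so unlocking or reflashing changes an input and the old data can no longer be unsealed.

```kotlin
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Illustrative only: derive a per-instance sealing key from a hardware
// secret, the verified-boot measurement, and the pVM instance ID. Any
// change to the boot chain (e.g. after fastboot oem unlock + reflash)
// yields a different key, so previously sealed data becomes unreadable.
fun deriveSealingKey(
    deviceSecret: ByteArray,     // fused hardware secret, never leaves the device
    bootMeasurement: ByteArray,  // hash of the verified-boot chain
    instanceId: ByteArray        // per-pVM instance identifier
): ByteArray {
    val mac = Mac.getInstance("HmacSHA256")
    mac.init(SecretKeySpec(deviceSecret, "HmacSHA256"))
    mac.update(bootMeasurement)
    mac.update(instanceId)
    return mac.doFinal()
}
```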
It is control in the sense that you can run your own applets, I guess, but it is not control in the sense that you can necessarily inspect what the programs are doing, because once you reflash the device I'm sure the DRM programs will refuse to run.
As I said in my other response, I make heavy use of trusted (confidential) VMs for machine learning in cloud environments.
There are also vendors that are doing smart contract execution in trusted computing devices so you can get the benefits of trusted execution without the overhead of everyone executing the same code.
The issue here isn't the technology though, it's imagination.
Think about gaming in VR. You might want to make a game where the ML can adapt to the physical peculiarities of a person (think personalized audio for AirPods) but want to guarantee it isn't giving the person an advantage. Even simple things like setting up a VR system (or any physical computing device) can give someone an advantage if the setup can be corrupted.
At the moment there are lots of "anti-cheat" technologies that attempt to solve this, but really it needs trusted execution.
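Roughly, the check a server would do looks like this (a hedged sketch, not any particular vendor's API; `admitClient`, the quote format, and the key handling are all illustrative): the client's trusted environment signs a server-chosen nonce together with a measurement of the code it's running, and the server only admits clients whose measurement matches a known-good build.

```kotlin
import java.security.PublicKey
import java.security.Signature

// Illustrative attestation check for an anti-cheat server. The TEE on
// the client signs (nonce || measurement); the server verifies both
// that the measurement matches the expected game build and that the
// signature chains back to the TEE vendor's attestation key.
fun admitClient(
    vendorKey: PublicKey,          // attestation root from the TEE vendor
    nonce: ByteArray,              // server-chosen, prevents replay
    measurement: ByteArray,        // hash of the code the TEE is running
    quoteSignature: ByteArray,     // TEE's signature over nonce || measurement
    expectedMeasurement: ByteArray // known-good build hash
): Boolean {
    if (!measurement.contentEquals(expectedMeasurement)) return false
    val sig = Signature.getInstance("SHA256withECDSA")
    sig.initVerify(vendorKey)
    sig.update(nonce)
    sig.update(measurement)
    return sig.verify(quoteSignature)
}
```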
I'd love for my banking app to be completely isolated from the rest of my phone OS, in case I get malware. I'm sure journalists at risk of being targeted by NSO and its ilk would appreciate isolation for their messaging apps.
This is an interesting use case (basically Qubes), but it has high overhead and I don't really see the framework as being designed to support this, at least yet. You'd need to move all sorts of services into the VM to support the app (for example, someone needs to pass touch input and network traffic into the VM), and at that point it begins to look like an entire OS running in there.
AFAIK Qualcomm's implementation does include passing touch input / display into the VM and is marketed in similar terms ("Trusted User Interface") to TEE-based techs, except they are not in S-EL0/1.
I've only seen this used in some really obscure scenarios (cryptocurrency wallets) though.
So I'm most familiar with using this in cases like machine learning on private data in cloud environments where you want to make it impossible for the cloud operator to see the data you are using.
I think there are use cases like this outside the mobile _phone_ that are interesting. For example, on-device learning for edge devices where the device is not under your control.
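The flow in the cloud case is roughly: verify the enclave's attestation first, then encrypt the data to a key pair generated inside the enclave, so the operator only ever handles ciphertext. A sketch under those assumptions (`uploadPrivateData` is illustrative, and `attestationOk` stands in for a real quote-verification step like the one above):

```kotlin
import java.security.PublicKey
import javax.crypto.Cipher

// Illustrative confidential-ML upload path: refuse to send anything
// until the enclave's attestation has been verified, then encrypt to
// a key that only the enclave holds, so the cloud operator never sees
// plaintext training data.
fun uploadPrivateData(
    enclaveKey: PublicKey,   // key pair generated inside the enclave
    attestationOk: Boolean,  // result of verifying the enclave's quote
    record: ByteArray
): ByteArray {
    require(attestationOk) { "refusing to send data to an unverified enclave" }
    val cipher = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding")
    cipher.init(Cipher.ENCRYPT_MODE, enclaveKey)
    // Fine for small records; a real system would wrap a symmetric key
    // instead of RSA-encrypting the payload directly.
    return cipher.doFinal(record)
}
```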
See, the thing here is that if the device is not under "your" control ("you" being a company or something, and the device being owned by a user), I don't think they will really appreciate you using their hardware to train your model in a way they don't get to see. Why would I want to support this on my own phone?
> I don't think they will really appreciate you using their hardware to train your model in a way they don't get to see.
This absolutely isn't the case. I know a number of vendors who are deploying edge ML capacity in satellites where the use case is for "agencies" to deploy ML algorithms that they (the vendors) cannot see.
I used to work at Google adjacent to this stuff, and A) you wouldn't boot up a whole VM for this on a phone (that'd be very wasteful), and B) there are much simpler ways to provide the same guarantee.
So in general, I'd just avoid labeling the quality of other people's takes. You never know who is reading yours.
I agree there are currently better ways of doing this (because, as you mention, the resource/protection trade-off for this technology in this application is sub-optimal), but the context here is an example on HN where the data-privacy benefit is obvious, so I didn't have to write a whole paper explaining it.
Its "not even wrong", if you had a million monkeys on a million typewriters with a million trillion millenia, still, none would come up with a paper long enough to explain how that'd help anything (ex. trivially, microphone)
Trusted computing can be used for DRM. I'm much more interested in it as a privacy enhancing technology: the fact that you can have strong guarantees about what can be done with data in the enclave is useful for a lot of applications where you have sensitive data.
(Putting aside for the moment the fact that most, if not all, trusted computing platforms have some security vulnerabilities. Obviously this is bad, but it doesn't preclude their utility.)
Not really. ARM TZ has been repeatedly blown open, in part because it’s not really a separate core or virtualized workload, but a different “mode of operation” that the standard CPU cores switch into temporarily. Basically going back-and-forth between TZ and your OS if I understand correctly. Turns out that’s a side-channel attack nightmare.