
Two-way isolation seems like it'd only be useful for DRM and Treacherous Computing.



This is such a bad take.

I'd love an easy way to run confidential computing workloads with fine-grained control over the data they get access to. You can do this now on the desktop using SGX (etc.) but on mobile it's really hard.

As a specific example of this, it'd be great to be able to run Whisper continually and have strong, system level guarantees about what can read the data.
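
Roughly the shape I have in mind, as a toy Kotlin sketch (the names here are mine, this isn't the real AVF API): the host only ever pushes audio frames across one narrow interface, and only text comes back across the boundary.

    // Conceptual sketch only: in a real pVM setup the transcriber would live in
    // the protected guest and the host would talk to it over a vsock-style
    // channel; nothing else would cross the boundary in either direction.
    interface IsolatedTranscriber {
        fun transcribe(pcmFrames: ByteArray): String
    }

    // Stand-in for the guest side, so the host-facing contract is visible.
    class FakeGuestTranscriber : IsolatedTranscriber {
        override fun transcribe(pcmFrames: ByteArray): String =
            "transcript of ${pcmFrames.size} bytes of audio"
    }

    fun main() {
        val guest: IsolatedTranscriber = FakeGuestTranscriber()
        val frames = ByteArray(16_000) // say, one second of 16 kHz 8-bit audio
        println(guest.transcribe(frames)) // only text ever leaves the boundary
    }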


The threat model you have in your head seems to imply that you don't trust your OS to not peek into what Whisper is doing? There are very few workloads that need or can operate under that model.


It's not really a matter of need, more a matter of good hygiene.

Do you trust any modern OS not to accidentally include sensitive information when it generates a crash report for an app and sends it off to some remote server in the background?

Isolation is a useful tool. In an ideal world it can be done perfectly at the OS level, but we don't live in that world.


I agree that being able to isolate things that have different security domains is a useful tool. That said, I am not really seeing how pKVM provides useful primitives for much other than DRM, which has historically been the primary use case for the kind of trusted execution that isolated VMs seem to provide.


Consider that I want to be able to use a regular Android OS, but I don't completely trust it; either it's purposely malicious or it's just accidentally going to leak info. So isolation is good in this case: it's much easier to audit the mechanism of isolation than the whole OS.

The problem with the DRM and "trusted computing" part is that it's under someone else's control, some central authority, etc. From my reading of the docs, this is not the case with pVM; from https://source.android.com/docs/core/virtualization/security

> Data is tied to instances of a pVM, and secure boot ensures that access to an instance’s data can be controlled

> When a device is unlocked with fastboot oem unlock, user data is wiped.

> Once unlocked, the owner of the device is free to reflash partitions that are usually protected by verified boot, including partitions containing the pKVM implementation. Therefore, pKVM on an unlocked device won't be trusted to uphold the security model.

So my reading of this is that it is under the user's control, as long as they have the ability to unlock the bootloader and reflash the device with their own images.
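
My (possibly wrong) mental model of "data is tied to instances of a pVM" is something like the sealing sketch below, where the data key depends on the verified-boot measurement, so a reflashed image simply can't derive the old key. The derivation and names are mine, not taken from the pKVM docs.

    import java.security.SecureRandom
    import javax.crypto.Cipher
    import javax.crypto.Mac
    import javax.crypto.spec.GCMParameterSpec
    import javax.crypto.spec.SecretKeySpec

    // Hypothetical illustration (not the pKVM implementation): derive the
    // sealing key from a per-instance secret plus the boot measurement, so
    // data sealed under one image/instance is unreadable after a reflash.
    fun sealingKey(instanceSecret: ByteArray, bootMeasurement: ByteArray): SecretKeySpec {
        val mac = Mac.getInstance("HmacSHA256")
        mac.init(SecretKeySpec(instanceSecret, "HmacSHA256"))
        return SecretKeySpec(mac.doFinal(bootMeasurement), "AES") // 32-byte AES-256 key
    }

    // Returns (iv, ciphertext) for data sealed to that instance/measurement.
    fun seal(key: SecretKeySpec, plaintext: ByteArray): Pair<ByteArray, ByteArray> {
        val iv = ByteArray(12).also { SecureRandom().nextBytes(it) }
        val cipher = Cipher.getInstance("AES/GCM/NoPadding")
        cipher.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(128, iv))
        return iv to cipher.doFinal(plaintext)
    }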

I'd love someone who is more knowledgeable to weigh in, but this tech, to me, doesn't seem that close to TPM/DRM type chips where there is no possibility of user control.


It is control in the sense that you can run your own applets I guess, but it is not control in the sense that you can necessarily inspect what the programs are doing, because once you reflash the device I'm sure the DRM programs will refuse to run.


As I said in my other response, I make heavy use of trusted (confidential) VMs for machine learning in cloud environments.

There are also vendors that are doing smart contract execution in trusted computing devices so you can get the benefits of trusted execution without the overhead of everyone executing the same code.


There are a handful of potential uses for confidential VMs, but not many of them really seem to make sense on phones?


The issue here isn't the technology though, it's imagination.

Think about gaming in VR. You might want to make a game where the ML can adapt to the physical peculiarities of a person (think personalized audio for AirPods) but want to guarantee it isn't giving the person an advantage. Even simple things like setting up a VR system (or any physical computing device) can give someone an advantage if the setup can be corrupted.

At the moment there are lots of "anti-cheat" technologies that attempt to solve this, but really it needs trusted execution.


I'd love for my banking app to be completely isolated from the rest of my phone OS, in case I get malware. I'm sure journalists at risk of targeting by NSO and its ilk would appreciate isolation for their messaging apps.


This is an interesting use case (basically Qubes) but it has high overhead and I don't really see the framework as being designed to support this, at least yet. You'd need to move all sorts of services into the VM to support the app (like, for example, someone needs to pass touch input and network traffic into the VM), and at that point it begins to look like an entire OS running in there.


Qualcomm does have similar architecture deployed, here is their hypervisor: https://github.com/quic/gunyah-hypervisor.

AFAIK Qualcomm's implementation does include passing touch input / display into the VM and is marketed in similar terms ("Trusted User Interface") to TEE-based tech, except they are not in S-EL0/1.

I've only seen this used in some really obscure scenario (cryptocurrency wallet) though.


So I'm most familiar with using this in cases like machine learning on private data in cloud environments where you want to make it impossible for the cloud operator to see the data you are using.

I think there are use cases like this outside the mobile _phone_ that are interesting. For example, on-device learning for edge devices where the device is not under your control.
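
The usual pattern there, as a rough Kotlin sketch (verifyAttestation is a made-up stand-in for whatever verifier the platform provides, not a real library call): check the enclave's attestation report first, and only then wrap the dataset key to the enclave's public key, so the cloud operator never sees the key or the plaintext data.

    import java.security.PublicKey
    import javax.crypto.Cipher
    import javax.crypto.KeyGenerator

    // Hypothetical client-side flow: refuse to release any data key unless the
    // attestation report checks out, then wrap the fresh AES key to the
    // enclave's public key so only code inside the enclave can unwrap it.
    fun releaseDataKey(
        report: ByteArray,
        enclaveKey: PublicKey,
        verifyAttestation: (ByteArray) -> Boolean
    ): ByteArray {
        require(verifyAttestation(report)) { "attestation failed, not releasing data key" }
        val dataKey = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()
        val wrap = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding")
        wrap.init(Cipher.WRAP_MODE, enclaveKey)
        // Send the wrapped key to the enclave; keep dataKey locally to encrypt the dataset.
        return wrap.wrap(dataKey)
    }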


See, the thing here is that if the device is not under "your" control ("you" being a company or something, and the device being owned by a user), I don't think they will really appreciate you using their hardware to train your model in a way they don't get to see. Why would I want to support this on my own phone?


> I don't think they will really appreciate you using their hardware to train your model in a way they don't get to see.

This absolutely isn't the case. I know a number of vendors who are deploying edge ML capacity in satellites where the use case is for "agencies" to deploy ML algorithms that they (the vendors) cannot see.


Btw SGX has been removed from 11th gen desktop CPUs and onwards https://www.bleepingcomputer.com/news/security/new-intel-chi...


I used to work at Google adjacent to this stuff, and A) you wouldn't boot up a whole VM for this on a phone, that'd be very wasteful, B) there are much simpler ways to provide the same guarantee.

So in general, I'd just avoid labeling the quality of other people's takes. You never know who is reading yours.


I agree there are currently better ways of doing this (because, as you mention, the resource/protection trade-off of this technology for this application is sub-optimal), but the context here is as an example on HN where the data privacy benefit is obvious, so I didn't have to write a whole paper explaining it.


Its "not even wrong", if you had a million monkeys on a million typewriters with a million trillion millenia, still, none would come up with a paper long enough to explain how that'd help anything (ex. trivially, microphone)


> trivially, microphone

Qualcomm has trusted input for at least touch sensors into their trusted enclave (unsure about microphone input at this point). Look for "TUI" in https://www.qualcomm.com/content/dam/qcomm-martech/dm-assets...


Yes


is it really? wasn't that the whole point of ARM TZ/SEP?


Trusted computing can be used for DRM. I'm much more interested in it as a privacy enhancing technology: the fact that you can have strong guarantees about what can be done with data in the enclave is useful for a lot of applications where you have sensitive data.

(Putting aside for the moment the fact that most - if not all - trusted computing platforms have some security vulnerabilities. Obviously this is bad, but it doesn't preclude their utility.)


Not really. ARM TZ has been repeatedly blown open, in part because it’s not really a separate core or virtualized workload, but a different “mode of operation” that the standard CPU cores switch into temporarily. Basically going back-and-forth between TZ and your OS if I understand correctly. Turns out that’s a side-channel attack nightmare.


This is supposed to be a replacement for TrustZone applets.


This seems like an excellent tool for digital ID cards, banks, government authentication apps, maybe 2FA apps, cryptocurrency wallets, you name it. Anything that's more important than a calculator.

DRM and remote attestation already use a separate secure environment, so I don't see what would change by adding virtualisation.


Websites will require digital ID just to use them, along with remote attestation. They will also be able to ban or block you in an actually effective and comprehensive way.

There will be a chilling effect because people won't want to upset their Google/Microsoft/Apple/Meta etc overlords by saying or doing the wrong thing, and then get locked out of services they need to exist in society, do their job, spend money, etc.


Digital ID exists and is widely used, yet I only need to use my digital ID to authenticate with government services. Remote attestation is the norm for many types of apps already yet I can use my bank app on my rooted phone just fine, or use my phone to authenticate with my government's SSO system.

I'm no fan of the modern dependence on Play Services or Google's attempts to kill adblockers through remote attestation, but none of these technologies are inherently bad. Business devices authenticating to business websites should allow remote attestation to verify that their hardware has not been tampered with, just as an extra security measure.

Maybe your government is more evil or incompetent than mine, but bad governments aren't going to be limited by technological concepts like these.


I'm not worried about the government, I'm more worried about inscrutable decisions made by companies like Google, where their automated systems decide that you're an anomaly, and thus malicious, and choose to ban you.

Instead of just losing your account, you (or at least both your machine and your digital ID) are banned for good. This already happens with phones, where the entire device gets banned by apps for good; adding a layer of digital ID on top of it worsens the consequences of such decisions by platform owners against users.

> Remote attestation is the norm for many types of apps already yet I can use my bank app on my rooted phone just fine,

Many people can't on their rooted phones, and this cat-and-mouse game will eventually be won by the parties with millions/billions to throw at it.


Digital ID is safe from abuse by our ad overlords only as long as it happens in insular implementations for markets smaller than California. Things would look wildly different if digital ID were a thing in the USA (I find it rather amusing how they claim to have no ID at all, yet a decade in Europe seems to involve less presenting of ID than a month in the US involves presenting their driver's-license ID substitute).

But I don't disagree, I'd rather have a rooted phone with a few islands out of my (and the software that I run!) control for sensitive authentication use cases than a phone where I'm not in control at all. Or than two phones, because only one of them can be rooted.


Maybe banking apps would let you run them on rooted phones if they were in an isolated VM


Every app will want to run in isolated VMs and rooting will mean nothing.

It's like SafetyNet today: you already can't run a good number of apps on unapproved platforms, even apps that don't handle confidential data.


In an ideal world, you could opt-out of isolation without giving the code in the container a way to know. You wouldn't want to opt-out your banking TAN generator just like you wouldn't want to put the password in your email footer, but a Facebook client would likely be a popular target (despite the hypothetical risk of an attacker destroying your reputation by posting in your name).


This nonsense means I just use them in the browser. There is no functionality the apps would provide me that makes it worth fighting with their superstitious nonsense.


Some banks already require you to install and use apps to approve transactions made outside of the app.

When I traveled, this is how I was able to spend money without having to call my bank every time I tried to use my card in person.


Why do US banks do that? I’ve never had a UK or EU bank call me to verify a transaction.

Do you have the IdentityCheck/SecureCode/3-D Secure stuff (2FA for online transactions and at certain terminals)? Are these calls for transactions without chip + PIN?

I’ve had some transactions declined while travelling, but maybe about 1 in 1000, and still no call, and nothing the bank support could do to allow them if I called. I’d just have to use a different bank’s card with that vendor. It’s very much a “computer says no” situation then. Otherwise, the payment just goes through in the 99.9% of cases.

But the banks in central EU, the Nordics, and the UK don’t seem to monitor the transactions I make while travelling to the point that there would be an actual person involved (calling me or reaching out in some other way).

I’m mostly curious about what problem these bank calls are solving. Is it for credit card fraud? In that case, I wonder why this seems to not be a practice in Europe. Is it because we do chip & PIN in physical payments, and 2FA for online/some kiosks?


> I’ve never had a UK or EU bank call me to verify a transaction

That probably just means that you never made transactions that crossed the bank's suspicion threshold. Which might be quite high if the bank is confident that it won't be on the hook for credential abuse and does not care if its customers lose money to identity theft. That confirmation call would be an indication of good service, not of bad service.

I'm not saying that calls would be preferable to better authentication schemes like chip+PIN (PIN skimming is very much a thing though); calls are just another second factor, after all, and not even a particularly safe one. But defense should be layered, and that layer stack should absolutely contain a form of confirmation call at some level if you are a bank.


What are you supposed to do if you don't have a smartphone? My bank simply texts me if there's a suspicious purchase and you reply "YES".


See for example the Xbox, where everything runs as a VM.


Yep, you need only look at the number of server providers offering confidential computing (pretty much only the big 3) and the premium they charge for it (10x, except AWS “trust me bro” Nitro)

Confidential computing is cool and useful when you’re the one controlling the VM, but scary when you’re the one blindly running it on your hardware

Hopefully this gets (publicly!) backdoored like SEV, SGX, etc


> Confidential computing is cool and useful when you’re the one controlling the VM, but scary when you’re the one blindly running it on your hardware

Important point.

> Hopefully this gets (publicly!) backdoored like SEV, SGX, etc

From my reading, this doesn't need to be backdoored: if you have the ability to unlock the bootloader, you are not reliant on Google's root of trust to be able to use this feature. You can go ahead and become your own "vendor" by signing your own images, or use your choice of vendor, then relock the bootloader and have the same security guarantees.

I'll admit this is only from a cursory glance over the documentation and a vague understanding, so I'm happy to be corrected, but it seems a lot of the arguments in this thread are about your first point: who has control over the OS.

I'll also add that the EU is being quite proactive about people having control over their own devices and who their 'choice of vendor' is, so while I understand the concerns people bring up, I'm a bit more optimistic that it can be a more useful tool than not.



