
These CPU flaws make it seem as if virtualization in the data center is becoming really, really dangerous. If these exploits keep appearing, the only way forward would be dedicated machines for each application of each customer. Essentially, this might be killing the cloud by 1000 papercuts: it loses efficiency and cost effectiveness, while locally hosted hardware does not necessarily need all the mitigations applied (there is no possibility of unknown 3rd-party code being deployed to the same server).



Many years ago, OpenBSD's Theo de Raadt sneered at virtualization, saying something along the lines of "they can't even build a secure system, let alone a secure virtualized system". I can't remember who he was referring to specifically, but we've certainly been seeing a lot of similar vulnerabilities.


Here's the full Theo de Raadt quote from 2007 [1]:

"""> Virtualization seems to have a lot of security benefits.

You've been smoking something really mind altering, and I think you should share it.

x86 virtualization is about basically placing another nearly full kernel, full of new bugs, on top of a nasty x86 architecture which barely has correct page protection. Then running your operating system on the other side of this brand new pile of shit.

You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes.

You've seen something on the shelf, and it has all sorts of pretty colours, and you've bought it.

That's all x86 virtualization is. """

[1] https://marc.info/?l=openbsd-misc&m=119318909016582


I feel like people with these sorts of hardline views on security might just be so concerned with safety that their argument misses the whole opportunity cost of not being 100% safe in our usage of technology. If we needed to make sure everything was safe and perfectly secure, the world would have missed out on a lot of innovative software. The tough thing to contend with is that the security people are hardly ever wrong.


>hardline views on security

The only hardline view on security you'll encounter in the wild is "security is practical in our computational environments"[1]. Only half-joking here.

My reading of Theo's quote is merely "the combination of x86/IA32/AMD64 and virtualization gives little to no actual security benefit, and plenty of pitfalls".

I don't see Theo as being a hardliner about security, just meticulous about good engineering practices - as per OpenBSD's usual standards - and facing the problems & risks as they are.

[1] examples: "Rust/Java gives you security", "whitelisting the actions an end-user application is allowed to take gives you security", "hardcore firewalls give you security", "virtualization gives you security", "advanced architectures like Burroughs' give you security".


Except that's objectively wrong: x86 virtualization breakouts have been extremely rare in practice, and fixable until recently.

The new class of attacks we now see target any type of shared code execution environment. OpenBSD is as vulnerable to this as anything else.


OpenBSD disables hyperthreading, doesn't it? That's a smart defense against at least one of today's attacks. Doesn't help if you're a VM guest, but does if you're the host.
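
For reference, recent OpenBSD releases expose this as a sysctl knob, hw.smt, which has defaulted to off since 6.4; a one-liner to make the setting explicit (assuming a release that has the knob):

    # keep sibling hyperthreads offline (the default where hw.smt exists)
    sysctl hw.smt=0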


There's a Foreshadow-NG variant specifically for VMs, and it's arguably the worst one.


> examples: "Rust/Java gives you security"

Reminds me of a friend who worked on JavaScript in the early days; he said it was the only thing that had any hope of providing minimal security at the time, because Windows 3.1 and 95 + x86 was a security trash fire.


I believe "they" were all the people poking the project asking when were they going to support virtual X and virtual Y. He basically stated it would never happen on OpenBSD[1] but here we are with [2] (vmm/vmd/vmctl).

[1] http://www.tylerkrpata.com/2007/10/theo-de-raadt-on-x86-virt...

[2] https://www.openbsd.org/faq/faq16.html

[1] probably isn't the best source out there, I was in a bit of a rush to find it but that is indeed the quote! Gotta either love or hate Theo I guess!


Would be cool to see a source for that one. Theo de Raadt is a hardliner whom I don't always agree with, but I'd like to know how visionary he actually was in this case (quite a bit by the looks of it).


I'm not a security person but I wanna practice trying to sum up his points:

1. There's no way in hell that a bunch of VMs running on one physical server is more secure than a bunch of different physical servers each running an OS. If there were architectural hooks for those VMs to provide additional security beyond what the host OS provides, then an OS like OpenBSD would already be making use of them.

2. Running a bunch of VMs on a single physical machine is certainly cheaper.

3. People who are in favor of the cost-cutting are claiming that there's a security benefit to sell more stuff.

Am I right?

If so, how does that stance jibe with the research that Qubes is based on?


I think the argument VM-sellers make is that it's more secure than running a bunch of colocated code on the same machine without VMs, not that it's more secure than distinct physical systems.


That is their claim. Theo is pointing out that the security is an illusion. Either the OS is secure, in which case you may as well just run everything in the OS without the VM in the way (ignoring issues of differing operating systems), or the OS is not secure, and now you have to hope the VM is secure, because otherwise you just exploit your VM to break out of it and then exploit the OS. The second-level attack is more difficult, but that is all.


Almost right, except for one thing: I think Theo de Raadt wrongly failed to acknowledge his opponent's valid point: in practice, separating applications into virtual machines does have some security benefit compared to running them on a single OS.

I think the security guarantees are better if you follow the practices of a little self-centered project such as OpenBSD (run only trusted code) than if you follow the practices of QubesOS (running whatever untrusted code you desire in Xen domains and relying on VM separation).



Interesting to note that AWS has been working on their own custom silicon, such as the announced Arm-based AWS Graviton-powered machines.

We will most likely see a continued divergence between "consumer silicon", which is designed for speed in a single-tenant environment on your local desktop or laptop, and "cloud silicon", which is optimized to protect virtualization boundaries, be power efficient, etc. I'd predict that this will actually lead to increased efficiency and lower prices for cloud resources rather than the "death by 1000 cuts" you are proposing.


Except most companies don't care about cloud security beyond the magical "compliance" and the comfort of being on the same cloud as everyone else.


There are bare-metal "cloud" providers such as Packet.net where you get the click-and-deploy convenience of the cloud but a physical machine. They have quite small machines in their inventory that are close to price-competitive with VMs. Even Amazon has this bare-metal capability FWIW, but AFAIK only for big, expensive machines.


It increases cloud revenues because of the CPU slowdowns, and people can't move off the cloud because they're locked in and can't hire datacenter engineers anyway.


Ultimately cloud providers don't want revenue, they want profit. Except in some perverse cases (like cost-plus-percentage contracts), it's not generally in a business's interests for their costs to go up.

Even if there's no opportunity for customers to switch away, eventually you bleed them dry and put them out of business. You will typically aim to price your offering at the equilibrium point where the losses from departing customers start to outgrow the gains in profit, and vice versa.

One situation in which an increase in your costs can be good is if the same increase hits your competition harder. But in this case, multi-tenant cloud is hit harder than the competing alternative of private infrastructure.


This is an important point that I noticed around the time of Spectre/Meltdown as well. The mitigation for those bugs caused an average of 30% CPU slowdown, meaning it took 30% more CPU cycles to perform the same work as it did prior to the mitigation. If a cloud provider rolls that out to every server, then every customer's bill for CPU usage should increase by roughly 30%.

Am I totally misunderstanding this? Someone please correct me if I'm wrong.
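
A back-of-the-envelope sketch in Python, treating that 30% as lost throughput (which may not be how the figure was measured): the same work then needs 1/(1 - 0.30) times the CPU time, i.e. about 43% more billable cycles, not 30%.

    slowdown = 0.30                      # fraction of throughput lost
    extra = 1 / (1 - slowdown) - 1       # extra CPU time for the same work
    print(f"{extra:.0%} more CPU time")  # -> 43% more CPU time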


Well, dedicated machines for each security domain of each customer; a lot of the time it's fine for many applications to be in the same security domain.


Even this isn't enough. Sometimes mutually untrusted parties must exchange data (say you're running a trading platform, or a social network). You have to ensure every point of interaction between such parties is immune to timing attacks.
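
As a minimal Python sketch of one such interaction point, secret comparison at a request boundary (the function and argument names here are hypothetical):

    import hmac

    def tokens_match(supplied: bytes, expected: bytes) -> bool:
        # A plain == comparison short-circuits at the first differing
        # byte, leaking through response time how long the matching
        # prefix is; hmac.compare_digest takes time independent of
        # the contents.
        return hmac.compare_digest(supplied, expected)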


In theory, yes. But getting statistically meaningful data on sub-ms timing variations over a connection whose round-trip time and jitter are orders of magnitude larger is hard... it would be a very, very slow attack and probably impractical in most cases.
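
A rough sketch of the statistics, assuming the attacker averages n samples of i.i.d. jitter with standard deviation sigma: the standard error of the mean falls as sigma / sqrt(n), so resolving a timing difference delta needs roughly n >= (z * sigma / delta)^2 samples. All numbers below are illustrative:

    z, sigma, delta = 4.0, 5e-3, 1e-6  # confidence, 5 ms jitter, 1 us signal
    n = (z * sigma / delta) ** 2       # samples needed to resolve delta
    print(f"~{n:.0e} samples")         # -> ~4e+08 requests per probed value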


> dedicated machines for each application of each customer.

I don't think you need to go this far. You can probably get away with circuit-switching small blocks of hardware and fully resetting them between handovers, although you'd have to ensure sufficient randomisation/granularity to destroy side channels in the switching logic.


You should use large instances that don't share the same CPU socket; on AWS, for example, that would be c5.9xlarge and above.
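
A related option, if you'd rather not reason about socket topology per instance type, is to request dedicated tenancy so the physical host isn't shared with other accounts at all; a boto3 sketch (the AMI ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")
    # Dedicated tenancy keeps the instance off hardware shared with
    # other AWS accounts; "ami-xxxxxxxx" is a placeholder image ID.
    ec2.run_instances(
        ImageId="ami-xxxxxxxx",
        InstanceType="c5.9xlarge",
        MinCount=1,
        MaxCount=1,
        Placement={"Tenancy": "dedicated"},
    )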


The point of virtualization isn't to add security. It gives you functionality you just cannot have otherwise, and the cloud enables you to scale in a way that is impossible otherwise. If there are security holes, they get patched and the market moves on. It's not just going to abandon either the cloud or virtualization.


I don't agree with that statement. A stand-alone physical machine is expected to be secure in its own enclave, and I think it's within reason that a virtual one would carry the same expectation.



