> [...] the contents of the entire memory to be read over time, explains Rüegge. “We can trigger the error repeatedly and achieve a readout speed of over 5000 bytes per second.” In the event of an attack, therefore, it is only a matter of time before the information in the entire CPU memory falls into the wrong hands.
We need software and hardware to cooperate on this. Specifically, threads from different security contexts shouldn't get assigned to the same core. If we guarantee this, the fences/flushes/other clearing of shared state can be limited to kernel calls and process lifetime events, keeping all the benefits of caching and speculative execution for the code that's actually doing heavy lifting, without worrying about side-channel leaks.
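A minimal sketch of what that could look like on Linux, assuming a static split of cores between two made-up security domains (the core IDs and the trusted/untrusted mapping are invented for illustration, and a real setup would also have to account for SMT siblings):

```c
/* Minimal sketch: pin the calling process to a core set reserved for its
 * security domain. Assumes a hypothetical Linux box where cores 0-3 are
 * reserved for untrusted work and cores 4-7 for trusted work. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

static int pin_to_domain(int trusted)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    int first = trusted ? 4 : 0;            /* hypothetical domain split */
    for (int cpu = first; cpu < first + 4; cpu++)
        CPU_SET(cpu, &set);
    /* pid 0 = calling process; the kernel will only schedule it on these cores */
    return sched_setaffinity(0, sizeof(set), &set);
}

int main(void)
{
    if (pin_to_domain(/* trusted = */ 0) != 0) {
        perror("sched_setaffinity");
        return EXIT_FAILURE;
    }
    puts("pinned to the untrusted core set");
    return 0;
}
```

The same partitioning could be expressed with cpusets/cgroups instead of per-process affinity; the point is only that the scheduler, not the CPU, decides which security contexts ever share a core.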
I get you, but devs struggle to configure nginx to serve their overflowing cauldrons of 3rd-party npm modules of witches' incantations. Getting them to securely design and develop security-labelled, cgroup-based micro (nano?) compute services for inferencing over text of various security levels is beyond even 95% of coders. I'd posit it would be a herculean effort even for the top 1% of devs.
It's not a "just" if the fix cripples performance; it's a tradeoff. The mitigation is forced to hurt everything everywhere because the processor alone has no way to determine when it is actually required and when it is not. It is 2025 and security is part of our world; we need to bake it right into how we think about processor/software interaction instead of attempting to bolt it on after the fact. We learned that lesson for internet-facing software decades ago. It's about time we learned it here as well.
Yeah… folks who think this is just some easy-to-avoid thing should go look around and find the processor without branch prediction that they want to use.
On the bright side, they will get to enjoy a much better music scene, because they’ll be visiting the 90’s.
There is of course a slight chicken-and-egg thing here: if there were no (dynamic) branch prediction, we (as in compilers) would emit different code that is faster for non-predicting CPUs (and presumably slower for predicting CPUs). That would mitigate a bit of that 10x.
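As a rough illustration of the kind of hint such code generation could lean on, GCC/Clang's __builtin_expect lets the source mark the rare path so the common path stays as straight-line fall-through code. The likely/unlikely naming mirrors the Linux kernel convention, and the "negative values are rare" assumption below is invented:

```c
/* Sketch: tell the compiler which way a branch usually goes, so it can lay
 * out the hot path as fall-through code even without a dynamic predictor. */
#include <stddef.h>

#define unlikely(x) __builtin_expect(!!(x), 0)

long sum_valid(const int *v, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (unlikely(v[i] < 0))   /* assumed-rare error case, kept off the hot path */
            continue;
        sum += v[i];
    }
    return sum;
}
```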
But if the fix for this bug (how many security holes have there been now in Intel CPUs? 10?) brings only a couple % performance loss, like most of them so far, how can you even justify that at all? Isn't there a fundamental issue in there?
How much improvement would there still be if we weren't so lazy when it comes to writing software? If we worked to get as much performance out of the machines as possible and avoided useless bloat, instead of just counting on the hardware to be "good enough" to handle the slowness with some grace.
A modern processor pipeline is dozens of cycles deep. Without branch prediction, we would need to know the next instruction at all times before beginning to fetch it. So we couldn’t begin fetching anything until the current instruction is decoded and we know it’s not a branch or jump. Even more seriously, if it is a branch, we would need to stall the pipeline and not do anything until the instruction finishes executing and we know whether it’s taken or not (possibly dozens of cycles later, or hundreds if it depends on a memory access). Stalling for so many cycles on every branch is totally incompatible with any kind of modern performance. If you want a processor that works this way, buy a microcontroller.
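As a back-of-envelope check on how bad that gets, here's a tiny sketch; the pipeline depth, branch frequency, and baseline CPI are illustrative assumptions, not measurements of any real CPU:

```c
/* Back-of-envelope sketch of the stall cost described above. All numbers
 * are illustrative assumptions, not measurements of a particular CPU. */
#include <stdio.h>

int main(void)
{
    const double baseline_cpi    = 1.0;  /* ideal: one instruction per cycle        */
    const double branch_fraction = 0.2;  /* assume ~1 in 5 instructions is a branch */
    const double stall_cycles    = 20.0; /* assume ~20 cycles lost per unresolved branch */

    double stalled_cpi = baseline_cpi + branch_fraction * stall_cycles;
    printf("CPI with stalls on every branch: %.1f (about %.0fx slower)\n",
           stalled_cpi, stalled_cpi / baseline_cpi);
    return 0;
}
```

With those numbers the average cost per instruction goes from 1 cycle to about 5, i.e. roughly a 5x slowdown, and it only gets worse the deeper the pipeline or the more often the branch condition has to wait on memory.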
But branch prediction doesn't necessarily need complicated logic. If I remember correctly (it's been 20 years since I read any papers on it), the simple heuristic "all relative branches backwards are taken, but forward and absolute branches are not" could achieve 70-80% of the performance of the state-of-the-art implementations back then.
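For what it's worth, here is a toy version of that "backward taken, forward not taken" heuristic scored against a made-up branch trace (the trace and the resulting hit rate are purely illustrative; absolute branches, which the heuristic would predict as not taken, are left out for brevity):

```c
/* Toy sketch of static "backward taken, forward not taken" prediction,
 * scored against an invented branch trace. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct branch { uint64_t pc, target; bool taken; };

/* Predict taken for backward branches (typically loop back-edges),
 * not taken for forward branches (typically error/exit paths). */
static bool predict_btfn(const struct branch *b)
{
    return b->target < b->pc;
}

int main(void)
{
    /* Invented trace: a loop back-edge that is usually taken plus a
     * forward error check that is usually not taken. */
    struct branch trace[] = {
        {0x1010, 0x1000, true},  {0x1010, 0x1000, true},
        {0x1010, 0x1000, true},  {0x1010, 0x1000, false},
        {0x2000, 0x2400, false}, {0x2000, 0x2400, false},
        {0x2000, 0x2400, true},  {0x2000, 0x2400, false},
    };
    int n = (int)(sizeof(trace) / sizeof(trace[0])), hits = 0;

    for (int i = 0; i < n; i++)
        if (predict_btfn(&trace[i]) == trace[i].taken)
            hits++;

    printf("BTFN static prediction: %d/%d correct\n", hits, n);
    return 0;
}
```

On this invented trace it reports 6/8 correct, which is the flavor of the idea: a fixed rule gets the common loop behaviour right without keeping any predictor state that could leak.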