
Or people could just understand the scope of the issue better, and realize that just because something has a vulnerability doesn't mean there is a direct line to an attack.

In the case of speculative execution, you need an insane amount of prep to actually use that exploit to do something. The only real way this could ever be used is if you have direct access to the computer, where you can run low-level code. It's not like you can write JS code with this that runs in the browser and lets you leak arbitrary secrets.

And for systems valuable enough that a dedicated private or state-funded group would risk doing the necessary research and targeting, there should be a system in place that doesn't allow unauthorized arbitrary code to run in the first place.

I personally disable all the mitigations because the performance boost is actually noticeable.



> It's not like you can write JS code with this that runs in the browser and lets you leak arbitrary secrets

That's precisely what Spectre and Meltdown were, though. It's unclear whether this attack would work in modern browsers: they did re-enable SharedArrayBuffer, and it's unknown whether the existing Spectre/Meltdown mitigations stymie this attack.

> I personally disable all the mitigations because the performance boost is actually noticeable.

Congratulations, you are probably susceptible to JS code reading crypto keys on your machine.


Disabling some mitigations makes sense for an internal box that does not run arbitrary code from the internet, like a build server, or a load balancer, or maybe even a stateless API-serving box, as long as it's not a VM on a physical machine shared with other tenants.


You run "arbitrary code from the internet" as soon as you use a web browser with JS enabled.


This is exactly what you won't do on most of your infrastructure boxes, will you? If you can reasonably trust all the software on the whole box, many mitigations that protect against the effects of running adversary code on your machine become superfluous.

OTOH if an adversary gets a low-privilege RCE on your box, exploiting something like Spectre or RowHammer could help elevate the privilege level, and more easily mount an attack on your other infrastructure.


Yeah, as stated in a sibling answer, I misread your comment a little bit. It's true, on at least some classes of infrastructure boxes, you more or less "own all that is on the machine" anyway.

But also note my caveat about database servers, for example. A database server shared between accounts of different trust levels will be affected if the database supports stored procedures. Basically, as soon as there's anything on the box that not all of its users should be able to access anyway, you'll have to be very, very careful.


While that’s an interesting idea, I’m not sure a side-channel attack is actually exploitable from a stored procedure, as I don’t believe it has enough gadgets.


I don't know. PL/SQL (which is separate from SQL) is effectively a general-purpose language, and kind of a beast at that. I have not the faintest idea of the details, but I wouldn't be surprised to see high-enough-precision timers, and maybe it even gets JITted down to machine code for performance nowadays. (And I've read that tight loops can be used for timing in side-channel attacks as well, although I assume that requires a lot more knowledge about the device you're running on.)

A quick search reveals that there is at least a timer mechanism, but I have no idea of any of its properties: https://docs.oracle.com/en/database/oracle/oracle-database/1...

But what I'm actually trying to say is: for many intents and purposes (which may or may not include relevance to this specific vulnerability), as soon as you allow stored procedures in your database, "not running arbitrary code" is no longer a generally true statement.


You need some lowish-level programming primitives to execute side-channel attacks. For example, you can't do cache timing with SQL.


PL/SQL, not SQL. Whatever I knew about PL/SQL in the '90s and early 2000s I've forgotten, but I wouldn't be so certain that PL/SQL a) does not have precise enough timing primitives, and b) does not get JITted down to machine code nowadays. It is a fully fledged, Turing-complete programming language with loops, arrays, etc.


What infrastructure box are you running that is running 100% all your code? Unless you ignore supply chain attacks, you’ve always got exposure.


Excluding hardware supply chain attacks: you start with a secure Linux distro that is signed, and then the code you run is basically written from scratch, using only the core libraries.

I got really good at CS because I used to work for a contractor in a SCIF where we couldn't bring in any external packages, so I basically had to write C code for things like web servers from scratch.


Or with JS disabled. HTML isn't as expressive, but it's still "arbitrary code from the internet"


There is a difference. JS is Turing-complete; pure HTML is far from it (as far as I'm aware). So HTML might (!) well be restricted enough not to be able to carry out such an attack.

But I'd never state so definitively, as I don't know enough about what HTML without JS can do these days. For all I know there's a Turing tarpit in there somewhere...


CSS3 is Turing-complete, but creating an exploit using just it would be... quite a feat.

With JS or WASM, it's much more straightforward.


HTML doesn’t have the potential to deliver Spectre-like attacks because:

1. No timers. Timers are generally a required gadget, and they often need to be high-resolution; otherwise building a suitable timing gadget gets harder and the bandwidth of the attack goes down.

2. No loops. You have to do the timing in a loop to exploit bugs in the predictor.


Which you wouldn't do on an internal load balancer or database server, right?


You are right, I sort of misread the statement I was replying to, but also wanted to reinforce that the large class of personal desktop machines is still very much affected, even if you "think" that you don't run "arbitrary code" on your machine.

By the way, you have to be careful on your database server to not actually run arbitrary code as well. If your database supports stored procedures (think PL/SQL), that qualifies, if the clients that are able to create the stored procedures are not supposed to be able to access all data on that server anyway.


Oh yeah. Supply-chain risk is still a thing too and defense-in-depth is not a bad strategy.

Physical isolation simplifies a lot of this. This class of attacks isn't (as) relevant for single-tenant single-workload dedicated machines.


Based on this thread, I think people badly misjudge what “single-tenant” means in the context of susceptibility to exploits.


Mind elaborating?


Your “infrastructure” server could be a CI server: it’s just building “my” code, ignoring that many (all?) build systems allow execution of arbitrary code as part of the build process (Rust, CMake, Bazel, the JS ecosystem, Go, etc.), and many involve third-party dependencies. And CI servers often handle secrets to infrastructure (publishing packages, etc.). So you could end up allowing a supply chain attack that reads out various API keys and whatnot.

In other words, properly drawing the boundary around “this is safe with the Meltdown mitigations disabled” is very hard and non-intuitive, and you’re one configuration/SW change or one violated assumption away from a Meltdown attack, which is cross-process memory access and one notch below remote access. There’s a reason you design for defense in depth rather than trying to carefully build a Jenga tower where you’re one falling block away from total compromise.


>Congratulations, you are probably susceptible to JS code reading crypto keys on your machine.

No wonder you guys are scared AI is going to take your job lol.

That's not how it works at all. To grab a key stored in a JS variable, all of the following would need to happen:

1. Attacker needs to find a way to inject arbitrary JS code in a website, which means controlling either an iframe that is loaded or some component. This is a pretty hard thing to do these days with Same-Site strictness

2. The code needs to know specifically what memory address to target. When things like JWT or other tokens are stored in session or local storage, the variable name usually contains a random string. Injected code will have to figure out a way to find what that variable name is.

3. For the attack to work, the cache has to get evicted. How well this works is highly processor-specific, and the web app also has to be in a state where no other process is referencing that variable. With JS, you also have to infer the memory layout first (https://security.googleblog.com/2021/03/a-spectre-proof-of-c...), which takes time. Then you have to train the branch predictor, which also takes time.
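For reference, the "train the branch predictor" part of step 3 looks roughly like the following in native code. This is a hedged sketch of the classic Spectre v1 bounds-check-bypass pattern (all names invented), not working exploit code; an actual leak would additionally need the eviction and timing machinery:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical Spectre-v1-style victim. The bounds check is the branch
 * the attacker trains; array2 is the probe array whose cache footprint
 * would later encode the speculatively read byte. */
static uint8_t array1[16] = {1, 2, 3, 4, 5, 6, 7, 8,
                             9, 10, 11, 12, 13, 14, 15, 16};
static size_t array1_size = 16;
static uint8_t array2[256 * 512];      /* one "bucket" per possible byte value */
static volatile uint8_t sink;          /* keeps the loads from being optimized away */

static void victim(size_t x) {
    if (x < array1_size)                /* trained to predict "taken" */
        sink = array2[array1[x] * 512]; /* cache footprint depends on array1[x] */
}

/* Training phase: repeated in-bounds calls bias the predictor, so a later
 * out-of-bounds x may be read speculatively before the check retires. */
static void train_predictor(void) {
    for (int i = 0; i < 30; i++)
        victim((size_t)(i % 16));
}
```

Architecturally an out-of-bounds call is a no-op; only the transient execution touches `array2`, which is what the timing side of the attack then has to recover.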

So basically, I have a statistically higher chance of losing my keys to someone who physically robs me rather than a cache timing attack.

Generally when an exploit like this drops, some people always fail to update their systems, and you see it being used in the wild. With Spectre/Meltdown, this didn't really happen, because of the nature of how these attacks work and the difficulty of getting the cache-timing code to work correctly without targeting a specific processor and being able to execute arbitrary code on the machine.


This seems to theorize an attack where you are interested specifically in data of the particular website being visited, and at the same time assumes that the attack would have to be carried out from that same website.

The vulnerability however allows arbitrary reading of any memory in the system in at least some circumstances, the presented PoC (https://www.youtube.com/watch?v=jrsOvaN7PaA ) demonstrates this by literally searching memory for the system's /etc/shadow and dumping that.

Whether the attack is practical using JS instead of a compiled C program is unknown to me, but if it is, it's not clear to me why the attacker would need to inject JS code into other websites or know what addresses to target. (If it is not, the question is moot.)


>The vulnerability however allows arbitrary reading of any memory in the system in at least some circumstances, the presented PoC

The PoC uses compiled C code. I hope I don't have to explain the difference between C code that runs on the system versus JS code that runs in the context of the browser...


Well that only depends on what gadgets happen to be available, doesn't it? Both C and JS get compiled down to machine code.

I personally would not trust that you couldn't, in the most extreme case, get close enough to the kernel (like the PoC does through system calls) to mispredict it into leaking any kernel-mapped memory through a timing side channel. And nowadays, kernels typically map almost all physical memory in their shared address space (it's not too expensive in a 64 bit address space).

EDIT: See my extended reasoning here: https://news.ycombinator.com/item?id=43991696


There is no gadget in JS that lets you access an arbitrary memory address in the system. You can create an array and then access it past its bounds, in the sense that the branch predictor will speculatively execute that code and in theory load the address into the cache, but the array's start address in memory is going to be arbitrary. The JS engine doesn't let you (even in WebAssembly) access raw memory by address, and there is always a translation layer.


Again, I don't think I've understood yet why the JS code needs to create actual pointers to arbitrary memory for the attack to work, instead of benignly passing arbitrary integer values far enough down into (say) the kernel and mispredicting into code that would dereference those values as pointers, as elaborated here: https://news.ycombinator.com/item?id=43991973


Because you have no way of computing how far past the array length you need to access: you have no idea where the array starts in memory, and you can only read relative to the memory location assigned to you by the JS engine. So even if you get INSANELY lucky and make cache eviction of arbitrary memory addresses work, and you get around other applications accessing that memory and pulling the values back into the cache, you are still left with a bunch of random hex values, with no idea where the key is, or even whether it's in those values at all (in case the memory of the target process is "behind" Chrome's).

With C code, you can pretty much reference any memory location, so you can make things work.


Spectre was shown to be exploitable from JavaScript: https://www.zdnet.com/article/google-this-spectre-proof-of-c... - betting that the same won't be shown here is not a safe wager, I would say :) (especially as the JS environment also includes stuff like WebAssembly).


Who these days would trust crypto keys on their machine, given the many hardware wallets available?


Where do you think the crypto keys for the TLS connection securing your HTTPS browsing are stored? Although from what you said, I’m now thinking you’re referring to cryptocurrency and thus aren’t on the same wavelength as the discussion here. Crypto keys -> cryptography keys, not cryptocurrency keys.


I think he means crypto as in like tokens, not wallet keys.


> Or people could just understand the scope of the issue better

Do you understand the scope of the issue? Do you know that this couldn't personally affect you in a dragnet (so, not targeted, but spread out, think opportunistic ransomware) attack?

Because this statement of yours:

> It's not like you can write JS code with this that runs in the browser and lets you leak arbitrary secrets.

was not true for Spectre. The original Spectre paper notoriously mentions JS as an attack vector.

If you truly disable all mitigations (assuming CPU and OS allow you to do so), you will reopen that hole.

So:

> The only real way this could ever be used is if you have direct access to the computer, where you can run low-level code.

I'm a low-level kernel engineer, and I don't know this to be true in the general case. JITs, e.g. the JavaScript ones, also generate "low-level code". How do you know that this isn't sufficient?


>Do you understand the scope of the issue? Do you know that this couldn't personally affect you in a dragnet

The issue is not whether or not it could affect me, the issue is what is the risk. And I can say for certain that the risk is very low, because I seem to have more understanding of the space.

>The original Spectre paper notoriously mentions JS as an attack vector.

In an analogy, having an attack vector is like having a certain type of weapon, while executing a full exploit end to end is on the scale of waging a war. Sure, the right person at the right place with that weapon can take out a critical target and win the war, but just having the weapon doesn't guarantee you win the war.

In the case of certain exploits, like Log4Shell, that's like having a portable shotgun that shoots hypersonic missiles in a scatter pattern. Log4Shell basically means that if anything gets logged, even an error message, that can be used to execute arbitrary code, and it's super easy to check whether this is the case: send payloads to all services with a JNDI URL that you control and see what pops up, and boom, you can have shells on those computers.

In the case of Spectre/Meltdown, it's like having a specific type of booby trap. Whether or not you can actually set up that booby trap depends highly on the environment. If a website is fully protected against code injection, then executing a JS cache-timing attack would be impossible. And even if it wasn't, there would be other

Of course nothing is ever for certain. For example, browsers can contain some crazy logic bug that bypasses Same-Origin checks that nobody has found yet. But the chance of this happening is extremely low, as browser code is public.


Hmm, I'm not sure why Same-Origin and injection attacks are prerequisites. Shouldn't it be sufficient to get someone to visit an arbitrary website through a link somewhere?


It would be restricted to stealing secrets the website itself places, considering there isn't a JS variable mapped to data from another website.


This vulnerability is, in the worst case, about reading any memory in the system, not memory confined to any particular website, or to the browser at all, though?


Not quite. This vulnerability is reading memory that you can directly address.

If you can run arbitrary machine code on a system, that memory is the entire memory space (in theory) - you can assign a value to any pointer and attempt to read that address through side channel attack.

In reality the task is much harder: you don't know where in memory the thing you want is, because of ASLR, virtual memory maps, and other factors, and to exploit cache timing attacks you need cache eviction to happen first, which is not really that straightforward for some memory addresses.

JavaScript that runs in the browser, on the other hand, has a lot more restrictions. You can't dereference a pointer to an arbitrary memory address in JS; you need an existing variable in the current context that is mapped to some memory.
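To illustrate the contrast: in C, turning an arbitrary integer into a pointer is trivial and unchecked (a hypothetical two-line sketch; the function name is invented):

```c
#include <stdint.h>

/* In native code, any integer can become a pointer; nothing is checked
 * until (and unless) the pointer is actually dereferenced. A JS engine
 * never hands you this primitive: every element access goes through the
 * engine's bounds-checked object model. */
static volatile uint8_t *probe_for(uintptr_t guessed_addr) {
    return (volatile uint8_t *)guessed_addr; /* no validity check here */
}
```

Whether the subsequent (speculative) access is useful is a separate question, but the addressing primitive itself simply doesn't exist in JS.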


I am really an amateur when it comes to Spectre-like attacks, but do you strictly need a valid pointer pointing to the address? I thought you "just" need to mispredict into code that would use it as a pointer, even if that code is never actually reached?

The paper demonstrates this with the C PoC using a system call as a gadget. Any value can be passed into the system call before it gets checked for validity on the other side of the kernel boundary. In their example, they use the "buffer" and "buflen" arguments to the keyctl system call, which results in the values passed into the system call ending up in registers r12 and r13. Then, they mispredict into a disclosure gadget that uses r12 and r13 for dereferencing pointers:

    movzx edx, byte ptr [r12]
    mov rbx, qword ptr [r13 + rdx*8]
Note how "buflen" isn't even a pointer (for keyctl) to begin with, but the (as far as I understand) unrelated disclosure gadget code dereferences r13 (because it treats it as a pointer), and they managed to mispredict into it through keyctl's call to the "read" function pointer (this is the part where it's still a bit fuzzy to me, as I unfortunately don't fully understand the misprediction itself and how they control for arbitrary destinations).

Now, obviously you can't directly make system calls through JS. But I don't understand yet what, if anything, is in place to absolutely make sure that there are no indirect ways that result in a system call (or another path!) where benign, but arbitrary values get passed as arguments in registers, executing benign code, but being mispredicted into a different kernel code path where those registers would be used as pointers.

And then, once you can do that, you can affect almost arbitrary physical memory, since typically almost all physical memory is mapped in the kernel's address space.

Sure, this is much harder because of the layers in between, but I still don't quite understand why it's impossible, and why a sufficiently motivated attacker might not eventually find a workable solution?

Spectre just seems so fundamentally catastrophic for me, that anything but proper hardware fixes to how privilege boundaries are honored by speculative execution seems to merely make things harder to me, but how hard is a very non-trivial question for me. Is it hard enough?

(As for ASLR, in their paper they break that as their first step using their own methods.)


Reading the paper further, there is this:

   However, the BTB provides partial target addresses [28], so the attacker only needs to branch to an address where the lower portion matches the desired kernel target. The upper bits of the BTB target are provided by the victim branch source, which will be in the kernel address range. The technique follows the one we used in Section 6.1 and Figure 5.
So it seems to me that the actual difficulty from JS is less about passing down the desired memory destinations (that's harder, yes, but I wonder if it's hard enough), and more about generating (benign) branches to almost arbitrary addresses within the JS code, as it's probably nigh impossible to control where those branches go.

Still, who really knows if there isn't some jump table generator or whatever to allow an attacker to generate branch targets arbitrarily enough (remember that it's not necessary to branch to the full address to train the branch predictor).

Because this would not be a vulnerability in any sense by itself. It would be yet another completely benign but unlucky piece of code that just allows the tire fire that Spectre is to be leveraged.

I'm probably missing other relevant aspects.

As for cache flushing, I think that's what the disclosure gadget does: "The disclosure gadget needs to use the two attacker-controlled registers to leak and transmit the secret via Flush+Reload", so that's also kernel code which we mispredict into. But I'm not totally sure.


You need to flush the addresses out of the cache in order for the branch predictor to speculatively execute and load the address back into the cache. This is where things get very tricky, because let's say you have some other process that accesses that address on a regular basis: it will get reloaded into the cache, so your timing attack has a lower chance of success.
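A sketch of what eviction without a flush instruction might look like natively (sizes are assumptions; JS, which exposes no clflush, would have to do something similar through typed arrays):

```c
#include <stddef.h>
#include <stdint.h>

#define LINE_SIZE  64                      /* typical cache line size (assumption) */
#define EVICT_BYTES (8u * 1024u * 1024u)   /* sized past a typical LLC (assumption) */

static uint8_t evict_buf[EVICT_BYTES];

/* Eviction sketch: touch one byte per cache line of a buffer larger than
 * the cache, so the target line is *probably* evicted. Any other process
 * that touches the target address in the meantime reloads it, which is
 * exactly the noise that lowers the attack's chance of success. */
static void evict_caches(void) {
    for (size_t i = 0; i < EVICT_BYTES; i += LINE_SIZE)
        evict_buf[i]++;
}
```

Unlike clflush, this is probabilistic, which is one of the reasons a JS-based attack needs so much per-processor tuning.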

So overall, putting together an exploit with this through JS becomes a matter of lots and lots of research and testing for a specific target, i.e. not worth the effort for anyone but a state-sponsored agency.



