
> It's not like you can write JS code with this that runs on browsers that lets you leak arbitrary secrets

That's precisely what Spectre and Meltdown were, though. It's unclear whether this attack would work in modern browsers, but browsers did re-enable SharedArrayBuffer, and it's unclear whether the existing Spectre/Meltdown mitigations stymie this attack.

> I personally disable all the mitigations because performance boost is actually noticeable.

Congratulations, you are probably susceptible to JS code reading crypto keys on your machine.



Disabling some mitigations makes sense for an internal box that does not run arbitrary code from the internet, like a build server, or a load balancer, or maybe even a stateless API-serving box, as long as it's not a VM on a physical machine shared with other tenants.


You run "arbitrary code from the internet" as soon as you use a web browser with JS enabled.


But that's exactly what you wouldn't do on most of your infrastructure boxes, right? If you can reasonably trust all the software on the whole box, many mitigations that protect against the effects of running adversarial code on your machine become superfluous.

OTOH if an adversary gets a low-privilege RCE on your box, exploiting something like Spectre or RowHammer could help elevate the privilege level, and more easily mount an attack on your other infrastructure.


Yeah, as stated in a sibling answer, I misread your comment a little bit. It's true, on at least some classes of infrastructure boxes, you more or less "own all that is on the machine" anyway.

But also note my caveat about database servers, for example. A database server shared between accounts of different trust levels will be affected, if the database supports stored procedures for example. Basically, as soon as there's anything on the box that not all users of it should be able to access anyway, you'll have to be very, very careful.


While that’s an interesting idea, I’m not sure a side channel attack is actually exploitable by a stored procedure as I don’t believe it has enough gadgets.


I don't know. PL/SQL (which is separate from SQL) is effectively a general-purpose language, and kind of a beast at that. I have not the faintest idea, but at the least I wouldn't be surprised to see high-enough-precision timers, and maybe it even gets JITted down into machine code for performance nowadays. (And I've read that tight loops can be used for timing in side-channel attacks as well, although I assume that requires a lot more knowledge about the device you're running on.)

A quick search reveals that there is at least a timer mechanism, but I have no idea of any of its properties: https://docs.oracle.com/en/database/oracle/oracle-database/1...

But what I'm actually trying to say is: for many intents and purposes (which might or might not include relevance to this specific vulnerability), as soon as you allow stored procedures in your database, "not running arbitrary code" is no longer a generally true statement.


You need some lowish-level programming primitives to execute side-channel attacks. For example, you can't do cache timing with SQL.


PL/SQL, not SQL. Whatever I knew about PL/SQL in the 90s and early 2000s I've forgotten, but I wouldn't be so certain that PL/SQL a) does not have precise enough timing primitives, and b) does not get JITted down into machine code nowadays. It is a fully fledged, Turing-complete programming language with loops, arrays, etc.


What infrastructure box are you running that is running 100% all your code? Unless you ignore supply chain attacks, you’ve always got exposure.


Excluding hardware supply chain attacks, you start with a secure Linux distro that is signed, and then the code you run is basically written from scratch, using only the core libraries.

I got really good at C because I used to work for a contractor in a SCIF where we couldn't bring in any external packages, so I basically had to write C code for things like web servers from scratch.


Or with JS disabled. HTML isn't as expressive, but it's still "arbitrary code from the internet".


There is a difference. JS is Turing-complete; pure HTML is far from it (as far as I'm aware). So HTML might (!) well be restricted enough to not be able to carry out such an attack.

But I'd never state that definitively, as I don't know enough about what HTML without JS can do these days. For all I know there's a Turing tarpit in there somewhere...


CSS3 is Turing-complete, but creating an exploit using just it would be... quite a feat.

With JS or WASM, it's much more straightforward.


HTML doesn’t have the potential to deliver Spectre-like attacks because:

1. No timers - timers are generally a required gadget, and often they need to be high-resolution; otherwise building a suitable timing gadget gets harder and the bandwidth of the attack goes down.

2. No loops - you have to do the timing work in a loop to exploit bugs in the predictor.


Which you wouldn't do on an internal load balancer or database server, right?


You are right, I sort of misread the statement I was replying to, but also wanted to reinforce that the large class of personal desktop machines is still very much affected, even if you "think" that you don't run "arbitrary code" on your machine.

By the way, you have to be careful on your database server to not actually run arbitrary code as well. If your database supports stored procedures (think PL/SQL), that qualifies, if the clients that are able to create the stored procedures are not supposed to be able to access all data on that server anyway.


Oh yeah. Supply-chain risk is still a thing too and defense-in-depth is not a bad strategy.

Physical isolation simplifies a lot of this. This class of attacks isn't (as) relevant for single-tenant single-workload dedicated machines.


Based on this thread, I think people badly misjudge what “single-tenant” means in the context of susceptibility to exploits.


Mind elaborating?


Your “infrastructure” server could be a CI server - it’s just building “my” code, ignoring that many (all?) build systems allow execution of arbitrary code as part of the build process (Rust, CMake, Bazel, the JS ecosystem, Go, etc.) and many involve 3p dependencies. And CI servers often handle secrets to infrastructure (publishing packages, etc.). So you could end up allowing a supply chain attack that reads out various API keys & whatnot.

In other words, properly drawing the boundary around “this is safe with the Meltdown mitigations disabled” is very hard and non-intuitive, and you’re one configuration/SW change or one violated assumption away from a Meltdown attack, which is cross-process memory access & one notch below remote access. There’s a reason you design for defense in depth rather than carefully building a Jenga tower where you’re one falling block away from total compromise.


>Congratulations, you are probably susceptible to JS code reading crypto keys on your machine.

No wonder you guys are scared AI is going to take your job lol.

That's not how it works at all. To grab a key stored in a JS variable, the following would need to happen:

1. Attacker needs to find a way to inject arbitrary JS code in a website, which means controlling either an iframe that is loaded or some component. This is a pretty hard thing to do these days with Same-Site strictness

2. The code needs to know specifically what memory address to target. When things like JWT or other tokens are stored in session or local storage, the variable name usually contains a random string. Injected code will have to figure out a way to find what that variable name is.

3. For the attack to work, the cache has to get evicted. How well this works is highly processor-specific, and the web app also has to be in a state where no other process is referencing that variable. With JS, you additionally have to infer the memory layout first (https://security.googleblog.com/2021/03/a-spectre-proof-of-c...), which takes time. Then you have to train the branch predictor, which also takes time.
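For reference, the predictor-training step being argued about has a well-known shape, the Spectre-v1 gadget. A sketch of just that shape (illustrative only: the array names and sizes are made up, and without cache eviction and a fine timer this leaks nothing):

```javascript
// Shape of a Spectre-v1 gadget: train the bounds check to pass, then feed
// a malicious index. Architecturally the check holds; speculatively, the
// out-of-bounds load may still leave a secret-dependent cache footprint.
const data = new Uint8Array(16);          // attacker-reachable array
const probe = new Uint8Array(256 * 4096); // side-channel "receiver"

function victim(index) {
  if (index < data.length) {              // the check the CPU may mispredict
    return probe[data[index] * 4096];     // secret-dependent access
  }
  return 0;
}

// Train the branch predictor with in-bounds indices...
for (let i = 0; i < 1000; i++) victim(i % data.length);
// ...then probe out of bounds; the architectural result is still 0.
const result = victim(100000);
```

Whether a JS engine leaves this pattern exploitable depends on its bounds-check and index-masking mitigations, which is exactly the disagreement in this thread.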

So basically, I have a statistically higher chance of losing my keys to someone who physically robs me rather than a cache timing attack.

Generally when an exploit like this drops, people always have failures to update their systems, and you see it being used in the wild. With Spectre/Meltdown, this didn't really happen, because of the nature of how these attacks work and the difficulty of getting the cache timing code to work correctly without specific targeting of a processor and ability to execute arbitrary code on the machine.


This seems to theorize an attack where you are interested specifically in the data of the particular website visited, and at the same time assumes that the attack would have to be carried out from that same website.

The vulnerability however allows arbitrary reading of any memory in the system in at least some circumstances, the presented PoC (https://www.youtube.com/watch?v=jrsOvaN7PaA ) demonstrates this by literally searching memory for the system's /etc/shadow and dumping that.

Whether the attack is practical using JS instead of a compiled C program is unknown to me, but if it is, it's not clear to me why the attacker would need to inject JS code into other websites or know what addresses to target. (If it is not, the question is moot.)


>The vulnerability however allows arbitrary reading of any memory in the system in at least some circumstances, the presented PoC

The PoC uses compiled C code. I hope I don't have to explain the difference between C code that runs on the system versus JS code that runs in the context of the browser...


Well that only depends on what gadgets happen to be available, doesn't it? Both C and JS get compiled down to machine code.

I personally would not trust that you couldn't, in the most extreme case, get close enough to the kernel (like the PoC does through system calls) to mispredict it into leaking any kernel-mapped memory through a timing side channel. And nowadays, kernels typically map almost all physical memory in their shared address space (it's not too expensive in a 64 bit address space).

EDIT: See my extended reasoning here: https://news.ycombinator.com/item?id=43991696


There is no gadget in JS that lets you access an arbitrary memory address in the system. You can create an array and then access it past its bounds, in the sense that the branch predictor will execute this code and in theory load the address into the cache, but the base address of that array is going to be arbitrary. The JS engine doesn't allow you (even in WebAssembly) to access raw memory by address, and there is always a translation layer.


Again, I don't think I understood yet why the JS code needs to create actual pointers accessing arbitrary memory for the attack to work, instead of benignly passing down arbitrary integer values far enough into (say) the kernel and mispredicting into code that would use these arbitrary values to be dereferenced as pointers, elaborated here: https://news.ycombinator.com/item?id=43991973


Because you have no way of computing how "far" past the array length you need to access, because you have no idea where the first value is in memory, and you can't read backwards from the memory location assigned to you by the JS engine. So even if you get INSANELY lucky and make cache eviction of arbitrary memory addresses work, and you can get around other applications accessing the memory and putting values back in the cache, you are still left with a bunch of random hex values, and you have no idea where the key is, or even if it's in those values (in case the memory of the target process is "behind" Chrome).

With C code, you can pretty much reference any memory location, so you can make things work.


Spectre was shown to be exploitable from JavaScript: https://www.zdnet.com/article/google-this-spectre-proof-of-c... - betting that the same won't be shown here is not a safe wager, I would say :) (especially since JavaScript environments also include stuff like WebAssembly).


Who these days would trust crypto keys on their machine, given the many hardware wallets available?


Where do you think the crypto keys for the TLS connection securing your HTTPS browsing are stored? Although from what you said I’m now thinking you’re referring to cryptocurrency & thus aren’t on the same wavelength of the discussion here. Crypto keys —> cryptography keys, not cryptocurrency keys.


I think he means crypto as in like tokens, not wallet keys.



