You can't even "traverse" a data structure at that speed. We're talking low single-digit cycles here. A compiled for loop tends to need two cycles just to loop (you have to increment, compare, and jump back, but the CPU can speculate that the loop will in fact jump back, so the three operations can take two cycles), and I don't know what the minimum for a JS-engine JIT'd loop is, but needing 4 or 5 wouldn't stun me. That's literally "compiled do-nothing loop" speed.
I mention this and underline it because it's really an honorary "latency value every programmer should know". I've encountered this multiple times online, where someone thought they were benchmarking something but didn't stop to notice that 0.6 nanoseconds per iteration comes out to about 2-3 cycles (depending on CPU speed), and there's no way the thing they thought they were benchmarking could be done that quickly; I've now run into it twice at work as well. It's a thing worth knowing.
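To make the arithmetic concrete, here's a minimal sanity check you can paste into a console (assuming a CPU somewhere around 3-4 GHz, one cycle is roughly 0.25-0.3 ns; the loop below does essentially nothing, so whatever per-iteration time it reports is close to the floor for any JIT'd loop):

    // Time a do-nothing loop and report nanoseconds per iteration.
    // On a ~3.5 GHz CPU one cycle is ~0.29 ns, so anything reporting
    // ~0.6 ns/iteration is spending roughly 2 cycles per iteration --
    // i.e. it is doing nothing but looping.
    const N = 100_000_000;
    let sink = 0;                 // keep the loop from being removed entirely
    const t0 = performance.now();
    for (let i = 0; i < N; i++) {
      sink += i;                  // one add; still essentially "do-nothing"
    }
    const t1 = performance.now();
    const nsPerIter = ((t1 - t0) * 1e6) / N;
    console.log(nsPerIter.toFixed(2) + " ns/iteration", sink);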
I have had numerous conversations about performance with people who invent theories without ever actually measuring anything. Typically this comes from students.
That is not correct in practice. Some operations are primarily CPU bound, some are primarily GPU bound, and some are primarily memory bound. You can absolutely see the difference when benchmarking things on different hardware. It provides real insight into how these operations actually work.
When you conduct your own measurements, we can have an adult conversation about this. Until then it's all just wild guessing from your imagination.
Sure, go ahead and show me your "data structure traversal", in JavaScript (JIT'd is fine, since clearly non-JIT is just out of the question), that works in 3-5 cycles.
The whole topic of this conversation is a measurement in which it was claimed that "a billion DOM operations per second" were being done in 2015. That's a concrete number. Show me the actual DOM operation that can be done a billion times per second, in 2015.
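If anyone wants to try, here's a minimal sketch of what I'd count as actually measuring a DOM operation; textContent assignment is just an arbitrary pick, substitute whichever operation the original claim was about:

    // Rough sketch of benchmarking one concrete DOM operation.
    // The operation (textContent assignment) is an example choice only;
    // the point is to count real operations per second, not loop iterations.
    const el = document.createElement("div");
    document.body.appendChild(el);
    const N = 10_000_000;
    const t0 = performance.now();
    for (let i = 0; i < N; i++) {
      el.textContent = "x" + (i & 1);   // a real DOM write each iteration
    }
    const t1 = performance.now();
    const opsPerSec = N / ((t1 - t0) / 1000);
    console.log((opsPerSec / 1e6).toFixed(1) + " million ops/sec");
    el.remove();

Run that and see what number actually comes out.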
The burden of proof here is on you, not me. I'm making the perfectly sensible claim that all you can do in a low single-digit number of cycles is run a loop. I have, in fact, shown in other languages, down at the assembler level, that things claiming to run in 0.6 ns are just empty loops, so I'm as satisfied on that front as I need to be. It's not exactly hard to see when you look at the assembler. You don't even need to know assembler to know that you aren't doing any real work with just 3 or 4 opcodes.
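The usual shape of the mistake looks something like this; doWork() here is a hypothetical stand-in for whatever was supposedly being measured:

    // Hypothetical stand-in for the work being "benchmarked".
    function doWork(i) { return Math.sqrt(i) * 2; }
    const N = 50_000_000;

    // Suspect version: the result is discarded, so the JIT is free to
    // drop doWork() entirely and you end up timing a bare loop.
    let t0 = performance.now();
    for (let i = 0; i < N; i++) {
      doWork(i);
    }
    console.log("discarded:", (performance.now() - t0).toFixed(1), "ms");

    // Version where the result stays live, so the work can't be removed.
    let sink = 0;
    t0 = performance.now();
    for (let i = 0; i < N; i++) {
      sink += doWork(i);
    }
    console.log("kept:", (performance.now() - t0).toFixed(1), "ms", sink);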
I don't know how you expect to just Measure Harder and get a billion operations per second done on any DOM structure, but I suspect you're going to be disappointed. Heck, I won't even make you find a 2015 machine. Show it to me in 2025, that's fine. Raw GHz haven't improved much and pipelining won't be the difference.
I cannot go back in time to 2015 conditions, but you can run the tests yourself and get your own numbers. Try running them in different browsers and on different hardware. Another interesting thing to experiment with is HTTP roundtrip speed and WebSocket send versus receive speed on different hardware.
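For the WebSocket part, a minimal roundtrip-timing sketch looks like this (the URL is a placeholder; point it at an echo endpoint you control so you know what both ends are doing):

    // Time a single send -> echo -> receive roundtrip.
    // ECHO_URL is a placeholder; use an echo server you control.
    const ECHO_URL = "wss://example.com/echo";
    const ws = new WebSocket(ECHO_URL);
    ws.onopen = () => {
      const t0 = performance.now();
      ws.onmessage = () => {
        console.log("roundtrip:", (performance.now() - t0).toFixed(2), "ms");
        ws.close();
      };
      ws.send("ping");
    };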
This is interesting because many assumptions are immediately destroyed once the numbers come in. Many of these operations can execute dramatically faster on hardware with slower CPUs so long as the bus and memory speeds are greater.
What's also interesting is that many developers cannot measure things. It seems as if the very idea of independently measuring things is repulsive. Many developers just expect someone to hand them numbers, and then don't know what to do with them, especially if the numbers challenge their assumptions; it's a kind of cognitive conservatism.