This is exactly what I am talking about. All these excuses, especially about vanity, are masking behaviors.
DOM access is not quite as fast now as it was 10 years ago. In Firefox I was getting just under a billion operations per second when perf testing on hardware with slow DDR3 memory. People with more modern hardware were getting closer to 5 billion ops/second. That isn’t slow.
Chrome has always been much slower. Back then I was getting closer to a max of 50 million ops/second perf testing the DOM. Now Chrome is about half that fast, but their string interpolation of query strings is about 10x faster.
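For concreteness, "perf testing the DOM" means a microbenchmark of roughly this shape (a minimal sketch, not the actual test; the element and the property being read are arbitrary placeholders):

```js
// Sketch of a DOM-access microbenchmark: time N property reads
// against a live element and report operations per second.
const el = document.createElement("div");
el.id = "probe";
document.body.appendChild(el);

const N = 1e8;
const t0 = performance.now();
for (let i = 0; i < N; i++) {
  el.id; // one "DOM access" per iteration
}
const secs = (performance.now() - t0) / 1000;
console.log(`${(N / secs / 1e6).toFixed(0)} million ops/sec`);
```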
The only real performance problem is the JS developer doing stupid shit.
"In Firefox I was getting just under a billion operations per second when perf testing on hardware with slow DDR3 memory."
When you profile something and get "a billion per second", what you've got is a loop whose body has been entirely optimized away. Presumably the JIT noticed the body did nothing and changed nothing, and removed it. I don't think there's a single real DOM operation you can do in an amortized 3-ish CPU cycles per operation (on a 2015 3 GHz-ish CPU, though even at 5 GHz it wouldn't matter).
That's not a real performance number and you won't be seeing any real DOM operations being done at a billion per second anytime soon.
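You can see the ceiling yourself with a loop that does nothing; whatever number it prints is loop overhead, not work (a sketch, with an arbitrary iteration count):

```js
// An "empty" benchmark loop: the body has no observable effect,
// so the JIT is free to reduce it to pure loop overhead.
const N = 1e9;
const t0 = performance.now();
for (let i = 0; i < N; i++) {
  // nothing observable happens here
}
const secs = (performance.now() - t0) / 1000;
// Happily reports "billions per second" -- it measured no work at all.
console.log(`${(N / secs / 1e9).toFixed(1)} billion "ops"/sec`);
```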
Or, more likely, it’s a traversal of a data structure already in memory. If so, then it is a very real operation executing as fast as reported, at near memory speed.
You can't even "traverse" a data structure at that speed. We're talking low-single-digit cycles here. A compiled for loop tends to need two cycles just to loop: you have to increment, compare, and jump back, but the CPU can predict that the loop will in fact jump back, so those three operations can finish in two cycles. I don't know what the minimum for a JS-engine JIT'd loop is, but needing 4 or 5 wouldn't stun me. That's literally the "compiled loop do-nothing" speed.
I mention this, and underline it, because it's really an honorary "latency number every programmer should know". I've run into it multiple times online, and now twice at work: someone thought they were benchmarking something, but didn't stop to notice that 0.6 nanoseconds per iteration comes out to about 2-3 cycles (depending on CPU speed), and there's no way the thing they thought they were measuring could be done that quickly. It's a thing worth knowing.
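The sanity check is one line of arithmetic, sketched here (the 3 GHz clock is an assumption, roughly a 2015 desktop CPU):

```js
// Convert a claimed throughput into time and cycles per operation.
const claimedOpsPerSec = 1e9;  // "a billion operations per second"
const clockHz = 3e9;           // assumed ~2015 desktop clock speed
const nsPerOp = 1e9 / claimedOpsPerSec;         // 1 ns per operation
const cyclesPerOp = clockHz / claimedOpsPerSec; // 3 cycles per operation
console.log(`${nsPerOp} ns/op, ~${cyclesPerOp} cycles/op`);
```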
I have had numerous conversations about performance with people who invent theories without ever actually measuring anything. Typically this comes from students.
That is not correct in practice. Some operations are primarily CPU bound, some are primarily GPU bound, and some are primarily memory bound. You can absolutely see the difference when benchmarking things on different hardware. It provides real insight into how these operations actually work.
Once you have conducted your own measurements, we can have an adult conversation about this. Until then it’s all just wild guessing from your imagination.
Sure, go ahead and show me your "data structure traversal", in JavaScript (JIT'd is fine, since clearly non-JIT is just out of the question), that runs in 3-5 cycles.
The whole topic of this conversation is a measurement in which it was claimed that "a billion DOM operations per second" were being done in 2015. That's a concrete number. Show me the actual DOM operation that can be done a billion times per second, in 2015.
The burden of proof here is on you, not me. I'm making the perfectly sensible claim that all you can do in a low-single-digit number of cycles is run a loop. I have, in fact, shown in other languages, down at the assembler level, that things claiming to run in 0.6 ns are in fact just empty loops, so I'm as satisfied on that front as I need to be. It's not exactly hard to see when you look at the assembler, and you don't even need to know assembler to know that you aren't doing any real work in just 3 or 4 opcodes.
I don't know how you expect to just Measure Harder and get a billion operations per second done on any DOM structure but I expect you're going to be disappointed. Heck, I won't even make you find a 2015 machine. Show it to me in 2025, that's fine. Raw GHz haven't improved much and pipelining won't be the difference.
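And if you do measure, at least keep the loop body observable so the JIT can't delete it; something like this sketch (the element and the property read are placeholders):

```js
// Keep the result live so the engine has to do the work every iteration.
const el = document.body; // any real element will do
const N = 1e7;
let sink = 0;
const t0 = performance.now();
for (let i = 0; i < N; i++) {
  sink += el.childNodes.length; // a real DOM read feeding a live value
}
const secs = (performance.now() - t0) / 1000;
// Printing sink prevents the whole loop from being optimized away.
console.log(`${(N / secs / 1e6).toFixed(1)} million ops/sec (sink=${sink})`);
```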
> All these excuses, especially about vanity, are masking behaviors.
1. These are not excuses, these are facts of life
2. No idea where you got vanity from
> DOM access is not quite as fast now as it was 10 years ago. I was getting just under a billion operations per second
Who said anything about DOM access?
> The only real performance problem is the JS developer doing stupid shit.
Ah yes. I didn't know that "animating a simple rectangle requires 60% CPU" is "developers doing stupid shit" rather than the DOM being slow, just because you can do some meaningless "DOM access" billions of times a second.
Please re-read what I wrote and make a good faith attempt to understand it. Overcome your bias and foregone conclusions.
> DOM access is not quite as fast now as it was 10 years ago. In Firefox I was getting just under a billion operations per second when perf testing on hardware with slow DDR3 memory. People with more modern hardware were getting closer to 5 billion ops/second. That isn’t slow.
> Chrome has always been much slower. Back then I was getting closer to a max of 50 million ops/second perf testing the DOM. Now Chrome is about half that fast, but their string interpolation of query strings is about 10x faster.
> The only real performance problem is the JS developer doing stupid shit.