There is a problem in the code that causes the grid_1d implementation to dynamically grow the array on demand.
In the test case, the loops run from w down to 0 and from h down to 0 inclusive, so each scanline holds (w + 1) cells. The grid_1d implementation allocates an array of w x h cells, when it should in fact be (w + 1) x (h + 1).
You passed in [x, y], where x runs from w down to 0 and y from h down to 0. But the indexing code in grid_1d is pos[0] * w + pos[1], which translates to x * w + y; given those ranges, the correct code should be something like y * w + x. I'm sorry if I pushed too hard on the correctness of this code, but I'm just not quite buying that indexing a 1d array will be slower than indexing an array of arrays.
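For concreteness, here is a minimal sketch of both fixes together, assuming the inclusive 0..w and 0..h loops described above (the names and structure are illustrative, not the repo's actual code):

    // Hypothetical corrected grid_1d: inclusive loops mean (w + 1) cells
    // per scanline and (h + 1) scanlines.
    function grid_1d(w, h) {
      var stride = w + 1;                       // cells per scanline
      var cells = new Array(stride * (h + 1));  // (w + 1) x (h + 1) cells total
      return {
        get: function (pos) {
          // row-major: y selects the scanline, x the cell within it
          return cells[pos[1] * stride + pos[0]];
        },
        set: function (pos, value) {
          cells[pos[1] * stride + pos[0]] = value;
        }
      };
    }

With the original x * w + y formula, the largest index generated is w * w + h, which can run well past the allocated w x h cells; that is why the array grows on demand.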
> I am just not quite buying the fact that indexing a 1d array will be slower than indexing an array of arrays
As it happens, I recently ran similar benchmarks (on sparse arrays represented using JS hashes) and got similar results: a hash of hashes performed much faster than a single hash with computed keys, even though the keys seemed trivial to compute. It was a disappointing outcome insofar as the data structure in question is performance-critical and I had hoped that flattening it would be an optimization. Another almost clever idea backfires. I wonder whether collision/balancing issues in the underlying hashtable implementation are responsible.
Admittedly, our sparse grid is a different beast from the OP's dense grid, except that these things are not so different in JavaScript, whose arrays are weird specialized hashes anyway, and thus not so different when dense vs. sparse. (Though not the same, either: http://news.ycombinator.com/item?id=1999637)
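For reference, the two representations I compared were along these lines (an illustrative sketch, not my actual benchmark code):

    // Sparse grid as a hash of hashes: one property lookup per dimension.
    var nested = {};
    function nestedSet(x, y, v) {
      (nested[x] || (nested[x] = {}))[y] = v;
    }
    function nestedGet(x, y) {
      var row = nested[x];
      return row && row[y];
    }

    // Sparse grid as a single flat hash with computed string keys.
    var flat = {};
    function flatSet(x, y, v) {
      flat[x + ',' + y] = v;
    }
    function flatGet(x, y) {
      return flat[x + ',' + y];
    }

My guess is that building and hashing a fresh, longer string key on every access costs more than the nested version's extra level of indirection, though the collision/balancing question above could also play a part.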
Ah, you're right about that! I'll fix it when I have a chance, though it won't affect the performance.
You're welcome to clone the repo, make changes, and run the tests yourself (just open index.html in a browser).
I was surprised too, but my rationalization is that the engine has been so heavily optimized towards DOM-like operations for the past decade that doing math is comparatively slower. Another possibility is that the 2d array effectively partitions the space the engine has to search, in case it's doing something stupid underneath.
One day when I'm brave enough I'll look into the V8 code for arrays and objects.
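In the meantime, a quick way to poke at this without reading V8 is a timing harness along these lines (a hypothetical sketch; numbers will vary by engine and hardware, and a careful benchmark would warm up the JIT and interleave runs):

    function bench(label, fn) {
      var t0 = Date.now();
      var result = fn();  // keep the sum so the loop isn't dead code
      console.log(label + ': ' + (Date.now() - t0) + 'ms (' + result + ')');
    }

    var w = 1000, h = 1000;

    // Array of arrays, indexed grid2d[y][x].
    var grid2d = [];
    for (var y = 0; y < h; y++) {
      grid2d[y] = [];
      for (var x = 0; x < w; x++) grid2d[y][x] = 1;
    }

    // Flat array, indexed grid1d[y * w + x].
    var grid1d = new Array(w * h);
    for (var i = 0; i < w * h; i++) grid1d[i] = 1;

    bench('2d array of arrays', function () {
      var sum = 0;
      for (var y = 0; y < h; y++)
        for (var x = 0; x < w; x++) sum += grid2d[y][x];
      return sum;
    });

    bench('1d flat array', function () {
      var sum = 0;
      for (var y = 0; y < h; y++)
        for (var x = 0; x < w; x++) sum += grid1d[y * w + x];
      return sum;
    });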