I find it highly suspect when a data structure uses independently allocated nodes that contain actual pointers (as opposed to indirection ints). I know people like to make the tree just be another node, but it's really, really important to allocate things together! I would deem it mandatory to allocate every node of a tree in the same vector.
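For what it's worth, here's a minimal sketch of what I mean (names are made up): the tree owns one vector, children are int indices into it, and there's no per-node allocation at all.

```cpp
#include <cstdint>
#include <vector>

// Sketch: every node lives in the tree's own vector, and children are
// referenced by index into that vector rather than by pointer.
struct Node {
    int32_t value;
    int32_t left  = -1;  // index into Tree::nodes; -1 means "no child"
    int32_t right = -1;
};

struct Tree {
    std::vector<Node> nodes;  // one contiguous allocation for the whole tree

    // Append a node and return its index (the "handle" to it).
    int32_t add(int32_t value) {
        nodes.push_back(Node{value});
        return static_cast<int32_t>(nodes.size()) - 1;
    }
};
```

Traversal follows small ints within one allocation, so the whole tree tends to stay in a few cache lines, and indices stay valid even when the vector reallocates (unlike pointers into it).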
Are you criticizing the separation of m_sparseIds and m_items in this article? Could you give some more detail as to why you see this as a problem?
The only performance issue I see with keeping these separate is what the author describes as Consideration #2: you have to do a completely separate read for the indirection, which may incur an additional cache miss. But that is the cost of keeping the dense set dense and the handles constant.
Is there another issue I'm overlooking? Maybe I misunderstand you, since this structure does use int indexes (both for the free lists and to locate objects in the other array), and I don't know what you mean by "I know people like to make the tree just be another node".
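To make the trade-off concrete, here's a rough sketch of the two-array layout as I understand it from the article (my own simplified names, not the article's actual code): a sparse array of indirection ints mapping stable handles to slots in a dense item array.

```cpp
#include <cstdint>
#include <vector>

struct Item { int32_t payload; };

// Sketch of the sparse/dense split: handles index the sparse array,
// which in turn indexes the densely packed item array.
struct Pool {
    std::vector<int32_t> sparseIds; // handle -> index into items
    std::vector<Item>    items;     // dense storage, always packed

    // Create an item and return a handle that stays valid even if
    // items are later moved around to keep the dense array packed.
    int32_t create(int32_t payload) {
        sparseIds.push_back(static_cast<int32_t>(items.size()));
        items.push_back(Item{payload});
        return static_cast<int32_t>(sparseIds.size()) - 1;
    }

    // Consideration #2 in action: two dependent reads, and the first
    // (the indirection) can be a cache miss all by itself.
    Item& get(int32_t handle) { return items[sparseIds[handle]]; }
};
```

The double read in `get` is the cost being discussed; the payoff is that iteration over `items` touches only live, contiguous data.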
You should use maps in game engines parsimoniously, and when you do, use integer keys, like a hash or something. This should be evident to any game programmer.
If you are at the point of inventing a more complex structure to compensate for programmer mistakes and/or malpractice, I suspect you should fire or reassign people instead of coding this.
But meh, I'm probably getting too old/grumpy... lol
Inspired by "Pitfalls of Object Oriented Programming"[0], I did something similar to this (in C). I don't know how the CPU usage would compare done another way, but this way processing thousands of matrices is no problem.
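Not my actual code (mine was C, and this is just an illustrative sketch), but the idea was roughly this: instead of thousands of separately allocated matrices, flatten them into one contiguous buffer and process them in a tight loop.

```cpp
#include <cstddef>
#include <vector>

// Sketch: a batch of 4x4 row-major matrices stored back to back in one
// buffer, processed in a single pass over contiguous memory.
struct MatrixBatch {
    static constexpr int kSize = 16;   // 4x4 floats per matrix
    std::vector<float> data;           // count * 16 floats, contiguous

    explicit MatrixBatch(std::size_t count) : data(count * kSize, 0.0f) {}

    // Uniformly scale every matrix in the batch. The flat loop over one
    // array is trivial for the compiler to auto-vectorize.
    void scaleAll(float s) {
        for (float& f : data) f *= s;
    }
};
```

The same transform applied per-matrix through a pointer chase would touch scattered allocations; here it's one linear sweep.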
Could someone give this code a try to see if G++ will vectorize its tree? If the data is laid out to enhance cache hits, then you should see some serious performance gains with vectorization enabled, if g++ recognizes the pattern.
Yes! I work on a dynamic compiler (also known as a JIT) for Ruby that will, for example, represent small hash maps as a single allocation of a linear, consecutive set of keys and values. This helps further optimisations such as escape analysis, scalar replacement and loop unrolling, which in turn improve the effectiveness of inline caching and then inlining, splitting and specialisation. So after all that it can have quite an impact, beyond simply being more cache friendly.
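A very rough C++ analogue of the representation (not the JIT's actual code, and real small-map optimisations live inside the compiler's object model): one flat, fixed-size allocation of interleaved keys and values, searched linearly.

```cpp
#include <array>
#include <cstdint>
#include <cstddef>
#include <optional>

// Sketch: a small map as a single flat array, keys and values
// interleaved, with linear scan instead of hashing. Because it's one
// fixed-size allocation with no internal pointers, a compiler can more
// easily prove it doesn't escape and replace it with scalars.
template <std::size_t N>
struct SmallMap {
    std::array<int64_t, 2 * N> slots{}; // [key0, val0, key1, val1, ...]
    std::size_t count = 0;

    void put(int64_t key, int64_t value) {
        for (std::size_t i = 0; i < count; ++i) {
            if (slots[2 * i] == key) { slots[2 * i + 1] = value; return; }
        }
        slots[2 * count] = key;       // assumes count < N for this sketch
        slots[2 * count + 1] = value;
        ++count;
    }

    std::optional<int64_t> get(int64_t key) const {
        for (std::size_t i = 0; i < count; ++i) {
            if (slots[2 * i] == key) return slots[2 * i + 1];
        }
        return std::nullopt;
    }
};
```

For a handful of entries the linear scan is typically faster than hashing anyway, and the fixed layout is what makes the downstream optimisations possible.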