
For every object added and removed, the vector and all of its data was basically reallocated, moved, deleted, repeat.

That doesn't sound right at all. Of course, if you add or remove objects at the vector's head, all of its data will be moved, though probably not reallocated. But if you need frequent insertions and deletions anywhere but at the tail of the vector, you picked the wrong data structure. A double-ended queue (<deque>) would have suited you much better.
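To illustrate (the element counts are arbitrary): erasing at a vector's head shifts every remaining element, while a deque's pop_front touches none of them:

    #include <deque>
    #include <vector>

    int main() {
        std::vector<int> v(1000000, 42);
        v.erase(v.begin());   // O(n): shifts ~1M elements down one slot

        std::deque<int> d(1000000, 42);
        d.pop_front();        // O(1): no elements are moved
    }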

Moving to a fixed-size array in C++ I saw an easy 50% drop in CPU usage.

Did you reimplement element addition/deletion logic from scratch?



I wouldn't call myself an expert on vertex buffers, but IIRC the OpenGL API to load data into a buffer just takes a pointer to the data and a size, so it requires the data to be contiguous in memory beforehand - meaning deque may not have been an option.
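The call in question is glBufferData. A rough sketch of an upload path (uploadVertices and Vertex are illustrative names, and a live GL context with loaded function pointers is assumed):

    #include <vector>
    #include <GL/gl.h>  // assumes a GL context exists and entry points are loaded

    struct Vertex { float x, y, z; };

    void uploadVertices(const std::vector<Vertex>& verts) {
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER,
                     verts.size() * sizeof(Vertex),  // size in bytes
                     verts.data(),                   // must point at one contiguous block
                     GL_DYNAMIC_DRAW);
    }

A segmented deque would first have to be copied into a temporary contiguous buffer before a call like this.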


Agreed. But I can't think of any data structure that would offer you contiguous, order-preserving storage and O(1) insertions/deletions of arbitrary elements.


There are actually implementations of deque which can store elements contiguously: imagine a vector where the "first element" is placed in the middle of the vector, and push_{front,back} reallocate whenever either end is reached. It wastes more memory than the typical deque implementation, but the elements are contiguous.
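A rough sketch of the idea (hypothetical, not a drop-in std::deque replacement; it assumes T is default-constructible and copy-assignable, and omits empty checks):

    #include <cstddef>
    #include <vector>

    template <typename T>
    class ContiguousDeque {
        std::vector<T> buf_;       // backing storage; elements live in [head_, tail_)
        std::size_t head_, tail_;

        // Grow the buffer and re-center the elements so both ends have room.
        void recenter(std::size_t new_cap) {
            std::vector<T> nb(new_cap);
            std::size_t n  = tail_ - head_;
            std::size_t nh = (new_cap - n) / 2;
            for (std::size_t i = 0; i < n; ++i) nb[nh + i] = buf_[head_ + i];
            buf_  = std::move(nb);
            head_ = nh;
            tail_ = nh + n;
        }

    public:
        ContiguousDeque() : buf_(8), head_(4), tail_(4) {}

        void push_back(const T& v) {
            if (tail_ == buf_.size()) recenter(buf_.size() * 2);
            buf_[tail_++] = v;
        }
        void push_front(const T& v) {
            if (head_ == 0) recenter(buf_.size() * 2);
            buf_[--head_] = v;
        }
        void pop_back()  { --tail_; }   // sketch only: no empty check
        void pop_front() { ++head_; }   // sketch only: no empty check

        T*          data()       { return buf_.data() + head_; }  // contiguous
        std::size_t size() const { return tail_ - head_; }
    };

data() can be handed straight to something like glBufferData, which is exactly what the typical segmented deque can't offer.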


You still don't get O(1) insertions/deletions of arbitrary elements with this implementation.

Insertions and deletions at either end are amortized O(1), but inserting or deleting in the middle is still O(n): everything on one side of the insertion point has to shift.



