I've found measuring memory usage with STL containers to be difficult: they have their own allocator, and unless you're using reserve() they often end up allocating extra space for things. Dr. Dobb's had an interesting article a long time ago on optimizing allocators [1], and memory management was one of the main reasons Electronic Arts rolled their own variant of the STL [2]. I use the STL a lot for convenience, but I've been burned enough times that when memory consumption is high or allocation traffic is heavy, I manage it manually and make sure to use a modern allocator like jemalloc or tcmalloc [3].
By default, std::allocator just wraps new, so it doesn't make anything harder to measure. And vector isn't "allocating extra space" for no reason: its capacity grows geometrically in order to provide amortized O(1) push_back. If you use reserve() incorrectly, for example calling it repeatedly inside a loop, you can actually trigger quadratic complexity (as I did in the past when I didn't know any better). Even with unused capacity, vector is quite space-efficient. For example, with GCC's 2x growth factor, a vector will on average be carrying 33% extra space (0.5/1.5); with VC's 1.5x growth factor, 20% extra space (0.25/1.25). That's a pretty small cost to pay for vector's optimal locality. And in practice, many containers will actually be exactly-sized (range construction and copy construction produce exact sizing), so the overhead is even lower.
[1] http://www.drdobbs.com/cpp/improving-performance-with-custom... [2] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n227... [3] http://locklessinc.com/benchmarks_allocator.shtml