Reputation: 4376
I have just read a question regarding initializing multidimensional vectors (question), and Viktor Sehr and Sbi recommended instead using a single vector and getting the element with my_vector[x+y*100+z*100*100]. Why is this? Is it for performance reasons? If so, how does it improve performance? Thanks in advance, ell.
Edit: Do these reasons still apply when the width/height/depth are not the same and can change?
Upvotes: 4
Views: 1602
Reputation: 104708
…Viktor Sehr and Sbi recommended instead using a single vector and getting the element with my_vector[x+y*100+z*100*100]. Why is this?
Given the dimensions, it's a logical recommendation if the sizes are fixed.
Is it for performance reasons? If so, how does it improve performance?
Consider:
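As a rough sketch (assuming the fixed 100x100x100 dimensions from the question), the flattened version is one contiguous allocation indexed with a single offset:

#include <vector>

int main()
{
    // One contiguous allocation holding all 100*100*100 elements.
    std::vector<int> my_vector(100 * 100 * 100);

    int x = 1, y = 2, z = 3;

    // A single offset into one block: adjacent x values sit next to each
    // other in memory, so an innermost loop over x walks memory sequentially.
    my_vector[x + y * 100 + z * 100 * 100] = 42;
}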
Edit: Do these reasons still apply when the width/height/depth are not the same and can change?
Resizing this (massive!) array can be extremely slow. You have to understand how your program will operate if you want it to be fastest. The copy and destruction complexity of elements is also a consideration (when using something more complex than int). If you do a lot of resizing or insertions/deletions, then the flattened vector can be very slow.
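For example, growing even one dimension of the flattened vector means reallocating and copying every element into a new position. A hypothetical helper (resize3d is just a sketch, not a standard function) might look like this:

#include <vector>

// Grow or shrink a flattened W*H*D vector to newW*newH*newD. Every surviving
// element has to be copied to a different offset, which is what makes
// resizing the flat representation expensive.
std::vector<int> resize3d(const std::vector<int>& old,
                          int W, int H, int D,
                          int newW, int newH, int newD)
{
    std::vector<int> out(newW * newH * newD);   // one big new allocation
    for (int z = 0; z < D && z < newD; ++z)
        for (int y = 0; y < H && y < newH; ++y)
            for (int x = 0; x < W && x < newW; ++x)
                out[x + y * newW + z * newW * newH] =
                    old[x + y * W + z * W * H];
    return out;
}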
However, if its dimensions are fixed, you can do much better than std::vector. std::array is one alternative. (If you go the route of std::array, be careful of what you allocate on the stack.)
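A minimal sketch of that route (assuming C++14 for std::make_unique; the point is just that roughly 4 MB of ints should not live on the stack):

#include <array>
#include <memory>

int main()
{
    // 100*100*100 ints is roughly 4 MB, far too large for a typical stack,
    // so keep the fixed-size block on the heap rather than as a local.
    using Grid = std::array<int, 100 * 100 * 100>;
    auto grid = std::make_unique<Grid>();

    int x = 1, y = 2, z = 3;
    (*grid)[x + y * 100 + z * 100 * 100] = 42;   // same flat indexing
}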
Upvotes: 2
Reputation: 182038
This advice is sound if you're looking at a bottleneck here. If memory usage or speed of access of this vector are not critical, just go down the easiest road.
You should have a look at Boost.MultiArray, which gives you the best of both worlds.
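A minimal sketch of what that looks like (Boost.MultiArray stores the elements in one contiguous block but lets you index with [x][y][z], and the extents do not have to be equal):

#include <boost/multi_array.hpp>

int main()
{
    // A 3-D view over a single contiguous block of memory.
    boost::multi_array<int, 3> a(boost::extents[100][100][100]);
    a[1][2][3] = 42;

    // Dimensions can be changed later; elements at overlapping indices
    // are preserved.
    a.resize(boost::extents[50][60][70]);
}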
If for whatever reason you cannot use Boost, I'd definitely typedef it:
typedef vector<vector<vector<int> > > My3DIntVector;
My3DIntVector v;
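For completeness, a sized instance of that typedef (using the question's 100x100x100 dimensions, and a different name so it does not clash with v above) could be constructed like this:

// Each dimension is sized independently and can be changed later.
My3DIntVector w(100, vector<vector<int> >(100, vector<int>(100)));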
Upvotes: 4
Reputation: 9740
The only thing I can imagine is that this is one big block of memory, which prevents memory fragmentation. It is also much easier to cache.
A vector<vector<vector<int> > > consists of many separate chunks of memory: one for the outer vector's buffer, one for each vector<vector<int> > it contains, and one for each innermost vector<int>. This is not cache-friendly and can make memory usage hard to predict.
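With the question's 100x100x100 dimensions, and assuming one heap allocation per non-empty vector, that works out to 1 + 100 + 100*100 = 10101 separate allocations, compared with a single allocation for the flattened vector<int>.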
Upvotes: 0
Reputation: 51555
Just a few reasons:
It wastes space, it is slow (unpredictable memory access, cache waste, etc.), and it's cumbersome.
The main performance drawback is likely to be caching. With flat arrays you are guaranteed that the memory is contiguous, so the cache is happy. With a vector of vectors, who knows!
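As a small sketch of what that contiguity buys (an assumed example, not from the answers): the whole flattened structure can be traversed as one linear range, which is the access pattern caches and prefetchers handle best.

#include <numeric>
#include <vector>

int main()
{
    std::vector<int> flat(100 * 100 * 100, 1);

    // One sequential pass over a single contiguous buffer.
    long long sum = std::accumulate(flat.begin(), flat.end(), 0LL);
    (void)sum;   // suppress the unused-variable warning in this sketch
}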
Upvotes: 8