Reputation: 1239
This is, I guess, a pretty common scenario for anyone doing performance studies.
Suppose one has a few data structures and would like to evaluate their space performance -- not by counting with e.g. sizeof, but rather in an empirical way. I model this situation in an MWE as follows: there is a stack<int> and a set<int>, and I profile the program with

valgrind --tool=massif --max-snapshots=1000 --time-unit=ms ./main
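The snapshots land in a massif.out.<pid> file, which I open in massif-visualizer; a plain-text dump shows the same data:

ms_print massif.out.<pid>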
The code:
#include <set>
#include <stack>
#include <random>
#include <memory>
#include <functional>

using namespace std;

const int n = 42;
std::default_random_engine engine;
std::uniform_int_distribution<int> distribution(0, n - 1);

int main() {
    unique_ptr<stack<int>> stck;
    unique_ptr<set<int>> st;
    auto dice = bind(distribution, engine);
    {
        // first try the stack
        stck = make_unique<stack<int>>();
        for (auto i = 0; i < 0x420; ++i)
            stck->push(dice());
        stck.reset(); // destroy the stack before the set is built
    }
    {
        // then try the set
        st = make_unique<set<int>>();
        for (auto i = 0; i < 0x420; ++i)
            st->insert(dice());
        st.reset();
    }
    return 0;
}
I can see a clear divide when the stack-to-set transition occurs. However, the memory does not seem to be fully released -- intuitively (and maybe naively) one would expect a "seesaw" image. How do we achieve this effect? I presumed that .reset() would call the stack's destructor. I guess it does, since on the right half of the image the tooltips only mention _Rb_tree (i.e. the set<int>). My question is: which switch of the massif tool, or what sort of arrangement in my code, would produce a more intuitive-looking image?
Of course, I could write the same boilerplate code for each data structure I test, but I would rather juxtapose them in one run so that their memory performance is easily comparable.
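To illustrate, here is a rough sketch of the factoring I have in mind (the helper name exercise and its exact shape are just my first guess):

template <typename Container, typename Inserter, typename Gen>
void exercise(Inserter insert, Gen& dice, int count = 0x420) {
    // build, fill, and destroy one container, so that massif sees one
    // matched allocation/deallocation phase per data structure
    auto c = make_unique<Container>();
    for (auto i = 0; i < count; ++i)
        insert(*c, dice());
    c.reset();
}

which main() would then call once per container:

exercise<stack<int>>([](auto& s, int v) { s.push(v); }, dice);
exercise<set<int>>([](auto& s, int v) { s.insert(v); }, dice);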
Upvotes: 0
Views: 462
Reputation: 385098
There is a seesaw in that graph.
The green dwindles to nothing and gives way to the blue.
It's just hard to spot because the bulk of the heap usage (in orange) is taken up by overhead (the standard library, etc.), which dwarfs the memory taken by your containers.
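If you want the seesaw to dominate the picture, the simplest fix is to make the containers so large that their allocations dwarf that fixed overhead. A sketch of what I mean, reusing your MWE (0x42000 is an arbitrary larger count; note that with n = 42 your set can never hold more than 42 distinct keys, so the key range has to grow as well for the set side to be visible):

const int n = 0x42000;  // widen the key range so the set actually grows
...
for (auto i = 0; i < 0x42000; ++i)
    stck->push(dice());

With a few hundred thousand elements per container, the rise and fall of each one should be plain to see in the graph.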
Upvotes: 1