Reputation: 351
#include <cstdlib>
#include <memory>
#include <vector>
using namespace std;

int main() {
    const size_t size = 10000000;
    using T = unsigned short[];           // each element owns an array of unsigned shorts
    vector<unique_ptr<T>> v;
    v.resize(size);
    for (size_t n = 0; n != size; ++n) {
        v[n] = make_unique<T>(3);         // allocates 3 value-initialized shorts
        for (int i = 0; i != 3; ++i)
            v[n][i] = rand();
    }
}
I want to measure how much memory it uses.
Why is the actual size 3 times bigger than what valgrind displays? I'm running it on WSL, Ubuntu.
Upvotes: 4
Views: 860
Reputation: 6906
Valgrind is just reporting what it sees when it intercepts calls to malloc and new.

If you really want to see how much memory is being used, try massif, and also try massif with --pages-as-heap=yes. That argument will also cause massif to record pages mapped with mmap.
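For example, a typical invocation might look like this (assuming the program above is built into ./a.out, a name used here just for illustration; massif writes its results to massif.out.<pid> by default):

valgrind --tool=massif --pages-as-heap=yes ./a.out
ms_print massif.out.<pid>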
Upvotes: 2
Reputation: 118282
You are forgetting the overhead of the memory allocation itself. Just the new/delete itself: something has to keep track of it. Somewhere.

You have it easy, because the C++ library does all the hard work for you: you just new some arbitrary number of bytes, delete it later, and you're done. But it's not as easy for the C++ library. It has to know how big each newed object is, so that when it gets deleted the library knows how much memory has just been freed. And when adjacent objects in memory get deleted, the allocator also needs to notice that and combine the two adjacent freed blocks into one bigger chunk of memory, so that it can, potentially, be used for newing a larger object at some point later down the road.
This complexity doesn't come for free. It needs to be tracked and totaled up.
All of this jazz is going to require at least a pointer value, and a byte count value, at a minimum. Per allocation. A robust internal memory allocator may want to store an extra pointer, somewhere, but let's start with just a pointer and a byte count, as a minimalist implementation for a memory allocator.
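Here is a minimal sketch of that idea (a hypothetical toy allocator, not how any particular C++ library actually implements it): each allocation gets prefixed with a header holding a pointer and a byte count, so even a 6-byte request costs sizeof(Header) extra.

#include <cstddef>
#include <cstdio>
#include <cstdlib>

struct Header {          // hypothetical per-allocation bookkeeping
    Header* next;        // 8 bytes on a 64-bit platform
    std::size_t size;    // byte count of the caller's allocation
};

void* toy_alloc(std::size_t n) {
    // One underlying allocation carries the header plus the caller's bytes.
    Header* h = static_cast<Header*>(std::malloc(sizeof(Header) + n));
    h->next = nullptr;   // would link into a free list in a real allocator
    h->size = n;
    return h + 1;        // hand back the bytes just past the header
}

void toy_free(void* p) {
    std::free(static_cast<Header*>(p) - 1);  // step back to the real start
}

int main() {
    void* p = toy_alloc(6);                  // caller asks for 6 bytes...
    std::printf("real cost: %zu bytes\n",
                sizeof(Header) + 6);         // ...but 22 bytes are consumed
    toy_free(p);
}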
You are allocating sizeof(unsigned short)*3 bytes at a time, or 6 bytes by my count. On a 64-bit platform, the pointer will need to be 8 bytes long. Let's say you have a smart memory allocator that maintains a separate pool for allocations that don't exceed 64kb in size, so the byte count needs to be only 2 bytes. That's an overhead of ten bytes per allocation.

That overhead needs to be stored, kept track of, and piled onto every allocation. So, given an additional overhead of at least 10 bytes for every 6 bytes allocated, observing 2-3 times the expected memory use is pretty much in the ballpark. Definitely in the ballpark if the byte count gets tracked, internally, as 4 bytes, or if the memory pool is a doubly-linked list, requiring another 8 bytes (maybe 4 bytes if you're lucky) thrown into the mix.
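To put rough numbers on it (a back-of-the-envelope sketch; the 10-byte overhead is the estimate from above, not a measured value):

#include <cstddef>
#include <cstdio>

int main() {
    const std::size_t count    = 10000000;                   // allocations in the question
    const std::size_t payload  = sizeof(unsigned short) * 3; // 6 bytes requested each time
    const std::size_t overhead = 10;                         // pointer + 2-byte count, per above

    std::printf("requested: %zu MB\n", count * payload / 1000000);              // ~60 MB
    std::printf("estimated: %zu MB\n", count * (payload + overhead) / 1000000); // ~160 MB
}

That already lands between 2x and 3x of the requested bytes, in line with the observation above.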
Upvotes: 2