Reputation: 398
When running this code:
#include <iostream>
#include <vector>
#include <deque>

template< typename C >
void fillToMax( C & collection, typename C::value_type value )
{
    try
    {
        while( true )
            collection.push_back( value );
    }
    catch( std::bad_alloc const& )
    {
        std::cout << "bad alloc with size " << collection.size() << std::endl;
    }
    return;
}

void fillVector()
{
    std::vector<long> vecL;
    fillToMax( vecL, 123 );
}

void fillDeque()
{
    std::deque<long> deqL;
    fillToMax( deqL, 123 );
}

int main()
{
    fillVector();
    fillDeque();
}
I get the expected bad_alloc from the vector, which is easy to try/catch. The problem is when I substitute the vector with a deque: in that case my machine just crashes... black screen, it reboots, and when it comes back up it claims "you had an unexpected problem!"
I would like to use deque instead of vector to store a larger number of items without needing contiguous space. That would let me store more data, but I cannot afford for my application to crash the machine, and I would like to know how I can get a bad_alloc here as well.
Is this possible?
My tests use MinGW-W64 - gcc version 4.8.2 (x86_64-posix-seh-rev4) on win8.1
Upvotes: 1
Views: 1111
Reputation: 31445
The quick answer to why the vector gives you bad_alloc while the deque brings the machine down is that, because vector uses a single contiguous buffer, it hits bad_alloc "sooner", and it hits it on a request that asks for one large chunk.
Why? Because it is less likely that you will be able to allocate one large contiguous buffer than a smaller one.
vector will allocate a certain amount and then attempt a big "realloc" for a larger buffer. It might be possible to extend the current memory block in place, but it might not, in which case a whole new chunk of memory has to be found.
Say it grows by a factor of 1.5: if the buffer your vector currently uses takes 40% of the available memory, the replacement needs 60%, and both have to exist at the same time while the elements are copied over. That request takes you past the limit, so it fails with bad_alloc, even though in reality you are only using 40% of the memory.
So there is in fact memory still available, and even operating systems that use "optimistic" (overcommitting) memory allocation will not quietly over-allocate a request that size for you. You asked for a lot in one go and it could not give it to you. (They are not always totally optimistic.)
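If you want to watch that growth pattern, here is a small probe (not part of the original code) that prints the capacity each time it changes; the exact growth factor is implementation-defined, so the numbers you see depend on your standard library:

#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    std::vector<long> vecL;
    std::size_t lastCapacity = 0;
    for( int i = 0; i < 1000000; ++i )
    {
        vecL.push_back( 123 );
        // capacity only changes when a new, larger buffer has been allocated
        // and the old elements copied across
        if( vecL.capacity() != lastCapacity )
        {
            lastCapacity = vecL.capacity();
            std::cout << "size " << vecL.size()
                      << " -> capacity " << lastCapacity << std::endl;
        }
    }
}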
deque, on the other hand, asks for one modest chunk at a time, so you really will use up your memory. That makes it the better choice for large collections, but it has the downside that when you run out of memory you genuinely run out. Your lovely optimistic memory allocator cannot handle that and your process dies. (It kills something to free up memory; sadly, it was your process.)
Now, how do you avoid this happening? One answer is a custom allocator, i.e. the 2nd template parameter of deque, which could check the real system memory available and refuse to allocate once you have hit a certain threshold.
Of course that check is system dependent, but you could have different versions for different machines.
You could also set your own arbitrary "limit", of course.
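As a sketch of that arbitrary-limit idea (the names, the 1 GB budget and the choice to derive from std::allocator are mine, not something prescribed above): an allocator that starts throwing bad_alloc once a byte budget is exceeded, plugged in as the deque's second template parameter.

#include <cstddef>
#include <deque>
#include <memory>
#include <new>

// Arbitrary example budget: refuse past ~1 GB so the process gets a catchable
// bad_alloc long before the operating system runs dry. Tune to your machine.
static const std::size_t kByteBudget = 1024UL * 1024UL * 1024UL;
static std::size_t g_bytesInUse = 0;

template< typename T >
struct BudgetedAllocator : std::allocator<T>
{
    template< typename U > struct rebind { typedef BudgetedAllocator<U> other; };

    BudgetedAllocator() {}
    template< typename U > BudgetedAllocator( BudgetedAllocator<U> const& ) {}

    T* allocate( std::size_t n )
    {
        const std::size_t bytes = n * sizeof(T);
        if( g_bytesInUse + bytes > kByteBudget )
            throw std::bad_alloc();            // refuse before the OS gets into trouble
        T* p = static_cast<T*>( ::operator new( bytes ) );
        g_bytesInUse += bytes;
        return p;
    }

    void deallocate( T* p, std::size_t n )
    {
        g_bytesInUse -= n * sizeof(T);
        ::operator delete( p );
    }
};

// usage: the 2nd template parameter of deque
// std::deque<long, BudgetedAllocator<long> > deqL;
// fillToMax( deqL, 123 );   // now ends in a catchable bad_alloc

If you would rather check real free memory than a fixed budget, that same allocate() is the place to do it, but the query is OS specific (e.g. GlobalMemoryStatusEx on Windows, /proc/meminfo on Linux), which is why the fixed cap is the simpler, portable variant.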
Assuming your system is Linux, you might be able to turn overcommit off with
'echo 2 > /proc/sys/vm/overcommit_memory'
You would need root (admin) permissions to do that. (Or get someone who has it to configure it that way).
Otherwise, other ways to examine the memory usage are available in the Linux manuals, usually referred to in /proc.
If your system isn't Linux but another one that overcommits, you'll have to look up how you can bypass it, for example by writing your own memory manager. Otherwise take the simpler option of an arbitrary configurable maximum size.
Remember that with deque your allocator will only be invoked when a new "chunk" is needed, not for every push_back.
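You can see that for yourself with a counting allocator (again only a sketch, with made-up names): fill a deque and compare the number of push_back calls with the number of allocate calls.

#include <cstddef>
#include <deque>
#include <iostream>
#include <memory>

static std::size_t g_allocateCalls = 0;

template< typename T >
struct CountingAllocator : std::allocator<T>
{
    template< typename U > struct rebind { typedef CountingAllocator<U> other; };

    CountingAllocator() {}
    template< typename U > CountingAllocator( CountingAllocator<U> const& ) {}

    T* allocate( std::size_t n )
    {
        ++g_allocateCalls;                       // one call per chunk, not per element
        return std::allocator<T>::allocate( n );
    }
};

int main()
{
    std::deque<long, CountingAllocator<long> > deqL;
    for( int i = 0; i < 1000000; ++i )
        deqL.push_back( 123 );
    std::cout << deqL.size() << " push_backs, "
              << g_allocateCalls << " calls to allocate()" << std::endl;
}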
Upvotes: 2
Reputation: 2138
Xarylem, just attempting to answer "how can I prevent this" here...
You know something that throws bad_alloc: std::vector. You know something that crashes: std::deque.
So one way would be to create a new vector of size X; if that succeeds, clear the vector and push X more items into the deque. If it doesn't, you know you're walking into a quagmire. Something like:
std::deque<int> actualDequeToFill;
const std::size_t probeSize = 1024;     // "X": the headroom we insist on before each push
for( std::size_t i = 0; ; ++i )
{
    // test first: can we still get a fresh contiguous block of X ints?
    bool haveSpace = true;
    try { std::vector<int> testVector( probeSize ); }
    catch( ... ) { haveSpace = false; }  // the test vector is freed again either way

    if( !haveSpace )
        throw std::bad_alloc();          // "vector shows no space left"

    actualDequeToFill.push_back( static_cast<int>( i ) );
}
This isn't anywhere close to foolproof... so please use it as a possible idea for a workaround rather than as an implementation.
With that aside, my best guess would be... your compiler is not compliant... as I've mentioned in a comment, C++ requires deque::push_back to throw bad_alloc when it cannot allocate. If you can, move away from that compiler (this is basic stuff to get right).
Upvotes: 0
Reputation: 153919
You don't say what system you're using, so it's hard to say, but some systems "overcommit", which basically makes a conforming implementation of C++ (or even C) impossible; the system will say that there is memory available when there isn't, and crash when you try to use it. Linux is the most widely documented culprit here, but you can reconfigure it to work correctly.
The reason you get bad_alloc with vector is that vector allocates much larger chunks, and even with overcommit, the system will refuse to allocate memory if a single chunk is too big. Also, many mallocs will use a different allocation strategy for very large chunks; IIRC, the malloc in Linux switches to using mmap beyond a certain size, and the system may refuse an mmap even when an sbrk would have succeeded.
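As an illustration of that point (behaviour is OS dependent, so treat this as a sketch rather than a guarantee): a single absurdly large request is normally refused up front and so surfaces as a catchable bad_alloc, even on a system that overcommits smaller requests.

#include <iostream>
#include <new>

int main()
{
    try
    {
        // ~64 TiB in one request; far more than any current machine can back
        char* p = new char[ 1ULL << 46 ];
        p[0] = 0;          // touch it so the allocation isn't optimised away
        delete[] p;
    }
    catch( std::bad_alloc const& )
    {
        std::cout << "huge single request refused up front" << std::endl;
    }
}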
Upvotes: 4