MadDave

Reputation: 147

Resizing QByteArray throws std::bad_alloc when only using 600 MB of memory

I am new to Qt and need to load and process some large files, but I keep running out of memory while doing so. The following code illustrates my problem:

#include <QByteArray>
#include <iostream>

int main()
{
    QByteArray mem;
    for (int i = 1; i <= 20; ++i)
    {
        std::cout << "eating " << (i * 100) << " MB" << std::endl;
        mem.resize(i * 100 * 1024 * 1024); // grow to i * 100 MB
    }
}

I get std::bad_alloc when it reaches 600 MB. That really should not happen. Is there a secret switch to increase the heap size?

I am using Qt 5.0.2 on Windows and the Visual C++ 10.0 x86 compiler.

Upvotes: 4

Views: 2440

Answers (2)

Pixelchemist

Reputation: 24956

On Windows, a 32-bit process has 2 GB of user-mode address space by default. If that address space does not contain a contiguous free block large enough to hold your QByteArray, you will encounter a bad allocation exception.

MSVC offers the /LARGEADDRESSAWARE (Handle Large Addresses) and /HEAP (Set Heap Size) linker options.

You can check whether changing those options affects the number of bytes you can allocate at once, for example with the probe sketched below.
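
Something along these lines can serve as a quick probe (a rough sketch, not tied to Qt; the 100 MB step size is arbitrary). It requests ever larger single blocks until operator new throws, which shows roughly how large a contiguous allocation the process can make under the current linker settings:

#include <cstddef>
#include <iostream>
#include <new>

int main()
{
    const std::size_t step = 100u * 1024 * 1024; // grow in 100 MB steps
    for (std::size_t size = step; ; size += step)
    {
        try
        {
            char *block = new char[size]; // one contiguous block
            delete[] block;
            std::cout << (size / (1024 * 1024)) << " MB ok" << std::endl;
        }
        catch (const std::bad_alloc &)
        {
            std::cout << "bad_alloc at " << (size / (1024 * 1024)) << " MB" << std::endl;
            break;
        }
    }
    return 0;
}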

On my x64 machine, an executable compiled with /MACHINE:X86 on MSVC 2012 throws a std::bad_alloc exception for a single allocation of >= 1200 MB.

If I add /LARGEADDRESSAWARE to the linker command line, the program continues until it crashes after eating 2100 MB.

If I compile with /MACHINE:X64 instead, the process allocates blocks of up to 8000 MB without any exception (possibly even more, but I only tested up to 8 GB).

Upvotes: 2

Timo Geusch

Reputation: 24351

AFAIK QByteArray allocates a contiguous block of memory. While your application might still have plenty of virtual memory available, there is a good chance that the block your array currently lives in cannot be extended any further, and that the memory manager has no other contiguous block large enough for the requested size.
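
Note also that each growing resize() may have to allocate the new, larger block while the old one still exists, copying the data across, which fragments the address space further. As a rough sketch (assuming you know the final size in advance, and that it still fits in one contiguous block), reserving the capacity once avoids those repeated reallocations:

#include <QByteArray>

int main()
{
    QByteArray mem;
    // One allocation up front; resize() calls up to this capacity
    // reuse the same block instead of reallocating and copying.
    mem.reserve(600 * 1024 * 1024);
    for (int i = 1; i <= 6; ++i)
        mem.resize(i * 100 * 1024 * 1024);
    return 0;
}

This only helps as long as the final size itself fits into the 32-bit address space, which is why memory mapping is the better approach for really large files.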

If you need to process some large files, instead of loading them into memory in one chunk, I would recommend memory-mapping a "viewport" into the file and processing it that way. Depending on the size of the file, you may well be able to map the whole file in one go. That is also more efficient on Windows than loading the file byte by byte, because it uses the virtual memory system to page in the relevant parts of the file.
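
A minimal sketch of the viewport idea using Qt's own QFile::map() / QFile::unmap() (the file name and the 100 MB viewport size are placeholders):

#include <QFile>
#include <iostream>

int main()
{
    QFile file("big_input.dat"); // placeholder file name
    if (!file.open(QIODevice::ReadOnly))
    {
        std::cerr << "cannot open file" << std::endl;
        return 1;
    }

    const qint64 viewSize = 100 * 1024 * 1024; // 100 MB viewport
    const qint64 fileSize = file.size();

    for (qint64 offset = 0; offset < fileSize; offset += viewSize)
    {
        const qint64 len = qMin(viewSize, fileSize - offset);
        uchar *view = file.map(offset, len); // no copy; paged in on demand
        if (!view)
        {
            std::cerr << "map failed at offset " << offset << std::endl;
            return 1;
        }
        // ... process view[0 .. len-1] here ...
        file.unmap(view); // release this viewport before mapping the next
    }
    return 0;
}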

Upvotes: 5
