Reputation: 9392
So I got a piece of code running and I want to improve its performance. I noticed that deleting a large array takes quite a while (around 0.003 seconds), so I decided to hand the array off to another thread and delete it there.
Creating and dispatching to the thread is now much faster than deleting the array was, but the code following the thread creation suffers a performance hit, taking around 2 to 3 times longer to run.
Does anyone know why this is happening and how I can reduce the performance hit I'm running into? Note that I'm using dispatch_async because I have only coded on a Mac before and haven't tried other C/C++ threading libraries, so if anyone knows an alternative library that can do the same thing with better performance, I'll switch over to using that.
clock_t start, end, start2, end2;
start = clock();
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
std::set<int> *temp = a;
a = nullptr;
dispatch_async(queue, ^{
delete[] temp;
});
//delete[] a;
end = clock();
a = new std::set<int>[10000];
start2 = clock();
/*
Code here initializes stuff inside the array a
Code here never changes
*/
end2 = clock();
//(end2 - start2)/CLOCKS_PER_SEC is now much longer than it was without multithreading, but (end - start)/CLOCKS_PER_SEC is much faster (which is expected)
Upvotes: 0
Views: 302
Reputation: 29886
Chances are you're running into a shared lock in the heap allocator. There are various data structures that keep track of heap-allocated memory (i.e. memory allocated with malloc/free or new/delete) that need to be protected against concurrent access. My guess is that the delete operation on the background thread and the code between start2 and end2 are competing for that lock. There are various specialized C++ allocators that you can use, but the default one is thread-safe, and therefore there will be a performance penalty for using it concurrently from multiple threads.
Upvotes: 1