Reputation: 2343
I am looking for a data structure for the following scenario:
Do you think that a blocking FIFO queue is suitable/sufficient here? One other requirement is that the data structure is available as open-source and can be compiled for different operating systems (at least Windows/Linux).
The APR Util library looked promising at first sight, but the project files for Visual Studio are intended for prehistoric versions of this compiler suite.
Upvotes: 2
Views: 402
Reputation: 24867
A producer-consumer FIFO blocking queue would seem reasonable. All multithreaded OSes have suitable semaphores/mutexes/condvars/whatever to build the P-C queue, and you can confine them to one OS-specific interface unit.
The 'data structure' I would use is a queue of *SomeStruct that can contain a data chunk and any other metadata that is required to process the chunk.
When doing this sort of stuff, I usually create a large 'pool queue', (another P-C queue), that is filled up with malloced/calloced *SomeStruct at app startup. The threads would then just circulate the *SomeStructs from the pool, into the reading-thread, onto the threadpool FIFO, into the processing-threads and finally back to the pool queue. This flow-controls the data, preventing memory runaway and eliminating continual malloc/free.
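Such a P-C blocking queue needs only a mutex and a condition variable. A minimal portable sketch in C++11 (the class and method names here are just illustrative, not from any particular library):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>

// Minimal thread-safe blocking FIFO: pop() waits until an item is available.
template <typename T>
class BlockingQueue {
public:
    void push(T item) {
        {
            std::lock_guard<std::mutex> lk(m_);
            q_.push(std::move(item));
        }
        cv_.notify_one();  // wake one waiting consumer
    }

    T pop() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });  // block while empty
        T item = std::move(q_.front());
        q_.pop();
        return item;
    }

private:
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
};
```

A queue of `SomeStruct*` is then just `BlockingQueue<SomeStruct*>`; the struct itself never gets copied, only the pointer circulates.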
Edit:
Yes, 'FOFO' was a typo for FIFO.
My design uses two FIFO, thread-safe, blocking queues:
a) Pool queue, filled with a fixed number of malloced/calloced *SomeStructs at startup.
b) Processing queue for the processing threads to wait on.
The reading thread dequeues *SomeStructs from the pool queue, loads them up, and queues them to the processing queue, where the processing threads get them, work on them and then, when all done, requeue them to the pool queue.
This means that the number of SomeStructs is limited to the number created at startup, so you don't need to do any more malloc/calloc or free while the app runs. Also, if the reading thread tries to 'get ahead' of the processing threads, the pool queue will empty and the reading thread will block on it until the processing threads return some 'used' *SomeStructs to the pool for re-use - the reading thread will then run again.
Another advantage is that you don't need complex, bounded queues. Since the total number of *SomeStruct instances is limited to the number created at startup, the queues need only be large enough to hold that number - they never have to hold any more. So a simple fixed-size *SomeStruct[CtotalInstances] array would be fine as a basis for the queues, and you don't need to do any checking of the indexes for 'array full' or use an extra semaphore to count the 'empty space' :)
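To make that concrete, here is a hedged sketch of such a fixed-capacity queue over a plain array (C++11, illustrative names). Because at most `capacity` pointers exist in the whole program, push() can never find the array full, so only pop() needs a wait:

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <utility>
#include <vector>

// Fixed-capacity blocking ring buffer. Valid only under the pool invariant:
// the total number of items in circulation never exceeds `capacity`, so
// push() cannot overrun and needs no 'space available' condition.
template <typename T>
class FixedRingQueue {
public:
    explicit FixedRingQueue(std::size_t capacity) : buf_(capacity) {}

    void push(T item) {
        {
            std::lock_guard<std::mutex> lk(m_);
            buf_[tail_] = std::move(item);
            tail_ = (tail_ + 1) % buf_.size();
            ++count_;  // never exceeds capacity, by construction
        }
        cv_.notify_one();
    }

    T pop() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return count_ > 0; });  // wait for an item only
        T item = std::move(buf_[head_]);
        head_ = (head_ + 1) % buf_.size();
        --count_;
        return item;
    }

private:
    std::vector<T> buf_;
    std::size_t head_ = 0, tail_ = 0, count_ = 0;
    std::mutex m_;
    std::condition_variable cv_;
};
```

Size both the pool queue and the processing queue to CtotalInstances and the invariant holds for each of them individually.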
Oh - shutdown - I probably wouldn't bother shutting down the threads. If there is nothing for them to do, they can just sit there, doing nothing, until the app gets closed. If you really need to shut down the processing threads, send a 'poison pill' message, (a null pointer is a good pill - it doesn't need freeing), on the queue that tells the receiving thread to push the pill back onto the queue and then clean up and exit - in short order, all the processing threads will get the pill and commit suicide.
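Putting the pool queue, the processing queue and the poison-pill shutdown together, a self-contained C++11 sketch (names like SomeStruct and run_pipeline are illustrative, not from any particular library):

```cpp
#include <atomic>
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>
#include <vector>

struct SomeStruct {
    char data[4096];        // the raw data chunk
    std::size_t length = 0; // bytes actually used
};

// Minimal blocking FIFO built on std::mutex / std::condition_variable.
template <typename T>
class BlockingQueue {
public:
    void push(T item) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(item)); }
        cv_.notify_one();
    }
    T pop() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        T item = std::move(q_.front());
        q_.pop();
        return item;
    }
private:
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
};

// Circulate nChunks chunks through a fixed pool of poolSize buffers;
// returns how many chunks the workers processed.
int run_pipeline(int poolSize, int nWorkers, int nChunks) {
    BlockingQueue<SomeStruct*> pool, work;
    std::vector<SomeStruct> storage(poolSize);
    for (auto& s : storage) pool.push(&s);   // fill the pool queue at startup

    std::atomic<int> processed{0};

    std::vector<std::thread> workers;
    for (int i = 0; i < nWorkers; ++i) {
        workers.emplace_back([&] {
            for (;;) {
                SomeStruct* s = work.pop();
                if (s == nullptr) {          // poison pill: pass it on, exit
                    work.push(nullptr);
                    return;
                }
                ++processed;                 // stand-in for real processing
                pool.push(s);                // recycle the buffer to the pool
            }
        });
    }

    // Reading-thread role: if it gets ahead of the workers, the pool
    // empties and pool.pop() blocks - natural flow control.
    for (int i = 0; i < nChunks; ++i) {
        SomeStruct* s = pool.pop();
        s->length = 0;                       // stand-in for loading a chunk
        work.push(s);
    }
    work.push(nullptr);                      // one pill shuts down all workers
    for (auto& t : workers) t.join();
    return processed.load();
}
```

Since the queue is FIFO, the single pill is guaranteed to be dequeued after every real chunk, and because each worker pushes it back before exiting, one pill eventually reaches all the workers.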
Upvotes: 1
Reputation: 62439
Sure, a blocking queue is perfect for producer-consumer patterns. If you want something portable, try Intel TBB, specifically tbb::concurrent_bounded_queue.
Upvotes: 1