Gandalf

Reputation: 2348

Does an implementation like SynchronousQueue with capacity exist?

In a Java application that reads items from a Socket connection, I need (for reasons not outlined further here) the

  1. input items to be processed by a single thread so that their order is preserved.
  2. input items to be buffered before processing so that new items can be read from the socket while others are still being processed.
  3. the reading thread to be blocked as long as the buffer is full

So actually I would like to use a single worker thread to work on buffered items received from the socket, and a fitting queue as the buffer between the reader thread and the worker thread, which would be a kind of fair SynchronousQueue with a FIFO capacity.

The required queue should behave like an ArrayBlockingQueue or LinkedBlockingQueue with capacity while not full, and like a SynchronousQueue when full, meaning that (see the sketch after this list):

  1. put on the queue will only block the thread if the queue is full
  2. take on the queue will only block the thread if the queue is empty
  3. take on a full queue will give the caller the next FIFO element, unblock the next thread waiting in a put operation and insert that thread's element
  4. put on an empty queue will either hand the element over to a thread waiting in a poll operation or insert it
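
A minimal sketch of that behaviour, assuming a fair ArrayBlockingQueue as the buffer (the capacity of 16, the String items and the process() placeholder are only illustrative):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ReaderWorkerSketch {

        public static void main(String[] args) throws InterruptedException {
            // Fair bounded queue: put() blocks only while full, take() only while empty.
            BlockingQueue<String> buffer = new ArrayBlockingQueue<>(16, true);

            // Single worker thread, so the order of the buffered items is preserved.
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        String item = buffer.take(); // blocks only if the buffer is empty
                        process(item);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // runs until interrupted
                }
            });
            worker.start();

            // Reader side (here simulated by the main thread): put() blocks only
            // while the buffer is full, so reading pauses exactly then.
            for (int i = 0; i < 100; i++) {
                buffer.put("item-" + i);
            }
        }

        private static void process(String item) {
            System.out.println("processing " + item); // placeholder for the real work
        }
    }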

Is there any known implementation like that, or do I have to roll my own?

Upvotes: 0

Views: 285

Answers (1)

Martin James

Reputation: 24857

One answer that is not just an attempt to steal points from those posters who suggested an ArrayBlockingQueue:

Two ArrayBlockingQueues: one to act as a 'pool queue', filled with buffer objects at startup; the other for the processing thread(s) to wait on for work.

The socket thread must get a buffer from the pool before loading it with data and queueing it to the processing threads. The processing threads, after handling the data, must eventually return the 'used' objects back to the pool. If the pool empties, the socket thread blocks on it until some buffers are returned.

This provides the same flow-control as a bounded processing-queue, but with the added advantage that GC on the buffers is avoided.
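
A rough sketch of that two-queue arrangement, assuming byte[] buffers; the pool size of 16, the buffer size and the readerLoop/workerLoop methods are purely illustrative:

    import java.io.InputStream;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class BufferPoolSketch {

        static final int POOL_SIZE = 16;      // assumed pool size
        static final int BUFFER_BYTES = 4096; // assumed buffer size

        // Pool queue, pre-filled with reusable buffers at startup.
        static final BlockingQueue<byte[]> pool = new ArrayBlockingQueue<>(POOL_SIZE);
        // Work queue the processing thread(s) wait on.
        static final BlockingQueue<byte[]> work = new ArrayBlockingQueue<>(POOL_SIZE);

        static {
            for (int i = 0; i < POOL_SIZE; i++) {
                pool.add(new byte[BUFFER_BYTES]);
            }
        }

        // Socket thread: get a buffer from the pool (blocking if the pool is empty),
        // fill it from the socket, then queue it to the processing side.
        static void readerLoop(InputStream in) throws Exception {
            while (true) {
                byte[] buf = pool.take();
                int n = in.read(buf);      // a real version would also pass 'n' along
                if (n < 0) break;
                work.put(buf);
            }
        }

        // Processing thread: handle the data, then return the 'used' buffer to the pool.
        static void workerLoop() throws InterruptedException {
            while (true) {
                byte[] buf = work.take();
                // ... process buf ...
                pool.put(buf);
            }
        }
    }

The buffers never become garbage; they just circulate between the two queues, which is where the GC advantage comes from.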

Upvotes: 1
