Imran Niloy

Reputation: 69

Waiting on multiple semaphores or multiple message queues

In most implementations of UNIX, processes can block on more than one event. That is, rather than waiting on a single semaphore or receiving from a single message queue, the process can wait on multiple semaphores or multiple message queues. What advantage does such a capability offer? How would you implement this?

Now, before everyone starts to ask if this is my school assignment, it is not. It's a recommended exam question for a class I'm taking.

My thoughts on it - something like this:

typedef struct QueueElement {
    int Sender;
    int Receiver;
    char Message[MAX_MSG_SIZE];
    int MessageSize;
} QueueElement;

QueueElement MessageQueue[NUM_OF_PROCESSES];



/* Send Message to receiving process queue number */
int SendMsg(int SenderId, int ReceiverId, char* Msg, int Size)
{
    /* Create Queue Element to Enqueue */
    QueueElement El = {
        .Sender = SenderId,
        .Receiver = ReceiverId,
        .MessageSize = Size
    };
    memcpy(El.Message, Msg, Size);

    /* Enqueue element in queue specified by Receiver's Id */
    Enqueue(&(MessageQueue[ReceiverId]), El);
    return TRUE;
}



/* Get messages for process defined by ReceiverId, from any queue */
int RecvMsg(int* SenderId, int ReceiverId, char* Msg, int* Size)
{
    int i;
    for (i = NUM_OF_PROCESSES - 1; i >= 0; i--)
    {
        /* If a message is available for receiving process, Dequeue element from queue */
        if (MessageQueue[i].Receiver == ReceiverId)
        {
            QueueElement El = Dequeue(&(MessageQueue[i]));
            *SenderId = El.Sender;
            memcpy(Msg, El.Message, El.MessageSize);
            *Size = El.MessageSize;

            return TRUE;
        }
    }

    return FALSE;
}

Now, consider 4 processes running in parallel, continuously sending messages to message queues 1, 2, 3, 4. Say all the messages are being sent to process 2. That means process 2 must check for messages in all 4 queues (1, 2, 3, 4). But since the scan always starts from the highest index, if new messages are continuously being added, only messages in queue 4 will get processed. How does one avoid starving the other message queues?

Is there a better way to handle this? How do modern architectures handle this? The issue with this solution is that if messages keep getting enqueued to the highest-priority queue (the one the loop scans first), messages in the lower-priority queues will never get processed.
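
One common way around this starvation is round-robin scanning: each receive starts its scan one queue past the last queue it served, so a busy queue cannot monopolize the receiver. Below is a minimal sketch of the idea; it assumes one simple ring buffer per queue, and all names (Queues, NextScan, RecvAny, etc.) are illustrative rather than taken from the code above:

    #include <assert.h>
    #include <string.h>

    #define NUM_OF_PROCESSES 4
    #define QUEUE_CAPACITY   8
    #define MAX_MSG_SIZE     64

    typedef struct {
        int  Sender;
        char Message[MAX_MSG_SIZE];
        int  MessageSize;
    } Msg;

    /* One ring buffer per receiving process (illustrative layout). */
    typedef struct {
        Msg buf[QUEUE_CAPACITY];
        int head, tail, count;
    } MsgQueue;

    static MsgQueue Queues[NUM_OF_PROCESSES];
    static int NextScan = 0;   /* where the next receive starts scanning */

    static int Enqueue(MsgQueue *q, const Msg *m)
    {
        if (q->count == QUEUE_CAPACITY) return 0;      /* queue full */
        q->buf[q->tail] = *m;
        q->tail = (q->tail + 1) % QUEUE_CAPACITY;
        q->count++;
        return 1;
    }

    /* Scan all queues starting just past the last one served, so a
     * continuously refilled queue cannot starve the others. */
    static int RecvAny(Msg *out)
    {
        for (int n = 0; n < NUM_OF_PROCESSES; n++) {
            int i = (NextScan + n) % NUM_OF_PROCESSES;
            MsgQueue *q = &Queues[i];
            if (q->count > 0) {
                *out = q->buf[q->head];
                q->head = (q->head + 1) % QUEUE_CAPACITY;
                q->count--;
                NextScan = (i + 1) % NUM_OF_PROCESSES; /* rotate start */
                return i;                              /* queue served */
            }
        }
        return -1;                                     /* nothing pending */
    }

Even if the highest-indexed queue is refilled after every receive, the rotating start index guarantees every other non-empty queue gets visited before that queue is served again.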

Upvotes: 0

Views: 3045

Answers (3)

anand

Reputation: 163

When message queues with different priorities are used, the reader keeps serving the high-priority queue and, once it is exhausted, moves on to the lower-priority ones. If priorities are not used, threads can serve the queues in parallel, but that does not guarantee the order in which messages appeared across the different queues.
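
The strict-priority policy described here can be reduced to a one-line scan. A minimal sketch (the function name and array layout are illustrative, not from the answer):

    #include <assert.h>

    #define NUM_QUEUES 4

    /* counts[i] = pending messages in queue i; index 0 is highest priority.
     * Returns the queue to serve next, draining high priority first. */
    static int serve_next(const int counts[NUM_QUEUES])
    {
        for (int i = 0; i < NUM_QUEUES; i++)
            if (counts[i] > 0)
                return i;
        return -1;         /* all queues empty */
    }

Note this is exactly the policy that can starve lower-priority queues if the high-priority queue never empties, which is the trade-off the question raises.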

Upvotes: 0

John Kugelman

Reputation: 361957

I don't think your code answers the question the way the asker intends. To expand on what the question is asking:

Normally you can call something like read() to read from a file descriptor and it'll block until some data comes in. What if you have multiple file descriptors, though, and you want to block on all of them simultaneously until data comes in on any of them? You can't do that with a simple read() which only takes a single file descriptor.

  • What kind of API would you need? Show a function or set of functions that would let someone read from multiple sources at the same time.

Hint: select() does exactly this. Good keywords to google include "select", "multiplexing", and "non-blocking I/O".
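
To make the hint concrete, here is a minimal select() sketch: it blocks until either of two file descriptors becomes readable and returns the ready one. Error handling is trimmed for brevity, and the helper name is illustrative:

    #include <sys/select.h>
    #include <unistd.h>

    /* Block until fd_a or fd_b is readable; return the ready one. */
    static int wait_for_either(int fd_a, int fd_b)
    {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(fd_a, &readfds);
        FD_SET(fd_b, &readfds);

        int maxfd = (fd_a > fd_b) ? fd_a : fd_b;
        if (select(maxfd + 1, &readfds, NULL, NULL, NULL) < 0)
            return -1;

        return FD_ISSET(fd_a, &readfds) ? fd_a : fd_b;
    }

The same pattern extends to any number of descriptors, which is exactly the "wait on any of several sources" capability the question asks about.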

Upvotes: 1

Brendan

Reputation: 37222

Is there a better way to handle this?

Yes. The main problem is that your code continually wastes CPU time polling because it doesn't wait at all.

The better way is to put it in the kernel, such that:

  • when a task calls RecvMsg(), the kernel can do if no messages in queue { tell scheduler this task should block until a message is received } atomically (without race conditions)

  • when anything calls SendMsg(), the kernel can do if receiving task is blocked waiting for a message { tell scheduler the receiving task should be unblocked } atomically (without race conditions)
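
The block/wake pairing in the two bullets above can be sketched in user space with a mutex and condition variable, where the condition variable plays the scheduler's role. This is a simplified analogue, not the in-kernel implementation being described; all names are illustrative:

    #include <pthread.h>

    #define CAPACITY 8

    /* A blocking queue: receivers sleep until a sender wakes them. */
    typedef struct {
        int             buf[CAPACITY];
        int             head, tail, count;
        pthread_mutex_t lock;
        pthread_cond_t  not_empty;
    } BlockingQueue;

    static void bq_init(BlockingQueue *q)
    {
        q->head = q->tail = q->count = 0;
        pthread_mutex_init(&q->lock, NULL);
        pthread_cond_init(&q->not_empty, NULL);
    }

    static void bq_send(BlockingQueue *q, int msg)
    {
        pthread_mutex_lock(&q->lock);
        q->buf[q->tail] = msg;
        q->tail = (q->tail + 1) % CAPACITY;
        q->count++;
        pthread_cond_signal(&q->not_empty);   /* unblock a waiting receiver */
        pthread_mutex_unlock(&q->lock);
    }

    static int bq_recv(BlockingQueue *q)
    {
        pthread_mutex_lock(&q->lock);
        while (q->count == 0)                 /* atomic check-then-block */
            pthread_cond_wait(&q->not_empty, &q->lock);
        int msg = q->buf[q->head];
        q->head = (q->head + 1) % CAPACITY;
        q->count--;
        pthread_mutex_unlock(&q->lock);
        return msg;
    }

The key property is in bq_recv: the emptiness check and the decision to sleep happen under the same lock, which is the race-free "check then block" the answer describes. No CPU time is burned polling while the queue is empty.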

The next problem is that it only waits for messages. What if you want to wait for a file to be (asynchronously) opened, or wait for a signal, or wait for time to pass, or wait for a mutex to be acquired? For this problem there are two possible solutions:

  • have a horrendous mess (e.g. epoll_pwait()) with different arguments for different types of things (that still can't be used for some things - e.g. waiting for a mutex).

  • implement everything on top of messages so that you only ever have to wait for a message.

For the second solution, you mostly end up replacing traditional procedural programming with the actor model.

Upvotes: 2
