Reputation: 3164
Assume I have a node (process, thread, whatever) with a ZeroMQ interface, let's say a REP socket. This means I have an infinite main loop, which sleeps in the zmq_recv or zmq_poll function.
Now that node should also receive data from another asynchronous event. Imagine for example a keyboard button-press event or toggling a GPIO pin or a timer expiration. The execution must also sleep while waiting for these events.
How should the main loop be written, such that it wakes up on either type of event, namely the reception of a message and the occurrence of the asynchronous event?
I can think of two solutions:
First, a polling loop, where both events are checked in a non-blocking way, and I run the loop every couple of milliseconds. This seems non-ideal in terms of processing load.
Second, I move the blocking code for both events into separate threads. In the main loop, I instead sleep on a semaphore (or condition variable), which is posted on the occurrence of either event. That's the way I would go in a traditional application, but the ZeroMQ Guide written by Pieter Hintjens is very explicit about not using semaphores:
Stay away from the classic concurrency mechanisms like mutexes, critical sections, semaphores, etc. These are an anti-pattern in ZeroMQ applications.
So what to do? What's the best practice here?
Upvotes: 3
Views: 545
Reputation: 8404
Depending on your OS, most events in the system are generated through some file descriptor becoming ready to read. This is generally how it works on Linux, the Unixes, etc. For instance, keyboard input comes in through STDIN. And of course, file descriptors of any description can be included in a ZMQ poll.
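As a minimal sketch of this idea (assuming pyzmq is installed; the `inproc://demo` endpoint is made up for illustration), a pipe stands in for any fd-based event source such as STDIN, a timerfd, or a GPIO character device, and is polled together with a ZMQ socket in one call:

```python
# Poll a raw file descriptor alongside a ZMQ socket in a single Poller.
import os
import zmq

ctx = zmq.Context()
rep = ctx.socket(zmq.REP)
rep.bind("inproc://demo")        # hypothetical endpoint, for illustration

r_fd, w_fd = os.pipe()           # stands in for e.g. sys.stdin.fileno()

poller = zmq.Poller()
poller.register(rep, zmq.POLLIN)
poller.register(r_fd, zmq.POLLIN)  # raw fds can be registered too

os.write(w_fd, b"x")             # simulate the external event firing

# Wakes up on either a ZMQ message or the fd becoming readable.
events = dict(poller.poll(timeout=1000))
data = None
if events.get(r_fd) == zmq.POLLIN:
    data = os.read(r_fd, 1)      # consume the event

os.close(r_fd)
os.close(w_fd)
rep.close(0)
ctx.term()
```

Here only the fd fires, so `events` maps the pipe's read end to `POLLIN` and the REP socket does not appear at all.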
Where an event isn't raised through a file descriptor that can be used in a ZMQ poll (e.g. a serial port on Windows becoming ready to read), I generally use a thread to translate the event into a message sent through a ZMQ socket. It works well enough. It becomes non-portable, but that's unavoidable.
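A sketch of that translation thread (pyzmq assumed; the `inproc://events` endpoint name and the simulated event are made up): the helper thread blocks on the OS-specific event, then forwards it as a ZMQ message, so the main thread only ever waits in one poll/recv call.

```python
# A thread translates a non-fd event into a message on an inproc socket.
import threading
import zmq

ctx = zmq.Context.instance()

def event_translator():
    push = ctx.socket(zmq.PUSH)
    push.connect("inproc://events")   # hypothetical endpoint name
    # ... block here on the OS-specific event (WaitForSingleObject,
    # a serial-port read, an ISR-backed queue, ...); simulated below:
    push.send(b"serial-port-ready")
    push.close()

pull = ctx.socket(zmq.PULL)
pull.bind("inproc://events")          # bind before the thread connects

t = threading.Thread(target=event_translator)
t.start()

msg = pull.recv()    # a real main loop would zmq_poll on this socket
t.join()
pull.close(0)
```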
GPIO can be harder. If it's not backed by some driver that's got an ISR integrated into the OS's driver stack, then you're going to have to poll it.
Upvotes: 2
Reputation: 776
A 0MQ-style solution would be to use a second socket to notify the main thread of the other async events. Then pass both sockets to zmq_poll, and your main thread will be notified when a message is available on either socket.
For example, in addition to the REP socket, your main thread could also create a PULL socket. It would pass both to zmq_poll and then block waiting for messages. When async events are triggered, separate threads can connect a PUSH socket to the PULL socket and pass event data to the waiting main thread.
See the zmq_poll docs for details on how to poll multiple sockets.
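The pattern above can be sketched as follows (pyzmq assumed; the `inproc://` endpoint names and message payloads are made up). The main thread polls a REP socket and a PULL socket together, while one thread plays the request client and another pushes an async notification:

```python
# Main loop polling a REP socket and a PULL socket at the same time.
import threading
import zmq

ctx = zmq.Context.instance()

rep = ctx.socket(zmq.REP)
rep.bind("inproc://requests")        # hypothetical endpoints
pull = ctx.socket(zmq.PULL)
pull.bind("inproc://async-events")

def client():
    req = ctx.socket(zmq.REQ)
    req.connect("inproc://requests")
    req.send(b"hello")
    req.recv()                       # wait for the reply
    req.close()

def async_event():
    push = ctx.socket(zmq.PUSH)
    push.connect("inproc://async-events")
    push.send(b"button-pressed")     # e.g. a keyboard or GPIO event
    push.close()

t1 = threading.Thread(target=client)
t2 = threading.Thread(target=async_event)
t1.start()
t2.start()

seen = []
poller = zmq.Poller()
poller.register(rep, zmq.POLLIN)
poller.register(pull, zmq.POLLIN)
while len(seen) < 2:                 # real code would loop forever
    for sock, _event in poller.poll():
        if sock is rep:
            seen.append(rep.recv())
            rep.send(b"ok")          # REP must reply before next recv
        elif sock is pull:
            seen.append(pull.recv())

t1.join()
t2.join()
rep.close(0)
pull.close(0)
```

Either event wakes the same poll call, so the main loop needs no semaphores and no busy-waiting.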
Upvotes: 0
Reputation: 1
How should the main loop be written, such that it wakes up on either type of event, namely the reception of a message and the occurrence of the asynchronous event?
You have almost answered yourself. Concurrent event loops are an issue. ZeroMQ permits one to inspect its message queue(s) either asynchronously (non-blocking) or synchronously (blocking).
Given that your control logic requires some additional time-related decisions, it is an awfully bad idea for any distributed computing to ever move into a blocking state (even if popular posts and "help" files present almost all trivialised code examples with blocking-state SLOCs as "explanations"). No. No. No.
Never lose your control role. The first blocking state removes your code from the domain of its own control. No. No. No.
What's the best practice here?
Even if your control strategy is complex, fine-grained, whatever, it is best to use explicitly controlled timing sections: first detect, using a { non-blocking | controlled-duration-blocking } call to .poll(), the explicitly { ACK'd | NACK'd } presence of any message arrival; next, your processing strategy may decide what to do with the detected composition of events, based on your priorities, post-process durations, resource-availability state and/or the respective relative performance of handling each detected event's post-processing workload, still within the control loop's hard time limit.
This is how { soft | hard }-real-time systems work.
ZeroMQ provides a few tricks for doing this both right and smart: Zero-Copy mechanics, the performance/latency tweaking options of each respective Context() instance's embedded pool of I/O threads, and the almost-zero-latency inproc:// transport class, where feasible, for avoiding old-school concurrency bells and whistles. ( Martin SUSTRIK has posted a lot about performance and more details on this. )
If your hardware resources seem too weak to handle a fully fledged network of fat agents, you may still compose your system as a network of lightweight agents, using zero-I/O-thread Context()-s communicating over a grid of inproc:// links with the system's War-Gaming Control Room, where situational awareness is collected and maintained, where decisions compliant with the global control strategy are made, and from where the respective strategic, tactical and operational commands get issued, coordinated for distribution and marshalled down the grid for their final execution inside lightweight, task-specialised, almost-autonomous agents.
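The zero-I/O-thread detail can be sketched like this (pyzmq assumed; the endpoint name is made up): inproc:// is handled entirely in-process, so a context created with no I/O threads still carries messages between lightweight agents.

```python
# An inproc-only link works on a Context with zero I/O threads.
import zmq

ctx = zmq.Context(io_threads=0)   # no I/O threads: inproc-only context

a = ctx.socket(zmq.PAIR)
a.bind("inproc://agent-link")     # made-up endpoint name
b = ctx.socket(zmq.PAIR)
b.connect("inproc://agent-link")

a.send(b"command")                # e.g. a command marshalled down the grid
received = b.recv()

a.close()
b.close()
ctx.term()
```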
Upvotes: 1