Reputation: 25
The parent process launches a few tens of threads that receive data ( up to a few KB, at 10 requests per second ), which has to be collected into a list in the parent process. What is the recommended way to achieve this that is efficient, asynchronous, non-blocking, and simple to implement with the least overhead?
The ZeroMQ guide recommends using a PAIR socket archetype for coordinating threads, but how does that scale to a few hundred threads?
Upvotes: 1
Views: 258
Reputation: 1
Even if some marketing babble promises all of that at once, do not take it for granted.
Efficient usually means complex resource handling.
Simplest to implement usually fights against overheads & efficient resource handling.
From Sir Henry FORD's point of view, a component that is not present in one's design simply cannot fail.
In this very sense, we strive here not to programmatically control anything beyond the simplest possible use of the elementary components of an otherwise smart ZeroMQ library:
Scenario SIMPLEST:
Rule a)
The central HQ-unit ( be it just a thread or a fully isolated process ) .bind()-s its receiving port ( pre-set as a ZMQ_SUB behaviour-archetype ) and subscribes its topic-filter to "everything" via .setsockopt( ZMQ_SUBSCRIBE, "" ), before it spawns the first DAQ-{ thread | process }, further ref'd as a DAQ-unit. A minimal sketch follows.
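A minimal sketch of Rule a), assuming pyzmq in Python; the HQ_URL endpoint is an illustrative choice, any transport class ( tcp://, ipc://, inproc:// ) would do:

    import zmq

    HQ_URL = "tcp://127.0.0.1:5555"           # hypothetical endpoint, pick your own

    ctx = zmq.Context()
    hq_socket = ctx.socket(zmq.SUB)           # ZMQ_SUB behaviour-archetype
    hq_socket.bind(HQ_URL)                    # the HQ-unit .bind()-s its receiving port
    hq_socket.setsockopt(zmq.SUBSCRIBE, b"")  # topic-filter set to "everything"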
Rule b)
Each DAQ-unit simply .connect()-s to the already setup & ready port on the HQ-unit, using a unit-local socket, pre-set as a ZMQ_PUB behaviour-archetype. A minimal sketch follows.
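A minimal sketch of Rule b), again assuming pyzmq; HQ_URL is the same illustrative endpoint as above:

    import zmq

    HQ_URL = "tcp://127.0.0.1:5555"   # must match the HQ-unit's .bind() endpoint

    ctx = zmq.Context.instance()      # threads may safely share one Context
    daq_socket = ctx.socket(zmq.PUB)  # ZMQ_PUB behaviour-archetype
    daq_socket.connect(HQ_URL)        # the DAQ-unit .connect()-s to the ready port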
Rule c)
Any DAQ-unit simply .send( ..., ZMQ_NOBLOCK )-s its local data as needed via a message, which is delivered in the background by the ZeroMQ layer into the hands of the HQ-unit, where it sits queued & available for further processing at the HQ-unit's will. A minimal sketch follows.
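A minimal sketch of Rule c), reusing the daq_socket from the Rule b) sketch; note that a PUB socket never blocks on .send() anyway, it silently drops messages once a subscriber's high-water mark is reached:

    # reusing daq_socket from the Rule b) sketch
    daq_socket.send(b"some local data", zmq.NOBLOCK)  # fire & forget; ZeroMQ delivers in the background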
Rule d)
The HQ-unit regularly loops and .poll( 1 )-s for the presence of a collected message from any DAQ-unit, plus .recv( ZMQ_NOBLOCK )-s in case any such message was present. A minimal sketch follows.
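A minimal sketch of Rule d), reusing the hq_socket from the Rule a) sketch; the 1 ms .poll() keeps the loop non-blocking, so the HQ-unit stays free for any other duties:

    # reusing hq_socket from the Rule a) sketch
    collected = []                             # the list in the parent process

    while True:
        if hq_socket.poll(1):                  # .poll( 1 ) ~ wait at most 1 ms
            msg = hq_socket.recv(zmq.NOBLOCK)  # a message is ready, recv cannot block
            collected.append(msg)
        # ... any other HQ-unit work may happen here ...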
That's all.
Asynchronous: yes.
Non-blocking: yes.
Simplest: yes.
Scalable: yes. Almost linearly, until I/O-bound ( still some tweaking possible to handle stressed-I/O operations ) as a bonus point...
Upvotes: 1