Pavlo

Reputation: 1654

ZeroMQ request and multiple async replies to this request

I have something like a remote machine that performs heavy computations and a client machine that sends tasks to it. The output results are very big (megabytes to gigabytes) and arrive in chunks over a long time period. So it looks like this: the client sends a task and then needs to receive these chunks as they become available, since they are already useful on their own (one request - multiple responses). How do I realize this pattern in ZeroMQ?

Upvotes: 1

Views: 1431

Answers (2)

user3666197

Reputation: 1

Maybe I have read the problem definition above wrong, but as it stands, it seems to me that the main concern is to accommodate a message flow between a pair of hosts (not a Broker fan-out to 1+ Workers using the classical DEALER/ROUTER Scalable Formal Communication Pattern), where the key concern is how to connect a client-machine (sending one big-computing-task request and "waiting" for a flow of partial results) to an HPC-machine (receiving a TaskJOB, processing it and delivering back a flow of messages that are non-synchronised and unconstrained in time and size).

For such a 1:1 case, with 1-Job:many-partialJobResponses, the setup may benefit from a joint messaging and signalling infrastructure with several actual sockets under the hood, as sketched below:

Signalling:

clientPUSH   |-> hpcPULL    // new TaskJOB|-> |
clientPULL <-|   hpcPUSH    //              <-|ACK_BEGIN
clientPULL <-|   hpcPUSH    //              <-|KEEPALIVE_WATCHDOG + PROGRESS_%
clientPULL <-|   hpcPUSH    //              <-|KEEPALIVE_WATCHDOG + PROGRESS_%
...                         //                |...
clientPULL <-|   hpcPUSH    //              <-|KEEPALIVE_WATCHDOG + PROGRESS_%
clientPULL <-|   hpcPUSH    //              <-|KEEPALIVE_WATCHDOG + PROGRESS_%
clientPULL <-|   hpcPUSH    //              <-|ACK_FINISH         + LAST_PAYLOAD#
clientPUSH   |-> hpcPULL    // new TaskJOB|-> |
clientPULL <-|   hpcPUSH    //              <-|ACK_BEGIN
...                         //                |...
clientPULL <-|   hpcPUSH    //              <-|ACK_FINISH         + LAST_PAYLOAD#
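
A minimal pyzmq sketch of the client side of such a signalling pair (the endpoints, port numbers and JSON message tags are my own assumptions for illustration, nothing prescribed by ZeroMQ); the HPC side would simply bind a matching PULL/PUSH pair:

import zmq

ctx = zmq.Context.instance()

# Client side of the signalling pair (endpoints are placeholders).
task_push = ctx.socket(zmq.PUSH)             # clientPUSH |-> hpcPULL
task_push.connect("tcp://hpc-host:5555")

status_pull = ctx.socket(zmq.PULL)           # clientPULL <-| hpcPUSH
status_pull.connect("tcp://hpc-host:5556")

# Submit a new TaskJOB, then watch the status stream.
task_push.send_json({"tag": "TaskJOB", "job_id": 1})

last_payload = None
while True:
    status = status_pull.recv_json()         # ACK_BEGIN / KEEPALIVE_WATCHDOG + PROGRESS_% / ACK_FINISH
    if status["tag"] == "ACK_FINISH":
        last_payload = status["LAST_PAYLOAD"]   # how many FAT_RESULT fragments to expect
        break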

Messaging:

clientRECV <-|   hpcXMIT    //              <-|FRACTION_OF_A_FAT_RESULT_PAYLOAD#i
clientACK    |-> hpcACK     // #i POSACK'd|-> |
clientRECV <-|   hpcXMIT    //              <-|FRACTION_OF_A_FAT_RESULT_PAYLOAD#j
clientRECV <-|   hpcXMIT    //              <-|FRACTION_OF_A_FAT_RESULT_PAYLOAD#k
clientACK    |-> hpcACK     // #k POSACK'd|-> |
clientACK    |-> hpcACK     // #j   NACK'd|-> |
clientRECV <-|   hpcXMIT    //              <-|FRACTION_OF_A_FAT_RESULT_PAYLOAD#j
clientACK    |-> hpcACK     // #j POSACK'd|-> |
clientACK    |-> hpcACK     // #u   NACK'd|-> |               // after ACK_FINISH
clientACK    |-> hpcACK     // #v   NACK'd|-> |               // after ACK_FINISH
clientACK    |-> hpcACK     // #w   NACK'd|-> |               // after ACK_FINISH
clientACK    |-> hpcACK     // #x   NACK'd|-> |               // after ACK_FINISH
clientRECV <-|   hpcXMIT    //              <-|FRACTION_OF_A_FAT_RESULT_PAYLOAD#x
clientACK    |-> hpcACK     // #x POSACK'd|-> |
clientRECV <-|   hpcXMIT    //              <-|FRACTION_OF_A_FAT_RESULT_PAYLOAD#u
clientACK    |-> hpcACK     // #u POSACK'd|-> |
...                         //                | ...      
clientRECV <-|   hpcXMIT    //              <-|FRACTION_OF_A_FAT_RESULT_PAYLOAD#w
clientACK    |-> hpcACK     // #w POSACK'd|-> |

Again, this uses a pair of PUSH/PULL sockets as (internally) state-less messaging automata, while allowing one to create one's own higher-level Finite-State-Automaton for a self-healing messaging flow: it handles the controlled fragmentation of the FAT_RESULT into easier-to-swallow payloads (remember one of the ZeroMQ maxims: rather use a Zero-Guarantee than build an un-scalable mastodon, which the evolutionary nature of the wild ecosystem will kill anyway) and also provides some level of reactive re-transmits on demand.
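
Continuing the client-side sketch above, the messaging pair might look like this (the framing and fragment_is_intact() are hypothetical; a fuller FSM would also NACK fragments still missing after ACK_FINISH, as in the table):

# Client side of the messaging pair: pull FAT_RESULT fragments, POSACK/NACK each one.
frag_pull = ctx.socket(zmq.PULL)             # clientRECV <-| hpcXMIT
frag_pull.connect("tcp://hpc-host:5557")

ack_push = ctx.socket(zmq.PUSH)              # clientACK |-> hpcACK
ack_push.connect("tcp://hpc-host:5558")

received = {}
while len(received) < last_payload:          # LAST_PAYLOAD# was learned on the signalling channel
    header, body = frag_pull.recv_multipart()    # frame 0: fragment index, frame 1: payload bytes
    idx = int(header)
    if fragment_is_intact(body):             # hypothetical integrity check (e.g. a checksum)
        received[idx] = body
        ack_push.send_json({"idx": idx, "ack": "POSACK"})
    else:
        ack_push.send_json({"idx": idx, "ack": "NACK"})   # HPC side re-transmits fragment #idx later

fat_result = b"".join(received[i] for i in sorted(received))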

Even smarter multi-agent setups are not far from this sketch and can increase the processing throughput (e.g. a FAT_RESULT DataFlow Curator agent, separate from the HPC_MAIN, unloading the HPC platform's resources so that the next TaskJOB can start immediately, etc.).

Upvotes: 1

DontPanic

Reputation: 1367

You can use the asynchronous pattern (DEALER-ROUTER).

Look at this topic in the ZeroMQ guide: The Asynchronous Client/Server Pattern.

And at the examples in Java or C#.

But keep in mind that a ROUTER socket can drop your messages if its HWM (high-water mark) is reached.
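
A minimal pyzmq sketch of that pattern, one DEALER request answered by a stream of ROUTER replies (endpoints, the chunking and the DONE sentinel are assumptions for illustration; the two halves run in separate processes):

import zmq

ctx = zmq.Context.instance()

# --- server process: ROUTER streams many partial replies back to the requesting identity ---
router = ctx.socket(zmq.ROUTER)
router.bind("tcp://*:5570")

ident, request = router.recv_multipart()     # [ dealer identity | task description ]
for i in range(1000):                        # send each result chunk as soon as it is ready
    router.send_multipart([ident, b"chunk-%d" % i])
router.send_multipart([ident, b"DONE"])      # sentinel marking the last chunk

# --- client process: DEALER sends one request, then keeps reading chunks as they arrive ---
dealer = ctx.socket(zmq.DEALER)
dealer.connect("tcp://hpc-host:5570")
dealer.send(b"heavy task description")

while True:
    chunk = dealer.recv()
    if chunk == b"DONE":
        break
    handle_chunk(chunk)                      # hypothetical consumer of a partial result

With the HWM warning in mind, raise the sockets' SNDHWM/RCVHWM (or throttle with application-level acknowledgements) so a burst of chunks is not silently dropped by the ROUTER.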

Upvotes: 1
