Reputation: 2735
I want a C example of a multithreaded ZMQ client. I am already using a multithreaded server, but I have a requirement that each client sends requests from multiple threads, not from a single thread.
I had a look at:
http://zguide.zeromq.org/c:asyncsrv
https://github.com/booksbyus/zguide/blob/master/examples/C/
but I didn't see a client example that uses multiple threads with a ZMQ_DEALER socket talking to a multithreaded server ( a ZMQ_ROUTER socket ).
So I am looking for the DEALER and ROUTER pattern. I want the client ( the DEALER ) to be multithreaded: is plain pthread_create enough? Currently my client is similar to the hello world C example:
zctx_t *ctx = zctx_new ();
void *client = zsocket_new (ctx, ZMQ_REQ);
assert (client);
zsocket_connect (client, config.SERVER_ENDPOINT);
// SKIPPED: I get the data (hm->body.p) to be sent through zmq...
// We send a request, then we wait for a reply
zstr_sendf (client, "%.*s", (int) hm->body.len, hm->body.p);
char *reply = zstr_recv (client);
if (reply) {
    zsys_info ("server replied (%s)", reply);
    free (reply);
}
Please help me make my C client a multithreaded ZMQ client.
Update 1 (More details):
There is a third-party application ( 3pa ) that I need to integrate with. The 3pa sends on average ~ 4 HTTP POST requests per second ( each request's size ~ 60 KB ) to an HTTP listener ( I am using mongoose ). The 3pa ONLY sends the next HTTP POST after it receives an "HTTP/1.1 200 OK\r\n" reply OR after a timeout for the sent request. So the 3pa seems to be single threaded. If I do not reply 200 OK quickly, the 3pa will keep queuing in memory until it crashes.
After receiving the data from the 3pa, I need to send it over a 3G-mobile-packet network to the backend server, using ZMQ. Due to the 3G-mobile-packet network latency and bandwidth, each request requires about 1-3 seconds to be sent, and about 1-3 seconds for a ZMQ reply new ZFrame("OK") to be initiated, transported and delivered ( received on the 3pa side ). So, if we do the elementary calculus, the 3pa queue capacity will fill up very fast due to the 3G-mobile-packet network performance.
Upvotes: 1
Views: 1139
Reputation: 1
From the few details provided in the OP / Update 1, let's focus on the root cause of the problem before speaking about increasing the number of threads.
Given that the 3pa has been introduced as a black box with a blocking design ( it waits indefinitely for a delivery of 200 OK, with an escape only on a timeout event ), the best first step would be to operate that 3pa as-is, but hosted/colocated at some better NIX-peering/colocation centre rather than on the current "peripheral" last-mile part of the latency-expensive 3G-mobile-packet radio access network. That would both avoid the capacity-driven packet re-transmits that occur whenever prioritised call traffic preempts 3G channels and congests the RAN, and it would drastically reduce the round-trip latency, currently paid at levels somewhere above 2~6+ seconds.
The next possible step, if relocating the 3pa into NIX proximity were not on its own enough to save the game, would be to create a single-purpose HTTP proxy that receives those 60 KB+ HTTP requests ( which are then relayed to their actual target ) and immediately injects a 200 OK response downstream, into the hands of the 3pa, so as to unlock it from its internal blocking loop and allow it to send the next de-queued HTTP POST.
Dirty? Yes and no. It works around the weaknesses of a poor 3pa design and allows it to operate in the given circumstances.
Beware of REQ/REP patterns in real scenarios. Whereas the ZeroMQ Scalable Formal Communication Patterns building blocks are smart examples of multi-party communication schemes, do not expect them to be immune to loss of signal, lost messages and other real-world issues. The REQ/REP pattern is able to deadlock itself in a de-railed internal FSA state, from which there are no ( literally zero ) tools to save / resuscitate the intertwined FSA state machines out of an unsalvageable mutual deadlock.
Use some other Scalable Formal Communication Pattern in such a design and be ready to provide additional means of signalling for the case when something goes wrong and states need to be salvaged. ZeroMQ has a lot of tools for doing that smarter than just relying on a single trivial archetype, closing one's eyes and believing in a fiction that everything will work error-free forever ( that is not the reality we live our lives in, right :o) ).
Real-world clients operate many zmq-socket instances: some for signalling, some for transport services, some for remote access to a client's internal CLI interface, others for a distributed log collector, some for self-diagnostics and hosting-system health checks. Even simple, principally non-blocking client designs may contain some small thousands of SLOCs, so do not expect any such solution to consist of just a few SLOCs copied from a library's wiki pages or from some sparkling blog posts.
One might also like to read other ZeroMQ posts, including a link to the fabulous Pieter HINTJENS' book, which has been my must-read recommendation ever since I started to love the ZeroMQ way of thinking and its design priorities. Worth the time and effort, believe me or not: distributed processing has other, more important rules than just asking for more threads for an otherwise poor and unfeasible design.
Once the poor design has been changed into a better one, even a single-threaded, well-designed application can move xKB messages and keep the end-to-end latencies under a few tens of milliseconds, even with a remote server doing complex AI/ML processing of the "fat" data payloads.
Worth a try, isn't it?
Upvotes: 1