yongrae

Reputation: 173

What is the overhead for creating multiple ZeroMQ sockets?

I am trying to build a point-to-point channel using a ZMQ_PAIR socket. But a ZMQ_PAIR socket only allows an exclusive connection, which means that if one peer (A) is connected to another peer (B), other peers can't connect to B until A disconnects from B.

So the first solution that comes to mind is to set up as many ZMQ_PAIR sockets as there are peers in the network and poll them to multiplex the events from the other peers. The problem with this is managing lots of sockets, which might incur non-negligible overhead.
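Roughly this design, as a minimal sketch (Python/pyzmq assumed; the peer count and endpoints are placeholders):

    import zmq

    N_PEERS = 4  # placeholder; the real network would have far more peers

    ctx = zmq.Context()
    poller = zmq.Poller()
    sockets = []

    # one exclusive ZMQ_PAIR socket per remote peer
    for i in range(N_PEERS):
        s = ctx.socket(zmq.PAIR)
        s.bind("tcp://127.0.0.1:%d" % (6000 + i))  # placeholder endpoints
        poller.register(s, zmq.POLLIN)
        sockets.append(s)

    # multiplex incoming events across all peer sockets
    events = dict(poller.poll(timeout=1000))
    for s in sockets:
        if s in events:
            msg = s.recv()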

But I don't have experimental data on the overhead of managing multiple sockets, such as the time to create 50,000 ZMQ_PAIR sockets or the time to poll 50,000 ZMQ_PAIR sockets. Of course, I could measure it myself, but I wonder whether there is any existing experiment conducted by researchers or network developers who have used ZeroMQ.

(I don't want a proxy or the REQ/REP pattern. Using a proxy incurs some performance degradation, and I want a pure peer-to-peer network. And the REQ/REP pattern is inherently synchronous communication, which is not what I need here.)

Upvotes: 1

Views: 644

Answers (1)

user3666197

Reputation: 1

Not an easy question to answer in general from a thought experiment.

Definitely benchmark before making a decision.

ZeroMQ has its own easy and smart clocking tool, a Stopwatch(). Each instance has .start() and .stop() methods, so your code can benchmark each critical section independently, down to [us] resolution, so you are ready to go.
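For instance, with the pyzmq binding (a minimal sketch; note that newer pyzmq releases may deprecate Stopwatch in favour of the standard time module):

    import zmq

    ctx = zmq.Context()
    clk = zmq.Stopwatch()

    clk.start()
    s = ctx.socket(zmq.PAIR)   # the critical section under test
    us = clk.stop()            # elapsed time in [us]

    print("socket instantiation took", us, "[us]")
    s.close()
    ctx.term()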

One may reasonably start by setting up and benchmarking just some 1, 10, 20, 50, 100, 200 instances first, and thus have reasonable grounds for extrapolating the observed behaviour, before attacking the 50k+ sized meshes from scratch.
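Such a scaling ladder could look like this (a sketch under assumptions: pyzmq, and inproc:// endpoints so that socket-management costs are isolated from network costs):

    import zmq

    ctx = zmq.Context()
    clk = zmq.Stopwatch()

    for n in (1, 10, 20, 50, 100, 200):
        clk.start()
        socks = [ctx.socket(zmq.PAIR) for _ in range(n)]
        create_us = clk.stop()

        poller = zmq.Poller()
        for i, s in enumerate(socks):
            s.bind("inproc://peer-%d-%d" % (n, i))
            poller.register(s, zmq.POLLIN)

        clk.start()
        poller.poll(timeout=0)   # one empty polling pass over n sockets
        poll_us = clk.stop()

        print("n = %4d   create: %8d [us]   poll: %8d [us]"
              % (n, create_us, poll_us))

        for s in socks:
            s.close()
    ctx.term()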

There will be heavy non-linearities in both the [SPACE]-domain (memory allocation) and the [TIME]-domain, as the scale grows towards the 50k+ levels.

The next issue comes from non-homogeneous transport-class mixtures. inproc:// will have the least overhead, while some L3+ transport protocols will carry far larger overheads than others.
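The transport class is selected just by the endpoint string, so each mixture is cheap to benchmark with the very same code (endpoints below are placeholders):

    import zmq

    ctx = zmq.Context()
    s = ctx.socket(zmq.PAIR)

    # same API, very different overhead profiles:
    s.bind("inproc://peer-A")           # in-process: cheapest, bypasses I/O threads
    # s.bind("ipc:///tmp/peer-A")       # inter-process (POSIX only)
    # s.bind("tcp://127.0.0.1:5555")    # L3+ transport: full TCP/IP stack costs

    s.close()
    ctx.term()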

One might also enjoy testing nanomsg, as it has an NN_BUS scalable formal communication archetype that may better match these needs.
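A minimal sketch of the BUS archetype, assuming the nanomsg-python binding (the binding's API and the endpoint are assumptions, not tested here):

    from nanomsg import Socket, BUS

    # on a BUS topology every peer is symmetric: one socket talks to many peers
    sock = Socket(BUS)
    sock.bind("tcp://127.0.0.1:5555")          # placeholder endpoint
    # sock.connect("tcp://<other-peer>:5555")  # dial each of the other peers

    sock.send(b"hello, mesh")
    sock.close()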

Upvotes: 2
