susan

Reputation: 27

Java NIO socket communication for two processes located on one machine or on two machines

I am new to networking, so I have a simple question that confuses me. I hope people with a lot of networking knowledge can help.

I send messages constantly from process A to process B. If A and B are located on the same machine, the average end-to-end delay is ~6 ms; if A and B are located on two machines in a local network (connected by a very simple router), the average end-to-end delay is ~176 ms. Process A and process B communicate using Java non-blocking sockets. Each message is 10 KB, and copies of each message are sent to different receivers.

Another test varies the number of message copies. In this test there are three processes (A, B, and C): process A sends to process B, and process B sends to process C. If one message copy is sent, the end-to-end delay is ~10 ms; if two copies are sent consecutively (20 KB), the delay is ~147 ms; if four copies are sent one by one (40 KB), the delay is ~800 ms. A new message is sent every 1000 ms.

There is no delay between message copies. Why does the cost increase so fast as the number of message copies increases? Is this caused by my network configuration or by a problem in my code? What is the cause of this difference?

Part of the code is at this link: How to let selector that socketchannel key change in java nio

At first, I thought it was slow because the TCP buffer was full and the process had to wait until there was new room to write. Some references say the waiting time for TCP can be 200 ms. Is this the explanation for my problem, and how can I verify it?
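One way to check the buffer-full hypothesis is to watch for zero-byte writes: in non-blocking mode, `SocketChannel.write()` returns 0 when the socket's send buffer is full. The sketch below (class name and sizes are illustrative, not from the question's code) sends one 10 KB message over a loopback connection and counts those events; it also disables Nagle's algorithm, since the Nagle/delayed-ACK interaction is a classic source of ~200 ms stalls:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.StandardSocketOptions;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class PartialWriteDemo {
    /** Writes one 10 KB message over a non-blocking loopback connection and
     *  returns the number of bytes left unwritten (0 when fully sent). */
    static int demo() throws IOException {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0));

            SocketChannel sender = SocketChannel.open();
            sender.configureBlocking(false);
            // Disabling Nagle's algorithm rules out the classic ~200 ms
            // Nagle/delayed-ACK interaction when sending messages back to back.
            sender.setOption(StandardSocketOptions.TCP_NODELAY, true);
            sender.connect(server.getLocalAddress());
            SocketChannel receiver = server.accept();   // blocking accept
            while (!sender.finishConnect()) { }         // loopback connects fast

            ByteBuffer msg = ByteBuffer.allocate(10 * 1024);
            int zeroWrites = 0;
            while (msg.hasRemaining()) {
                int n = sender.write(msg);
                if (n == 0) {
                    // Send buffer full. In a real selector loop you would
                    // register OP_WRITE and return instead of busy-waiting;
                    // counting these events shows whether the buffer filled.
                    zeroWrites++;
                }
            }
            System.out.println("zero-byte writes observed: " + zeroWrites);
            sender.close();
            receiver.close();
            return msg.remaining();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("bytes left unwritten: " + demo());
    }
}
```

If the counter climbs in your real sender, the send buffer is indeed filling and the fix is to register OP_WRITE with the selector rather than spin.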

Upvotes: 1

Views: 462

Answers (2)

Mark Wilkins

Reputation: 41222

The times for both the local machine and across the network seem a little slow for that amount of data. Of course, it also depends on what processing is being performed on the data and whether it is done synchronously or asynchronously. For example, the slowdown when multiple requests are sent in a row could be explained by asynchronous handling of the data. If the data is pulled off the request and put in a queue to be "processed" and the response is sent immediately, then it might be relatively fast. If subsequent requests have to block while the queue is processed before returning, that might explain the slowdown. But I am just speculating at this point.

The request speed across the network obviously depends on the physical distance, number of hops, etc. But the numbers seem to indicate it is slower in general than it should be. And the most likely reason is that the logic in the code is not correct ... but without more information, it is not easy to guess at the reasons (at least for me).

Out of curiosity, I just ran some quick tests that sent messages via sockets on my local machine with each request being 10K (response packet much smaller). The average round trip was only 0.22 ms. A similar test to a server two hops away on the network averaged 1.9 ms per round trip. This was using TCP/IP. Using UDP to the server two hops away, the time was 1.7 ms on average.
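A loopback measurement along those lines can be sketched in plain Java (a minimal blocking-socket version; the class name, message count, and 1-byte reply are illustrative, and this is not the code that produced the numbers above):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class LoopbackRtt {
    /** Sends {@code messages} 10 KB requests to a loopback echo thread that
     *  answers each with a 1-byte reply; returns the average round trip in ms. */
    static double measure(int messages) throws Exception {
        ServerSocket server = new ServerSocket(0, 50, InetAddress.getLoopbackAddress());
        Thread echo = new Thread(() -> {
            try (Socket s = server.accept()) {
                InputStream in = s.getInputStream();
                OutputStream out = s.getOutputStream();
                byte[] buf = new byte[10 * 1024];
                for (int i = 0; i < messages; i++) {
                    int read = 0;
                    while (read < buf.length) {            // read the full 10 KB request
                        int n = in.read(buf, read, buf.length - read);
                        if (n < 0) return;
                        read += n;
                    }
                    out.write(1);                          // tiny response packet
                    out.flush();
                }
            } catch (IOException ignored) { }
        });
        echo.start();

        try (Socket client = new Socket(InetAddress.getLoopbackAddress(), server.getLocalPort())) {
            client.setTcpNoDelay(true);                    // avoid Nagle-induced stalls
            OutputStream out = client.getOutputStream();
            InputStream in = client.getInputStream();
            byte[] msg = new byte[10 * 1024];
            long start = System.nanoTime();
            for (int i = 0; i < messages; i++) {
                out.write(msg);
                out.flush();
                if (in.read() < 0) break;                  // wait for the 1-byte reply
            }
            return (System.nanoTime() - start) / 1e6 / messages;
        } finally {
            server.close();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.printf("avg round trip: %.3f ms%n", measure(100));
    }
}
```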

So the speeds you are seeing seem rather slow (but the hardware and network speed of my test may have nothing in common with your setup ... so these numbers are only marginally useful to you).

More usefully, you can use ping to get a baseline for how long a round trip should take. You can specify the packet size for ping (the -s option, or possibly -l, depending on the version you are using). Pinging the same two machines I used in the other test verified that the times made sense.
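For example (the payload-size flag is -s on most Linux/macOS versions and -l on Windows; the address below is a placeholder for a host on your LAN):

```shell
# Five pings with a 10 000-byte payload, to approximate one 10 KB message.
# Exact options vary by ping implementation; use -l instead of -s on Windows.
ping -c 5 -s 10000 192.168.1.10
```

The reported round-trip times give a floor for what your sockets can achieve; if your application is orders of magnitude slower than ping with the same payload, the bottleneck is in the application, not the network.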

Upvotes: 1

ethrbunny

Reputation: 10469

It's always slower to cross the network than to stay within a single machine.

Upvotes: 0
