John

Reputation: 5287

First request sent over TCP in a periodically invoked loop takes too much time

I have encountered behavior that I suspect is TCP-related and don't understand, and I would appreciate help with it.

A loop that sends requests over TCP and reads the responses is invoked periodically.

The first request-response pair almost always takes more time than the other pairs, regardless of the loop size.

This doesn't happen when I run both client and server on the same machine, so I suspect it is related to TCP.

The code example below illustrates the issue:

Invoked every second:

    for (int sentPackets = 0; sentPackets < totalPackets; sentPackets++) {

        // send request
        stream.write(packet);
        stream.flush();
        long sentAt = System.currentTimeMillis();

        // read response
        String response = StreamReader.read(inputStream, buffer);
        long receivedAt = System.currentTimeMillis();

        long dif = receivedAt - sentAt;

        System.out.println("dif: " + dif);
    }
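
As an aside, System.currentTimeMillis() has coarse resolution on Windows (typically 10-16 ms), which may explain the recurring 15/16 readings below; a sketch of the same measurement using System.nanoTime(), which is intended for elapsed-time measurement:

    // send request
    stream.write(packet);
    stream.flush();
    long sentAt = System.nanoTime();

    // read response
    String response = StreamReader.read(inputStream, buffer);
    long receivedAt = System.nanoTime();

    // convert nanoseconds to milliseconds for printing
    long dif = (receivedAt - sentAt) / 1000000;

    System.out.println("dif: " + dif);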

StreamReader is a utility for reading from the socket.

The read method implementation is shown below:

public static String read(DataInputStream stream, byte[] buffer) throws IOException {

    int totalBytes = 0;

    // keep reading from the stream until the entire packet is available
    while (totalBytes < buffer.length) {

        // read into buffer, starting after the last byte read, up to the remaining capacity
        int bytesRead = stream.read(buffer, totalBytes, buffer.length - totalBytes);

        // premature end of stream
        if (bytesRead == -1) {
            System.out.println("unexpected end of stream during socket read operation");
            break;
        }

        // accumulate the running total of bytes read
        totalBytes += bytesRead;
    }

    return new String(buffer);
}

There are no other operations between the time stamps, only the write to and the read from the socket.

What happens in that one second between loop invocations that causes this behavior, which doesn't occur on localhost?

Below are some sample measurements for visualization:

dif: 172
dif: 15
dif: 0
dif: 0

dif: 0
dif: 0
dif: 0
dif: 0

dif: 203
dif: 0
dif: 16
dif: 0

dif: 47
dif: 0
dif: 0
dif: 16

dif: 16
dif: 16
dif: 0
dif: 0

Upvotes: 0

Views: 51

Answers (1)

John Kugelman

Reputation: 361645

I suspect Nagle's algorithm:

Nagle's algorithm, named after John Nagle, is a means of improving the efficiency of TCP/IP networks by reducing the number of packets that need to be sent over the network. It works by combining a number of small outgoing messages, and sending them all at once.

Nagle's algorithm is disabled for localhost on Windows (but not Mac or Linux).

If the latency is really problematic you can disable Nagle's algorithm with the TCP_NODELAY socket option. It will decrease latency at the cost of higher bandwidth usage if you send lots of small packets.
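
In Java this corresponds to Socket.setTcpNoDelay(true), set before the streams are used; a minimal sketch (the host and port are placeholders):

    import java.io.IOException;
    import java.net.Socket;

    public class NoDelayClient {
        public static void main(String[] args) throws IOException {
            // placeholder host/port; substitute the real server address
            try (Socket socket = new Socket("server.example.com", 9000)) {
                // disable Nagle's algorithm: small writes are sent immediately
                // rather than buffered until earlier data has been acknowledged
                socket.setTcpNoDelay(true);

                // ... write requests / read responses as in the question ...
            }
        }
    }

Note that Nagle's algorithm delays the sender, so if the server's responses are also small, setting the option on the server's accepted socket may matter as well.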

Upvotes: 1
