source.rar

Reputation: 8080

TCP network throughput measurement

I am writing a small program to test network throughput as part of an exercise, and I need to increase the sending and receiving buffers to 256 KB to try to boost TCP performance. I am doing this with setsockopt() and the SO_SNDBUF/SO_RCVBUF options, and I have also increased the net.core.rmem_max and net.core.wmem_max sysctl values.
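The relevant setup is roughly this (a trimmed sketch; sockfd is the already-created TCP socket, error handling shortened):

  int bufsize = 256 * 1024;
  if (setsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof(bufsize)) < 0)
    perror("setsockopt(SO_SNDBUF)");
  if (setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof(bufsize)) < 0)
    perror("setsockopt(SO_RCVBUF)");

  /* Read the value back - Linux reports double the requested size. */
  int actual = 0;
  socklen_t optlen = sizeof(actual);
  if (getsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &actual, &optlen) == 0)
    printf("SO_RCVBUF is now %d bytes\n", actual);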

The getsockopt() confirms the increase in the buffer sizes (double the 256 KB value), so I know this part is fine. However, when I send 256 KB of data from one host to another, the receiver always gets it in several reads of varying size (somewhere between 20 and 40 reads, each ranging from 1448 to 18824 bytes) until all the data has arrived. I'm pretty confused at this point, mainly about these questions:

  1. With the increased buffer size, shouldn't it receive everything in one read?
  2. Also, why does the number of bytes in each read differ so much (shouldn't it be more or less constant)?
  3. Is there some way to ensure the 256 KB is received in one read?

Below is the receiver-side snippet showing the read part:

  while (1) {
    memset(&client_addr, 0, sizeof(client_addr));
    socklen = sizeof(client_addr);  /* value-result: reset before every accept() */
    if ((connfd = accept(listenfd, (struct sockaddr*)&client_addr, &socklen)) < 0) {
      perror("accept()");
      exit(EXIT_FAILURE);
    }

    /* recv() returns whatever the kernel has buffered, up to BUFF_SIZE bytes */
    while ((n = recv(connfd, &buff[0], BUFF_SIZE, 0/*MSG_WAITALL*/)) > 0) {
      totalBytes += n;
      ++pktCount;
      printf("Received(%d): %d bytes\n", pktCount, n);
    }

    if (n == 0) {  /* recv() returning 0 means the peer closed the connection */
      printf("Connection closed (Total: %lu bytes received)\n", totalBytes);
    }

    close(connfd);
    connfd = -1;
    totalBytes = 0;
    pktCount = 0;
  }

Any help would be great.

TIA

Upvotes: 1

Views: 2224

Answers (2)

CodeCaster

Reputation: 151586

You'll have about 180 packets of data to send (256 KB at a typical ~1448-byte MSS), which is not anywhere near a lot of data. Nagle's algorithm and ordinary kernel buffering will cause arbitrary amounts of data to be buffered by either OS, both while sending and while receiving.
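If you want to rule Nagle out on the sending side, you can disable it per socket; a minimal sketch, assuming sockfd is the sender's connected socket:

  #include <netinet/tcp.h>  /* for TCP_NODELAY */

  /* Send segments immediately instead of coalescing small writes */
  int one = 1;
  if (setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0)
    perror("setsockopt(TCP_NODELAY)");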

When any buffered data is presented through recv() is up to the implementation. It might very well deliver everything after a certain amount of time, or once a certain amount of internal buffer has accumulated. On some OSes you can alter that threshold using the SO_RCVLOWAT socket option.
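For example (a sketch, assuming connfd is the accepted socket from your snippet; support and exact semantics vary by OS):

  /* Don't wake a blocking recv() until at least ~64 KB are queued */
  int lowat = 64 * 1024;
  if (setsockopt(connfd, SOL_SOCKET, SO_RCVLOWAT, &lowat, sizeof(lowat)) < 0)
    perror("setsockopt(SO_RCVLOWAT)");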

Upvotes: 2

user405725

Reputation:

With the increased buffer size shouldn't it receive it in one read?

No. TCP is a streaming protocol and splits the data into smaller segments. The buffer size primarily affects how much memory the underlying network layer implementation may use (and Linux generally doubles whatever you set), but there is no guarantee that recv() returns data in chunks of exactly the sizes the sender wrote. After all, send()/write() returning doesn't mean the data has actually been transmitted yet. Your data will also most likely be split across packets anyway because of the MTU.
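Because of that, the usual way to read a fixed-size message is to loop until you have all of it. A minimal sketch, assuming buff and an EXPECTED byte count (hypothetical names) from the surrounding program:

  /* Accumulate reads until EXPECTED bytes have arrived */
  size_t got = 0;
  while (got < EXPECTED) {
    ssize_t n = recv(connfd, buff + got, EXPECTED - got, 0);
    if (n <= 0)
      break;  /* 0 = peer closed the connection, -1 = error */
    got += (size_t)n;
  }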

Also, why does the number of bytes in each read differ so much (shouldn't it be more or less constant)?

You may want to debug why this is happening, but there are hundreds of factors: the NIC in use, its settings and driver, the TCP/IP stack configuration, Ethernet-level settings, whatever sits between sender and receiver (e.g. a switch), and so on. The short answer is that more data keeps arriving while your OS and application are processing the previous chunk. Given that passing data from the NIC up to a user-space application is relatively expensive, it is not surprising that data gets buffered and handed to you in chunks of varying size.

Is there some way to ensure the 256 KB is received in one read?

There might be. You can use a blocking read and ask the OS to wake the process only once at least 256 KB have arrived or an error occurs.
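The MSG_WAITALL flag you have commented out in your snippet does roughly that: it asks recv() to block until the full request is satisfied, though it can still return short on a signal, an error, or connection close. A sketch, assuming BUFF_SIZE is at least 256 KB:

  /* Block until the whole request is filled (or an error/close cuts it short) */
  ssize_t n = recv(connfd, buff, BUFF_SIZE, MSG_WAITALL);
  if (n < 0)
    perror("recv");
  else
    printf("Received %zd bytes in one call\n", n);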

Upvotes: 2
