Reputation: 541
I am writing a network program as part of a networking project. The program generates a bunch of packets (TCP is used for communication via the Berkeley socket API) and sends them to a particular port, then measures the responses it gets back. The program works perfectly fine, but I wanted to do a little back-end calculation, such as how much data rate my program is actually generating. What I tried is to measure the time before and after the routine that sends the packets out, and divide the total data by that time, i.e.: a total of 799 packets are sent out in one run of the routine, where each packet is 82 bytes, so:
799 x 82 x 8 = 524144 bits. The measured time was 0.0001 s, so the data rate is 524144 / 0.0001 = 5.24 Gbps.
Here is the piece of code that I tried:
#include <stdio.h>
#include <sys/time.h>

/* Elapsed time between two gettimeofday() samples, in seconds. */
double diffTime(struct timeval *start, struct timeval *end)
{
    double start_sec = (double) start->tv_sec + (double) start->tv_usec / 1000000.0;
    double end_sec   = (double) end->tv_sec   + (double) end->tv_usec   / 1000000.0;
    return end_sec - start_sec;
}

int main(void)
{
    struct timeval start, end;

    while (1) {
        gettimeofday(&start, NULL);   /* start time */
        /* Call packet sending routine */
        gettimeofday(&end, NULL);     /* end time */
        printf("Time taken for sending out a batch is %f secs\n",
               diffTime(&start, &end));
    }
}
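To turn that measurement into a rate, the batch size can be divided by the elapsed time inside the loop; a minimal sketch, reusing diffTime() from above, with the 799-packet / 82-byte batch size taken from the numbers given earlier:

double elapsed = diffTime(&start, &end);     /* seconds for one batch */
double bits    = 799.0 * 82.0 * 8.0;         /* one batch: packets x bytes x bits */
printf("Approximate application-level rate: %.3f Gbps\n", bits / elapsed / 1e9);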
I wanted to confirm whether I am approaching the problem correctly. Also, if this is the right method, is there a way to find out the rate at which the packets actually go out on the wire, i.e. the physical rate of the packets leaving the Ethernet interface? Can we estimate the difference between the packet rate I calculated in the program (which is measured in user mode, and which I expect to be a lot slower since the user/kernel boundary is crossed on system calls) and the actual packet rate? All help much appreciated.
Thanks.
Upvotes: 0
Views: 1129
Reputation: 2976
I suspect this is not quite what you want. What you have will give you information, but perhaps not indicators of how to improve it.
Typically what I worry about is measuring latency caused by hitching in a socket send/recv relationship. That usually involves measuring how long I spend between send() calls, or how long I wait in some form of recv().
There are a couple of things to watch for. Are you depending on the network layer to gather and buffer your sends, or are you buffering yourself and doing the send only when you want the data to go out? If the latter, you usually want to turn off Nagle's algorithm (see setsockopt() and TCP_NODELAY), but be careful the code doesn't regress.
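For reference, disabling Nagle's algorithm is a single setsockopt() call on the connected socket; a minimal sketch, assuming sock is an already-created TCP socket descriptor:

#include <netinet/in.h>
#include <netinet/tcp.h>    /* TCP_NODELAY */
#include <sys/socket.h>

int flag = 1;   /* 1 = disable Nagle, send small writes immediately */
if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag)) < 0)
    perror("setsockopt(TCP_NODELAY)");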
Next is buffering. You are probably measuring the time it takes the data to hit the socket buffers, which will be almost instant. If you are streaming with no ack/coordination of response packets, this will average out fine over a long period. You can adjust the buffering with setsockopt() and SO_RCVBUF/SO_SNDBUF.
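Adjusting the socket buffers looks much the same; a sketch, where the 256 KB size is just an illustrative value (the kernel may round or clamp it):

#include <sys/socket.h>

int bufsize = 256 * 1024;   /* example size, tune for your link and latency */
if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof(bufsize)) < 0)
    perror("setsockopt(SO_SNDBUF)");
if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof(bufsize)) < 0)
    perror("setsockopt(SO_RCVBUF)");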
I'm going to reach a little farther. If you are measuring code and not the physical network for perf, and you are sending and receiving the packets in the largest blocks possible, there are two more things I would typically check.
Do you have an ACK-type protocol, i.e. does the code basically send and recv in lock-step? If so, that latency is likely to be the biggest problem. There are lots of ways to solve this, from allowing multiple pending requests that stream back independently, to implementing sliding windows. The main idea is simply: don't hitch the flow of data.
Finally, if you are dealing with recv() you typically don't want to block in recv() unless you really just want to handle one socket relationship per thread/process. The old-school solution is select(), and that is still viable for reasonable scalability. For larger-scale solutions you would look to epoll (Linux), kqueue/kevent (OS X), or IOCP (Windows). These let you move more of the bookkeeping and (sometimes) thread pooling back to the OS. This is where you want to be if your question is really "how much data can I pump out of this NIC?", but it is rarely necessary unless you are handling multiple connections.
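As a minimal illustration of the select() approach, the sketch below waits for readability before calling recv(), so the recv() itself does not block; sock is assumed to be an existing connected socket:

#include <sys/select.h>
#include <sys/socket.h>

char buf[4096];
fd_set readfds;
struct timeval tv = { 1, 0 };                /* wait at most one second */

FD_ZERO(&readfds);
FD_SET(sock, &readfds);

if (select(sock + 1, &readfds, NULL, NULL, &tv) > 0 && FD_ISSET(sock, &readfds)) {
    /* the socket is readable, so this recv() will return without blocking */
    ssize_t n = recv(sock, buf, sizeof(buf), 0);
    (void) n;                                /* handle n <= 0 and the data here */
}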
Sorry if I missed what you were really asking, but I tried to cover the common issues.
Upvotes: 1