I'm using the UDP function sendto(). Since UDP is connectionless, the call should hand the datagram to the kernel (or discard it) and return immediately. However, in my test on a LAN, when I send a large amount of data (200 MB) as a continuous stream of packets, most calls take less than 1 ms, but some take more than 10 ms, or even 20~30 ms. client.c:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <time.h>
#include <sys/time.h>

#define SIZE 1024 * 1024 * 200

int main(int argc, char **argv)
{
    /*
    if(argc < 2)
    {
        printf("needs params\n");
        exit(0);
    }
    int num = atoi(argv[1]);
    */
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if(fd == -1)
    {
        perror("socket");
        exit(0);
    }
    /*
    int bufsize = 0;
    socklen_t len = sizeof(int);
    getsockopt(fd, SOL_SOCKET, SO_SNDBUF, (void *)&bufsize, &len);
    printf("default: send buff = %d\n", bufsize);
    bufsize = bufsize * num;
    printf("bufsize change to %d\n", bufsize);
    setsockopt(fd, SOL_SOCKET, SO_SNDBUF, (const void *)&bufsize, sizeof(int));
    bufsize = -1; //reset
    len = sizeof(int);
    getsockopt(fd, SOL_SOCKET, SO_SNDBUF, (void *)&bufsize, &len);
    printf("after set, send buff = %d\n", bufsize);
    */
    struct sockaddr_in seraddr;
    seraddr.sin_family = AF_INET;
    unsigned short port = 49244;
    seraddr.sin_port = htons(port);
    inet_pton(AF_INET, "43.82.153.211", &seraddr.sin_addr.s_addr);

    char buf[1400];
    memset(buf, 'g', sizeof(buf));
    long long size = SIZE;
    struct timeval before, after;
    double mseconds = 0;
    unsigned int count = 1;
    while(size > 0)
    {
        gettimeofday(&before, NULL);
        int err = sendto(fd, buf, 1400, 0, (struct sockaddr*)&seraddr, sizeof(seraddr));
        gettimeofday(&after, NULL);
        if (-1 == err)
        {
            perror("sendto");
            exit(0);
        }
        mseconds = 1000 * (after.tv_sec - before.tv_sec) + (double)(after.tv_usec - before.tv_usec) / 1000;
        printf("%u: %-0.6f mseconds\n", count, mseconds);
        memset(buf, 0, sizeof(buf));
        size -= 1400;
        count++;
    }
    close(fd);
    return 0;
}
log:
...
546: 0.074000 mseconds
547: 0.083000 mseconds
548: 0.041000 mseconds
549: 0.067000 mseconds
550: 0.072000 mseconds
551: 0.048000 mseconds
552: 7.541000 mseconds    (much higher than the other values)
553: 0.082000 mseconds
554: 0.084000 mseconds
...
I want to know what causes these occasional delays.
I found two relevant parameters, SO_SNDBUF and SIOCSIFTXQLEN. Below is what I have tried (running on ARM, kernel linux-3.0.27):
# time ./client > client.log
real 0m 18.19s
user 0m 2.35s
sys 0m 4.78s
This binary only prints samples where the time is > 10 ms (the same holds for all the following runs):
# time ./client_print_largethan_10ms > client_print_largethan_10ms.log
real 0m 17.88s
user 0m 0.50s
sys 0m 4.44s
# sysctl -a | grep "net.core.wmem_"
net.core.wmem_max = 110592
net.core.wmem_default = 110592
# sysctl -w net.core.wmem_max=2211840
net.core.wmem_max = 2211840
# sysctl -a | grep "net.core.wmem_"
net.core.wmem_max = 2211840
net.core.wmem_default = 110592
Then run the test:
# time ./client_sndbuf_x 10 > client_sndbuf.log
real 0m 9.34s
user 0m 0.62s
sys 0m 4.63s
It takes less time, but there are still records larger than 10 ms in the log.
# time ./client_print_largethan_10ms 49244 > client_print_largethan_10ms_afterset_SIOCSIFTXQLEN.log
real 0m 17.82s
user 0m 0.49s
sys 0m 4.32s
No obvious change
# time ./client_sndbuf_x 10 > client_sndbuf.log
real 0m 17.84s
user 0m 0.55s
sys 0m 4.27s
Even in this case it takes longer. After I restored the SIOCSIFTXQLEN value, it took about 9 seconds again.
There seems to be some relationship between these two parameters. I don't want to completely eliminate the delays; I just want to understand how they are generated. Thanks.