Alamgir Qazi

Reputation: 793

Packets getting dropped with Libpcap in C on 1 Gig Traffic

I'm writing a packet parser in C using the libpcap library. Here is the simple code:


#include <pcap.h>
#include <stdio.h>
#include <time.h>

/* Packet counters and the callback are defined elsewhere in the program (not shown). */
extern int count, non_packets;
void dump_UDP_packet(u_char *args, const struct pcap_pkthdr *header, const u_char *packet);

int main(int argc, char *argv[])
{
    char error_buffer[PCAP_ERRBUF_SIZE];
    pcap_t *handle;
    int timeout_limit = 10000; // milliseconds
    clock_t begin = clock();

    // Type your interface name here
    const char *device = "ens33";

    if (device == NULL)
    {
        printf("Error finding device: %s\n", error_buffer);
        return 1;
    }

    // For live packet capturing
    handle = pcap_open_live(
        device,
        BUFSIZ,
        0,
        timeout_limit,
        error_buffer);

    if (handle == NULL)
    {
        printf("Error getting handle: %s\n", error_buffer);
        return 2;
    }

    pcap_loop(handle, 0, dump_UDP_packet, NULL);

    clock_t end = clock();
    double time_spent = (double)(end - begin) / CLOCKS_PER_SEC;

    printf("Program completed: total packets processed: %d (non UDP/Radius packets: %d) in %f seconds\n", count, non_packets, time_spent);
    return 0;
}


I'm using tcpreplay to play live traffic from a pcap file. My program, however, is only able to process/read around 80,000 packets out of the 240,000 packets in the file. When I try to read the same packets from tcpdump, I get no packet loss.

Is this due to buffer size? How can I ensure packets are not lost?

The tcpreplay run takes around 1.5 to 2 seconds at high speed (~500 MB/sec).

I'm running it on Ubuntu 18.04 (32 GB RAM, 24-core processor).

Upvotes: 2

Views: 888

Answers (1)

user13951124

Reputation: 176

When I try to read the same packets from tcpdump, I get no packet loss.

Is this due to buffer size?

Unless you use the -B command-line option, tcpdump does not set the kernel buffer size. Your program doesn't set it either: you're using pcap_open_live(), which has no way to set it (the snapshot length is not the buffer size). So both are using the same default buffer size (2 MiB), and the buffer size is probably not the problem.
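If you do want to rule out the buffer size, the pcap_create()/pcap_activate() open path lets you set it explicitly before activation. A minimal sketch, assuming the same "ens33" interface; the helper name and the 4 MiB value are just illustrations:

#include <pcap.h>
#include <stdio.h>

/* Open "ens33" with an explicitly sized kernel buffer (sketch). */
pcap_t *open_with_buffer(char *errbuf)
{
    pcap_t *p = pcap_create("ens33", errbuf);
    if (p == NULL)
        return NULL;

    pcap_set_snaplen(p, BUFSIZ);               /* same snapshot length as in the question */
    pcap_set_promisc(p, 0);                    /* not promiscuous, as in the question */
    pcap_set_timeout(p, 1000);                 /* 1 second; see the timeout point below */
    pcap_set_buffer_size(p, 4 * 1024 * 1024);  /* e.g. 4 MiB instead of the 2 MiB default */

    if (pcap_activate(p) < 0) {
        fprintf(stderr, "activate failed: %s\n", pcap_geterr(p));
        pcap_close(p);
        return NULL;
    }
    return p;
}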

How can I ensure packets are not lost?

Try using the timeout that tcpdump uses, 1 second (1000 milliseconds), rather than the 10 seconds (10000 milliseconds) you're currently using.
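In the code above that is just a change to the value passed to pcap_open_live():

int timeout_limit = 1000; // 1 second, the timeout tcpdump uses, instead of 10000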

Also, make sure dump_UDP_packet() isn't doing too much work (you haven't provided the source to let us know what it's doing).
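One way to check that is to time a run with a callback that does nothing but count. A sketch; the handler name and counter are hypothetical, but the signature is the pcap_handler signature that pcap_loop() expects:

#include <pcap.h>

/* Counter-only handler: no parsing, no I/O, so any remaining drops
   are not caused by per-packet processing time. */
static unsigned long seen = 0;

void count_only(u_char *user, const struct pcap_pkthdr *h, const u_char *bytes)
{
    (void)user;
    (void)h;
    (void)bytes;
    seen++;
}

If a run with a counter-only callback still loses packets, the drops are happening before your callback ever runs; if not, the processing in dump_UDP_packet() is the bottleneck.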

I'm running it on Ubuntu 18.04 (32 GB RAM, 24-core processor)

And both your program and tcpdump are using one core. (After a context switch, they may run on a core different from the one they were running on before the context switch, but it's still one core at a time; neither of them is multi-threaded, unless there's some threading in the part of your application that you didn't show.)

Upvotes: 2
