user1504439

Reputation: 51

Low Performance with Scapy

I'm writing a script that forwards to eth0 all traffic from tap0, and to tap0 all traffic from eth0. After finding many examples online, I managed to make it work. The problem is that the performance is very low.

Pinging between two VMs takes less than 1 ms without the script; with the script it takes ~15 ms.

When I send a 10 MB file from one VM to another using scp, the average transfer rate is 12 Mbps without the script. With the script it drops below 1 Mbps.

I know that Python is not exactly the fastest language for handling network traffic, but is it really that slow?

Is there a way to optimize this code?

My VMs run 32-bit Ubuntu 10.04.

Here is the code:

import os
import struct

from fcntl import ioctl
from select import select

from scapy.all import *

TUNSETIFF = 0x400454ca
IFF_TAP   = 0x0002
TUNMODE   = IFF_TAP

ETH_IFACE = "eth0"
TAP_IFACE = "tap0"

conf.iface = ETH_IFACE

# Here we capture frames on ETH0
s = conf.L2listen(iface = ETH_IFACE)

# Open /dev/net/tun in TAP (ether) mode (create TAP0)
f = os.open("/dev/net/tun", os.O_RDWR)
ifs = ioctl(f, TUNSETIFF, struct.pack("16sH", "tap%d", TUNMODE))


# Speed optimization so Scapy does not have to parse payloads
Ether.payload_guess = []

os.system("ifconfig eth0 0.0.0.0")
os.system("ifconfig tap0 192.168.40.107")
os.system("ifconfig tap0 down")
os.system("ifconfig tap0 hw ether 00:0c:29:7a:52:c4")
os.system("ifconfig tap0 up")

eth_hwaddr = get_if_hwaddr('eth0')

while 1:
    # Monitor f (tap0) and s (eth0) at the same time to see if a frame came in
    r = select([f, s], [], [])[0]

    # Frames from tap0
    if f in r:
        # tun/tap frame max. size is 1522 (Ethernet, see RFC 3580) + 4
        tap_frame = os.read(f, 1526)
        tap_rcvd_frame = Ether(tap_frame[4:])
        sendp(tap_rcvd_frame, verbose=0)  # Send frame to eth0

    # Frames from eth0
    if s in r:
        eth_frame = s.recv(1522)
        if eth_frame.src != eth_hwaddr:
            # Prepend the 4-byte tun/tap header; "\x00\x00\x00\x00" is required
            # when writing to tap interfaces (it is an identifier for the kernel).
            # Then convert the frame to a string and write it out.
            eth_sent_frame = "\x00\x00\x00\x00" + str(eth_frame)
            os.write(f, eth_sent_frame)  # Send frame to tap0

Upvotes: 5

Views: 7459

Answers (2)

jcchuks

Reputation: 907

I had a similar issue. From a link that seems to have dissected Scapy's source code:

Every time you invoke send() or sendp() Scapy will automatically create and close a socket for every packet you send! I can see convenience in that, makes the API much simpler! But I'm willing to bet that definitely takes a hit on performance!

There is a similar analysis here (link2). You can optimize by following this sample code from link2.

 # The code sample is from
 # https://home.regit.org/2014/04/speeding-up-scapy-packets-sending/
 # Also see https://byt3bl33d3r.github.io/mad-max-scapy-improving-scapys-packet-sending-performance.html
 # for a similar sample. This works.
 def run(self):
     # Open the input file
     filedesc = open(self.filename, 'r')
     s = conf.L2socket(iface=self.iface)  # Added: one socket, opened once and reused
     # Loop over the lines of the file
     for line in filedesc:
         # Build the packet (the construction of pkt from line is elided in this excerpt)
         # sendp(pkt, iface=self.iface, verbose=verbose)  # This line goes out
         s.send(pkt)  # sendp() is replaced with the socket's send()
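Applied to the question's script, the same fix would look roughly like this. This is an untested sketch; out_sock is a name introduced here, and the eth0-to-tap0 direction stays as it was:

 # Open one layer-2 socket up front and reuse it for every frame
 out_sock = conf.L2socket(iface=ETH_IFACE)

 while 1:
     r = select([f, s], [], [])[0]

     # Frames from tap0
     if f in r:
         tap_frame = os.read(f, 1526)
         # out_sock.send() reuses the same socket, instead of sendp()
         # opening and closing a raw socket for every single frame
         out_sock.send(Ether(tap_frame[4:]))

     # Frames from eth0: unchanged from the question's code

The key point is simply moving the socket creation out of the per-packet path; everything else about the bridge loop can stay the same.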

Upvotes: 2

tMC

Reputation: 19315

To be honest, I'm surprised it's performing as well as it is. I would be surprised if you could do much better than you already are.

Keep in mind the path a packet has to follow to cross your user-land bridge:

The packet comes in one interface, through the NIC driver, into the kernel; then it has to wait for a context switch to user-land, where it climbs the Scapy protocol abstractions before it can be evaluated by your code. Then your code sends it back down the Scapy protocol abstractions (possibly reassembling the packet in Python user-space), writes it to the socket, waits for a context switch back into kernel-land, where it is handed to the NIC driver and finally sent out the other interface...

Now, when you ping across that link, you're measuring the time it takes to go through that entire process twice: once going and once returning.

Considering you're context-switching between kernel and user land four times (two for each direction), doing it in 0.015 seconds is pretty good.
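If you want to see how the per-frame cost splits between parsing and sending in user-land, you can time the two steps separately. A rough sketch against the question's loop (untested; the counters and the 1000-frame reporting interval are introduced here purely for illustration):

 import time

 n_frames = 0
 parse_time = 0.0
 send_time = 0.0

 while 1:
     r = select([f, s], [], [])[0]
     if f in r:
         tap_frame = os.read(f, 1526)

         t0 = time.time()
         pkt = Ether(tap_frame[4:])   # climbing the Scapy abstractions
         t1 = time.time()
         sendp(pkt, verbose=0)        # back down, plus per-packet socket setup
         t2 = time.time()

         parse_time += t1 - t0
         send_time += t2 - t1
         n_frames += 1
         if n_frames % 1000 == 0:
             print "avg parse: %.6fs  avg send: %.6fs" % (
                 parse_time / n_frames, send_time / n_frames)

Numbers like these would tell you whether the time is going into Scapy's dissection, its per-packet socket handling, or the context switches themselves.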

Upvotes: 3
