Python Scapy vs dpkt


I am trying to analyse packets using Python's Scapy, starting from the basics. While searching recently, I found there is another Python module named dpkt. With this module I can parse the layers of a packet, create packets, read a .pcap file and write into a .pcap file. The differences I found between them are:

  1. dpkt has no live packet sniffer.

  2. In dpkt, some of the fields need to be unpacked manually using struct.unpack (see the sketch below).
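
For example, dpkt hands back the IP addresses as raw packed bytes, so you convert them yourself with struct or socket. A rough sketch of what I mean (the file name is just a placeholder):

import struct
import socket
import dpkt

with open('test.pcap', 'rb') as f:          # placeholder file name
    for ts, buf in dpkt.pcap.Reader(f):
        eth = dpkt.ethernet.Ethernet(buf)
        if not isinstance(eth.data, dpkt.ip.IP):
            continue
        ip = eth.data
        # ip.src / ip.dst are packed 4-byte strings, not dotted quads
        src = socket.inet_ntoa(ip.src)
        (dst_as_int,) = struct.unpack('!I', ip.dst)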

Are there any other differences I am missing?


There are 2 answers

Answer from RatDon

Scapy is a better performer than dpkt.

  1. You can create, sniff, modify and send packets using Scapy, while dpkt can only parse and build them; to actually send a dpkt packet you need a raw socket (a raw-socket sketch is at the end of this answer).
  2. As you mentioned, Scapy can sniff live. It can sniff from a network interface as well as read a .pcap file, using the rdpcap function or the offline parameter of the sniff function (see the Scapy sketch after this list).
  3. Scapy is generally used to build packet analysers and injectors. Its modules can be combined to create an application for a specific purpose.
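
For example, points 1 and 2 fit in a few lines of Scapy; the destination address and file name below are only placeholders, and live sniffing needs the right privileges:

from scapy.all import IP, ICMP, sr1, sniff, rdpcap

# create, modify and send a packet, waiting for one reply
pkt = IP(dst="192.0.2.1") / ICMP()          # placeholder destination
pkt[IP].ttl = 32                            # modify a field in place
reply = sr1(pkt, timeout=2, verbose=False)

# sniff live from the network ...
live = sniff(count=10, filter="icmp")

# ... or parse a capture file instead
offline = rdpcap("test.pcap")               # sniff(offline="test.pcap") also reads a file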

There might be many other differences as well.
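
And to round off the first point from the dpkt side: dpkt can build the packet bytes, but putting them on the wire is up to you through a raw socket. A rough sketch of one way to do it (needs root, and the destination address is made up):

import socket
import dpkt

# build an ICMP echo request with dpkt
echo = dpkt.icmp.ICMP.Echo(id=1, seq=1, data=b'hello')
icmp = dpkt.icmp.ICMP(type=dpkt.icmp.ICMP_ECHO, data=echo)

# dpkt has no send function -- push the bytes through a raw socket
s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
s.connect(('192.0.2.1', 1))                 # made-up destination
s.send(bytes(icmp))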

Answer from JenyaKh

I don't understand why people say that Scapy is the better performer. I quickly checked, as shown below, and the winner is dpkt: dpkt > scapy > pyshark.

My input pcap file used for testing is about 12.5 MB. The time is measured with the bash time command: time python testing.py. In each snippet I make sure that the packet is actually decoded from the raw bytes. The variable FILENAME should be assigned the name of the pcap file to parse.

dpkt

from dpkt.pcap import *
from dpkt.ethernet import *
import os

readBytes = 0
fileSize  = os.stat(FILENAME).st_size

with open(FILENAME, 'rb') as f:
    for t, pkt in Reader(f):
        # decode the raw bytes into an Ethernet frame, then report progress in %
        readBytes += len(Ethernet(pkt))
        print("%.2f" % (float(readBytes) / fileSize * 100))

The average time is about 0.3 seconds.


scapy -- using PcapReader

from scapy.all import *
import os

readBytes = 0
fileSize  = os.stat(FILENAME).st_size

for pkt in PcapReader(FILENAME):
    readBytes += len(pkt)
    print("%.2f" % (float(readBytes) / fileSize * 100))

The average time is about 4.5 seconds.


scapy -- using RawPcapReader

from scapy.all import *
import os

readBytes = 0
fileSize  = os.stat(FILENAME).st_size

for pkt, (sec, usec, wirelen, c) in RawPcapReader(FILENAME):
    readBytes += len(Ether(pkt))
    print("%.2f" % (float(readBytes) / fileSize * 100))

The average time is about 4.5 seconds.


pyshark

import pyshark
import os

filtered_cap = pyshark.FileCapture(FILENAME)

readBytes = 0
fileSize  = os.stat(FILENAME).st_size

for pkt in filtered_cap:
    readBytes += int(pkt.length)
    print("%.2f" % (float(readBytes) / fileSize * 100))

The average time is about 12 seconds.


I am not advertising dpkt at all; I do not care which library wins. The point is that I currently need to parse 8 GB files. With dpkt, the code above finishes on an 8 GB pcap file in about 4.5 minutes, which is bearable, while I would not even wait for the other libraries to finish. At least, this is my quick first impression. If I get new information, I will update this post.