I have set up a netcat client and server with the following (run as root, or prefix each command with sudo):
client_ip=2.2.2.0
server_ip=2.2.2.1
server_port=12345
# create server namespace
ip netns add server
# create veth pair from client to server
ip link add client type veth peer name server
ip link set dev server netns server
ip -netns server link set server up
ip link set client up
# add ip address to veth pair interfaces
ip addr add $client_ip/31 dev client
ip -netns server addr add $server_ip/31 dev server
# disable gro on the server interface
sudo ip netns exec server ethtool -K server gro off
# start the server
sudo ip netns exec server nc -lk $server_ip $server_port
# in another terminal send a large payload to the server
echo -n "$(printf 'a%.0s' {1..14000})" | nc -t 2.2.2.1 12345
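For reference, the resulting offload state on both ends of the veth pair can be inspected like this (just a sanity check, using the interface names from the setup above; the grep pattern matches the relevant segmentation/receive-offload features):
ethtool -k client | grep -E 'segmentation-offload|generic-receive-offload'
sudo ip netns exec server ethtool -k server | grep -E 'segmentation-offload|generic-receive-offload'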
In short: I have a client in the default namespace that uses netcat to send a large TCP payload (larger than the interface MTU) to a netcat server in the server namespace. The server interface has an MTU of 1500 and generic receive offload (GRO) disabled. When I take a tcpdump in the server namespace:
sudo ip netns exec server tcpdump -i any
it shows packets much larger than 1500 bytes. With GRO disabled, I would have expected the capture to contain only packets of size <= the server interface's MTU. Why does it not?
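To make those oversized packets easy to isolate, a capture length filter works (this filter is my addition, not part of the original test; `greater N` matches packets of at least N bytes, and 1600 sits safely above the MTU plus link-layer headers):
sudo ip netns exec server tcpdump -i any -nn 'tcp port 12345 and greater 1600'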
Note: if I put the client on a different machine and run a similar test:
server_ip=1.2.3.4
client_ip=5.6.7.8
#### machine 1 (server) ####
sudo ethtool -K eth0 gro off # turn off generic receive offload
sudo nc -lk $server_ip 12345 # start a server (this blocks, so toggle GRO first)
#### machine 2 (client) ####
echo -n "$(printf 'a%.0s' {1..14000})" | nc -t $server_ip 12345 # send large payload to machine 1
In this case it does what I expect: the client tcpdump shows packets larger than the 1500-byte MTU, but on the server side all packets are <= 1500 bytes. If I enable GRO on machine 1 (the server) and send the large payload again, the server tcpdump now reports packets larger than the MTU. So GRO clearly works, at least for packets ingressing from another machine; why does it not work in the veth pair case?
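Concretely, the comparison on machine 1 looks like this (assuming the NIC is eth0; run each capture while the client sends the payload):
#### machine 1 (server) ####
sudo ethtool -K eth0 gro off
sudo tcpdump -i eth0 -nn 'tcp port 12345 and greater 1600' # no matches: every segment <= MTU
sudo ethtool -K eth0 gro on
sudo tcpdump -i eth0 -nn 'tcp port 12345 and greater 1600' # matches: GRO-merged packets > MTU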
Also, if I disable TCP segmentation offload (TSO) on the client veth interface in the veth pair setup:
sudo ethtool -K client tso off
and send the large payload again, both the client and server captures now show only packets <= 1500 bytes. So TSO works as expected in the veth pair setup.
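To summarize the veth pair toggles and what I observe with each:
sudo ip netns exec server ethtool -K server gro off # alone: server capture still shows packets > 1500 bytes
sudo ethtool -K client tso off # added on top: both captures show only packets <= 1500 bytes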