Mininet TCP Congestion Control Losing Its Fairness Upon Reaching 5 Connections?


I am using Mininet to experiment with TCP congestion control. The network under test has six client hosts (h1–h6) attached to switch s1, which connects over a single bottleneck link to switch s2 and a server behind it. The Python code for this topology is as follows.

from mininet.topo import Topo
from mininet.node import Host, OVSKernelSwitch
from mininet.link import TCLink

class ServerClient(Topo):
    def build(self):
        # Six client hosts
        h1 = self.addHost('h1', cls=Host, defaultRoute=None)
        h2 = self.addHost('h2', cls=Host, defaultRoute=None)
        h3 = self.addHost('h3', cls=Host, defaultRoute=None)
        h4 = self.addHost('h4', cls=Host, defaultRoute=None)
        h5 = self.addHost('h5', cls=Host, defaultRoute=None)
        h6 = self.addHost('h6', cls=Host, defaultRoute=None)

        # One server host
        server = self.addHost('server', cls=Host, defaultRoute=None)

        # Two switches in standalone (learning-switch) mode
        s1 = self.addSwitch('s1', cls=OVSKernelSwitch, failMode='standalone')
        s2 = self.addSwitch('s2', cls=OVSKernelSwitch, failMode='standalone')

        # 1000 Mbit/s access links from each client to s1
        self.addLink(h1, s1, cls=TCLink, bw=1000, delay='0.2ms')
        self.addLink(h2, s1, cls=TCLink, bw=1000, delay='0.2ms')
        self.addLink(h3, s1, cls=TCLink, bw=1000, delay='0.2ms')
        self.addLink(h4, s1, cls=TCLink, bw=1000, delay='0.2ms')
        self.addLink(h5, s1, cls=TCLink, bw=1000, delay='0.2ms')
        self.addLink(h6, s1, cls=TCLink, bw=1000, delay='0.2ms')

        # 500 Mbit/s bottleneck link between the switches
        self.addLink(s1, s2, cls=TCLink, bw=500, delay='0.2ms')

        # 1000 Mbit/s link from s2 to the server
        self.addLink(server, s2, cls=TCLink, bw=1000, delay='0.2ms')
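
For completeness, the network is started with the usual Mininet boilerplate, roughly like this (a sketch, not necessarily my exact script; the CLI call is only there for manual poking around):

    from mininet.net import Mininet
    from mininet.cli import CLI
    from mininet.link import TCLink
    from mininet.log import setLogLevel

    if __name__ == '__main__':
        setLogLevel('info')
        # TCLink must be passed so the bw/delay link parameters take effect
        net = Mininet(topo=ServerClient(), link=TCLink)
        net.start()
        CLI(net)
        net.stop()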

All links in this network have a small delay of 0.2 ms and no configured packet loss. However, I am seeing odd behaviour once 5 or more iperf3 flows run in parallel. My machine has its TCP congestion control set to Reno via sudo sysctl net.ipv4.tcp_congestion_control=reno.
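
In case the per-host network namespaces matter for this sysctl, here is a sketch of how the setting can be forced and read back inside every host (net is the Mininet object from above):

    # Sanity check: force and read back the congestion control
    # algorithm inside each host's network namespace
    for h in net.hosts:
        h.cmd('sysctl -w net.ipv4.tcp_congestion_control=reno')
        print(h.name, h.cmd('sysctl -n net.ipv4.tcp_congestion_control').strip())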

The following commands were run on the hosts, with a 20-second delay between consecutive starts.

h1: iperf3 -c <Server IP> -P 1 -p 20000 -t 140
h2: iperf3 -c <Server IP> -P 1 -p 20001 -t 120
h3: iperf3 -c <Server IP> -P 1 -p 20002 -t 100
h4: iperf3 -c <Server IP> -P 1 -p 20003 -t 80
h5: iperf3 -c <Server IP> -P 1 -p 20004 -t 60
h6: iperf3 -c <Server IP> -P 1 -p 20005 -t 40

This code was used to run them in parallel.

    import time

    # net is the started Mininet object; the hosts are fetched from it
    hosts = [net.get('h%d' % i) for i in range(1, 7)]
    server = net.get('server')
    procs = []

    def run_iperf(hostid):
        with open('./res/h' + str(hostid + 1) + '.txt', 'w') as f:
            port = 20000 + hostid
            # One iperf3 server instance per port on the server host
            server.popen('iperf3 -s -p ' + str(port))
            # Durations: 140, 120, ..., 40 seconds for h1 through h6
            command = ('iperf3 -c ' + server.IP() + ' -P 1 -p ' + str(port)
                       + ' -t ' + str((6 - hostid) * 20 + 20))
            procs.append([hosts[hostid].popen(command, stdout=f), f])
        time.sleep(20)

    for i in range(6):
        run_iperf(i)
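
To judge fairness numerically instead of eyeballing the plots, the per-host throughputs from the h*.txt logs can be fed into Jain's fairness index. This is a small sketch with made-up numbers, not my measured values:

    def jain_fairness(throughputs):
        """Jain's index: 1.0 is perfectly fair, 1/n is the worst case."""
        n = len(throughputs)
        total = sum(throughputs)
        return total * total / (n * sum(x * x for x in throughputs))

    # Placeholder Mbit/s values for an interval where five flows overlap
    print(jain_fairness([100, 100, 100, 100, 100]))  # 1.0, perfectly fair
    print(jain_fairness([200, 60, 250, 60, 30]))     # roughly 0.65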

This was the result for each host's transfer rate: all hosts get a similar share of bandwidth until h5 joins the test, at which point h3's transfer rate surges and h5 barely manages to send anything. h6 behaves differently: it grabs a large share when it joins and then dips later.

I repeated this test multiple times with similar results: fairness breaks as soon as 5 or more hosts are running iperf3. I also tried passing -C reno to iperf3 just in case, but the result was the same.
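
One direction I am considering for diagnosing this is sampling the qdisc statistics on the bottleneck while the test runs, since TCLink implements the bandwidth limit with an HTB qdisc (plus netem for the delay) whose queue could be where the unfairness originates. A sketch, assuming the s1-s2 link was added seventh so the interface is s1-eth7:

    import time

    def sample_bottleneck(net, duration=140, interval=5):
        # Dump HTB/netem queue statistics (including drop counts) on the
        # s1 side of the bottleneck every few seconds; adjust the
        # interface name if the link order differs in your topology.
        s1 = net.get('s1')
        for _ in range(duration // interval):
            print(s1.cmd('tc -s qdisc show dev s1-eth7'))
            time.sleep(interval)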

I have also conducted a similar test where h6 acts as the iperf3 server instead. This time the result was as expected: all 5 remaining hosts maintained a similar level of transfer rate.

I would like to know what exactly is causing the unfairness. Thank you.
