How to connect two Docker containers to Open vSwitch + DPDK


I'm trying to test the throughput between two Docker containers using iperf3 (or any other throughput-testing app), connected through OVS (Open vSwitch) with DPDK on Ubuntu 18.04 (running in VMware Workstation). The goal is to compare the performance of OVS-DPDK against the Linux kernel datapath in some scenarios.

I can't find a proper solution that explains how to connect OVS+DPDK to Docker containers so that the containers can pass TCP/UDP traffic to each other.

I'd appreciate help explaining how to connect two Docker containers with OVS+DPDK: which configuration needs to be done inside the Docker containers, and which needs to be done on the host OS.

Note that I don't have any traffic coming from outside the host.

Thanks

Edit

  • DPDK version is 20.11.0
  • OVS version is 2.15.90
  • iperf3

Here are the steps I take:

  1. I install OVS with DPDK support using apt: sudo apt install openvswitch-switch-dpdk

  2. Set the alternative: sudo update-alternatives --set ovs-vswitchd /usr/lib/openvswitch-switch-dpdk/ovs-vswitchd-dpdk

  3. Allocate hugepages and update GRUB (a sketch of steps 3 and 4 follows this list).

  4. Mount the hugepages filesystem.

  5. Bind the NIC to DPDK: sudo dpdk-devbind --bind=vfio-pci ens33. I shouldn't actually need this step, since I have no outside traffic, but if I don't bind my NIC, sudo service openvswitch-switch restart fails.

  6. I create a bridge: ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev

  7. I create two ports for my containers: ovs-vsctl add-port br0 client -- set Interface client type=dpdk options:dpdk-devargs=<binded_nic_pci_addr> and ovs-vsctl add-port br0 server -- set Interface server type=dpdk options:dpdk-devargs=<binded_nic_pci_addr>. (The server is OpenFlow port 1 and the client is port 2.)

  8. Add flows to forward traffic bidirectionally between the ports:

    1. sudo ovs-ofctl del-flows br0
    2. sudo ovs-ofctl add-flow br0 in_port=1,action=output:2
    3. sudo ovs-ofctl add-flow br0 in_port=2,action=output:1
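
For steps 3 and 4, here is a minimal sketch of the hugepage setup (the page size and count are assumptions; adjust them to your VM's memory):

# /etc/default/grub: reserve hugepages at boot, then apply and reboot
GRUB_CMDLINE_LINUX_DEFAULT="default_hugepagesz=2M hugepagesz=2M hugepages=1024"
sudo update-grub && sudo reboot

# after reboot, mount the hugepage filesystem (often auto-mounted already)
sudo mkdir -p /dev/hugepages
sudo mount -t hugetlbfs none /dev/hugepages

# verify the allocation
grep Huge /proc/meminfo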

After step 8 I don't know how to connect my iperf3 Docker containers so that they use these ports. I'd appreciate help connecting the containers to the ports and testing the network.

Edit 2

Based on Vipin's answer, these steps won't work for my requirements.

Accepted answer, by Vipin Varghese:

[EDIT: updated to reflect using only OVS-DPDK and iperf3 in containers]

There are multiple ways to connect two Docker containers so that they talk directly to each other and run iperf3:

  1. A virtual interface (TAP-1 | MAC-VETH-1) from Docker-1 connected to TAP-2 | MAC-VETH-2 via a Linux bridge (a baseline sketch follows this list).
  2. A virtual port-1 (TAP | memif) from OVS-DPDK to Docker-1 and a virtual port-2 (TAP | memif) to Docker-2 via OVS-DPDK.
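
For scenario 1 (the kernel baseline the question wants to compare against), Docker's default bridge driver already connects a veth pair from each container to a Linux bridge, so a minimal sketch needs no manual plumbing (the network name and the networkstatic/iperf3 image are assumptions; any iperf3 image works):

# create a kernel-bridge network and run both containers on it
docker network create --driver bridge perf-net
docker run -d --name iperf-server --network perf-net networkstatic/iperf3 -s
docker run --rm --name iperf-client --network perf-net networkstatic/iperf3 -c iperf-server -t 30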

For scenario 2, one needs to add TAP interfaces to OVS, because the end application, iperf3, uses the kernel stack for TCP/UDP termination. One can use the settings below (adjust for your OVS-DPDK version) to achieve this.

sudo ./utilities/ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
sudo ./utilities/ovs-vsctl add-port br0 myeth0 -- set Interface myeth0 type=dpdk options:dpdk-devargs=net_tap0,iface=tap0
sudo ./utilities/ovs-vsctl add-port br0 myeth1 -- set Interface myeth1 type=dpdk options:dpdk-devargs=net_tap1,iface=tap1
sudo ./utilities/ovs-ofctl add-flow br0 in_port=1,action=output:2
sudo ./utilities/ovs-ofctl add-flow br0 in_port=2,action=output:1
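
The commands above create tap0 and tap1 on the host, but they do not spell out the container side. One way to hand each TAP device to a container is to move it into that container's network namespace; the following is a sketch of that approach (the container names, the networkstatic/iperf3 image, and the addresses are assumptions):

# start the containers without any Docker-managed network
docker run -d --name server --network none networkstatic/iperf3 -s
docker run -d --name client --network none --entrypoint sleep networkstatic/iperf3 infinity

# move the host-side TAP devices into the containers' network namespaces
SRV=$(docker inspect -f '{{.State.Pid}}' server)
CLI=$(docker inspect -f '{{.State.Pid}}' client)
sudo ip link set tap0 netns "$SRV"
sudo ip link set tap1 netns "$CLI"

# assign addresses and bring the links up inside each namespace
sudo nsenter -t "$SRV" -n ip addr add 172.16.0.1/24 dev tap0
sudo nsenter -t "$SRV" -n ip link set tap0 up
sudo nsenter -t "$CLI" -n ip addr add 172.16.0.2/24 dev tap1
sudo nsenter -t "$CLI" -n ip link set tap1 up

# run the test across the OVS-DPDK datapath
docker exec client iperf3 -c 172.16.0.1 -t 30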

Note:

  1. As mentioned in the comments, I am not in favour of this approach, as the TAP PMD defeats the benefit of bypassing the kernel (Docker-1 ==> kernel TAP-1 ==> DPDK PMD ==> OVS ==> DPDK PMD ==> kernel TAP-2 ==> Docker-2).
  2. If one simply needs to check iperf3 performance, please use DPDK-iperf3, such as the GitHub project that does the same.
  3. The reason for recommending the TAP PMD over the KNI PMD: using 2 CPU cores (a DPDK thread and a kernel thread), TAP and KNI are on par at around 4 Gbps with iperf3.

[EDIT-1] Based on the conversation at https://chat.stackoverflow.com/rooms/231963/ovs-dpdk with @MohammadSiavashi:

  1. iperf3 requires either a kernel or a userspace network stack.
  2. With Docker on Linux, one can use the kernel stack to achieve this.
  3. OVS-DPDK only bypasses the Linux kernel bridge.
  4. Hence the easiest alternative is to use TAP interfaces to inject traffic back into the kernel for the containers.
  5. There is an alternative (as shared in the answer) with a userspace network stack and iperf3 purely on DPDK.
  6. OVS-DPDK is not mandatory for the current testing, because one can run testpmd, l2fwd, or skeleton instead (see the sketch below).
  7. One can always use a userspace network stack instead of the kernel network stack, too.
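
As a sketch of point 6, testpmd alone can forward between two TAP vdevs with no OVS at all (the core list and option set are assumptions; adjust the EAL options to your setup):

# DPDK 20.11: io-forward between two kernel TAP devices, skipping the PCI scan
sudo dpdk-testpmd -l 0-1 --vdev=net_tap0,iface=tap0 --vdev=net_tap1,iface=tap1 --no-pci -- --forward-mode=io --auto-start

The tap0/tap1 interfaces can then be handed to the containers exactly as in the namespace sketch above.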

Current agreement:

  • The containers run on the host and use the kernel stack, isolated by namespaces and cgroups.
  • With the current understanding, @MohammadSiavashi will try out TAP-PMD-based OVS-DPDK, with userspace iperf3 as the alternative.