Simulating network failures in Docker


I am trying to simulate partial/total network/container failure in Docker in order to see how my application behaves under failure conditions. I started with pumba, but it isn't working right. More specifically, this command fails when run, both via pumba and when run directly on the container with docker exec:

tc qdisc add dev eth0 root netem delay 2000ms 10ms 20.00

with the following output:

RTNETLINK answers: Operation not permitted
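
For reference, the equivalent injection via pumba would look roughly like this (a sketch based on my reading of pumba's netem delay subcommand; the container name is just a placeholder):

# sketch: ~2000ms delay with 10ms jitter and 20% correlation, applied for 5 minutes
pumba netem --duration 5m --interface eth0 delay --time 2000 --jitter 10 --correlation 20 myAppContainer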

Now here is where it gets stranger. It works when run inside my service containers (rabbitmq:3.6.10, redis:4.0.1, mongo:3.5.11) after installing the iproute2 package, although only when run via pumba, not when run directly. It does not work inside my application containers, all of which use node:8.2.1 as the base image, which already has iproute2 installed. None of the containers have any cap_add entries applied.
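
For what it's worth, tc needs the CAP_NET_ADMIN capability, which Docker does not grant by default, so a container normally has to be started with it added. A minimal sketch (the image and shell are only illustrative):

# sketch: start a container with the capability tc/netem needs
docker run --rm -it --cap-add=NET_ADMIN node:8.2.1 bash
# inside that container the netem rule should then be accepted
tc qdisc add dev eth0 root netem delay 2000ms 10ms 20.00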

Output of ip addr on one of the application containers:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1
    link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: ip_vti0@NONE: <NOARP> mtu 1332 qdisc noop state DOWN group default qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
6: ip6_vti0@NONE: <NOARP> mtu 1500 qdisc noop state DOWN group default qlen 1
    link/tunnel6 :: brd ::
7: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1
    link/sit 0.0.0.0 brd 0.0.0.0
8: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1
    link/tunnel6 :: brd ::
9: ip6gre0@NONE: <NOARP> mtu 1448 qdisc noop state DOWN group default qlen 1
    link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
113: eth0@if114: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:12:00:06 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.6/16 scope global eth0
       valid_lft forever preferred_lft forever

There are 3 answers

polson136:

OK, I found part of the answer. It turns out that the tc command was not working when run directly on the service containers either, so the original question contained a bit of incorrect information: pumba works on the service containers but not on the application containers, and the tc command does not work when run directly in any of the containers.

It turns out that the pumba failure was a problem with running as an unprivileged user. I opened an issue with pumba to address the problem.

The tc command still isn't working when run as root, and I still don't know why. However, I was only using that command for debugging, so while I am curious why it doesn't work, my main issue has been resolved.
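
One way to confirm whether the container actually has the capability tc needs is to inspect the effective capability set; a sketch (capsh ships in the libcap2-bin package on Debian-based images and may need to be installed first):

# show the effective capability mask of the current shell
grep CapEff /proc/self/status
# decode it; cap_net_admin must be listed for tc to be allowed
capsh --decode=$(grep CapEff /proc/self/status | awk '{print $2}')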

rebecca stankus:

I had a similar issue on Windows and was finally able to resolve it by turning off the WSL 2 based engine in Docker settings. Now all my tc qdisc... commands are working.

Colis:

You should call exec on the container as the root user: -u=0

like:

sudo docker exec -u=0 myContainer tc qdisc add dev eth0 root netem delay 2000ms 10ms 20.00
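
Note that -u=0 only changes the user inside the container; if the container was not started with the NET_ADMIN capability, tc will still be refused. A combined sketch (the container name and image are placeholders):

# start the container with the capability tc needs
docker run -d --name myContainer --cap-add=NET_ADMIN node:8.2.1 sleep infinity
# then exec as root and apply the netem delay
sudo docker exec -u=0 myContainer tc qdisc add dev eth0 root netem delay 2000ms 10ms 20.00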