Unable to get pods on the Worker node to talk to the CoreDNS pods on the Master node


I have a relatively simple k8s setup: one Master node and one Worker node.

$ kubectl get pods -A -o wide
NAMESPACE          NAME                                       READY   STATUS    RESTARTS       AGE     IP                NODE           NOMINATED NODE   READINESS GATES
calico-apiserver   calico-apiserver-67dd546d9c-8qpfb          1/1     Running   1 (5h8m ago)   46h     192.168.77.138    master-node    <none>           <none>
calico-apiserver   calico-apiserver-67dd546d9c-m4gck          1/1     Running   1 (5h8m ago)   46h     192.168.77.139    master-node    <none>           <none>
calico-system      calico-kube-controllers-5c9df676df-b6k6d   1/1     Running   1 (5h8m ago)   46h     192.168.77.137    master-node    <none>           <none>
calico-system      calico-node-ggdm2                          1/1     Running   1 (5h8m ago)   46h     10.0.16.174       master-node    <none>           <none>
calico-system      calico-node-xmm27                          1/1     Running   1 (5h7m ago)   46h     10.0.16.197       worker-node1   <none>           <none>
calico-system      calico-typha-6b76b7b784-t52h2              1/1     Running   1 (5h8m ago)   46h     10.0.16.174       master-node    <none>           <none>
calico-system      csi-node-driver-7njf6                      2/2     Running   2 (5h7m ago)   46h     192.168.180.203   worker-node1   <none>           <none>
calico-system      csi-node-driver-v2j8f                      2/2     Running   2 (5h8m ago)   46h     192.168.77.140    master-node    <none>           <none>
default            busybox                                    1/1     Running   4 (48m ago)    4h48m   192.168.180.206   worker-node1   <none>           <none>
default            dnsutils                                   1/1     Running   1 (5h7m ago)   27h     192.168.77.200    worker-node1   <none>           <none>
default            redis-8464495b9b-q24wr                     1/1     Running   1 (5h7m ago)   23h     192.168.180.204   worker-node1   <none>           <none>
default            redis-8464495b9b-srpb8                     1/1     Running   1 (5h7m ago)   23h     192.168.180.205   worker-node1   <none>           <none>
kube-system        coredns-787d4945fb-4c6lx                   1/1     Running   1 (5h8m ago)   46h     192.168.77.136    master-node    <none>           <none>
kube-system        coredns-787d4945fb-pq7q4                   1/1     Running   1 (5h8m ago)   46h     192.168.77.135    master-node    <none>           <none>
kube-system        etcd-master-node                           1/1     Running   3 (5h8m ago)   46h     10.0.16.174       master-node    <none>           <none>
kube-system        kube-apiserver-master-node                 1/1     Running   3 (5h8m ago)   46h     10.0.16.174       master-node    <none>           <none>
kube-system        kube-controller-manager-master-node        1/1     Running   1 (5h8m ago)   46h     10.0.16.174       master-node    <none>           <none>
kube-system        kube-proxy-65flc                           1/1     Running   1 (5h7m ago)   46h     10.0.16.197       worker-node1   <none>           <none>
kube-system        kube-proxy-7mjg2                           1/1     Running   1 (5h8m ago)   46h     10.0.16.174       master-node    <none>           <none>
kube-system        kube-scheduler-master-node                 1/1     Running   3 (5h8m ago)   46h     10.0.16.174       master-node    <none>           <none>
tigera-operator    tigera-operator-78d7857c44-jtd8m           1/1     Running   2 (5h7m ago)   46h     10.0.16.174       master-node    <none>           <none>

On the Master Node:

$ ip a | grep "inet "     

inet 127.0.0.1/8 scope host lo     
inet 10.0.16.174/24 metric 100 brd 10.0.16.255 scope global dynamic eth0     
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0     
inet 192.168.77.128/32 scope global vxlan.calico

On the Worker Node:

$ ip a | grep "inet "     

inet 127.0.0.1/8 scope host lo     
inet 10.0.16.197/24 metric 100 brd 10.0.16.255 scope global dynamic eth0     
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0     
inet 192.168.180.192/32 scope global vxlan.calico 

  • From the Master node, I can’t ping the IPs of pods deployed on the Worker node.
  • From the Worker node, I can’t ping the IPs of pods deployed on the Master node.
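These symptoms (node IPs reachable, pod IPs unreachable across nodes, `vxlan.calico` interfaces present on both nodes) fit VXLAN-encapsulated traffic being dropped in transit. Since the iptables output below shows `ca-central-1.compute.internal` hostnames, these look like AWS instances, and a common culprit there is a security group that allows ICMP/TCP between nodes but not Calico's VXLAN UDP port 4789. A hedged diagnostic sketch (the IP pool name is the usual operator default and may differ in your cluster):

```shell
# 1. Confirm the Calico IP pool really uses VXLAN encapsulation
#    ("default-ipv4-ippool" is the common default name; yours may differ):
kubectl get ippools.crd.projectcalico.org default-ipv4-ippool -o yaml | grep -i vxlan

# 2. On the Worker node, watch for encapsulated packets while pinging a
#    worker-side pod IP from the Master node:
sudo tcpdump -ni eth0 udp port 4789

# If the ping produces no UDP/4789 packets on the receiving node, the
# encapsulated traffic is being dropped between the nodes (e.g. by a
# security group or firewall rule that does not allow UDP 4789).
```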

Pings from the Worker node (I can’t reach 192.168.77.x addresses):

$ ping 10.0.16.174
PING 10.0.16.174 (10.0.16.174) 56(84) bytes of data.
64 bytes from 10.0.16.174: icmp_seq=1 ttl=64 time=0.459 ms
^C
--- 10.0.16.174 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms

$ ping 192.168.180.206
PING 192.168.180.206 (192.168.180.206) 56(84) bytes of data.
64 bytes from 192.168.180.206: icmp_seq=1 ttl=64 time=0.062 ms
^C
--- 192.168.180.206 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms

$ ping 192.168.77.138
PING 192.168.77.138 (192.168.77.138) 56(84) bytes of data.
^C
--- 192.168.77.138 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2055ms

Pings from the Master node (I can’t reach 192.168.180.x addresses):

$ ping 10.0.16.197
PING 10.0.16.197 (10.0.16.197) 56(84) bytes of data.
64 bytes from 10.0.16.197: icmp_seq=1 ttl=64 time=0.553 ms
64 bytes from 10.0.16.197: icmp_seq=2 ttl=64 time=0.686 ms
^C
--- 10.0.16.197 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3055ms
rtt min/avg/max/mdev = 0.466/0.550/0.686/0.084 ms

$ ping 192.168.77.138
PING 192.168.77.138 (192.168.77.138) 56(84) bytes of data.
64 bytes from 192.168.77.138: icmp_seq=1 ttl=64 time=0.068 ms
64 bytes from 192.168.77.138: icmp_seq=2 ttl=64 time=0.084 ms
^C
--- 192.168.77.138 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1011ms
rtt min/avg/max/mdev = 0.068/0.076/0.084/0.008 ms

$ ping 192.168.180.192
PING 192.168.180.192 (192.168.180.192) 56(84) bytes of data.
^C
--- 192.168.180.192 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1009ms

On the Worker node:

$ sudo iptables -L -t nat | grep 192.168.180
KUBE-MARK-MASQ  all  --  ip-192-168-180-204.ca-central-1.compute.internal  anywhere             /* default/redis */
DNAT       tcp  --  anywhere             anywhere             /* default/redis */ tcp to:192.168.180.204:6379
KUBE-MARK-MASQ  all  --  ip-192-168-180-205.ca-central-1.compute.internal  anywhere             /* default/redis */
DNAT       tcp  --  anywhere             anywhere             /* default/redis */ tcp to:192.168.180.205:6379
KUBE-SEP-7JGTFA7XNH6WUV4O  all  --  anywhere             anywhere             /* default/redis -> 192.168.180.204:6379 */ statistic mode random probability 0.50000000000
KUBE-SEP-NQMUIFXC5KYTO5AR  all  --  anywhere             anywhere             /* default/redis -> 192.168.180.205:6379 */


$ telnet 192.168.180.204 6379
Trying 192.168.180.204...
Connected to 192.168.180.204.
Escape character is '^]'.
^]
telnet> Connection closed.

From the Master node:

$ ip r
default via 10.0.16.1 dev eth0 proto dhcp src 10.0.16.174 metric 100
10.0.0.2 via 10.0.16.1 dev eth0 proto dhcp src 10.0.16.174 metric 100
10.0.16.0/24 dev eth0 proto kernel scope link src 10.0.16.174 metric 100
10.0.16.1 dev eth0 proto dhcp scope link src 10.0.16.174 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
blackhole 192.168.77.128/26 proto 80
192.168.77.135 dev calicfd8c8d22d9 scope link
192.168.77.136 dev calib23a9ebef36 scope link
192.168.77.137 dev calied3a4a65ec7 scope link
192.168.77.138 dev calie3469641205 scope link
192.168.77.139 dev cali6592c897030 scope link
192.168.77.140 dev cali9e6e0abfa20 scope link
192.168.77.192/26 via 10.0.16.197 dev eth0 proto 80 onlink
192.168.180.192/26 via 10.0.16.197 dev eth0 proto 80 onlink
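This routing table shows Calico assigning each node /26 blocks of the pod CIDR: the Master owns 192.168.77.128/26 (the blackhole route plus per-pod `cali*` /32 routes), while 192.168.77.192/26 and 192.168.180.192/26 are routed to the Worker at 10.0.16.197. A small arithmetic sketch (hypothetical helper names, plain bash) to map a pod IP to its /26 block:

```shell
# Convert a dotted-quad IP to a 32-bit integer.
ip_to_int() { local IFS=.; set -- $1; echo $((($1<<24)+($2<<16)+($3<<8)+$4)); }

# Print the /26 block an IP falls into (a /26 mask clears the low 6 bits).
block_of() {
  local n base
  n=$(ip_to_int "$1")
  base=$(( n & ~63 ))
  printf '%d.%d.%d.%d/26\n' $((base>>24&255)) $((base>>16&255)) $((base>>8&255)) $((base&255))
}

block_of 192.168.180.204   # worker redis pod -> 192.168.180.192/26
block_of 192.168.77.136    # master coredns pod -> 192.168.77.128/26
```

This confirms the routes above are at least consistent: every pod IP in the listing lands in a block whose route points at the node actually hosting it, so the problem is unlikely to be the route programming itself.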

$ telnet 192.168.180.204 6379
Trying 192.168.180.204...
telnet: Unable to connect to remote host: Connection timed out

I created a dnsutils pod with an IP address in the same CIDR range as CoreDNS and the other pods on the Master node, but the dnsutils pod still can't communicate with CoreDNS.
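For reference, the standard DNS debugging steps from a pod like dnsutils look roughly like the sketch below; in this cluster they would presumably keep failing until cross-node pod traffic works, but querying a CoreDNS pod IP directly helps separate a DNS problem from the underlying routing problem:

```shell
# Resolve via the cluster DNS Service (ClusterIP):
kubectl exec -it dnsutils -- nslookup kubernetes.default

# Query one coredns pod IP directly, bypassing the Service VIP
# (IP taken from the pod listing above):
kubectl exec -it dnsutils -- nslookup kubernetes.default 192.168.77.136

# Confirm the kube-dns Service actually has the coredns pods as endpoints:
kubectl get endpoints kube-dns -n kube-system
```

If the direct-to-pod query times out while the endpoints look correct, the failure is pod-to-pod connectivity (the VXLAN path), not CoreDNS itself.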
