Kubernetes Pods cannot find each other on different nodes


I set up a Kubernetes cluster with one master and two worker nodes on three bare-metal CentOS 7 servers. I used kubeadm, following this guide: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/, with Weave Net as the pod network.

For testing, I deployed two default-http-backend Deployments with Services to expose their ports:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
spec:
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend-2
  labels:
    k8s-app: default-http-backend-2
spec:
  template:
    metadata:
      labels:
        k8s-app: default-http-backend-2
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend-2
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    k8s-app: default-http-backend
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend-2
  labels:
    k8s-app: default-http-backend-2
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    k8s-app: default-http-backend-2

If the two pods land on the same node, I can curl one pod's port from the other. But if they are scheduled to different nodes, the connection fails with "no route to host":

$~ kubectl get svc
NAME                     CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
default-http-backend     10.111.59.235   <none>        80/TCP    34m
default-http-backend-2   10.106.29.17    <none>        80/TCP    34m
$~ kubectl get po -o wide
NAME                                     READY     STATUS    RESTARTS   AGE       IP          NODE
default-http-backend-2-990549169-dd29z   1/1       Running   0          35m       10.44.0.1   vm0059
default-http-backend-726995137-9994z     1/1       Running   0          35m       10.36.0.1   vm0058

$~ kubectl exec -it default-http-backend-726995137-9994z sh
/ # wget 10.111.59.235:80
Connecting to 10.111.59.235:80 (10.111.59.235:80)
wget: server returned error: HTTP/1.1 404 Not Found
/ # wget 10.106.29.17:80
Connecting to 10.106.29.17:80 (10.106.29.17:80)
wget: can't connect to remote host (10.106.29.17): No route to host
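A "no route to host" for cross-node pod traffic often points at the host firewall rather than at Kubernetes itself. As a quick check (a sketch, not from the original post; the peer IP is a placeholder, and the port 6783 is Weave Net's documented TCP control port), one can probe the other node directly:

```shell
#!/bin/bash
# Sketch: probe a peer node's Weave Net control port (TCP 6783).
# PEER is a placeholder -- substitute the other node's IP.
PEER="${1:-127.0.0.1}"

tcp_open() {
  # Returns 0 if a TCP connection to $1:$2 succeeds within 2 seconds.
  timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null
}

if tcp_open "$PEER" 6783; then
  echo "6783/tcp reachable"
else
  echo "6783/tcp blocked or closed"
fi
# Weave's UDP data ports (6783-6784) cannot be probed this way; check
# 'weave status connections' on the node instead, which reports whether
# peer connections established or fell back.
```

If the TCP port is reachable but pods still cannot talk across nodes, UDP between the nodes is the next suspect.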

Versions used:

$~ docker version
Client:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-32.git88a4867.el7.centos.x86_64
 Go version:      go1.7.4
 Git commit:      88a4867/1.12.6
 Built:           Mon Jul  3 16:02:02 2017
 OS/Arch:         linux/amd64
Server:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-32.git88a4867.el7.centos.x86_64
 Go version:      go1.7.4
 Git commit:      88a4867/1.12.6
 Built:           Mon Jul  3 16:02:02 2017
 OS/Arch:         linux/amd64

$~ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T09:14:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T08:56:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

$~ iptables-save
*nat
:PREROUTING ACCEPT [7:420]
:INPUT ACCEPT [7:420]
:OUTPUT ACCEPT [17:1020]
:POSTROUTING ACCEPT [21:1314]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-3N4EFB5KN7DZON3G - [0:0]
:KUBE-SEP-5LXBJFBNQIVWZQ4R - [0:0]
:KUBE-SEP-5WQPOVEQM6CWLFNI - [0:0]
:KUBE-SEP-64ZDVBFDSQK7XP5M - [0:0]
:KUBE-SEP-6VF4APMJ4DYGM3KR - [0:0]
:KUBE-SEP-TPSZNIDDKODT2QF2 - [0:0]
:KUBE-SEP-TR5ETKVRYPRDASMW - [0:0]
:KUBE-SEP-VMZRVJ7XGG63C7Q7 - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-2BEQYC4GXBICFPF4 - [0:0]
:KUBE-SVC-2J3GLVYDXZLHJ7TU - [0:0]
:KUBE-SVC-2QFLXPI3464HMUTA - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-OWOER5CC7DL5WRNU - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-V76ZVCWXDRE26OHU - [0:0]
:WEAVE - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.30.38.0/24 ! -o docker0 -j MASQUERADE
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/driveme-service:" -m tcp --dport 31305 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/driveme-service:" -m tcp --dport 31305 -j KUBE-SVC-2BEQYC4GXBICFPF4
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/registry-server:" -m tcp --dport 31048 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/registry-server:" -m tcp --dport 31048 -j KUBE-SVC-2J3GLVYDXZLHJ7TU
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/auth-service:" -m tcp --dport 31722 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/auth-service:" -m tcp --dport 31722 -j KUBE-SVC-V76ZVCWXDRE26OHU
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/api-gateway:" -m tcp --dport 32139 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/api-gateway:" -m tcp --dport 32139 -j KUBE-SVC-OWOER5CC7DL5WRNU
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-3N4EFB5KN7DZON3G -s 10.32.0.15/32 -m comment --comment "default/api-gateway:" -j KUBE-MARK-MASQ
-A KUBE-SEP-3N4EFB5KN7DZON3G -p tcp -m comment --comment "default/api-gateway:" -m tcp -j DNAT --to-destination 10.32.0.15:8080
-A KUBE-SEP-5LXBJFBNQIVWZQ4R -s 10.32.0.13/32 -m comment --comment "default/registry-server:" -j KUBE-MARK-MASQ
-A KUBE-SEP-5LXBJFBNQIVWZQ4R -p tcp -m comment --comment "default/registry-server:" -m tcp -j DNAT --to-destination 10.32.0.13:8888
-A KUBE-SEP-5WQPOVEQM6CWLFNI -s 172.16.16.102/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-5WQPOVEQM6CWLFNI -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-5WQPOVEQM6CWLFNI --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 172.16.16.102:6443
-A KUBE-SEP-64ZDVBFDSQK7XP5M -s 10.32.0.12/32 -m comment --comment "default/driveme-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-64ZDVBFDSQK7XP5M -p tcp -m comment --comment "default/driveme-service:" -m tcp -j DNAT --to-destination 10.32.0.12:9595
-A KUBE-SEP-6VF4APMJ4DYGM3KR -s 10.32.0.11/32 -m comment --comment "kube-system/default-http-backend:" -j KUBE-MARK-MASQ
-A KUBE-SEP-6VF4APMJ4DYGM3KR -p tcp -m comment --comment "kube-system/default-http-backend:" -m tcp -j DNAT --to-destination 10.32.0.11:8080
-A KUBE-SEP-TPSZNIDDKODT2QF2 -s 10.32.0.14/32 -m comment --comment "default/auth-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-TPSZNIDDKODT2QF2 -p tcp -m comment --comment "default/auth-service:" -m tcp -j DNAT --to-destination 10.32.0.14:9090
-A KUBE-SEP-TR5ETKVRYPRDASMW -s 10.32.0.10/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-TR5ETKVRYPRDASMW -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.32.0.10:53
-A KUBE-SEP-VMZRVJ7XGG63C7Q7 -s 10.32.0.10/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-VMZRVJ7XGG63C7Q7 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.32.0.10:53
-A KUBE-SERVICES -d 10.104.131.183/32 -p tcp -m comment --comment "kube-system/default-http-backend: cluster IP" -m tcp --dport 80 -j KUBE-SVC-2QFLXPI3464HMUTA
-A KUBE-SERVICES -d 10.96.244.116/32 -p tcp -m comment --comment "default/driveme-service: cluster IP" -m tcp --dport 9595 -j KUBE-SVC-2BEQYC4GXBICFPF4
-A KUBE-SERVICES -d 10.108.120.94/32 -p tcp -m comment --comment "default/registry-server: cluster IP" -m tcp --dport 8888 -j KUBE-SVC-2J3GLVYDXZLHJ7TU
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.96.104.233/32 -p tcp -m comment --comment "default/auth-service: cluster IP" -m tcp --dport 9090 -j KUBE-SVC-V76ZVCWXDRE26OHU
-A KUBE-SERVICES -d 10.98.19.144/32 -p tcp -m comment --comment "default/api-gateway: cluster IP" -m tcp --dport 8080 -j KUBE-SVC-OWOER5CC7DL5WRNU
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-2BEQYC4GXBICFPF4 -m comment --comment "default/driveme-service:" -j KUBE-SEP-64ZDVBFDSQK7XP5M
-A KUBE-SVC-2J3GLVYDXZLHJ7TU -m comment --comment "default/registry-server:" -j KUBE-SEP-5LXBJFBNQIVWZQ4R
-A KUBE-SVC-2QFLXPI3464HMUTA -m comment --comment "kube-system/default-http-backend:" -j KUBE-SEP-6VF4APMJ4DYGM3KR
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-TR5ETKVRYPRDASMW
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-5WQPOVEQM6CWLFNI --mask 255.255.255.255 --rsource -j KUBE-SEP-5WQPOVEQM6CWLFNI
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-5WQPOVEQM6CWLFNI
-A KUBE-SVC-OWOER5CC7DL5WRNU -m comment --comment "default/api-gateway:" -j KUBE-SEP-3N4EFB5KN7DZON3G
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-VMZRVJ7XGG63C7Q7
-A KUBE-SVC-V76ZVCWXDRE26OHU -m comment --comment "default/auth-service:" -j KUBE-SEP-TPSZNIDDKODT2QF2
-A WEAVE -s 10.32.0.0/12 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 10.32.0.0/12 -d 10.32.0.0/12 -j MASQUERADE
-A WEAVE -s 10.32.0.0/12 ! -d 10.32.0.0/12 -j MASQUERADE
COMMIT
# Completed on Wed Sep 13 09:29:35 2017
# Generated by iptables-save v1.4.21 on Wed Sep 13 09:29:35 2017
*filter
:INPUT ACCEPT [1386:436876]
:FORWARD ACCEPT [67:11075]
:OUTPUT ACCEPT [1379:439138]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
-A INPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC -m set ! --match-set weave-local-pods dst -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-iuZcey(5DeXbzgRFs8Szo]+@p dst -m comment --comment "DefaultAllow isolation for namespace: kube-system" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-4vtqMI+kx/2]jD%_c0S%thO%V dst -m comment --comment "DefaultAllow isolation for namespace: kube-public" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-k?Z;25^M}|1s7P3|H9i;*;MhG dst -m comment --comment "DefaultAllow isolation for namespace: default" -j ACCEPT
COMMIT
# Completed on Wed Sep 13 09:29:35 2017

The 404 is the expected response from the service, so the first request actually succeeded.

Does anyone have an idea what could be causing this?

Edit: Added examples and additional information

1 Answer

Answered by Seeron (accepted):

So, I resolved my issue. For anyone who finds this post with the same problem: in my case, all UDP traffic between the nodes was blocked and only TCP was allowed. But DNS is handled via UDP, so UDP has to be allowed between the nodes as well.
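For reference, a sketch of the corresponding firewall change on CentOS 7 (assuming firewalld is in use; the port list follows Weave Net's documented defaults of 6783/tcp and 6783-6784/udp, plus DNS on 53/udp, and is not taken from the answer above). The function only prints the firewall-cmd invocations so they can be reviewed before running them as root:

```shell
#!/bin/sh
# Sketch: print firewalld commands opening the ports Weave Net and cluster
# DNS need between nodes (UDP was the missing piece in this question).
emit_firewall_rules() {
  for port in 6783/tcp 6783/udp 6784/udp 53/udp; do
    echo "firewall-cmd --permanent --add-port=$port"
  done
  echo "firewall-cmd --reload"
}
emit_firewall_rules   # review the output, then run it as root on every node
```

The same ports would need opening in whatever external firewall sits between the machines, not just in firewalld on the hosts.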