Pod-to-service communication with Kubernetes and Flannel


I have recently set up a multi-machine Kubernetes cluster with Docker and Flannel. Flannel is configured on the 172.16.0.0/16 subnet, such that a container on host A with an assigned IP of 172.16.78.2 can ping a container on host B with an assigned IP of 172.16.74.2.

I have Kubernetes set up with all of its components (kubelet, kube-proxy, kube-apiserver, kube-scheduler, kube-controller-manager), and I can successfully launch deployments and pods across the cluster.

Problem

I deployed a Redis service and my webapp pod onto the cluster. In the webapp pod, the environment variables REDIS_SERVICE_HOST and REDIS_SERVICE_PORT are set, but REDIS_SERVICE_HOST is a seemingly random IP on the 172.16.0.0/16 subnet. To be clear: if I run ifconfig inside the Redis container and take the IP address of eth0, I can ping that address from my webapp pod, but not the address assigned to REDIS_SERVICE_HOST.
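For reference, this is how I am checking the variables from inside the pod (the pod name `webapp` is a placeholder for whatever `kubectl get pods` reports):

```shell
# Show the service environment variables Kubernetes injected
# into the webapp pod ("webapp" is a placeholder pod name).
kubectl exec webapp -- env | grep REDIS

# Compare against the Redis container's actual interface address,
# which is the one I *can* ping.
kubectl exec redis -- ifconfig eth0
```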

I'm fairly certain this is a configuration problem, but here are some flags I'm setting for each service:

kube-proxy arguments

  • --cluster-cidr 172.16.0.0/16

kube-apiserver arguments

  • --service-cluster-ip-range=172.16.0.0/16

kube-controller-manager arguments

  • --cluster-cidr=172.16.0.0/16
  • --service-cluster-ip-range=172.16.0.0/16

I'm not really sure how the above flags work in conjunction with Flannel. I tried a lot of things, but I couldn't get anything to work. An explanation of how these pieces fit together would be a great help. Thanks.
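For anyone else puzzling over the same flags, here is a summary of what each one controls (the values are the ones from the question; the comments are the explanation):

```shell
# --service-cluster-ip-range (kube-apiserver, kube-controller-manager):
#   the pool of *virtual* IPs the apiserver assigns to Services.
#   These addresses are never bound to any interface; kube-proxy
#   intercepts traffic sent to them and rewrites it to a backing pod.
--service-cluster-ip-range=172.16.0.0/16

# --cluster-cidr (kube-proxy, kube-controller-manager):
#   the *pod* network range, i.e. the range Flannel hands addresses
#   out of (one per-node subnet each). It must not overlap with the
#   service range above, which is exactly the mistake in this setup.
--cluster-cidr=172.16.0.0/16
```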


1 Answer

Answered by agro

So, after playing around and reading more issues, I figured out that I don't have a problem here at all. From https://github.com/kubernetes/kubernetes/issues/7996, I learned that pinging a service IP doesn't do anything: the service address is virtual, and kube-proxy only forwards TCP/UDP traffic sent to it. I hadn't actually tried to connect to the service until now, but when I did, it worked.
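To see the difference, compare a ping against an actual connection from the webapp pod (assuming `redis-cli` is available in the image; `nc` would do just as well):

```shell
# Pinging the service IP fails: the address is virtual, and kube-proxy
# only forwards TCP/UDP, so nothing answers ICMP there.
ping -c 1 "$REDIS_SERVICE_HOST"

# Connecting on the service port works: kube-proxy rewrites the TCP
# connection to a backing Redis pod. A healthy service replies PONG.
redis-cli -h "$REDIS_SERVICE_HOST" -p "$REDIS_SERVICE_PORT" ping
```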

Above, I used a CIDR of 172.16.0.0/16 for the service cluster IP range, which is the same as the Flannel subnet. That's wrong: the service range should be a separate block that doesn't overlap Flannel's subnet. I'll verify all of this and make sure it works across multiple nodes.
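A corrected split would look something like this (10.32.0.0/16 is just an example service range; any block that doesn't overlap the pod network works):

```shell
# Pod network: stays with Flannel's range.
kube-proxy              --cluster-cidr=172.16.0.0/16 ...
kube-controller-manager --cluster-cidr=172.16.0.0/16 \
                        --service-cluster-ip-range=10.32.0.0/16 ...

# Service IPs: moved to a block that does not overlap 172.16.0.0/16.
kube-apiserver          --service-cluster-ip-range=10.32.0.0/16 ...
```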