Kubernetes' readinessProbe prevents inter-pod communication during startup


From the Kubernetes documentation:

A pod with containers reporting that they are not ready does not receive traffic through Kubernetes Services.

Does this mean that a failing readinessProbe is supposed to prevent incoming traffic from pods within the same Deployment? If I set up a readinessProbe that checks an endpoint which only becomes available after the pod has joined the application cluster, then the endpoint never comes up. I checked manually with netcat, and the port of the not-ready pod (which is used to establish cluster membership) is indeed not accessible from the other, ready pods of the application cluster.

Is this expected? Any possible workaround for this?


There are 3 answers

Ray John Navarro:

The behavior that you are seeing is expected, as the readiness probe determines whether the pod is ready to receive traffic. In addition, readiness gating applies to all traffic routed through a Service and is not limited to inter-pod communication within the same Deployment. As a workaround, you can create a separate Service for the cluster-membership traffic, so that readiness gating does not disrupt that traffic flow between your pods.
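Such a separate Service might look like the following sketch. The names (my-cluster-service, my-app) and the port 7600 are placeholders for illustration, not taken from the question; a headless Service with publishNotReadyAddresses ensures the cluster-membership port is reachable via DNS even before the pods pass their readiness check:

apiVersion: v1
kind: Service
metadata:
  name: my-cluster-service  # placeholder name
spec:
  clusterIP: None                 # headless: DNS returns pod IPs directly
  publishNotReadyAddresses: true  # include pods that are not yet ready
  selector:
    app: my-app
  ports:
  - name: cluster
    protocol: TCP
    port: 7600                    # placeholder cluster-membership port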

VonC:

As you found in your own answer, by setting the Service to headless and enabling publishNotReadyAddresses, you now have:

+----------------------------------------------------+
| Kubernetes Cluster                                 |
|                                                    |
|   +----------------+   +----------------+          |
|   | Pod 1          |   | Pod 2          |          |
|   | (Ready)        |   | (Not Ready)    |          |
|   |                |   |                |          |
|   | +------------+ |   | +------------+ |          |
|   | | Container  | |   | | Container  | |          |
|   | |            | |   | |            | |          |
|   | +------------+ |   | +------------+ |          |
|   +----------------+   +----------------+          |
|                                                    |
|   Service: Headless                                |
|   publishNotReadyAddresses: true                   |
+----------------------------------------------------+

Your Kubernetes Deployment would be:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: my-image
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /health
            port: 80
          initialDelaySeconds: 10 # Adjust based on your app's needs

Service:

apiVersion: v1
kind: Service
metadata:
  name: my-headless-service
spec:
  clusterIP: None # This makes the Service headless
  publishNotReadyAddresses: true
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80

The pods will be directly accessible through their IPs, even before they are ready. This configuration is particularly useful in scenarios like yours, where pods need to communicate with each other during their startup process, before they are marked as ready.

kupsef:

Figured it out.

It is actually accessible; I probably did something wrong and came to the wrong conclusion. Ports bound directly to a pod's IP are accessible from other pods even if the target pod is not ready.

Changing the Service type to headless made the DNS entries return the pod IPs directly, rather than the Service IP. Setting publishNotReadyAddresses: true was necessary so that those DNS entries exist before the pods reach the ready state.
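As a side note, for clustered applications like this, a StatefulSet governed by the headless Service is a common pattern: each pod then gets a stable, predictable DNS name (for example my-app-0.my-headless-service.default.svc.cluster.local), which is convenient for membership discovery. A minimal sketch, reusing the placeholder names from the manifests above:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-headless-service  # the governing headless Service
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: my-image
        ports:
        - containerPort: 80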