EKS LoadBalancer service not returning a response from outside the EKS cluster


I have an EKS cluster in a VPC that contains a couple of pods and services. One pod is exposed through a service of type LoadBalancer, and the load balancer is internal (running inside the VPC).

I ran into a weird issue after deploying the pod and service:

After the deployment completed, I ran "kubectl get svc" and copied the external IP, which looked something like this:

internal-XXXXXXXXXXXXXXXXXXXXX.<region>.elb.amazonaws.com

I tested the connection from my laptop (which is connected to the VPC) by running:

telnet internal-XXXXXXXXXXXXXXXXXXXXX.<region>.elb.amazonaws.com 8081

and got the following response:

Trying 10.0.0.1 (some internal IP)...
Connected to internal-XXXXXXXXXXXXXXXXXXXXX.<region>.elb.amazonaws.com

So the result suggests I can at least reach the load balancer in front of the service, but when I ran a wget command I got the following result:

--2020-10-05 13:55:14--  http://internal-XXXXXXXXXXXXXXXXXXXXX.<region>.elb.amazonaws.com:8081/
Resolving internal-XXXXXXXXXXXXXXXXXXXXX.<region>.elb.amazonaws.com (internal-XXXXXXXXXXXXXXXXXXXXX.<region>.elb.amazonaws.com)... 10.0.0.1, 10.0.0.2
Connecting to internal-XXXXXXXXXXXXXXXXXXXXX.<region>.elb.amazonaws.com (internal-XXXXXXXXXXXXXXXXXXXXX.<region>.elb.amazonaws.com)|10.0.0.1|:8081... connected.
HTTP request sent, awaiting response... Read error (Operation timed out) in headers.
Retrying.

But when I run the same wget command from another pod running in EKS, I get a valid response (the index.html file is downloaded).

So it seems that the pod is reachable through the service only from other pods inside EKS, but not from outside the cluster (even though the TCP connection to the load balancer succeeds).

Has anyone experienced the same issue and can assist? Here are the describe outputs for my service and pod:

Service:

Name:                     service
Namespace:                default
Labels:                   app.kubernetes.io/managed-by=Helm
Annotations:              meta.helm.sh/release-name: help_repo
                          meta.helm.sh/release-namespace: default
                          service.beta.kubernetes.io/aws-load-balancer-internal: true
Selector:                 app=test-app
Type:                     LoadBalancer
IP:                       172.X.X.X
LoadBalancer Ingress:     internal-XXXXXXXXXXXXXXXXXXXXX.<region>.elb.amazonaws.com
Port:                     rpc  6123/TCP
TargetPort:               6123/TCP
NodePort:                 rpc  32648/TCP
Endpoints:                **<same-pod-ip>**:6123
Port:                     blob  6124/TCP
TargetPort:               6124/TCP
NodePort:                 blob  31041/TCP
Endpoints:                **<same-pod-ip>**:6124
Port:                     ui  8081/TCP
TargetPort:               8081/TCP
NodePort:                 ui  30608/TCP
Endpoints:                **<same-pod-ip>**:8081
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
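
For reference, the describe output above would correspond roughly to a Service manifest like the one below. The names, selector, annotation, and ports are taken from the output; everything else is an assumed reconstruction, not the asker's actual manifest:

```yaml
# Hypothetical reconstruction of the Service from the describe output above.
apiVersion: v1
kind: Service
metadata:
  name: service
  namespace: default
  annotations:
    # annotation values must be strings, so "true" needs quoting in YAML
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: test-app
  ports:
    - name: rpc
      port: 6123
      targetPort: 6123
    - name: blob
      port: 6124
      targetPort: 6124
    - name: ui
      port: 8081
      targetPort: 8081
```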

Pod:

Name:         test-app-ff8c566c7-rfkrh
Namespace:    default
Priority:     0
Node:         <node ip>
Start Time:   Mon, 05 Oct 2020 13:42:19 +0300
Labels:       app=test-app
              pod-template-hash=ff8c566c7
Annotations:  kubernetes.io/psp: eks.privileged
Status:       Running
IP:           **<same-pod ip>**
IPs:
  IP:           **<same-pod ip>**
Controlled By:  ReplicaSet/test-app-ff8c566c7
Containers:
  test-app:
    Container ID:  docker://XXXXXXXXX
    Image:         ECR_URL

    Ports:         6123/TCP, 6124/TCP, 8081/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Args: <run app command>
    State:          Running
      Started:      Mon, 05 Oct 2020 13:42:33 +0300
    Ready:          True
    Restart Count:  0
    Liveness:       tcp-socket :6123 delay=30s timeout=1s period=60s #success=1 #failure=3
    Environment:    <none>
    

Thanks!

1 Answer

Javier Aranda:

You can use an Ingress, which is by definition an entry point into your cluster. On EKS, you can use an Ingress controller with the "alb" ingress class ("alb" standing for Application Load Balancer). An Ingress you could use looks like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: <your-ingress-name>
  annotations:
    kubernetes.io/ingress.class: alb
    # target-type ip is required when the backend service is ClusterIP
    alb.ingress.kubernetes.io/target-type: ip
    # required to place the ALB on a public subnet
    alb.ingress.kubernetes.io/scheme: internet-facing
    # use a TLS certificate registered for your domain; the ALB terminates TLS
    alb.ingress.kubernetes.io/certificate-arn: <acm-certificate-arn>
    # listen on both ports
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    # redirect port 80 to port 443
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
spec:
  rules:
  - host: <your.host.com>
    http:
      paths:
      - backend:
          serviceName: <your-service-name> # this should be a ClusterIP service
          servicePort: <your-service-port>
        path: /

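One note on the actions.ssl-redirect annotation: with the v1 ALB ingress controller, an action defined via an alb.ingress.kubernetes.io/actions.<name> annotation only takes effect when the rules reference it by name with the special servicePort "use-annotation", typically as the first path. A sketch of how the rules section would look with the redirect wired in (host, service name, and port are placeholders as above):

```yaml
spec:
  rules:
  - host: <your.host.com>
    http:
      paths:
      # the action name after "actions." in the annotation, referenced
      # with the special port "use-annotation", placed first so the
      # HTTP -> HTTPS redirect is evaluated before the real backend
      - path: /*
        backend:
          serviceName: ssl-redirect
          servicePort: use-annotation
      - path: /
        backend:
          serviceName: <your-service-name>
          servicePort: <your-service-port>
```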
Important: this will provision an Application Load Balancer in your AWS account.

After that, you can point your hostname's traffic at the Application Load Balancer. If you are using Route53, you can follow this tutorial.
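As a sketch of the Route53 side (this is not from the answer itself): the usual approach is an alias A record pointing at the ALB's DNS name. A change batch for "aws route53 change-resource-record-sets" would look roughly like this, where the hostname, ALB hosted zone ID, and ALB DNS name are all placeholders:

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "your.host.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "<alb-hosted-zone-id>",
          "DNSName": "<alb-dns-name>",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

You would apply it with something like `aws route53 change-resource-record-sets --hosted-zone-id <your-zone-id> --change-batch file://record.json`; the ALB's own hosted zone ID (distinct from your domain's zone ID) is shown in the EC2 console or `aws elbv2 describe-load-balancers`.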