Rancher: L4 Balancer stuck on Pending despite working L7 Ingress


I'm running Rancher v2.4.5 with a two-node cluster, and I've tried to install WordPress using the Bitnami Helm chart.

It all went well and I'm able to access the site via the Ingress, except that the L4 Balancer (LoadBalancer service) created by the chart is stuck in Pending status for some reason.


> kubectl get svc -n wordpress -o wide
NAME                                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE     SELECTOR
ingress-d5bf098ee05c3bbaa0a93a2ceedd8d1a   ClusterIP      10.43.51.5      <none>        80/TCP                       15m     workloadID_ingress-d5bf098ee05c3bbaa0a93a2ceedd8d1a=true
wordpress                                  LoadBalancer   10.43.137.240   <pending>     80:31672/TCP,443:31400/TCP   5d22h   app.kubernetes.io/instance=wordpress,app.kubernetes.io/name=wordpress
wordpress-mariadb                          ClusterIP      10.43.7.73      <none>        3306/TCP                     5d22h   app=mariadb,component=master,release=wordpress

No LoadBalancer ingress IP is assigned to the wordpress service:

> kubectl describe services wordpress -n wordpress
Name:                     wordpress
Namespace:                wordpress
Labels:                   app.kubernetes.io/instance=wordpress
                          app.kubernetes.io/managed-by=Tiller
                          app.kubernetes.io/name=wordpress
                          helm.sh/chart=wordpress-9.5.1
                          io.cattle.field/appId=wordpress
Annotations:              <none>
Selector:                 app.kubernetes.io/instance=wordpress,app.kubernetes.io/name=wordpress
Type:                     LoadBalancer
IP:                       10.43.137.240
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  31672/TCP
Endpoints:                10.42.1.16:8080
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  31400/TCP
Endpoints:                10.42.1.16:8443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

The Ingress created in the project, by contrast, does have load balancer addresses in its status:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/creatorId: user-6qmpk
    field.cattle.io/ingressState: '{"d29yZHByZXNzLWluZ3Jlc3Mvd29yZHByZXNzL3hpcC5pby8vLzgw":""}'
    field.cattle.io/publicEndpoints: '[{"addresses":["10.105.1.77"],"port":80,"protocol":"HTTP","serviceName":"wordpress:wordpress","ingressName":"wordpress:my","hostname":"my.wordpress.10.105.1.77.xip.io","path":"/","allNodes":true}]'
  creationTimestamp: "2020-09-01T19:32:27Z"
  generation: 3
  labels:
    cattle.io/creator: norman
  managedFields:
  - apiVersion: networking.k8s.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:loadBalancer:
          f:ingress: {}
    manager: nginx-ingress-controller
    operation: Update
    time: "2020-09-01T19:32:27Z"
  - apiVersion: extensions/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:field.cattle.io/creatorId: {}
          f:field.cattle.io/ingressState: {}
          f:field.cattle.io/publicEndpoints: {}
        f:labels:
          .: {}
          f:cattle.io/creator: {}
      f:spec:
        f:rules: {}
    manager: Go-http-client
    operation: Update
    time: "2020-09-01T19:49:08Z"
  name: my
  namespace: wordpress
  resourceVersion: "6073928"
  selfLink: /apis/extensions/v1beta1/namespaces/wordpress/ingresses/my
  uid: 8a88e16e-cbda-4f1f-bb1c-9d63d0af1b93
spec:
  rules:
  - host: my.wordpress.10.105.1.77.xip.io
    http:
      paths:
      - backend:
          serviceName: wordpress
          servicePort: 80
        path: /
        pathType: ImplementationSpecific
status:
  loadBalancer:
    ingress:
    - ip: 10.105.1.77
    - ip: 10.105.1.78

I opened an issue on the Bitnami GitHub, but based on the responses it appears the issue is on the Rancher/RKE side.

Any thought on that?

PS.

Should I have both an L7 Ingress and an L4 Balancer when running Rancher on bare metal, or can the L7 Ingress be configured to act as the load balancer too, so that the L4 Balancer can be removed from the project?
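If the L7 Ingress alone is enough for you, the chart's LoadBalancer service can be switched to a plain ClusterIP so nothing sits in Pending. A minimal sketch, assuming the Bitnami WordPress chart's standard `service.type` parameter (verify the exact parameter name for your chart version with `helm show values bitnami/wordpress`):

```shell
# Switch the chart-managed service from LoadBalancer to ClusterIP;
# the existing Ingress keeps routing HTTP traffic to it.
helm upgrade wordpress bitnami/wordpress \
  --namespace wordpress \
  --set service.type=ClusterIP
```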

1 Answer

Answered by Coffeeholic (BEST ANSWER)

I got this resolved by clearing the firewall rules, restarting Docker (so it picks up the new firewall state), and then installing MetalLB (or whatever you use as the load balancer). If you do not have an L2 load balancer yet, the firewall step can be skipped; in my case the issue was caused by the load balancer's firewall rules not being registered.
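Roughly, the recovery steps above look like this (a sketch only; the exact commands depend on your distro and firewall manager, and flushing iptables is disruptive on a live node):

```shell
# WARNING: flushing iptables removes ALL rules, including the ones
# Docker and kube-proxy maintain; restarting Docker lets them be
# recreated from scratch. Run on each affected node.
sudo iptables -F
sudo iptables -t nat -F
sudo systemctl restart docker
# kube-proxy and the CNI plugin re-register their rules as their
# pods come back up.
```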

The LoadBalancer service needs to get an IP from something external to Kubernetes: MetalLB, your cloud provider, Cloudflare, or similar. Kubernetes itself will not provide one.

On bare metal you need an L2 load balancer that hands out IPs. If you don't have one, you can try https://metallb.universe.tf
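For reference, a layer-2 MetalLB setup as documented for the v0.9.x releases (current around 2020) looks roughly like this; the address range is an example, so pick unused IPs on your nodes' subnet, and check the current MetalLB install docs before applying:

```shell
# Install MetalLB v0.9.x per its manifest-based instructions.
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
# v0.9 requires a memberlist secret for the speakers.
kubectl create secret generic -n metallb-system memberlist \
  --from-literal=secretkey="$(openssl rand -base64 128)"

# Layer-2 address pool; 10.105.1.240-250 is a hypothetical free range.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.105.1.240-10.105.1.250
EOF
```

Once MetalLB is running, the pending LoadBalancer service should be assigned an EXTERNAL-IP from the pool automatically.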

You could also just leave it as is: you will never get an external IP, but nginx/traefik will still route the traffic since it finds no other route.