minikube with ingress not working on ubuntu with docker driver

I am attempting to access a service hosted on minikube (running on Ubuntu with the Docker driver) from outside the cluster using an Ingress, but it is not working.

First, the details of the OS and minikube

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 22.04.3 LTS
Release:    22.04
Codename:   jammy

Minikube is running

$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

Minikube is using the docker driver on Linux

$ minikube start
  minikube v1.32.0 on Ubuntu 22.04
✨  Using the docker driver based on existing profile
  Starting control plane node minikube in cluster minikube
  Pulling base image ...
  Updating the running docker "minikube" container ...
  Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
╭──────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                      │
│    Registry addon with docker driver uses port 32780 please use that instead of default port 5000    │
│                                                                                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────╯
  For more information see: https://minikube.sigs.k8s.io/docs/drivers/docker
    ▪ Using image docker.io/registry:2.8.3
    ▪ Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
  Verifying registry addon...
  Enabled addons: default-storageclass, storage-provisioner, registry
  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

I have a running pod, service, deployment and replicaset

$ kubectl get all
NAME                         READY   STATUS    RESTARTS        AGE
pod/mysvc-54cdf74f4f-kmn44   1/1     Running   17 (6m5s ago)   137m

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP    3h54m
service/mysvc          ClusterIP   10.107.181.101   <none>        5000/TCP   137m

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mysvc   1/1     1            1           137m

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/mysvc-54cdf74f4f   1         1         1       137m
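
For completeness, one quick sanity check (not part of the original output) is to confirm that the service actually has endpoints backing the pod and to see which port it targets; assuming the default namespace:

$ kubectl get endpoints mysvc
$ kubectl describe service mysvc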

Next, we have an ingress definition as follows

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: local-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mysvc.k2.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mysvc 
            port:
              number: 80
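
Note that the backend port here is 80, while the service listed above exposes 5000, and no ingressClassName is set. Purely as a hedged illustration (assuming the ingress-nginx addon registers an IngressClass named nginx and that mysvc really listens on 5000), a client-side dry run of a variant that pins both would look like this; --dry-run=client only validates the manifest and creates nothing:

$ kubectl apply --dry-run=client -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: local-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx        # assumption: the addon's IngressClass is named "nginx"
  rules:
  - host: mysvc.k2.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mysvc
            port:
              number: 5000       # matches the service port shown by `kubectl get all` above
EOF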

The minikube IP is

$ minikube ip
192.168.49.2

The /etc/hosts entry is made as follows

192.168.49.2 mysvc.k2.local

Therefore, the expectation is that http://mysvc.k2.local resolves to the minikube IP and that the Ingress routes the request to the service.
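
One way to test the name-to-IP mapping independently of /etc/hosts is curl's --resolve option, which pins the hostname to a chosen IP for a single request (shown here purely as a diagnostic, assuming the ingress will answer on port 80 at the minikube IP):

$ curl --resolve mysvc.k2.local:80:192.168.49.2 http://mysvc.k2.local/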

On starting the tunnel

$ minikube tunnel
✅  Tunnel successfully started

  NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...

❗  The service/ingress local-ingress requires privileged ports to be exposed: [80 443]
  sudo permission will be asked for it.
  Starting tunnel for service local-ingress.
[sudo] password for osuser: 

and providing the password for sudo access, the tunnel stays running.
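
When minikube tunnel exposes an ingress on privileged ports with the Docker driver, it generally listens on the host's loopback address rather than on the minikube IP. A quick check of whether anything is bound to port 80 and whether the ingress answers there (an assumption on my part, not something from the original post) is:

$ sudo ss -tlnp | grep ':80 '
$ curl -H 'Host: mysvc.k2.local' http://127.0.0.1/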

The ingress details are as follows

$ kubectl get ingress local-ingress
NAME            CLASS    HOSTS          ADDRESS   PORTS   AGE
local-ingress   <none>   mysvc.k2.local             80      35m
$ kubectl describe ingress local-ingress
Name:             local-ingress
Labels:           <none>
Namespace:        default
Address:          
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host          Path  Backends
  ----          ----  --------
  mysvc.k2.local  
                /   mysvc:80 ()
Annotations:    nginx.ingress.kubernetes.io/rewrite-target: /
Events:         <none>
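
The empty ADDRESS column and Ingress Class: <none> usually indicate that no ingress controller has admitted this Ingress. A quick way to check whether a controller and an IngressClass exist at all (which EDIT 1 below ends up confirming) is:

$ kubectl get ingressclass
$ kubectl get pods -n ingress-nginx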

However, the issue is that the underlying minikube IP itself is not reachable from the host.

$ curl mysvc.k2.local
^C 
$ ping mysvc.k2.local
PING mysvc.k2.local (192.168.49.2) 56(84) bytes of data.
^C
--- mysvc.k2.local ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1032ms
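
Since 192.168.49.2 is the address of the minikube Docker container on its bridge network, a few hypothetical diagnostics (not from the original post) can help narrow down whether the container network itself is reachable from the host:

$ docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' minikube
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' minikube
$ minikube ssh -- echo reachable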

EDIT 1: The ingress addons were disabled.

$ minikube addons list | grep ingress
| ingress                     | minikube | disabled     | Kubernetes                     |
| ingress-dns                 | minikube | disabled     | minikube                       |

Enabling them leads to an error:

$ minikube addons enable ingress
  ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    ▪ Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
  Verifying ingress addon...

❌  Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]

╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│      If the above advice does not help, please let us know:                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    Please also attach the following file to the GitHub issue:                             │
│    - /tmp/minikube_addons_ee0e2a5ffa0c23cbbbf48fa0fc668431256e65f3_0.log                  │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
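
The MK_ADDON_ENABLE failure only says that the controller pods never became ready within the deadline; the actual reason (image pull problems, a crash loop, a failed admission job) would show up in the ingress-nginx namespace, for example:

$ kubectl get pods -n ingress-nginx
$ kubectl describe pods -n ingress-nginx
$ kubectl get events -n ingress-nginx --sort-by=.lastTimestamp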

EDIT 2:

I tried to enable the ingress-dns addon first.

$ minikube addons enable ingress-dns
  ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    ▪ Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
  The 'ingress-dns' addon is enabled

It worked.

Now, I tried to enable the ingress addon as well

$ minikube addons enable ingress
  ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
    ▪ Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
  Verifying ingress addon...
  The 'ingress' addon is enabled

Surprisingly, that worked too. So the ingress addon ended up enabled without my doing anything other than enabling the ingress-dns addon before the ingress addon.
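
To double-check, the same listing from EDIT 1 can be rerun and should now show both addons as enabled:

$ minikube addons list | grep ingress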

Apart from starting the minikube tunnel (which has been done already), is there anything else that needs to be done?

There is 1 answer.

Answer by alok:

After EDIT 2, applying the local-ingress.yaml manifest gave a different error

$ kubectl apply -f local-ingress.yaml 
Error from server (InternalError): error when creating "local-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": context deadline exceeded

This error was resolved via

$ kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
validatingwebhookconfiguration.admissionregistration.k8s.io "ingress-nginx-admission" deleted
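
An alternative to deleting the webhook configuration outright (a gentler variant along the lines of the minikube ingress docs, not what was done here) is to wait for the controller pod to become ready before applying the Ingress, so its admission endpoint can actually be reached:

$ kubectl wait --namespace ingress-nginx \
    --for=condition=ready pod \
    --selector=app.kubernetes.io/component=controller \
    --timeout=120s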

Applying the local-ingress.yaml configuration again got the ingress working.
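
To confirm the end state, it is worth checking that the Ingress now reports an address and that the host responds:

$ kubectl get ingress local-ingress
$ curl http://mysvc.k2.local/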