ExternalName in managed Kubernetes: host does not resolve


Hi everyone. I have a managed Kubernetes v1.28 cluster running on an Ubuntu PC, and some web APIs hosted on another machine on the same intranet. Typically I add an entry to /etc/hosts, and clients can then reach the IIS-hosted APIs via a custom domain.
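
For reference, this is roughly the /etc/hosts entry I use on the other machines (the IP is the same one I use later in the CoreDNS hosts block):

# /etc/hosts on a client machine (sketch)
10.12.0.1    nodea.dev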

For this scenario, I created the ExternalName Service below:

apiVersion: v1
kind: Service
metadata:
  name: nodea-dev-external-api
  namespace: demo
spec:
  type: ExternalName
  externalName: nodea.dev
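
My understanding is that this should make the service name resolve as a CNAME to nodea.dev from inside the cluster, which I would verify with something like:

kubectl exec -it <pod> -n demo -- nslookup nodea-dev-external-api.demo.svc.cluster.local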

I also created a ConfigMap with a custom Corefile:

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-dns-config
  namespace: demo
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream /etc/resolv.conf
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
        hosts {
            nodea.dev 10.12.0.1
            fallthrough
        }
    }

Still, when I run kubectl exec -it <pod> -- nslookup nodea.dev, the pod is not able to resolve nodea.dev. Since this is a managed Kubernetes cluster, do I need to modify the CoreDNS config directly, and if so, how can I automate that later?
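
From what I can tell, CoreDNS only loads the Corefile from the ConfigMap it actually mounts (usually the one named coredns in kube-system), not from a ConfigMap in my own namespace. If I edited that directly, I assume it would look roughly like this (assuming the standard coredns deployment name, and restarting so the change is picked up):

kubectl -n kube-system edit configmap coredns              # add the hosts block to the mounted Corefile
kubectl -n kube-system rollout restart deployment coredns  # restart CoreDNS pods to reload the config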

Is this the correct approach? Note that the external API is served over HTTPS.
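
I also looked at the alternative of a selector-less Service plus a manual Endpoints object pointing straight at the external IP, roughly as sketched below (port 443 is my assumption, since the API is HTTPS); with that approach I guess the pods would still need to send the nodea.dev hostname so the TLS certificate matches:

# Sketch of a selector-less Service backed by a manual Endpoints object
apiVersion: v1
kind: Service
metadata:
  name: nodea-dev-external-api
  namespace: demo
spec:
  ports:
    - port: 443          # assuming the API listens on standard HTTPS
      targetPort: 443
---
apiVersion: v1
kind: Endpoints
metadata:
  name: nodea-dev-external-api   # must match the Service name
  namespace: demo
subsets:
  - addresses:
      - ip: 10.12.0.1            # the external API host
    ports:
      - port: 443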

Since the external IP and domain name are dynamic and depend on the environment, I eventually want to turn this whole solution into a Helm chart and read the external IP and custom domain from the values.yaml file, for example as sketched below.
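
Roughly what I have in mind for the chart (the key names and file layout are just placeholders of mine):

# values.yaml
externalApi:
  domain: nodea.dev
  ip: 10.12.0.1

# templates/external-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nodea-dev-external-api
  namespace: {{ .Release.Namespace }}
spec:
  type: ExternalName
  externalName: {{ .Values.externalApi.domain }}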

Thank you for your time and consideration. Any comments are helpful.
