Background: We host our application in a GKE cluster. The application has an Ingress resource containing the rules that route traffic to our application Services, and we are using ingress-nginx as the ingress controller for this cluster.
We have now created a GCP internal TCP load balancer (ILB) whose backend points to the NodePort on which the ingress-controller Service is listening. (Note: the nginx ingress controller Service is of type NodePort.)
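For reference, the ingress-controller Service looks roughly like the sketch below (names and port numbers are illustrative, not our exact manifest):

```yaml
# Illustrative sketch only - names and ports are examples, not the exact manifest.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 31380   # the NodePort the ILB backend points to
```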
When we try to access the application at http://ILB-IP:80 (the HTTP port), we get a connection refused error, but we get the expected response when we hit the NodePort directly at http://ILB-IP:31380.
When we instead set the ingress-controller Service to type LoadBalancer, GCP creates an ILB in the background. In that case the application is accessible on the HTTP port and all requests are served.
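In case it helps, this is roughly the kind of manifest we mean for that case; the internal-LB annotation key is an assumption on our part and depends on the GKE version:

```yaml
# Illustrative sketch only. The internal-LB annotation key varies by GKE version:
# older clusters use cloud.google.com/load-balancer-type: "Internal",
# newer ones use networking.gke.io/load-balancer-type: "Internal".
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
```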
Can anyone help us figure out why, when we explicitly create the ILB, the application is not accessible when we hit the ILB frontend on the HTTP port (ILB-IP:80), while it is accessible when we hit the NodePort (ILB-IP:31380)?
Thanks in advance!
After a discussion with Google support, we learned that a Service of type LoadBalancer creates iptables entries on each node, which redirect traffic from port 80 to the defined NodePort.
So, if our use case requires creating the ILB explicitly and exposing the application on port 80 in front of the NodePort, then we have to manually edit the iptables rules on each Kubernetes node so that traffic is redirected from port 80 to the NodePort.
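As a rough illustration of the kind of rule we mean (not the exact rule Google support gave us; the NodePort 31380 is from our setup above), a redirect on each node would look something like:

```bash
# Rough illustration only: redirect incoming TCP traffic on port 80 to the
# NodePort (31380 in our case). This would have to be applied on every node
# and re-applied when nodes are recreated, which is why the LoadBalancer
# Service type (where kube-proxy programs equivalent rules) is the managed option.
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 31380
```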