Are there any extra security settings enabled by default that block public access to a Kubernetes cluster on IBM Cloud?
I exposed the application using a NodePort service, but it is not accessible on port 80, and I have tried other ports as well.
However, it does work from inside a pod, for example when I curl the public LoadBalancer address (roughly as sketched below), and I can also ping the LoadBalancer's public IP. The same behavior occurs with the Ingress.
The Ingress subdomain is also enabled.
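A rough sketch of the in-cluster check I ran (the pod name and the LoadBalancer IP below are placeholders for my actual values):

# curl the LoadBalancer's public address from inside the pod: this works
kubectl exec -it hello-world -- curl -v http://<LOADBALANCER-PUBLIC-IP>:80

# the same request from my own machine outside the cluster times out
curl -v http://<LOADBALANCER-PUBLIC-IP>:80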
Here are the Pod and LoadBalancer Service definitions from my Kubernetes cluster:
---
apiVersion: v1
kind: Pod
metadata:
  name: hello-world   # name assumed; required for the manifest to apply
  labels:
    app: hello-world
spec:
  containers:
  - image: us.icr.io/my-space/hello-world
    imagePullPolicy: IfNotPresent
    name: hello-world
    ports:
    - containerPort: 8080
      name: http
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-world
  name: hello-world-service
spec:
  ports:
  - nodePort: 31190
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: hello-world
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
Public load balancers are open to the internet by design, so there is nothing blocking your LB by default. However, if you have a firewall between the internet and the cluster, you might be cutting off traffic as it tries to enter the cluster. If you run 'kubectl get svc hello-world-service', you should see the external IP for the service, and that IP should be reachable on port 80 per your spec above. Alternatively, you can take one of your worker nodes' public IPs plus the NodePort (31190) and try accessing the application that way. If both of these fail, something is blocking the traffic somewhere.
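A quick sketch of those two checks, using the service name and NodePort from the spec above (the IPs are placeholders):

# 1. Get the external IP assigned to the LoadBalancer service
kubectl get svc hello-world-service
# then hit the service port from outside the cluster
curl -v http://<EXTERNAL-IP>:80

# 2. Or use a worker node's public IP together with the NodePort
kubectl get nodes -o wide
curl -v http://<WORKER-NODE-PUBLIC-IP>:31190

If both of those fail from outside the cluster, something in between (for example a firewall) is blocking the traffic, as noted above.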
If you still have trouble, jump into Slack by registering at https://bxcs-slack-invite.mybluemix.net/ and then give me a ping at @john.
We can help you out at length there and then come back and address this post once we’ve nailed down your issue.