I have a situation like this:
- a cluster of web machines
- a cluster of db machines and other services
The question is how to put the two clusters in communication so that I can use some hostnames in /etc/hosts on the web machines.
To protect the data, is it safe to create an Ingress service that exposes the db externally? I tried with a NodePort service (so using internal IP addresses), but I'm not able to get the db and web tiers talking to each other between the different clusters.
At the moment my temporary solution is:
a) define a public static IP with the command:

```shell
gcloud compute addresses create my-public-static-ip --global
```
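Once reserved, the assigned address can be read back with the gcloud CLI (`my-public-static-ip` is the name used above; this requires an authenticated gcloud session):

```shell
# Print only the reserved IP address of the global static address
gcloud compute addresses describe my-public-static-ip \
  --global --format='value(address)'
```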
b) use an Ingress configuration for my db service where I set the static IP with the annotation:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-public-static-ip
```
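For completeness, the snippet above omits the `spec`; a minimal sketch of the rest, assuming the db is exposed in-cluster by a Service named `my-db-service` on port 3306 (both the Service name and the port are assumptions):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-public-static-ip
spec:
  # Default backend: all traffic hitting the static IP is routed
  # to the (assumed) db Service.
  backend:
    serviceName: my-db-service
    servicePort: 3306
```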
c) in my daemonset.yaml I define hostAliases:

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: my-daemonset
spec:
  updateStrategy:
    type: RollingUpdate
  template:
    spec:
      nodeSelector:
        app: frontend-node
      terminationGracePeriodSeconds: 30
      hostAliases:
      - ip: <public_ip_addr>
        hostnames:
        - "my-db-service"
```
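The `hostAliases` entries are written by the kubelet into each container's /etc/hosts. A minimal local simulation of the resulting entry (`203.0.113.10` stands in for the reserved public IP, which is a placeholder here):

```shell
# Simulate the lines kubelet appends to a container's /etc/hosts for the
# hostAliases above; 203.0.113.10 is a placeholder for the real static IP.
HOSTS_FILE=$(mktemp)
{
  printf '# Entries added by HostAliases.\n'
  printf '203.0.113.10\tmy-db-service\n'
} >> "$HOSTS_FILE"

# Pods then resolve the db hostname through /etc/hosts:
grep my-db-service "$HOSTS_FILE"
```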
and it's working. But I'm not convinced that this solution is the best, or even correct, for a live environment.
In my opinion, the best approach to get two different Kubernetes clusters (GKE, Google Kubernetes Engine) to communicate with each other is to use Istio, an open platform to connect, manage, and secure microservices. Take a look at the following guide: https://istio.io/v1.3/docs/examples/multicluster/gke/. It is pretty straightforward, and I would also like to mention that Istio fits well with other managed Kubernetes offerings such as Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service.