When listing the nodes of the GKE admin cluster, I get node names that are just the hostnames taken from the ipBlock, which are not meaningful:
kubectl get nodes
NAME             STATUS   ROLES                  AGE     VERSION
vm-kube-adm001   Ready    control-plane,master   14d     v1.23.5-gke.1504
vm-kube-adm002   Ready    <none>                 14d     v1.23.5-gke.1504
vm-kube-adm003   Ready    <none>                 14d     v1.23.5-gke.1504
vm-kube-adm004   Ready    <none>                 7d17h   v1.23.5-gke.1504
vm-kube-adm005   Ready    <none>                 7d17h   v1.23.5-gke.1504
vm-kube-adm006   Ready    <none>                 7d17h   v1.23.5-gke.1504
vm-kube-adm007   Ready    <none>                 6d23h   v1.23.5-gke.1504
vm-kube-adm008   Ready    <none>                 6d23h   v1.23.5-gke.1504
vm-kube-adm009   Ready    <none>                 6d23h   v1.23.5-gke.1504
How can I show which Kubernetes user cluster is managed by each node?
The nodes have some useful labels attached to them, especially kubernetes.googleapis.com/cluster-name. So you can use:

kubectl get nodes -L kubernetes.googleapis.com/cluster-name,onprem.gke.io/lbnode

You can also use the kubectl -o custom-columns= syntax; just make sure to escape the dots in the label keys.
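For example, a custom-columns query roughly equivalent to the -L form above might look like this (a sketch, assuming the kubernetes.googleapis.com/cluster-name label from the answer; the dots inside the label key are backslash-escaped so JSONPath does not split on them):

```shell
# List each node with the user cluster it belongs to.
# The label key contains dots, so they must be escaped with backslashes;
# the whole expression is single-quoted to protect it from the shell.
kubectl get nodes -o custom-columns='NODE:.metadata.name,CLUSTER:.metadata.labels.kubernetes\.googleapis\.com/cluster-name'
```

This prints a NODE column and a CLUSTER column, one row per node.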