Any suggestions on how to keep track of the kubectl configs (~/.kube/config) that allow you to access Kubernetes clusters? I have clusters running in different environments (local/prod), and I connect to the same namespace the project is deployed in. Whenever I need to connect to a particular cluster, I run the command below to configure it (the commands differ for AWS/GCP/microk8s, etc.), and the configuration gets appended to ~/.kube/config. Is there an easy way to know which cluster you are connected to, or to track which config is being used? It's a disaster waiting to happen unless you do an explicit check.
aws eks --region region update-kubeconfig --name cluster_name
Current methods used:
- cat ~/.kube/config and check which cluster I'm connecting to.
- Move the config to some other directory and move it back once I'm done.
- kubectl get nodes to see where I'm connected.
Using kubectl
Kubectl has built-in support for managing contexts. After you add a context to the ~/.kube/config file, manually or via aws eks update-kubeconfig, you can use the config sub-command to switch between contexts.
To view all saved contexts and highlight the current one:
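kubectl config get-contexts
The current context is marked with an asterisk (*).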
To just view the current context:
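kubectl config current-context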
To switch to another context:
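kubectl config use-context <context-name>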
To delete a context:
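kubectl config delete-context <context-name>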
Specific configuration file
Sometimes it might be the case that not all cluster connections can be in the same kube config file; instead, the user has a separate kube config file per cluster.
To run kubectl with a specific configuration file, one can use the --kubeconfig argument:
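kubectl --kubeconfig=<path-to-kubeconfig-file> get pods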
Shell Aliases
When running from a Linux shell or Windows PowerShell, one can also use aliases.
Linux Bash example:
Use bash alias to define commands as aliases:
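For example, one alias per cluster (the alias names and kubeconfig file names here are placeholders; point them at your own files):
# placeholders: one alias per cluster, each pointing at its own kubeconfig file
alias k8s-prod='kubectl --kubeconfig ~/.kube/config-prod'
alias k8s-local='kubectl --kubeconfig ~/.kube/config-local'
Usage:
k8s-prod get nodes
k8s-local get pods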
The alias definitions can be saved to ~/.profile for permanent usage.
Windows PowerShell example:
In Windows PowerShell, a function can be defined as follows:
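(The function name and config file path are placeholders; adjust them to your own setup.)
# placeholder: point --kubeconfig at your own kubeconfig file
function k8s-prod { kubectl --kubeconfig "$HOME\.kube\config-prod" $args }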
And used as:
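k8s-prod get pods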
More arguments, like -n <namespace>, can also be specified in the function definition before $args. Make sure to properly quote (") arguments with special characters on Windows.