I want to push a jobs.yml manifest to multiple Kubernetes clusters programmatically. Connection details will be provided by customers, and we can store them in encrypted form in a DB or maybe S3.
I'm trying to achieve this with the @kubernetes/client-node npm package. From the official repository and documentation, I can do this by loading my ~/.kube/config file with kc.loadFromFile(). However, my cluster was created with the eksctl create cluster -f cluster.yaml command, which authenticates using the credentials set up by aws configure.
What kind of credentials do I need to ask my users for, and which fields should I collect? I can see my kubeconfig file has some AWS-specific keys/values, but I want my solution to work with any Kubernetes cluster, not be tied to the AWS cloud. So, what credentials do I need to ask for?
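To make this concrete, here is a sketch of what I imagine collecting from each customer: the API server URL, the base64 CA certificate, and a service-account bearer token (my assumption of the minimal cloud-agnostic fields). The buildKubeConfigOptions helper below is hypothetical, and the loadFromOptions usage in the comment is my untested reading of the @kubernetes/client-node API:

```javascript
// Hypothetical helper: assemble cloud-agnostic cluster credentials into
// the options shape that @kubernetes/client-node's loadFromOptions expects.
// Assumption: a static service-account token replaces any exec plugin.
function buildKubeConfigOptions({ name, server, caData, token }) {
  return {
    clusters: [{ name, server, caData, skipTLSVerify: false }],
    users: [{ name: `${name}-user`, token }],
    contexts: [{ name, cluster: name, user: `${name}-user` }],
    currentContext: name,
  };
}

// With @kubernetes/client-node this object could (I believe) be loaded via:
//   const kc = new k8s.KubeConfig();
//   kc.loadFromOptions(buildKubeConfigOptions(creds));

const opts = buildKubeConfigOptions({
  name: 'customer-1',
  server: 'https://example-cluster:6443',
  caData: 'BASE64_CA_CERT',
  token: 'SERVICE_ACCOUNT_TOKEN',
});
console.log(opts.currentContext); // → customer-1
```

This way nothing AWS-specific would be stored, only the three values per cluster.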
I tried to use my ~/.kube/config file with @kubernetes/client-node in a separate environment where aws configure has not been run. I simply copied my ~/.kube/config to another EC2 instance for testing inside a Node.js project, and I'm getting an error.
index.js
const k8s = require('@kubernetes/client-node');
const kc = new k8s.KubeConfig();
kc.loadFromFile('/.kube/config');
const k8sApi = kc.makeApiClient(k8s.CoreV1Api);
k8sApi.listNamespacedPod('default')
  .then((res) => {
    const pods = res.body.items;
    console.log(pods);
  })
  .catch((err) => {
    // surface auth/connection failures instead of an unhandled rejection
    console.error(err);
  });
Error
(node:259164) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'toString' of null
    at ExecAuth.getCredential (/home/ubuntu/test/node_modules/@kubernetes/client-node/dist/exec_auth.js:87:39)
    at ExecAuth.applyAuthentication (/home/ubuntu/test/node_modules/@kubernetes/client-node/dist/exec_auth.js:23:33)
    at KubeConfig.applyAuthorizationHeader (/home/ubuntu/test/node_modules/@kubernetes/client-node/dist/config.js:368:33)
    at KubeConfig.applyOptions (/home/ubuntu/test/node_modules/@kubernetes/client-node/dist/config.js:376:20)
    at KubeConfig.applyToRequest (/home/ubuntu/test/node_modules/@kubernetes/client-node/dist/config.js:99:20)
    at /home/ubuntu/test/node_modules/@kubernetes/client-node/dist/gen/api/coreV1Api.js:9993:95
(Use `node --trace-warnings ...` to show where the warning was created)
(node:259164) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 2)
(node:259164) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
My ~/.kube/config looks like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: A_LONG_STRING_HERE
    server: https://XXXXXXXXXXXXX.gr7.ap-south-1.eks.amazonaws.com
  name: xxxxxxx.ap-south-1.eksctl.io
contexts:
- context:
    cluster: xxxxxx.ap-south-1.eksctl.io
    user: [email protected]
  name: [email protected]
- context:
    cluster: xxxx.ap-south-1.eksctl.io
    user: [email protected]
  name: [email protected]
current-context: [email protected]
kind: Config
preferences: {}
users:
- name: [email protected]
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - token
      - -i
      - xxxxxxx
      command: aws-iam-authenticator
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      - name: AWS_DEFAULT_REGION
        value: ap-south-1
      provideClusterInfo: false
- name: <NAME>@xxxxx.ap-south-1.eksctl.io
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - token
      - -i
      - xxxxxxxx
      command: aws-iam-authenticator
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      - name: AWS_DEFAULT_REGION
        value: ap-south-1
      provideClusterInfo: false
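For contrast, my understanding (an assumption based on the standard kubeconfig format, not on a file I actually have) is that a cloud-independent user entry would replace the exec block with a static service-account token, roughly like:

```yaml
# Hypothetical cloud-agnostic user entry: a static bearer token
# instead of the aws-iam-authenticator exec plugin.
users:
- name: customer-sa
  user:
    token: SERVICE_ACCOUNT_TOKEN_HERE
```

That would avoid needing the aws-iam-authenticator binary or AWS credentials on the machine running my Node.js code.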