I have deployed an EKS cluster. To use its cluster ID, I was using this code:
data "aws_eks_cluster" "cluster" {
name = module.eks.cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
name = module.eks.cluster_id
}
After deployment, I refactored my code into modules, and I am now using an output to configure the providers.
outputs.tf (this file is in the same directory as eks.tf, which uses the eks module):
output "eks_cluster_id" {
value = module.eks.cluster_id
}
providers.tf in the root module:
data "aws_eks_cluster" "cluster" {
name = module.base.eks_cluster_id
}
data "aws_eks_cluster_auth" "cluster" {
name = module.base.eks_cluster_id
}
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.cluster.token
}
Now the problem is that the original deployment was not structured into modules, so the Terraform state does not contain this cluster ID at the new module address. If I run terraform plan after refactoring into modules, it fails because the cluster_id information needed to connect to the Kubernetes cluster is not there.
provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.cluster.token
}
How can I solve this?
I thought that running terraform apply -target=module.base.aws_eks_cluster.this would update the output information. However, when I tried this, it started destroying the cluster that was already created.
What I have found works a bit better is using a different approach to configuring the kubernetes provider: instead of a static token from aws_eks_cluster_auth, let the provider fetch a fresh token through the AWS CLI at plan/apply time.
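For example, a sketch assuming AWS CLI v2 is on the PATH and the data sources shown above (the exact api_version may vary with your provider version):

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

  # Fetch a fresh token via `aws eks get-token` at plan/apply time
  # instead of storing a static token in state.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.name]
  }
}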
The important thing to note here is that you can use any additional options in args that the AWS CLI command provides. As a side note, this works only with AWS CLI v2. Additionally, using it this way will fall back to the default profile; if you are using a profile other than default, you can add --profile <profile name> to the args list. Finally, to be able to use this cluster and perform actions on it, you need to update the kubeconfig file. This is achieved by running the aws eks update-kubeconfig command.
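A minimal invocation, with the region and cluster name as placeholders:

aws eks update-kubeconfig --region <region> --name <cluster name>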
There is an --alias parameter available which, if omitted, defaults to the cluster ARN. Also note that update-kubeconfig sets the newly added or updated entry as the current context, so make sure to check the context prior to applying any Kubernetes manifest files.
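You can check, and if needed switch, the active context with standard kubectl commands, for example:

kubectl config current-context
kubectl config use-context <context name or alias>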