I am using the following Terraform configuration:
provider "helm" {
  kubernetes {
    host                   = aws_eks_cluster.La-Production-EKS.endpoint
    cluster_ca_certificate = base64decode(aws_eks_cluster.La-Production-EKS.certificate_authority[0].data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", aws_eks_cluster.La-Production-EKS.id]
    }
  }
}
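If it matters: the exec plugin relies on the aws CLI being installed and authenticated on whatever machine runs Terraform, so the CI runner needs working AWS credentials and a region. I assume the region could also be pinned on the exec block itself via its env map, something like this (the region value below is a placeholder, not what I actually run):

```hcl
# Variant of the exec block with the region pinned explicitly
# (the AWS_REGION value is a placeholder).
exec {
  api_version = "client.authentication.k8s.io/v1beta1"
  command     = "aws"
  args        = ["eks", "get-token", "--cluster-name", aws_eks_cluster.La-Production-EKS.id]

  env = {
    AWS_REGION = "us-east-1" # placeholder region
  }
}
```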
### ---------------------- EKS LB Controller ----------------------
resource "helm_release" "aws-load-balancer-controller" {
  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  namespace  = "kube-system"
  version    = "1.4.1"

  set {
    name  = "clusterName"
    value = aws_eks_cluster.La-Production-EKS.id
  }

  set {
    name  = "image.tag"
    value = "v2.4.2"
  }

  set {
    name  = "serviceAccount.name"
    value = "aws-load-balancer-controller"
  }

  set {
    name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
    value = aws_iam_role.aws_load_balancer_controller.arn
  }
}
When I run this locally, it proceeds without any errors. However, in our CI/CD pipeline for Terraform, `terraform plan` fails with the following output:
Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials

  with helm_release.aws-load-balancer-controller,
  on EKS-LoadBalancer-Controller.tf line 15, in resource "helm_release" "aws-load-balancer-controller":
  15: resource "helm_release" "aws-load-balancer-controller" {
How can I fix this? I have also tried pointing the provider at a kubeconfig file:
provider "helm" {
  kubernetes {
    config_path = "$PATH_KUBECONFIG"
  }
}
# $PATH_KUBECONFIG is a GitLab CI/CD file-type variable holding my kubeconfig
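One thing I am unsure about: Terraform does not expand shell-style `$VAR` references inside string literals, so I assume the kubeconfig path would need to reach the provider as a Terraform variable instead, roughly like this (the variable name `kubeconfig_path` is my own):

```hcl
# Hypothetical variable; in GitLab CI it could be populated through
# Terraform's conventional TF_VAR_ prefix, e.g.
#   export TF_VAR_kubeconfig_path="$PATH_KUBECONFIG"
variable "kubeconfig_path" {
  type = string
}

provider "helm" {
  kubernetes {
    config_path = var.kubeconfig_path
  }
}
```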
However, it still outputs the same error. Any tips or ideas are appreciated.