Terraform version: "1.3.7"
I need to configure multiple helm providers, each set up with a different K8s cluster context. How can I make this dynamic, so that my code accepts a list of clusters (cluster IDs) and sets up a helm provider for each cluster ID?
I am looking to achieve something like below:
```hcl
variable "k8s_cluster_ids" {
  description = "List of K8S Cluster IDs to fetch context for"
  type        = list(string)
  default     = ["cluster-id-1", "cluster-id-2"]
}

data "ibm_container_cluster_config" "cluster_config" {
  for_each        = toset(var.k8s_cluster_ids)
  cluster_name_id = each.value
}

provider "helm" {
  for_each = data.ibm_container_cluster_config.cluster_config
  alias    = "helm_cluster_${each.value.id}"

  kubernetes {
    host  = each.value.host
    token = each.value.token
  }
}
```
I understand Terraform does not allow `for_each` inside the `provider` block, but my use-case requires something like this. Any suggestions would be helpful here.
Both the set of declared provider configurations and the relationships between resources and provider configurations are static in a Terraform configuration. There is no option for deciding dynamically how many provider configurations to declare or which resource is managed by each provider configuration.
There are a couple of possible alternatives, but neither is exactly equivalent to what you wanted to write.
## Use a separate workspace for each cluster
Terraform CLI supports multiple separate states for the same configuration using a concept called "workspaces". One possible design for your situation would be to decide on the convention that your workspaces are always named after Kubernetes cluster ids, and then write your configuration in this way:
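A sketch of that configuration, using `terraform.workspace` as the cluster ID (the provider arguments here match the ones from your original example):

```hcl
data "ibm_container_cluster_config" "cluster_config" {
  # By convention, the current workspace name is a cluster ID.
  cluster_name_id = terraform.workspace
}

provider "helm" {
  kubernetes {
    host  = data.ibm_container_cluster_config.cluster_config.host
    token = data.ibm_container_cluster_config.cluster_config.token
  }
}
```

Because there is now only one provider configuration, no `alias` or `for_each` is needed.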
You can then create a workspace for each of your clusters and update each one separately:
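For example, using the cluster IDs from your variable's default:

```
terraform workspace new cluster-id-1
terraform apply

terraform workspace new cluster-id-2
terraform apply
```

(`terraform workspace select cluster-id-1` switches back to an existing workspace for later updates.)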
If you try to use a workspace name that doesn't match one of the cluster names known by the remote system, then `data.ibm_container_cluster_config.cluster_config` will presumably return an error indicating that mistake, and thus block creation of anything else against an incorrect cluster name.

This option means that each of your clusters will be managed independently via its own `terraform apply` runs.

## Use code generation as a separate step before running Terraform
If you factor out everything that uses the `helm` provider into a child module, then you can use an extra step before running Terraform to generate a separate file for each cluster ID, where each one contains the following boilerplate. I'm writing it in Terraform's alternative JSON syntax (`.tf.json` files) because that'll be easier to generate from a separate script written in your favorite programming language. (I've used CLUSTERID to mark the locations where the code generator should insert the current cluster ID that it's generating boilerplate for.)
In the `./each-cluster` subdirectory you can hand-write a relatively-normal-looking Terraform module that expects to be passed a default (unaliased) configuration for the provider -- that is, no need for writing explicit `provider` arguments in any of the `resource` or `data` blocks -- and then the code generation only needs to worry about generating the boilerplate provider configurations and module calls for each one.

This approach will allow you to manage all of the clusters together with a single `terraform apply`, as long as you also run the boilerplate generation step first.
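The generation step itself can be written in any language. As an illustrative sketch (the function name, file layout, and the exact shape of the generated JSON are assumptions, not something prescribed by Terraform), a small Python script could emit one `.tf.json` file per cluster:

```python
import json

def cluster_boilerplate(cluster_id: str) -> dict:
    """Build the .tf.json structure for a single cluster ID."""
    # Reference string for the data source generated for this cluster.
    ref = f"data.ibm_container_cluster_config.{cluster_id}"
    return {
        "data": {
            "ibm_container_cluster_config": {
                cluster_id: {"cluster_name_id": cluster_id},
            },
        },
        "provider": {
            "helm": [
                {
                    "alias": cluster_id,
                    "kubernetes": {
                        "host": f"${{{ref}.host}}",
                        "token": f"${{{ref}.token}}",
                    },
                },
            ],
        },
        "module": {
            cluster_id: {
                "source": "./each-cluster",
                "providers": {"helm": f"helm.{cluster_id}"},
            },
        },
    }

# Write one generated file per cluster ID.
for cluster_id in ["cluster-id-1", "cluster-id-2"]:
    with open(f"{cluster_id}.tf.json", "w") as f:
        json.dump(cluster_boilerplate(cluster_id), f, indent=2)
```

Running this before `terraform init`/`terraform apply` keeps the generated boilerplate in sync with your list of cluster IDs.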