Terraform: Is it possible to create multiple "helm" providers with context set to different Kubernetes clusters dynamically?


Terraform version: "1.3.7"

I need to configure multiple helm providers, each set up with a different K8s cluster's context. How can I make this dynamic, so that my code accepts a list of cluster IDs and sets up a helm provider for each one?

I am looking to achieve something like the following:

variable "k8s_cluster_ids" {
  description = "List of K8S Cluster IDs to fetch context for"
  type        = list(string)
  default     = ["cluster-id-1", "cluster-id-2"]
}

data "ibm_container_cluster_config" "cluster_config" {
  for_each        = toset(var.k8s_cluster_ids)
  cluster_name_id = each.value
}

provider "helm" {
  for_each = data.ibm_container_cluster_config.cluster_config

  alias = "helm_cluster_${each.value.id}"
  
  kubernetes {
    host  = each.value.host
    token = each.value.token
  }
}

I understand that Terraform does not allow for_each inside a provider block, but my use case requires something like it. Any suggestions would be helpful.


1 Answer

Answered by Martin Atkins:

Both the set of declared provider configurations and the relationships between resources and provider configurations are static in a Terraform configuration. There is no option for deciding dynamically how many provider configurations to declare or which resource is managed by each provider configuration.

There are two possible alternatives, but neither is exactly equivalent to what you wanted to write.

  1. Use a separate workspace for each cluster

    Terraform CLI supports multiple separate states for the same configuration using a concept called "workspaces". One possible design for your situation would be to decide on the convention that your workspaces are always named after Kubernetes cluster IDs, and then write your configuration in this way:

    locals {
      # For this configuration, the current workspace name
      # specifies the Kubernetes cluster ID to use.
      k8s_cluster_id = terraform.workspace
    }
    
    data "ibm_container_cluster_config" "cluster_config" {
      cluster_name_id = local.k8s_cluster_id
    }
    
    provider "helm" {
      kubernetes {
        host  = data.ibm_container_cluster_config.cluster_config.host
        token = data.ibm_container_cluster_config.cluster_config.token
      }
    }
    

    You can then create a workspace for each of your clusters and update each one separately:

    terraform workspace new cluster-id-1
    terraform workspace new cluster-id-2
    
    terraform workspace select cluster-id-1
    terraform apply
    
    terraform workspace select cluster-id-2
    terraform apply
    

    If you try to use a workspace name that doesn't match one of the cluster IDs known to the remote system, data.ibm_container_cluster_config.cluster_config will presumably return an error indicating that mistake, and thus block the creation of anything else against an incorrect cluster name. (A more explicit fail-fast check is sketched at the end of this option.)

    This option means that each of your clusters will be managed independently via its own terraform apply runs.
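
    If you want that mistake to fail fast with a clearer message, you could also attach a custom condition check to the data source (Terraform has supported precondition blocks since v1.2). This is a minimal sketch; the allow-list of cluster IDs is a hypothetical value you would maintain yourself:

    data "ibm_container_cluster_config" "cluster_config" {
      cluster_name_id = local.k8s_cluster_id

      lifecycle {
        precondition {
          # Hypothetical allow-list; replace with your real cluster IDs.
          condition     = contains(["cluster-id-1", "cluster-id-2"], local.k8s_cluster_id)
          error_message = "The current workspace name does not match a known cluster ID."
        }
      }
    }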

  2. Use code generation as a separate step before running Terraform

    If you factor everything that uses the "helm" provider out into a child module, you can add an extra step before running Terraform that generates a separate file for each cluster ID. Each generated file contains the following boilerplate, written here in Terraform's alternative JSON syntax (.tf.json files) because that is easier to generate from a separate script written in your favorite programming language:

    {
      "data": {
        "ibm_container_cluster_config": {
          "cluster_config_CLUSTERID": {
            "cluster_name_id": "CLUSTERID"
          }
        }
      },
      "provider": {
        "helm": {
          "kubernetes": {
            "alias": "CLUSTERID",
            "host": "${data.ibm_container_cluster_config.cluster_config_CLUSTERID.host}",
            "token": "${data.ibm_container_cluster_config.cluster_config_CLUSTERID.token}"
          }
        }
      },
      "module": {
        "cluster_CLUSTERID": {
          "source": "./each-cluster"
          "providers": {
            "helm": "helm.CLUSTERID"
          }
        }
      }
    }
    

    (I've used CLUSTERID to mark the locations where the code generator should insert the current cluster ID that it's generating boilerplate for.)
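
    For example, the generation step could be a short shell script that stamps the boilerplate out of a hand-written template. This is a minimal sketch; the template filename cluster.tf.json.tmpl and the hard-coded cluster list are assumptions you would adapt:

    #!/usr/bin/env bash
    # Generate one cluster-CLUSTERID.tf.json file per cluster ID by
    # substituting the CLUSTERID placeholders in the template above.
    set -euo pipefail

    for cluster_id in cluster-id-1 cluster-id-2; do
      sed "s/CLUSTERID/${cluster_id}/g" cluster.tf.json.tmpl > "cluster-${cluster_id}.tf.json"
    done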

    In the ./each-cluster subdirectory you can hand-write a relatively normal-looking Terraform module that expects to be passed a default (unaliased) configuration for the provider -- that is, there's no need to write explicit provider arguments in any of the resource or data blocks. The code generation then only needs to produce the boilerplate provider configurations and module calls for each cluster.
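
    As a minimal sketch of what that child module might contain (the helm_release name, repository, and chart below are purely illustrative):

    # ./each-cluster/main.tf
    terraform {
      required_providers {
        helm = {
          source = "hashicorp/helm"
        }
      }
    }

    # Uses the default (unaliased) helm provider that the calling
    # module passes in via its "providers" argument.
    resource "helm_release" "example" {
      name       = "example"
      repository = "https://charts.example.com" # hypothetical chart repository
      chart      = "example-chart"              # hypothetical chart name
    }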

    This approach will allow you to manage all of the clusters together with a single terraform apply, as long as you also run the boilerplate generation step first.