I want an ElastiCache replication group with all shards in a single AZ, us-east-2a. I already have an existing subnet group "multi-az-subnet-group" that includes two subnets (us-east-2a and us-east-2b), so I reuse it, but I pin preferred_cache_cluster_azs to us-east-2a only. Here is the Terraform code:
resource "aws_elasticache_replication_group" "test" {
  num_node_groups             = var.num_node_groups
  subnet_group_name           = "multi-az-subnet-group"
  preferred_cache_cluster_azs = ["us-east-2a", "us-east-2a"]
  ....
}
However, this only partly works: when I create the replication group with a single shard, both the primary and replica nodes land in us-east-2a as intended. But when I add a second shard, its primary and replica nodes are placed in different AZs.
To work around that, I created another subnet group, "single-az-subnet-group", which includes only the us-east-2a subnet, and assigned it to subnet_group_name of the replication group. But then Terraform plans to destroy and recreate the replication group, which is not what I want; I want Terraform to modify the original resource in place rather than destroy it. Since the first shard is already entirely in us-east-2a, and "single-az-subnet-group" only includes us-east-2a, I would expect AWS to be able to keep the replication group in us-east-2a without recreating it.
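For reference, this is roughly the new subnet group I created (the subnet reference aws_subnet.us_east_2a is a placeholder for my existing us-east-2a subnet):

```hcl
resource "aws_elasticache_subnet_group" "single_az" {
  name = "single-az-subnet-group"
  # aws_subnet.us_east_2a is a placeholder for the existing us-east-2a subnet
  subnet_ids = [aws_subnet.us_east_2a.id]
}
```

Pointing subnet_group_name of the replication group at this new subnet group is what triggers the planned destroy-and-recreate.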
Question: in this case, how should I modify the Terraform code so that I can add new shards in the same AZ, us-east-2a, without recreating the replication group? Note that the primary and replica nodes of the first shard are already in us-east-2a.