Calling out the Route53 experts
I have an existing Route53 hosted zone called domain.ml, with a hosted zone ID of ZxxxxxxxxxxxxxxxxxxxxxxX6I.
Given the Terraform plan output snippet below, will this change cause any problems for the existing DNS zone and its records, or will it just create new records?
Plan output
Terraform will perform the following actions:

  # aws_route53_record.ns will be created
  + resource "aws_route53_record" "ns" {
      + allow_overwrite = (known after apply)
      + fqdn            = (known after apply)
      + id              = (known after apply)
      + name            = "domain-blue-green-eks.domain.ml"
      + records         = (known after apply)
      + ttl             = 30
      + type            = "NS"
      + zone_id         = "ZxxxxxxxxxxxxxxxxxxxxxxX6I"
    }

  # aws_route53_zone.sub will be created
  + resource "aws_route53_zone" "sub" {
      + arn                 = (known after apply)
      + comment             = "Managed by Terraform"
      + force_destroy       = false
      + id                  = (known after apply)
      + name                = "domain-blue-green-eks.domain.ml"
      + name_servers        = (known after apply)
      + primary_name_server = (known after apply)
      + tags_all            = (known after apply)
      + zone_id             = (known after apply)
    }
Code Snippet that generates this plan
provider "aws" {
  region = local.region
}

locals {
  name           = var.environment_name
  region         = var.aws_region
  vpc_cidr       = var.vpc_cidr
  num_of_subnets = min(length(data.aws_availability_zones.available.names), 3)
  azs            = slice(data.aws_availability_zones.available.names, 0, local.num_of_subnets)

  argocd_secret_manager_name = var.argocd_secret_manager_name_suffix
  hosted_zone_name           = var.hosted_zone_name

  tags = {
    Blueprint  = local.name
    GithubRepo = "github.com/aws-ia/terraform-aws-eks-blueprints"
  }
}

data "aws_availability_zones" "available" {}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = local.name
  cidr = local.vpc_cidr

  azs             = local.azs
  public_subnets  = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 6, k)]
  private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 6, k + 10)]

  enable_nat_gateway = true
  single_nat_gateway = true

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }

  tags = local.tags
}

# Retrieve existing root hosted zone
data "aws_route53_zone" "root" {
  name = local.hosted_zone_name
}

# Create sub hosted zone for our deployment
resource "aws_route53_zone" "sub" {
  name = "${local.name}.${local.hosted_zone_name}"
}

# NS record in the root zone delegating the subdomain to the new hosted zone
resource "aws_route53_record" "ns" {
  zone_id = data.aws_route53_zone.root.zone_id
  name    = "${local.name}.${local.hosted_zone_name}"
  type    = "NS"
  ttl     = "30"
  records = aws_route53_zone.sub.name_servers
}

module "acm" {
  source  = "terraform-aws-modules/acm/aws"
  version = "~> 4.0"

  domain_name = "${local.name}.${local.hosted_zone_name}"
  zone_id     = aws_route53_zone.sub.zone_id

  subject_alternative_names = [
    "*.${local.name}.${local.hosted_zone_name}"
  ]

  wait_for_validation = true

  tags = {
    Name = "${local.name}.${local.hosted_zone_name}"
  }
}
The situation here is that I want to add these records to my existing hosted zone, which I can see the module is referencing. However, the code appears to be written to create a new sub hosted zone instead, and I'm not sure what the implications for the existing records will be.
I was considering the solution below as a way to use the existing hosted zone directly:
AWS - Route53 - Terraform - Update domain nameservers
I tried running a speculative plan in Terraform.
The Terraform code in the question is creating an entirely new Route53 hosted zone for a subdomain of your root domain, and creating a new NS record in the root hosted zone to delegate management of that subdomain to the new Route53 hosted zone. This change is additive: it adds one NS record to the existing zone and does not modify or delete any of the records already in it. But if your goal is simply to add records to the existing hosted zone, then you shouldn't be writing code that creates an entirely new hosted zone. You should just be creating records in the existing hosted zone.
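As a minimal sketch of that approach, records can target the existing zone's ID from the data source already defined in the question, and the ACM module can validate against that same zone. (The record name `app`, the A-record values, and dropping the `sub` zone entirely are illustrative assumptions, not the questioner's exact requirements.)

```
# Look up the existing hosted zone (same data source as in the question)
data "aws_route53_zone" "root" {
  name = local.hosted_zone_name
}

# Create a record directly in the existing zone -- no new hosted zone needed.
# "app" and the IP address below are hypothetical placeholders.
resource "aws_route53_record" "app" {
  zone_id = data.aws_route53_zone.root.zone_id
  name    = "app.${local.hosted_zone_name}"
  type    = "A"
  ttl     = 300
  records = ["203.0.113.10"]
}

# The ACM module can issue and validate a certificate in the existing zone too
module "acm" {
  source  = "terraform-aws-modules/acm/aws"
  version = "~> 4.0"

  domain_name = local.hosted_zone_name
  zone_id     = data.aws_route53_zone.root.zone_id

  subject_alternative_names = [
    "*.${local.hosted_zone_name}"
  ]

  wait_for_validation = true
}
```

With this shape, a plan would show only `aws_route53_record` (and ACM validation) resources being created in the existing zone, and no new `aws_route53_zone`.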