I am using terragrunt with following structure:
```
├── env
│   ├── prod
│   └── dev
│       ├── IAM
│       │   └── terragrunt.hcl
│       ├── S3
│       │   └── terragrunt.hcl
│       └── account.hcl
├── src
│   ├── IAM
│   │   └── main.tf
│   └── S3
│       ├── main.tf
│       └── variables.tf
├── terragrunt.hcl
```
The root `terragrunt.hcl`:
```hcl
locals {
  account_vars = read_terragrunt_config(find_in_parent_folders("account.hcl"))
  aws_profile  = local.account_vars.locals.aws_profile_name
}

terraform {
  extra_arguments "aws_profile" {
    commands = [
      "init",
      "apply",
      "refresh",
      "import",
      "plan",
      "destroy"
    ]

    env_vars = {
      AWS_PROFILE = "${local.aws_profile}"
    }
  }
}

remote_state {
  backend = "local"
  config = {
    path = "${get_parent_terragrunt_dir()}/${local.aws_profile}-terraform.tfstate"
  }
}

generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite"
  contents  = <<EOF
provider "aws" {
  region  = "eu-central-1"
  profile = "${local.aws_profile}"

  default_tags {
    tags = {
      <tag1>
      <tag2>
    }
  }
}
EOF
}
```
I have a clean AWS account with zero resources created yet. When I run `terragrunt apply` from the `dev/IAM` directory, it creates the resources. When I then switch to `dev/S3` and run `terragrunt apply`, it wants to destroy all the IAM resources and create the S3 resources after that. It looks like each module is writing its own state file, overwriting the one created before.

Do you have any idea why this is happening? I've tried several configuration variations, with the same result every time.
Your parent terragrunt file setup is not correct. Currently you are using `get_parent_terragrunt_dir()`, which returns the directory containing the parent `terragrunt.hcl`. This means that when you run your subdirectories, they all end up with the same state file, since both subdirectories share the same parent directory. Instead you should use `path_relative_to_include()`, which returns the path from your parent to the subdirectory, so each module gets a distinct state path. See https://terragrunt.gruntwork.io/docs/features/keep-your-remote-state-configuration-dry/#filling-in-remote-state-settings-with-terragrunt
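As a sketch, the `remote_state` block in the root `terragrunt.hcl` could look like this, keeping your local backend and profile-prefixed file name and only changing the path expression (the exact layout of the state files is up to you):

```hcl
remote_state {
  backend = "local"
  config = {
    # path_relative_to_include() evaluates to the path from the parent
    # terragrunt.hcl to the module, e.g. "env/dev/IAM" or "env/dev/S3",
    # so each module writes its own state file instead of sharing one.
    path = "${get_parent_terragrunt_dir()}/${path_relative_to_include()}/${local.aws_profile}-terraform.tfstate"
  }
}
```

With this in place, `dev/IAM` and `dev/S3` each get their own `*-terraform.tfstate` under separate directories, and applying one no longer plans the destruction of the other's resources.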