I have an AWS OU/account/infra deployment automation pipeline that uses Terragrunt, Terraform, and Terraform Cloud to deploy accounts to an AWS org. I am using local execution in TF Cloud, so the runs happen on my machine and only the state is stored in TFC (as opposed to S3 or GCS). Terragrunt is compatible with this state storage technique, but I found that if I apply the resources (works great), then clean the cache with

`find . -type d -name ".terragrunt-cache" -prune -exec rm -rf {} \;`

and then `plan` or `apply` the resources that were previously created, I lose sync with the remote state, and Terraform wants to recreate everything. When I re-plan, the `backend.tf` file is regenerated by Terragrunt in `.terragrunt-cache`, and I'm wondering if the provider is getting a new ID that doesn't match the previous provider in the other state. One hack I'm going to try is using provider aliases. According to the Terragrunt docs, this should not be an issue: the state persists, and the cache can be lost and regenerated.
Any ideas as to what my issue might be? I am new to Terragrunt and am doing some initial investigation now.
My design is based on the infrastructure-live
example (and corresponding modules linked in the README). It is shared by Terragrunt here: https://github.com/gruntwork-io/terragrunt-infrastructure-live-example
I am running `terragrunt plan-all -refresh=true` from this directory structure:
```
teamname
├── Makefile
├── base
│   └── terragrunt.hcl
├── deploy.hcl
├── dev
│   ├── account
│   │   └── terragrunt.hcl
│   ├── env.hcl
│   ├── iam
│   │   └── terragrunt.hcl
│   ├── regions
│   │   ├── us-east-1
│   │   │   ├── region.hcl
│   │   │   └── terragrunt.hcl
│   │   └── us-west-2
│   │       ├── region.hcl
│   │       └── terragrunt.hcl
│   └── terragrunt.hcl
└── terragrunt.hcl
```
I generate a state for the account (in `account/`) and for the infra in the root account (in `base/`). All the Terraform modules are in a separate repo.
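To rule out the cleanup command itself as the culprit, I exercised it against a throwaway tree mimicking my layout (directory names below are synthetic); it removes only the `.terragrunt-cache` directories and leaves everything else alone:

```shell
# Sanity-check the cache-cleanup command against a throwaway tree: it should
# remove every .terragrunt-cache directory (nested contents included) and
# leave all other files untouched.
tmp=$(mktemp -d)
mkdir -p "$tmp/dev/account/.terragrunt-cache/abc123"
mkdir -p "$tmp/dev/regions/us-east-1/.terragrunt-cache"
touch "$tmp/dev/account/terragrunt.hcl"

# The exact command from above, run from the tree root:
( cd "$tmp" && find . -type d -name ".terragrunt-cache" -prune -exec rm -rf {} \; )

find "$tmp" -name ".terragrunt-cache"   # prints nothing: all caches removed
ls "$tmp/dev/account"                   # terragrunt.hcl is still there
```

So the cleanup is behaving as expected, and the desync must come from how the backend is regenerated afterwards.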