Folks! Need some help here.
Problem statement - We have a project in which we create AWS infra from TF code. We trigger it through Azure DevOps pipelines, infra gets created, and the state file gets stored in an S3 bucket. This is perfect.
Now, we also run some Gradle tests locally to test the AWS infra. These use the same TF code, so the state file goes to the same bucket. This is where we have the issue.
Need - When I run the local tests, the TF state file MUST go to a different S3 bucket. For example: Azure DevOps pipeline - Bucket A; local Gradle AWS infra tests - Bucket B.
Questions -
- Is this even possible?
- How can Terraform decide where to store the state file based on whether it is a local run or an ADO pipeline run?
The state definition is located in the `backend` block. Let's suppose you have that definition in a `backend.tf` file and Bucket B as the state source. You could store another `backend.tf` file (with Bucket A as the state source) in a subfolder, which means it will not be read automatically. Then, in the ADO pipeline, you can switch the state location by overwriting the main `backend.tf` file with this one.
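A rough sketch of that layout (the bucket names, key, and region below are placeholders, not values from the question):

```hcl
# backend.tf (repo root) – read automatically, used by local Gradle test runs ("Bucket B")
terraform {
  backend "s3" {
    bucket = "my-tf-state-local-tests"
    key    = "infra/terraform.tfstate"
    region = "eu-west-1"
  }
}
```

```hcl
# pipeline/backend.tf – kept in a subfolder, so Terraform ignores it until the
# ADO pipeline copies it over the root backend.tf before `terraform init` ("Bucket A")
terraform {
  backend "s3" {
    bucket = "my-tf-state-ado"
    key    = "infra/terraform.tfstate"
    region = "eu-west-1"
  }
}
```

Keep in mind that whenever the backend configuration changes, `terraform init` has to be re-run (with `-reconfigure`, or `-migrate-state` if you want to move existing state), so the pipeline should do the copy before its init step.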
I do not know the reason behind the two different buckets (probably permissions), but you can also use workspaces (for example `default` and `ado`): the `default` workspace for local runs and the `ado` workspace for the ADO pipeline.
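If you go the workspace route, note that with the S3 backend all workspaces share the same bucket: the `default` workspace writes to the configured `key`, while any other workspace writes under the `env:/<workspace>/` prefix (the default `workspace_key_prefix`). A minimal sketch, again with placeholder values:

```hcl
# backend.tf shared by both local runs and the pipeline (placeholder bucket/key/region).
# State locations in the bucket:
#   default workspace:  infra/terraform.tfstate
#   "ado" workspace:    env:/ado/infra/terraform.tfstate
terraform {
  backend "s3" {
    bucket = "my-tf-state"
    key    = "infra/terraform.tfstate"
    region = "eu-west-1"
  }
}
```

The pipeline would then run `terraform workspace select ado || terraform workspace new ado` before plan/apply, while the local Gradle tests stay on `default`. This separates the two state files by key, not by bucket, so it only meets your requirement if the two-bucket rule is about isolating state rather than, say, separate IAM or bucket policies.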