I am following this guide for getting a docker image to run on AWS ECS:
https://github.com/cdktf/docker-on-aws-ecs-with-terraform-cdk-using-typescript/tree/main
In their 'build and push' docker step they do this:
const asset = new TerraformAsset(this, `project`, {
  path: projectPath,
});
const version = require(`${projectPath}/package.json`).version;
this.tag = `${repo.repositoryUrl}:${version}-${asset.assetHash}`;
// Workaround due to https://github.com/kreuzwerker/terraform-provider-docker/issues/189
this.image = new Resource(this, `image`, {
  provisioners: [
    {
      type: "local-exec",
      workingDir: asset.path,
      command: `docker login -u ${auth.userName} -p ${auth.password} ${auth.proxyEndpoint} &&
docker build -t ${this.tag} . &&
docker push ${this.tag}`,
    },
  ],
});
That is, they use a hash of the directory containing the Dockerfile as part of the Docker image tag.
This means the image does not need to be rebuilt if nothing in that directory has changed.
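To make the idea concrete, here is a rough sketch of what a content-based directory hash does (this is my own illustration, not cdktf's actual `assetHash` algorithm): hash every file's relative path and contents, so the resulting value, and therefore the image tag, only changes when something in the directory changes.

```typescript
import * as crypto from "crypto";
import * as fs from "fs";
import * as path from "path";

// Illustrative stand-in for asset.assetHash: walk the directory in a
// deterministic order and hash each file's relative path plus contents.
function directoryHash(dir: string): string {
  const hash = crypto.createHash("sha256");
  const walk = (d: string) => {
    const entries = fs
      .readdirSync(d, { withFileTypes: true })
      .sort((a, b) => a.name.localeCompare(b.name));
    for (const entry of entries) {
      const full = path.join(d, entry.name);
      if (entry.isDirectory()) {
        walk(full);
      } else {
        hash.update(path.relative(dir, full)); // path changes affect the hash
        hash.update(fs.readFileSync(full));    // content changes affect the hash
      }
    }
  };
  walk(dir);
  return hash.digest("hex").slice(0, 16);
}

// The tag would then be built roughly like the guide does:
// `${repo.repositoryUrl}:${version}-${directoryHash(projectPath)}`
```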
The problem I have is my folder structure looks like this:
/
  backend/
  frontend/
  infra/
    cdktf.json
    main.ts
  Dockerfile
Note the Dockerfile is in the root directory.
During cdktf synth, cdktf copies the entire asset directory, including cdktf.out itself, into the infra/cdktf.out folder, and keeps copying recursively until it hits an ENAMETOOLONG error.
A good solution for me would be either to specify exactly which files/directories the TerraformAsset should include, or to exclude certain folders from it.
This GitHub issue proposes adding functionality to ignore folders:
https://github.com/hashicorp/terraform-cdk/issues/2113
I'm wondering if there is an immediate solution in the meantime.
Adding a .terraformignore does not solve this problem.
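One workaround I'm considering in the meantime is to stage only the files I want hashed into a temporary directory myself, skipping cdktf.out, and point the TerraformAsset at that copy instead of the repo root. A sketch (the helper name and exclusion list are my own; it relies on Node's fs.cpSync filter option, available since Node 16.7):

```typescript
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// Copy rootDir into a fresh temp directory, skipping any path that
// contains an excluded segment (e.g. "cdktf.out"), and return the
// staged path. That staged path is then passed to TerraformAsset,
// so the recursive self-copy never happens.
function stageAssetFiles(rootDir: string, exclude: string[]): string {
  const staging = fs.mkdtempSync(path.join(os.tmpdir(), "docker-asset-"));
  fs.cpSync(rootDir, staging, {
    recursive: true,
    filter: (src) => {
      const segments = path.relative(rootDir, src).split(path.sep);
      // Return true to copy, false to skip this file/directory.
      return !segments.some((segment) => exclude.includes(segment));
    },
  });
  return staging;
}

// Hypothetical usage in the stack, mirroring the guide's code:
// const projectPath = stageAssetFiles(repoRoot, ["cdktf.out", "node_modules"]);
// const asset = new TerraformAsset(this, `project`, { path: projectPath });
```

The downside is that the staging step runs on every synth, but the asset hash itself stays stable as long as the copied files don't change.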