Problem
- Terraform is taking a very long time to complete even simple tasks.
- I have discovered that the state file on Terraform Cloud is over 100 MB, so uploading the state after every command is what causes the long delay.
- The state file is so big because it contains the entire contents of the Lambda layer zip files.

This ties into my general confusion about where to build the zip artifacts, which are large Python libraries. These zip files are derived artifacts, so they do not belong in the repository, but they must be built somehow (whenever the requirements file changes) and placed somewhere that lets Terraform create the layer resource and attach it to a Lambda resource.
Question
How do I build and handle these big artifact files, which change infrequently, so that they are not embedded verbatim in the state file, and thus avoid a huge upload every time plan or apply is invoked? My current configuration:
data "local_file" "google_layer" {
  filename   = "./build/google-layer.zip"
  depends_on = [null_resource.pip_install_google]
}

resource "aws_s3_object" "google_layer" {
  bucket      = var.code_bucket
  key         = "lambda_src/big_query_google_layer.zip"
  source      = data.local_file.google_layer.filename
  source_hash = data.local_file.google_layer.content_base64sha256
}

resource "aws_lambda_layer_version" "google_layer" {
  layer_name          = "google-layer"
  s3_bucket           = var.code_bucket
  s3_key              = aws_s3_object.google_layer.key
  s3_object_version   = aws_s3_object.google_layer.version_id
  source_code_hash    = data.local_file.google_layer.content_base64sha256
  compatible_runtimes = ["python3.9"]
}

resource "null_resource" "pip_install_google" {
  triggers = {
    # Rebuild the layer whenever the requirements file changes.
    requirements_hash = filesha256("../requirements-google.txt")
  }

  provisioner "local-exec" {
    command = <<-EOT
      python -m pip install \
        --requirement ../requirements-google.txt \
        --target /tmp/bigquery/google-layer/python/
      mkdir -p ${abspath(path.root)}/build
      cd /tmp/bigquery/google-layer
      zip -r ${abspath(path.root)}/build/google-layer.zip python
    EOT
  }
}
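
From reading around, I suspect the data "local_file" block is the culprit, since it appears to store the entire zip's contents in state just to expose a hash. What I imagine the fix looks like, though I am not sure it is the right approach, is dropping that data source and using Terraform's built-in filebase64sha256() function so only a short digest ever lands in state. A rough sketch of what I mean (presumably this fails at plan time if the zip has not been built yet, which may be exactly my chicken-and-egg problem):

resource "aws_s3_object" "google_layer" {
  bucket = var.code_bucket
  key    = "lambda_src/big_query_google_layer.zip"
  source = "./build/google-layer.zip"
  # filebase64sha256() reads the file to compute a digest without
  # storing the file's contents in the state.
  source_hash = filebase64sha256("./build/google-layer.zip")

  depends_on = [null_resource.pip_install_google]
}

Is something like this (with the same function used for source_code_hash on the layer version) the recommended pattern, or should these artifacts be built and uploaded entirely outside Terraform and only referenced here?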