Here’s a simple example. The Lambda function and its requirements are in lambda/. A null_resource is responsible for triggering a rebuild. The trigger calculates a base64sha256 of each file in lambda/ and concatenates the results into a single string. Any time files are added, removed, or modified inside lambda/, the null resource is rerun. The provisioner removes any existing build result directory out/, then installs the packages with pip and copies the source files into the build result directory. archive_file zips the output directory to be used with aws_lambda_function later.
$ tree
.
├── lambda
│   ├── main.py
│   └── requirements.txt
└── main.tf
resource "null_resource" "this" {
# Build the lambda_function.zip
provisioner "local-exec" {
command = "rm -rf ${path.module}/out && pip install -r ${path.module}/lambda/requirements.txt -t ${path.module}/out && cp ${path.module}/lambda/* ${path.module}/out"
}
# Rebuild when new files are added to lambda/ or the contents of any existing files change
triggers = {
source_files = "${join("", [for f in fileset(path.module, "lambda/*") : filebase64sha256("${path.module}/${f}")])}"
}
}
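To see what the trigger actually evaluates to, the same expression can be pulled into a local and inspected with terraform console. This is purely a debugging aid, not part of the original config, and lambda_source_hash is just an illustrative name:

locals {
  # Same value the trigger computes: one concatenated hash per file,
  # so any add, remove, or modify in lambda/ changes the string.
  lambda_source_hash = join("", [
    for f in fileset(path.module, "lambda/*") : filebase64sha256("${path.module}/${f}")
  ])
}

Running terraform console and evaluating local.lambda_source_hash before and after touching a file in lambda/ confirms the trigger changes as expected.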
data "archive_file" "this" {
type = "zip"
source_dir = "out"
output_path = "lambda_function.zip"
depends_on = [null_resource.this]
}
resource "aws_lambda_function" "this" {
filename = data.archive_file.lambda.output_path
source_code_hash = data.archive_file.lambda.output_base64sha256
...
}
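For completeness, a filled-in version of that resource might look like the following. The function name, role, handler, and runtime are placeholder assumptions for illustration, not values from the original:

resource "aws_lambda_function" "this" {
  function_name    = "example"               # placeholder name
  role             = aws_iam_role.lambda.arn # assumes an IAM role defined elsewhere
  handler          = "main.handler"          # assumes a handler() function in lambda/main.py
  runtime          = "python3.9"             # placeholder runtime
  filename         = data.archive_file.this.output_path
  source_code_hash = data.archive_file.this.output_base64sha256
}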
Edit 2023-04-14: This doesn’t really work that well on remote builders that don’t store the results of the null_resource to reuse in subsequent runs. archive_file ends up looking for an out directory that doesn’t exist, because it didn’t get rebuilt when the files hashed in null_resource.triggers didn’t change. For example, rebuilding on Terraform Cloud without having changed anything in the lambda dir causes an error.
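One blunt workaround, sketched below, is to key the trigger to timestamp() so the build runs on every apply. This guarantees out/ exists before archive_file reads it on a fresh remote builder, at the cost of giving up the caching the file hashes were providing, so treat it as a trade-off rather than a proper fix:

resource "null_resource" "this" {
  # timestamp() changes on every run, so this resource is replaced
  # and the provisioner fires on every apply.
  triggers = {
    always_run = timestamp()
  }

  provisioner "local-exec" {
    command = "rm -rf ${path.module}/out && pip install -r ${path.module}/lambda/requirements.txt -t ${path.module}/out && cp ${path.module}/lambda/* ${path.module}/out"
  }
}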