Reputation: 1752
I have a Terraform configuration for provisioning an AWS Lambda function, and I want it to automatically update the code when it detects a change. To do this I'm using the source_code_hash
property, as you can see below. As part of my build process I zip up all the code, use openssl
to get a SHA256 hash of the zip file, write the hash to a text file, and then upload both files to an S3 bucket.
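Roughly, the build step looks like this (the zip contents and exact commands here are simplified for illustration):
# Simplified sketch of the build step described above.
zip -r build.zip index.js node_modules/
# openssl prints "SHA256(build.zip)= <hex digest>"; keep just the hex part.
openssl dgst -sha256 build.zip | awk '{print $2}' > build.zip.sha256
# Upload both artifacts to the bucket the Terraform configuration reads from.
aws s3 cp build.zip s3://myapp-builds/build.zip
aws s3 cp build.zip.sha256 s3://myapp-builds/build.zip.sha256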
Here is my Terraform configuration:
data "aws_s3_bucket_object" "mylambdacode_sha256" {
bucket = "myapp-builds"
key = "build.zip.sha256"
}
resource "aws_lambda_function" "my_lambda" {
function_name = "${var.lambda_function_name}"
role = "${aws_iam_role.iam_for_lambda.arn}"
handler = "index.handler"
runtime = "nodejs10.x"
s3_bucket = "myapp-builds"
s3_key = "build.zip"
source_code_hash = "${data.aws_s3_bucket_object.mylambdacode_sha256.body}"
timeout = 900
}
I run terraform apply and it shows me that the source_code_hash has changed:
~ source_code_hash = <<~EOT
- c9Nl4RRfuh/Z5fJvEOT69GDTJqY9n/QTB5cBGtOniYc=
+ 73d365e1145fba1fd9e5f26f10e4faf460d326a63d9ff4130797011ad3a78987
The 73d365e1145fba1fd9e5f26f10e4faf460d326a63d9ff4130797011ad3a78987 part is indeed what's in the file on S3, so this all looks right. But when I run terraform apply again immediately afterwards, it thinks the hash has changed again.
Checking the .tfstate file, I see that the hash it just reported as new is not being saved.
Is Terraform hashing my hash, or doing some other manipulation and saving the result of that to .tfstate? Or do I have something configured incorrectly?
EDIT:
Per the answer below, I've updated the source_code_hash statement to use the base64encode function as follows:
source_code_hash = "${base64encode(data.aws_s3_bucket_object.my_lambda_build_sha256.body)}"
Here is the output from running terraform state show aws_lambda_function.my_lambda && terraform apply && terraform state show aws_lambda_function.my_lambda:
$ terraform state show aws_lambda_function.my_lambda && terraform apply && terraform state show aws_lambda_function.my_lambda
# aws_lambda_function.my_lambda:
resource "aws_lambda_function" "my_lambda" {
arn = "arn:aws:lambda:us-east-2:XXXXXXXXXX:function:my_lambda"
function_name = "my_lambda"
handler = "index.handler"
id = "my_lambda"
invoke_arn = "arn:aws:apigateway:us-east-2:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-2:XXXXXXXXXX:function:my_lambda/invocations"
last_modified = "2019-08-02T15:00:32.621+0000"
layers = []
memory_size = 128
publish = false
qualified_arn = "arn:aws:lambda:us-east-2:XXXXXXXXXX:function:my_lambda:$LATEST"
reserved_concurrent_executions = -1
role = "arn:aws:iam::XXXXXXXXXX:role/my_lambda-role"
runtime = "nodejs10.x"
s3_bucket = "myapp-builds"
s3_key = "myapp/build.zip"
source_code_hash = "r5D6o1FuCug6FD4QPQ43VdUlfYP4Qe1l1DcElNsf5E0="
source_code_size = 12793672
tags = {}
timeout = 900
version = "$LATEST"
tracing_config {
mode = "PassThrough"
}
}
postgresql_database.postgres: Refreshing state... [id=postgres]
aws_iam_policy.lambda_logging: Refreshing state... [id=arn:aws:iam::XXXXXXXXXX:policy/lambda_logging]
aws_cloudwatch_log_group.my_lambda: Refreshing state... [id=/aws/lambda/my_lambda]
aws_iam_role.iam_for_lambda: Refreshing state... [id=my_lambda-role]
aws_cloudwatch_event_rule.my_lambda-rule: Refreshing state... [id=my_lambda-rule]
data.aws_s3_bucket_object.myapp_lambda_build_sha256: Refreshing state...
data.aws_s3_bucket_object.myapp_lambda_build_zip: Refreshing state...
aws_iam_role_policy_attachment.lambda_logs: Refreshing state... [id=my_lambda-role-20190801190935907200000001]
aws_lambda_function.my_lambda: Refreshing state... [id=my_lambda]
aws_lambda_permission.allow_cloudwatch_to_call_trigger_lambda: Refreshing state... [id=AllowExecutionFromCloudWatch]
aws_cloudwatch_event_target.my_lambda-rule-event-target: Refreshing state... [id=my_lambda-rule-terraform-20190801192030362600000001]
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# aws_lambda_function.my_lambda will be updated in-place
~ resource "aws_lambda_function" "my_lambda" {
arn = "arn:aws:lambda:us-east-2:XXXXXXXXXX:function:my_lambda"
function_name = "my_lambda"
handler = "index.handler"
id = "my_lambda"
invoke_arn = "arn:aws:apigateway:us-east-2:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-2:XXXXXXXXXX:function:my_lambda/invocations"
~ last_modified = "2019-08-02T15:00:32.621+0000" -> (known after apply)
layers = []
memory_size = 128
publish = false
qualified_arn = "arn:aws:lambda:us-east-2:XXXXXXXXXX:function:my_lambda:$LATEST"
reserved_concurrent_executions = -1
role = "arn:aws:iam::XXXXXXXXXX:role/my_lambda-role"
runtime = "nodejs10.x"
s3_bucket = "myapp-builds"
s3_key = "myapp/build.zip"
~ source_code_hash = "r5D6o1FuCug6FD4QPQ43VdUlfYP4Qe1l1DcElNsf5E0=" -> "YWY5MGZhYTM1MTZlMGFlODNhMTQzZTEwM2QwZTM3NTVkNTI1N2Q4M2Y4NDFlZDY1ZDQzNzA0OTRkYjFmZTQ0ZAo="
source_code_size = 12793672
tags = {}
timeout = 900
version = "$LATEST"
tracing_config {
mode = "PassThrough"
}
}
Plan: 0 to add, 1 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_lambda_function.my_lambda: Modifying... [id=my_lambda]
aws_lambda_function.my_lambda: Modifications complete after 1s [id=my_lambda]
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
# aws_lambda_function.my_lambda:
resource "aws_lambda_function" "my_lambda" {
arn = "arn:aws:lambda:us-east-2:XXXXXXXXXX:function:my_lambda"
function_name = "my_lambda"
handler = "index.handler"
id = "my_lambda"
invoke_arn = "arn:aws:apigateway:us-east-2:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-2:XXXXXXXXXX:function:my_lambda/invocations"
last_modified = "2019-08-02T15:05:08.079+0000"
layers = []
memory_size = 128
publish = false
qualified_arn = "arn:aws:lambda:us-east-2:XXXXXXXXXX:function:my_lambda:$LATEST"
reserved_concurrent_executions = -1
role = "arn:aws:iam::XXXXXXXXXX:role/my_lambda-role"
runtime = "nodejs10.x"
s3_bucket = "myapp-builds"
s3_key = "myapp/build.zip"
source_code_hash = "r5D6o1FuCug6FD4QPQ43VdUlfYP4Qe1l1DcElNsf5E0="
source_code_size = 12793672
tags = {}
timeout = 900
version = "$LATEST"
tracing_config {
mode = "PassThrough"
}
}
Upvotes: 7
Views: 10948
Reputation: 81
It works for me when I generate the build.zip.sha256 file before uploading it to S3 with:
openssl dgst -sha256 -binary build.zip | openssl enc -base64 > build.zip.sha256
To avoid the <<~EOT mismatch with the original SHA256 (the data source's body includes a trailing newline), I use:
source_code_hash = chomp(data.aws_s3_bucket_object.my_lambda_build_sha256.body)
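For context on why the -binary | base64 form is the one that matches: the provider compares source_code_hash against the CodeSha256 value that Lambda reports, which is the base64 encoding of the raw SHA-256 digest of the zip, not the hex string. A quick sanity check (a sketch, reusing the function and file names from the question) is to confirm that these two commands print the same value:
# Base64 of the raw SHA-256 digest of the local zip...
openssl dgst -sha256 -binary build.zip | openssl enc -base64
# ...should match what the Lambda API reports for the deployed code.
aws lambda get-function --function-name my_lambda --query 'Configuration.CodeSha256' --output text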
Upvotes: 8