wawawa

Reputation: 3355

Why Lambda still using the old version of the zip file in S3 bucket?

I'm using Terraform to create a Lambda function and an S3 bucket; the binaries the Lambda uses are stored in that bucket. After I changed the content of the binary file (I overwrote the file in the bucket), the Lambda still seems to use the old version of the file. How can I tell Lambda to use the latest version of the file in the bucket?

Upvotes: 1

Views: 3048

Answers (1)

mon

Reputation: 22316

A Lambda function is not updated just because the file in the S3 bucket changed.

CloudFormation behaves the same way, as documented in AWS::Lambda::Function Code:

Changes to a deployment package in Amazon S3 are not detected automatically during stack updates. To update the function code, change the object key or version in the template.

In my understanding, if a Lambda function automatically picked up changes in the S3 bucket, we would not know which Lambda code we were executing. For proper release management, we should release a new Lambda deployment explicitly (preferably by publishing a version rather than relying on $LATEST).

So you need to trigger the Lambda function update explicitly. There are a few ways to do this.

One way is to use the source_code_hash attribute of the aws_lambda_function Terraform resource, but the file needs to be local (not in S3) and needs to change before running terraform apply.

Or change the S3 object location and set the new path in the s3_key attribute of aws_lambda_function. For example, for each new release create a new S3 folder ("v1", "v2", "v3", ...) and point at the new folder (and preferably update the alias as well).

resource "aws_lambda_function" "authorizer" {
  function_name    = "${var.lambda_authorizer_name}"
  source_code_hash = "${data.archive_file.lambda_authorizer.output_sha}" # <---
  s3_bucket        = "${aws_s3_bucket.package.bucket}"
  s3_key           = "${aws_s3_bucket_object.lambda_authorizer_package.id}" # <---
  ...
}

Or enable S3 bucket versioning and change the s3_object_version attribute of aws_lambda_function, either by referencing version_id from aws_s3_bucket_object, or by checking the version ID after changing the file in S3.
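As a rough sketch of the versioning approach (resource names, the bucket name, handler, runtime, and the role variable here are hypothetical, not from the question):

```hcl
resource "aws_s3_bucket" "package" {
  bucket = "my-lambda-packages"

  versioning {
    enabled = true # every overwrite of lambda.zip creates a new object version
  }
}

resource "aws_s3_bucket_object" "lambda_authorizer_package" {
  bucket = "${aws_s3_bucket.package.bucket}"
  key    = "lambda.zip"
  source = "lambda.zip"
  # Note: without an etag (content hash), Terraform may not re-upload
  # the object when only the local file's content changes.
}

resource "aws_lambda_function" "authorizer" {
  function_name     = "${var.lambda_authorizer_name}"
  s3_bucket         = "${aws_s3_bucket.package.bucket}"
  s3_key            = "${aws_s3_bucket_object.lambda_authorizer_package.key}"
  s3_object_version = "${aws_s3_bucket_object.lambda_authorizer_package.version_id}" # <--- changes on each upload
  handler           = "main.handler"
  runtime           = "go1.x"
  role              = "${var.lambda_role_arn}"
}
```

Because s3_object_version is in the list of attributes that trigger a code update (see below), each new object version forces Terraform to call UpdateFunctionCode.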

Any one of these changes will trigger the update by calling the UpdateFunctionCode API, as in resource_aws_lambda_function.go.

func needsFunctionCodeUpdate(d resourceDiffer) bool {
    return d.HasChange("filename") || 
           d.HasChange("source_code_hash") || 
           d.HasChange("s3_bucket") || 
           d.HasChange("s3_key") || 
           d.HasChange("s3_object_version")
}

// UpdateFunctionCode in the API / SDK
func resourceAwsLambdaFunctionUpdate(d *schema.ResourceData, meta interface{}) error {
    conn := meta.(*AWSClient).lambdaconn

    ...
    codeUpdate := needsFunctionCodeUpdate(d)
    if codeUpdate {
        ...
        log.Printf("[DEBUG] Send Update Lambda Function Code request: %#v", codeReq)

        _, err := conn.UpdateFunctionCode(codeReq)
        if err != nil {
            return fmt.Errorf("Error modifying Lambda Function Code %s: %s", d.Id(), err)
        }
        ...
    }
    ...
}

Alternatively, invoke the AWS CLI update-function-code command, which is essentially what the Terraform aws_lambda_function code is doing.
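For example (the function name, bucket, and key below are placeholders):

```shell
# Point the function at the freshly uploaded object in S3.
aws lambda update-function-code \
  --function-name my-authorizer \
  --s3-bucket my-lambda-packages \
  --s3-key lambda.zip \
  --publish
```

The --publish flag publishes a numbered version instead of only mutating $LATEST, which fits the release-management point above.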

Upvotes: 3
