red888

Reputation: 31570

Can I have terraform keep the old versions of objects?

I'm new to Terraform, so perhaps it's just not supposed to work this way. I want to use aws_s3_bucket_object to upload a package to a bucket as part of an app deploy. I'm going to be changing the package for each deploy, and I want to keep the old versions.

resource "aws_s3_bucket_object" "object" {
    bucket = "mybucket-app-versions"
    key    = "version01.zip"
    source = "version01.zip"
}

But after running this, for a future deploy I will want to upload version02, then version03, and so on. Terraform replaces the old zip with the new one, which is expected behavior.

But is there a way to have Terraform not destroy the old version? Is this a supported use case, or is this not how I'm supposed to use Terraform? I wouldn't want to force this with an ugly hack if Terraform doesn't officially support something like what I'm trying to do here.

I could of course just call the S3 API via a script, but it would be great to have this defined with the rest of the Terraform definition for this app.

Upvotes: 0

Views: 1878

Answers (2)

Martin Atkins

Reputation: 74309

When using Terraform for application deployment, the recommended approach is to separate the build step from the deploy step and use Terraform only for the latter.

The responsibility of the build step -- which is implemented using a separate tool, depending on the method of deployment -- is to produce some artifact (an archive, a Docker container, a virtual machine image, etc.), publish it somewhere, and then pass its location or identifier to Terraform for deployment.

This separation between build and deploy allows for more complex situations, such as rolling back to an older artifact (without rebuilding it) if the new version has problems.

In simple scenarios it is possible to pass the artifact location to Terraform using Input Variables. For example, in your situation where the build process would write a zip file to S3, you might define a variable like this:

variable "archive_name" {
}

This can then be passed to whatever resource needs it using ${var.archive_name} interpolation syntax. To deploy a particular artifact, pass its name on the command line using -var:

$ terraform apply -var="archive_name=version01.zip"
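For illustration, here is a minimal sketch of the variable flowing into a consumer resource. It assumes the artifact is deployed as an AWS Lambda function; the function name, handler, runtime, and IAM role are hypothetical placeholders, not part of the answer:

resource "aws_lambda_function" "app" {
  function_name = "my-app"                           # hypothetical
  s3_bucket     = "mybucket-app-versions"            # bucket from the question
  s3_key        = "${var.archive_name}"              # artifact selected at deploy time
  handler       = "index.handler"                    # hypothetical
  runtime       = "nodejs8.10"                       # hypothetical
  role          = "${aws_iam_role.lambda_exec.arn}"  # assumes a role defined elsewhere
}

Any other resource that consumes the archive works the same way; the point is that only the artifact name changes between deploys.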

Some organizations prefer to keep a record of the "current" version of each application in some kind of data store, such as HashiCorp Consul, and read it using a data source. This approach can be easier to orchestrate in an automated build pipeline, since it allows this separate data store to be used to indirectly pass the archive name between the build and deploy steps, without needing to pass any unusual arguments to Terraform itself.
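As a hedged sketch of that pattern, the consul_keys data source can read such a value; the key path apps/myapp/current_version here is an assumed convention, not something from the answer:

data "consul_keys" "app" {
  # Read the artifact name that the build pipeline wrote to Consul
  key {
    name = "archive_name"
    path = "apps/myapp/current_version"  # assumed key path
  }
}

The value is then available as ${data.consul_keys.app.var.archive_name}, in place of the input variable.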

Upvotes: 1

fishi0x01

Reputation: 3759

Currently, you tell Terraform to manage one aws_s3_bucket_object, and Terraform takes care of its whole lifecycle, meaning Terraform will also replace the file if it sees any changes to it.

What you may be looking for is the null_resource. You can use it to run a local-exec provisioner that uploads the file you need with a script. That way, the old file won't be deleted, as it is not directly managed by Terraform. You'd still be calling the API via a script, but the whole process of uploading to S3 would still be included in your terraform apply step.

Here is an outline of the null_resource:

resource "null_resource" "upload_to_s3" {
  depends_on = ["<any resource that should already be created before upload>"]
  ...

  triggers = ["<A resource change that must have happened so terraform starts the upload>"]

  provisioner "local-exec" {
    command = "<command to upload local package to s3>"
  }
}
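For example, a minimal filled-in sketch, assuming the AWS CLI is installed and using the bucket from the question (the version-string trigger is one possible choice, not prescribed above; on Terraform 0.12+ filemd5() could hash the package instead):

resource "null_resource" "upload_to_s3" {
  # Bump this value for each deploy to re-run the upload
  triggers = {
    version = "version02"
  }

  provisioner "local-exec" {
    command = "aws s3 cp version02.zip s3://mybucket-app-versions/version02.zip"
  }
}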

Upvotes: 1
