Mervin Hemaraju

Reputation: 2185

Prevent Terraform from deleting existing paths in S3

I have a simple Terraform configuration where I manage an application's code in S3.

I want to keep multiple versions of this code in S3.

My code is as follows:

main.tf

resource "aws_s3_bucket" "caam_test_bucket" {
  bucket = "caam-test-bucket"

  versioning {
    enabled = true
  }
}

resource "aws_s3_bucket_object" "caam_test_bucket_obj" {
  bucket = aws_s3_bucket.caam_test_bucket.id
  key    = "${var.env}/v-${var.current_version}/app.zip"
  source = "app.zip"
}

Every time I update the application, I export it to app.zip, increment the current_version variable, and apply the Terraform code.
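For reference, the two variables referenced in main.tf are plain strings (the defaults below are just placeholders):

variable "env" {
  type    = string
  default = "dev"
}

variable "current_version" {
  type    = string
  default = "1.0"
}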

The issue is that instead of keeping multiple version folders in the S3 bucket, Terraform deletes the existing one and creates another.

I want Terraform to keep any paths and files it has created and not delete them.

For example, if the path dev/v-1.0/app.zip already exists and I increment the current version to 2.0 and apply the code, I want Terraform to keep dev/v-1.0/app.zip and also add dev/v-2.0/app.zip to the bucket.

Is there a way to do that?

Upvotes: 1

Views: 1428

Answers (1)

Marcin

Reputation: 238687

TF deletes your object, because that is how it works:

Destroy resources that exist in the state but no longer exist in the configuration.

One way to overcome this is to keep all your objects in the configuration through for_each. This way you keep adding new versions to a map of existing objects rather than replacing them. This can be problematic if you are creating lots of versions, as you have to keep them all.
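A minimal sketch of that approach, assuming a hypothetical app_versions map variable that maps each version to the local file to upload:

variable "app_versions" {
  # e.g. { "1.0" = "app-1.0.zip", "2.0" = "app-2.0.zip" }
  type = map(string)
}

resource "aws_s3_bucket_object" "caam_test_bucket_obj" {
  for_each = var.app_versions

  bucket = aws_s3_bucket.caam_test_bucket.id
  key    = "${var.env}/v-${each.key}/app.zip"
  source = each.value
}

Each release then means adding one entry to app_versions instead of bumping current_version, so Terraform only creates the new object and leaves the old ones in place.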

A probably easier way is to use a local-exec provisioner, which uses the AWS CLI to upload the object. This happens "outside" of TF, so TF will not delete pre-existing objects, as it won't be aware of them.
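A sketch of that idea, assuming the AWS CLI is installed and configured where Terraform runs (the null_resource name and trigger are just illustrative):

resource "null_resource" "upload_app" {
  # Re-run the upload whenever the version changes.
  triggers = {
    version = var.current_version
  }

  provisioner "local-exec" {
    command = "aws s3 cp app.zip s3://${aws_s3_bucket.caam_test_bucket.id}/${var.env}/v-${var.current_version}/app.zip"
  }
}

Because the objects are uploaded by the CLI and never enter the Terraform state, incrementing current_version only adds a new key; nothing gets destroyed.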

Upvotes: 1
