Reputation: 353
I am using Terraform to upload a file with contents to S3. However, when the content changes, I need to update the S3 object as well. But since the state file records that the upload was completed, Terraform doesn't upload a new file.
resource "local_file" "timestamp" {
filename = "timestamp"
content = "${timestamp()}"
}
resource "aws_s3_bucket_object" "upload" {
bucket = "bucket"
key = "date"
source = "timestamp"
}
Expected:
aws_s3_bucket_object: change detected
aws_s3_bucket_object.timestamp: Creating...
Result:
aws_s3_bucket_object: Refreshing state...
Upvotes: 8
Views: 7061
Reputation: 74209
When you give Terraform the path to a file rather than the file's contents directly, it is the filename, not the contents, that Terraform tracks in state, so the resource is updated only when the filename itself changes.
For a short piece of data as shown in your example, the easiest solution is to specify the data directly in the resource configuration:
resource "aws_s3_bucket_object" "upload" {
bucket = "bucket"
key = "date"
content = "${timestamp()}"
}
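This works because the content string itself is stored in state, so Terraform compares it on every plan and any change to the value forces an update of the object.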
If your file is actually too large to reasonably load into a string variable, or if it contains raw binary data that cannot be loaded into a string, you can set the etag of the object to an MD5 hash of the content so that the provider can see when the content has changed:
resource "aws_s3_bucket_object" "upload" {
bucket = "bucket"
key = "date"
source = "${path.module}/timestamp"
etag = "${filemd5("${path.module}/timestamp")}"
}
With the etag set, any change to the content of the file will change the hash result and thus allow the provider to detect that the object needs to be updated.
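As a side note, in AWS provider v4 and later this resource has been renamed to aws_s3_object, and Terraform 0.12+ no longer needs the interpolation wrappers. A minimal sketch of the same etag technique under those assumptions, reusing the placeholder bucket and key from the question:

resource "aws_s3_object" "upload" {
  bucket = "bucket"
  key    = "date"
  source = "${path.module}/timestamp"

  # filemd5 hashes the file on disk at plan time, so a change to the
  # file's content shows up as a diff on this attribute.
  etag = filemd5("${path.module}/timestamp")
}

One caveat: the etag approach does not work for objects encrypted with SSE-KMS, because S3 then returns an ETag that is not an MD5 of the content; newer provider versions offer a source_hash argument for that case.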
Upvotes: 17