Reputation: 236
I have a working Terraform project that manages S3 buckets and objects, referencing the local file system using the hashicorp/dir/template module as follows:
module "bucket_objects" {
source = "hashicorp/dir/template"
base_dir = "${path.module}/../dist"
}
resource "aws_s3_object" "object" {
for_each = module.bucket_objects.files
bucket = aws_s3_bucket.bucket.id
key = "dist/${each.key}"
content_type = each.value.content_type
source = each.value.source_path
etag = each.value.digests.md5
}
When I migrated my Terraform state to Terraform Cloud, the following terraform plan wanted to delete all the objects:
# module.datafiles.aws_s3_object.file["453.f0b8f6f5.js"] will be destroyed
# (because key ["453.f0b8f6f5.js"] is not in for_each map)
I understand that Terraform Cloud in remote execution mode is unable to see the files on my local machine. How can I reference the files for this particular resource so that they remain S3 objects managed by Terraform?
Upvotes: 0
Views: 143
Reputation: 17664
If I had to design an infrastructure like yours, I might take a different approach...
"453.f0b8f6f5.js" and similar files are generated on every build with a different id for cache invalidation, so I believe these are minified bundles. I will refer to that file as "the bundle" (better than calling it "the weird file").
I'm designing the project with a few things in mind:
- If the bundle file is not committed to the repo, I see no need to keep it in a Terraform aws_s3_object resource.
- I'm not a fan of using different names just because you need to invalidate the cache; I would give the bundle a stable name such as project_bundle.js and commit it to your repo, as sketched below.
- Here is a popular project that uses common names for its bundles: https://github.com/swagger-api/swagger-ui/tree/master/dist
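A minimal sketch of that idea, assuming the bundle is committed to the repo under a stable name (the dist/project_bundle.js path and the content type below are illustrative, not from the question):

resource "aws_s3_object" "bundle" {
  bucket       = aws_s3_bucket.bucket.id
  key          = "dist/project_bundle.js"
  source       = "${path.module}/dist/project_bundle.js"
  content_type = "application/javascript"
  # filemd5() lets Terraform detect content changes and re-upload the object
  etag         = filemd5("${path.module}/dist/project_bundle.js")
}

With a stable key, cache invalidation would then be handled some other way (for example cache-control headers) rather than by renaming the file on every build.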
Upvotes: 0
Reputation: 74694
This plan shows that Terraform did still manage to evaluate the module.bucket_objects results -- or else there would've been an error related to something in that module -- but that the result was an empty map instead of describing all of the files as you had intended.
The module call refers to ${path.module}/../dist, which suggests that this module is expecting to find files outside of its own directory prefix. Therefore I think the most likely explanation is that Terraform CLI is uploading to Terraform Cloud everything under the module's own directory but not uploading the ../dist directory, because Terraform doesn't realize that directory is treated as being part of the module. (Files used directly by a module should typically be in the same directory as the module, or in a subdirectory of the module.)
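For instance, if the build output could be written into a subdirectory of the module itself, the module call would stay within path.module and the files would be included in the upload; a sketch, assuming the build tool can be pointed at a dist directory inside the module:

module "bucket_objects" {
  source   = "hashicorp/dir/template"
  # dist now lives inside the module directory, so Terraform CLI includes it
  # in the configuration it uploads for remote runs
  base_dir = "${path.module}/dist"
}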
If you can't rearrange this so that the files being managed live in a subdirectory of path.module, then you'll need to reconfigure HCP Terraform (formerly Terraform Cloud) to better understand your directory layout. You can do this by changing the Working Directory setting, as described in Parent Directory Uploads.
In your specific case, you should set the working directory to whatever is the directory name containing the Terraform source file you included in your question. For example, if your directory were called "buckets" then you'd enter the working directory as "buckets".
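If you prefer to manage workspace settings as code rather than through the UI, the same setting can be expressed with the hashicorp/tfe provider; a sketch, with the organization and workspace names as placeholders:

resource "tfe_workspace" "buckets" {
  name              = "buckets"
  organization      = "example-org"
  # Equivalent to the "Working Directory" field in the workspace settings
  working_directory = "buckets"
}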
With that setting, when you run remote operations from the "buckets" directory Terraform CLI will know that it needs to upload the parent directory -- the one containing "buckets" as a subdirectory -- and then in the remote operation run something equivalent to cd buckets; terraform plan.
The dist directory should then be uploaded as a sibling of the "buckets" directory, and so all of the needed files will be available in HCP Terraform's remote execution environment.
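Concretely, the layout this answer assumes would look something like this once the working directory is set (directory names are illustrative), with the original module call left unchanged:

# repo/        <- uploaded to HCP Terraform as the configuration root
# |-- buckets/ <- Working Directory; contains the configuration from the question
# `-- dist/    <- uploaded as a sibling of "buckets"

module "bucket_objects" {
  source   = "hashicorp/dir/template"
  base_dir = "${path.module}/../dist"   # resolves to repo/dist during remote runs
}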
Upvotes: 0