Reputation: 71
I am looking for the best practice for creating my state file and storing it in an S3 bucket.
If it is a separate file, I also need to store the state file of the S3 bucket I created, in which case I would be creating two S3 buckets: one for the infrastructure state and another for the state bucket's own state file.
Secondly, with remote configuration set, running `terraform destroy` throws an error: "failed to upload state file: no such bucket found", because the bucket has already been destroyed. Should I first run `terraform remote config -disable` and then run `terraform destroy`? What's the best practice I should be following?
Upvotes: 5
Views: 2329
Reputation: 326
Personally I use a Terraform base stack to effectively bootstrap an AWS account for use with Terraform. This stack just stores its state file locally, which is then committed to version control. This stack should only ever have to be run once, so I see no problem with it not using a remote backend.
My Terraform base stack creates:
- `s3:PutObject` & `s3:GetObject` permissions on the state bucket
- `kms:GenerateDataKey*` & `kms:Decrypt` permissions on the state KMS key
This can be expanded to include IAM roles, especially if your Terraform user will be deploying across multiple accounts.
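A minimal sketch of such a base stack, assuming the S3/KMS permissions above; the bucket name, policy name, and resources are illustrative placeholders, not from the original answer:

```hcl
# Hypothetical base stack: bootstraps an AWS account for Terraform state.
# Names like "my-terraform-state" are placeholders.

resource "aws_kms_key" "state" {
  description = "Encrypts Terraform state objects"
}

resource "aws_s3_bucket" "state" {
  bucket = "my-terraform-state"
}

# Versioning lets you recover earlier state files if one is corrupted.
resource "aws_s3_bucket_versioning" "state" {
  bucket = aws_s3_bucket.state.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Policy granting the permissions listed above.
data "aws_iam_policy_document" "state_access" {
  statement {
    actions   = ["s3:PutObject", "s3:GetObject"]
    resources = ["${aws_s3_bucket.state.arn}/*"]
  }
  statement {
    actions   = ["kms:GenerateDataKey*", "kms:Decrypt"]
    resources = [aws_kms_key.state.arn]
  }
}

resource "aws_iam_policy" "state_access" {
  name   = "terraform-state-access"
  policy = data.aws_iam_policy_document.state_access.json
}
```

Attach the policy to whatever user or role runs Terraform in the rest of your stacks.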
Upvotes: 6
Reputation: 56877
You have a chicken and egg problem here if you want to store the state of the thing that will store the state.
Creating an S3 bucket outside of Terraform is easy, so I would never bother managing the actual state bucket with Terraform: create it by hand, then use Terraform to create absolutely everything else.
The ease of creating an S3 bucket (or one of the other S3-compatible storage options now covered by remote state) is one of the main benefits of using S3 to back your state files rather than, say, Consul, which would require you to build and configure a cluster of instances before you could store any state files.
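With the bucket created by hand, every other stack then just points its backend at it; a sketch with placeholder bucket, key, and region values:

```hcl
# Hypothetical backend block for any non-bootstrap stack. The bucket
# itself was created outside Terraform, so "terraform destroy" on this
# stack never removes the place its own state lives.
terraform {
  backend "s3" {
    bucket  = "my-terraform-state"    # pre-existing bucket, placeholder name
    key     = "prod/network.tfstate"  # illustrative state key
    region  = "eu-west-1"             # illustrative region
    encrypt = true
  }
}
```

This also sidesteps the destroy error from the question: since the state bucket is not a resource in any stack, no `destroy` run can delete it out from under the state upload.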
Upvotes: 2