Reputation: 1093
I am creating a few Terraform modules, and inside the modules I also create the resources for storing remote state (an S3 bucket and a DynamoDB table). When I then use one of the modules, I write something like this:
# terraform {
#   backend "s3" {
#     bucket         = "name"
#     key            = "xxxx.tfstate"
#     region         = "rrrr"
#     encrypt        = true
#     dynamodb_table = "trrrrr"
#   }
# }

terraform {
  required_version = ">= 1.0.0, < 2.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = var.region
}

module "mymodule" {
  source      = "./module/mymodule"
  region      = "param1"
  prefix      = "param2"
  project     = "xxxx"
  username    = "ddd"
  contact     = "myemail"
  table_name  = "table-name"
  bucket_name = "uniquebucketname"
}
I leave the remote-state part commented out and let Terraform create a local state while it creates all the resources (including the bucket and the DynamoDB table). After the resources are created, I re-run terraform init and migrate the state to S3; the uncommented backend block ends up looking like the sketch below.
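Concretely, I fill in the backend block with the names the module just created and re-run terraform init, which offers to migrate the local state (on Terraform 1.x this can also be requested explicitly with terraform init -migrate-state). The values below are copied from the example above:

terraform {
  backend "s3" {
    bucket         = "uniquebucketname"
    key            = "xxxx.tfstate"
    region         = "param1"
    encrypt        = true
    dynamodb_table = "table-name"
  }
}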
I wonder whether this is good practice, or whether there is a better way to maintain the state that also provides isolation.
Upvotes: 0
Views: 796
Reputation: 728
That is an interesting approach. I would create the S3 bucket manually, since it is a one-time creation for your state file management. Then I would add a policy to prevent deletion (see https://serverfault.com/questions/226700/how-do-i-prevent-deletion-of-s3-buckets; a sketch of such a policy follows the backend example below), plus versioning and/or a backup. Beyond this approach there are better practices, such as using tools like Terraform Cloud, which is free for up to five users. In your Terraform root module configuration you would then put this:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "YOUR-TERRAFORM-CLOUD-ORG"

    workspaces {
      # name   = "" ## For single-workspace jobs
      # prefix = "" ## For multiple workspaces
      name = "YOUR-ROOT-MODULE-WORKSPACE-NAME"
    }
  }
}
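As for the deletion guard, the policy boils down to denying s3:DeleteBucket on the state bucket. A minimal sketch in Terraform, reusing the bucket name from your question (since the bucket is created by hand, you could just as well attach the equivalent JSON in the console):

# Deny bucket deletion for everyone; the policy itself must be
# removed first before the bucket can be deleted.
resource "aws_s3_bucket_policy" "prevent_deletion" {
  bucket = "uniquebucketname" # the manually created state bucket

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "DenyStateBucketDeletion"
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:DeleteBucket"
        Resource  = "arn:aws:s3:::uniquebucketname"
      }
    ]
  })
}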
More details in this similar Q&A: Initial setup of terraform backend using terraform
Upvotes: 1