Jed Schneider

Reputation: 14671

Initial setup of terraform backend using terraform

I'm just getting started with terraform and I'd like to be able to use AWS S3 as my backend for storing the state of my projects.

terraform {
    backend "s3" {
      bucket = "tfstate"
      key = "app-state"
      region = "us-east-1"
    }
}

I feel like it is sensible to set up my S3 bucket, IAM groups, and policies for the backend storage infrastructure with terraform as well.

If I set up my backend state before I apply my initial terraform infrastructure, it reasonably complains that the backend bucket is not yet created. So my question becomes: how do I set up my terraform backend with terraform, while keeping the state for that backend tracked by terraform? Seems like a nesting-dolls problem.

I have some thoughts about how to script around this, for example, checking to see if the bucket exists or some state has been set, then bootstrapping terraform and finally copying the terraform tfstate up to s3 from the local file system after the first run. But before going down this laborious path, I thought I'd make sure I wasn't missing something obvious.

Upvotes: 125

Views: 64429

Answers (18)

Jason Williscroft

Reputation: 23

It's totally sensible to set up your Terraform state resources using Terraform. If the tool can't manage its own backend infrastructure, that's a shortcoming of the tool.

But there are lots of paths to that destination, which is unfortunate: it's such a fundamental requirement that there really should be a reference implementation that works for everybody.

The reason this is so hard is that Terraform is intrinsically declarative... but the declarative code you actually write is driven by configuration requirements at a higher level of abstraction.

On my current project we solved this problem with code generation: we use a set of Handlebars templates that generate the necessary state resources AND the associated backend & provider resources as vanilla TF code, based on a project-level config doc.

The goal is to produce a generic implementation that adheres to the Open-Closed principle: we write it once, it works, and we don't need to open that box ever again, even when doing things like migrating state.

Here's the relevant portion of the config document:

terraform:
  aws_version: '>= 5.56.1'
  paths: src
  state:
    bucket: '{{#if organization.tokens.namespace}}{{organization.tokens.namespace}}-{{/if}}terraform-state'
    key: terraform.tfstate
    lock_table: terraform-state-lock
  terraform_version: '>= 1.9.0'
workspaces:
  bootstrap:
    cli_defaults:
      # Remove assume_role & aws_profile (or override them with null) to enable SSO.
      assume_role: OrganizationAccountAccessRole
      aws_profile: META-000-BOOTSTRAP

      # Uncomment permission_set (or override it elsewhere) to enable SSO.
      # permission_set: terraform_admin

      # Remove use_local state (or override it with false) to migrate to remote state.
      use_local_state: true
    cli_defaults_path: src/bootstrap/workspace.local.yml
    path: src/bootstrap
    shared_config_path: src/bootstrap/_shared_config.local
    generators:
      src/bootstrap/_accounts.tf: src/bootstrap/templates/accounts.hbs
      src/bootstrap/_backend.tf: src/templates/backend.hbs
      src/bootstrap/_organizational_units.tf: src/bootstrap/templates/organizational_units.hbs
      src/bootstrap/_outputs.tf: src/bootstrap/templates/outputs.hbs
      src/bootstrap/_providers.tf: src/templates/providers.hbs
      src/bootstrap/_s3_access_logs.tf: src/bootstrap/templates/s3_access_logs.hbs
      src/bootstrap/_shared_config.local: src/templates/shared_config.hbs
      src/bootstrap/_sso_terraform_state_writer.tf: src/bootstrap/templates/sso_policies/terraform_state_writer.hbs
      src/bootstrap/_policies/unprotected_resource_writer.tf: src/bootstrap/templates/sso_policies/unprotected_resource_writer.hbs
      src/bootstrap/_sso.tf: src/bootstrap/templates/sso.hbs
      src/bootstrap/_terraform.tf: src/templates/terraform.hbs
      src/modules/global/_outputs.tf: src/modules/global/outputs.hbs

The generators use the Metastructure tool to generate TF code from the indicated Handlebars templates.

Here are a couple of relevant Handlebars templates:

backend.hbs

{{> header_generated}}

terraform { 
  {{! Local backend. }}
  {{#if cli_params.use_local_state~}} 
  backend "local" {}
  
  {{! S3 backend. }}
  {{~else}}
  backend "s3" { 
    {{! Assume role if not using SSO. }}
    {{~#if (logic "and" cli_params.assume_role (logic "not" (lodash "eq" organization.key_accounts.terraform_state organization.key_accounts.master)))}}
    assume_role = { 
      role_arn = "arn:aws:iam::{{lodash "get" accounts (params organization.key_accounts.terraform_state "id")}}:role/{{#if cli_params.assume_role}}{{cli_params.assume_role}}{{else}}{{lodash "get" accounts (params organization.key_accounts.terraform_state "permission_set_roles" cli_params.permission_set)}}{{/if}}"
    } {{/if}}

    {{! Either use the given credentials profile or use the Terraform state SSO profile. }}
    {{#if (logic "or" cli_params.permission_set cli_params.aws_profile)}}
    profile = "{{ifelse cli_params.permission_set organization.key_accounts.terraform_state cli_params.aws_profile}}"
    {{/if}}

    {{! Identify the shared config file if using SSO. }}
    {{! https://github.com/karmaniverous/metastructure/issues/6 }}
    {{#if cli_params.permission_set}}
    shared_config_files = ["./_shared_config.local"]
    {{/if}}

    {{! Remaining S3 backend configuration }}
    bucket = "{{terraform.state.bucket}}"
    dynamodb_table = "{{terraform.state.lock_table}}" 
    encrypt = true 
    key = "{{terraform.state.key}}"
    region = "{{organization.aws_region}}" 
    workspace_key_prefix = "workspaces" 
  } {{~/if}}
}

providers.hbs

{{> header_generated}}

###############################################################################
# Default provider.
###############################################################################
provider "aws" { {{#if cli_params.permission_set}}
  assume_role {
    tags = {
      Generator = "Terraform"
    }
    transitive_tag_keys = ["Generator"]
  } {{/if}} 
  default_tags {
    tags = {
      Generator = "Terraform"
    }
  }{{#if (logic "or" cli_params.permission_set cli_params.aws_profile)}}
  profile = "{{ifelse cli_params.permission_set organization.key_accounts.master cli_params.aws_profile}}"{{/if}}
  region = module.global.config.organization.aws_region{{#if cli_params.permission_set}}
  shared_config_files = ["./_shared_config.local"]{{/if}}
}

###############################################################################
# DIRECT ACCOUNT PROVIDERS
###############################################################################

{{#with (ifelse cli_params.permission_set (lodash "get" sso.reference.permission_set_accounts cli_params.permission_set) (lodash "keys" accounts)) as | accountKeys | }}
  {{~#each accountKeys}}
{{> provider accountKey=this config=../../.}}

  {{/each~}}
{{~/with~}}

###############################################################################
# KEY ACCOUNT PROVIDERS
###############################################################################

{{#each organization/key_accounts ~}}
{{> provider accountKey=this config=../. keyAccount=@key}}

{{/each}}

... and here is the TF generated by those when you're authenticating with SSO. You can change the CLI options and get a different result for direct auth, local state, etc:

_backend.tf

terraform {
  backend "s3" {


    profile = "shared_services"

    shared_config_files = ["./_shared_config.local"]

    bucket               = "metastructure-002-terraform-state"
    dynamodb_table       = "terraform-state-lock"
    encrypt              = true
    key                  = "terraform.tfstate"
    region               = "us-east-1"
    workspace_key_prefix = "workspaces"
  }
}

_providers.tf

###############################################################################
# Default provider.
###############################################################################
provider "aws" {
  assume_role {
    tags = {
      Generator = "Terraform"
    }
    transitive_tag_keys = ["Generator"]
  }
  default_tags {
    tags = {
      Generator = "Terraform"
    }
  }
  profile             = "master"
  region              = module.global.config.organization.aws_region
  shared_config_files = ["./_shared_config.local"]
}

###############################################################################
# DIRECT ACCOUNT PROVIDERS
###############################################################################


###############################################################################
# Create a provider to assume the TerraformAdmin permission set 
# role on account Development Account.
###############################################################################
provider "aws" {
  alias = "dev"
  assume_role {
    tags = {
      Generator = "Terraform"
    }
    transitive_tag_keys = ["Generator"]
  }
  default_tags {
    tags = {
      Generator = "Terraform"
    }
  }
  profile             = "dev"
  region              = module.global.config.organization.aws_region
  shared_config_files = ["./_shared_config.local"]
}


###############################################################################
# Create a provider to assume the TerraformAdmin permission set 
# role on account Log Archive Account.
###############################################################################
provider "aws" {
  alias = "log_archive"
  assume_role {
    tags = {
      Generator = "Terraform"
    }
    transitive_tag_keys = ["Generator"]
  }
  default_tags {
    tags = {
      Generator = "Terraform"
    }
  }
  profile             = "log_archive"
  region              = module.global.config.organization.aws_region
  shared_config_files = ["./_shared_config.local"]
}


###############################################################################
# Create a provider to assume the TerraformAdmin permission set 
# role on account Master Account.
###############################################################################
provider "aws" {
  alias = "master"
  assume_role {
    tags = {
      Generator = "Terraform"
    }
    transitive_tag_keys = ["Generator"]
  }
  default_tags {
    tags = {
      Generator = "Terraform"
    }
  }
  profile             = "master"
  region              = module.global.config.organization.aws_region
  shared_config_files = ["./_shared_config.local"]
}


###############################################################################
# Create a provider to assume the TerraformAdmin permission set 
# role on account Production Account.
###############################################################################
provider "aws" {
  alias = "prod"
  assume_role {
    tags = {
      Generator = "Terraform"
    }
    transitive_tag_keys = ["Generator"]
  }
  default_tags {
    tags = {
      Generator = "Terraform"
    }
  }
  profile             = "prod"
  region              = module.global.config.organization.aws_region
  shared_config_files = ["./_shared_config.local"]
}


###############################################################################
# Create a provider to assume the TerraformAdmin permission set 
# role on account Testing Account.
###############################################################################
provider "aws" {
  alias = "test"
  assume_role {
    tags = {
      Generator = "Terraform"
    }
    transitive_tag_keys = ["Generator"]
  }
  default_tags {
    tags = {
      Generator = "Terraform"
    }
  }
  profile             = "test"
  region              = module.global.config.organization.aws_region
  shared_config_files = ["./_shared_config.local"]
}


###############################################################################
# Create a provider to assume the TerraformAdmin permission set 
# role on account Shared Services Account.
###############################################################################
provider "aws" {
  alias = "shared_services"
  assume_role {
    tags = {
      Generator = "Terraform"
    }
    transitive_tag_keys = ["Generator"]
  }
  default_tags {
    tags = {
      Generator = "Terraform"
    }
  }
  profile             = "shared_services"
  region              = module.global.config.organization.aws_region
  shared_config_files = ["./_shared_config.local"]
}

###############################################################################
# KEY ACCOUNT PROVIDERS
###############################################################################


###############################################################################
# Create a provider to assume the TerraformAdmin permission set 
# role on the master key account.
###############################################################################
provider "aws" {
  alias = "key_account_master"
  assume_role {
    tags = {
      Generator = "Terraform"
    }
    transitive_tag_keys = ["Generator"]
  }
  default_tags {
    tags = {
      Generator = "Terraform"
    }
  }
  profile             = "master"
  region              = module.global.config.organization.aws_region
  shared_config_files = ["./_shared_config.local"]
}


###############################################################################
# Create a provider to assume the TerraformAdmin permission set 
# role on the terraform_state key account.
###############################################################################
provider "aws" {
  alias = "key_account_terraform_state"
  assume_role {
    tags = {
      Generator = "Terraform"
    }
    transitive_tag_keys = ["Generator"]
  }
  default_tags {
    tags = {
      Generator = "Terraform"
    }
  }
  profile             = "shared_services"
  region              = module.global.config.organization.aws_region
  shared_config_files = ["./_shared_config.local"]
}


###############################################################################
# Create a provider to assume the TerraformAdmin permission set 
# role on the log_archive key account.
###############################################################################
provider "aws" {
  alias = "key_account_log_archive"
  assume_role {
    tags = {
      Generator = "Terraform"
    }
    transitive_tag_keys = ["Generator"]
  }
  default_tags {
    tags = {
      Generator = "Terraform"
    }
  }
  profile             = "log_archive"
  region              = module.global.config.organization.aws_region
  shared_config_files = ["./_shared_config.local"]
}

Lots more in that box, including the generated artifacts that produce the actual state-management resources. But the point is that it's a good thing to write very DRY configuration, and let a machine GENERATE your very WET declarative code.

See the Metastructure Wiki for more info on this approach.

P.S. In the HBS templates above you'll see a bunch of non-standard helpers like lodash. Those are packaged with Metastructure but they come from here.

Upvotes: 0

Matt Lavin

Reputation: 980

Building on the great contribution from Austin Davis, here is a variation that I use which includes a requirement for data encryption:

provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "tfstate"

  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}


resource "aws_dynamodb_table" "terraform_state_lock" {
  name           = "app-state"
  read_capacity  = 1
  write_capacity = 1
  hash_key       = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

resource "aws_s3_bucket_policy" "terraform_state" {
  bucket = "${aws_s3_bucket.terraform_state.id}"
  policy =<<EOF
{
  "Version": "2012-10-17",
  "Id": "RequireEncryption",
   "Statement": [
    {
      "Sid": "RequireEncryptedTransport",
      "Effect": "Deny",
      "Action": ["s3:*"],
      "Resource": ["arn:aws:s3:::${aws_s3_bucket.terraform_state.bucket}/*"],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      },
      "Principal": "*"
    },
    {
      "Sid": "RequireEncryptedStorage",
      "Effect": "Deny",
      "Action": ["s3:PutObject"],
      "Resource": ["arn:aws:s3:::${aws_s3_bucket.terraform_state.bucket}/*"],
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      },
      "Principal": "*"
    }
  ]
}
EOF
}
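
Because the bucket policy above denies unencrypted uploads, the backend configuration that uses this bucket has to request server-side encryption. A sketch, reusing the bucket and table names from the resources above, with encrypt = true set and the DynamoDB table wired in for locking:

terraform {
  backend "s3" {
    bucket         = "tfstate"
    key            = "app-state"
    region         = "us-east-1"
    dynamodb_table = "app-state"
    encrypt        = true
  }
}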

Upvotes: 26

abhi shukla

Reputation: 1179

Managing the Terraform state bucket with Terraform is a bit of a chicken-and-egg problem. One way to address it:

Create the state bucket with Terraform using the local backend, then migrate the state to the newly created bucket.

It can be a bit tricky if you are trying to achieve this in a CI/CD pipeline and want the job to be idempotent.

Modularise the backend configuration into a separate file.

terraform.tf

terraform {
  required_version = "~> 1.3.6"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.48.0"
    }
  }
}

provider "aws" {
  region  = "us-east-1"
}

main.tf

module "remote_state" {
  # you can write your own module or use any community module which 
  # creates a S3 bucket and dynamoDB table (ideally with replication and versioning)
  source                         = "../modules/module-for-s3-bucket-and-ddtable"
  bucket_name                    = "terraform-state-bucket-name"
  dynamodb_table_name            = "terraform-state-lock"

}

backend.tf

terraform {
  backend "s3" {
    bucket         = "terraform-state-bucket-name"
    key            = "state.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"
  }
}

With the following steps we can create and manage the state S3 bucket within the same state:

function configure_state() {
  # Disable the S3 bucket backend
  mv backend.tf backend.tf.backup
  # With no S3 config present, terraform initializes local state
  # (or copies it down from the S3 bucket if that already existed)
  terraform init -migrate-state -force-copy
  # terraform apply creates the S3 bucket backend and records it in local state
  terraform apply -target module.remote_state
  # Re-enable the S3 backend configuration
  mv backend.tf.backup backend.tf
  # Migrate the state from local to the S3 bucket
  terraform init -migrate-state -force-copy
}

Upvotes: 3

Orkhan M.

Reputation: 163

I've made a script based on that answer. Keep in mind you'll need to import the DynamoDB table into your Terraform state, as it is created through the AWS CLI.
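
On Terraform 1.5 or later you can do that import declaratively with an import block. A sketch, assuming the table is named terraform-state-lock and is addressed as aws_dynamodb_table.terraform_state_lock in your configuration (on older versions, run terraform import aws_dynamodb_table.terraform_state_lock terraform-state-lock instead):

# Hypothetical names - adjust the name and capacity/billing settings to match
# whatever your script actually created.
import {
  to = aws_dynamodb_table.terraform_state_lock
  id = "terraform-state-lock"
}

resource "aws_dynamodb_table" "terraform_state_lock" {
  name         = "terraform-state-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}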

Upvotes: -1

Tshepo Shere

Reputation: 139

You can simply use Terraform Cloud and configure your backend as follows:

terraform {
    backend "remote" {
        hostname     = "app.terraform.io"
        organization = "your-tf-organization-name"
        workspaces {
            name = "your-workspace-name"
        }
    }
}

Upvotes: -1

Austin Davis

Reputation: 3756

To set this up using terraform remote state, I usually have a separate folder called remote-state within my dev and prod terraform folder.

The following main.tf file will set up your remote state for what you posted:

provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "tfstate"
     
  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_s3_bucket_versioning" "terraform_state" {
    bucket = aws_s3_bucket.terraform_state.id

    versioning_configuration {
      status = "Enabled"
    }
}

resource "aws_dynamodb_table" "terraform_state_lock" {
  name           = "app-state"
  read_capacity  = 1
  write_capacity = 1
  hash_key       = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

Then get into this folder using cd remote-state, and run terraform init && terraform apply - this should only need to be run once. You might add something to the bucket and DynamoDB table names to separate your different environments.
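
With those resources created, the main project's backend block - the one from the question, plus dynamodb_table for locking - can point at them. A sketch; run terraform init in the main project afterwards:

terraform {
  backend "s3" {
    bucket         = "tfstate"
    key            = "app-state"
    region         = "us-east-1"
    dynamodb_table = "app-state"
  }
}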

Upvotes: 136

paulg

Reputation: 728

There are some great answers here, and I'd like to offer an alternative for managing your backend state:

  1. Set up a Terraform Cloud account (it's free for up to 5 users).
  2. Create a workspace for your organization (the version control workflow is typical).
  3. Select your VCS, such as GitHub or Bitbucket (where you store your terraform plans and modules).
  4. Terraform Cloud will give you the instructions needed for your new OAuth connection.
  5. Once that's set up you'll have the option to create an SSH keypair, which is typically not needed, and you can click the Skip & Finish button.

Once your Terraform Cloud account is set up and connected to the VCS repos where you store your terraform plans and modules, add your terraform module repos in Terraform Cloud by clicking on the Registry tab. You will need to ensure that your terraform modules are versioned/tagged and follow the proper naming convention: if you have a terraform module that creates a load balancer in AWS, you would name the module repository (in GitHub, for example) terraform-aws-loadbalancer. As long as it starts with terraform-aws- you're good. Then you add a version tag to it, such as 1.0.0.

So let's say you create a terraform plan that uses that load balancer module. This is how you point your backend config at Terraform Cloud and at the load balancer module:

backend-state.tf contents:

terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "YOUR-TERRAFORM-CLOUD-ORG"
    workspaces {
    # name = ""   ## For single workspace jobs
    # prefix = "" ## for multiple workspaces
    # you can use name instead of prefix
    prefix = "terraform-plan-name-"
    }
  }
}

terraform plan main.tf contents:

module "aws_alb" {
  source  = "app.terraform.io/YOUR-TERRAFORM-CLOUD-ORG/loadbalancer/aws"
  version = "1.0.0"
  
  name = "load-balancer-test"
  security_groups = [module.aws_sg.id]
  load_balancer_type = "application"
  internal = false
  subnets = [data.aws_subnet.public.id]
  idle_timeout = 1200
  # access_logs_enabled = true
  # access_logs_s3bucket = "BUCKET-NAME"
  tags = local.tags
}

Locally, from your terminal (using macOS as an example):

terraform init
# if you're using name instead of prefix in your backend set 
# up, no need to run terraform workspace cmd
terraform workspace new test
terraform plan
terraform apply

You'll see the apply happening in Terraform Cloud under your workspaces with the name terraform-plan-name-test: "test" is appended to the workspace prefix defined in your backend-state.tf above. You end up with a GUI/console full of your terraform plans within your workspace, much the same way you can see your CloudFormation stacks in AWS. I find that devops folks used to CloudFormation and transitioning to Terraform like this setup.

One advantage is that within Terraform Cloud you can easily set things up so that a plan (stack build) is triggered by a git commit or a merge to the master branch.

Reference: https://www.terraform.io/docs/language/settings/backends/remote.html#basic-configuration

Upvotes: 3

Rotem jackoby

Reputation: 22058

I would highly recommend using Terragrunt to keep your Terraform code manageable and DRY (the "Don't Repeat Yourself" principle).

Terragrunt has many capabilities - for your specific case I would suggest following the Keep your remote state configuration DRY section.

I'll add a short and simplified summary below.


Problems with managing remote state with Terraform

Let's say you have the following Terraform infrastructure:

├── backend-app
│   ├── main.tf
│   └── other_resources.tf
│   └── variables.tf
├── frontend-app
│   ├── main.tf
│   └── other_resources.tf
│   └── variables.tf
├── mysql
│   ├── main.tf
│   └── other_resources.tf
│   └── variables.tf
└── mongo
    ├── main.tf
    └── other_resources.tf
    └── variables.tf

Each app is a Terraform module whose state you'll want to store in a remote backend.

Without Terragrunt you would have to write a backend configuration block for each application in order to save its state in remote storage:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "frontend-app/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "my-lock-table"
  }
}

With just a few modules, as in the example above, it's not a burden to add this file to each of them - but that doesn't hold up in real-world scenarios.

Wouldn't it be better if we could use some kind of inheritance (like in object-oriented programming)?

This is made easy with Terragrunt.


Terragrunt to the rescue

Back to the modules structure.
With Terragrunt we just need to add a root terragrunt.hcl with all the configuration, and for each module a child terragrunt.hcl which contains only one statement:

├── terragrunt.hcl       #<---- Root
├── backend-app
│   ├── main.tf
│   └── other_resources.tf
│   └── variables.tf
│   └── terragrunt.hcl   #<---- Child
├── frontend-app
│   ├── main.tf
│   └── other_resources.tf
│   └── variables.tf
│   └── terragrunt.hcl   #<---- Child
├── mysql
│   ├── main.tf
│   └── other_resources.tf
│   └── variables.tf
│   └── terragrunt.hcl   #<---- Child
└── mongo
    ├── main.tf
    └── other_resources.tf
    └── variables.tf
    └── terragrunt.hcl   #<---- Child

The root terragrunt.hcl will keep your remote state configuration and the children will only have the following statement:

include {
  path = find_in_parent_folders()
}

This include block tells Terragrunt to use the exact same Terragrunt configuration from the root terragrunt.hcl file specified via the path parameter.

The next time you run terragrunt, it will automatically configure all the settings in the remote_state.config block, if they aren’t configured already, by calling terraform init.

The backend.tf file will be created automatically for you.
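
For reference, a root terragrunt.hcl along these lines does the job on recent Terragrunt versions - a sketch reusing the bucket and lock-table names from the backend block above, where path_relative_to_include() gives each module its own state key:

# terragrunt.hcl (root)
remote_state {
  backend = "s3"

  # Terragrunt writes this backend configuration into a backend.tf file
  # inside each child module.
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terraform"
  }

  config = {
    bucket         = "my-terraform-state"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "my-lock-table"
  }
}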


Summary

You can have hundreds of modules in a nested hierarchy (for example divided into regions, tenants, applications, etc.) and still maintain only one configuration of the remote state.

Upvotes: 3

Michael Reilly

Reputation: 91

All of the answers provided are very good. I just want to emphasize the "key" attribute. When you get into advanced applications of Terraform, you will eventually need to reference these S3 keys in order to pull remote state into a current project, or to move resources with terraform state mv.

It really helps to use intelligent key names when you plan your "terraform" stanza to define your backend.

I recommend the following as a base key name: account_name/{development:production}/region/module_name/terraform.tfstate

Revise to fit your needs, but going back and fixing all my key names as I expanded my use of Terraform across many accounts and regions was not fun at all.
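
For example, a backend block following that scheme might look like this (the account, environment, region, and module names are placeholders):

terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "acme-main/production/us-east-1/networking/terraform.tfstate"
    region = "us-east-1"
  }
}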

Upvotes: 0

awsgeek

Reputation: 51

As a word of caution, I would not create the Terraform state bucket with Terraform, in case someone inadvertently destroys it. Instead, use scripts based on the AWS CLI or boto3, which do not maintain state, and keep those scripts limited to a variable for the S3 bucket name. In the long run you will rarely change the script for the state bucket, except perhaps to create additional folders inside the bucket, which can be done outside Terraform at the resource level.

Upvotes: 0

Anant Vardhan

Reputation: 71

What I have been doing to address this: comment out the "backend" block for the initial run, and do a targeted terraform apply on only the state bucket and any related resources (like bucket policies).

#  backend "s3" {
#    bucket         = "foo-bar-state-bucket"
#    key            = "core-terraform.tfstate"
#    region         = "eu-west-1"
#  }
#}
provider "aws" {
    region = "eu-west-1"
    profile = "terraform-iam-user"
    shared_credentials_file = "~/.aws/credentials"
  }
terraform apply --target aws_s3_bucket.foobar-terraform --target aws_s3_bucket_policy.foobar-terraform

This will provision your s3 state bucket, and will store .tfstate file locally in your working directory.

Later, uncomment the "backend" block and reconfigure the backend with terraform init --reconfigure, which will prompt you to copy your local .tfstate file (tracking the state of your backend S3 bucket) to the remote backend, which is now available for terraform to use in any subsequent runs.

(Screenshot: prompt for copying existing state to the remote backend)

Upvotes: 6

hyperd

Reputation: 171

Setting up a Terraform backend leveraging an AWS s3 bucket is relatively easy.

First, create a bucket in the region of your choice (eu-west-1 for this example), named terraform-remote-store (remember to choose a globally unique name); the rest of this walkthrough uses that name.

To do so, open your terminal and run the following command, assuming that you have properly set up the AWS CLI (otherwise, follow the instructions at the official documentation):

aws s3api create-bucket --bucket terraform-remote-store \
    --region eu-west-1 \
    --create-bucket-configuration \
    LocationConstraint=eu-west-1
# Output:
{
    "Location": "http://terraform-remote-store.s3.amazonaws.com/"
}

The command should be self-explanatory; to learn more check the documentation here.

Once the bucket is in place, it needs proper configuration for security and reliability. For a bucket that holds Terraform state, it's common sense to enable server-side encryption. Keeping it simple, start with the AES256 method (although I recommend using KMS and implementing proper key rotation):

aws s3api put-bucket-encryption \
    --bucket terraform-remote-store \
    --server-side-encryption-configuration={\"Rules\":[{\"ApplyServerSideEncryptionByDefault\":{\"SSEAlgorithm\":\"AES256\"}}]}
# Output: expect none when the command is executed successfully

Next, it's crucial to restrict access to the bucket; create an unprivileged IAM user as follows:

aws iam create-user --user-name terraform-deployer
# Output:
{
    "User": {
        "UserName": "terraform-deployer",
        "Path": "/",
        "CreateDate": "2019-01-27T03:20:41.270Z",
        "UserId": "AIDAIOSFODNN7EXAMPLE",
        "Arn": "arn:aws:iam::123456789012:user/terraform-deployer"
    }
}

Take note of the Arn from the command’s output (it looks like: “Arn”: “arn:aws:iam::123456789012:user/terraform-deployer”).

To interact correctly with the S3 service, and later with DynamoDB to implement locking, our IAM user must hold a sufficient set of permissions. Much stricter restrictions are recommended for production environments, but for the sake of simplicity, start by assigning AmazonS3FullAccess and AmazonDynamoDBFullAccess:

aws iam attach-user-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --user-name terraform-deployer
# Output: expect none when the command execution is successful

aws iam attach-user-policy --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess --user-name terraform-deployer
# Output: expect none when the command execution is successful

The freshly created IAM user must be enabled to execute the required actions against your s3 bucket. You can do this by creating and applying the right policy, as follows:

cat <<-EOF >> policy.json
{
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/terraform-deployer"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::terraform-remote-store"
        }
    ]
}
EOF

This basic policy file allows the principal with ARN "arn:aws:iam::123456789012:user/terraform-deployer" to execute all the available actions ("Action": "s3:*") against the bucket with ARN "arn:aws:s3:::terraform-remote-store". Again, in production much stricter policies are desirable. For reference, have a look at the AWS Policy Generator.

Back to the terminal and run the command as shown below, to enforce the policy in your bucket:

aws s3api put-bucket-policy --bucket terraform-remote-store --policy file://policy.json
# Output: none

As the last step, enable the bucket’s versioning:

aws s3api put-bucket-versioning --bucket terraform-remote-store --versioning-configuration Status=Enabled

This lets you keep different versions of the infrastructure's state and roll back easily to a previous one without struggle.

The AWS S3 bucket is ready; time to integrate it with Terraform. Listed below is the minimal configuration required to set up this remote backend:

# terraform.tf

provider "aws" {
  region                  = "${var.aws_region}"
  shared_credentials_file = "~/.aws/credentials"
  profile                 = "default"
}

terraform {  
    backend "s3" {
        bucket  = "terraform-remote-store"
        encrypt = true
        key     = "terraform.tfstate"    
        region  = "eu-west-1"  
    }
}

# the rest of your configuration and resources to deploy

Once this is in place, terraform must be initialized (again) with terraform init. The remote backend is ready for a ride - test it.

What about locking? Storing the state remotely brings a pitfall, especially when working in scenarios where several tasks, jobs, and team members have access to it. Under these circumstances, the risk of multiple concurrent attempts to change the state is high. This is where a lock helps: a feature that prevents the state file from being opened while it is already in use.

You can implement the lock by creating an AWS DynamoDB table, which terraform uses to set and release locks. Provision the resource using terraform itself:

# create-dynamodb-lock-table.tf
resource "aws_dynamodb_table" "dynamodb-terraform-state-lock" {
  name           = "terraform-state-lock-dynamo"
  hash_key       = "LockID"
  read_capacity  = 20
  write_capacity = 20

  attribute {
    name = "LockID"
    type = "S"
  }

  tags = {
    Name = "DynamoDB Terraform State Lock Table"
  }
}

and deploy it as shown: terraform plan -out "planfile" && terraform apply -input=false -auto-approve "planfile"

Once the command execution is complete, the locking mechanism must be added to your backend configuration as follows:

# terraform.tf

provider "aws" {
  region                  = "${var.aws_region}"
  shared_credentials_file = "~/.aws/credentials"
  profile                 = "default"
}

terraform {  
    backend "s3" {
        bucket         = "terraform-remote-store"
        encrypt        = true
        key            = "terraform.tfstate"    
        region         = "eu-west-1"
        dynamodb_table = "terraform-state-lock-dynamo"
    }
}

# the rest of your configuration and resources to deploy

All done. Remember to run terraform init again, and enjoy your remote backend.

Upvotes: 10

nifCody

Reputation: 2444

There can be a Terraform version issue here; this works for me with the version constrained below. Also, it is good to keep the Terraform state in the bucket.

terraform {
    required_version = "~> 0.12.12"
    backend "gcs" {
        bucket = "bbucket-name"
        prefix = "terraform/state"
    }
}

Upvotes: 0

victor h damian chicon

Reputation: 151

The way I have overcome this issue is by creating the project's remote state bucket in a first init/plan/apply cycle and then initializing the remote backend in a second init/plan/apply cycle.


# first init plan apply cycle 
# Configure the AWS Provider
# https://www.terraform.io/docs/providers/aws/index.html
provider "aws" {
  version = "~> 2.0"
  region  = "us-east-1"
}

resource "aws_s3_bucket" "terraform_remote_state" {
  bucket = "terraform-remote-state"
  acl    = "private"

  tags = {
    Name        = "terraform-remote-state"
    Environment = "Dev"
  }
}

# add this snippet and execute the
# second init/plan/apply cycle
# https://www.terraform.io/docs/backends/types/s3.html

terraform {
  backend "s3" {
    bucket = "terraform-remote-state"
    key    = "path/to/my/key"
    region = "us-east-1"
  }
}

Upvotes: 1

stavxyz

Reputation: 2025

I created a terraform module with a few bootstrap commands/instructions to solve this:

https://github.com/samstav/terraform-aws-backend

There are detailed instructions in the README, but the gist is:

# conf.tf

module "backend" {
  source         = "github.com/samstav/terraform-aws-backend"
  backend_bucket = "terraform-state-bucket"
}

Then, in your shell (make sure you haven't written your terraform {} block yet):

terraform get -update
terraform init -backend=false
terraform plan -out=backend.plan -target=module.backend
terraform apply backend.plan

Now write your terraform {} block:

# conf.tf

terraform {
  backend "s3" {
    bucket         = "terraform-state-bucket"
    key            = "states/terraform.tfstate"
    dynamodb_table = "terraform-lock"
  }
}

And then you can re-init:

terraform init -reconfigure

Upvotes: 15

Lev

Reputation: 115

What I usually do is start without a remote backend for creating the initial infrastructure, as you said - S3, IAM roles and other essential stuff. Once I have that, I just add the backend configuration and run terraform init to migrate to S3.
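
A minimal sketch of that flow, with placeholder names - the backend block is simply absent on the first apply and added afterwards:

# Pass 1: no backend block. `terraform init && terraform apply` creates the
# S3 bucket, IAM roles, etc., with state kept in the local terraform.tfstate.

# Pass 2: add this block, run `terraform init` again, and answer "yes" when
# prompted to copy the existing local state into the new S3 backend.
terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "base/terraform.tfstate"
    region = "us-east-1"
  }
}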

It's not ideal, but in most cases I don't rebuild my entire environment every day, so this semi-automated approach is good enough. I also separate the next "layers" of infrastructure (VPC, subnets, IGW, NAT, etc.) into different states.

Upvotes: 8

Harshit Singh

Reputation: 41

Assuming that you are running terraform locally (not on some virtual server) and that you want to store the terraform state in an S3 bucket that doesn't exist yet, this is how I would approach it:

Create a terraform script that provisions the S3 bucket.

Create a terraform script that provisions your infrastructure.

At the end of the first script - the one that provisions the bucket used by the second script for storing state files - include code to provision a null resource.

In the null resource's block, use a local-exec provisioner to run a command that changes into the directory where your second terraform script lives, then the usual terraform init to initialize the backend, then terraform plan, then terraform apply. A sketch follows below.
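
A sketch of that last step, with hypothetical paths and resource names (it assumes the second configuration lives in ../infrastructure relative to the bucket-provisioning one, and that the bucket resource is called aws_s3_bucket.terraform_state):

# In the bucket-provisioning configuration, after the aws_s3_bucket resource:
resource "null_resource" "apply_infrastructure" {
  # Re-run whenever the state bucket is (re)created.
  triggers = {
    state_bucket = aws_s3_bucket.terraform_state.id
  }

  provisioner "local-exec" {
    # Initialize the second configuration's S3 backend, then plan and apply it.
    command = "cd ../infrastructure && terraform init && terraform plan && terraform apply -auto-approve"
  }
}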

Upvotes: -1

djt

Reputation: 7535

As you've discovered, you can't use terraform to build the components terraform needs in the first place.

While I understand the inclination to have terraform "track everything", it is very difficult, and more headache than it's worth.

I generally handle this situation by creating a simple bootstrap shell script. It creates things like:

  1. The s3 bucket for state storage
  2. Adds versioning to said bucket
  3. a terraform IAM user and group with certain policies I'll need for terraform builds

While you should only need to run this once (technically), I find that when I'm developing a new system, I spin up and tear things down repeatedly. So having those steps in one script makes that a lot simpler.

I generally build the script to be idempotent. This way, you can run it multiple times without concern that you're creating duplicate buckets, users, etc.

Upvotes: 18
