smcracraft

Reputation: 493

Don't want to provide keys to Terraform even without Vault - how to?

I'm not particularly happy with having to include the (redacted) keys below in order to spin up an EC2 instance with SSH access on AWS.

Have I reached the limit of what I can do without Vault?

Getting past Stack Overflow's automated checks can be problematic when a large amount of code is posted relative to the "explanatory" text.

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "~> 2.0"
    }
  }

  backend "s3" {
    bucket = "terraformredactedbucket"
    key = "global/s3/terraform.tfstate"
    region = "us-west-2"
    dynamodb_table = "terraformredactedtable"
    encrypt = true
    access_key = "redacted_access_key"
    secret_key = "redacted_secret_key"
 }
}

provider "aws" {
  region = "us-west-2"
  shared_credentials_file = "/Users/redacted/.aws/credentials"
  profile                 = "redacted"
}

resource "aws_key_pair" "redactedtoo-key" {
  key_name = "redactedtoo-key"
  public_key = "redacted_public_key"
}

resource "aws_instance" "redacted-sandbox-1" {
 ami            = "ami-0800fc0fa715fdcfe"
 instance_type  = "t2.micro"
 security_groups = [
   "redacted-sg"
 ]
 key_name = "redactedtoo-key"
 tags  = {
    Name = "redacted-sandbox-1"
    DeploymentState = "Sandbox"
    Function = "RedactedSandBox"
    Project = "Terraform Sandbox"
 }
}

Upvotes: 0

Views: 1347

Answers (1)

Martin Atkins

Reputation: 74249

The ideal way to configure credentials for the AWS provider and the S3 backend is not to mention authentication inside your configuration at all, and instead to rely on the standard mechanisms for session-wide configuration of your credentials, which Terraform's AWS provider and S3 backend both support.

First, let's remove the authentication settings from the configuration and leave the configuration focused on describing only the locations of objects. This means the configuration only describes what Terraform is managing and not who or what is running Terraform:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "~> 2.0"
    }
  }

  backend "s3" {
    bucket         = "terraformredactedbucket"
    key            = "global/s3/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "terraformredactedtable"
    encrypt        = true
  }
}

provider "aws" {
  region = "us-west-2"
}

AWS has various ways to configure authentication. From your example it seems that you're using a shared credentials file in the default location, but with a non-default profile from that file. You can select the profile using the environment variable AWS_PROFILE:

export AWS_PROFILE="redacted"

If you want to be explicit about the credentials file location, rather than relying on the default, you could also set that in the environment:

export AWS_SHARED_CREDENTIALS_FILE=/Users/redacted/.aws/credentials

Alternatively, you can set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to directly specify credentials for Terraform to use.

The above assumes the common situation where you will use the same credentials for both the backend and the AWS provider. In more complicated situations a common approach is to use assume_role so that you can use the same credentials for multiple use-cases, creating an indirection between the credentials used and the AWS principal that's actually taking the actions.
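
For example, a minimal sketch of the assume_role approach might look like the following. The role ARNs and session name here are hypothetical placeholders, and the exact argument names can vary between Terraform and provider versions:

provider "aws" {
  region = "us-west-2"

  # Credentials still come from the shared credentials file or environment;
  # the provider then assumes this role to perform its actions.
  assume_role {
    role_arn     = "arn:aws:iam::111111111111:role/terraform-deploy" # hypothetical
    session_name = "terraform"
  }
}

terraform {
  backend "s3" {
    bucket         = "terraformredactedbucket"
    key            = "global/s3/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "terraformredactedtable"
    encrypt        = true

    # The S3 backend can also assume a role for state operations.
    role_arn = "arn:aws:iam::111111111111:role/terraform-state" # hypothetical
  }
}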

If you can't use assume_role and still need separate settings for the provider and the backend, then you'll need to include some settings in either the backend or the provider to specify the difference. Even then, you shouldn't need to put the access key ID and secret key directly in your Terraform configuration: instead, you can use a shared credentials file for both and configure the backend and provider to use different profile names.
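
For instance, a sketch using two hypothetical profile names from the same shared credentials file might look like this:

terraform {
  backend "s3" {
    bucket         = "terraformredactedbucket"
    key            = "global/s3/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "terraformredactedtable"
    encrypt        = true
    profile        = "state-admin" # hypothetical profile for state storage
  }
}

provider "aws" {
  region  = "us-west-2"
  profile = "app-deploy" # hypothetical profile for managing the resources
}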

Upvotes: 1
