Chris F

Reputation: 16673

Terraform not using provided AWS profile?

I'll update later, but for now...

$ terraform --version
Terraform v0.12.17
+ provider.aws v3.23.0

I have an AWS profile set in my ~/.aws/credentials and ~/.aws/config files, like so...

~/.aws/credentials
[default]
aws_access_key_id=****
aws_secret_access_key=****

[myprofile]
aws_access_key_id=****
aws_secret_access_key=****

~/.aws/config
[default]
region=us-east-1
output=json

[profile myprofile]
region=us-east-1
output=json
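
As a sanity check, the AWS CLI can confirm that the profile resolves to the expected credentials (this command is a diagnostic suggestion on my part, not part of my original setup):

$ # shows which access key/region the profile resolves to, and where each value came from
$ aws configure list --profile myprofile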

In my Terraform plan, I have

provider "aws" {
  region  = "us-east-1"
  profile = "myprofile"
}

terraform {
  required_version = ">= 0.12.17, < 0.13"
}

resource "aws_vpc" "vpc" {
  cidr_block = "10.123.123.0/24"

  tags = {
    Name = "test_vpc"
  }
}

output "vpc_id" {
  value = aws_vpc.vpc.id
}

With that plan in place, I do

$ export AWS_PROFILE=myprofile
$ terraform apply
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

vpc_id = vpc-123456abced

As you can see, the apply succeeds; however, the VPC is created not in the myprofile account but in the default account. I know this because 1) I don't see it in the myprofile account, and 2) when I destroy the plan, the owner_id shown is the default account number. Why?
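
For reference, a quick way to confirm which account a given profile resolves to is an STS caller-identity check (a diagnostic suggestion on my part, not output from the run above):

$ aws sts get-caller-identity --profile myprofile   # Account field = myprofile account
$ aws sts get-caller-identity                       # Account field = default account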

Update: Note that if I add access_key and secret_key key/value pairs to my provider {} block, the VPC is created in the correct account. Of course I don't want to do this; I just wanted to prove that the configuration does work with the myprofile credentials.

Update: Note that the following commands return nothing (blank output)

$ echo $AWS_ACCESS_KEY_ID
$ echo $AWS_SECRET_ACCESS_KEY

and running env doesn't show those variables.
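
A broader check for anything AWS-related in the environment is also worth doing; note that AWS_PROFILE itself is still set from the export above, so (assuming nothing else is set) this prints just:

$ env | grep AWS_
AWS_PROFILE=myprofile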

Upvotes: 5

Views: 11054

Answers (2)

Meysam

Reputation: 718

For anyone else stuck on this and spending countless hours figuring it out: the profile you set in the provider "aws" block has nothing to do with the one in the backend "s3" block.

In short, you need to specify the profile in both places, like the following.

provider "aws" {
  profile = "my-aws-profile" # <- this is usual practice
  region  = "eu-central-1"
}

terraform {
  backend "s3" {
    profile        = "my-aws-profile" # <- notice the duplication here
    bucket         = "my-terraform-state-bucket"
    key            = "path/to/terraform.tfstate"
    region         = "eu-central-1"
    encrypt        = true
    dynamodb_table = "terraform-state-lock"
  }
}

The sad part is that you cannot even put the profile name in a locals block to avoid the duplication, since the backend block cannot reference variables or locals.

You may, however, be able to get around this using tools like Terragrunt!
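
A lighter-weight workaround (a sketch, assuming you simply omit profile from the backend "s3" block) is Terraform's partial backend configuration, where the missing value is supplied at init time:

$ terraform init -backend-config="profile=my-aws-profile"

The rest of the backend settings stay in the terraform {} block; only the duplicated profile moves to the command line.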

Upvotes: 4

Marcin

Reputation: 238081

Based on the comments.

The issue was caused by having the AWS_PROFILE environment variable set. According to the TF docs, environment variables have higher priority than the shared credentials/configuration file. The provider looks for credentials in this order:

  • Static credentials
  • Environment variables
  • Shared credentials/configuration file
  • CodeBuild, ECS, and EKS Roles
  • EC2 Instance Metadata Service (IMDS and IMDSv2)
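
So one way to confirm the diagnosis (a sketch, not a verbatim fix from the comments) is to clear the variable and let the profile argument in the provider block take effect:

$ unset AWS_PROFILE
$ terraform apply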

Upvotes: 10
