octopi

Reputation: 128

Terraform Failure configuring LB attributes

I've followed the first answer on this post on StackOverflow, but I get this error:

Failure configuring LB attributes: InvalidConfigurationRequest: Access Denied for bucket: myproject-log. Please check S3bucket permission status code: 400

This is my code:

s3_bucket

data "aws_elb_service_account" "main" {}

resource "aws_s3_bucket" "bucket_log" {
  bucket = "${var.project}-log"
  acl    = "log-delivery-write"

  policy = <<POLICY
{
  "Id": "Policy",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::${var.project}-log/AWSLogs/*",
      "Principal": {
        "AWS": [
          "${data.aws_elb_service_account.main.arn}"
        ]
      }
    }
  ]
}
POLICY
}

load balancer

resource "aws_lb" "vm_stage" {
  name = "${var.project}-lb-stg"
  internal           = false
  load_balancer_type = "application"
  subnets         = [aws_subnet.subnet_1.id, aws_subnet.subnet_2.id, aws_subnet.subnet_3.id]
  security_groups = [aws_security_group.elb_project_stg.id]
  access_logs {
    bucket  = aws_s3_bucket.bucket_log.id
    prefix  = "lb-stg"
    enabled = true
  }
  tags = {
    Name = "${var.project}-lb-stg"
  }
}

Upvotes: 6

Views: 4299

Answers (5)

Andy Riedel

Reputation: 21

This is a complete working example.

data "aws_caller_identity" "current" {}
data "aws_elb_service_account" "elb_account_id" {}

resource "aws_s3_bucket" "lb_logs" {
  bucket = "${local.name}-loadbalancer-logs"
}

resource "aws_s3_bucket_policy" "lb_logs" {
  bucket = aws_s3_bucket.lb_logs.id
  policy = data.aws_iam_policy_document.allow_lb.json
}

data "aws_iam_policy_document" "allow_lb" {
  statement {
    effect = "Allow"
    resources = [
      "arn:aws:s3:::${aws_s3_bucket.lb_logs.bucket}/AWSLogs/${data.aws_caller_identity.current.account_id}/*",
    ]
    actions = ["s3:PutObject"]
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::${data.aws_elb_service_account.elb_account_id.id}:root"]
    }
  }

  statement {
    effect = "Allow"
    resources = [
      "arn:aws:s3:::${aws_s3_bucket.lb_logs.bucket}/AWSLogs/${data.aws_caller_identity.current.account_id}/*",
    ]
    actions = ["s3:PutObject"]
    principals {
      type        = "Service"
      identifiers = ["delivery.logs.amazonaws.com"]
    }
    condition {
      test     = "StringEquals"
      variable = "s3:x-amz-acl"
      values   = ["bucket-owner-full-control"]
    }
  }

  statement {
    effect = "Allow"
    resources = [
      "arn:aws:s3:::${aws_s3_bucket.lb_logs.bucket}",
    ]
    actions = ["s3:GetBucketAcl"]
    principals {
      type        = "Service"
      identifiers = ["delivery.logs.amazonaws.com"]
    }
  }
}

resource "aws_alb" "lb" {
  name               = "${local.name}-lb"
  load_balancer_type = "application"
  subnets            = var.subnet_ids
  security_groups    = [
    aws_security_group.lb.id
  ]

  access_logs {
    bucket  = aws_s3_bucket.lb_logs.id
    enabled = true
  }

  lifecycle {
    create_before_destroy = true
  }

  tags = {
    Name = "${local.name}-lb"
  }
}

Upvotes: 0

David Webster

Reputation: 2333

I struggled with this as well. The entire Terraform bucket policy that worked for me is below.

data "aws_iam_policy_document" "elb_bucket_policy" {

  statement {
    effect = "Allow"
    resources = [
      "arn:aws:s3:::unique-bucket-name/${local.prefix}/AWSLogs/${local.application_account_id}/*",
    ]
    actions = ["s3:PutObject"]
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::${local.elb_account_id}:root"]
    }
  }

  statement {
    effect = "Allow"
    resources = [
      "arn:aws:s3:::unique-bucket-name/${local.prefix}/AWSLogs/${local.application_account_id}/*",
    ]
    actions = ["s3:PutObject"]
    principals {
      type        = "Service"
      identifiers = ["logdelivery.elb.amazonaws.com"]
    }
  }

  statement {
    effect = "Allow"
    resources = [
      "arn:aws:s3:::unique-bucket-name/${local.prefix}/AWSLogs/${local.application_account_id}/*",
    ]
    actions = ["s3:PutObject"]
    principals {
      type        = "Service"
      identifiers = ["logdelivery.elb.amazonaws.com"]
    }
    condition {
      test     = "StringEquals"
      variable = "s3:x-amz-acl"
      values   = ["bucket-owner-full-control"]
    }
  }
}
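
Note that a policy document by itself does nothing until it is attached to the bucket. A minimal sketch of the attachment, assuming a hypothetical bucket resource named aws_s3_bucket.elb_logs:

resource "aws_s3_bucket_policy" "elb_bucket_policy" {
  # Hypothetical bucket resource name; substitute your own.
  bucket = aws_s3_bucket.elb_logs.id
  policy = data.aws_iam_policy_document.elb_bucket_policy.json
}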

Upvotes: 0

Dantheman91

Reputation: 894

Just going to drop this here, since this cross-applies to another question that was asked.

This took me a while to figure out, but the S3 bucket has two requirements per the documentation:

  • The bucket must be located in the same Region as the load balancer.
  • Amazon S3-Managed Encryption Keys (SSE-S3) is required. No other encryption options are supported.

Source: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html

While the error message makes it seem like a permissions issue, it may actually be a problem with the bucket having the wrong encryption type. In my case, the issue was that my bucket was unencrypted.

I updated the bucket to SSE-S3 encryption and no longer received the error:

resource "aws_s3_bucket" "s3_access_logs_bucket" {
  bucket = var.access_logs_bucket_name
  acl = "private"
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }

  versioning {
    enabled = true
  }

}

And just because, here's the policy I used:

data "aws_elb_service_account" "main" {}


data "aws_iam_policy_document" "s3_lb_write" {
  statement {
    principals {
      identifiers = [data.aws_elb_service_account.main.arn]
      type        = "AWS"
    }

    actions = ["s3:PutObject"]

    resources = [
      "${aws_s3_bucket.s3_access_logs_bucket.arn}/*"
    ]
  }
}

resource "aws_s3_bucket_policy" "load_balancer_access_logs_bucket_policy" {
  bucket = aws_s3_bucket.s3_access_logs_bucket.id
  policy = data.aws_iam_policy_document.s3_lb_write.json
}

Upvotes: 3

ir0h

Reputation: 361

As per this post, I was able to resolve this issue by disabling KMS and using SSE-S3 for bucket encryption. Also, there are additional permissions listed in the AWS docs.
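
For reference, switching the bucket to SSE-S3 in Terraform looks roughly like this (a sketch using the standalone encryption resource from AWS provider v4+; the bucket resource name is hypothetical):

resource "aws_s3_bucket_server_side_encryption_configuration" "lb_logs" {
  bucket = aws_s3_bucket.lb_logs.id  # hypothetical bucket resource

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"  # SSE-S3; do not set a KMS key here
    }
  }
}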

Upvotes: 0

Geoff

Reputation: 516

Official AWS Docs

https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html

Solution

Reference the docs above and change your bucket policy to reflect what the documentation states. The logging is actually done by AWS, not by your roles or IAM users, so you need to give AWS permission to do this. That's why the docs show statements in the policy that specify the delivery.logs.amazonaws.com principal: that principal is the AWS logging service. Even though your bucket is hosted on AWS, AWS does not give itself access to your bucket by default. You have to explicitly grant access to AWS if you want their services to work.
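
As a sketch, the statement granting that service principal write access might look like the following (the bucket name is hypothetical; check the linked docs for the exact statements your region and load balancer type require):

data "aws_iam_policy_document" "allow_log_delivery" {
  statement {
    effect    = "Allow"
    actions   = ["s3:PutObject"]
    resources = ["arn:aws:s3:::my-lb-logs/AWSLogs/*"]  # hypothetical bucket name

    # The AWS log delivery service, not one of your own roles or users.
    principals {
      type        = "Service"
      identifiers = ["delivery.logs.amazonaws.com"]
    }
  }
}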

Upvotes: 1
