Jeremy

Reputation: 1845

Dynamically add origin/cache behavior to existing CloudFront distro

I believe I have hit a limitation in either the AWS API or Terraform's aws_cloudfront_distribution resource (Terraform v0.11). What I am trying to accomplish is dynamically adding origins and cache behaviors to an existing CloudFront distribution.

I know this is achievable via the AWS CLI. An example of that is shown below (adapted from the update-distribution documentation). My question is how I can simplify this implementation using Terraform (shown below the code snippet).

$ JSON_OUTPUT=$(aws cloudfront get-distribution-config --id=<MY-DISTROS-ID>)
$ echo $JSON_OUTPUT
{
  "ETag": "E2ABCDEFGHIJKL",
  "DistributionConfig": {
    "CacheBehaviors": {
      "Quantity": 0
    },
    "Origins": {
      "Items": [
        {
          ...origin-info-for-default-cache-behavior
        }
      ],
      "Quantity": 1
    },
    "Enabled": true,
    "DefaultCacheBehavior": {
      ...default-cache-behavior
    },
    ...more-cloudfront-stuff
  }
}
$ # remove the ETag
$ # add an entry to the DistributionConfig.Origins.Items array and bump Origins.Quantity
$ # set DistributionConfig.CacheBehaviors.Items = [{ the new cache behavior }] and CacheBehaviors.Quantity = 1
$ # save the DistributionConfig value to /tmp/new-distro-config.json
$ aws cloudfront update-distribution --id <MY-DISTROS-ID> --distribution-config file:///tmp/new-distro-config.json --if-match E2ABCDEFGHIJKL

This will successfully add a new cache behavior and origin to the existing CloudFront distro.

I would like to accomplish this same end goal using Terraform so that I can persist the state in S3. In pseudocode-Terraform, the code blocks below illustrate what I have in mind.

# original CloudFront distro
resource "aws_cloudfront_distribution" "my-distro" {
  aliases = ["${var.my_alias_domain}"]

  origin {
    ...origin-info-for-default-cache-behavior
  }

  enabled = true

  default_cache_behavior {
    ...default-cache-behavior
  }

  ...more-cloudfront-stuff

}

Then, in another Terraform module (and at some point in the future), something to the effect of:

data "terraform_remote_state" "CloudFront" {
  backend = "s3"

  config {
    acl    = "bucket-owner-full-control"
    bucket = "${var.CloudFront_BUCKET}"
    key    = "${var.CloudFront_KEY}"
    region = "${var.CloudFront_REGION}"
  }
}

resource "aws_cloudfront_distribution_origin" "next-origin" {
  cloudfront_distribution_id = "${data.terraform_remote_state.CloudFront.id}"

  origin {
    ...origin-info-for-default-cache-behavior
  }

  ordered_cache_behavior {
    ...origin-info-for-cache-behavior
  }
}

I'm well aware that aws_cloudfront_distribution_origin is not a real resource in Terraform; I am just writing this pseudocode to convey what I am trying to accomplish. This is a practical use case for breaking a service apart while keeping everything under the same domain.
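As an aside, for the remote-state lookup in the pseudocode to resolve, the config that owns the distribution would also need to export the ID as an output. A minimal sketch (the output name here is just a placeholder of mine):

# in the config that owns the distro, so remote-state consumers can read its ID
output "cloudfront_distribution_id" {
  value = "${aws_cloudfront_distribution.my-distro.id}"
}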

After writing this all out, I am assuming that I should just abandon Terraform in this scenario in favor of using the AWS CLI directly. However, if anyone else has accomplished this using Terraform, I would love to hear how.

Upvotes: 2

Views: 6755

Answers (2)

jmgreg31

Reputation: 11

I went ahead and took a crack at this in Terraform 0.12. Open to feedback!

dynamic "origin" {
    for_each = [for i in "${var.dynamic_custom_origin_config}" : {
      name                     = i.domain_name
      id                       = i.origin_id
      path                     = i.origin_path
      http_port                = i.http_port
      https_port               = i.https_port
      origin_keepalive_timeout = i.origin_keepalive_timeout
      origin_read_timeout      = i.origin_read_timeout
      origin_protocol_policy   = i.origin_protocol_policy
      origin_ssl_protocols     = i.origin_ssl_protocols
    }]
    content {
      domain_name = origin.value.name
      origin_id   = origin.value.id
      origin_path = origin.value.path
      custom_origin_config {
        http_port                = origin.value.http_port
        https_port               = origin.value.https_port
        origin_keepalive_timeout = origin.value.origin_keepalive_timeout
        origin_read_timeout      = origin.value.origin_read_timeout
        origin_protocol_policy   = origin.value.origin_protocol_policy
        origin_ssl_protocols     = origin.value.origin_ssl_protocols
      }
    }
  }

More details and additional examples can be found here: https://github.com/jmgreg31/terraform_aws_cloudfront
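For reference, the var.dynamic_custom_origin_config that feeds the dynamic block above is expected to be a list of objects whose keys match the for expression. A minimal sketch with placeholder values:

variable "dynamic_custom_origin_config" {
  description = "List of custom origins to add to the distribution"
  type        = any

  default = [
    {
      domain_name              = "api.example.com"    # placeholder
      origin_id                = "example-api-origin" # placeholder
      origin_path              = ""
      http_port                = 80
      https_port               = 443
      origin_keepalive_timeout = 5
      origin_read_timeout      = 30
      origin_protocol_policy   = "https-only"
      origin_ssl_protocols     = ["TLSv1.2"]
    },
  ]
}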

Upvotes: 1

Jeremy

Reputation: 1845

The answer is that this is not possible with Terraform v0.11 (as far as I can tell; still open to suggestions otherwise). I attempted to dynamically generate the origin/ordered_cache_behavior blocks by setting a count attribute, something like the following:

variable "s3_buckets_info" {
  type        = "list"
  description = "S3 buckets information"
  default = [{
    domain_name = "this.is.a.domain.com"
    app = "this.is.the.app"
    env = "this.is.the.env"
    path = "/match/this/path"
  }, {
    domain_name = "that.is.a.domain.com"
    app = "that.is.the.app"
    env = "that.is.the.env"
    path = "/match/that/path"
  }]
}

resource "aws_cloudfront_distribution" "eb_app" {
  aliases = ["${var.docs_alias_domain}"]

  origin {
    count       = "${length(var.s3_buckets_info)}"

    domain_name = "${lookup(var.s3_buckets_info[count.index + 1], "domain_name")}"
    origin_id   = "${lookup(var.s3_buckets_info[count.index + 1], "app")}-${lookup(var.s3_buckets_info[count.index + 1], "env")}"
    origin_path = ""

Then I ran into the issue that count is not an accepted argument inside an origin block. More Google searching turned up GitHub issues on the subject, which led me to the disheartening realization that leveraging this kind of behavior requires the for/for_each features that are (at the moment) only available in v0.12.0-beta1, which is advised against for production use because, well, it's a beta release.
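For what it's worth, once 0.12 is released, the same s3_buckets_info list should be expressible with a dynamic block. A rough, untested sketch of the 0.12 syntax:

resource "aws_cloudfront_distribution" "eb_app" {
  # ...aliases, enabled, default_cache_behavior, etc.

  dynamic "origin" {
    for_each = var.s3_buckets_info

    content {
      domain_name = origin.value.domain_name
      origin_id   = "${origin.value.app}-${origin.value.env}"
      origin_path = ""

      # ...plus an s3_origin_config / custom_origin_config block as required
    }
  }
}

The ordered_cache_behavior blocks could be generated from the same list in the same way.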

Upvotes: 1
