Reputation: 477
I am trying to provision an AWS ElastiCache Global Datastore (global replication group). This is done in two stages. In the first stage, the primary region's aws_elasticache_replication_group and the aws_elasticache_global_replication_group are provisioned. In the second stage, the secondary region's aws_elasticache_replication_group is provisioned and attached to the Global Datastore via the global_replication_group_id attribute. The secondary region is provisioned and attached to the global replication group successfully. However, the secondary cluster ends up with automatic failover set to true, even though the documentation says this should default to false. Terraform plan shows automatic failover as false, yet after terraform apply runs it is true. So if I rerun terraform apply, Terraform tries to change the secondary cluster's automatic failover back to false (the documented default), and the apply fails because this attribute cannot be modified on a member of a global datastore.
number_cache_clusters is set to 2 in both the primary and the secondary. I also tried with just one cluster in the secondary, with the same result.
module.elasticache_redis_global.aws_elasticache_replication_group.redis_cache_cluster_sec: Modifying… [id=sp360commercial-pdx-dev-test4-redis]

Error: error updating ElastiCache Replication Group (sp360commercial-pdx-dev-test4-redis): error requesting modification: InvalidParameterValue: Cluster [sp360commercial-pdx-dev-test4-redis] is part of a global cluster [ldgnf-sp360commercial-iad-dev-test4-global]. Request rejected.
    status code: 400, request id: b15e578b-7906-412f-aef8-1d038c9fbb81

  on …/…/…/modules/aws/elasticache-global/redis.tf line 1, in resource "aws_elasticache_replication_group" "redis_cache_cluster_sec":
   1: resource "aws_elasticache_replication_group" "redis_cache_cluster_sec" {
If I explicitly set automatic_failover_enabled = true in the secondary cluster's configuration, I get an error saying that the attribute conflicts with global_replication_group_id:
Error: ConflictsWith

  on …/…/…/modules/aws/elasticache-global/redis.tf line 4, in resource "aws_elasticache_replication_group" "redis_cache_cluster_sec":
   4: global_replication_group_id = "${local.globalstore_prefix}-global"

"global_replication_group_id": conflicts with automatic_failover_enabled
Terraform version: v0.12.24; AWS provider version: 3.37.0. I also tried Terraform v0.12.31 with AWS provider 3.58, but the issue persists. I use a config.yml file as input for this code. A shell script converts it to a JSON file before it is consumed by the Terraform resources (roughly as sketched after the config below). Here is the file content:
storage:
  elasticache:
    instances:
      test4:
        nodeType: cache.r5.large
        applyImmediately: true
        numShards: 1
        numReplicas: 2
        atRestEncryption: true
        transitEncryption: true
        multiAz: true
        globalDatastore:
          primaryRegion: us-east-1
          secondaryRegion: us-west-2
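For reference, here is roughly how local.config is derived from the converted JSON. The file name and path are assumptions for illustration, not the exact output of my script:

locals {
  # Hypothetical: config.json is the output of the shell script that
  # converts config.yml; the path and file name are assumed here.
  raw_config = jsondecode(file("${path.module}/config.json"))

  # Select the per-instance settings used by the lookup() calls below.
  config = local.raw_config["storage"]["elasticache"]["instances"]["test4"]
}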
Primary and Global Cluster
resource "aws_elasticache_global_replication_group" "redis_global_datastore" {
count = 1
global_replication_group_id_suffix = "${local.cluster_prefix}-global"
primary_replication_group_id = aws_elasticache_replication_group.redis_cache_cluster.id
}
resource "aws_elasticache_replication_group" "redis_cache_cluster" {
replication_group_id = "${local.cluster_prefix}-redis"
replication_group_description = "Provisioned using Terraform"
number_cache_clusters = 2
node_type = lookup(local.config, "nodeType", "cache.t2.micro")
port = 6379
engine_version = lookup(local.config, "engineVersion", "5.0.6")
parameter_group_name = contains(keys(local.config), "parameters") ? aws_elasticache_parameter_group.parameter_group[0].name : local.default_parameter_group
subnet_group_name = aws_elasticache_subnet_group.subnet_group.name
security_group_ids = [aws_security_group.redis_sg.id]
maintenance_window = lookup(local.config, "maintenanceWindow", "sun:02:00-sun:04:00")
automatic_failover_enabled = lookup(local.config, "numReplicas", 1) > 1 || lookup(local.config, "numShards", 1) > 1 ? true : false
apply_immediately = lookup(local.config, "applyImmediately", true)
at_rest_encryption_enabled = lookup(local.config, "atRestEncryption", false)
transit_encryption_enabled = lookup(local.config, "transitEncryption", false)
auth_token = var.auth_token
multi_az_enabled = lookup(local.config, "multiAz", false)
dynamic "cluster_mode" {
for_each = lookup(local.config, "numShards", 1) > 1 ? [true] : []
content {
replicas_per_node_group = lookup(local.config, "numReplicas", 1)
num_node_groups = lookup(local.config, "numShards", 2)
}
}
lifecycle {
prevent_destroy = true
ignore_changes = [parameter_group_name]
}
}
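Since the second stage needs the global datastore's ID, one small addition to the first stage (a sketch; the output name is my own) would be to export it instead of rebuilding the "<prefix>-global" string by hand:

# Sketch: expose the full global datastore ID (including the AWS-assigned
# prefix, e.g. "ldgnf-…") so the secondary stage can consume it directly.
output "global_replication_group_id" {
  value = aws_elasticache_global_replication_group.redis_global_datastore[0].global_replication_group_id
}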
Secondary Cluster
resource "aws_elasticache_replication_group" "redis_cache_cluster_sec" {
replication_group_id = "${local.cluster_prefix}-redis"
replication_group_description = "Provisioned using Terraform"
global_replication_group_id = "${local.globalstore_prefix}-global"
auth_token = var.auth_token
subnet_group_name = aws_elasticache_subnet_group.subnet_group.name
security_group_ids = [aws_security_group.redis_sg.id]
}
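One mitigation I am considering (an untested sketch, not something the provider documents for this case) is to let Terraform ignore the drift on automatic_failover_enabled for the secondary, so a re-apply no longer attempts the modification that AWS rejects:

resource "aws_elasticache_replication_group" "redis_cache_cluster_sec" {
  replication_group_id          = "${local.cluster_prefix}-redis"
  replication_group_description = "Provisioned using Terraform"
  global_replication_group_id   = "${local.globalstore_prefix}-global"
  auth_token                    = var.auth_token
  subnet_group_name             = aws_elasticache_subnet_group.subnet_group.name
  security_group_ids            = [aws_security_group.redis_sg.id]

  lifecycle {
    # AWS manages automatic failover for members of a global datastore,
    # so ignore the drift instead of trying to revert it to the default.
    ignore_changes = [automatic_failover_enabled]
  }
}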
Upvotes: 0
Views: 2305
Reputation: 1
This is what worked for me (stealing what you had for my own implementation :P)
resource "aws_elasticache_replication_group" "redis_cache_cluster_sec" {
count = var.existing_global_replication_group_id != "" ? 1 : 0
replication_group_id = var.cluster_name
replication_group_description = "Secondary cluster provisioned by Terraform"
global_replication_group_id = var.existing_global_replication_group_id
number_cache_clusters = var.number_cache_clusters
subnet_group_name = aws_elasticache_subnet_group.subnet_group.name
security_group_ids = var.security_group_ids
automatic_failover_enabled = true
dynamic "cluster_mode" {
for_each = var.num_node_groups > 1 ? [true] : []
content {
replicas_per_node_group = var.replicas_per_node_group
num_node_groups = null
}
}
}
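Presumably this runs on a provider version where the ConflictsWith validation from the question no longer applies, since automatic_failover_enabled is set explicitly alongside global_replication_group_id. For completeness, here is a hypothetical way to feed existing_global_replication_group_id from the primary stage's state; the backend settings, module path, and values are my own illustrations, not part of the answer above:

# Hypothetical wiring for the secondary region's stage; bucket, key,
# module path, and names are illustrative assumptions.
data "terraform_remote_state" "primary" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"    # assumed state bucket
    key    = "redis/primary.tfstate" # assumed state key for the first stage
    region = "us-east-1"
  }
}

module "redis_secondary" {
  # Assumes this configuration is applied with a us-west-2 provider.
  source = "../modules/aws/elasticache-global" # assumed module path

  cluster_name                         = "myapp-usw2-redis"
  existing_global_replication_group_id = data.terraform_remote_state.primary.outputs.global_replication_group_id
  number_cache_clusters                = 2
  security_group_ids                   = ["sg-0123456789abcdef0"] # placeholder
  num_node_groups                      = 1
  replicas_per_node_group              = 1
}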
Upvotes: 0