jmarks

Reputation: 41

Terraform: how to attach volumes from previous EC2 instance?

I have a terraform file that creates an EC2 instance along with a couple of volumes:

resource "aws_instance" "generic" {
  count                  = "${lookup(var.INSTANCE_COUNT, var.SERVICE)}"
  ami                    = "${var.AMI}"
  instance_type          = "${lookup(var.INSTANCE_TYPE, var.BLD_ENV)}"
  subnet_id              = "${element(var.SUBNET,count.index)}"
  vpc_security_group_ids = ["${var.SECURITY_GROUP}"]
  key_name               = "${var.AWS_KEY_NAME}"
  availability_zone      = "${element(var.AWS_AVAILABILITY_ZONE,count.index)}"
  iam_instance_profile   = "${var.IAM_ROLE}"

  root_block_device {
    volume_type           = "gp2"
    delete_on_termination = "${var.DELETE_ROOT_ON_TERMINATION}"
  }

  ebs_block_device {
    device_name           = "${lookup(var.DEVICE_NAME,"datalake")}"
    volume_type           = "${lookup(var.DATALAKE_VOLUME_TYPE, var.SERVICE)}"
    volume_size           = "${var.NONDATADIR_VOLUME_SIZE}"
    delete_on_termination = "${var.DELETE_ROOT_ON_TERMINATION}"
    encrypted             = true
  }

  ebs_block_device {
    device_name           = "${lookup(var.DEVICE_NAME,"datalake_logdir")}"
    delete_on_termination = "${var.DELETE_ROOT_ON_TERMINATION}"
    volume_type           = "${lookup(var.LOGDIR_VOLUME_TYPE, var.SERVICE)}"
    volume_size           = "${var.NONDATADIR_VOLUME_SIZE}"
    encrypted             = true
  }

  volume_tags {
    Name = "${lookup(var.TAGS, "Name")}-${count.index}"
  }
}

If the EC2 instance terminates, how can I attach the existing volumes to the new EC2 instance created when I rerun Terraform? I was hoping that Terraform could somehow tell from the state file that the instance is gone but the volumes aren't, and therefore attach them to the newly created EC2 instance.

Thanks in advance!

Upvotes: 4

Views: 12968

Answers (3)

clyon

Reputation: 161

First, separate your instances, volumes and volume attachments like so:

resource "aws_instance" "generic" {
  ami           = "${var.ami_id}"
  instance_type = "${var.instance_type}"
  count         = "${var.node_count}"
  subnet_id     = "${var.subnet_id}"
  key_name      = "${var.key_pair}"
  
  root_block_device {
    volume_type           = "gp2"
    volume_size           = 20
    delete_on_termination = false
  }

  vpc_security_group_ids = ["${var.security_group_ids}"]
}

resource "aws_ebs_volume" "vol_generic_data" {
  size              = 120
  count             = "${var.node_count}"
  type              = "gp2"
}

resource "aws_volume_attachment" "generic_data_vol_att" {
  device_name = "/dev/xvdf"
  volume_id   = "${element(aws_ebs_volume.vol_generic_data.*.id, count.index)}"
  instance_id = "${element(aws_instance.generic.*.id, count.index)}"
  count       = "${var.node_count}"
}

Then, if your instance gets manually terminated, TF should detect that the instance is gone but still referenced in TF state, and should try to recreate it and attach the existing volume. I have not tried this. However, I have tried importing an existing instance and its volume into TF state, so the same logic should apply for importing just the volume alone and attaching it to an existing TF-managed instance. You should be able to simply import the existing volume like so:

terraform import module.generic.aws_ebs_volume.vol_generic_data vol-0123456789abcdef0

Then TF will attach the volume or update the state if already attached.

Upvotes: 3

Mostafa Wael

Reputation: 3838

After a lot of searching and reading documentation, I came to a solution to this problem.

Here, I will illustrate with a simple example how to preserve your ebs volumes using terraform, i.e. you can create and destroy instances and they will be attached to the same ebs volume each time:

  1. I have created a new Terraform folder in which I have written a script that creates an ebs volume with a specific tag (a minimal sketch of such a script is shown after this list).
  2. In my main script I have added a data source to search for ebs volumes with specific tags:
    data "aws_ebs_volume" "test" {
      filter {
        name   = "volume-type"
        values = ["gp2"]
      }
    
      most_recent = true
    }
    locals { # save the volume id value in this local
      ebs_vol_id = "${data.aws_ebs_volume.test.id}"
    }
    output "volume_id" { # print the volume id value
      value = "${local.ebs_vol_id}"
    }
  3. I have used this local (which now holds the volume id) in my aws_volume_attachment resource:
# attach the instance to the volume found by the data source
resource "aws_volume_attachment" "ebs_att" {
  device_name  = "/dev/sdh"
  volume_id    = local.ebs_vol_id
  instance_id  = aws_instance.ec2_instance.id
  skip_destroy = true # (if true) don't detach the volume from the instance at destroy time; just remove the attachment from Terraform state
}
  4. Holaaa, now every time you run terraform apply or terraform destroy, your ec2 instance will connect to the same ebs volume.
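
For reference, the separate volume-creation script from step 1 could look roughly like the following sketch (the region, size, and tag value below are just placeholders):

# separate Terraform configuration (its own folder and state) that creates
# the persistent ebs volume; all values here are illustrative
provider "aws" {
  region = "us-east-1"
}

resource "aws_ebs_volume" "persistent_data" {
  availability_zone = "us-east-1a"
  size              = 20
  type              = "gp2"

  tags = {
    Name = "my-persistent-data" # the tag the data source in the main script can match on
  }
}

output "volume_id" {
  value = aws_ebs_volume.persistent_data.id
}

If you tag the volume like this, you could also add a tag:Name filter to the data source in the main script instead of matching only on volume-type and most_recent.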

Discussion:

  1. This is something of a workaround to achieve the intended behavior.
  2. You can achieve the same thing by using terraform import, but I think this way is easier.
  3. The main drawback of this solution is that we now have two terraform states, which is not a recommended setup.

Upvotes: 0

Yevgeniy Brikman

Reputation: 9361

  1. Create the EBS volume using a separate resource: aws_ebs_volume.
  2. Configure the Instance to attach the volume during boot. For example, you could have a User Data script that uses the attach-volume command of the AWS CLI (see the sketch after this list).
  3. If the Instance crashes, or if you want to replace it to deploy new code, you run terraform apply, and the replacement Instance will boot up and reattach the same EBS Volume.
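
A rough sketch of steps 1 and 2, assuming a /dev/xvdf device, a hard-coded region, and an instance profile that allows ec2:AttachVolume (the resource names and instance type are placeholders):

resource "aws_ebs_volume" "data" {
  availability_zone = "us-east-1a" # must match the instance's AZ
  size              = 100
  type              = "gp2"
}

resource "aws_instance" "app" {
  ami               = var.ami_id
  instance_type     = "t3.medium"
  availability_zone = aws_ebs_volume.data.availability_zone

  # the instance profile must allow ec2:AttachVolume
  iam_instance_profile = var.iam_instance_profile

  # User Data script that reattaches the same volume on every boot
  user_data = <<-EOF
    #!/bin/bash
    INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    # region is hard-coded here as an example; match it to your provider region
    aws ec2 attach-volume \
      --region us-east-1 \
      --volume-id ${aws_ebs_volume.data.id} \
      --instance-id "$INSTANCE_ID" \
      --device /dev/xvdf
  EOF
}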

If you want the Instance to be able to recover itself automatically, it gets trickier.

  • You can configure your Instance with Auto Recovery (a minimal alarm sketch follows this list), but that only detects if the actual VM dies; it won't detect if the app running on that VM dies (e.g., crashes, runs out of memory).
  • A better approach is to use an Auto Scaling Group (ASG) with a Load Balancer. If any of the Instances in the ASG fail the Load Balancer health checks, they will be replaced automatically. The catch is that an Instance can only attach an EBS Volume in the same Availability Zone (AZ), but an ASG can launch Instances in any AZ, so an Instance might launch in an AZ without any EBS Volume! Solving this, especially in a way that supports zero-downtime deployment, typically requires going outside of Terraform. For example, the Auto Scaling Group module in the Gruntwork IaC Library implements this using multiple ASGs and a Python script activated via an external data source.
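
For the Auto Recovery option, a minimal sketch of the usual CloudWatch alarm pattern, assuming the instance from the sketch above (the alarm name, region, and periods are placeholders):

# recover the instance automatically if the underlying host/VM fails its system status checks
resource "aws_cloudwatch_metric_alarm" "auto_recover" {
  alarm_name          = "ec2-auto-recover"
  namespace           = "AWS/EC2"
  metric_name         = "StatusCheckFailed_System"
  statistic           = "Minimum"
  comparison_operator = "GreaterThanThreshold"
  threshold           = 0
  period              = 60
  evaluation_periods  = 2

  dimensions = {
    InstanceId = aws_instance.app.id
  }

  # built-in EC2 recover action; only covers system-level (hardware) failures
  alarm_actions = ["arn:aws:automate:us-east-1:ec2:recover"]
}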

Upvotes: 1
