Reputation: 410
Using Terraform 0.12 with the remote state in an S3 bucket with DynamoDB locking.
It seems that a common pattern for Terraform automation goes more or less like this:
terraform plan -out=plan
terraform apply plan
But then, and maybe I'm overlooking something obvious, there's no guarantee that other terraform apply
invocations haven't updated the infrastructure between the plan and the apply above.
I know locking will prevent a concurrent run of terraform apply
while another one is running (when locking is enabled), but can I programmatically grab a "long-term" lock so the effective workflow looks like this?
(acquire long-term lock)
terraform plan -out=plan
terraform apply plan
(release long-term lock)
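For example, a rough sketch of what I have in mind (assuming the backend's DynamoDB lock table is called terraform-locks and a LockID value I pick myself; any other pipeline would also have to check for this item before running):

# grab the "long-term" lock; fails if the item already exists
aws dynamodb put-item \
  --table-name terraform-locks \
  --item '{"LockID": {"S": "my-stack/long-term"}}' \
  --condition-expression "attribute_not_exists(LockID)"

terraform plan -out=plan
terraform apply plan

# release the "long-term" lock
aws dynamodb delete-item \
  --table-name terraform-locks \
  --key '{"LockID": {"S": "my-stack/long-term"}}'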
Are there any other means to "protect" the infrastructure from concurrent/interdependent updates that I'm overlooking?
Upvotes: 2
Views: 2892
Reputation: 56839
You don't need this as long as you are only worrying about the state file changing.
If you provide a plan output file to apply and the state has changed since the plan was created, Terraform will error before making any changes, complaining that the saved plan is stale.
As an example:
$ cat <<'EOF' >> main.tf
> resource "random_pet" "bucket_suffix" {}
>
> resource "aws_s3_bucket" "example" {
>   bucket = "example-${random_pet.bucket_suffix.id}"
>   acl    = "private"
>
>   tags = {
>     ThingToChange = "foo"
>   }
> }
> EOF
$ terraform init
# ...
$ terraform apply
# ...
$ sed -i 's/foo/bar/' main.tf
$ terraform plan -out=plan
# ...
$ sed -i 's/bar/baz/' main.tf
$ terraform apply
# ...
$ terraform apply plan
Error: Saved plan is stale
The given plan file can no longer be applied because the state was changed by
another operation after the plan was created.
What it won't do is fail if something outside of Terraform has changed the infrastructure. So if, instead of applying Terraform again with baz
as the tag for the bucket, I had changed the tag on the bucket via the AWS CLI or the AWS console, then Terraform would have happily changed it back to bar
when applying the saved (now outdated) plan, because the state file itself never changed.
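If that kind of out-of-band drift is a concern, a separate scheduled drift check can at least surface it. A minimal sketch (my own, not something built into Terraform), assuming a POSIX shell CI job and relying on plan's -detailed-exitcode flag (0 = no changes, 1 = error, 2 = changes pending):

#!/usr/bin/env sh
# Drift check sketch: run against a configuration whose planned changes
# have all been applied, so any pending change indicates drift.
terraform init -input=false

terraform plan -detailed-exitcode -input=false
rc=$?

case "$rc" in
  0) echo "No drift detected." ;;
  2) echo "Drift (or unapplied changes) detected." >&2; exit 1 ;;
  *) echo "terraform plan failed." >&2; exit "$rc" ;;
esac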
Upvotes: 1