Reputation: 22262
I got the following error during a terraform plan, which occurred in my pipeline:
Error: Error locking state: Error acquiring the state lock: ConditionalCheckFailedException: The conditional request failed
Lock Info:
ID: 9db590f1-b6fe-c5f2-2678-8804f089deba
Path: ...
Operation: OperationTypePlan
Who: ...
Version: 0.12.25
Created: 2020-05-29 12:52:25.690864752 +0000 UTC
Info:
Terraform acquires a state lock to protect the state from being written
by multiple users at the same time. Please resolve the issue above and try
again. For most commands, you can disable locking with the "-lock=false"
flag, but this is not recommended.
This is strange because I'm sure there is no other concurrent plan running. Is there a way to deal with this? How should I remove the lock?
Upvotes: 219
Views: 314015
Reputation: 28800
I experienced this issue when running a terraform command against a Google Cloud resource within a GitHub Actions workflow.
The issue started after I cancelled a running terraform apply command.
When I tried to run other terraform commands, like terraform apply -refresh-only, I got the error below:
Run terraform apply -var-file env/dev-01-default.tfvars -refresh-only -auto-approve
/runner/_work/_temp/8cdffd5c-b7a1-446d-a294-c1e34b63cde4/terraform-bin apply -var-file env/dev-01-default.tfvars -refresh-only -auto-approve
╷
│ Error: Error acquiring the state lock
│
│ Error message: writing "gs://my-dev/k8s/default/dev-01.tflock" failed:
│ googleapi: Error 412: At least one of the pre-conditions you specified did
│ not hold., conditionNotMet
│ Lock Info:
│ ID: 1688478112611453
│ Path: gs://my-dev/k8s/default/dev-01.tflock
│ Operation: OperationTypeApply
│ Who: runner@pm-runners-m4h7k-gcxqk
│ Version: 1.1.8
│ Created: 2023-07-04 13:41:52.517578046 +0000 UTC
│ Info:
│
│
│ Terraform acquires a state lock to protect the state from being written
│ by multiple users at the same time. Please resolve the issue above and try
│ again. For most commands, you can disable locking with the "-lock=false"
│ flag, but this is not recommended.
╵
Error: Terraform exited with code 1.
Error: Process completed with exit code 1.
Here's how I fixed this:
The failed run left behind a lockfile called dev-01.tflock at gs://my-dev/k8s/default/dev-01.tflock, with a lock-info ID of 1688478112611453.
One way is to run the command below:
terraform force-unlock -force <lock-info-id>
Another way is to locate the lockfile in the Google Cloud Storage bucket where the statefile is stored; the lockfile sits in the same directory as the statefile. Delete the lockfile.
After that you can rerun the terraform command you wanted to run and it will run fine.
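As a sketch of that second option: the lockfile sits next to the statefile, at the state path plus a .tflock suffix (the path below is taken from the error message above; the gsutil command is commented out since it requires gsutil and access to the bucket):

```shell
# The lockfile lives next to the statefile: "<state path>.tflock"
# (path taken from the error message above)
STATE="gs://my-dev/k8s/default/dev-01"
echo "${STATE}.tflock"   # → gs://my-dev/k8s/default/dev-01.tflock
# gsutil rm "${STATE}.tflock"   # delete the stale lock (requires gsutil and bucket access)
```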
Upvotes: 5
Reputation: 1
You should try running the plan/apply a few times over a span of about 30 minutes. If you are constantly getting the same lock ID, the lock is stale (left over from an interrupted run). Or rather just check the Created date to see how old the lock is. To fix this, you run the following:
terragrunt force-unlock --terragrunt-working-dir foundation 64136d7d-4139-eb1c-c0e5-
You run it from the same location where you normally run plan/apply. Don't forget to replace the folder foundation with the folder for which you are getting the lock, as well as the lock ID itself.
Upvotes: 0
Reputation: 22262
It looks like the lock persisted after the previous pipeline run. I had to remove it using the following command:
terraform force-unlock -force <ID>
Alternatively, relaunch the plan with the -lock=false option:
terraform plan -lock=false ...
Note: you can get the ID from the error in the previous pipeline:
│ Error: Error acquiring the state lock
│
│ Error message: ...
│ Lock Info:
│ ID: 9db590f1-b6fe-c5f2-2678-8804f089deba
│ ...
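In a pipeline, the ID can also be pulled out of the captured error output automatically; a minimal sketch with sed (the ERR variable here stands in for the real captured output):

```shell
# Extract the lock ID from captured terraform error output (sketch)
ERR='Error acquiring the state lock
Lock Info:
  ID: 9db590f1-b6fe-c5f2-2678-8804f089deba
  Path: ...'
LOCK_ID=$(printf '%s\n' "$ERR" | sed -n 's/.*ID:[[:space:]]*\([0-9a-f-]*\).*/\1/p' | head -n1)
echo "$LOCK_ID"   # → 9db590f1-b6fe-c5f2-2678-8804f089deba
# terraform force-unlock -force "$LOCK_ID"   # then release the lock
```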
Upvotes: 167
Reputation: 31
If you have messed up the lock ID and the DynamoDB records, you can put an empty JSON document in the Info field and run the command with an empty string to unlock the state:
terraform force-unlock ""
Upvotes: 0
Reputation: 403
In my situation, the error was caused by running out of memory in the middle of a terraform command, which caused the command to be killed mid-run.
I was running out of memory because the Terraform command was running inside a Docker container on my Mac.
To fix this I needed to increase the Docker VM's memory limit, in addition to running force-unlock to free up the locked state.
Upvotes: 0
Reputation: 27
Using -lock=false on any cloud provider should be done with caution: it removes the safeguard put in place to prevent conflicts, especially if you're working in a team.
The usual cause of state lock errors is one or more Terraform processes that are still running. For example, a terraform console session is active while a plan or apply is executed at the same time.
Try listing the Terraform processes currently active and kill them.
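For example (the [t] in the grep pattern stops the grep process from matching itself; the kill is commented out because the PID depends on your machine):

```shell
# List any running Terraform processes
ps aux | grep '[t]erraform' || echo "no terraform processes found"
# Kill a stale one by its PID (second column), e.g.:
# kill -9 12345
```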
Upvotes: 1
Reputation: 3606
I ran into the same issue when I was using Terraform with an S3 and DynamoDB backend.
Reason: I forcefully terminated an apply process, which then prevented me from acquiring the lock again.
Solution: when it fails, the error returns a lock ID; with that ID we can unlock forcefully:
terraform force-unlock -force 6638e010-8fb0-46cf-966d-21e806602f3a
Upvotes: 21
Reputation: 1030
For anyone running into this issue when running Terraform against AWS, make sure you're running against the expected profile. I ran into this issue today and realised that I needed to switch my profile:
$ export AWS_PROFILE=another_one
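A quick sanity check before running Terraform (another_one is the placeholder profile name from above; the aws call is commented out since it requires the AWS CLI and valid credentials):

```shell
# Point the AWS SDK/CLI at the intended profile
export AWS_PROFILE=another_one
echo "using profile: $AWS_PROFILE"
# aws sts get-caller-identity   # confirm which account/role is actually active
```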
Upvotes: 4
Reputation: 149
I got the state lock error because I was missing the s3:DeleteObject and dynamodb:DeleteItem permissions.
I had get and put permissions, but not delete. So my CircleCI IAM user could check for locks and add locks, but couldn't remove locks when it was done updating state. (Maybe I had watched tutorials that used remote state but didn't use state locking.)
These steps fixed the issue:
1. Run terraform force-unlock <error message lock ID> (I got this step from Falk Tandetzky's and veben's answers).
2. Add the "s3:DeleteObject" permission for the resource "arn:aws:s3:::mybucket/path/to/my/key".
3. Add the "dynamodb:DeleteItem" permission for the resource "arn:aws:dynamodb:*:*:table/mytable".
All the permissions, with examples, are listed in the Terraform S3 backend documentation:
https://www.terraform.io/language/settings/backends/s3
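Based on that documentation, a minimal IAM policy that also covers lock cleanup would look roughly like this (bucket, key, and table names are the placeholders used above; check the linked docs for the current list of required actions):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::mybucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::mybucket/path/to/my/key"
    },
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem"],
      "Resource": "arn:aws:dynamodb:*:*:table/mytable"
    }
  ]
}
```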
Upvotes: 2
Reputation: 177
I ran into the same issue in AWS with our pipeline while we were transitioning to GitHub Actions. Our Terraform uses DynamoDB for lock-state persistence and S3 to hold the actual Terraform statefile. When I looked at the lock state in DynamoDB, the md5 digest column was empty, and the key did not have an -md5 suffix, just the plain key.
Note: Do not try this if you are not familiar with Terraform State File.
What I did: I cloned the said lock-state item and renamed it with the -md5 suffix, looked at my S3 statefile for the hash, and copied it into the digest column of the DynamoDB table. Then I renamed the old lock-state item to a different key so that it won't be looked up.
That's it for me.
Again, this may not work for everybody but this worked for me.
Upvotes: 0
Reputation: 41
In my case it was an AWS CLI session issue. I logged in again using the gimme-aws-creds command and then retried; it worked.
Upvotes: 0
Reputation: 171
I had the same issue and tried different commands, terraform force-unlock -force and terraform force-unlock <lock_id>, but they didn't work for me. A quick workaround for the problem is to kill that particular process and run again:
ps aux | grep terraform
and sudo kill -9 <process_id>
Upvotes: 17
Reputation: 19
GCP: In my case the issue was resolved after changing the permission to "Storage Object Admin" on the Google Cloud Storage bucket.
Upvotes: 0
Reputation: 6600
This error usually appears when one process fails running terraform plan
or terraform apply
. For example if your network connection interrupts or the process is terminated before finishing. Then Terraform "thinks" that this process is still working on the infrastructure and blocks other processes from working with the same infrastructure and state at the same time in order to avoid conflicts.
As stated in the error message, you should make sure that there is really no other process still running (e.g. from another developer or from some build-automation). If you force-unlock in such a situation you might screw up your terraform state, making it hard to recover.
If there is no other process still running: run this command
terraform force-unlock 9db590f1-b6fe-c5f2-2678-8804f089deba
(replace the ID with the one mentioned in the error message)
If you are not sure whether there is another process running and you are worried that you might make things worse, I would recommend waiting for some time (like an hour), trying again, and then trying again after maybe 30 minutes. If the error still persists, it is likely that there really is no other process, and it is safe to unlock as described above.
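The wait-and-retry advice can be sketched as a small shell helper (retry_with_wait is a hypothetical name; in a real pipeline you would pass terraform plan and a delay of 1800 seconds or so):

```shell
# Retry a command up to N times, sleeping between attempts (sketch)
retry_with_wait() {
  attempts=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then return 0; fi
    if [ "$i" -lt "$attempts" ]; then sleep "$delay"; fi
    i=$((i + 1))
  done
  return 1
}
# Real use would be: retry_with_wait 3 1800 terraform plan
retry_with_wait 3 1 true && echo "plan succeeded"   # → plan succeeded
```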
Upvotes: 309
Reputation: 61
If terraform force-unlock gives the error "Local state cannot be unlocked by another process", find the running process and kill it to remove the lock.
For Windows: open Task Manager and look for a terraform console process.
For Linux: grep for the terraform process and kill the terraform console process using kill -9.
Upvotes: 6