Reputation: 2350
I am deploying a simple infrastructure to Azure with Terraform via a CI pipeline in Azure DevOps.
terraform init, terraform plan, and terraform apply run, and for the first run everything works fine.
When I add a subsequent resource, apply fails with errors like this:
│ Error: A resource with the ID "/subscriptions/578e0f86-0491-4137-9a4e-3a3c0ff28e91/resourceGroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/stihldevlift-cluster" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_kubernetes_cluster" for more information.
Terraform just created this resource in the previous run. What's causing it to forget that and to treat the resource as if it already existed and needs to be imported?
Note: I am on Azure, and per security policy we're required to set skip_provider_registration = true. I don't know if this is causing issues.
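For reference, a minimal sketch of the provider block with that flag (version pinning and every other setting omitted):

provider "azurerm" {
  features {}

  # Required by security policy
  skip_provider_registration = true
}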
In similar questions the "fix" has been to simply destroy everything and start over. I do not have that luxury. I need to understand why it's happening and how to fix it in place. These are production resources.
Interestingly, I can spin all this up in a separate dev space and reproduce the error there. When I remove the resources in question, terraform apply does recognize them and deletes them accordingly. So does that mean it's a false positive? Even if it is, the errors are stopping my CI/CD process.
Here is a full dump of the pipeline errors.
Error: A resource with the ID "/subscriptions/0-0491-4137-9a4e-3a3c0ff28e91/resourceGroups/DEV-Lift_test-Dev_CentralUS/providers/Microsoft.OperationsManagement/solutions/ContainerInsights(testdevliftLogAnalyticsWorkspace-12879201083717606753)" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_log_analytics_solution" for more information.
with azurerm_log_analytics_solution.container_insights,
on 02-aks-container-insights.tf line 19, in resource "azurerm_log_analytics_solution" "container_insights":
19: resource "azurerm_log_analytics_solution" "container_insights" {
##[error]Terraform command 'apply' failed with exit code '1'.
##[error]╷
Error: A resource with the ID "/subscriptions/0-0491-4137-9a4e-3a3c0ff28e91/resourceGroups/DEV-Lift_test-Dev_CentralUS/providers/Microsoft.ContainerService/managedClusters/testdevlift-cluster" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_kubernetes_cluster" for more information.
with azurerm_kubernetes_cluster.k8s,
on 02-aks-cluster-definition.tf line 4, in resource "azurerm_kubernetes_cluster" "k8s":
4: resource "azurerm_kubernetes_cluster" "k8s" {
╷
Error: A resource with the ID "/subscriptions/0-0491-4137-9a4e-3a3c0ff28e91/resourceGroups/DEV-Lift_test-Dev_CentralUS/providers/Microsoft.OperationsManagement/solutions/ContainerInsights(testdevliftLogAnalyticsWorkspace-12879201083717606753)" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_log_analytics_solution" for more information.
with azurerm_log_analytics_solution.container_insights,
on 02-aks-container-insights.tf line 19, in resource "azurerm_log_analytics_solution" "container_insights":
19: resource "azurerm_log_analytics_solution" "container_insights" {
Upvotes: 0
Views: 669
Reputation: 3875
Without seeing the logs from the first run that created the resource, it is hard to say.
One thing to note, though, is that behind the scenes some complex resources require multiple API calls to be created. If one of those calls fails, the resource can still end up created in the cloud but never make it into the state file. This typically happens when there's a permissions error (one of the API calls isn't allowed) or a timeout of some sort.
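If you want to confirm whether that's what happened here, check whether the resource ever landed in the state file. Using the resource address from your error output (and assuming you run this against the same backend/state the pipeline uses), something like:

terraform state list | grep kubernetes_cluster
terraform state show azurerm_kubernetes_cluster.k8s

If the address doesn't show up, the resource exists in Azure but not in state, which matches the error you're seeing.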
When this happens, you can manually delete just the resource that is conflicting, or you can import that resource into the state and run another terraform apply to finish configuring it.
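For example, using the resource addresses from your error output, where the second argument is the full Azure resource ID exactly as it appears in the error message:

terraform import azurerm_kubernetes_cluster.k8s "<full resource ID from the error>"
terraform import azurerm_log_analytics_solution.container_insights "<full resource ID from the error>"

Note that the import has to run against the same backend/state file your pipeline uses; otherwise the next apply will still see the resource as unmanaged.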
This should be a fairly rare occurrence, though, and the logs from the initial run should help you identify the root cause.
Upvotes: 1