Reputation: 849
I created an RDS cluster using the code below:
module "rds_cluster_db" {
source = "./private-terraform-aws-rds-cluster"
name = "my-user-db"
db_user = "db_user"
db_name = "db_name"
instance_type = "db.t4g.small"
}
To delete this resource, I just removed the module block. But the run failed with the error below, because the source module contains a providers.tf file.
Error:
│ Error: Provider configuration not present
│
│ To work with module.rds_cluster_db.postgresql_role.iam_user
│ (orphan) its original provider configuration at
│ module.rds_cluster_db.provider["registry.terraform.io/cyrilgdn/postgresql"]
│ is required, but it has been removed. This occurs when a provider
│ configuration is removed while objects created by that provider still exist
│ in the state. Re-add the provider configuration to destroy
│ module.rds_cluster_db.postgresql_role.iam_user (orphan),
│ after which you can remove the provider configuration again.
./private-terraform-aws-rds-cluster/main.tf:
module "rds_cluster" {
source = "cloudposse/rds-cluster/aws"
version = "1.9.0"
zone_id = var.zone_id
security_groups = var.security_groups
vpc_id = var.vpc_id
subnets = var.subnets
instance_type = var.instance_type
context = module.this.context
}
resource "postgresql_role" "iam_user" {
name = var.db_user
login = true
roles = ["rds_iam"]
create_database = true
create_role = true
inherit = true
depends_on = [
module.rds_cluster
]
}
./private-terraform-aws-rds-cluster/providers.tf:
provider "postgresql" {
host = module.rds_cluster.endpoint
port = var.db_port
database = var.db_name
username = var.admin_user
password = var.admin_password
superuser = false
expected_version = var.engine_version
}
terraform {
required_version = ">= 0.13.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 3.56"
}
postgresql = {
source = "cyrilgdn/postgresql"
version = ">= 1.16.0"
}
}
}
Terraform version:
Terraform v1.5.5
on darwin_arm64
+ provider registry.terraform.io/cyrilgdn/postgresql v1.22.0
+ provider registry.terraform.io/hashicorp/archive v2.4.2
+ provider registry.terraform.io/hashicorp/aws v5.42.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.27.0
+ provider registry.terraform.io/hashicorp/null v3.2.2
+ provider registry.terraform.io/hashicorp/random v3.6.0
+ provider registry.terraform.io/hashicorp/tls v4.0.5
I have seen SO posts with similar errors, but I'm not really sure how I can remove only the resources without also removing the provider configuration, given that removing the module "rds_cluster_db" block removes both at once.
Upvotes: 1
Views: 352
Reputation: 74574
The situation you've encountered here is why the Terraform documentation recommends against declaring provider configurations in non-root modules; see Legacy Shared Modules with Provider Configurations:
In Terraform v0.10 and earlier there was no explicit way to use different configurations of a provider in different modules in the same configuration, and so module authors commonly worked around this by writing provider blocks directly inside their modules, making the module have its own separate provider configurations separate from those declared in the root module.

However, that pattern had a significant drawback: because a provider configuration is required to destroy the remote object associated with a resource instance as well as to create or update it, a provider configuration must always stay present in the overall Terraform configuration for longer than all of the resources it manages. If a particular module includes both resources and the provider configurations for those resources then removing the module from its caller would violate that constraint: both the resources and their associated providers would, in effect, be removed simultaneously.
[...]
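For reference, the pattern the documentation recommends instead is to keep the provider block in the root module and pass the configuration into the child module explicitly, so the provider configuration can outlive any one module call. A minimal sketch of that pattern (the alias name and connection variables here are illustrative, not taken from your configuration):

# Root module: the provider configuration lives here, so it can
# remain in the configuration even after the module call is removed.
provider "postgresql" {
  alias    = "main"
  host     = var.db_host       # hypothetical root-level variables
  username = var.admin_user
  password = var.admin_password
}

module "rds_cluster_db" {
  source = "./private-terraform-aws-rds-cluster"

  providers = {
    postgresql = postgresql.main
  }
}

With that arrangement the child module declares only its required_providers block and contains no provider block of its own.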
There are two main ways to get out of this "trap", both of which are a little fiddly.
If there's only a small number of resources declared in the child module then it might be easiest to just tell Terraform to "forget" those objects entirely (that is, remove them from the Terraform state without actually deleting the real objects) and then delete them manually in the remote system.
If you want to take this approach, the only step in Terraform is to tell it to forget about the affected resource instances:
terraform state rm module.rds_cluster_db.postgresql_role.iam_user
Once you've done that, you'll need to manually find the corresponding object in your Postgres server and delete it using Postgres commands.
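For example, a minimal cleanup might look like the following, assuming the role name matches var.db_user and that the admin credentials from your providers.tf can connect; the endpoint, usernames, and database name are placeholders:

# DROP OWNED BY removes anything the role owns in this database;
# it's a no-op if the role owns nothing. Run it in each database
# where the role might own objects before dropping the role itself.
psql --host=my-user-db.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com \
     --username=admin_user \
     --dbname=db_name \
     -c 'DROP OWNED BY "db_user";' \
     -c 'DROP ROLE IF EXISTS "db_user";'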
This strategy is easiest if you are intending to entirely destroy the server, because deleting the entire Postgres server will implicitly delete everything that was created in it, and so you won't have to clean up individual objects manually.
If that doesn't seem like a good fit for your situation, refer to the next section instead.
The core problem here is that when your module contains both a resource block and the provider block that it belongs to, there isn't any easy way to remove the resource block while preserving the provider block, but Terraform needs the provider block to know which Postgres server to connect to in order to delete this object.
You can break this conflict by temporarily changing the configuration to include only the needed provider configuration.
To do that, make a directory with just a single empty .tf file. For example, you could call it empty.tf. The filename doesn't really matter; it just needs to be there so that Terraform will understand that you intend this directory to be used as a Terraform module. For the sake of example, I'm going to choose to place this module in a subdirectory called empty, and so the empty file would be at empty/empty.tf relative to the root module.
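In a Unix-like shell, that setup is just (using the example directory name chosen above):

mkdir empty            # directory for the placeholder module
touch empty/empty.tf   # empty file so Terraform treats the directory as a module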
With that empty module set up, you can change the root module to call the empty module instead of the real rds_cluster_db module, and also pass a configuration for the postgres provider so Terraform can know which provider configuration to use with the existing objects tracked in the state:
terraform {
  required_version = ">= 0.13.0"

  required_providers {
    postgresql = {
      source  = "cyrilgdn/postgresql"
      version = ">= 1.16.0"
    }
  }
}

provider "postgresql" {
  alias = "temp_cleanup"

  # (Configure here whatever minimal configuration
  # is sufficient to connect to the Postgres server
  # to run the destroy actions.)
}

module "rds_cluster_db" {
  source = "./empty"

  providers = {
    postgresql = postgresql.temp_cleanup
  }
}
This configuration resolves the conflict by moving the provider configuration up into the root module and then replacing the existing module with an empty one. Now Terraform can see both that there's a provider configuration to use and that all of the resources are removed, and so they need to be destroyed using that provider configuration. (Terraform tracks the nested objects by their containing module, and module.rds_cluster_db now refers to the empty module instead of the original module.)
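Concretely, the cleanup is then the usual workflow; the plan should propose destroying only the orphaned role:

terraform init    # re-resolve modules and providers after the source change
terraform plan    # expect module.rds_cluster_db.postgresql_role.iam_user to be destroyed
terraform apply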
Once you've applied this configuration, and thus the objects previously declared in the module have been destroyed, you can remove the temporary provider configuration and the module "rds_cluster_db" block.
Upvotes: 1