Reputation: 136
A simple command like
cdktf destroy DatasetCrawlerStack
says there is nothing to destroy:
DatasetCrawlerStack Initializing the backend...
DatasetCrawlerStack Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
DatasetCrawlerStack - Using previously-installed hashicorp/aws v5.49.0
DatasetCrawlerStack Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
DatasetCrawlerStack No changes. No objects need to be destroyed.
Either you have not created any objects yet or the existing objects were
already deleted outside of Terraform.
DatasetCrawlerStack
Destroy complete! Resources: 0 destroyed.
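So Terraform apparently thinks there is nothing in the state. In case it's relevant, I can inspect the state the stack actually uses by running terraform state list from the synthesized stack directory (cdktf.out/stacks/DatasetCrawlerStack, assuming the default synth output location):

cd cdktf.out/stacks/DatasetCrawlerStack
terraform state list

I'd expect that to come back empty, given the destroy output above.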
Yet when I go to deploy,
cdktf deploy DatasetCrawlerStack
I get an AlreadyExistsException:
DatasetCrawlerStack Initializing the backend...
DatasetCrawlerStack Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
DatasetCrawlerStack - Using previously-installed hashicorp/aws v5.49.0
DatasetCrawlerStack Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
DatasetCrawlerStack Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
  # aws_glue_crawler.REDACTED-cdktf-crawler (REDACTED-cdktf-crawler) will be created
  + resource "aws_glue_crawler" "REDACTED-cdktf-crawler" {
      + arn           = (known after apply)
      + database_name = "REDACTED"
      + id            = (known after apply)
      + name          = "REDACTED-cdktf-crawler"
      + role          = "arn:aws:iam::822795565729:role/service-role/AWSGlueServiceRole-REDACTED-test-datasets"
      + tags_all      = (known after apply)

      + s3_target {
          + path = "s3://REDACTED-lake/_datasets/_latest/"
        }
    }
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
DatasetCrawlerStack Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
DatasetCrawlerStack Enter a value: yes
DatasetCrawlerStack aws_glue_crawler.REDACTED-cdktf-crawler: Creating...
DatasetCrawlerStack ╷
│ Error: creating Glue Crawler (REDACTED-cdktf-crawler): AlreadyExistsException: 822795565729:REDACTED-cdktf-crawler already exists
│
│ with aws_glue_crawler.REDACTED-cdktf-crawler (REDACTED-cdktf-crawler),
│ on cdk.tf.json line 35, in resource.aws_glue_crawler.REDACTED-cdktf-crawler (REDACTED-cdktf-crawler):
│ 35: }
│
╵
0 Stacks deploying 1 Stack done 0 Stacks waiting
Invoking Terraform CLI failed with exit code 1
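The crawler really does exist in the account (consistent with the error); something like

aws glue get-crawler --name REDACTED-cdktf-crawler

should return it. I know I could probably adopt the existing crawler with terraform import from the synthesized stack directory (the import ID for aws_glue_crawler is the crawler name, if I'm reading the provider docs right):

terraform import aws_glue_crawler.REDACTED-cdktf-crawler REDACTED-cdktf-crawler

but I'd rather understand why destroy and deploy disagree about what is in the state.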
I think my code is simple-ish:
import boto3
from cdktf import S3Backend, TerraformStack
from cdktf_cdktf_provider_aws import provider, glue_crawler
# from imports.aws.glue_crawler import GlueCrawler
from pydantic_settings import BaseSettings


class StackConfig(BaseSettings):
    """Settings configured via environment variables."""

    aws_region: str = "us-east-2"
    crawler_role: str = "arn:aws:iam::822795565729:role/service-role/AWSGlueServiceRole-REDACTED-test-datasets"
    database_name: str = "REDACTED_datalake"
    s3_target_path: str = "s3://REDACTED-lake/_datasets/_latest/"


class DatasetMetadataCrawlerStack(TerraformStack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)
        config = StackConfig()

        provider.AwsProvider(self, f"{id}-AwsProvider", region=config.aws_region)

        boto3.client("sts").get_caller_identity()["Account"]
        account_alias = boto3.client("iam").list_account_aliases()["AccountAliases"][0]
        state_bucket = f"{account_alias}-tfstate-{config.aws_region}"
        state_key = "a2p-commons/dataset_metadata_crawler/terraform.tfstate"

        S3Backend(
            scope,
            bucket=state_bucket,
            key=state_key,
            encrypt=True,
            region=config.aws_region,
        )

        glue_crawler.GlueCrawler(
            self,
            "REDACTED-cdktf-crawler",
            name="REDACTED-cdktf-crawler",
            role=config.crawler_role,
            database_name=config.database_name,
            s3_target=[
                glue_crawler.GlueCrawlerS3Target(
                    path=config.s3_target_path,
                )
            ],
        )
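One thing I'm second-guessing: I pass scope (the parent App) to S3Backend instead of self (the stack). If the backend has to be scoped to the stack itself, then maybe the synthesized stack isn't using the S3 state I think it is, which would explain destroy and deploy disagreeing with what's actually in AWS. In other words, perhaps it should be this (an untested guess on my part):

S3Backend(
    self,  # the stack itself, not the parent App scope
    bucket=state_bucket,
    key=state_key,
    encrypt=True,
    region=config.aws_region,
)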
Any help would be greatly appreciated. Am I thinking about Terraform/CDKTF the wrong way?
Upvotes: 0
Views: 135