Reputation: 7082
What's the way to overcome the circular dependency problem in AWS CDK? Let's imagine I have a stack for an ECS cluster and a stack for an ECS service (several of them):
export class EcsClusterStack extends cdk.Stack {
public readonly cluster: ecs.Cluster
...
}
and
export class EcsServiceStack extends cdk.Stack {
constructor(scope: cdk.Construct, id: string, cluster: ecs.ICluster) { }
}
Now, I can compose my app:
const app = new cdk.App();
const vpc = new VpcStack(app, 'vpc');
const cluster = new EcsClusterStack(app, 'ecs', vpc.vpc);
const service = new EcsServiceStack(app, 'ecs-service', cluster.cluster);
Let's assume that, after that, I want to migrate my ECS service from one cluster to another. I would create another ECS cluster stack and pass it to the ECS service stack, but here is the thing: AWS CDK automatically generates Outputs/Exports in the cluster stack (cluster name, security group IDs, etc.). When I pass a different ICluster object down to the ECS service stack constructor, CDK tries to remove those Outputs/Exports from my previous cluster stack, and that fails on deploy, because an export cannot be removed from the cluster stack while the service stack still relies on it. I end up with an error like:
0 | 7:15:19 PM | UPDATE_IN_PROGRESS | AWS::CloudFormation::Stack | ecs User Initiated
0 | 7:15:26 PM | UPDATE_ROLLBACK_IN_P | AWS::CloudFormation::Stack | ecs Export ecs:ExportsOutputFnGetAttdefaultasgspotInstanceSecurityGroup2D2AFE98GroupId1084B7B2 cannot be deleted as it is in use by ecs-service
If there were a way to force the ECS service stack to deploy first, that would solve the problem, but it seems AWS CDK always deploys the dependency first (the ECS cluster in my case), and that fails the deployment. So is there a way to overcome this?
Upvotes: 10
Views: 10852
Reputation: 34704
AWS added an official workaround: Stack.exportValue(). It lets you manually keep an export alive in the stack that other stacks depend on when you need to remove the last reference to it:
this.exportValue(this.bucket.bucketName)
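Applied to this question's cluster stack, it would look roughly like the sketch below (the ec2.IVpc parameter and construct id are assumptions; export whichever attributes the service stack actually consumed):
export class EcsClusterStack extends cdk.Stack {
  public readonly cluster: ecs.Cluster

  constructor(scope: cdk.Construct, id: string, vpc: ec2.IVpc) {
    super(scope, id)
    this.cluster = new ecs.Cluster(this, 'cluster', { vpc })

    // Keep exporting the values the service stack used to import, even
    // after the service stops referencing them. Remove these calls in a
    // later deployment, once nothing imports the values anymore.
    this.exportValue(this.cluster.clusterName)
    this.exportValue(this.cluster.clusterArn)
  }
}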
One workaround I found is forcing CDK to split the deployment into two steps. First I deploy the stack that uses the export, so it no longer uses it. Then I deploy the stack that creates the export, to remove the export now that it's no longer used. Even if you specify a stack name on the command line, cdk deploy still deploys all the stacks it depends on, so I have to use the --exclusively flag.
cdk deploy --exclusively ecs-service
cdk deploy
In your case you would need a step before all that: create the new cluster stack and deploy it, so you have something new to import in ecs-service.
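A rough sketch of that intermediate app composition (the 'ecs-v2' stack id is made up; the old cluster stack stays around until its exports are unused):
const app = new cdk.App()
const vpc = new VpcStack(app, 'vpc')
// old cluster: keep it until nothing references its exports anymore
const oldCluster = new EcsClusterStack(app, 'ecs', vpc.vpc)
// new cluster to migrate to
const newCluster = new EcsClusterStack(app, 'ecs-v2', vpc.vpc)
// the service now imports from the new cluster stack
const service = new EcsServiceStack(app, 'ecs-service', newCluster.cluster)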
There is an issue for this on GitHub.
I created the following script to help automate the process. It phases out the exports in two deployments. On the first deployment it restores any removed exports but marks them to be removed. This allows the first deployment to safely remove its usage. On the second deployment the script actually removes the exports after no other stack is using them.
To use the script you have to separate the synth and deploy steps and run the script in between them.
cdk synth && python phase-out-ref-exports.py && cdk deploy --app cdk.out --all
It requires permissions to read the stack, so it will probably not work well with cross-account deployments.
# phase-out-ref-exports.py
import json
import os
import os.path

from aws_cdk import cx_api
import boto3
import botocore.exceptions


def handle_template(stack_name, account, region, template_path):
    # get outputs from existing stack (if it exists)
    try:
        # TODO handle different accounts
        print(f"Checking exports of {stack_name}...")
        stack = boto3.client("cloudformation", region_name=region).describe_stacks(StackName=stack_name)["Stacks"][0]
        old_outputs = {
            o["OutputKey"]: o
            for o in stack.get("Outputs", [])
        }
    except botocore.exceptions.ClientError as e:
        print(f"Unable to phase out exports for {stack_name} on {account}/{region}: {e}")
        return

    # load new template generated by CDK
    new_template = json.load(open(template_path))
    if "Outputs" not in new_template:
        new_template["Outputs"] = {}

    # get output names for both existing and new templates
    new_output_names = set(new_template["Outputs"].keys())
    old_output_names = set(old_outputs.keys())

    # phase out outputs that are in old template but not in new template
    for output_to_phase_out in old_output_names - new_output_names:
        # if we already marked it for removal last deploy, remove the output
        if old_outputs[output_to_phase_out].get("Description") == "REMOVE ON NEXT DEPLOY":
            print(f"Removing {output_to_phase_out}")
            continue

        if not old_outputs[output_to_phase_out].get("ExportName"):
            print(f"This is an export with no name, ignoring {old_outputs[output_to_phase_out]}")
            continue

        # add back removed outputs
        print(f"Re-adding {output_to_phase_out}, but removing on next deploy")
        new_template["Outputs"][output_to_phase_out] = {
            "Value": old_outputs[output_to_phase_out]["OutputValue"],
            "Export": {
                "Name": old_outputs[output_to_phase_out]["ExportName"]
            },
            # mark for removal on next deploy
            "Description": "REMOVE ON NEXT DEPLOY",
        }

    # replace template
    json.dump(new_template, open(template_path, "w"), indent=4)


def handle_assembly(assembly):
    for s in assembly.stacks:
        handle_template(s.stack_name, s.environment.account, s.environment.region, s.template_full_path)
    for a in assembly.nested_assemblies:
        handle_assembly(a.nested_assembly)


def main():
    assembly = cx_api.CloudAssembly("cdk.out")
    handle_assembly(assembly)


if __name__ == "__main__":
    main()
Upvotes: 14
Reputation: 1
You can use this.exportValue when you are removing the dependency between two stacks, to avoid this error:
this.exportValue(this.dynamodbTable.tableArn);
Take a look at my blog for a more detailed explanation.
Upvotes: -1
Reputation: 1642
Another workaround is to manually create the needed outputs in the cluster class, using the same exportName and value that the CloudFormation console shows for the existing export:
new cdk.CfnOutput(this, `${this.id}clusterOutput`, { value: "sample-value", exportName: "ecs:ExportsOutputFnGetAttdefaultasgspotInstanceSecurityGroup2D2AFE98GroupId1084B7B2"});
Upvotes: 0
Reputation: 2777
If you are using AWS CDK for infrastructure-as-code development, the blog post CDK tips, part 3 – how to unblock cross-stack references, written by Adam Ruka, who worked on CDK for almost 7 years at Amazon, is very good guidance.
If you are on CDK version 1.90.1 or later, the key is to use the exportValue method of the Stack class and do two consecutive deployments. The first deployment ensures the parent stack keeps the exported values available for the child stack, so the child can stop using them before the exports are removed.
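A condensed sketch of the two-phase flow, applied to the cluster stack from this question (identifiers are illustrative, not taken from the blog post):
// Deployment 1: inside the stack that currently owns the exports, keep them
// alive manually while the consuming stack stops referencing them, e.g.
this.exportValue(this.cluster.clusterName)

// Deployment 2: delete the exportValue() call and deploy again; the export
// is only removed once nothing imports it anymore.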
See Removing automatic cross-stack references - GitHub aws-cdk-lib for more details.
That being said, I don't deny that this is a shortfall or design flaw that CDK and CloudFormation should have resolved more gracefully.
Upvotes: 1
Reputation: 558
This was useful for me. In my case I was using CDK and wanted to remove one stack (let's say stackA) whose outputs were referenced as inputs in other stacks (let's say stackB and stackC). So I broke all cross-stack references by putting the values manually into the CloudFormation templates, then deployed those templates to every stack (stackB and stackC). Then I removed stackA in CDK, deployed, and it was successful.
Upvotes: 0
Reputation: 4278
I understood your issue as follows: create another cluster and migrate the TaskDefinition with its Service from the old cluster to the new one.
The thing is that your old task is still running, as the error is telling you (the security groups are still in use). Additionally, could it be that you are trying to re-use the security groups from the old cluster?
If not, then you need to instantiate a new EcsServiceStack, but with the new cluster argument.
Or, if you don't care about a "manual blue/green deployment", you can destroy the old EcsServiceStack.
Then, after modifying the code, rerunning the CDK commands should work.
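For example, the new service pointed at the new cluster might look like this (the 'ecs-v2' and 'ecs-service-v2' stack ids are made-up names, not from the question):
const newCluster = new EcsClusterStack(app, 'ecs-v2', vpc.vpc)
const newService = new EcsServiceStack(app, 'ecs-service-v2', newCluster.cluster)
// keep the old 'ecs-service' stack until traffic has moved, then destroy it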
Upvotes: 0