Reputation: 5545
I have a multi-stack CDK setup: the core stack contains the VPC and the EKS cluster, and a "Tenant" stack deploys some S3 buckets, Kubernetes namespaces, and other tenant-related resources.
cdk ls
displays all the existing stacks as expected.
- eks-stack
- tenant-a
- tenant-b
If I want to deploy only a single tenant stack, I run cdk deploy tenant-a. To my surprise, I see that in my k8s cluster the manifests of tenant-a and tenant-b were both deployed, and not just tenant-a as I expected.
The CDK output on the CLI correctly reports that tenant-a was deployed and doesn't mention tenant-b. I also see that most of the changes happened inside the eks stack and not in the tenant stack, since I am passing the cluster as a reference.
# app.py
# ...
# EKS
eks_cluster_stack = EksStack(
    app,
    "eks-stack",
    stack_log_level="INFO",
)

# Tenant specific stacks
tenants = ['tenant-a', 'tenant-b']
for tenant in tenants:
    tenant_stack = TenantStack(
        app,
        tenant,
        stack_log_level="INFO",
        cluster=eks_cluster_stack.eks_cluster,
        tenant=tenant,
    )
--
# Inside TenantStack.py, a manifest is applied to the cluster
self.cluster.add_manifest(f'db-job-{self.tenant}', {
    "apiVersion": 'v1',
    "kind": 'Pod',
    "metadata": {"name": 'mypod'},
    "spec": {
        "serviceAccountName": "bootstrap-db-job-access-ssm",
        "containers": [
            {
                "name": 'hello',
                "image": 'amazon/aws-cli',
                "command": ['magic', 'stuff', '....'],
            }
        ]
    }
})
I found out that when I import the cluster by its attributes instead of by construct reference, e.g.
self.cluster = Cluster.from_cluster_attributes(
    self, 'cluster',
    cluster_name=cluster,
    open_id_connect_provider=eks_open_id_connect_provider,
    kubectl_role_arn=kubectl_role,
)
I can deploy tenant stack a and b separately, and my core eks stack stays untouched. However, I have read that it's recommended to use references, since CDK can then automatically create dependencies and detect circular dependencies.
Upvotes: 3
Views: 1468
Reputation: 1902
There is an option to exclude dependencies: use cdk deploy tenant-a --exclusively so that only the requested stack is deployed, not its dependencies.
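For example, with the stack names from the question (the -e short form is equivalent):

cdk deploy -e tenant-a            # deploy only tenant-a
cdk deploy -e tenant-a tenant-b   # several stacks, still without the eks-stack dependency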
Upvotes: 7