Reputation: 121
I'm still a noob, so please go easy on me!
I have an EKS cluster running with this node group config:
resource "aws_eks_node_group" "this" {
cluster_name = aws_eks_cluster.this.name
node_group_name = local.cluster_name
node_role_arn = aws_iam_role.eks_node.arn
subnet_ids = aws_subnet.this.*.id
instance_types = ["t2.micro"]
scaling_config {
desired_size = 2
max_size = 4
min_size = 2
}
# Optional: Allow external changes without Terraform plan difference
lifecycle {
ignore_changes = [scaling_config[0].desired_size]
}
depends_on = [
aws_iam_role_policy_attachment.eks_AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.eks_AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.eks_AmazonEC2ContainerRegistryReadOnly,
]
}
My scaling config is:
scaling_config {
  desired_size = 2
  max_size     = 4
  min_size     = 2
}
and I can successfully deploy 2 nginx replicas with the following config:
resource "kubernetes_deployment" "nginx" {
metadata {
name = "nginx"
labels = {
App = "Nginx"
}
}
spec {
replicas = 2
selector {
match_labels = {
App = "Nginx"
}
}
template {
metadata {
labels = {
App = "Nginx"
}
}
spec {
container {
image = "nginx:1.7.8"
name = "nginx"
port {
container_port = 80
}
resources {
limits = {
cpu = "0.5"
memory = "512Mi"
}
requests = {
cpu = "250m"
memory = "50Mi"
}
}
}
}
}
}
}
But when I scale my replicas to 4, the pods are created but stay in a Pending state with the following reason:
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  18s (x2 over 108s)  default-scheduler  0/2 nodes are available: 2 Too many pods.
I tried ignoring desired_size in scaling_config, but that didn't resolve the issue.
I believe I'm missing a crucial piece of understanding about how scaling_config, the scaling group it creates, and Kubernetes Deployment replicas relate to each other. Any guidance to help me understand what's going on would be highly appreciated. Thanks a lot in advance.
https://github.com/ehabshaaban/deploy-nginx/tree/eks
Upvotes: 4
Views: 4653
Reputation: 132
According to the message 0/2 nodes are available: 2 Too many pods., neither node can accept any more pods. In EKS, the maximum number of pods that can be placed on a node depends on a couple of things: the instance type and the CNI. For the default limits, you can refer to this document: eni-max-pods. A t2.micro only allows 4 pods per node, and a few of those slots are already taken by system pods such as aws-node and kube-proxy, which is why your extra nginx replicas stay Pending.
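As a rough sketch of the formula behind that document (the ENI and IP-per-ENI counts below are the published values for t2.micro; verify them for your own instance type):

# With the AWS VPC CNI, the pod limit per node is:
#   max_pods = ENIs * (IPv4 addresses per ENI - 1) + 2
# t2.micro has 2 ENIs and 2 IPv4 addresses per ENI:
locals {
  t2_micro_max_pods = 2 * (2 - 1) + 2 # = 4 pods per node
}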
To solve your issue, you can increase the desired_size from 2 to 3 so the pending pods get placed on the new node, or switch to a larger instance type that allows more pods per node.
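For example, a minimal sketch of the change in your node group (the numbers are illustrative; size them to your workload):

scaling_config {
  desired_size = 3 # one more t2.micro node to make room for the pending pods
  max_size     = 4
  min_size     = 2
}

Note that with ignore_changes = [scaling_config[0].desired_size] in your lifecycle block, Terraform will ignore an edit to desired_size, so you would need to drop that rule or scale the node group outside Terraform for this change to take effect.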
Upvotes: 7