user630702

Reputation: 3097

AWS EKS - Only 2 pod can be launched - Too many pods error

Each t2.micro node should be able to run 4 pods, according to this article and the output of the command kubectl get nodes -o yaml | grep pods.

But I have two nodes, and I can launch only 2 pods. The 3rd pod gets stuck with the following error message.

Could it be that the application is using too many resources, and as a result no more pods can be launched? If that were the case, though, I would expect the error to indicate insufficient CPU or memory.

Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  33s (x2 over 33s)  default-scheduler  0/2 nodes are available: 2 Too many pods.
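One quick way to narrow this down (a sketch, assuming kubectl access to the cluster; the node name is a placeholder) is to compare each node's allocatable pod count with the pods already scheduled on it:

# Show how many pod slots each node advertises
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.allocatable.pods

# List everything already running on a given node, across all namespaces
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node-name>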

Upvotes: 16

Views: 16692

Answers (3)

I solved the problem by creating a Fargate profile for my app namespace (it's mandatory).

eksctl create fargateprofile --cluster your-cluster --region your-region --name example-profile --namespace your-namespace

This link helped me: fargate-profile-issue
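To confirm the profile was created and actually matches your namespace (cluster and region names are placeholders), you can list the cluster's Fargate profiles:

# Verify the new profile and its namespace selectors
eksctl get fargateprofile --cluster your-cluster --region your-region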

Upvotes: 0

jmcgrath207

Reputation: 2027

I had a similar problem; it turned out I didn't have my new namespace in my eksctl config file.

fargateProfiles:
  - name: fp-core
    selectors:
      - namespace: default
      - namespace: kube-system
      - namespace: flux-system
  - name: fp-airflow
    selectors:
      - namespace: airflow
  - name: fp-airflow2
    selectors:
      - namespace: airflow2

Then, to update the live configuration, run:

eksctl create fargateprofile -f dev.yaml
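One follow-up note (the cluster and deployment names below are placeholders): pods that were already stuck in Pending before the profile existed may not be picked up by the Fargate scheduler automatically, so you may need to recreate them after verifying the profile, for example:

# Check the profiles now attached to the cluster
eksctl get fargateprofile --cluster your-cluster

# Recreate the stuck workload so it gets admitted onto Fargate
kubectl rollout restart deployment your-deployment -n airflow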

Upvotes: 0

Jonas

Reputation: 128787

According to the AWS documentation on IP addresses per network interface per instance type, a t2.micro has only 2 network interfaces and 2 IPv4 addresses per interface. So you are right: only 4 IP addresses.

But EKS deploys system pods, e.g. for CoreDNS and kube-proxy, so some IP addresses on each node are already allocated.
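To make the arithmetic explicit, here is a minimal sketch using the formula commonly cited for the AWS VPC CNI max-pods default, plugged with the t2.micro limits above:

# maxPods = ENIs * (IPv4 addresses per ENI - 1) + 2
# t2.micro: 2 ENIs, 2 IPv4 addresses per ENI
echo $(( 2 * (2 - 1) + 2 ))   # prints 4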

Upvotes: 22
