Chris F

Reputation: 16673

EKS pod stuck in "Pending" state in Fargate deployment?

So I created an EKS cluster for Fargate with the following manifest, and it was created OK. I want to run applications on Fargate, not on EC2 worker nodes, so I didn't create any node groups (is that correct?).

$ eksctl create cluster -f cluster.yaml
cluster.yaml
------------
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: sandbox
  region: us-east-1
  version: "1.18"

fargateProfiles:
  - name: fp-default
    selectors:
      # All workloads in the "default" Kubernetes namespace will be
      # scheduled onto Fargate:
      - namespace: default
      # All workloads in the "kube-system" Kubernetes namespace will be
      # scheduled onto Fargate:
      - namespace: kube-system
  - name: fp-sandbox
    selectors:
      # All workloads in the "sandbox" Kubernetes namespace matching the
      # following label selectors will be scheduled onto Fargate:
      - namespace: sandbox
        labels:
          env: sandbox
          checks: passed
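
To double-check that both profiles actually exist on the cluster, eksctl can list them (the cluster and region names here match my config above):

$ eksctl get fargateprofile --cluster sandbox --region us-east-1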

$ kubectl get nodes
NAME                                     STATUS   ROLES    AGE    VERSION
fargate-ip-192-168-100-23.ec2.internal   Ready    <none>   3h2m   v1.18.8-eks-7c9bda
fargate-ip-192-168-67-135.ec2.internal   Ready    <none>   3h2m   v1.18.8-eks-7c9bda
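
If I understand the Fargate model right, those two "nodes" are just the CoreDNS pods from kube-system that my fp-default profile moved onto Fargate; each Fargate pod gets its own single-pod node. Listing kube-system pods with their node assignments should confirm that:

$ kubectl get pods -n kube-system -o wide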

Then I created a namespace

$ kubectl create namespace sandbox

Now I create a deployment in the sandbox namespace to match the namespace in my fp-sandbox Fargate profile, and it's stuck in the Pending state:

$ kubectl create deploy hello-world-node --image=redacted.dkr.ecr.us-east-1.amazonaws.com/hello-world-node:latest --namespace=sandbox
$ kubectl get po -n sandbox
NAME                                READY   STATUS    RESTARTS   AGE
hello-world-node-544748b68b-4bghr   0/1     Pending   0          18m
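
In case it's relevant, here's how to see which labels the pod actually carries (as far as I know, kubectl create deploy only adds an app=<name> label):

$ kubectl get pods -n sandbox --show-labels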

$ kubectl describe pod hello-world-node-544748b68b-4bghr -n sandbox
....
Events:
Type     Reason            Age                 From               Message
----     ------            ----                ----               -------
Warning  FailedScheduling  12s (x15 over 19m)  default-scheduler  0/2 nodes are available: 2 Too many pods.

Why does it say 0/2 nodes are available? What am I missing? Remember, I want to run the application on Fargate, not on EC2 worker nodes.

NOTE: I can run the container locally. It's just a simple Node.js app that echoes "Hello World."

UPDATE: I added a managedNodeGroups section to my cluster config, and the pod came up. But why? Why didn't the pod run on Fargate without the node group?

Upvotes: 0

Views: 2538

Answers (2)

Chris F

Reputation: 16673

Per help/advice from Meir (again, lol), I used a deployment manifest instead, and it worked. Here it is:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-node
  namespace: sandbox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world-node
      env: sandbox
      checks: passed
  template:
    metadata:
      labels:
        app: hello-world-node
        env: sandbox
        checks: passed
    spec:
      containers:
        - name: hello-world-node
          image: redacted.dkr.ecr.us-east-1.amazonaws.com/hello-world-node:latest
          ports:
            - containerPort: 8080

Then apply it with

$ kubectl apply -f deployment.yaml
$ kubectl get po -n sandbox
NAME                                READY   STATUS    RESTARTS   AGE
hello-world-node-58f86974c4-7tnzb   1/1     Running   0          3m58s
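
To confirm the pod really landed on Fargate (and not on a regular worker node), check the NODE column; for Fargate it should start with fargate-ip-:

$ kubectl get po -n sandbox -o wide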

Upvotes: 0

Meir Gabay

Reputation: 3296

@Chris, hi again :)

In your YAML file

      # All workloads in the "sandbox" Kubernetes namespace matching the
      # following label selectors will be scheduled onto Fargate:

There's an AND condition between the two label selectors above, so your pods must carry both the env=sandbox and checks=passed labels to match the fp-sandbox profile.
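
For example, a bare pod that matches fp-sandbox would need both labels in its metadata (the pod name here is just for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: hello-fargate-test
  namespace: sandbox
  labels:
    env: sandbox     # both labels are required for the
    checks: passed   # fp-sandbox selector to match (AND)
spec:
  containers:
    - name: hello-world-node
      image: redacted.dkr.ecr.us-east-1.amazonaws.com/hello-world-node:latest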

A quick fix

kubectl --namespace sandbox label pods --all env=sandbox
kubectl --namespace sandbox label pods --all checks=passed
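
Afterwards, you can verify whether the profile matched by checking which scheduler owns the pod; pods admitted onto Fargate are handled by fargate-scheduler rather than default-scheduler (note the match happens at pod creation, so already-Pending pods may need to be recreated):

$ kubectl get pod <pod-name> -n sandbox -o jsonpath='{.spec.schedulerName}'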

A better way to fix it is to write a deployment.yaml file and add the labels in the relevant place (the pod template's metadata, as shown in the other answer).


Upvotes: 1
