Xavier Geerinck

Reputation: 636

Managing Eviction on Kubernetes for Node.js and Puppeteer

I am currently seeing a strange issue where a Pod of mine is constantly being evicted by Kubernetes.

My Cluster / App Information:

What I tried:

8m17s       Normal    NodeHasSufficientMemory   node/node-1              Node node-1 status is now: NodeHasSufficientMemory
2m28s       Warning   EvictionThresholdMet      node/node-1              Attempting to reclaim memory
71m         Warning   FailedScheduling          pod/my-deployment     0/4 nodes are available: 1 node(s) had taint {node.kubernetes.io/memory-pressure: }, that the pod didn't tolerate, 3 node(s) didn't match node selector
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-d
spec:
  replicas: 1
  # apps/v1 requires a selector matching the pod template labels
  # (the "app: my-app" label value here is illustrative)
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: main
        image: my-image
        imagePullPolicy: Always
        resources:
          limits:
            memory: "2Gi"

Current way of thinking:

A node has X memory in total, but only Y of that X is actually allocatable because of reserved space. However, when running os.totalmem() in Node.js I can still see that Node reports the full X memory and believes it may allocate all of it.
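For illustration, a quick sketch to compare the two numbers from inside the container: os.totalmem() reports the host's memory, while the cgroup files expose the actual container limit (which of the two paths exists depends on whether the node uses cgroup v1 or v2):

const os = require('os');
const fs = require('fs');

// What the Node.js runtime reports: the node's total memory, not the pod limit.
console.log('os.totalmem():', os.totalmem());

// What the container is actually allowed to use, as set by the Kubernetes limit.
for (const path of [
  '/sys/fs/cgroup/memory/memory.limit_in_bytes', // cgroup v1
  '/sys/fs/cgroup/memory.max',                   // cgroup v2
]) {
  try {
    console.log(path, '->', fs.readFileSync(path, 'utf8').trim());
  } catch (err) {
    // Path not present on this cgroup version; ignore.
  }
}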

My thinking is that Node.js allocates up to X because its garbage collection only kicks in near X, when it should actually kick in near Y. However, since I set a memory limit, I expected Node to see that limit instead of the Kubernetes node's total memory.
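For illustration, the heap ceiling V8 actually picked can be checked against the 2Gi limit from the Deployment above (the podLimit constant below simply mirrors that limit; heap_size_limit is V8's own hard cap from v8.getHeapStatistics()):

const v8 = require('v8');

// heap_size_limit is the maximum size V8 will let the heap grow to.
// On some Node.js versions it is derived from the machine's memory,
// not from the container's cgroup limit.
const { heap_size_limit } = v8.getHeapStatistics();
const podLimit = 2 * 1024 ** 3; // the 2Gi limit from the Deployment spec

console.log('V8 heap_size_limit:', heap_size_limit);
if (heap_size_limit > podLimit) {
  console.warn('V8 may grow past the pod memory limit before the GC reacts');
}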

Question

Are there any other things I should try to resolve this? Has anyone run into this before?

Upvotes: 0

Views: 818

Answers (1)

Vasilii Angapov

Reputation: 9042

Your Node.js app is not aware that it runs in a container. It only sees the amount of memory that the Linux kernel reports (which is always the total node memory). You should make your app aware of the cgroup limits, see https://medium.com/the-node-js-collection/node-js-memory-management-in-container-environments-7eb8409a74e8
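For illustration, a minimal sketch of that approach: read the cgroup limit and derive a --max-old-space-size value to pass to Node via NODE_OPTIONS (the 75% headroom factor below is an assumption, not something taken from the linked article):

// size-heap.js - print a NODE_OPTIONS value sized to the container limit.
const fs = require('fs');

function readCgroupLimitBytes() {
  for (const path of [
    '/sys/fs/cgroup/memory/memory.limit_in_bytes', // cgroup v1
    '/sys/fs/cgroup/memory.max',                   // cgroup v2
  ]) {
    try {
      const raw = fs.readFileSync(path, 'utf8').trim();
      if (raw !== 'max') return Number(raw);
    } catch (err) {
      // Path not present on this cgroup version; try the next one.
    }
  }
  return null;
}

const limit = readCgroupLimitBytes();
if (limit) {
  // Leave headroom for Chromium and non-heap memory; 75% is only a guess.
  const oldSpaceMiB = Math.floor((limit * 0.75) / (1024 * 1024));
  console.log(`NODE_OPTIONS=--max-old-space-size=${oldSpaceMiB}`);
} else {
  console.log('No cgroup memory limit found; falling back to V8 defaults.');
}

Keep in mind that Puppeteer's Chromium processes use memory outside the V8 heap but still count against the same pod limit, so capping the heap alone may not fully stop the evictions.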

With regard to evictions: when you set the memory limits, did that solve your eviction problems?

And don't trust kubectl top pods too much. It always shows data with some delay.

Upvotes: 1
