Rodrigo Boos

Reputation: 121

What are the possible causes for a pod's container being restarted with an OOMKilled (Out of Memory) termination?

I have the following LimitRange applied to the namespace where my pod is deployed:

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 1Gi
    defaultRequest:
      memory: 256Mi
    type: Container

Kubernetes occasionally restarts this container with the following termination state:

Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137

According to the system monitoring (Grafana), the container was only consuming ~500 MB of memory when Kubernetes sent the kill signal.

Also, the node where the pod is running has plenty of available memory (it was using around 15% of its capacity at the time the container was restarted).

So what could be causing Kubernetes to restart this container? This has already happened 5-7 times over the last week.
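For context, here is roughly how the effective limits and the termination state can be inspected (a sketch; `my-pod` and `my-namespace` are placeholders for your actual pod and namespace names):

```shell
# Show the memory request/limit that the LimitRange injected into the pod
kubectl get pod my-pod -n my-namespace \
  -o jsonpath='{.spec.containers[0].resources}'

# Show the last termination state (Reason: OOMKilled, Exit Code: 137)
kubectl describe pod my-pod -n my-namespace | grep -A 5 'Last State'

# Point-in-time memory usage (requires metrics-server in the cluster)
kubectl top pod my-pod -n my-namespace
```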

Upvotes: 0

Views: 674

Answers (1)

Daniel Marques

Reputation: 1405

Per the Kubernetes docs, a LimitRange is "a policy to constrain resource allocations (to Pods or Containers) in a namespace." So the containers in the namespace where your LimitRange is created are consuming more than the default limit it specifies. To test whether this is the case, temporarily remove the LimitRange and check the real usage of ALL your namespace resources, not just one pod. After that you will be able to find the limit configuration that best fits the namespace.
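The steps above can be sketched with kubectl (assuming the namespace is `my-namespace`; note that deleting a LimitRange only affects pods created afterwards, so existing pods must be re-created to shed the injected limit):

```shell
# Inspect the current LimitRange defaults in the namespace
kubectl describe limitrange mem-limit-range -n my-namespace

# Temporarily remove it; new pods will no longer receive the 1Gi default limit
kubectl delete limitrange mem-limit-range -n my-namespace

# Re-create the workload's pods, then watch real memory usage across the
# whole namespace, per container (requires metrics-server)
kubectl top pod -n my-namespace --containers
```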

In the k8s docs, you can find a good explanation and a lot of examples of how to restrict limits in your namespace.
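Once the real usage is known, explicit `resources` on the container itself will override the LimitRange defaults. A minimal sketch, assuming hypothetical pod/image names and memory figures based on the observed ~500 MB usage:

```shell
kubectl apply -n my-namespace -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # placeholder name
spec:
  containers:
  - name: app
    image: my-app:latest  # placeholder image
    resources:
      requests:
        memory: 512Mi     # close to observed steady-state usage
      limits:
        memory: 1536Mi    # headroom above observed peaks
EOF
```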

Upvotes: 2
