aniztar

Reputation: 2923

Checking Kubernetes pod CPU and memory utilization

I am trying to see how much memory and CPU are utilized by a Kubernetes pod. I ran the following command for this:

kubectl top pod podname --namespace=default

I am getting the following error:

W0205 15:14:47.248366    2767 top_pod.go:190] Metrics not available for pod default/podname, age: 190h57m1.248339485s
error: Metrics not available for pod default/podname, age: 190h57m1.248339485s
  1. What do I do about this error? Is there any other way to get CPU and memory usage of the pod?
  2. I saw the sample output of this command which shows CPU as 250m. How is this to be interpreted?

  3. Do we get the same output if we enter the pod and run the linux top command?

Upvotes: 235

Views: 794970

Answers (19)

Darien Pardinas

Reputation: 6196

In my use case I wanted to aggregate memory/CPU usage per namespace, to see how heavy or lightweight a Harbor system running in my small K3s cluster would be, so I wrote this Python script using the kubernetes Python client:

from kubernetes import client, config
import matplotlib.pyplot as plt
import pandas as pd

def cpu_n(cpu_str: str):
    # Convert a metrics-server CPU value such as "123456n" (nanocores) to a float
    if cpu_str == "0":
        return 0.0
    assert cpu_str.endswith("n")
    return float(cpu_str[:-1])

def mem_Mi(mem_str: str):
    # Convert a metrics-server memory value such as "13148Ki" or "12Mi" to MiB
    if mem_str == "0":
        return 0.0
    assert mem_str.endswith("Ki") or mem_str.endswith("Mi")
    val = float(mem_str[:-2])
    if mem_str.endswith("Ki"):
        return val / 1024.0
    if mem_str.endswith("Mi"):
        return val

config.load_kube_config()
api = client.CustomObjectsApi()
v1 = client.CoreV1Api()
cpu_usage_pct = {}
mem_usage_mb = {}
namespaces = [item.metadata.name for item in v1.list_namespace().items]
for ns in namespaces:
    # Query the metrics.k8s.io API for per-pod usage in this namespace
    resource = api.list_namespaced_custom_object(group="metrics.k8s.io", version="v1beta1", namespace=ns, plural="pods")
    cpu_total_n = 0.0
    mem_total_Mi = 0.0
    for pod in resource["items"]:
        for container in pod["containers"]:
            usage = container["usage"]
            cpu_total_n += cpu_n(usage["cpu"])
            mem_total_Mi += mem_Mi(usage["memory"])
    if mem_total_Mi > 0:
        mem_usage_mb[ns] = mem_total_Mi
    if cpu_total_n > 0:
        cpu_usage_pct[ns] = cpu_total_n * 100 / 10**9

df_mem = pd.DataFrame({"ns": mem_usage_mb.keys(), "memory_mbi": mem_usage_mb.values()})
df_mem.sort_values("memory_mbi", inplace=True)

_, [ax1, ax2] = plt.subplots(2, 1, figsize=(12, 12))

ax1.barh("ns", "memory_mbi", data=df_mem)
ax1.set_ylabel("Namespace", size=14)
ax1.set_xlabel("Memory Usage [MBi]", size=14)
total_memory_used_Mi = round(sum(mem_usage_mb.values()))
ax1.set_title(f"Memory usage by namespace [{total_memory_used_Mi}Mi total]", size=16)

df_cpu = pd.DataFrame({"ns": cpu_usage_pct.keys(), "cpu_pct": cpu_usage_pct.values()})
df_cpu.sort_values("cpu_pct", inplace=True)
ax2.barh("ns", "cpu_pct", data=df_cpu)
ax2.set_ylabel("Namespace", size=14)
ax2.set_xlabel("CPU Usage [%]", size=14)
total_cpu_usage_pct = round(sum(cpu_usage_pct.values()))
ax2.set_title(f"CPU usage by namespace [{total_cpu_usage_pct}% total]", size=16)

plt.show()

Sample output looks like this: two bar charts of memory and CPU usage per namespace.

Of course, keep in mind that this is just a snapshot of your system's memory and CPU usage; it can vary a lot as workloads become more or less active.

Upvotes: 5

Tireuuuuu

Reputation: 63

To complement Dashrath Mundkar's answer, the same check can be done without entering the pod, straight from the command prompt:

kubectl exec pod_name -n namespace -- cat /sys/fs/cgroup/cpu/cpuacct.usage

Upvotes: 1

Ajit Surendran

Reputation: 787

Metrics are available only if the metrics server is enabled or a third-party solution like Prometheus is configured. Otherwise you need to look at /sys/fs/cgroup/cpu/cpuacct.usage for CPU usage, which is the total CPU time consumed by the cgroup/container, and /sys/fs/cgroup/memory/memory.usage_in_bytes for memory usage, which is the total memory consumed by all the processes in the cgroup/container.

Also don't forget another beast called QoS, whose classes are Guaranteed, Burstable, and BestEffort. If your pod is Burstable, it can be OOMKilled under node memory pressure even if it has not breached its own CPU or memory limits.

Kubernetes is FUN!!!

Upvotes: 0

Dashrath Mundkar

Reputation: 9212

CHECK WITHOUT METRICS SERVER or ANY THIRD PARTY TOOL


If you want to check a pod's CPU/memory usage without installing any third-party tool, you can read it from the pod's cgroup.

  1. Exec into the pod: kubectl exec -it pod_name -n namespace -- /bin/bash
  2. Run cat /sys/fs/cgroup/cpu/cpuacct.usage for CPU usage
  3. Run cat /sys/fs/cgroup/memory/memory.usage_in_bytes for memory usage

Make sure you have added the resources section (requests and limits) to the deployment, so that the usage can be calculated from the cgroup and the container respects the limits set at the pod level.

NOTE: cpuacct.usage reports cumulative CPU time in nanoseconds, and memory.usage_in_bytes reports memory in bytes. These values change frequently as the pod's activity varies.
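
To turn those two raw counters into something comparable to kubectl top, here is a minimal sketch (my own addition, not part of this answer); it samples cpuacct.usage twice via kubectl exec and derives an approximate CPU percentage and memory in MiB. The pod name, namespace and the 5-second window are placeholders, and the cgroup paths assume cgroup v1 (on cgroup v2 nodes the file names differ):

import subprocess
import time

POD = "pod_name"        # placeholder pod name
NAMESPACE = "default"   # placeholder namespace
WINDOW_S = 5            # sampling window in seconds

def read_counter(path: str) -> int:
    # Read a single cgroup counter from inside the pod via kubectl exec
    out = subprocess.check_output(
        ["kubectl", "exec", POD, "-n", NAMESPACE, "--", "cat", path]
    )
    return int(out.decode().strip())

# cpuacct.usage is cumulative CPU time in nanoseconds, so two samples are needed
cpu_start = read_counter("/sys/fs/cgroup/cpu/cpuacct.usage")
time.sleep(WINDOW_S)
cpu_end = read_counter("/sys/fs/cgroup/cpu/cpuacct.usage")

mem_bytes = read_counter("/sys/fs/cgroup/memory/memory.usage_in_bytes")

# Percentage of one core used over the window; values above 100 mean more than one core
cpu_pct = (cpu_end - cpu_start) / (WINDOW_S * 1e9) * 100
print(f"CPU: {cpu_pct:.1f}% of one core, memory: {mem_bytes / 2**20:.1f} MiB")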

Upvotes: 231

Jose Sardinas

Reputation: 9

I know this is an old thread, but I just found it trying to do something similar. In the end, I found I can just use the Visual Studio Code Kubernetes plugin. This is what I did:

  • Select the cluster and open the Workloads/Pods section, then find the pod you want to monitor (you can also reach the pod through any other grouping in the Workloads section)
  • Right-click on the pod and select "Terminal"
  • Now you can either cat the files described above or use the "top" command to monitor CPU and memory in real-time.

Hope it helps

Upvotes: 0

valyala

Reputation: 18056

If you use Prometheus operator or VictoriaMetrics operator for Kubernetes monitoring, then the following PromQL queries can be used for determining per-container, per-pod and per-node resource usage:

  • Per-container memory usage in bytes:
sum(container_memory_usage_bytes{container!~"POD|"}) by (namespace,pod,container)
  • Per-container CPU usage in CPU cores:
sum(rate(container_cpu_usage_seconds_total{container!~"POD|"}[5m])) by (namespace,pod,container)
  • Per-pod memory usage in bytes:
sum(container_memory_usage_bytes{container!=""}) by (namespace,pod)
  • Per-pod CPU usage in CPU cores:
sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (namespace,pod)
  • Per-node memory usage in bytes:
sum(container_memory_usage_bytes{container!=""}) by (node)
  • Per-node CPU usage in CPU cores:
sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (node)
  • Per-node memory usage percentage:
100 * (
  sum(container_memory_usage_bytes{container!=""}) by (node)
    / on(node)
  kube_node_status_capacity{resource="memory"}
)
  • Per-node CPU usage percentage:
100 * (
  sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (node)
    / on(node)
  kube_node_status_capacity{resource="cpu"}
)

Upvotes: 13

shimi_tap

Reputation: 7962

Not sure why this isn't mentioned here yet:

  1. To see all pods with their age: kubectl get pods --all-namespaces
  2. To see memory and CPU: kubectl top pods --all-namespaces

Upvotes: 19

Suvoraj Biswas

Reputation: 683

A quick way to check CPU/Memory is by using the following kubectl command. I found it very useful.

kubectl describe PodMetrics <pod_name>

Replace <pod_name> with the pod name you get by running

kubectl get pod

Upvotes: 48

drFunJohn

Reputation: 319

You can use the resource metrics API directly:

For example:

kubectl -n default get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods/nginx-7fb5bc5df-b6pzh | jq

{
  "kind": "PodMetrics",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "name": "nginx-7fb5bc5df-b6pzh",
    "namespace": "default",
    "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/nginx-7fb5bc5df-b6pzh",
    "creationTimestamp": "2021-06-14T07:54:31Z"
  },
  "timestamp": "2021-06-14T07:53:54Z",
  "window": "30s",
  "containers": [
    {
      "name": "nginx",
      "usage": {
        "cpu": "33239n",
        "memory": "13148Ki"
      }
    },
    {
      "name": "git-repo-syncer",
      "usage": {
        "cpu": "0",
        "memory": "6204Ki"
      }
    }
  ]
}

Here, nginx-7fb5bc5df-b6pzh is the pod's name.

Pay attention: CPU is measured in nanoCPUs, where 10^9 nanoCPUs = 1 CPU.
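
For reference, here is a small sketch (my own illustration, not part of the answer) of converting the usage strings from that response into CPU cores and MiB:

def cpu_cores(cpu: str) -> float:
    # "33239n" means 33239 nanoCPUs; "0" means no measurable usage
    return 0.0 if cpu == "0" else int(cpu[:-1]) / 1e9

def mem_mib(mem: str) -> float:
    # "13148Ki" means 13148 KiB
    return int(mem[:-2]) / 1024.0

print(cpu_cores("33239n"))  # ~0.0000332 cores, i.e. about 0.03 millicores
print(mem_mib("13148Ki"))   # ~12.8 MiB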

Upvotes: 2

prasun

Reputation: 7343

An alternative approach without having to install the metrics server.

It requires crictl to be installed on the worker nodes where the pods run; there is a task for this in the official Kubernetes documentation.

Once you have installed it properly, you can use the commands below. (I had to use sudo in my case, but it may not be required depending on your Kubernetes cluster install.)

  1. Find the container ID of the pod: sudo crictl ps
  2. Use stats to get CPU and RAM: sudo crictl stats <CONTAINERID>

Sample output for reference:

CONTAINER           CPU %               MEM                 DISK                INODES
873f04b6cef94       0.50                54.16MB             28.67kB             8

Upvotes: 6

Ambir

Reputation: 59

To check the usage of individual pods in Kubernetes, type the following commands in a terminal on the node where the pod is running (this assumes the node uses the Docker runtime):

$ docker ps | grep <pod_name>

This will give you the list of matching running containers. Then check CPU and memory utilization using:

$ docker stats <container_id>

CONTAINER ID   NAME   CPU %   MEM USAGE / LIMIT   MEM %   NET I/O   BLOCK I/O   PIDS

Upvotes: 3

chetan mahajan

Reputation: 813

You need to run the metrics server for the commands below to return correct data:

  1. kubectl get hpa
  2. kubectl top node
  3. kubectl top pods

Without the metrics server: go into the pod by running the command below:

  1. kubectl exec -it pods/{pod_name} -- sh
  2. cat /sys/fs/cgroup/memory/memory.usage_in_bytes

You will get the memory usage of the pod in bytes.

Upvotes: 35

Ian Robertson

Reputation: 2812

If you exec into your pod using sh or bash, you can run the top command, which will give you resource-utilisation stats that update every few seconds.


Upvotes: -1

Nick

Reputation: 12037

Use k9s for a super easy way to check all your resources' CPU and memory usage.


Upvotes: 84

Javier Aviles

Reputation: 10805

In case you are using minikube, you can enable the metrics-server addon (minikube addons enable metrics-server); this will show the information in the dashboard.

Upvotes: 0

Umakant

Reputation: 2685

kubectl top pod <pod-name> -n <namespace> --containers

FYI, this is on v1.16.2

Upvotes: 227

Prafull Ladha

Reputation: 13459

As Heapster is deprecated and will not see any further releases, you should install metrics-server instead.

You can install metrics-server in following way:

  1. Clone the metrics-server GitHub repo: git clone https://github.com/kubernetes-incubator/metrics-server.git

  2. Edit the deploy/1.8+/metrics-server-deployment.yaml file and add the following flags under the command section:

- command:
  - /metrics-server
  - --metric-resolution=30s
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP

  3. Run the following command: kubectl apply -f deploy/1.8+

This will install everything the metrics server needs.

For more info, please have a look at my following answer:

How to Enable KubeAPI server for HPA Autoscaling Metrics

Upvotes: 7

Diego Mendes

Reputation: 11361

  1. As described in the docs, you should install metrics-server

  2. 250m means 250 milliCPU, i.e. a quarter of one CPU core (see the short conversion sketch at the end of this answer). The CPU resource is measured in CPU units; in Kubernetes, one CPU unit is equivalent to:

    • 1 AWS vCPU
    • 1 GCP Core
    • 1 Azure vCore
    • 1 Hyperthread on a bare-metal Intel processor with Hyperthreading

    Fractional values are allowed. A Container that requests 0.5 CPU is guaranteed half as much CPU as a Container that requests 1 CPU. You can use the suffix m to mean milli. For example 100m CPU, 100 milliCPU, and 0.1 CPU are all the same. Precision finer than 1m is not allowed.

    CPU is always requested as an absolute quantity, never as a relative quantity; 0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine.

  3. No. kubectl top pod podname shows metrics for a given pod, while the Linux top and free commands run inside a container and report metrics from the host-wide /proc virtual filesystem; they are not aware of the cgroup the container runs in.

    There are more details in the Kubernetes documentation on managing resources for containers.
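
A tiny illustration of the millicore arithmetic from point 2 (my own addition; the values are just examples):

def millicores_to_cores(value: str) -> float:
    # "250m" -> 0.25 of one CPU core
    return int(value[:-1]) / 1000.0

print(millicores_to_cores("250m"))  # 0.25
print(millicores_to_cores("100m"))  # 0.1, the same as requesting 0.1 CPU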

Upvotes: 56

P Ekambaram

Reputation: 17689

You need to deploy Heapster or the metrics server to see the CPU and memory usage of the pods.

Upvotes: 1
