A.Wan

Reputation: 2058

How to get logs from all pods in a Kubernetes job?

I'm using a Kubernetes job to run a test; a shell script runs the job and has setup/teardown logic. The job has a backoff limit configured, which means that if the job fails, it'll make a new pod and try again once.
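
For context, here is a minimal sketch of the kind of Job being described (the name, image, and command are made up for illustration), assuming the retry-once behaviour comes from backoffLimit: 1:

cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: my-test
spec:
  backoffLimit: 1                     # on failure, create one replacement pod and retry once
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: my-test                 # hypothetical container name
        image: my-test-image:latest   # hypothetical test image
        command: ["./run-test.sh"]    # hypothetical test entrypoint
EOF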

After the job completes, I'd like to dump the logs from all pods in the job. But when I do

kubectl logs job/my-test

I only get logs from one of the pods, prefixed with something like Found 2 pods, using pod/my-test-ntb4w.

The --all-containers=true flag doesn't give me logs from all pods.

How can I get logs from all pods in a job, in a shell script?

Upvotes: 5

Views: 8323

Answers (2)

Mark

Reputation: 4067

As a reference, please take a look at kubectl logs --help:

   # Return snapshot logs from the first container of a job named hello;
   # this will provide output for only one pod/container:
   kubectl logs job/hello

   # Using a label selector is a more flexible way to target your resources:
   # -l, --selector='': Selector (label query) to filter on.
Alternatively, you can add custom labels or use a label/selector from the job description:

labels:
  controller-uid: 55d965d0-0016-42ba-b4f5-120c1a78798b
  job-name: pi
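
For example, either of the labels shown above can be passed straight to kubectl logs as a selector (the job name pi matches the example labels):

# Select logs by the job-name label that the Job controller adds automatically
kubectl logs -l job-name=pi

# The controller-uid label works the same way
kubectl logs -l controller-uid=55d965d0-0016-42ba-b4f5-120c1a78798b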

You can find a fairly similar case in the docs: checking on the output of all jobs at once, and Running an example Job.

When using bash, you can also try:

pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}')

for pod in $pods; do kubectl logs "$pod"; done
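
As a small extension of that loop (still only a sketch, using the same job name), printing a header before each pod's output makes it easier to tell the retry attempts apart:

# Dump logs from every pod of the job, with a header per pod
for pod in $(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}'); do
  echo "===== logs from ${pod} ====="
  kubectl logs "${pod}"
done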

A very useful command for working with k8s objects:

kubectl get pods,jobs --show-labels

NAME           READY   STATUS      RESTARTS   AGE   LABELS
pod/pi-25vcd   0/1     Completed   0          97s   controller-uid=55d965d0-0016-42ba-b4f5-120c1a78798b,job-name=pi

Upvotes: 6

A.Wan

Reputation: 2058

Using --selector instead of just job/my-test seems to get logs from all pods:

kubectl logs --selector job-name=my-test
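
One caveat I've run into (this may depend on your kubectl version): when a selector is used, kubectl logs defaults to the last 10 lines per pod, so adding --tail=-1 returns the full logs, and --prefix marks which pod each line came from:

# Full logs from every pod of the job, each line prefixed with its source pod
kubectl logs --selector job-name=my-test --tail=-1 --prefix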

Upvotes: 2
