Reputation: 21
I run three services in three different containers. The logs for these services are sent to the system journal, so if I run them directly on a Linux server I can see the logs with journalctl.
Also, if I run the services in Docker containers, I can gather the logs with docker logs <container_name> or from the /var/lib/docker/containers directory. But when I move to Kubernetes (MicroK8s), I cannot retrieve them with the kubectl logs command, and there are also no logs in /var/log/containers or /var/log/pods.
If I log in to the pods, I can see that the processes are running, but without logs I cannot tell whether they are running correctly. I also tried changing the MicroK8s kubelet runtime from containerd to Docker, but I still can't get any logs.
# kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
amf-deployment-7785db9758-h24kz 1/1 Running 0 72s 10.1.243.237 ubuntu <none>
# kubectl describe po amf-deployment-7785db9758-h24kz
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 87s default-scheduler Successfully assigned default/amf-deployment-7785db9758-h24kz to ubuntu
Normal AddedInterface 86s multus Add eth0 [10.1.243.237/32]
Normal Pulled 86s kubelet Container image "amf:latest" already present on machine
Normal Created 86s kubelet Created container amf
Normal Started 86s kubelet Started container amf
# kubectl logs amf-deployment-7785db9758-h24kz
# kubectl logs -f amf-deployment-7785db9758-h24kz
^C
The screenshot below shows the difference between running the same container with Docker and running it with Kubernetes. The behaviour seems very strange, since the logs can be gathered when the application runs as a standalone Docker container, but not when it runs under Kubernetes.
Upvotes: 0
Views: 12741
Reputation: 13858
In traditional server environments, application logs are written to a file such as /var/log/app.log. However, when working with Kubernetes, you need to collect logs for multiple transient pods (applications) across multiple nodes in the cluster, which makes this log collection method less than optimal. Instead, the default Kubernetes logging framework recommends capturing the standard output (stdout) and standard error (stderr) of each container on the node to a log file. If you can't see your app's logs when using the kubectl logs command, it most likely means that your app is not writing its logs to the right place. The official Logging Architecture docs explain this topic in more detail. There is also an example of basic logging in Kubernetes:
This example uses a Pod specification with a container to write text to the standard output stream once per second.
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
To run this pod, use the following command:
kubectl apply -f https://k8s.io/examples/debug/counter-pod.yaml
The output is:
pod/counter created
To fetch the logs, use the kubectl logs command, as follows:
kubectl logs counter
The output is:
0: Mon Jan 1 00:00:00 UTC 2001
1: Mon Jan 1 00:00:01 UTC 2001
2: Mon Jan 1 00:00:02 UTC 2001
...
You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container. If your pod has multiple containers, specify which container's logs you want to access by appending the container name to the command. See the kubectl logs documentation for more details.
You can compare this with your Pod/app configs to see if there are any mistakes.
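If your services write to a log file (or to the journal) instead of stdout, one common workaround from the Logging Architecture docs is a streaming sidecar container that tails the file and prints it to its own stdout, so kubectl logs can pick it up. A minimal sketch, assuming a hypothetical log path /var/log/amf/amf.log and the amf:latest image from your pod:
apiVersion: v1
kind: Pod
metadata:
  name: amf-with-log-sidecar
spec:
  containers:
  - name: amf
    image: amf:latest              # image name taken from the question
    volumeMounts:
    - name: varlog
      mountPath: /var/log/amf      # assumption: the service writes /var/log/amf/amf.log here
  - name: log-streamer
    image: busybox
    # tail the shared log file and stream it to this container's stdout
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/amf/amf.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log/amf
  volumes:
  - name: varlog
    emptyDir: {}
With that in place, kubectl logs amf-with-log-sidecar -c log-streamer would show the file contents. The cleaner option, if the service supports it, is to reconfigure it to log to stdout/stderr directly, which avoids the extra container.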
With that knowledge in mind, you now have several options to Debug Running Pods (example invocations are sketched after this list):
Debugging Pods: execute kubectl describe pods ${POD_NAME} and check the reason behind the failure.
Examining pod logs: with kubectl logs ${POD_NAME} ${CONTAINER_NAME} or kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}.
Debugging with container exec: run commands inside a specific container with kubectl exec.
Debugging with an ephemeral debug container: ephemeral containers are useful for interactive troubleshooting when kubectl exec is insufficient because a container has crashed or a container image doesn't include debugging utilities, such as with distroless images.
Debugging via a shell on the node: if none of these approaches work, you can find the host machine that the pod is running on and SSH into that host.
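For reference, here are hedged example invocations of the options above (pod, container, and image names are placeholders):
kubectl describe pods ${POD_NAME}
kubectl logs ${POD_NAME} ${CONTAINER_NAME}
kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ls /var/log
kubectl debug -it ${POD_NAME} --image=busybox --target=${CONTAINER_NAME}
# as a last resort, SSH to the node and inspect the runtime directly
# (assumption: MicroK8s with its default containerd runtime)
microk8s ctr containers list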
To sum up:
Make sure your logging is in place
Debug with the options listed above
Upvotes: 1
Reputation: 15
For Kubernetes logs, you can try this command to follow the logs:
kubectl logs -f <pod-name>
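If the pod has more than one container, or the container has restarted, a few common variations may help (container name is a placeholder):
kubectl logs -f <pod-name> -c <container-name>
kubectl logs --previous <pod-name>
kubectl logs --tail=100 <pod-name>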
Upvotes: 0