Reputation: 117
A semi-related question: Options for getting logs in Kubernetes pods
I am running a Tomcat application in Google Kubernetes Engine that writes output to log files such as catalina.log and localhost.log. As these are not written to the usual stdout, I have several options and questions regarding the best way of pulling the log files into a shared folder / volume in a Kubernetes environment.
Option 1:
A batch job that uses kubectl cp to move the log files to the host. I don't think this is advisable, as pods die frequently and crucial log files would be lost.
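For reference, a one-off copy could look like this (the namespace, pod, and container names here are placeholders):
kubectl cp default/tomcat-pod-abc123:/opt/tomcat/logs ./tomcat-logs -c tomcat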
Option 2:
I'm not sure if this is possible, as I am still learning how persistent volumes work compared to Docker volumes, but is it possible to mount a PVC with the same mountPath as the tomcat/logs folder so that the logs get written to the PVC directly?
In Docker, I used to pass a --mount source to the container run command to specify the volume used for log consolidation:
docker container run -d -it --rm --mount source=logs,target=/opt/tomcat/logs ....
I am wondering if this is possible in the Kubernetes environment, for example, in the deployment or pod manifest file:
volumeMounts:
- mountPath: /opt/tomcat/logs/
  name: logs
volumes:
- name: logs
  persistentVolumeClaim:
    claimName: logs
Option 3:
I want to avoid complicating my setup for now, but if all other options are exhausted, I would set up Elasticsearch, Kibana, and Filebeat to ship my log files.
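Should it come to that, a minimal sketch of the Filebeat side might look like the following (the Elasticsearch host is an assumption):
filebeat.inputs:
- type: log
  paths:
    - /opt/tomcat/logs/*.log
output.elasticsearch:
  hosts: ["elasticsearch:9200"]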
Upvotes: 0
Views: 835
Reputation: 117
The solution is actually quite simple once I figured out how everything works. Hopefully this helps someone. I went with option #2.
First, define a PVC for the Tomcat log files:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tomcat-logs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
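Apply it and check that the claim binds (the filename is whatever you saved the manifest as):
kubectl apply -f tomcat-logs-pvc.yaml
kubectl get pvc tomcat-logs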
In my deployment.yaml, reference the PVC just created:
...
volumeMounts:
- mountPath: /opt/tomcat/logs
  name: tomcat-logs
volumes:
- name: tomcat-logs
  persistentVolumeClaim:
    claimName: tomcat-logs
...
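To verify the mount, exec into the pod and confirm Tomcat is writing there (the pod name is a placeholder):
kubectl exec -it tomcat-pod-abc123 -- ls -l /opt/tomcat/logs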
As noted, the PVC is mounted as root, so the container will not be able to write to it unless it runs with sufficient privileges. In my case, my Dockerfile defined a non-root user, and changing it to root resolved the issue.
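A minimal sketch of that Dockerfile change (the base image and the elided build steps are assumptions):
FROM tomcat:9.0
# ... existing build steps ...
USER root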
Edit: If setting the user to root in the Dockerfile is not viable, you can escalate privileges in the deployment instead by adding:
...
spec:
  securityContext:      # pod-level: run the pod as root
    runAsUser: 0
  ...
    securityContext:    # container-level, under the container entry
      privileged: true
...
A related question here: Allowing access to a PersistentVolumeClaim to non-root user
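For completeness, the non-root route discussed there usually relies on fsGroup, which has Kubernetes make the volume writable by the given group (the GID below is an assumption):
...
spec:
  securityContext:
    fsGroup: 1000
...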
Upvotes: 1