Reputation: 22518
I am running a container on a VM. By default, my container writes its logs to /var/lib/docker/containers/CONTAINER_ID/CONTAINER_ID-json.log until the disk is full.
Currently, I have to delete this file manually to keep the disk from filling up. I read that Docker 1.8 will include a parameter to rotate the logs. What would you recommend as a workaround in the meantime?
Upvotes: 131
Views: 162333
Reputation: 2103
The limits can also be set using the docker run command:
docker run -it -d -v /tmp:/tmp -p 49160:8080 --name web-stats-app --log-opt max-size=10m --log-opt max-file=5 mydocker/stats_app
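If you want to double-check that the options were applied, docker inspect can show the log configuration of the running container (a quick sketch; web-stats-app is just the container name from the command above):
# Print the log driver and its options for the container started above
docker inspect --format '{{.HostConfig.LogConfig}}' web-stats-app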
Upvotes: 2
Reputation: 1757
Just in case you can't stop your container, I have created a script that copies the container's current log file to an archive folder, compresses the copy, and then truncates the live log (you have to run it with sudo):
#!/bin/bash
set -ex
############################# Main Variables Definition:
CONTAINER_NAME="your-container-name"
SIZE_TO_TRUNCATE="10M"
############################# Other Variables Definition:
CURRENT_DATE=$(date "+%d-%b-%Y-%H-%M-%S")
RANDOM_VALUE=$(shuf -i 1-1000000 -n 1)
LOG_FOLDER="/opt/${CONTAINER_NAME}/logs"
# Resolve the container ID from its name, then ask Docker where its log file lives
CN=$(docker ps --no-trunc -f name="${CONTAINER_NAME}" | awk '{print $1}' | tail -n +2)
LOG_DOCKER_FILE="$(docker inspect --format='{{.LogPath}}' "${CN}")"
LOG_FILE_NAME="${CURRENT_DATE}-${RANDOM_VALUE}"
############################# Procedure:
# Archive a copy of the current log, compress it, remove the uncompressed copy, then shrink the live log
mkdir -p "${LOG_FOLDER}"
cp "${LOG_DOCKER_FILE}" "${LOG_FOLDER}/${LOG_FILE_NAME}.log"
cd "${LOG_FOLDER}"
tar -cvzf "${LOG_FILE_NAME}.tar.gz" "${LOG_FILE_NAME}.log"
rm -f "${LOG_FILE_NAME}.log"
truncate -s "${SIZE_TO_TRUNCATE}" "${LOG_DOCKER_FILE}"
You can create a cronjob to run the previous script every month. First run:
sudo crontab -e
Press a on your keyboard to enter insert mode, then add the following line:
0 0 1 * * /your-script-path/script.sh
Press Escape to leave insert mode, then save and quit by typing :wq and pressing Enter. Make sure the script.sh file has execute permissions.
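For example, assuming the same placeholder path used in the crontab line (adjust it to wherever you actually saved the script):
# Make the script executable and confirm the cron entry was saved
sudo chmod +x /your-script-path/script.sh
sudo crontab -l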
Upvotes: 3
Reputation: 4654
version: "3.9"
services:
  some-service:
    image: some-service
    logging:
      driver: "json-file"
      options:
        max-size: "200k"
        max-file: "10"
The example shown above would store log files until they reach a max-size of 200kB, and then rotate them. The amount of individual log files stored is specified by the max-file value. As logs grow beyond the max limits, older log files are removed to allow storage of new logs.
Logging options available depend on which logging driver you use. The example above for controlling log files and sizes uses options specific to the json-file driver; these particular options are not available on other logging drivers. For a full list of supported logging drivers and their options, refer to the logging drivers documentation.
Note: only the json-file and journald drivers make the logs available directly from docker-compose up and docker-compose logs. Using any other driver does not print any logs.
Source: https://docs.docker.com/compose/compose-file/compose-file-v3/
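As a rough sanity check (a sketch, assuming the compose file above sits in the current directory), you can confirm that logs are still reaching docker-compose logs with the json-file driver:
# Start the service and tail its recent log lines
docker-compose up -d
docker-compose logs --tail=20 some-service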
Upvotes: 23
Reputation: 263637
[This answer covers current versions of docker for those coming across the question long after it was asked.]
To set the default log limits for all newly created containers, you can add the following in /etc/docker/daemon.json:
{
"log-driver": "json-file",
"log-opts": {"max-size": "10m", "max-file": "3"}
}
Then reload docker with systemctl reload docker if you are using systemd (otherwise, use the appropriate restart command for your install).
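A minimal end-to-end sketch, assuming a systemd-based host and that /etc/docker/daemon.json does not already exist (if it does, merge the keys into the existing file instead of overwriting it):
# Write the default log settings, reload the daemon, and confirm the active driver
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {"max-size": "10m", "max-file": "3"}
}
EOF
sudo systemctl reload docker
docker info --format '{{.LoggingDriver}}'   # should print json-file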
You can also switch to the local logging driver with a similar file:
{
"log-driver": "local",
"log-opts": {"max-size": "10m", "max-file": "3"}
}
The local logging driver stores the log contents in an internal format (I believe protobufs), so you will get more log content in the same size logfile (or use less file space for the same logs). The downside of the local driver is that external tools, like log forwarders, may not be able to parse the raw logs. Be aware that docker logs only works when the log driver is set to json-file, local, or journald.
The max-size is a limit on the docker log file, so it includes the json or local log formatting overhead. The max-file is the number of logfiles docker will maintain. After the size limit is reached on one file, the logs are rotated, and the oldest logs are deleted when you exceed max-file.
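If you want to watch the rotation on disk, something along these lines works (a sketch; my-container is a placeholder name and the path assumes the default Docker data root):
# Find the container's full ID and list its log files
CID=$(docker inspect --format '{{.Id}}' my-container)
sudo ls -lh /var/lib/docker/containers/"$CID"/
# With json-file and max-file=3 you would typically see <id>-json.log plus .1 and .2 suffixes once rotation has occurred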
For more details, docker has documentation on all the drivers at: https://docs.docker.com/config/containers/logging/configure/
I also have a presentation covering this topic. Use P
to see the presenter notes: https://sudo-bmitch.github.io/presentations/dc2019/tips-and-tricks-of-the-captains.html#logs
Upvotes: 32
Reputation: 2789
Example for docker-compose version 1:
mongo:
  image: mongo:3.6.16
  restart: unless-stopped
  log_opt:
    max-size: 1m
    max-file: "10"
Upvotes: 1
Reputation: 3450
Pass log options when running a container. An example is as follows:
sudo docker run -ti --name visruth-cv-container --log-opt max-size=5m --log-opt max-file=10 ubuntu /bin/bash
where --log-opt max-size=5m specifies a maximum log file size of 5MB and --log-opt max-file=10 specifies the maximum number of files kept for rotation.
Upvotes: 7
Reputation: 4587
CAUTION: This is for docker-compose version 2 only
Example:
version: '2'
services:
  db:
    container_name: db
    image: mysql:5.7
    ports:
      - 3306:3306
    logging:
      options:
        max-size: 50m
Upvotes: 56
Reputation: 2313
Caution: this post relates to docker versions < 1.8 (which don't have the --log-opt option).
Why don't you use logrotate (which also supports compression)?
/var/lib/docker/containers/*/*-json.log {
hourly
rotate 48
compress
dateext
copytruncate
}
Configure it either directly on your CoreOS node or deploy a container (e.g. https://github.com/tutumcloud/logrotate) which mounts /var/lib/docker to rotate the logs.
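If logrotate is available on the host, you can dry-run the rules before relying on the scheduled run (a sketch; the path under /etc/logrotate.d/ is arbitrary and assumes you saved the snippet above there):
# Debug mode prints what would happen without touching any files; -f forces one real rotation
sudo logrotate -d /etc/logrotate.d/docker-containers
sudo logrotate -f /etc/logrotate.d/docker-containers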
Upvotes: 10
Reputation: 22518
Docker 1.8 has been released with a log rotation option. Adding:
--log-opt max-size=50m
when the container is launched does the trick. You can learn more at: https://docs.docker.com/engine/admin/logging/overview/
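For example (a sketch; nginx is just an arbitrary image used for illustration):
# Launch a container whose json log file is capped at 50MB
docker run -d --log-opt max-size=50m nginx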
Upvotes: 113