Reputation: 323
I run a Docker service on a host and start a container for each test.
I used to run the command below at the end of each test to check whether the test ran out of memory:
dmesg | grep -F -e 'Out of memory' -e 'invoked oom-killer: gfp_mask=0x' -e ': page allocation failure: order:'
But I noticed that if an OOM happens during a test, every test that runs after it is also reported as OOM, since the OOM messages stay in dmesg until a shutdown or reboot.
It is hard for me to split the dmesg output per test, so the command above cannot help.
The command needs to run inside the container, as it is one step of finishing a test.
Upvotes: 6
Views: 10465
Reputation: 323
I found that journalctl can be limited to a start time and an end time, so the command below works well:
journalctl -k \
--since "`date -r file "+%Y-%m-%d %H:%M:%S"`" \
--until "`date "+%Y-%m-%d %H:%M:%S"`" | grep -q -F \
-e 'Out of memory' \
-e 'invoked oom-killer: gfp_mask=0x' \
-e ': page allocation failure: order:'
I use the file's mtime as the start time so that I capture all kernel messages emitted during the test.
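For context, a minimal sketch of how this might fit into a per-test wrapper, assuming a hypothetical marker file /tmp/test_start whose mtime records when the test began:

#!/bin/sh
# Touch the marker before the test starts; its mtime is the start time.
touch /tmp/test_start

# ... run the test here ...

# At test end, scan only the kernel messages logged since the marker's mtime.
if journalctl -k \
    --since "`date -r /tmp/test_start "+%Y-%m-%d %H:%M:%S"`" \
    --until "`date "+%Y-%m-%d %H:%M:%S"`" | grep -q -F \
    -e 'Out of memory' \
    -e 'invoked oom-killer: gfp_mask=0x' \
    -e ': page allocation failure: order:'
then
    echo "OOM detected during this test" >&2
    exit 1
fi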
Upvotes: 2
Reputation: 4134
You can use:
docker container inspect your-container-name | jq .[].State.OOMKilled
It returns true or false.
docker container inspect returns JSON-formatted information about the container. jq is like 'sed for JSON', and with '.[].State.OOMKilled' you filter that information down to whether the container was OOM-killed or not.
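For example, a minimal teardown check built on this, where your-container-name is a placeholder for the test container's name:

# Fail the test step if the kernel OOM killer killed the container.
oom=$(docker container inspect your-container-name | jq '.[].State.OOMKilled')
if [ "$oom" = "true" ]; then
    echo "container was OOM-killed" >&2
    exit 1
fi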
Update:
You can use -f to achieve the same thing:
docker container inspect your-container-name -f '{{json .State.OOMKilled}}'
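This variant drops the jq dependency by using Docker's built-in Go templating; a sketch of the same teardown check:

if [ "$(docker container inspect -f '{{json .State.OOMKilled}}' your-container-name)" = "true" ]; then
    echo "container was OOM-killed" >&2
    exit 1
fi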
Upvotes: 2