Reputation: 1431
Since this morning I've been having trouble updating services in AWS ECS. The tasks fail to start, and the failed tasks show this error:
open /var/lib/docker/devicemapper/metadata/.tmp928855886: no space left on device
I checked disk space, and there is some available:
/dev/nvme0n1p1 7,8G 5,6G 2,2G 73% /
Then I checked inode usage, and found that 100% of inodes are in use:
/dev/nvme0n1p1 524288 524288 0 100% /
Narrowing the search, I found that Docker volumes are what is consuming the inodes.
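As a sketch of how this kind of narrowing can be done (the path below is Docker's default data root; adjust it if yours differs), counting filesystem entries per subdirectory approximates per-directory inode usage, since each file and directory consumes one inode:

```shell
# Approximate inode usage per subdirectory of Docker's data root.
# Run as root on the container instance; /var/lib/docker is the default.
docker_root=/var/lib/docker
for d in "$docker_root"/*/; do
  printf '%8d %s\n' "$(find "$d" -xdev 2>/dev/null | wc -l)" "$d"
done | sort -rn | head
```

The directories with the largest counts (typically `volumes/` or the storage-driver directory) point at what is exhausting the inodes.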
I'm using the standard CentOS AMI.
Does this mean that there is a maximum number of services that can run on an ECS cluster? (At the moment I'm running 18 services.)
Can this be solved? Right now I can't do any updates.
Thanks in advance
Upvotes: 5
Views: 7321
Reputation: 5531
You need to tweak the following environment variables on your EC2 hosts:
ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION
ECS_IMAGE_CLEANUP_INTERVAL
ECS_IMAGE_MINIMUM_CLEANUP_AGE
ECS_NUM_IMAGES_DELETE_PER_CYCLE
You can find the full docs on all these settings here: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html
The default behavior is to check every 30 minutes, and only delete 5 images that are more than 1 hour old and unused. You can make this behavior more aggressive if you want to clean up more images more frequently.
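For example, a more aggressive cleanup policy could be set in `/etc/ecs/ecs.config` on each container instance; the specific values below are illustrative, not recommendations, and the ECS agent must be restarted to pick them up:

```shell
# /etc/ecs/ecs.config

# Remove containers for stopped tasks after 10 minutes (default: 3h)
ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION=10m

# Run the image cleanup cycle every 10 minutes (default: 30m)
ECS_IMAGE_CLEANUP_INTERVAL=10m

# Allow images as young as 15 minutes old to be deleted (default: 1h)
ECS_IMAGE_MINIMUM_CLEANUP_AGE=15m

# Delete up to 20 unused images per cycle (default: 5)
ECS_NUM_IMAGES_DELETE_PER_CYCLE=20
```

Shorter intervals and ages mean less unused image data (and fewer inodes) accumulating between cleanup cycles, at the cost of re-pulling images more often.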
Another way to save space: rather than squashing your image layers together, make use of a common shared base layer across your different images and image versions. This can make a huge difference. If you have 10 different images that are each 1 GB in size, they take up 10 GB of space. But if you have a single 1 GB base layer, and then 10 small application layers that are only a few MB each, the total is only a little more than 1 GB of disk space.
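As a sketch of that layering (the image names and packages here are hypothetical), the heavy dependencies go into a base image that is built once, and each application image adds only a thin layer on top of it:

```dockerfile
# base/Dockerfile -- the large shared layer, built once and reused:
#   docker build -t mycompany/base:1.0 base/
FROM amazonlinux:2
RUN yum install -y python3 && yum clean all

# app/Dockerfile -- each application adds only a few MB on top:
#   docker build -t mycompany/app1:1.0 app/
FROM mycompany/base:1.0
COPY app.py /srv/app.py
CMD ["python3", "/srv/app.py"]
```

Because Docker stores the shared base layer on disk only once, every additional application image built on it costs only the size of its own thin layers.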
Upvotes: 7