Pand005

Reputation: 1175

How to increase Docker container default size?

We have created a Docker image with the default size of 10GB and loaded Cassandra data into it; the container is now full and has no free space. Can anyone tell me how to increase the Docker container size from 10GB to 40GB without losing the existing data?

Upvotes: 10

Views: 37078

Answers (3)

Luc Demeester

Reputation: 345

On Docker Engine 19.03.6 on CentOS 7.7:

I fixed this with the following steps:

# systemctl enable docker

# systemctl stop docker

# rm -rf /var/lib/docker

# vim /etc/systemd/system/multi-user.target.wants/docker.service

>     # Add storage option    
>     ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --storage-opt dm.basesize=20G

# systemctl daemon-reload

# systemctl start docker
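
To verify that the new base size actually took effect, a quick check along these lines should work (assuming the devicemapper storage driver is active; your_image_name is just a placeholder for whatever image you use):

# docker info | grep "Base Device Size"

# docker run --rm -it your_image_name df -h /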

Upvotes: 1

Ondrej Svejdar

Reputation: 22094

I don't think it is possible without losing the data. Here is how you enlarge the basesize:

  1. (optional) If you have already downloaded any images via docker pull, you need to remove them first; otherwise they won't be resized

    docker rmi your_image_name

  2. Edit the storage config

    vi /etc/sysconfig/docker-storage

    There should be something like DOCKER_STORAGE_OPTIONS="...". Change it to DOCKER_STORAGE_OPTIONS="... --storage-opt dm.basesize=100G" (see the sketch at the end of this answer for what the result might look like)

  3. Restart the Docker daemon

    service docker restart

  4. Pull the image

    docker pull your_image_name

  5. (optional) verification

    docker run -i -t your_image_name /bin/bash

    df -h

I was struggling with this a lot until I found this link: http://www.projectatomic.io/blog/2016/03/daemon_option_basedevicesize/ It turns out you have to remove and re-pull the image after enlarging the basesize.
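
For reference, here is a rough sketch of what the edited /etc/sysconfig/docker-storage might end up looking like; the pre-existing options differ between distributions, so only the added --storage-opt dm.basesize part is meant literally:

    DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper --storage-opt dm.basesize=100G"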

Upvotes: 6

Ortomala Lokni

Reputation: 62783

The default basesize of a Docker container, using devicemapper, has been changed from 10GB to 100GB. Here is a link to the corresponding pull request on GitHub.

Fixes issue #14678

Current default basesize is 10G. Change it to 100G. Reason being that for some people 10G is turning out to be too small and we don't have capabilities to grow it dynamically.

This is just overcommitting and no real space is allocated till the container actually writes data. And this is no different than fs-based graphdrivers where the virtual size of a container root is unlimited.

Signed-off-by: Vivek Goyal [email protected]

Using the latest version of Docker should solve your problem.
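
If you prefer to set the limit explicitly instead of relying on the new default, here is a minimal /etc/docker/daemon.json sketch that raises the base device size to 40G (assuming the devicemapper storage driver is in use; restart the Docker daemon after editing, and note that existing images still need to be removed and re-pulled):

    {
      "storage-driver": "devicemapper",
      "storage-opts": [
        "dm.basesize=40G"
      ]
    }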

Upvotes: 5
