Reputation: 32306
I used the command mentioned on the image's official Docker Hub page:
https://hub.docker.com/r/continuumio/miniconda3/
official command:
docker run -i -t -p 8888:8888 continuumio/miniconda3 /bin/bash -c "/opt/conda/bin/conda install jupyter -y --quiet && mkdir /opt/notebooks && /opt/conda/bin/jupyter notebook --notebook-dir=/opt/notebooks --ip='*' --port=8888 --no-browser"
I mounted the host's /tmp folder into the container and used it as the notebook directory, because for some reason the container's root partition had only 10 GB of disk allocated.
docker run -i -t -p 8888:8888 -v /tmp/:/tmp/ continuumio/miniconda3 /bin/bash -c "/opt/conda/bin/conda install jupyter -y --quiet && cd /tmp/ && /opt/conda/bin/jupyter notebook --ip='*' --port=8888 --no-browser --allow-root"
I would like to know if there is a better way of doing this. I expected the size of the root partition to be the same as the host machine's hard disk capacity.
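For reference, a quick way to confirm the limit is to check the root filesystem size from inside a throwaway container (just a sanity check, not part of the setup above):
docker run --rm continuumio/miniconda3 df -h /
On hosts using the devicemapper storage driver this typically reports a root filesystem of roughly 10 GB, no matter how large the host disk is.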
Update:
I am now using a Dockerfile as suggested by shizhz. But here is the corrected command, in case I need it:
docker run -i -t -p 8888:8888 -v /tmp:/tmp continuumio/miniconda3 /bin/bash -c "/opt/conda/bin/conda install jupyter -y --quiet && cd /tmp/ && /opt/conda/bin/jupyter notebook --notebook-dir=/tmp --ip='*' --port=8888 --no-browser --allow-root"
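A variant I could also use, which avoids bind-mounting the host's /tmp, is a named volume (the name notebook-data below is just an example); the notebook files then live under /var/lib/docker/volumes on the host instead of in the container's writable layer:
docker volume create notebook-data
docker run -i -t -p 8888:8888 -v notebook-data:/opt/notebooks continuumio/miniconda3 /bin/bash -c "/opt/conda/bin/conda install jupyter -y --quiet && /opt/conda/bin/jupyter notebook --notebook-dir=/opt/notebooks --ip='*' --port=8888 --no-browser --allow-root"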
Upvotes: 1
Views: 1036
Reputation: 12501
This is not about your disk partition, but I think you're doing too many things in the startup shell. I'd suggest defining your own Dockerfile and building your own image. I tested with the following Dockerfile:
FROM continuumio/miniconda3
# Install Jupyter at build time and create the notebook directory
RUN /opt/conda/bin/conda install jupyter -y --quiet && mkdir /opt/notebooks
VOLUME /opt/notebooks
EXPOSE 8888
WORKDIR /opt/notebooks
# Exec-form CMD does not go through a shell, so the quotes in --ip='*' would be
# passed literally to Jupyter; --ip=0.0.0.0 listens on all interfaces instead
CMD ["/opt/conda/bin/jupyter", "notebook", "--notebook-dir=/opt/notebooks", "--ip=0.0.0.0", "--port=8888", "--no-browser", "--allow-root"]
Then build your own image:
docker build -t jupyter .
After that you can easily start your service like:
docker run -d -v `pwd`/notebooks:/opt/notebooks -p 8888:8888 jupyter
Of course, you can mount any directory on your host to the VOLUME /opt/notebooks inside the container.
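Since the container runs detached (-d), Jupyter's login token is printed to the container's logs rather than your terminal. Assuming you give the container a name (the name notebook below is just an example), you can fetch it with docker logs:
docker run -d --name notebook -v `pwd`/notebooks:/opt/notebooks -p 8888:8888 jupyter
docker logs notebook
With recent Jupyter versions the startup output includes the full URL with the authentication token.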
Upvotes: 1
Reputation: 1901
With the devicemapper storage driver, Docker by default allocates a 10 GB base size for each container, irrespective of the size of your host's disk.
There have been some developments in this area, and recent Docker releases document how to change this base size; the details are available in the Docker documentation for the devicemapper storage driver. As per the documentation, you would use the following sequence of commands to increase the base size used for new containers and images.
$ sudo service docker stop
$ sudo rm -rf /var/lib/docker    # WARNING: this deletes all existing images and containers
# either run the daemon directly with the larger base size ...
$ sudo dockerd --storage-opt dm.basesize=50G
# ... or add dm.basesize=50G to the daemon's storage options and then
$ sudo service docker start
Also, please note that this will affect all future images you pull or build and all future containers you create. Existing images and containers are not affected by this option, which is why you may have to re-pull or rebuild your images and re-create your containers.
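To make the setting persistent across daemon restarts, one option (a sketch, assuming the devicemapper storage driver is in use and that /etc/docker/daemon.json does not already exist, since this would overwrite it) is to put the storage option into the daemon configuration and verify it afterwards:
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "storage-driver": "devicemapper",
  "storage-opts": ["dm.basesize=50G"]
}
EOF
sudo service docker restart
docker info | grep "Base Device Size"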
Upvotes: 1