Reputation: 8851
I am trying to integrate Docker into my Django workflow, and I have everything set up except one really annoying issue: if I want to add dependencies to my requirements.txt file, I basically have to rebuild the entire container image for those dependencies to stick.
For example, I followed the docker-compose example for Django here. The YAML file is set up like this:
db:
  image: postgres
web:
  build: .
  command: python manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - db
and the Dockerfile used to build the web container is set up like this:
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
So when the image for this container is built, requirements.txt is installed with whatever dependencies are in it at that point.
If I am using this as my development environment, it becomes very tedious to add new dependencies to that requirements.txt file, because I have to rebuild the image for the changes to requirements.txt to be installed.
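Concretely, the loop I end up repeating after every change to requirements.txt looks roughly like this (just a sketch of the commands I run, assuming the service name web from the compose file above):
docker-compose build web   # re-runs pip install -r requirements.txt inside the image
docker-compose up          # bring the stack back up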
Is there some sort of best practice in the Django community for dealing with this? If not, I would say that Docker looks very nice for packaging up an app once it is complete, but is not very good to use as a development environment: rebuilding the container takes a long time, so a lot of time is wasted.
I appreciate any insight. Thanks.
Upvotes: 10
Views: 9214
Reputation: 13486
I think the best way is to ignore what's currently the most common way to install Python dependencies (pip install -r requirements.txt) and specify your requirements directly in the Dockerfile, effectively getting rid of the requirements.txt file. Additionally, you get Docker's layer caching for free.
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
# install requirements before the ADD of the code: the ADD layer changes whenever your code does, so anything after it gets rebuilt
RUN pip install flask==0.10.1
RUN pip install sqlalchemy==1.0.6
...
ADD . /code/
If the Docker container is the only way your application is ever run, then I would suggest you do it this way. If you want to support other means of setting up your code (e.g. virtualenv), then this is of course not for you, and you should fall back to either a requirements file or a setup.py routine. Either way, I found this to be the most simple and straightforward approach, without dealing with all the messed-up Python package distribution issues.
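To make the caching benefit concrete, a rebuild after adding one dependency looks roughly like this (the tag name web and the package are just examples):
docker build -t web .   # first build runs every RUN line
# append e.g. RUN pip install requests==2.7.0 to the Dockerfile, then:
docker build -t web .   # the cached layers above the new line are reused; only the new pip install and the final ADD run again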
Upvotes: -1
Reputation: 8851
So I changed the YAML file to this:
db:
  image: postgres
web:
  build: .
  command: sh startup.sh
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - db
I made a simple shell script startup.sh:
#!/bin/bash
# re-run this script as root, if not already root
[ "$(whoami)" = root ] || exec sudo "$0" "$@"
# install any in-progress development dependencies on every container start
pip install -r dev-requirements.txt
# then start the dev server
python manage.py runserver 0.0.0.0:8000
and then made a dev-requirements.txt that the shell script above installs, as a sort of dependency staging area.
When I am satisfied with a dependency in dev-requirements.txt, I just move it over to requirements.txt so it gets committed to the next build of the image. This gives me the flexibility to play with adding and removing dependencies while developing.
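In practice the cycle looks something like this (the package name is just an example):
echo "requests==2.7.0" >> dev-requirements.txt   # experiment without a rebuild
docker-compose up                                # startup.sh installs it on start
echo "requests==2.7.0" >> requirements.txt       # once I'm happy with it, promote it
docker-compose build web                         # and bake it into the image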
Upvotes: 3
Reputation: 11026
You could mount requirements.txt as a volume when using docker run (untested, but you get the gist):
docker run -v $(pwd)/requirements.txt:/code/requirements.txt container:tag
Then you could bundle a script with your container that runs pip install -r requirements.txt before starting your application, and use that as your ENTRYPOINT. I love the custom entrypoint script approach; it lets me do a little extra work without needing to make a new container.
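A minimal sketch of such an entrypoint script (the file name entrypoint.sh and the /code path are just assumptions following the question's layout; point ENTRYPOINT at it in the Dockerfile, e.g. ENTRYPOINT ["sh", "/code/entrypoint.sh"]):
#!/bin/sh
# install whatever is in the (possibly volume-mounted) requirements.txt at start,
# then hand control to whatever command the container was given
pip install -r /code/requirements.txt
exec "$@"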
That said, if you're changing your dependencies, you're probably changing your application and you should probably make a new container and tag it with a later version, no? :)
Upvotes: 6