Reputation: 960
I am building a server written in C++ and want to deploy it using Docker with docker-compose. What is the "right way" to do it? Should I invoke make from the Dockerfile, or build the binaries manually, upload them to some server, and then COPY them from the Dockerfile?
Upvotes: 18
Views: 21258
Reputation: 13424
For anyone visiting this question after 2017, please see the answer by fuglede about using multi-stage Docker builds; that is a better solution than my answer below, which is from 2015, well before multi-stage builds were available.
The way I would do it is to run your build outside of your container and only copy the output of the build (your binary and any necessary libraries) into your container. You can then upload your image to a container registry (e.g., use a hosted one or run your own), and then pull from that registry onto your production machines. Thus, the flow could look like this:
1. build the binary outside Docker (on a workstation or a CI build machine)
2. COPY the binary and any required libraries into a slim runtime image
3. push that image to a container registry
4. pull the exact same image onto your production machines and run it
Since it's important that you test before production deployment, you want to test exactly the same thing that you will deploy in production, so you don't want to extract or modify the Docker image in any way after building it.
I would not run the build inside the container you plan to deploy in prod, as then your container will have all sorts of additional artifacts (such as temporary build outputs, tooling, etc.) that you don't need in production and that needlessly bloat your image.
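For illustration, a minimal Dockerfile for such a runtime-only image could look roughly like this; the base image, binary name, and path are hypothetical, and any shared libraries your binary needs would have to be installed or copied in as well:
FROM debian:bookworm-slim
# Copy the binary that was built outside the container into the image
COPY build/myserver /usr/local/bin/myserver
CMD ["/usr/local/bin/myserver"]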
Upvotes: 7
Reputation: 9131
I had difficulties automating our build with docker-compose, and I ended up using docker build for everything:
Three layers for building
Run → develop → build
Then I copy the build outputs into the 'deploy' image:
Run → deploy
Four layers to play with:
- Run: contains any packages the application needs at runtime
- Develop: FROM <projname>:run, adds the packages needed to build
- Build: FROM <projname>:develop, contains the source and performs the build
- Deploy: FROM <projname>:run, copies in the build output; RUN or ENTRYPOINT is used to launch the application
The folder structure looks like this:
.
├── run
│   └── Dockerfile
├── develop
│   └── Dockerfile
├── build
│   ├── Dockerfile
│   └── removeOldImages.sh
└── deploy
    ├── Dockerfile
    └── pushImage.sh
Setting up the build server means executing:
docker build -f run/Dockerfile -t <projname>:run .
docker build -f develop/Dockerfile -t <projname>:develop .
Each time we make a build, this happens:
# Execute the build
docker build -f build/Dockerfile -t <projname>:build .
# Install build outputs
docker build -f deploy/Dockerfile -t <projname>:<version> .
# If successful, push the deploy image to Docker Hub
docker tag <projname>:<version> <projname>:latest
docker push <projname>:<version>
docker push <projname>:latest
I refer people to the Dockerfiles as documentation about how to build/run/install the project.
If a build fails and the output is insufficient for investigation, I can run /bin/bash in <projname>:build and poke around to see what went wrong.
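For example, an interactive shell in the most recent build image can be started with a command along these lines:
docker run --rm -it <projname>:build /bin/bash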
I put together a GitHub repository around this idea. It works well for C++, but you could probably use it for anything.
I haven't explored the feature, but @TaylorEdmiston pointed out that my pattern here is quite similar to multi-stage builds, which I didn't know about when I came up with this. It looks like a more elegant (and better documented) way to achieve the same thing.
Upvotes: 29
Reputation: 10473
My recommendation would be to develop, build, and test entirely inside the container itself. This follows the Docker philosophy that the developer's environment should be the same as the production environment; see The Modern Developer Workstation on MacOS with Docker.
This matters especially for C++ applications, which usually depend on shared libraries/object files.
I don't think there exists a standardized development process for developing, testing and deploying C++ applications on Docker, yet.
To answer your question, the way we do it right now is to treat the container as your development environment and enforce a set of practices on the team, such as checking that the output of docker diff shows only the changes you expect.
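For instance, something along these lines, where the container and image names are hypothetical:
# Start a development container from a full base image
docker run -it --name myserver-dev debian:bookworm /bin/bash
# ... inside the container: install dependencies, build, test ...
# Back on the host, list the filesystem changes made relative to the image
docker diff myserver-dev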
Upvotes: 6
Reputation: 18201
While the solutions presented in the other answers, and in particular the suggestion by Misha Brukman in the comments to this answer to use one Dockerfile for development and one for production, would have been considered idiomatic at the time the question was written, the problem they are trying to solve (cleaning up the build environment to reduce image size while still being able to use the same container environment in development and production) has effectively been solved by multi-stage builds, which were introduced in Docker 17.05.
The idea is to split the Dockerfile into two parts: one based on your favorite development environment, such as a fully-fledged Debian base image, which is concerned with creating the binaries you want to deploy at the end of the day, and another which simply runs those binaries in a minimal environment, such as Alpine.
This way you avoid possible discrepancies between development and production environments as alluded to by blueskin in one of the comments, while still ensuring that your production image is not polluted with development tooling.
The documentation provides the following example of a multi-stage build of a Go application, which you would then adapt to a C++ development environment (with one gotcha being that Alpine uses musl, so you have to be careful when linking in your development environment).
FROM golang:1.7.3
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
Upvotes: 7