bli00

Reputation: 2787

Concurrent builds within Docker with regard to multi-stage builds

I have a monolithic repo that contains all of my projects. My current setup is to bring up a build container, mount the monolithic repo into it, and build my projects sequentially. I then copy out the binaries and build their respective runtime (production) images, also sequentially.

I find this process quite slow and want to improve its speed. The two main approaches I want to take are:

  1. Within the build container, build my project binaries concurrently instead of sequentially.

  2. As with approach #1, build my runtime (production) images concurrently.

I did some research, and it seems there are two Docker features of interest:

  1. Multi-stage builds, which would let me stop worrying about the build container and put everything into one Dockerfile.

  2. The --parallel option for docker-compose, which would solve approach #2 by letting me build my runtime images concurrently.

However, there are still two main issues:

  1. How do I glue the two features together?

  2. How do I build my binaries concurrently inside the build container? In other words, how can I achieve approach #1?

Clarifications

Regardless of whether multi-stage builds are used, there are two logical phases.

First is the binary building phase. During this phase, the artifacts are the compiled executables (binaries) produced in the build container. Since I'm not using multi-stage builds, I copy these binaries out to the host, so the host serves as an intermediate staging area. Currently the binaries are built sequentially; I want to build them concurrently inside the build container. Hence approach #1.

Second is the image building phase. During this phase, the binaries from the previous phase, now stored on the host, are used to build my production images. I also want to build these images concurrently, hence approach #2.

Multi-stage builds would eliminate the need for an intermediate staging area (the host), and --parallel would let me build the production images concurrently.

What I'm wondering is how I can achieve approaches #1 and #2 using multi-stage builds and --parallel. For every project, I could define a separate multi-stage Dockerfile and call --parallel on all of them to have their images built separately. This would achieve approach #2, but it would spawn a separate build container for each project and take up a lot of resources (I use the same build container for all my projects, and it's 6 GB). On the other hand, I could write a script to build my project binaries concurrently inside the build container. This would achieve approach #1, but then I can't use multi-stage builds if I want to build the production images concurrently.

What I really want is a Dockerfile like this:

FROM alpine:latest AS builder
# hypothetical script that builds both binaries in parallel
RUN concurrent_build.sh binary_a binary_b

# each production image starts from a slim base and pulls only its own
# binary out of the shared builder stage (paths illustrative)
FROM alpine:latest AS prod_img_a
COPY --from=builder /binary_a .

FROM alpine:latest AS prod_img_b
COPY --from=builder /binary_b .

And be able to run a docker-compose command like this (I'm making this up):

docker-compose --parallel prod_img_a prod_img_b
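For reference, the real-world shape of that command is docker-compose build --parallel, driven by a compose file whose services point at the multi-stage targets. A minimal sketch, assuming the Dockerfile above (the target key needs compose file format 3.4+):

version: "3.4"
services:
  prod_img_a:
    image: prod_img_a:latest
    build:
      context: .
      target: prod_img_a   # multi-stage target from the Dockerfile above
  prod_img_b:
    image: prod_img_b:latest
    build:
      context: .
      target: prod_img_b

With that in place, docker-compose build --parallel builds both images concurrently. Note that each service is a separate build invocation, though they share the layer cache.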

Further clarifications

The runtime binaries and runtime containers are not separate concerns here. I just want to be able to build the binaries AND the production images in parallel.

--parallel does not use different hosts, and my build container is huge. If I use multi-stage builds, running something like 15 of these build containers in parallel on my local dev machine could be bad.

I'm also thinking about building the binaries and the runtime images separately, but I'm not finding an easy way to do that. I have never used docker commit; would that sacrifice the Docker layer cache?

Upvotes: 2

Views: 7367

Answers (2)

bli00

Reputation: 2787

Results

My mono-repo contains 16 projects; some are microservices of just a few MB, and some are bigger services of about 300 to 500 MB.

The build compiles two prerequisites: one is gRPC and the other is XDR. Both are trivially small, taking only one or two seconds to build.

The build also contains a node_modules installation phase. npm install and build is THE bottleneck of the project and by far the slowest step.

The strategy I am using splits the build into two stages (a sketch of the driver follows the list):

  1. First stage: spin up a monolithic build container and mount the mono-repo into it as a bind volume with cached consistency. Build all of my containers' binary dependencies inside it in parallel using goroutines; each goroutine calls a build.sh bash script that does the actual building. The resulting binaries are written to the same mounted volume. Caching comes from a mounted Docker volume, and the binaries are preserved across runs on a best-effort basis.

  2. Second stage: build the images in parallel using Docker's Go SDK, again with one goroutine per image. Nothing else is special about this stage besides some basic optimizations.
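For reference, a hypothetical sketch of the driver (the project names, the build_container name, paths, and image tags are all made up; error handling is trimmed):

// Hypothetical sketch of the two-stage parallel build described above.
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"os"
	"os/exec"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
	"github.com/docker/docker/pkg/archive"
	"golang.org/x/sync/errgroup"
)

var projects = []string{"svc_a", "svc_b", "svc_c"} // ...16 in total

// Stage 1: run build.sh for every project inside the already-running
// build container, one goroutine per project.
func buildBinaries(ctx context.Context) error {
	g, ctx := errgroup.WithContext(ctx)
	for _, p := range projects {
		p := p // capture loop variable (pre-Go 1.22)
		g.Go(func() error {
			cmd := exec.CommandContext(ctx, "docker", "exec", "build_container", "./build.sh", p)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			return cmd.Run()
		})
	}
	return g.Wait()
}

// Stage 2: build the production images in parallel via the Docker Go SDK.
func buildImages(ctx context.Context, cli *client.Client) error {
	g, ctx := errgroup.WithContext(ctx)
	for _, p := range projects {
		p := p
		g.Go(func() error {
			// The SDK takes the build context as a tar stream.
			buildCtx, err := archive.TarWithOptions(p, &archive.TarOptions{})
			if err != nil {
				return err
			}
			resp, err := cli.ImageBuild(ctx, buildCtx, types.ImageBuildOptions{
				Tags:       []string{p + ":latest"},
				Dockerfile: "Dockerfile",
			})
			if err != nil {
				return err
			}
			defer resp.Body.Close()
			// Drain the streamed build output so the build runs to completion.
			_, err = io.Copy(io.Discard, resp.Body)
			return err
		})
	}
	return g.Wait()
}

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	if err := buildBinaries(ctx); err != nil {
		log.Fatal(err)
	}
	if err := buildImages(ctx, cli); err != nil {
		log.Fatal(err)
	}
	fmt.Println("all images built")
}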

I do not have performance data for the old build system, but building all 16 projects easily took the upper bound of 30 minutes. The old build was extremely basic: it did not build the images in parallel or use any optimizations.

The new build is extremely fast. If everything is cached and there are no changes, the build takes ~2 minutes. In other words, the overhead of bringing up the build system, checking the cache, and rebuilding the same cached Docker images is ~2 minutes. With no cache at all, the new build takes ~5 minutes. A HUGE improvement over the old build.

Thanks to @halfer for the help.

Upvotes: 4

halfer

Reputation: 20420

So, there are several things to try here. Firstly, yes, do try --parallel; it would be interesting to see the effect on your overall build times. It looks like you have no control over the number of parallel builds, though, so I wonder if it would try to do them all in one go.

If you find that it does, you could write docker-compose.yml files that only contain a subset of your services, such that you only have five at a time, and then build against each one in turn. Indeed, you could write a script that reads your existing YAML config and splits it up, so that you do not need to maintain your overall config and your split-up configs separately.

I suggested in the comments that multi-stage builds would not help, but I now think that is not the case. I was wondering whether the second stage in a Dockerfile would block until the first one completes, but this should not be so: if the second stage starts from a known image, it should only block when it encounters a COPY --from=first_stage command, which you can place right at the end, when you copy your binary from the compilation stage.
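To illustrate the ordering, here is a hypothetical two-stage file (image names and paths are made up) in which the runtime stage defers its only cross-stage dependency to the final instruction:

FROM big-build-image AS compile
RUN make /src/binary_a

FROM alpine:latest AS runtime_a
RUN apk add --no-cache ca-certificates   # runtime-only setup; no dependency on compile
COPY --from=compile /src/binary_a /usr/local/bin/binary_a   # the only blocking instruction

(For what it's worth, Docker's newer BuildKit backend does build independent stages in parallel.)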

Of course, if it turns out that multi-stage builds are not parallelised, then docker commit would be worth a try. You've asked whether this uses the layer cache, and I don't think it matters here. Your operation would be as follows (a shell sketch comes after this list):

  • Spin up the binary container to run a shell or a sleep command
  • Spin up the runtime container in the same way
  • Use docker cp to copy the binary from the first one to the second one (via the host, since docker cp cannot copy directly between two containers)
  • Use docker commit to create a new runtime image from the new runtime container
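
A minimal shell sketch of that flow, with hypothetical container and image names:

# keep both containers alive so files can be copied in and out
docker run -d --name binary1 build-image sleep infinity
docker run -d --name runtime1 runtime-base sleep infinity
# docker cp works between a container and the host, so stage on the host
docker cp binary1:/path/to/binary /tmp/binary
docker cp /tmp/binary runtime1:/path/to/binary
# snapshot the runtime container as a new production image
docker commit runtime1 prod_img_1:latest
docker rm -f binary1 runtime1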

This does not involve any network operations, and so should be pretty quick - you will have benefited greatly from the parallelisation already at this point. If the binaries are of non-trivial size, you could even try parallelising your copy operations:

# stage each binary on the host, then run each pair of copies in the background
(docker cp binary1:/path/to/binary /tmp/b1 && docker cp /tmp/b1 runtime1:/path/to/binary) &
(docker cp binary2:/path/to/binary /tmp/b2 && docker cp /tmp/b2 runtime2:/path/to/binary) &
(docker cp binary3:/path/to/binary /tmp/b3 && docker cp /tmp/b3 runtime3:/path/to/binary) &
wait

Note, though, that these are disk-bound operations, so you may find there is no advantage over doing them serially.

Could you give this a go and report back on:

  • your existing build times per container
  • your existing build times overall
  • your new build times after parallelisation

Do it all locally to start off with, and if you get some useful speed-up, try it on your build infrastructure, where you are likely to have more CPU cores.

Upvotes: 2
