Using two Dockerfiles in Cloud Build to reuse the intermediate image if Cloud Build fails
Cloud Build fails with a timeout error (I'm trying to deploy Prophet to Cloud Run). Therefore I'm trying to split the Dockerfile in two, saving the intermediate image in case a later build fails. I'd split the Dockerfile like this:
- Dockerfile_one: Python + Prophet's dependencies
- Dockerfile_two: image from Dockerfile_one + Prophet + other dependencies
What should cloudbuild.yaml look like so that it:
- skips step (1) if a previously built image is available, otherwise runs the step with Dockerfile_one and saves the image
- uses the image from step (1), adds more dependencies to it, and saves the image for deployment
Here is what cloudbuild.yaml looks like right now:
steps:
# Create gcr source directory
- name: 'bash'
  args:
    - '-c'
    - |
      echo 'Creating gcr_source directory for ${_GCR_NAME}'
      mkdir _gcr_source
      cp -r cloudruns/${_GCR_NAME}/. _gcr_source
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/${_GCR_NAME}', '.']
  dir: '_gcr_source'
# Push the container image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/${_GCR_NAME}']
# Deploy container image to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: gcloud
  args:
    - run
    - deploy
    - ${_GCR_NAME}
    - --image=gcr.io/$PROJECT_ID/${_GCR_NAME}
Thanks a lot!
Answers (3)
You need two pipelines:
- The first one creates the base image. That way, you can trigger it whenever you need to rebuild the base image, possibly with a different lifecycle than your application's. Something similar to this:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/<PROJECT_ID>/base-image', '-f', 'DOCKERFILE_ONE', '.']
images: ['gcr.io/<PROJECT_ID>/base-image']
- Then, in your second Dockerfile, start from the base image, and use a second Cloud Build pipeline to build, push, and deploy it (as you do in the last three steps of your question); a sketch of that pipeline follows the Dockerfile below.
FROM gcr.io/<PROJECT_ID>/base-image
COPY .....
....
...
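A minimal sketch of that second pipeline, assuming the second Dockerfile is named DOCKERFILE_TWO and reusing the ${_GCR_NAME} substitution from the question (names and paths are illustrative, not from the original post):
steps:
# Build the application image on top of the base image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/${_GCR_NAME}', '-f', 'DOCKERFILE_TWO', '.']
# Push it to Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/${_GCR_NAME}']
# Deploy it to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['run', 'deploy', '${_GCR_NAME}', '--image=gcr.io/$PROJECT_ID/${_GCR_NAME}']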
Why did your Cloud Build fail with a timeout error?
When building Docker images, it is important to keep the image size down: the bigger the image, the longer it takes to build and push, which is what drives a build past Cloud Build's timeout. Multiple Dockerfiles are often created to handle this image-size constraint. In your case, you were not able to reduce the image size and include only what is needed.
What can be done to rectify it?
- As per this documentation, multi-stage builds (introduced in Docker 17.05) allow you to build your app in a first "build" container and use the result in another container, while using the same Dockerfile.
- You use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don't want in the final image. To show how this works, follow this link; a Prophet-flavored sketch also follows this list.
- You only need a single Dockerfile.
- The result is the same tiny production image as before, with a significant reduction in complexity. You don't need to create any intermediate images and you don't need to extract any artifacts to your local system at all.
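Applied to the Prophet case, a minimal multi-stage sketch could look like the following (requirements.txt, main.py, the python:3.8 tags, and the /install prefix are assumptions for illustration, not taken from the question):
# Build stage: has compilers available, so pystan/prophet can build; install under /install
FROM python:3.8 AS build
COPY requirements.txt .
RUN pip install --upgrade pip wheel setuptools && \
    pip install --prefix=/install -r requirements.txt

# Final stage: copy only the installed packages, leaving the build toolchain behind
FROM python:3.8-slim
COPY --from=build /install /usr/local
WORKDIR /app
COPY . .
CMD ["python", "main.py"]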
How does it work?
- You can name your build stages. By default, the stages are not named, and you refer to them by their integer number, starting with 0 for the first FROM instruction. However, you can name your stages by adding an AS to the FROM instruction.
- When you build your image, you don't necessarily need to build the entire Dockerfile including every stage. You can specify a target build stage (see the sketch after this list).
- When using multi-stage builds, you are not limited to copying from stages you created earlier in your Dockerfile. You can use the COPY --from instruction to copy from a separate image, either using the local image name, a tag available locally or on a Docker registry, or a tag ID.
- You can pick up where a previous stage left off by referring to it when using the FROM directive.
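A quick sketch of the --target and COPY --from points above (the myapp name and the builder stage are hypothetical):
# Stop the build at the stage named "builder" (declared with FROM ... AS builder)
docker build --target builder -t myapp:build .

# Inside a Dockerfile: copy a file directly from a separate image on a registry
COPY --from=nginx:latest /etc/nginx/nginx.conf /nginx.conf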
- In the Google documentation, there is an example of a Dockerfile that uses multi-stage builds. The hello binary is built in a first container and injected into a second one. Because the second container is based on scratch, the resulting image contains only the hello binary and not the source file and object files needed during the build.
FROM golang:1.10 as builder
WORKDIR /tmp/go
COPY hello.go ./
RUN CGO_ENABLED=0 go build -a -ldflags '-s' -o hello
FROM scratch
CMD [ "/hello" ]
COPY --from=builder /tmp/go/hello /hello
- Here is a tutorial to understand how multi-stage builds work.
Not the answer, but a workaround: if anybody has the same issue, using Python 3.8 instead of 3.9 worked for Cloud Build.
This is what the Dockerfile looks like:
# Python 3.8 base image (3.9 is what timed out on Cloud Build)
FROM python:3.8

# Assumes a requirements.txt next to the Dockerfile
COPY requirements.txt ./

RUN pip install --upgrade pip wheel setuptools
# Install pystan (quoted so the shell doesn't treat >= as a redirection)
RUN pip install "Cython>=0.22"
RUN pip install "numpy>=1.7"
RUN pip install pystan==2.19.1.1
# Install other prophet dependencies
RUN pip install -r requirements.txt
RUN pip install prophet
Though figuring out how to iteratively build images for Cloud Run would still be really great.
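One hedged sketch of that iterative approach, using Docker's --cache-from flag (the :latest tagging scheme is an assumption; the ${_GCR_NAME} substitution is reused from the question): pull the last pushed image to seed the layer cache, then build against it, so unchanged layers such as the slow pystan compile are reused rather than rebuilt.
steps:
# Pull the previous image to warm the cache; tolerate failure on the very first build
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker pull gcr.io/$PROJECT_ID/${_GCR_NAME}:latest || exit 0']
# Rebuild, reusing any layers that still match the pulled image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/${_GCR_NAME}:latest', '--cache-from', 'gcr.io/$PROJECT_ID/${_GCR_NAME}:latest', '.']
images: ['gcr.io/$PROJECT_ID/${_GCR_NAME}:latest']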