Reputation: 1646
I have a gitlab-ci.yml like this:
build and push docker image:
  stage: publish
  variables:
    DOCKER_REGISTRY: amazon-registry
    AWS_DEFAULT_REGION: ap-south-1
    APP_NAME: sample-app
    DOCKER_HOST: tcp://docker:2375
  image:
    name: amazon/aws-cli
    entrypoint: [""]
  services:
    - docker:dind
  before_script:
    - amazon-linux-extras install docker
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:master .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME:master
This step takes 19 minutes to complete since the Docker image steps are not cached. I want to be able to cache the before_script step (amazon-linux-extras install docker) as well as the Docker image I'm building. We are running on our own GitLab runners. I've searched for answers but only found solutions that are four years old. Is there a way to do this? Also, will switching away from docker:dind help?
Upvotes: 27
Views: 31242
Reputation: 7695
The GitLab CI cache doesn't work quite like that. If you have a job that installs npm dependencies, for example, you could cache the resulting node_modules directory so npm install doesn't need to be run again, but it won't help for things like installing system packages.
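For contrast, here's a minimal sketch of the kind of thing the cache keyword is good at; the node:latest image and the cache key are illustrative:

install dependencies:
  stage: build
  image: node:latest
  cache:
    # scope the cache to the branch so different branches don't clobber each other
    key: $CI_COMMIT_REF_SLUG
    paths:
      - node_modules/
  script:
    - npm install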
Regarding the docker:dind service, you won't be able to run commands like docker build ... or docker push ... without that service, even if you switched the image your job uses to docker:latest. It's a bit counterintuitive, but the only way to be able to run those commands is to use the docker-in-docker service.
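One aside on that service: newer docker:dind images enable TLS and listen on port 2376 by default, so if you connect over plain tcp://docker:2375 as in the question, you may also need to disable TLS explicitly. A sketch of the relevant variables, assuming your runner doesn't already configure this:

variables:
  DOCKER_HOST: tcp://docker:2375
  # empty value disables the TLS certificate generation dind does by default
  DOCKER_TLS_CERTDIR: ""
services:
  - docker:dind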
However, you're not out of luck. I would recommend that you move the step in your before_script to your own Docker image that extends amazon/aws-cli. As long as you have access to Docker Hub, GitLab's included container registry (available on gitlab.com; on a self-managed instance an admin has to enable and configure it), Amazon's registry (ECR), or a privately run registry, you can create your own custom images and use them in GitLab CI pipelines.
Here's an example Dockerfile:
FROM amazon/aws-cli
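# bake the Docker CLI into the image so jobs don't repeat this install at runtime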
RUN amazon-linux-extras install docker
That's all you need to extend the existing amazon/aws-cli image and move your before_script installation into the image itself. Once the file is done, run:
docker build /path/to/dockerfile-directory -t my_tag:latest
After that, log in to your registry with docker login my.registry.example.com, tag the image so it points at that registry if it doesn't already (e.g. docker tag my_tag:latest my.registry.example.com/my_tag:latest), and push it with docker push. If you're not using GitLab's registry or the public Docker Hub, you'll need to configure your jobs or runners (either works) so they can authenticate with your registry. You can read about that here: https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#define-an-image-from-a-private-container-registry
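For a private registry, the approach that link describes is a DOCKER_AUTH_CONFIG CI/CD variable holding the registry credentials. A minimal sketch, where my.registry.example.com and the user/password pair are placeholders:

# generate the base64-encoded user:password pair (placeholder credentials)
printf 'my_user:my_password' | base64
# then set the DOCKER_AUTH_CONFIG variable to:
{"auths": {"my.registry.example.com": {"auth": "bXlfdXNlcjpteV9wYXNzd29yZA=="}}}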
Next you just have to use it in your pipelines:
build and push docker image:
  stage: publish
  variables:
    DOCKER_REGISTRY: amazon-registry
    AWS_DEFAULT_REGION: ap-south-1
    APP_NAME: sample-app
    DOCKER_HOST: tcp://docker:2375
  image:
    name: my_tag:latest
    entrypoint: [""]
  services:
    - docker:dind
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:master .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME:master
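Once that image is pushed, any other job can run on top of it the same way; the job name and script below are just illustrative:

smoke test image:
  stage: test
  image:
    name: my_tag:latest
    entrypoint: [""]
  script:
    # both tools are baked into the custom image, so no before_script is needed
    - aws --version
    - docker --version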
One other thing you can do to save pipeline time (if it applies) is to only run this step when your Dockerfile has changed. That way, if it hasn't but other jobs rely on it, they can just reuse the last image created. You can do that with the rules keyword along with changes:
build and push docker image:
  stage: publish
  variables:
    DOCKER_REGISTRY: amazon-registry
    AWS_DEFAULT_REGION: ap-south-1
    APP_NAME: sample-app
    DOCKER_HOST: tcp://docker:2375
  image:
    name: my_tag:latest
    entrypoint: [""]
  services:
    - docker:dind
  when: never
  rules:
    - changes:
        - Dockerfile
      when: always
  script:
    - docker build -t $DOCKER_REGISTRY/$APP_NAME:master .
    - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
    - docker push $DOCKER_REGISTRY/$APP_NAME:master
The job-level when: never sets the default for the job to never run, but the rules section checks to see if there are changes to the Dockerfile (changes accepts multiple files if needed). If there are changes, the job will always be run.
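For instance, if other files feed into the image, changes takes a list of paths and glob patterns; the docker/ path below is illustrative:

  rules:
    - changes:
        # rebuild when the Dockerfile or anything under docker/ changes
        - Dockerfile
        - docker/**/*
      when: always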
You can see details about the rules keyword here: https://docs.gitlab.com/ee/ci/yaml/#rules
You can see details about custom Docker images for GitLab CI here: https://docs.gitlab.com/ee/ci/docker/using_docker_images.html
Upvotes: 13
Reputation: 480
One thing I have tried is to use the layer cache in docker build. You can pull the existing image from your registry and then build with the --cache-from parameter. The job would look like this:
variables:
  IMAGE_TAG: $DOCKER_REGISTRY/$APP_NAME:master
script:
  - aws ecr get-login-password | docker login --username AWS --password-stdin $DOCKER_REGISTRY
  - docker pull $IMAGE_TAG || true
  - docker build --cache-from $IMAGE_TAG -t $IMAGE_TAG .
  - docker push $IMAGE_TAG
This method is mentioned in the official GitLab CI documentation as well.
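One caveat, as an assumption about your runner setup rather than something from the answer above: if the build runs under BuildKit (DOCKER_BUILDKIT=1), --cache-from only finds layers in images that were built with inline cache metadata, so the build line would need an extra build argument:

  # embed cache metadata in the pushed image so future --cache-from pulls can use it
  - docker build --build-arg BUILDKIT_INLINE_CACHE=1 --cache-from $IMAGE_TAG -t $IMAGE_TAG .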
Upvotes: 30