mr haven

Reputation: 1644

Docker Compose as a CI pipeline

So we use GitLab CI. The issue was the pain of having to commit each time we want to test whether our build pipeline was configured correctly. Unfortunately, there's no easy way to test GitLab CI locally when our containers/pipeline aren't working right.

Our solution: use a docker-compose.yml as a CI pipeline runner for local testing of containerized build steps. Why not, ya know? Basically GitLab CI, and most others, have each section spawn a container to run a command, and won't continue until the preceding steps complete, i.e. the first step must fully finish before the next step starts.

Here is a simple .gitlab-ci.yml file we use:

stages:
  - install
  - test

cache:
  untracked: true
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - node_modules/

install:
  image: node:10.15.3
  stage: install
  script: npm install

test:
  image: node:10.15.3
  stage: test
  script: 
    - npm run test
  dependencies:
    - install

Here is the docker-compose.yml file we converted it to:

version: "3.7"
services:
  install:
    image: node:10.15.3
    working_dir: /home/node
    user: node
    entrypoint: npm
    command:
      - install
    volumes:
      - .:/home/node:Z
  test:
    image: node:10.15.3
    working_dir: /home/node
    user: node
    entrypoint: npm
    command:
      - run
      - test
    volumes:
      - .:/home/node:Z
    depends_on:
      - install

OK, now for the real issue here. The depends_on part of the compose file doesn't wait for the install container to finish; it just waits for the npm command to be running. So once the npm process is officially loaded up and running, the test container starts and complains there are no node_modules yet. The fact that npm is running doesn't mean the npm command has actually finished.

Anyone know any tricks to better control what Docker considers to be "done"? All the solutions I looked into were using some kind of wrapper script that watched a port on the internal Docker network to wait for a service, like a DB, to be fully up and ready.

When using k8s I can set up a readiness probe, which is super dope, but that doesn't seem to be a feature of Docker Compose. Am I wrong here? It would be nice to just write a command which Docker uses to determine what "done" means.
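(Update-style note: newer Compose releases do support this through the long form of depends_on. A sketch, assuming a Compose version that understands `service_completed_successfully`; same services as above:)

```yaml
# Sketch: long-form depends_on, which waits for the install
# container to EXIT successfully rather than merely be running.
services:
  install:
    image: node:10.15.3
    working_dir: /home/node
    user: node
    entrypoint: npm
    command: ["install"]
    volumes:
      - .:/home/node:Z
  test:
    image: node:10.15.3
    working_dir: /home/node
    user: node
    entrypoint: npm
    command: ["run", "test"]
    volumes:
      - .:/home/node:Z
    depends_on:
      install:
        condition: service_completed_successfully
```

With this, test only starts after install exits with status 0.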

For now we must run each step manually, then run the next once the preceding step is complete, like so:

docker-compose up install

wait ....

docker-compose up test
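One stopgap (my own workaround, not a Compose feature) is to chain the steps with `docker-compose run`, which blocks until the container exits and propagates its exit code, so `&&` only continues on success:

```shell
# run each step to completion, aborting if a step fails;
# docker-compose run blocks until the container exits
docker-compose run --rm install && docker-compose run --rm test
```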

We really just want to say:

docker-compose up

and have all the steps complete in correct order by waiting for preceding steps.

Upvotes: 3

Views: 982

Answers (2)

mr haven

Reputation: 1644

This question was asked many years ago. I now use this project: https://github.com/firecow/gitlab-ci-local

It runs your GitLab pipeline locally using Docker, just as you would expect it to run.
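For example (command names as I remember them from the project's README; check the repo for current flags):

```shell
# install once (it's an npm package), then run from the repo root
npm install -g gitlab-ci-local
gitlab-ci-local --list   # show the jobs parsed from .gitlab-ci.yml
gitlab-ci-local          # run the whole pipeline locally in Docker
```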

Upvotes: 1

SomeRandomGuy

Reputation: 11

I went through the same issue. This is a permission-related thing when you are mapping a volume from your local machine into Docker:

volumes:
  - .:/home/node:Z

Create a file inside the container, then check the permissions of that same file on your local machine. If you see the root user (or anything else that isn't your current user) as the owner, you first have to run
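A quick way to do that check, sketched with the service names from the question (the file name is just an example):

```shell
# override the npm entrypoint to create a file from inside the container
docker-compose run --rm --entrypoint touch install perm-probe
# then, on the host, see who owns it
ls -l perm-probe
```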

export DOCKER_USER="$(id -u):$(id -g)"

and change

user: node

to

user: $DOCKER_USER
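Putting it together with the compose file from the question, the install service would end up looking something like this (sketch; the same change applies to the test service):

```yaml
# run as the host user's uid:gid so files created in the
# bind mount are owned by you, not by the container's node user
install:
  image: node:10.15.3
  working_dir: /home/node
  user: $DOCKER_USER   # substituted from the exported variable
  entrypoint: npm
  command:
    - install
  volumes:
    - .:/home/node:Z
```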

PS: I'm assuming you can run Docker without having to use sudo; just mentioning this because that's the scenario I have.

Upvotes: 1
