mcode

Reputation: 544

GitLab CI - Deploy docker-compose.yml on dev server, test, and deploy to prod

I'm completely new to GitLab CI but have started reading the documentation. Before actually building it, I want to check whether it is a good idea to proceed as planned:

I have a docker-compose file and several Dockerfiles in a GitLab repository. The docker-compose file consists of many images that depend on each other. We have two Docker servers, one prod server and one dev server. We want to achieve the following:

  1. On a trigger (manual or a commit), spin down the containers on the dev server (via docker-compose down)
  2. Check out the latest / current version of the repository (containing the docker-compose.yml and the Dockerfiles)
  3. Start all containers on the dev server (via docker-compose up -d)
  4. [for later, needs to be defined] Start a test
  5. If the test was successful, or by manual interaction (clicking a button), the environment should be deployed on the prod server (meaning steps 1, 2 and 3 on the prod server).

Does anything speak against this approach? The main issue I currently have is that I don't know how to "use" / "reference" my existing servers. I don't want the usual approach (create a new isolated Docker container, test the software and throw it away); I want to do it as described above.
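For reference, the rough .gitlab-ci.yml skeleton I have in mind looks something like this (just a sketch; stage and job names are placeholders and the actual deployment commands are still open):

stages:
  - deploy-dev
  - test
  - deploy-prod

deploy-dev:
  stage: deploy-dev
  script:
    # steps 1-3: spin down, check out the latest code, start the containers on the dev server
    - docker-compose down
    - docker-compose up -d

test:
  stage: test
  script:
    # step 4: still to be defined
    - echo "run tests here"

deploy-prod:
  stage: deploy-prod
  when: manual
  script:
    # step 5: same as steps 1-3, but on the prod server
    - echo "deploy to prod here"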

Thanks for your help!

Edit

After doing some additional research, I feel the need to add some things: From what I understand, normally a Docker container is spun up during the CI/CD pipeline to test your application. Since I'm actually testing a whole stack of containers / a docker-compose file that has certain requirements on the Docker host system, I would need to use something like Docker-in-Docker and deploy my stack there. However, as a first stage I would like to use the existing Docker server, since my "stack" would need to be adjusted before it can be created from scratch dynamically.

The reason the containers have requirements on the host system is that we use Docker in this scenario as an infrastructure tool: instead of VMs, we use Docker containers. The result is a complete environment of an enterprise application in which the different services (management interfaces, repositories etc.) are individual containers.

Hope this helps. If something is unclear, just ask.

Upvotes: 3

Views: 4605

Answers (2)

Ryabchenko Alexander

Reputation: 12470

I have set this up in the following way:

1) A GitLab runner with access to the Docker daemon on the host

sudo gitlab-runner register -n \
  --url https://gitlab.YOURSITE.com/ \
  --registration-token YOUR_TOKEN \
  --executor docker \
  --description "runner #1" \
  --docker-image "docker:stable" \
  --docker-privileged \
  --docker-volumes /var/run/docker.sock:/var/run/docker.sock \
  --docker-volumes /home/gitlab-runner/docker-cache

The last two lines with volumes allow the cache to be shared between runs and, by mounting the host's Docker socket, allow containers to be started on the same server the GitLab runner runs on.
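For completeness, the jobs below assume that the stages are declared at the top of .gitlab-ci.yml and that docker-compose is available in the job image. The docker:stable image does not bundle docker-compose, so one option (an assumption about your environment) is to install it in before_script, e.g. from Alpine's package repository:

image: docker:stable

stages:
  - integration
  - production

before_script:
  # docker:stable is Alpine-based and does not ship docker-compose;
  # installing it from the Alpine package repository is one option
  - apk add --no-cache docker-compose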

2) For tests/integration

integration:
  stage: integration
  when: manual
  script:
    - docker-compose -p integration -f docker-compose.integration.yml down -v
    - docker-compose -p integration -f docker-compose.integration.yml build --compress 
    - docker-compose -p integration -f docker-compose.integration.yml up -d

Note that down -v will remove the volumes; with up they will be recreated with default data
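If you later want the pipeline to also run tests against the freshly started stack (step 4 in the question), one option is a follow-up job in the same stage; the service name app and the test command below are placeholders:

integration-tests:
  stage: integration
  when: manual
  script:
    # run the test suite inside one of the already running containers;
    # "app" and ./run_tests.sh are placeholders for your stack
    - docker-compose -p integration -f docker-compose.integration.yml exec -T app ./run_tests.sh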

3) For production I use Docker Swarm / docker stack. It allows launching containers on a server different from the one the GitLab runner runs on

deploy-production:
  when: manual
  stage: production
  script:
    - docker login registry.MYSITE.com -u USER -p PASSWORD
    - docker-compose -f docker-compose.release.yml build
    - docker-compose -f docker-compose.release.yml push
    - docker stack deploy preprod -c deploy/production/service.yml --with-registry-auth

I use --with-registry-auth since I store the images in a private registry
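The stack file passed to docker stack deploy (deploy/production/service.yml above) is a compose-format file (version 3+) that can also carry swarm-specific deploy settings; a minimal sketch, with a placeholder image name and placement constraint:

version: "3.7"
services:
  app:
    # placeholder image in the private registry mentioned above
    image: registry.MYSITE.com/myproject/app:latest
    deploy:
      replicas: 1
      placement:
        constraints:
          # example constraint; adjust to pin services to particular swarm nodes
          - node.role == worker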

Upvotes: 0

Rumen Kyusakov

Reputation: 161

The setup you describe is quite typical for running integration tests, where you have a more or less complete system spun up for testing. There are different ways to solve this, but here is my take on it:

1) Use a separate GitLab CI build server (gitlab-ci-runner) and not the dev server. Its executor can be of any type: shell, docker, etc. This way you separate the deployment environment from your build servers.

2) In your CI pipeline, after all the code is built, unit tested, etc., add a manual job (https://docs.gitlab.com/ee/ci/yaml/README.html#when-manual) to start the integration tests on the dev/staging server.

3) The manual job would simply SSH to the dev server using credentials stored in secret variables (https://docs.gitlab.com/ee/ci/variables/README.html#secret-variables). It then executes docker-compose down, docker-compose pull, and docker-compose up, assuming the latest Docker images were already built in the build stage and pushed to a private Docker registry.
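A sketch of what such a manual deployment job could look like, assuming an Alpine-based job image and placeholder secret variables SSH_PRIVATE_KEY, DEV_USER and DEV_HOST (the path /srv/myapp is also a placeholder):

deploy-dev:
  stage: deploy-dev
  when: manual
  script:
    # install an SSH client and load the deployment key from a secret variable
    - apk add --no-cache openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
    # pull the images built earlier and restart the stack on the dev server
    - ssh -o StrictHostKeyChecking=no "$DEV_USER@$DEV_HOST" "cd /srv/myapp && docker-compose down && docker-compose pull && docker-compose up -d"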

4) Another job in the pipeline starts the tests.

5) Once the tests complete, you can have another stage that is only triggered manually or when a certain git tag is pushed, e.g. release/v* (https://docs.gitlab.com/ee/ci/yaml/README.html#only-and-except-simplified). In this job you SSH to the prod server and execute docker-compose down, docker-compose pull, and docker-compose up, again assuming the release Docker images were already built. That is, you do not build and tag your Docker images on the deployment machines; you only run the containers there.
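The production job can reuse the same pattern, restricted to manual runs and/or release tags (the release/v* pattern and the PROD_* variables below are placeholders):

deploy-prod:
  stage: deploy-prod
  when: manual
  only:
    # run only for refs matching the release tag convention
    - /^release\/v.*$/
  script:
    - apk add --no-cache openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
    # on prod, only pull and run the already-built release images
    - ssh -o StrictHostKeyChecking=no "$PROD_USER@$PROD_HOST" "cd /srv/myapp && docker-compose down && docker-compose pull && docker-compose up -d"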

For building the Docker images on your build server you can use the shell executor, Docker-in-Docker, or Docker socket binding (https://docs.gitlab.com/ee/ci/docker/using_docker_build.html), with the shell approach being the simplest.
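A corresponding build-and-push job might look like this (the registry URL and credential variables are placeholders; it assumes the runner can reach a Docker daemon via the shell executor or socket binding):

build:
  stage: build
  script:
    # authenticate against the private registry using secret variables
    - docker login registry.example.com -u "$REGISTRY_USER" -p "$REGISTRY_PASSWORD"
    # build the images described in docker-compose.yml and push them to the registry
    - docker-compose build
    - docker-compose push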

Upvotes: 4
