Jananath Banuka

Reputation: 3953

How to run Gitlab CI jobs in the same instance

I have set up an autoscaled gitlab-runner on AWS spot instances, and it works fine.

But I have an issue when running the jobs. Below is my .gitlab-ci.yml; it has two stages.

stages:
 - build
 - dev1:build

build:
 stage: build
 script: 
  - docker build --rm -t broker-connect-dev1-${CI_COMMIT_SHORT_SHA} -f BrokerConnect/Dockerfile .
 only:
  - dev1/release
 tags:
  - itela-spot-runner     

build-dev1:
 stage: dev1:build
 script: 
  - docker tag broker-connect-dev1-${CI_COMMIT_SHORT_SHA}:latest 19950818/broker-connect:${DEV1_TAG} 
 only:
  - dev1/release
 tags:
  - itela-spot-runner  

And here comes the problem: since I am using spot instances to run the jobs, sometimes the build stage runs on one spot instance and the dev1:build stage runs on another. When this happens, dev1:build fails because it cannot find the image broker-connect-dev1-${CI_COMMIT_SHORT_SHA}, which was built on a different spot instance. Is there a way, in GitLab or in gitlab-runner, to control this behavior and run the two jobs build and dev1:build on the same spot instance?

Upvotes: 7

Views: 7753

Answers (2)

olahouze

Reputation: 21

I have exactly the same problem as you. There is no real solution to this in GitLab CI, because it was designed around long-lived ("perennial") runners rather than ephemeral instances like AWS spot instances. With a long-lived runner the problem does not arise, because later stages can reuse whatever the previous stages left behind on the same machine.

In my case I found 2 possible workarounds

  1. Reproduce the steps (the method implemented in my company)

This method consists of repeating, in a later job, actions that were already performed in previous stages.

Advantage: teams are not lost in the pipeline GUI and can see each stage as a separate job.
Disadvantage: deployment takes longer, because the last job of the last stage redoes all the actions of the previous jobs on whichever runner it lands on.

Here is a code example to illustrate the solution (using the !reference mechanism):
.scriptCheckHelm: 
  script: 
    - 'helm dependency build'
    - 'helm lint .'
    
stages: 
  - lint
  - build

Check_Conf: 
  stage: 'lint' 
  script:
    - !reference [.scriptCheckHelm, script]
  rules: 
    - if: '($CI_PIPELINE_SOURCE == "push")'
      when: 'always'
      allow_failure: false
  extends: .tags

Build_Package:
  stage: 'build'
  script:
    - !reference [.scriptCheckHelm, script]
    - 'helm package .'
  rules:
    - if: '($CI_PIPELINE_SOURCE == "push")&&($CI_COMMIT_TITLE == "DEPLOYMENT")'
      when: 'on_success'
      allow_failure: false
  extends: .tags

In this case, when we make a commit with the title "DEPLOYMENT", we get a pipeline with multiple jobs.

  2. Run a single job

This method consists of grouping all the actions in a single job.

Advantage: no time is lost during deployment; the runner executes all the actions one after the other.
Disadvantage: users see only one job and have to dig through the job log to identify the error.

Here is a code example to illustrate the solution:
.scriptCheckHelm: 
  script: 
    - 'helm dependency build'
    - 'helm lint .'
    
stages: 
  - lint
  - build

Check_Conf: 
  stage: 'lint' 
  script:
    - !reference [.scriptCheckHelm, script]
  rules: 
    - if: '($CI_PIPELINE_SOURCE == "push")&&($CI_COMMIT_TITLE != "DEPLOYMENT")'
      when: 'always'
      allow_failure: false
  extends: .tags

Build_Package:
  stage: 'build'
  script:
    - !reference [.scriptCheckHelm, script]
    - 'helm package .'
  rules:
    - if: '($CI_PIPELINE_SOURCE == "push")&&($CI_COMMIT_TITLE == "DEPLOYMENT")'
      when: 'on_success'
      allow_failure: false
  extends: .tags

In this case, when we make a commit with the title "DEPLOYMENT", we get a pipeline with a single job.
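
Note that both snippets extend a .tags hidden job that is not shown here; a minimal, hypothetical definition might look like this (the tag name is only an example):

.tags:
  tags:
    # Example tag only; use whatever tag your runners are registered with
    - my-helm-runner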

Upvotes: 2

Adam Marshall

Reputation: 7695

The best way to control which jobs run on which runners is with tags. You could tag a runner with something like builds-images, then add that same tag to any job that builds images or needs to use images built by a previous step.

For example:

stages:
 - build
 - dev1:build

build:
 stage: build
 script: 
  - docker build --rm -t broker-connect-dev1-${CI_COMMIT_SHORT_SHA} -f BrokerConnect/Dockerfile .
 only:
  - dev1/release
 tags:
  - itela-spot-runner
  - builds-images   

build-dev1:
 stage: dev1:build
 script: 
  - docker tag broker-connect-dev1-${CI_COMMIT_SHORT_SHA}:latest 19950818/broker-connect:${DEV1_TAG} 
 only:
  - dev1/release
 tags:
  - itela-spot-runner
  - builds-images

Now you just need a runner (or runners) tagged with builds-images. If you're using gitlab.com, or you're self-hosted on at least GitLab 13.2, you can edit a runner's details from the Runners page for a project (details here: https://docs.gitlab.com/ee/ci/runners/#view-and-manage-group-runners). Otherwise, tags can be set while registering a runner. For your use case, without further changing your .gitlab-ci.yml file, I'd tag only one runner.
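
For reference, tags can also be supplied when registering the runner from the command line; a sketch, with placeholder URL, token, and description:

# URL, registration token, and description below are placeholders; adjust for your environment
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image "docker:latest" \
  --tag-list "itela-spot-runner,builds-images" \
  --description "spot-image-builder"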

The other option is to push the built image to Docker Hub (https://docs.docker.com/docker-hub/), GitLab's registry (https://docs.gitlab.com/ee/user/packages/container_registry/), or another registry that supports Docker images (https://aws.amazon.com/ecr/). Then, in any job that needs the image, pull it down from the registry and use it.

For your example:

stages:
 - build
 - dev1:build

build:
 stage: build
 before_script:
  - docker login [registry_url] #...
 script:
  # The image name must include the registry so docker push sends it there
  - docker build --rm -t [registry_url]/broker-connect-dev1-${CI_COMMIT_SHORT_SHA} -f BrokerConnect/Dockerfile .
  - docker push [registry_url]/broker-connect-dev1-${CI_COMMIT_SHORT_SHA}
 only:
  - dev1/release
 tags:
  - itela-spot-runner

build-dev1:
 stage: dev1:build
 before_script:
  - docker login [registry_url] #...
 script:
  - docker pull [registry_url]/broker-connect-dev1-${CI_COMMIT_SHORT_SHA}
  - docker tag [registry_url]/broker-connect-dev1-${CI_COMMIT_SHORT_SHA}:latest 19950818/broker-connect:${DEV1_TAG}
 only:
  - dev1/release
 tags:
  - itela-spot-runner
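
If you go with GitLab's Container Registry, the login and image path can be built from GitLab's predefined CI/CD variables instead of the [registry_url] placeholder. A sketch of the build job under that assumption (the dev1:build job would pull and tag the same path):

build:
 stage: build
 before_script:
  # CI_REGISTRY, CI_REGISTRY_USER, CI_REGISTRY_PASSWORD, and CI_REGISTRY_IMAGE
  # are predefined GitLab CI/CD variables
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
 script:
  - docker build --rm -t ${CI_REGISTRY_IMAGE}/broker-connect-dev1:${CI_COMMIT_SHORT_SHA} -f BrokerConnect/Dockerfile .
  - docker push ${CI_REGISTRY_IMAGE}/broker-connect-dev1:${CI_COMMIT_SHORT_SHA}
 only:
  - dev1/release
 tags:
  - itela-spot-runner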

Upvotes: 0
