ptk

Reputation: 7653

How to use multiple Docker containers to set up Jenkins agent in Jenkins pipeline

The following snippet is an example provided by Cypress, the JavaScript testing framework I'm using. Here is the link to the GitHub page.

pipeline {
  agent {
    // this image provides everything needed to run Cypress
    docker {
      image 'cypress/base:10'
    }
  }

  stages {
    // first stage installs node dependencies and Cypress binary
    stage('build') {
      steps {
        // there are a few default environment variables on Jenkins
        // on local Jenkins machine (assuming port 8080) see
        // http://localhost:8080/pipeline-syntax/globals#env
        echo "Running build ${env.BUILD_ID} on ${env.JENKINS_URL}"
        sh 'npm ci'
        sh 'npm run cy:verify'
      }
    }

    stage('start local server') {
      steps {
        // start local server in the background
        // we will shut it down in "post" command block
        sh 'nohup npm run start:ci &'
      }
    }

    // this stage runs end-to-end tests, and each agent uses the workspace
    // from the previous stage
    stage('cypress parallel tests') {
      environment {
        // we will be recording test results and video on Cypress dashboard
        // to record we need to set an environment variable
        // we can load the record key variable from credentials store
        // see https://jenkins.io/doc/book/using/using-credentials/
        CYPRESS_RECORD_KEY = credentials('cypress-example-kitchensink-record-key')
        // because parallel steps share the workspace they might race to delete
        // screenshots and videos folders. Tell Cypress not to delete these folders
        CYPRESS_trashAssetsBeforeRuns = 'false'
      }

      // https://jenkins.io/doc/book/pipeline/syntax/#parallel
      parallel {
        // start several test jobs in parallel, and they all
        // will use Cypress Dashboard to load balance any found spec files
        stage('tester A') {
          steps {
            echo "Running build ${env.BUILD_ID}"
            sh "npm run e2e:record:parallel"
          }
        }

        // second tester runs the same command
        stage('tester B') {
          steps {
            echo "Running build ${env.BUILD_ID}"
            sh "npm run e2e:record:parallel"
          }
        }
      }

    }
  }

  post {
    // shutdown the server running in the background
    always {
      echo 'Stopping local server'
      sh 'pkill -f http-server'
    }
  }
}

My goal is to have a Jenkinsfile that is very similar to the above because I want to have parallel Cypress testing as shown in the above snippet. In the example above, the Jenkins agent is simply the official Cypress Docker image cypress/base:10.

  agent {
    // this image provides everything needed to run Cypress
    docker {
      image 'cypress/base:10'
    }
  }

However, for me to run all my tests with my own database, I need to spin up two separate Docker containers. One container contains the front-end portion of my web app and the other container contains the back-end portion of my web app.

Below is the Dockerfile for my front-end container, which is located in my-app/docker/combined/Dockerfile.

FROM cypress/included:3.4.1

WORKDIR /usr/src/app

COPY package*.json ./

RUN npm install

COPY . .

EXPOSE 5000

RUN npm install -g history-server nodemon

RUN npm run build-test

EXPOSE 8080

Below is the Dockerfile for my back-end container, which is located in my-app/docker/db/Dockerfile. All it is doing is copying some local data into the Docker container and then initialising my MongoDB database with this data.

FROM mongo:3.6.14-xenial

COPY ./dump/ /tmp/dump/

COPY mongo_restore.sh /docker-entrypoint-initdb.d/

RUN chmod 777 /docker-entrypoint-initdb.d/mongo_restore.sh

Usually, I would use docker-compose and the following docker-compose.yml file to spin up these two containers. As you can see, the front-end container called "combined" is dependent on the back-end container called "db".

version: '3'
services:
    db:
        build:
            context: .
            dockerfile: ./docker/db/Dockerfile
        container_name: b-db
        restart: unless-stopped
        volumes:
            - dbdata:/data/db
        ports:
            - "27017:27017"
        networks:
            - app-network

    combined:
        build:
            context: .
            dockerfile: ./docker/combined/Dockerfile
        container_name: b-combined
        restart: unless-stopped
        env_file: .env
        ports:
            - "5000:5000"
            - "8080:8080"
        networks:
            - app-network
        depends_on:
            - db

# declare the named volume used by the db service (required for "dbdata" above)
volumes:
    dbdata:

Below is the docker-compose command I would use.

docker-compose up --build

I would like my Jenkins agent to be the combined container; however, the combined container needs to connect to my db container, which must also be spun up. My question is: how do I achieve this in a Jenkins pipeline? I've read this documentation; however, it doesn't mention anything about using multiple Dockerfiles to create a Jenkins agent. Is something like this possible, and could someone please show me what my Jenkinsfile should look like in order to achieve my goal?

Upvotes: 2

Views: 5659

Answers (2)

y_ug

Reputation: 1124

Consider running "sidecar" containers: https://jenkins.io/doc/book/pipeline/docker/#running-sidecar-containers
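A rough sketch of that sidecar pattern adapted to the images in the question, using a scripted pipeline as the Jenkins docs do. This is only an outline under assumptions: the image tags, the `MONGO_URL` variable name, and the `npm run e2e` script are placeholders, not taken from the question.

```groovy
node {
    checkout scm

    // build both images from the Dockerfiles in the question
    def dbImage  = docker.build('my-app-db', '-f docker/db/Dockerfile .')
    def appImage = docker.build('my-app-combined', '-f docker/combined/Dockerfile .')

    // start MongoDB in the background as a "sidecar" container
    dbImage.withRun('-p 27017:27017') { db ->
        // run the combined image as the agent, linked to the db container;
        // inside this block the database is reachable at host "db"
        appImage.inside("--link ${db.id}:db") {
            sh 'MONGO_URL=mongodb://db:27017 npm run e2e'
        }
    }
    // withRun stops and removes the db container automatically when the block exits
}
```

`withRun` takes care of cleanup even when the inner steps fail, which replaces the `depends_on`/`docker-compose down` behavior from the compose file.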

Upvotes: 1

gkpln3

Reputation: 1499

I don't think you can communicate between the parallel builds. The thing is, stages running in different parallel branches of a Jenkinsfile can theoretically run on different Jenkins slaves, and therefore may not be able to communicate with one another.

What I would recommend is running both the server and the frontend application in parallel in the same stage using &, and then calling wait to wait for all processes to exit.

You can even discard the docker keyword in the Jenkinsfile and instead call docker inside the stage itself.
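A minimal sketch of the &/wait approach in plain shell. The sleep commands are stand-ins for the real long-running processes (e.g. the backend server and the frontend app); they are placeholders, not commands from the question.

```shell
#!/bin/sh

run_parallel() {
  # start both long-running processes in the background
  sleep 1 &   # stand-in for the backend server (e.g. "npm run start:ci")
  sleep 1 &   # stand-in for the frontend application

  # wait blocks until every background child of this shell has exited
  wait

  echo "all background processes finished"
}

run_parallel
```

In a Jenkinsfile, a script like this would live in a single sh step, so both processes are guaranteed to run on the same agent and share one workspace.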

Upvotes: 0
