Reputation: 16287
I am configuring jenkins + jenkins agents in kubernetes using this guide:
https://akomljen.com/set-up-a-jenkins-ci-cd-pipeline-with-kubernetes/
which gives the below example of a jenkins pipeline using multiple/different containers for different stages:
def label = "worker-${UUID.randomUUID().toString()}"
podTemplate(label: label, containers: [
containerTemplate(name: 'gradle', image: 'gradle:4.5.1-jdk9', command: 'cat', ttyEnabled: true),
containerTemplate(name: 'docker', image: 'docker', command: 'cat', ttyEnabled: true),
containerTemplate(name: 'kubectl', image: 'lachlanevenson/k8s-kubectl:v1.8.8', command: 'cat', ttyEnabled: true),
containerTemplate(name: 'helm', image: 'lachlanevenson/k8s-helm:latest', command: 'cat', ttyEnabled: true)
],
volumes: [
hostPathVolume(mountPath: '/home/gradle/.gradle', hostPath: '/tmp/jenkins/.gradle'),
hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock')
]) {
node(label) {
def myRepo = checkout scm
def gitCommit = myRepo.GIT_COMMIT
def gitBranch = myRepo.GIT_BRANCH
def shortGitCommit = "${gitCommit[0..10]}"
def previousGitCommit = sh(script: "git rev-parse ${gitCommit}~", returnStdout: true)
stage('Test') {
try {
container('gradle') {
sh """
pwd
echo "GIT_BRANCH=${gitBranch}" >> /etc/environment
echo "GIT_COMMIT=${gitCommit}" >> /etc/environment
gradle test
"""
}
}
catch (exc) {
println "Failed to test - ${currentBuild.fullDisplayName}"
throw(exc)
}
}
stage('Build') {
container('gradle') {
sh "gradle build"
}
}
stage('Create Docker images') {
container('docker') {
withCredentials([[$class: 'UsernamePasswordMultiBinding',
credentialsId: 'dockerhub',
usernameVariable: 'DOCKER_HUB_USER',
passwordVariable: 'DOCKER_HUB_PASSWORD']]) {
sh """
docker login -u ${DOCKER_HUB_USER} -p ${DOCKER_HUB_PASSWORD}
docker build -t namespace/my-image:${gitCommit} .
docker push namespace/my-image:${gitCommit}
"""
}
}
}
stage('Run kubectl') {
container('kubectl') {
sh "kubectl get pods"
}
}
stage('Run helm') {
container('helm') {
sh "helm list"
}
}
}
}
But why would you bother with this level of granularity? E.g. why not just have one container that has everything you need (jnlp, helm, kubectl, Java, etc.) and use it for all your stages?
I know that from a purist perspective it's good to keep containers/images as small as possible, but if that's the only argument I would rather have a single container and not bother my end users (the developers writing Jenkinsfiles) with picking the right container for each stage. They shouldn't have to worry about details at this level; they just need to be able to get an agent, and that's it.
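For concreteness, the all-in-one setup I have in mind would look roughly like this (the image name my-org/jenkins-agent-all-in-one is just a placeholder for an image bundling the JNLP agent plus all the tools, not a real image):

def label = "worker-${UUID.randomUUID().toString()}"

// Hypothetical all-in-one pod template: a single image that already contains
// gradle, docker, kubectl and helm, so every stage runs in the same container.
podTemplate(label: label, containers: [
  containerTemplate(name: 'builder', image: 'my-org/jenkins-agent-all-in-one:latest', command: 'cat', ttyEnabled: true)
],
volumes: [
  hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock')
]) {
  node(label) {
    checkout scm
    stage('Test')   { container('builder') { sh 'gradle test' } }
    stage('Build')  { container('builder') { sh 'gradle build' } }
    stage('Deploy') { container('builder') { sh 'kubectl get pods && helm list' } }
  }
}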
Or am I missing some functional reason for this multiple container setup?
Upvotes: 1
Views: 1650
Reputation: 794
Using one single image to handle the whole process is functionally feasible, but it adds a burden to your operations.
You won't always find an image that fulfils all your needs, i.e. the desired tools at the desired versions. Most likely, you will end up building one yourself.
To do that, you have to build the Docker image for the different architectures you run on (amd64/arm64) and maintain/use a Docker registry to store it, and this process gets more time-consuming as the image grows more complicated. More importantly, it is quite likely that some of your tools favour a particular Linux distro, so getting everything to work in a single image can be difficult and not always functionally OK.
Now imagine you need a newer version of one tool in one of your pipeline's steps: with the all-in-one image you have to repeat the whole process of building and uploading the image. With one container per tool, you only need to change the image tag in your pipeline, which minimises your operational effort.
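For example, if the kubectl stage needs a newer client, only one image tag in the pod template has to change (the v1.10.3 tag below is only illustrative); nothing has to be rebuilt or pushed to a registry:

def label = "worker-${UUID.randomUUID().toString()}"

podTemplate(label: label, containers: [
  containerTemplate(name: 'gradle', image: 'gradle:4.5.1-jdk9', command: 'cat', ttyEnabled: true),
  containerTemplate(name: 'docker', image: 'docker', command: 'cat', ttyEnabled: true),
  // Only this line changes compared to the original pipeline.
  containerTemplate(name: 'kubectl', image: 'lachlanevenson/k8s-kubectl:v1.10.3', command: 'cat', ttyEnabled: true),
  containerTemplate(name: 'helm', image: 'lachlanevenson/k8s-helm:latest', command: 'cat', ttyEnabled: true)
]) {
  node(label) {
    stage('Run kubectl') {
      container('kubectl') {
        sh 'kubectl version --client && kubectl get pods'
      }
    }
  }
}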
Upvotes: 1