Reputation: 3861
I'm having issues getting a Jenkins pipeline script to work that uses the Docker Pipeline plugin to run parts of the build within a Docker container. Both the Jenkins server and the slave run within Docker containers themselves.
The Docker daemon container is based on the docker:1.12-dind image, and the slave is started with:
docker run --link=docker-daemon:docker --link=jenkins:master -d --name protokube-jenkins-slave -e EXTRA_PARAMS="-username xxx -password xxx -labels docker" simulogics/protokube-jenkins-slave
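For reference, the linked daemon container is presumably started along these lines; only the image and the docker-daemon container name come from the setup above, the remaining flags (dind requires --privileged) are an assumption:
docker run -d --privileged --name docker-daemon docker:1.12-dind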
Basic Docker operations (pull, build and push images) are working just fine with this setup.
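For context, those working operations look roughly like the sketch below, and the image object used in the failing step further down presumably comes from such a step. The image name myorg/myapp is purely illustrative; the docker node label matches the -labels docker setting of the slave:
node('docker') {
    def image = docker.build("myorg/myapp:${env.BUILD_NUMBER}")
    image.push()
}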
The problem occurs when using the inside function provided by the Docker Pipeline plugin within a generic Docker slave. This is a representative build step that's causing the issue:
image.inside {
    stage ('Install Ruby Dependencies') {
        sh "bundle install"
    }
}
This would cause an error like this in the log:
sh: 1: cannot create /workspace/repo_branch-K5EM5XEVEIPSV2SZZUR337V7FG4BZXHD4VORYFYISRWIO3N6U67Q@tmp/durable-98bb4c3d/pid: Directory nonexistent
Previously, this warning would show:
71f4de289962-5790bfcc seems to be running inside container 71f4de28996233340c2aed4212248f1e73281f1cd7282a54a36ceeac8c65ec0a but /workspace/repo_branch-K5EM5XEVEIPSV2SZZUR337V7FG4BZXHD4VORYFYISRWIO3N6U67Q could not be found among []
Interestingly enough, exactly this problem is described in the CloudBees documentation for the plugin: https://go.cloudbees.com/docs/cloudbees-documentation/cje-user-guide/index.html#docker-workflow-sect-inside:
For inside to work, the Docker server and the Jenkins agent must use the same filesystem, so that the workspace can be mounted. The easiest way to ensure this is for the Docker server to be running on localhost (the same computer as the agent). Currently neither the Jenkins plugin nor the Docker CLI will automatically detect the case that the server is running remotely; a typical symptom would be errors from nested sh commands such as
cannot create /…@tmp/durable-…/pid: Directory nonexistent or negative exit codes.
When Jenkins can detect that the agent is itself running inside a Docker container, it will automatically pass the --volumes-from argument to the inside container, ensuring that it can share a workspace with the agent.
Unfortunately, the detection described in the last paragraph doesn't seem to work.
Since both my server and slave are running in Docker containers, what kind of volume mapping do I have to use to make it work?
Upvotes: 4
Views: 4871
Reputation: 580
I've seen variations of this issue, also with agents powered by the kubernetes-plugin.
I think that for it to work, the agent/jnlp container needs to share the workspace with the build container.
By build container I am referring to the one that will run the bundle install command.
This could possibly work via withArgs.
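A sketch of what that could look like, assuming the args parameter of inside is what is meant here; the container name protokube-jenkins-slave is taken from the question's docker run command:
image.inside("--volumes-from protokube-jenkins-slave") {
    sh "bundle install"
}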
The question is why would you want to do that? Most of the pipeline steps are being executed on master anyway and the actual build will run in the build container. What is the purpose of also using an agent?
Upvotes: 1