BoomShaka

Reputation: 1801

Files written to docker bind mount not visible on jenkins host

UPDATE: From the log I see...

21:05:51  Running on ip-10-0-4-23.eu-west-2.compute.internal in /home/jenkins/workspace/myproj
...
... docker stuff happens here
...
21:05:54  Running on ip-10-0-4-9.eu-west-2.compute.internal in /home/jenkins/workspace/myproj

So I assume that because the host is different (note the different IP), the bind mount exists on the first host, not the second.

Is there a way to force it to remain on the same host/agent?

OLD QUESTION

I have a Jenkins pipeline job that uses a Docker container to do most of the heavy lifting. I want to "do some stuff" in the Docker container that will end up generating a bunch of files. I want these files available on the "parent" agent after the Docker container has exited (and been removed by Jenkins), so that I can zip them up and archive them or whatever.

Since Jenkins launches Docker with a bind mount linking the Jenkins workspace to a folder inside the container, I would have thought that writing a file to this folder from within the container would cause the file to exist on the host, but this does not seem to work.

The ls -la in the Generate artifact block does NOT show the file-from-build-${env.BUILD_NUMBER}.txt. I've verified with pwd that I'm in the correct directory when writing/listing the files.

pipeline {
    agent any
    environment {
        userId = sh(script: "id -u ${USER}", returnStdout: true).trim()
    }
    stages {
        stage('Prep') {
            steps {
                // do some stuff on the "any" agent
                echo 'prep'
            }
        }
        stage('Build') {
            agent {
                dockerfile {
                    filename 'Dockerfile.build'
                    additionalBuildArgs "--build-arg UID=${userId}"
                }
            }
            steps {
                sh "touch file-from-build-${env.BUILD_NUMBER}.txt"
                sh 'ls -la'
            }
        }
        stage('Generate artifact') {
            steps {
                sh "ls -la"
            }
        }
    }
}

From the logs, jenkins launches docker with:

docker run -t -d -u 1001:1001 -w /home/jenkins/workspace/myproj -v /home/jenkins/workspace/myproj:/home/jenkins/workspace/myproj:rw,z -v /home/jenkins/workspace/myproj@tmp:/home/jenkins/workspace/myproj@tmp:rw,z

My Dockerfile.build if it helps:

FROM php:7.4-alpine

RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer --2

RUN apk update && apk upgrade && \
    apk add --no-cache git openssh zip

# create a jenkins user inside the docker image (so that I can put some ssh keys for use in its home folder)
ARG UID=1001
RUN adduser -D -g jenkins -s /bin/sh -u $UID jenkins \
    && mkdir -p /home/jenkins/.ssh \
    && touch /home/jenkins/.ssh/id_rsa \
    && chown -R jenkins:jenkins /home/jenkins/.ssh \
    && chmod 700 /home/jenkins/.ssh/ \
    && chmod 600 /home/jenkins/.ssh/id_rsa
    
USER jenkins

If it matters, I'm doing the heavy lifting in the Docker container because I need to install some tools that I don't have on the normal Jenkins agent.

Upvotes: 0

Views: 393

Answers (1)

Kaj Hejer

Reputation: 1040

The reuseNode true directive, as I understand it, is meant to do what you want; see https://www.jenkins.io/doc/book/pipeline/syntax/.

In your case it would be:

            agent {
                dockerfile {
                    filename 'Dockerfile.build'
                    additionalBuildArgs "--build-arg UID=${userId}"
                    reuseNode true
                }
            }

I have to say that I have seen reuseNode true not always work as intended, but I haven't had time to dig into this issue.
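If reuseNode proves unreliable in your setup, a common fallback (sketched here under assumptions; it is not part of the original answer) is to stash the generated files inside the container stage and unstash them in the later stage. stash/unstash copies files via the Jenkins controller, so it works even when stages land on different agents:

```groovy
// Hypothetical sketch: stash the build output so a later stage can
// retrieve it regardless of which agent/host that stage runs on.
stage('Build') {
    agent {
        dockerfile {
            filename 'Dockerfile.build'
            additionalBuildArgs "--build-arg UID=${userId}"
        }
    }
    steps {
        sh "touch file-from-build-${env.BUILD_NUMBER}.txt"
        // save the generated files for later stages
        stash name: 'build-artifacts', includes: 'file-from-build-*.txt'
    }
}
stage('Generate artifact') {
    steps {
        // restore the stashed files into this stage's workspace
        unstash 'build-artifacts'
        sh 'ls -la'
    }
}
```

Note that stash is intended for relatively small sets of files; for large outputs, archiveArtifacts or an external artifact store is usually the better choice.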

Upvotes: 1
