HalfBloodPrince7

Reputation: 163

Bind Volumes to Docker container in a pipeline job

So, I have this pipeline job that builds completely inside a Docker container. The Docker image used is pulled from a local repository before the build and has almost all the dependencies required to run my project.

The problem is that I need a way to define volumes to bind mount from the host into the container, so that I can perform some analysis using a tool that is available on my host system but not in the container.

Is there a way to do this from inside a Jenkinsfile (Pipeline script)?

Upvotes: 0

Views: 7480

Answers (2)

lvthillo

Reputation: 30791

I'm not fully sure if this is what you mean, but if it isn't, let me know and I'll try to figure it out.

What I understand by mounting from host to container is mounting the content of the Jenkins workspace inside the container.

For example, in this pipeline:

pipeline {
    agent { node { label 'xxx' } }

    options {
        buildDiscarder(logRotator(numToKeepStr: '3', artifactNumToKeepStr: '1'))
    }

    stages {
        stage('add file') {
            steps {
                sh 'touch myfile.txt'
                sh 'ls'
            }
        }

        stage('Deploy') {
            agent {
                docker {
                    image 'lvthillo/aws-cli'
                    args '-v $WORKSPACE:/project'
                    reuseNode true
                }
            }
            steps {
                sh 'ls'
                sh 'aws --version'
            }
        }
    }

    post {
        always {
            cleanWs()
        }
    }
}

In the first stage I just add a file to the workspace, directly in Jenkins, with no Docker involved.

In the second stage I start a Docker container which contains the AWS CLI (this is not installed on our Jenkins slaves). The container is started with the workspace mounted into the /project folder of the container. Now I can execute AWS CLI commands and I have access to the text file. In a later stage (not shown in the pipeline) you can use the file again in a different container or on the Jenkins slave itself.

Output:

[Pipeline] {
[Pipeline] stage
[Pipeline] { (add file)
[Pipeline] sh
[test] Running shell script
+ touch myfile.txt
[Pipeline] sh
[test] Running shell script
+ ls
myfile.txt
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy)
[Pipeline] getContext
[Pipeline] sh
[test] Running shell script
+ docker inspect -f . lvthillo/aws-cli
.
[Pipeline] withDockerContainer
FJ Arch Slave 7 does not seem to be running inside a container
$ docker run -t -d -u 201:201 -v $WORKSPACE:/project -w ... lvthillo/aws-cli cat
$ docker top xx -eo pid,comm
[Pipeline] {
[Pipeline] sh
[test] Running shell script
+ ls
myfile.txt
[Pipeline] sh
[test] Running shell script
+ aws --version
aws-cli/1.14.57 Python/2.7.14 Linux/4.9.78-1-lts botocore/1.9.10

[Pipeline] }
$ docker stop --time=1 3652bf94e933cbc888def1eeaf89e1cf24554408f9e4421fabfd660012a53365
$ docker rm -f 3652bf94e933cbc888def1eeaf89e1cf24554408f9e4421fabfd660012a53365
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] cleanWs
[WS-CLEANUP] Deleting project workspace...[WS-CLEANUP] done
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS

In your case you can mount your data in the container, perform your build steps there, and in a later stage do your analysis on your Jenkins slave itself (without Docker), as shown in the sketch below.
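If it helps, here is a minimal sketch of that approach; my-build-image and my-analysis-tool are hypothetical placeholders for your own image and the host-only tool:

pipeline {
    agent { node { label 'xxx' } }

    stages {
        stage('Build') {
            agent {
                docker {
                    image 'my-build-image' // hypothetical: your build image
                    args '-v $WORKSPACE:/project'
                    reuseNode true
                }
            }
            steps {
                // runs inside the container; results land in the shared workspace
                sh 'ls /project'
            }
        }

        stage('Analyze') {
            // no docker agent here: this stage runs on the Jenkins slave itself,
            // so a tool installed only on the host is available
            steps {
                sh 'my-analysis-tool .' // hypothetical: your host-only analysis tool
            }
        }
    }
}

Because of reuseNode true, both stages run on the same node and share the same workspace, so the analysis stage sees whatever the build stage produced.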

Upvotes: 1

Fuevo

Reputation: 152

Suppose you are on Linux; run the following command:

docker run -it --rm -v /local_dir:/image_root_dir/mount_dir image_name

Here is some detail:

-it: run with an interactive terminal
--rm: remove the container after it exits
-v: bind mount your local directory into the container

Since the mount will 'cover' (shadow) whatever the image already has at the target path, you should always mount onto a new directory under your image's root directory.
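For example (the paths and image name here are hypothetical), if the image already ships files in /data, the listing shows only the host directory's contents while the mount is in place:

docker run --rm -v /local_dir:/data image_name ls /data
# prints the contents of /local_dir from the host;
# anything the image originally had in /data stays hidden for the container's lifetime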

Visit "Use bind mounts" in the Docker documentation for more information.

P.S.: run

sudo -s

and type your password before you run docker. That saves you a lot of time, since you don't have to type sudo in front of every docker command.
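For example:

sudo -s     # enter your password once; you now have a root shell
docker ps   # no sudo prefix needed from here on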

P.S. 2:

Suppose you have an image with a long name and the image ID 5ed6274db6ce. You can refer to it by at least the first three characters of the ID, or more:

docker run [options] 5ed

If more images share the same first three characters, use four or more. For example, suppose you have the following two images:

REPOSITORY                         IMAGE ID
My_Image_with_very_long_name       5ed6274db6ce
My_Image_with_very_long_name2      5edc819f315e

You can simply run

docker run [options] 5ed6

to run the image My_Image_with_very_long_name.

Upvotes: 0
