Exotic Cut 5276

Reputation: 255

How can I pass the path of the AWS CLI on my local machine to a Jenkins image running in a Docker container?

I am trying to pass the path of the AWS CLI on my host machine to Jenkins, which will run in a Docker container. I downloaded the Jenkins image and I am trying to use AWS CLI commands in a Jenkins pipeline to build a Node.js application and then deploy it to an S3 bucket. For that I need the AWS CLI in the Jenkins image that I am running through Docker. As far as I know, once you run any image in a Docker container it becomes a separate environment in itself, so Jenkins will not know that I have the AWS CLI installed on my Mac unless I pass it the path of the CLI, which is what I am trying to do with the

-v $(which aws):$(which aws)

command.

docker run -d -p 8080:8080 -p 50000:50000 -v ~/jenkins_directory:/var/jenkins_home -v $(which aws):$(which aws) jenkins/jenkins:2.190.2

However, after I run this command, Docker shows the following error response:

docker: Error response from daemon: Mounts denied: The path /usr/local/bin/aws is not shared from OS X and is not known to Docker.

According to some of the answers I found on Stack Overflow, I then tried to add the path of the AWS CLI in Docker's File Sharing panel. When I added the path, Docker showed:

The path /usr is reserved by Docker however it may be possible to export specific subdirectories.

I have not been able to get around this. I tried adding the whole path

/usr/local/bin/aws

in Docker's File Sharing panel, but it still shows the same problem. Does anyone have any idea what else we can do to pass the path of the AWS CLI on my local machine to the Jenkins image that I am trying to run in a Docker container?

Upvotes: 0

Views: 267

Answers (1)

Adiii

Reputation: 59946

You need to install the AWS CLI in your Docker image; then you will be able to use it inside your container.

FROM jenkins/jenkins:2.190.2
# Switch to root to install packages
USER root
RUN apt-get update && \
    apt-get install -y awscli
# Drop back to the unprivileged jenkins user
USER jenkins
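
For example, you can build and run the extended image roughly like this (the image tag, host directory, and container name here are just placeholders for your own values):

# Build an image with the AWS CLI baked in, from the Dockerfile above
docker build -t jenkins-aws:2.190.2 .

# Run it exactly like the stock image, just with the new tag
docker run -d -p 8080:8080 -p 50000:50000 -v ~/jenkins_directory:/var/jenkins_home jenkins-aws:2.190.2

# Quick check that the CLI is now available inside the container
docker exec <container-name-or-id> aws --version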

-v (a volume) is not designed to bind a host executable; volumes are designed for files and folders used as persistent storage. If you need an executable, you need to add it to your Docker image.

To be able to save (persist) data and also to share data between containers, Docker came up with the concept of volumes. Quite simply, volumes are directories (or files) that are outside of the default Union File System and exist as normal directories and files on the host filesystem.

understanding-volumes-docker
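
As a rough example of what volumes are meant for, a named volume keeps Jenkins' data outside the container without touching any host binaries (the volume name is just an example):

# Create a named volume for Jenkins' persistent data
docker volume create jenkins_home

# Mount the named volume instead of bind-mounting a host executable
docker run -d -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:2.190.2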

For this part of the question:

I am trying to use AWS CLI commands in a Jenkins pipeline to build a Node.js application and then deploy it to an S3 bucket.

If you are inside AWS, you can assign an IAM role to the Jenkins server, and then you will not need to mount any host credentials.

Or, if you are outside AWS, you just need to bind-mount the host's AWS config and credentials:

docker run -d -p 8080:8080 -p 50000:50000 -v ~/jenkins_directory:/var/jenkins_home -v $HOME/.aws/:/var/jenkins_home/.aws/ jenkins/jenkins:2.190.2
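
With the credentials mounted (or an IAM role attached) and the AWS CLI installed in the image, a pipeline stage can call the CLI directly. Here is a minimal Jenkinsfile sketch, assuming Node.js/npm are available on the agent and using a made-up bucket name and build output directory that you would replace with your own:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Build the Node.js application (assumes the usual npm scripts)
                sh 'npm ci && npm run build'
            }
        }
        stage('Deploy') {
            steps {
                // Sync the build output to an example S3 bucket
                sh 'aws s3 sync build/ s3://my-example-bucket --delete'
            }
        }
    }
}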

Upvotes: 1
