Franz Ebner

Reputation: 5096

Jenkins: Push to ECR from slave

I'm building a Docker container with Spotify's Maven plugin and trying to push it to ECR afterwards.

This happens using the CloudBees Docker Build and Publish plugin, after managing to log in with the Amazon ECR plugin.

This works like a charm on the Jenkins master, but on the slave I get:

no basic auth credentials

Build step 'Docker Build and Publish' marked build as failure

Is pushing from slaves out of scope for the ECR Plugin or did I miss something?

Upvotes: 3

Views: 4306

Answers (3)

Chen A.

Reputation: 11280

The other answers here didn't work for my pipeline. I found this solution to work, and it is also clean:

withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', accessKeyVariable: 'AWS_ACCESS_KEY_ID', credentialsId: 'myCreds', secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
    sh '''
    aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin ${REGISTRY}
    ..
    '''
}

This solution doesn't require AWS CLI v2.
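
For context, here is a minimal sketch of how that block might sit in a full scripted pipeline. The agent label, region, registry URL and image name are placeholders, and it assumes the credentials ID 'myCreds' exists and the CloudBees AWS Credentials plugin is installed:

node('docker-slave') {                      // placeholder agent label
    // placeholder region and registry; replace with your own values
    withEnv(['AWS_REGION=eu-west-1',
             'REGISTRY=123456789012.dkr.ecr.eu-west-1.amazonaws.com']) {
        withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',
                          credentialsId: 'myCreds',
                          accessKeyVariable: 'AWS_ACCESS_KEY_ID',
                          secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
            sh '''
                aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin ${REGISTRY}
                docker build -t ${REGISTRY}/my-image:latest .
                docker push ${REGISTRY}/my-image:latest
            '''
        }
    }
}

Because the login runs as a shell step on whichever node executes the block, the same pipeline works on the master and on slaves.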

Upvotes: 1

Tom Manterfield

Reputation: 7053

You might be falling foul of the bug reported in the ECR plugin here: https://issues.jenkins-ci.org/browse/JENKINS-44143

Various people in that thread are describing slightly different symptoms, but the common theme is that docker was failing to use the auth details that had been correctly generated by the ECR plugin.

I found that in my case this was because the ECR plugin was saving to one Docker config file and the docker-commons plugin (which does the actual work of calling the Docker API) was reading from another. Docker changed its config format and location in an earlier version, which caused the conflict.

The plugin author offers a workaround, which is essentially to just nuke both config files first:

node {
    // clean up current user docker credentials
    sh 'rm ~/.dockercfg || true'
    sh 'rm ~/.docker/config.json || true'

    // configure registry
    docker.withRegistry('https://ID.ecr.eu-west-1.amazonaws.com', 'ecr:eu-west-1:86c8f5ec-1ce1-4e94-80c2-18e23bbd724a') {

        // build image
        def customImage = docker.build("my-image:${env.BUILD_ID}")

        // push image
        customImage.push()
    }
}

You might want to try that purely as a debugging step and quick fix (if it works, you can be confident this bug is your issue).

My permanent fix was simply to create the new-style config.json manually with a sensible default, and then set the DOCKER_CONFIG environment variable to point to it.

I did this in the Dockerfile that creates my Jenkins instance, like so:

RUN mkdir -p $JENKINS_HOME/.docker/ && \
    echo '{"auths":{}}' > $JENKINS_HOME/.docker/config.json
ENV DOCKER_CONFIG $JENKINS_HOME/.docker
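
If rebuilding the Jenkins image is not an option (for example on throwaway slaves), a rough slave-side sketch of the same idea is to point DOCKER_CONFIG at a workspace-local directory before any Docker steps run. The node label is a placeholder, and this relies on the docker CLI honouring the DOCKER_CONFIG environment variable for sh-driven commands:

node('docker-slave') {                       // placeholder slave label
    // keep Docker's auth in the workspace so login and push read the same config file
    withEnv(["DOCKER_CONFIG=${env.WORKSPACE}/.docker"]) {
        sh 'mkdir -p "$DOCKER_CONFIG" && echo \'{"auths":{}}\' > "$DOCKER_CONFIG/config.json"'
        // any docker login / build / push run from sh steps here now uses
        // $WORKSPACE/.docker/config.json instead of ~/.dockercfg
        sh 'docker info'
    }
}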

Upvotes: 0

Guel135

Reputation: 788

You don't have credentials on the slave; that is the problem. I fix this by injecting the credentials into every pipeline that runs on the on-demand slaves.

withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', accessKeyVariable: 'AWS_ACCESS_KEY_ID', credentialsId: 'AWS_EC2_key', secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
    sh "aws configure set aws_access_key_id ${AWS_ACCESS_KEY_ID}"
    sh "aws configure set aws_secret_access_key ${AWS_SECRET_ACCESS_KEY}"
    sh '$(aws ecr get-login --no-include-email --region eu-central-1)'

    sh "docker push ${your_ec2_repo}/${di_name}:image_name${newVersion}"
}

Of course, you need to have the AWS CLI installed on the slave.
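
If you are not sure whether a given on-demand slave has it, a quick sanity check at the start of the pipeline fails fast with a clear message; the node label here is just an example:

node('on-demand-slave') {                    // placeholder label
    // abort early if the AWS CLI is missing on this slave
    sh 'command -v aws >/dev/null 2>&1 || { echo "aws cli is not installed on this slave"; exit 1; }'
    sh 'aws --version'
}

Note that aws ecr get-login only exists in AWS CLI v1; on slaves with CLI v2 you would use the get-login-password form shown in the other answer.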

Upvotes: 0
