Katana24

Reputation: 8959

Unable to locate credentials - Gitlab Pipeline for S3

I'm deploying the front-end of my website to amazon s3 via Gitlab pipelines. My previous deployments have worked successfully but the most recent deployments do not. Here's the error:

Completed 12.3 MiB/20.2 MiB (0 Bytes/s) with 1 file(s) remaining
upload failed: dist/vendor.bundle.js.map to s3://<my-s3-bucket-name>/vendor.bundle.js.map Unable to locate credentials

Under my secret variables I have defined four. They are S3 credential variables (AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY) for two different buckets. One pair is for the testing branch and the other is for the production branch.

Note - the production environment variables are protected and the other variables are not. Here's the deploy script that I run:

#!/bin/bash
#upload files
aws s3 cp ./dist s3://my-url-$1 --recursive --acl public-read

So why am I getting this credentials error? Surely the job should just pick up the (unprotected) environment variables automatically and deploy. Do I need to define the variables in the job and refer to them explicitly?
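One quick way to confirm whether the job actually sees the variables is a guard at the top of the deploy script. This is just a sketch; the names are the standard AWS_* variable names from the CI settings described above:

```shell
#!/bin/sh
# Sketch: fail fast if the AWS credential variables are missing from the
# job's environment (names assume the standard AWS_* variables).
check_aws_creds() {
  if [ -z "${AWS_ACCESS_KEY_ID:-}" ] || [ -z "${AWS_SECRET_ACCESS_KEY:-}" ]; then
    echo "missing AWS credentials"
    return 1
  fi
  echo "credentials present"
}
```

Calling `check_aws_creds || exit 1` before the `aws s3 cp` line makes the failure mode explicit in the job log instead of surfacing mid-upload.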

Upvotes: 18

Views: 11806

Answers (4)

CromulentCoder

Reputation: 63

For anyone still struggling after reading the other answers: make sure your branch is protected if you are trying to read protected variables. GitLab doesn't expose those variables to non-protected branches, which can cause this error. https://docs.gitlab.com/ee/ci/variables/index.html#add-a-cicd-variable-to-a-project

To protect your branch, go to Settings -> Repository -> Protected Branches and mark the required branch as protected. (The exact steps may change in future GitLab versions.)

Upvotes: 4

Katana24

Reputation: 8959

The actual problem was a naming collision between the variables: for both branches they were called AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. However, simply renaming them wasn't enough, as the pipeline still didn't pick them up.

I printed the values to the logs to determine which credentials each branch was picking up, and found that neither was being used. The solution was to give each branch's credentials a unique name (e.g. PRODUCTION_ACCESS_KEY_ID and TESTING_ACCESS_KEY_ID) and refer to them explicitly in the build script:

deploy_app_production:
  environment: 
    name: production
    url: <url>
  before_script:
    - echo "Installing ruby & dpl"
    - apt-get update && apt-get install -y ruby-full
    - gem install dpl
  stage: deploy
  tags:
    - nv1
  script:
    - echo "Deploying to production"
    - sh deploy.sh production $PRODUCTION_ACCESS_KEY_ID $PRODUCTION_SECRET_ACCESS_KEY
  only:
    - master

And in the deploy.sh I referred to the passed in variables (though I did end up switching to dpl):

dpl --provider=s3 --access-key-id=$2 --secret-access-key=$3 --bucket=<my-bucket-name>-$1 --region=eu-west-1 --acl=public_read --local-dir=./dist --skip_cleanup=true
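The full deploy.sh isn't shown above; a minimal sketch consistent with the `sh deploy.sh production $PRODUCTION_ACCESS_KEY_ID $PRODUCTION_SECRET_ACCESS_KEY` invocation in the job (with `<my-bucket-name>` left as the same placeholder used in the answer) might look like:

```shell
#!/bin/sh
# deploy.sh sketch: $1 = environment suffix, $2 = access key id,
# $3 = secret access key, as passed from the .gitlab-ci.yml job.
deploy() {
  env_suffix="$1"
  key_id="$2"
  secret="$3"
  if [ -z "$env_suffix" ] || [ -z "$key_id" ] || [ -z "$secret" ]; then
    echo "usage: deploy.sh <environment> <access-key-id> <secret-access-key>" >&2
    return 1
  fi
  # <my-bucket-name> is a placeholder, as in the answer above
  dpl --provider=s3 \
      --access-key-id="$key_id" \
      --secret-access-key="$secret" \
      --bucket="<my-bucket-name>-$env_suffix" \
      --region=eu-west-1 \
      --acl=public_read \
      --local-dir=./dist \
      --skip_cleanup=true
}
# the real deploy.sh would finish with:  deploy "$@"
```

Validating the positional arguments up front means a missing or misnamed CI variable fails with a clear usage message rather than an opaque credentials error from the provider.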

Upvotes: 4

Rotem jackoby

Reputation: 22098

(I've encountered this issue many times - adding another answer for people who get the same error for other reasons.)

A quick checklist.

Go to Settings -> CI/CD -> Variables and check:

  1. That both the AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY environment variables exist.
  2. That both names are spelled correctly.
  3. Whether they are marked as protected - protected variables are only exposed to pipelines running on protected branches (like master).

If error still occurs:

  1. Make sure the access keys still exist and are active on your AWS account.
  2. Delete the current environment variables and replace them with newly generated access keys, making sure AWS_SECRET_ACCESS_KEY doesn't contain any special characters (these can lead to strange errors).
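For the special-characters point, a small helper can flag a secret worth regenerating. This is only a sketch: AWS secret keys legitimately contain characters like `/`, `+` and `=`, which is exactly what can trip up some CI variable handling.

```shell
#!/bin/sh
# Sketch: report whether a secret contains any non-alphanumeric character,
# i.e. something that some CI variable handling might mangle. Regenerating
# keys until you get a "clean" one sidesteps the problem.
secret_has_special_chars() {
  case "$1" in
    *[!A-Za-z0-9]*) return 0 ;;   # contains something outside [A-Za-z0-9]
    *) return 1 ;;
  esac
}
```

Example: `secret_has_special_chars "$AWS_SECRET_ACCESS_KEY" && echo "consider regenerating this key"`.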

Upvotes: 39

Dániel Kis-Nagy

Reputation: 2505

Have you tried running the Docker image you use in your GitLab pipelines script locally, in interactive mode?

That way, you could verify that the environment variables not being picked up are indeed the problem. (I.e. if you set the same environment variables locally, and it works, then yes, that's the problem.)

I suspect the credentials may be picked up just fine, but lack some of the permissions required for the requested operation. I know the error message says otherwise, but S3 error messages tend to be quite misleading when it comes to permission problems.

Upvotes: 0
