Shawn

Reputation: 11391

Why is my access denied on s3 (using the aws-sdk for Node.js)?

I'm trying to read an existing file from my s3 bucket, but I keep getting "Access Denied" with no explanation or instructions on what to do about it. Here is the code I am using:

'use strict'

var AWS = require('aws-sdk')
const options = {
  apiVersion: '2006-03-01',
  params: {
    Bucket: process.env['IMAGINATOR_BUCKET']
  },
  accessKeyId: process.env['IMAGINATOR_AWS_ACCESS_KEY_ID'],
  secretAccessKey: process.env['IMAGINATOR_AWS_SECRET_ACCESS_KEY'],
  signatureVersion: 'v4'
}
console.log('options', options)
var s3 = new AWS.S3(options)

module.exports = exports = {
  get (name, cb) {
    const params = {
      Key: name + '.json'
    }
    console.log('get params', params)
    return s3.getObject(params, cb)
  },
  set (name, body, cb) {
    const params = {
      Key: name + '.json',
      Body: body
    }
    console.log('set params', params)
    return s3.putObject(params, cb)
  }
}

And this is what I'm getting as output when using the get method and logging the error provided in the callback (with sensitive information censored out):

options { apiVersion: '2006-03-01',
  params: { Bucket: CENSORED_BUT_CORRECT },
  accessKeyId: CENSORED_BUT_CORRECT,
  secretAccessKey: CENSORED_BUT_CORRECT,
  signatureVersion: 'v4' }
get params { Key: 'whitelist.json' }
err { [AccessDenied: Access Denied]
  message: 'Access Denied',
  code: 'AccessDenied',
  region: null,
  time: Wed Sep 21 2016 11:17:50 GMT-0400 (EDT),
  requestId: CENSORED,
  extendedRequestId: CENSORED,
  cfId: undefined,
  statusCode: 403,
  retryable: false,
  retryDelay: 20.084538962692022 }
/Users/shawn/git/vigour-io/imaginate/node_modules/aws-sdk/lib/request.js:31
            throw err;
            ^

AccessDenied: Access Denied
    at Request.extractError (/Users/shawn/git/vigour-io/imaginate/node_modules/aws-sdk/lib/services/s3.js:538:35)
    at Request.callListeners (/Users/shawn/git/vigour-io/imaginate/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
    at Request.emit (/Users/shawn/git/vigour-io/imaginate/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
    at Request.emit (/Users/shawn/git/vigour-io/imaginate/node_modules/aws-sdk/lib/request.js:668:14)
    at Request.transition (/Users/shawn/git/vigour-io/imaginate/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/Users/shawn/git/vigour-io/imaginate/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /Users/shawn/git/vigour-io/imaginate/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/Users/shawn/git/vigour-io/imaginate/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/Users/shawn/git/vigour-io/imaginate/node_modules/aws-sdk/lib/request.js:670:12)
    at Request.callListeners (/Users/shawn/git/vigour-io/imaginate/node_modules/aws-sdk/lib/sequential_executor.js:115:18)

Now I'm not sure what to do, because I think I'm doing things correctly according to the docs, but it's not working and the error message doesn't say why my access is denied. Any idea what the next step should be to get this working?

Upvotes: 47

Views: 99398

Answers (10)

user3701026

Reputation: 1

I was facing the same issue; it turned out to be an ACL permissions problem. Fixing the ACL solved it!

Upvotes: 0

Kevin franklin

Reputation: 1

In your IAM user settings, if you have attached AmazonS3FullAccess, make sure you have not attached any other S3 policy containing an explicit Deny, since a Deny overrides the Allow from AmazonS3FullAccess. Also check whether you have received any email from Amazon.

Upvotes: 0

Nacho

Reputation: 388

I had the same error, and it was because the file I was trying to access was not in the bucket. So make sure you are using the right bucket name and that the name of the file you are looking for exactly matches the one that exists in that bucket. https://www.diffchecker.com/diff is a good tool for spotting differences between strings.
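To illustrate the point (names here are made up): S3 keys are compared byte-for-byte, so a case difference or stray whitespace makes it a different object.

```javascript
// Illustration only (placeholder names): S3 key comparison is exact,
// so case or trailing whitespace means a different key.
const requestedKey = 'whitelist.json'
const storedKey = 'Whitelist.json '  // what actually sits in the bucket

console.log(requestedKey === storedKey)                       // false
console.log(requestedKey === storedKey.trim().toLowerCase())  // true
```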

Upvotes: 0

Firoj Siddiki

Reputation: 1941

In my case, I updated the variable holding the S3 bucket name in the .env file but didn't update the variable name in the program, so the program received undefined as the bucket name, which caused it to throw the access denied error.

So make sure you are using the correct bucket name, or the correct variable name if you are storing the bucket name in a variable.
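One way to catch this early is a small guard (a hypothetical helper, not from the question's code) that fails fast when a required environment variable is missing, instead of letting the SDK send an undefined Bucket and come back with an unexplained 403:

```javascript
// Hypothetical helper: throw immediately if a required env var is unset,
// rather than passing undefined through to the SDK.
function requireEnv (name) {
  const value = process.env[name]
  if (!value) {
    throw new Error('Missing required environment variable: ' + name)
  }
  return value
}

// Usage: const bucket = requireEnv('IMAGINATOR_BUCKET')
```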

Upvotes: 2

Shashwat Gupta

Reputation: 5264

Steps

1: Click on Users in IAM (in the AWS console)
2: Click on the Permissions tab
3: Click on Add permissions, then click on Add group
4: Search for AmazonS3FullAccess in the search bar
5: Select AmazonS3FullAccess, type any group name, then click Create
6: Perform the action through your API again
7: Done
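The console steps above can also be done programmatically. This is only a sketch: it assumes aws-sdk v2 is installed and credentials are configured, the user name is a placeholder, and the actual call is commented out since it needs a real IAM user.

```javascript
// Sketch (assumes aws-sdk v2): attach the managed AmazonS3FullAccess
// policy directly to a user. 'my-s3-user' is a placeholder.
const attachParams = {
  UserName: 'my-s3-user', // placeholder IAM user name
  PolicyArn: 'arn:aws:iam::aws:policy/AmazonS3FullAccess'
}
// var AWS = require('aws-sdk')
// new AWS.IAM().attachUserPolicy(attachParams, function (err) {
//   if (err) throw err
// })
```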

Upvotes: 8

Anderson Clayton

Reputation: 191

FullAccess in your policy is not required. You can try something like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:Put*",
                "s3:Get*",
                "s3:List*",
                "s3:Delete*"
            ],
            "Resource": [
                "arn:aws:s3:::bucket/*",
                "arn:aws:s3:::bucket"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        }
    ]
}

Upvotes: 19

valdeci

Reputation: 15237

These errors can occur when the object you are trying to read does not exist. From what I understood, the AWS errors are not very clear in these situations.

Validate that your key and bucket are correct and that you are sending the correct params to the API method.

I have already run into this problem twice:

  • when I swapped the key param and the bucket param while trying to read an S3 object with the getObject() method.
  • when I tried to copy a file to a location that did not exist using the copyObject() method.
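A sketch of the first failure mode, with placeholder names: swapping Bucket and Key asks S3 for an object that does not exist, and S3 reports that as AccessDenied rather than NoSuchKey when the caller lacks the s3:ListBucket permission.

```javascript
// Placeholder names: the swapped params request a nonexistent object,
// which S3 surfaces as AccessDenied (403) instead of NoSuchKey (404)
// when the caller has no s3:ListBucket permission on the bucket.
const swapped = { Bucket: 'whitelist.json', Key: 'my-bucket' } // wrong
const correct = { Bucket: 'my-bucket', Key: 'whitelist.json' }
// s3.getObject(correct, callback)
```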

Upvotes: 9

porkbrain

Reputation: 792

This can also happen if you're trying to set the ACL to "public-read" but the bucket is blocking public access, for example if you upload static assets to a misconfigured S3 bucket. You can change this in your bucket settings.
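For example (bucket and key are placeholders): a putObject request asking for a 'public-read' ACL on a bucket whose "Block public access" settings forbid public ACLs is rejected with AccessDenied. The call itself is commented out since it needs a real bucket and credentials.

```javascript
// Hypothetical upload params: the 'public-read' ACL is denied if the
// bucket's "Block public access" settings forbid public ACLs.
const uploadParams = {
  Bucket: 'my-static-assets', // placeholder bucket name
  Key: 'logo.png',
  Body: Buffer.from('fake image bytes'),
  ACL: 'public-read' // rejected with AccessDenied if public ACLs are blocked
}
// s3.putObject(uploadParams, callback)
```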

Upvotes: 20

nbs

Reputation: 319

"code":"AccessDenied","region":null,"time":"2020-05-24T05:20:56.219Z","requestId": ... 

I applied the policy below in the S3 AWS console (Bucket policy editor under the Permissions tab) to get rid of the above error:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<IAM-user-ID>:user/testuser"
            },
            "Action": [
                "s3:ListBucket",
                "s3:ListBucketVersions",
                "s3:GetBucketLocation",
                "s3:Get*",
                "s3:Put*"
            ],
            "Resource": "arn:aws:s3:::srcbucket"
        }
    ]
}

Upvotes: 3

Shawn

Reputation: 11391

The problem was that my new IAM user didn't have a policy attached to it. I assigned it the AmazonS3FullAccess policy and now it works.

As pointed out in the comments, a more restrictive policy would be much safer.

Upvotes: 44
