Ilproff_77

Reputation: 206

AWS Lambda and S3 with R and aws.s3 package

I'm trying to access an AWS S3 bucket from a Lambda function with a custom Docker image. I've set up a script to get a list of files with this code:

Sys.setenv("AWS_ACCESS_KEY_ID" = aws_s3_key,
           "AWS_SECRET_ACCESS_KEY" = aws_s3_secret,
           "AWS_DEFAULT_REGION" = "eu-west-2")

aws_s3_bucket_name <- "my-bucket"

f <- aws.s3::get_bucket(bucket = aws_s3_bucket_name, url_style = "path")

I have no problem running the script inside Docker locally (Fedora), but as soon as I call the same function in Lambda (using the very same image I configured on my system), I get this error:

..$ Code     : chr "InvalidToken"
..$ Message  : chr "The provided token is malformed or otherwise invalid."
..$ Token-0  : chr "IQoJb3JpZ/BoNzJXuwChJ2R5p"| __truncated__
..$ RequestId: chr "CP2CBX9PA95G3X2X"
..$ HostId   : chr "pTlkIhvhCTLPTiXwSBHL/qq7Y="
- attr(*, "headers")=List of 8
..$ x-amz-bucket-region: chr "eu-west-2"
..$ x-amz-request-id   : chr "CP2CBX9PA95G3X2X"
..$ x-amz-id-2         : chr "pTlkIhvhCTLPTiXwW4vMexnz/qq7Y="
..$ content-type       : chr "application/xml"
..$ transfer-encoding  : chr "chunked"
..$ date               : chr "Mon, 14 Jun 2021 22:24:17 GMT"
..$ server             : chr "AmazonS3"
..$ connection         : chr "close"
...

Maybe it's just my fault as I'm quite new to AWS Lambda, but I still don't get why the script works in my local Docker.

I suspect the problem is related to encoding, as the error I get seems to include some "\n" and the message refers to "Bad Request (HTTP 400)":

- attr(*, "request_canonical")= chr "GET\n/xxxxxxxx-x-xxx-counter/\n\nhost:s3-eu-west-2.amazonaws.com"| __truncated__
- attr(*, "request_canonical")= chr "GET\n/xxxxxxxx-x-xxx-/\n\nhost:s3-eu-west-2.amazonaws.com"| __truncated__

- attr(*, "request_string_to_sign")= chr "AWS4-HMAC-SHA256\n" __truncated__

ERROR [2021-06-15 10:55:06] Error in parse_aws_s3_response(r, Sig, verbose = verbose): Bad Request (HTTP 400).

Upvotes: 3

Views: 567

Answers (2)

Fahad Khanzada

Reputation: 11

As far as I know, when you run a custom runtime (R, in your case) in a Lambda environment, you do not need to specify AWS credentials at all. Just allow Lambda to interact with S3 through appropriate IAM policies.

So, instead of using your own credentials, just attach an S3 access policy to the Lambda execution role (or Lambda permissions) and you will be good to go.
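
As an illustration (not from the original answer): once the execution role is attached, Lambda injects temporary credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN) into the environment, and aws.s3 picks them up on its own, so the Sys.setenv() block from the question can simply be dropped. A minimal sketch, reusing the bucket name from the question:

library(aws.s3)

# No Sys.setenv() needed: aws.s3 reads the credentials that the Lambda
# execution role provides via environment variables, session token included.
f <- get_bucket(bucket = "my-bucket",
                region = "eu-west-2",
                url_style = "path")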

Upvotes: 1

agelosnm

Reputation: 173

What I did was to set aws_access_key_id & aws_secret_access_key inside the Dockerfile using the commands below:

RUN aws configure set aws_access_key_id <<aws_access_key_id>>
RUN aws configure set aws_secret_access_key <<aws_secret_access_key>>

This way the AWS CLI credentials are stored inside the container, so it can write to S3 (and, I suppose, to other services as well). If you only set them inside your .R script, they are stored in that script and the container itself is not aware of any AWS credentials.

EDIT: A more "elegant" and secure way than the above approach would be to set build arguments & environment variables inside the Dockerfile, as per below:

ARG AWS_KEY
ARG AWS_SECRET

ENV AWS_KEY ${AWS_KEY}
ENV AWS_SECRET ${AWS_SECRET}

Then you can run "docker build" with build arguments specifying the keys:

docker build --build-arg AWS_KEY=xxxx --build-arg AWS_SECRET=yyyy .
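
The R script can then read those variables at runtime instead of hardcoding the keys. A minimal sketch (the AWS_KEY/AWS_SECRET names match the ENV lines above; the rest mirrors the code from the question):

# Map the build-time variables onto the names aws.s3 expects.
Sys.setenv("AWS_ACCESS_KEY_ID"     = Sys.getenv("AWS_KEY"),
           "AWS_SECRET_ACCESS_KEY" = Sys.getenv("AWS_SECRET"),
           "AWS_DEFAULT_REGION"    = "eu-west-2")

f <- aws.s3::get_bucket(bucket = "my-bucket", url_style = "path")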

Upvotes: 1
