JMV12

Reputation: 1045

EKS Pod S3 Access Denied

I'm trying to create an EKS pod to use as a service for MLflow. The issue I'm having is that I'm unable to connect to S3 to store the MLflow run artifacts. I have tried connecting to the pod with kubectl exec -it <pod_name> -- /bin/bash and setting the AWS credentials there manually. When I do that, I'm able to ls the S3 bucket.
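Roughly, what I did inside the pod was the following (the pod name, bucket name, and the keys are placeholders, and exporting the keys is just one way I set the credentials):

kubectl exec -it <pod_name> -- /bin/bash

# inside the pod: set credentials by hand, then list the bucket
export AWS_ACCESS_KEY_ID=<access_key_id>
export AWS_SECRET_ACCESS_KEY=<secret_access_key>
aws s3 ls s3://<bucket_name>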

But when I attempt to save an MLflow artifact to the same S3 location, I get the following error:

An error occurred (AccessDenied) when calling the AssumeRoleWithWebIdentity operation: Not authorized to perform sts:AssumeRoleWithWebIdentity

What would be causing this? Is there an IAM role or policy that needs to be set on the EKS pod, or something along those lines?

Upvotes: 3

Views: 14106

Answers (1)

Aposhian

Reputation: 869

Yes. Your code running on the pod will need to have the correct IAM permissions to access S3 and perform the API calls that you need.

There are multiple ways you can achieve this.

Option 1: Attaching an IAM policy to the node role(s)

EKS nodes are actually EC2 instances, so you can attach the proper IAM policy to the IAM role that your nodes use. (This does not apply if you are using AWS Fargate; in that case see Option 3.) The downside of this approach is that it grants those permissions to every pod running on those nodes. If you want more granular control, see Option 2.

If you have set up your cluster with eksctl, then this is fairly straightforward.

This example fetches the IAM role name for each nodegroup in your cluster, and then attaches the AmazonS3FullAccess managed policy to each of them.

#!/bin/bash

# CLUSTER_NAME must be set to the name of your EKS cluster before running this.

# Each eksctl-managed nodegroup is backed by a CloudFormation stack.
for STACK_NAME in $(eksctl get nodegroup --cluster "$CLUSTER_NAME" -o json | jq -r '.[].StackName')
do
  # Find the IAM role that the nodegroup's instances use.
  ROLE_NAME=$(aws cloudformation describe-stack-resources --stack-name "$STACK_NAME" | jq -r '.StackResources[] | select(.ResourceType=="AWS::IAM::Role") | .PhysicalResourceId')

  # Attach the managed S3 policy to that role.
  aws iam attach-role-policy \
    --role-name "$ROLE_NAME" \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
done

That policy will then apply to all nodes created in each nodegroup. Note that if there are multiple nodegroups, the policy has to be attached to each nodegroup's role; the loop above handles that for every nodegroup in the cluster.
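If you want to confirm the attachment afterwards, a quick check (using the same ROLE_NAME resolved by the loop above) is:

aws iam list-attached-role-policies --role-name "$ROLE_NAME"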

Option 2: Assigning an IAM role to a service account

A less heavy-handed (but more involved) alternative is assigning IAM roles to Kubernetes service accounts. This lets you isolate the permissions of different pods.

This option is a little more complicated since it involves creating an OIDC identity provider on your cluster.
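With eksctl this can be scripted. A minimal sketch (the namespace, service account name, and policy ARN below are just examples, not values from the question):

# Create (or associate) an OIDC identity provider for the cluster.
eksctl utils associate-iam-oidc-provider --cluster "$CLUSTER_NAME" --approve

# Create a service account backed by an IAM role that has S3 access.
eksctl create iamserviceaccount \
  --cluster "$CLUSTER_NAME" \
  --namespace default \
  --name mlflow \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess \
  --approve

The pod then has to reference that service account (serviceAccountName: mlflow in its spec) so the injected web identity token is used for the AssumeRoleWithWebIdentity call.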

Option 3: (AWS Fargate only) Pod execution roles

If you are using AWS Fargate to run your pods, then you should be able to add permissions to your pod execution role.
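For example, assuming your Fargate pod execution role is named AmazonEKSFargatePodExecutionRole (substitute your actual role name), attaching the policy would look like:

aws iam attach-role-policy \
  --role-name AmazonEKSFargatePodExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess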

Upvotes: 11
