
Reputation: 3517

Crontab cannot find AWS Credentials - linuxbox EC2

I've created a Linux box that runs a very simple make-bucket command: aws s3 mb s3://bucket. Running this from the prompt works fine.

I've run aws configure both as the user I'm logged in as and with sudo. The details are definitely correct, as otherwise the command above wouldn't have created the bucket.

The error message I'm getting from cron is: make_bucket failed: s3://cronbucket/ Unable to locate credentials

I've tried various things so far in the crontab to tell it where the credentials are; some of this is an amalgamation of other solutions, which may itself be a cause of the issue.

My crontab looks like:

AWS_CONFIG_FILE="/home/ec2-user/.aws/config"
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
0 0 * * * /usr/bin/env bash /opt/foo.sh &>> /tmp/foo.log

* * * * * /usr/bin/uptime > /tmp/uptime

* * * * * /bin/scripts/script.sh >> /bin/scripts/cronlogs/cronscript.log 2>&1

Initially I just had the two jobs that make the bucket and write the uptime file (as a sanity check); the rest of the crontab is an accumulation of solutions from other posts that do not seem to be working.
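
For context, /opt/foo.sh is essentially just the make-bucket call. One of the variants I've tried exports the credential file locations inside the script itself, along these lines (paths and bucket name illustrative):

    #!/bin/bash
    # /opt/foo.sh - point the AWS CLI at the credential files, then make the bucket
    export AWS_CONFIG_FILE=/home/ec2-user/.aws/config
    export AWS_SHARED_CREDENTIALS_FILE=/home/ec2-user/.aws/credentials
    # full path, since cron's PATH is minimal
    /usr/bin/aws s3 mb s3://cronbucket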

Any advice is much appreciated, thank you.

Upvotes: 1

Views: 2471

Answers (3)

sergpank

Reputation: 988

In my case it was much trickier, because I was running a cron job in a Fargate container: I could access S3 from the shell, but it did not work from cron.

  1. In the Dockerfile, configure the cron job:

     RUN echo -e \
     "SHELL=/bin/bash\n\
     BASH_ENV=/app/cron/container.env\n\n\
     30 0 * * * /app/cron/log_backup.sh >> /app/cron/cron.log 2>&1" | crontab -
    
  2. In the entrypoint script, fetch the AWS credentials (exporting them so that child processes such as the AWS CLI can see them; jq -r emits the raw value without the surrounding quotes):

     creds=$(curl -s 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)
     export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r '.AccessKeyId')
     export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r '.SecretAccessKey')
     export AWS_SESSION_TOKEN=$(echo "$creds" | jq -r '.Token')
    
  3. After that, in the same entrypoint script, create the container.env file, as @Tailor Devendra suggested in their answer:

    declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' > /app/cron/container.env
    

I can't say that I am happy with this solution, but it works.
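
Putting steps 2 and 3 together, the entrypoint script ends up roughly like this (starting cron with cron -f at the end is an assumption; use whatever your base image expects):

    #!/bin/bash
    # Fetch the temporary task-role credentials from the ECS credentials endpoint
    creds=$(curl -s 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)
    export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r '.AccessKeyId')
    export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r '.SecretAccessKey')
    export AWS_SESSION_TOKEN=$(echo "$creds" | jq -r '.Token')

    # Dump the environment (minus bash's read-only variables) for cron to source via BASH_ENV
    declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' > /app/cron/container.env

    # Keep the container alive by running cron in the foreground
    cron -f

Note that the task-role credentials are temporary, so a long-lived container will eventually need to refresh them.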

Upvotes: 1

Tailor Devendra

Reputation: 449

If you have an IAM role attached as the ECS Fargate task role, then this solution will work. Add the following line to entrypoint.sh:

declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' > /container.env

Then add these lines to the crontab or cron file:

SHELL=/bin/bash
BASH_ENV=/container.env

It worked for me.
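
For illustration, a minimal entrypoint.sh would look something like this (starting cron with cron -f is an assumption; adjust for your image):

    #!/bin/bash
    # Snapshot the container environment, including AWS_CONTAINER_CREDENTIALS_RELATIVE_URI,
    # which the AWS CLI/SDK uses to fetch the task role credentials on demand
    declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' > /container.env

    # Run cron in the foreground so the container stays up
    cron -f

Because BASH_ENV makes every cron job's shell source /container.env first, the AWS CLI inside the job sees AWS_CONTAINER_CREDENTIALS_RELATIVE_URI and can fetch the role credentials itself; no access keys have to be copied around.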

Upvotes: 2

Shimon Tolts

Reputation: 1692

The issue is that cron doesn't get your environment. There are several ways of approaching this: either run a bash script that sources your profile, or, as a nice simple solution, source it directly in the crontab entry (change .profile to whatever you are using):

0 5 * * * . $HOME/.profile; /path/to/command/to/run
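
If the crontab line gets unwieldy, the same idea works as a small wrapper script (path and bucket name below are just examples):

    #!/bin/bash
    # /home/ec2-user/make-bucket.sh (example path; make it executable with chmod +x)
    # Source the profile so the AWS CLI can find your credentials and PATH
    . "$HOME/.profile"
    aws s3 mb s3://cronbucket

with a crontab entry such as:

    0 5 * * * /home/ec2-user/make-bucket.sh >> /tmp/make-bucket.log 2>&1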

Check out this thread.

Upvotes: 4
