bukzor

Reputation: 38462

awscli s3: upload via stdin

I'd like to upload a tarball to s3 without incurring the cost of a temporary file (we're low on disk). With other utilities, I'd pipe the output of the tar command to the uploading command, but it doesn't Just Work with awscli:

$ echo ok | aws s3 cp /dev/stdin s3://my-bucket/test
upload failed: /dev/stdin to s3://my-bucket/test [Errno 29] Illegal seek

Is there any way to do what I want?

Upvotes: 8

Views: 8260

Answers (2)

Mike Cantrell

Reputation: 726

This should do the trick:

tar cvz aws-test | aws s3 cp - s3://bucket/aws-test.tar.gz

Edit: make sure you're using a newer version of the aws cli. I've verified that it works with 1.7.3 and it does NOT work with 1.2.9.
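
To check which version is installed:

aws --version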

Note: if the file exceeds 50 GB, you should pass in an estimate of the file size in bytes using --expected-size, e.g.:

tar cv /home/folder | gzip | aws s3 cp --expected-size=1234567890 - s3://bucket/folder.tar.gz

source: s3 cp reference docs
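
If you don't know the exact size up front, a rough upper bound is enough, since --expected-size is only used to size the multipart parts. A minimal sketch, assuming GNU du and the same /home/folder path as above:

# Apparent byte count of the uncompressed tree; the gzipped stream will be
# smaller, so this over-estimates, which is acceptable for --expected-size.
size=$(du -sb /home/folder | cut -f1)
tar cv /home/folder | gzip | aws s3 cp --expected-size="$size" - s3://bucket/folder.tar.gz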

Upvotes: 31

bukzor

Reputation: 38462

This simple script seems to do the job, but I'd much rather not invent my own tool for this.

#!/bin/bash
configfile="aws.ini"
file="/test"
bucket="my-bucket"
resource="/${bucket}${file}"
contentType="application/x-compressed-tar"

# Grab the config values (aws_access_key_id, aws_secret_access_key)
eval "$(grep -P "^\w+=[-'/\w]+$" "${configfile}")"

# Calculate the signature.
dateValue=$(date -R)
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
signature=$(
    echo -en "${stringToSign}" |
    openssl sha1 -hmac "${aws_secret_access_key}" -binary |
    base64
)

# PUT!
curl \
    -X PUT \
    --data-binary @- \
    -H "Host: ${bucket}.s3.amazonaws.com" \
    -H "Date: ${dateValue}" \
    -H "Content-Type: ${contentType}" \
    -H "Authorization: AWS ${aws_access_key_id}:${signature}" \
    "https://${bucket}.s3.amazonaws.com${file}"

Upvotes: 7
