Sam Proschansky

Reputation: 41

Uploading file to an s3 bucket path longer than 63 characters

I am writing a Lambda function that copies a file from one S3 bucket to another when the source bucket is updated. I am running into an invalid parameter exception when uploading the file, because the S3 path I am passing is longer than 63 characters. Is there a way to get around this?

import boto3
import datetime
import sys
import os
from os import getenv
import json
import csv

REPORT_BUCKET = getenv('REPORT_BUCKET', 'origin-bucket-name')
now = datetime.datetime.now() - datetime.timedelta(days=1)
today = now.strftime("%m/%d/%y")
today_iso = now.strftime('%Y-%m-%d')


def read_attachment(bucket, key):
    print(f'Bucket: {bucket}, Key: {key}')
    s3 = boto3.resource('s3')
    obj = s3.Object(bucket, key)
    return obj.get()['Body'].read()


def upload_file(data, new_file, bucket_name):
    temp = '/tmp/tmp-{}.csv'.format(today_iso)
    with open(temp, 'w', newline='') as outfile:
        writer = csv.writer(outfile)
        writer.writerows(data)

    s3 = boto3.resource('s3')
    bucket = s3.Bucket(bucket_name)
    bucket.delete_objects(
        Delete={
            'Objects': [
                {'Key': new_file},
            ]
        }
    )
    bucket.upload_file(temp, new_file)
    bucket.Object(new_file).Acl().put(ACL='authenticated-read')
    os.remove(temp)
    print(bucket)
    print('Uploaded: %s/%s' % (bucket_name, new_file))


def lambda_handler(event, context):
    data = read_attachment(REPORT_BUCKET, f'{today_iso}.csv')
    attachment = data.split()
    arr = []
    arr2 = []

    for item in range(len(attachment)):
        attachment[item] = attachment[item].decode('utf-8')
        arr.append(attachment[item].split(','))
        arr2.append(arr[item])

    upload_file(arr2, f'{today_iso}.csv',
                'accountname-useast1-dl-common-0022-in/sub-folder/org=inc/f=csv/v=1.0/staging/')

    return True


if __name__ == '__main__':
    lambda_handler({}, None)

Upvotes: 0

Views: 823

Answers (1)

Xanthos Symeou

Reputation: 694

In S3, a bucket name can be at most 63 characters long. (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-s3-bucket-naming-requirements.html)

In your code you are calling:

upload_file(arr2, f'{today_iso}.csv', 'accountname-useast1-dl-common-0022-in/sub-folder/org=inc/f=csv/v=1.0/staging/')

which means that you are passing

accountname-useast1-dl-common-0022-in/sub-folder/org=inc/f=csv/v=1.0/staging/

as the bucket name. This value is longer than 63 characters, which is why S3 throws an error.

To resolve this, pass a valid (shorter) bucket name and move the rest of the path into the object key, which you can name however you like.

For example:

bucket name: accountname-useast1-dl-common-0022-in

object name: sub-folder/org=inc/f=csv/v=1.0/staging/

so the line of code that needs to change is:

upload_file(arr2, f'sub-folder/org=inc/f=csv/v=1.0/staging/{today_iso}.csv', 'accountname-useast1-dl-common-0022-in')
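As a minimal sketch of the same split done directly with boto3 (the bucket name and key are taken from the question; the date and local file path are placeholders):

import boto3

# The bucket name alone must be 63 characters or fewer;
# the "path" lives entirely in the object key.
s3 = boto3.resource('s3')
bucket = s3.Bucket('accountname-useast1-dl-common-0022-in')

# The key can contain slashes and be much longer (up to 1024 bytes).
key = 'sub-folder/org=inc/f=csv/v=1.0/staging/2021-01-01.csv'
bucket.upload_file('/tmp/tmp-2021-01-01.csv', key)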

Upvotes: 2
