fnisi

Reputation: 1233

python boto - AWS S3 access without a bucket name

I have credentials ('aws access key', 'aws secret key', and a path) for a dataset stored on AWS S3. I can access the data by using CyberDuck or FileZilla Pro.

I would like to automate the data fetch stage using Python/Anaconda, which comes with boto2, for this purpose.

I do not have a "bucket" name, just a path in the form of /folder1/folder2/folder3, and I could not find a way to access the data through the API without a "bucket name".

Is there a way to access S3 programmatically without having a "bucket name", i.e. with a path instead?

Thanks

Upvotes: 1

Views: 881

Answers (1)

mandar munagekar

Reputation: 89

S3 does not have a typical native directory/folder structure; instead, objects are addressed by keys. If the URL starts with s3://dir_name/folder_name/file_name, then dir_name is actually the bucket name. If you are not sure of the bucket name but have the S3 access parameters and a path, you can:

  1. List all the S3 buckets available -

    import boto3

    s3 = boto3.client('s3')

    response = s3.list_buckets()

  2. Call s3.head_object() for each bucket with your path as the key (see the sketch below).
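Putting the two steps together, here is a minimal sketch of that approach, assuming boto3, placeholder credentials, and a hypothetical key (the real values come from the asker's access parameters and path; note that S3 keys do not start with a leading slash):

    import boto3
    from botocore.exceptions import ClientError

    # Hypothetical key derived from a path like /folder1/folder2/folder3
    key = "folder1/folder2/folder3/myfile.csv"

    s3 = boto3.client(
        "s3",
        aws_access_key_id="YOUR_ACCESS_KEY",       # placeholder credentials
        aws_secret_access_key="YOUR_SECRET_KEY",
    )

    # Try the key in every bucket the credentials can list; head_object
    # succeeds only for the bucket that actually contains the object.
    for bucket in s3.list_buckets()["Buckets"]:
        try:
            s3.head_object(Bucket=bucket["Name"], Key=key)
            print("Found in bucket:", bucket["Name"])
            break
        except ClientError:
            continue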

Upvotes: 2
