Mohammad Sadoughi

Reputation: 1169

Reading Data from AWS S3

I have some data in a very particular format (e.g., tdms files generated by NI systems) stored in an S3 bucket. If the data were on my local computer, I would typically read it in Python with the npTDMS package. But how should I read these tdms files when they are stored in an S3 bucket? One solution is to download the data, for instance to an EC2 instance, and then use the npTDMS package to read it into Python, but that does not seem ideal. Is there any way I can read the data similarly to how CSV files can be read from S3?

Upvotes: 1

Views: 9629

Answers (3)

xxyjoel

Reputation: 591

boto3 is the default option; however, awswrangler provides some nice wrappers as an alternative.
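A minimal sketch of what that can look like (the bucket and key below are placeholders, and this assumes a recent awswrangler release that includes wr.s3.download):

import awswrangler as wr

# Download a single S3 object to a local file in one call
wr.s3.download(path="s3://my-bucket/path/to/file.tdms", local_file="file.tdms")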

Upvotes: 0

Guy

Reputation: 12901

Some Python packages (such as Pandas) support reading data directly from S3, as it is the most popular location for data. See this question for an example of how to do that with Pandas.
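As a minimal sketch (the bucket and key are placeholders, and this assumes the s3fs package is installed, which Pandas uses behind the scenes for s3:// URLs):

import pandas as pd

# Pandas delegates s3:// paths to s3fs under the hood
df = pd.read_csv("s3://my-bucket/path/to/data.csv")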

If the package (npTDMS) doesn't support reading directly from S3, you should copy the data to the local disk of the notebook instance.

The simplest way to copy is to run the AWS CLI in a cell of your notebook:

!aws s3 cp s3://bucket_name/path_to_your_data/ data/ --recursive

This command will copy all the files under the "folder" in S3 to the local folder data (the --recursive flag is required when copying more than a single object).

You can do a more fine-grained copy, filtering the files by prefix or other criteria, using boto3's richer capabilities. For example:

import os
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket')
for obj in bucket.objects.filter(Prefix='myprefix'):
    # download_file lives on the Bucket resource; the second argument
    # is the local path, here just the key's base name
    bucket.download_file(obj.key, os.path.basename(obj.key))

Upvotes: 3

Khakhar Shyam

Reputation: 499

import boto3

s3 = boto3.resource('s3')
bucketname = "your-bucket-name"
filename = "the file you want to read"

# Fetch the object and read its full contents into memory as bytes
obj = s3.Object(bucketname, filename)
body = obj.get()['Body'].read()
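Once the raw bytes are in memory, they can be handed to npTDMS without touching the local disk, which is what the question asks for. A minimal sketch, assuming TdmsFile.read accepts a file-like object (it takes either a path or an already opened file):

import io
from nptdms import TdmsFile

# Wrap the downloaded bytes in a file-like buffer and parse them directly
tdms_file = TdmsFile.read(io.BytesIO(body))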

Upvotes: 0
