user121

Reputation: 931

how to write .npy file to s3 directly?

I would like to know if there is any way to write an array directly to an AWS S3 bucket as a NumPy file (.npy). I can use np.save to save a file locally, as shown below, but I am looking for a solution that writes it directly to S3 without saving locally first.

a = np.array([1, 2, 3, 4])
np.save('/my/localfolder/test1.npy', a)

Upvotes: 12

Views: 11558

Answers (3)

Wesley Cheek

Reputation: 1696

I've recently run into dependency conflicts between s3fs and boto3, so I try to avoid s3fs. This solution depends only on boto3, does not write to disk, and does not explicitly use pickle.

Saving:

from io import BytesIO
import numpy as np
from urllib.parse import urlparse
import boto3
client = boto3.client("s3")

def to_s3_npy(data: np.ndarray, s3_uri: str):
    # s3_uri looks like f"s3://{BUCKET_NAME}/{KEY}"
    bytes_ = BytesIO()
    np.save(bytes_, data, allow_pickle=True)
    bytes_.seek(0)
    parsed_s3 = urlparse(s3_uri)
    client.upload_fileobj(
        Fileobj=bytes_, Bucket=parsed_s3.netloc, Key=parsed_s3.path[1:]
    )
    return True

Loading:

def from_s3_npy(s3_uri: str):
    bytes_ = BytesIO()
    parsed_s3 = urlparse(s3_uri)
    client.download_fileobj(
        Fileobj=bytes_, Bucket=parsed_s3.netloc, Key=parsed_s3.path[1:]
    )
    bytes_.seek(0)
    return np.load(bytes_, allow_pickle=True)
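As a quick sanity check that needs no AWS credentials, the same BytesIO round trip used by these functions can be exercised locally; this only verifies the in-memory serialization step, not the S3 transfer:

```python
from io import BytesIO

import numpy as np

# Serialize an array into an in-memory buffer, exactly as to_s3_npy does
# before uploading, then load it back as from_s3_npy does after downloading.
a = np.array([1, 2, 3, 4])
buf = BytesIO()
np.save(buf, a, allow_pickle=True)
buf.seek(0)
restored = np.load(buf, allow_pickle=True)
print(np.array_equal(a, restored))  # True
```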

Upvotes: 5

PraAnj

Reputation: 929

You can also use s3fs, which is a file system interface to S3 built on top of boto. This solution uses pickle, so make sure to pass allow_pickle=True to np.load. Refer to the functions below for both writing and reading.

import numpy as np
import pickle
from s3fs.core import S3FileSystem

s3 = S3FileSystem()
bucket = 'your-bucket-name'  # replace with your bucket name

def saveLabelsToS3(npyArray, name):
    with s3.open('{}/{}'.format(bucket, name), 'wb') as f:
        f.write(pickle.dumps(npyArray))

def readLabelsFromS3(name):
    return np.load(s3.open('{}/{}'.format(bucket, name)), allow_pickle=True)

# Use as below
saveLabelsToS3(labels, 'folder/filename.pkl')
labels = readLabelsFromS3('folder/filename.pkl')

Upvotes: 2

Emile

Reputation: 1235

If you want to bypass your local disk and upload the data directly to the cloud, you may want to use pickle instead of a .npy file:

import boto3
import io
import numpy
import pickle

s3_client = boto3.client('s3')

my_array = numpy.random.randn(10)

# upload without using disk
my_array_data = io.BytesIO()
pickle.dump(my_array, my_array_data)
my_array_data.seek(0)
s3_client.upload_fileobj(my_array_data, 'your-bucket', 'your-file.pkl')

# download without using disk
my_array_data2 = io.BytesIO()
s3_client.download_fileobj('your-bucket', 'your-file.pkl', my_array_data2)
my_array_data2.seek(0)
my_array2 = pickle.load(my_array_data2)

# check that everything is correct
numpy.allclose(my_array, my_array2)


Upvotes: 6
