Alexsander

Reputation: 663

How to write a parquet file from a pandas dataframe to S3 in Python

I have a pandas dataframe. I want to write this dataframe to a parquet file in S3. I need sample code for this. I tried to google it, but I could not find a working sample.

Upvotes: 55

Views: 120624

Answers (5)

andreas

Reputation: 256

First ensure that you have pyarrow or fastparquet installed with pandas.

Then install boto3 and the AWS CLI. Use the AWS CLI to set up the config and credentials files, located in the .aws folder.

Here is a simple script using pyarrow and boto3 to create a temporary parquet file and then send it to AWS S3.

Sample code, including imports:

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
import boto3


def main():
    data = {0: {"data1": "value1"}}
    df = pd.DataFrame.from_dict(data, orient='index')
    write_pandas_parquet_to_s3(
        df, "bucket", "folder/test/file.parquet", ".tmp/file.parquet")


def write_pandas_parquet_to_s3(df, bucketName, keyName, fileName):
    # write the dataframe to a local parquet file
    table = pa.Table.from_pandas(df)
    pq.write_table(table, fileName)

    # upload the local file to s3 (parquet is binary, so open in "rb" mode)
    s3 = boto3.client("s3")
    with open(fileName, "rb") as f:
        object_data = f.read()
        s3.put_object(Body=object_data, Bucket=bucketName, Key=keyName)
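
As a side note (not part of the original answer), boto3 also offers upload_file, which hands the transfer off to boto3 (including multipart uploads for large files) instead of reading the whole file into memory. The upload portion of the function above could be replaced with something like this sketch, reusing the same bucketName/keyName/fileName arguments:

    # let boto3 stream the local parquet file to S3
    s3 = boto3.client("s3")
    s3.upload_file(fileName, bucketName, keyName)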

Upvotes: 22

gurjarprateek

Reputation: 499

The function below gets the parquet output in a buffer and then writes buffer.getvalue() to S3, without any need to save the parquet file locally.

Also, since you're creating an S3 client, you can pass credentials using AWS access keys that can be stored locally, in an Airflow connection, or in AWS Secrets Manager.

from io import BytesIO, StringIO


def dataframe_to_s3(s3_client, input_dataframe, bucket_name, filepath, format):
    if format == 'parquet':
        out_buffer = BytesIO()
        input_dataframe.to_parquet(out_buffer, index=False)

    elif format == 'csv':
        out_buffer = StringIO()
        input_dataframe.to_csv(out_buffer, index=False)

    s3_client.put_object(Bucket=bucket_name, Key=filepath, Body=out_buffer.getvalue())

s3_client is nothing but a boto3 client object. Hope this helps!
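
For completeness, here is a minimal sketch (not part of the original answer) of creating such a client with explicit keys and calling the function above; the key values and bucket/key names are placeholders, and in practice you would load the credentials from an Airflow connection, AWS Secrets Manager, or your local AWS config:

import boto3
import pandas as pd

# placeholder credentials; load these from wherever you keep secrets
s3_client = boto3.client(
    "s3",
    aws_access_key_id="YOUR_ACCESS_KEY_ID",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
)

df = pd.DataFrame({"data1": ["value1"]})
dataframe_to_s3(s3_client, df, "my-bucket", "folder/test/file.parquet", "parquet")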

courtesy- https://stackoverflow.com/a/40615630/12036254

Upvotes: 37

Andrew Waites

Reputation: 66

Just to provide a further example using kwargs to force an overwrite.

My use case is that the partition structure ensures that if I reprocess an input file the output parquet should overwrite whatever is in the partition. To do that I am using kwargs passed through to pyarrow:

s3_url = "s3://<your-bucketname>/<your-folderpath>/"
df.to_parquet(s3_url,
              compression='snappy',
              engine='pyarrow',
              partition_cols=["GSDate", "LogSource", "SourceDate"],
              existing_data_behavior='delete_matching')

That last argument (existing_data_behavior) is part of **kwargs passed through to the underlying pyarrow write_dataset (https://arrow.apache.org/docs/python/generated/pyarrow.dataset.write_dataset.html#pyarrow.dataset.write_dataset).

Without that, a rerun would create duplicate data. As noted in the other answers, this requires s3fs.
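
For reference (a sketch, not part of the original answer), roughly the same partitioned, overwriting write can be expressed directly against pyarrow's dataset API, assuming your AWS credentials are already configured and reusing the placeholder bucket/folder names from above:

import pyarrow as pa
import pyarrow.dataset as ds
from pyarrow import fs

table = pa.Table.from_pandas(df)
s3 = fs.S3FileSystem()  # picks up credentials from the environment / .aws config

ds.write_dataset(
    table,
    "<your-bucketname>/<your-folderpath>",  # no "s3://" prefix when a filesystem is passed
    format="parquet",
    partitioning=["GSDate", "LogSource", "SourceDate"],
    partitioning_flavor="hive",
    existing_data_behavior="delete_matching",
    filesystem=s3,
)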

Upvotes: 2

Vincent Claes

Reputation: 4768

For Python 3.6+, AWS has a library called AWS Data Wrangler (awswrangler) that helps with the integration between pandas, S3, and Parquet.

To install it, do:

pip install awswrangler

If you want to write your pandas dataframe as a parquet file to S3, do:

import awswrangler as wr
wr.s3.to_parquet(
    df=df,
    path="s3://my-bucket/key/my-file.parquet"
)
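
As a usage note (my addition, not part of the original answer), reading the file back with the same library is symmetric:

import awswrangler as wr

df = wr.s3.read_parquet(path="s3://my-bucket/key/my-file.parquet")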

Upvotes: 13

Wai Kiat

Reputation: 859

For your reference, the following code works for me.

s3_url = 's3://bucket/folder/bucket.parquet.gzip'
df.to_parquet(s3_url, compression='gzip')

In order to use to_parquet, you need pyarrow or fastparquet to be installed. Also, make sure you have the correct information in your config and credentials files, located in the .aws folder.

Edit: Additionally, s3fs is needed; see https://stackoverflow.com/a/54006942/1862909
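
If you would rather not rely on the .aws files, newer pandas versions (1.2+) also accept a storage_options dict that is passed through to s3fs; a minimal sketch with placeholder key values:

df.to_parquet(
    "s3://bucket/folder/bucket.parquet.gzip",
    compression="gzip",
    # placeholder credentials; load these from wherever you keep secrets
    storage_options={"key": "YOUR_ACCESS_KEY_ID", "secret": "YOUR_SECRET_ACCESS_KEY"},
)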

Upvotes: 65
