Reputation: 6023
After some searching, I failed to find a thorough comparison of fastparquet and pyarrow.
I found this blog post (a basic comparison of speeds) and a GitHub discussion that claims that files created with fastparquet do not support AWS Athena (by the way, is that still the case?).
When/why would I use one over the other? What are the major advantages and disadvantages?
My specific use case is processing data with dask, writing it to S3, and then reading/analyzing it with AWS Athena.
Upvotes: 100
Views: 77658
Reputation: 33938
In 2024 the decision should be obvious: use pyarrow instead of fastparquet:
In our recent parquet benchmarking and resilience testing we generally found the pyarrow engine would scale to larger datasets better than the fastparquet engine, and more test cases would complete successfully when run with pyarrow than with fastparquet.
The pyarrow library has a larger development team maintaining it and seems to have more community buy-in going forward.
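For illustration, a minimal round-trip using pyarrow's own parquet module (the file name is made up):
import pyarrow as pa
import pyarrow.parquet as pq

# build a small in-memory table
table = pa.table({"a": [1, 2, 3], "b": ["x", "y", "z"]})

# write it to parquet and read it back
pq.write_table(table, "example.parquet")
table_roundtrip = pq.read_table("example.parquet")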
Upvotes: 54
Reputation: 558
However, since the question lacks concrete criteria, and I came here for a good "default choice", I want to state that pandas' default parquet engine for DataFrame objects is pyarrow (strictly, the default engine="auto" tries pyarrow first and falls back to fastparquet; see the pandas docs).
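As a minimal sketch of what that default means in practice (the file name is made up):
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})

# engine="auto" (the default) resolves to pyarrow when it is installed
df.to_parquet("default_engine.parquet")
df_back = pd.read_parquet("default_engine.parquet")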
Upvotes: 20
Reputation: 428
This question may be a bit old, but I happen to be working on the same issue and I found this benchmark https://wesmckinney.com/blog/python-parquet-update/. According to it, pyarrow is faster than fastparquet; little wonder it is the default engine used in dask.
Update:
An update to my earlier response: I have had more luck writing with pyarrow and reading with fastparquet in Google Cloud Storage.
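For reference, a minimal sketch of that mixed-engine pattern with pandas (the bucket path is illustrative, and reading/writing gs:// paths additionally requires the gcsfs package):
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})

# write with pyarrow, read back with fastparquet
df.to_parquet("gs://my-bucket/data.parquet", engine="pyarrow")
df_back = pd.read_parquet("gs://my-bucket/data.parquet", engine="fastparquet")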
Upvotes: 3
Reputation: 500
I used both fastparquet and pyarrow for converting protobuf data to parquet and for querying it in S3 using Athena. Both worked; however, in my use case, which is a Lambda function, the package zip file has to be lightweight, so I went ahead with fastparquet. (The fastparquet library was only about 1.1 MB, while the pyarrow library was 176 MB, and the Lambda package limit is 250 MB.)
I used the following to store a dataframe as parquet file:
from os import path
from fastparquet import write

# df_data is the DataFrame to persist; filename is assumed to be defined earlier
parquet_file = path.join(filename + '.parq')
write(parquet_file, df_data)
Upvotes: 32
Reputation: 41
I just used fastparquet for a case to get data out of Elasticsearch, store it in S3 and query it with Athena, and had no issue at all.
I used the following to store a dataframe in S3 as parquet file:
import s3fs
import fastparquet as fp
import pandas as pd
import numpy as np

s3 = s3fs.S3FileSystem()
myopen = s3.open
s3bucket = 'mydata-aws-bucket/'

# random dataframe for demo
df = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=list('ABCD'))

parqKey = s3bucket + "datafile" + ".parq.snappy"
fp.write(parqKey, df, compression='SNAPPY', open_with=myopen)
My table looks similar to this in Athena:
CREATE EXTERNAL TABLE IF NOT EXISTS myanalytics_parquet (
`column1` string,
`column2` int,
`column3` DOUBLE,
`column4` int,
`column5` string
)
STORED AS PARQUET
LOCATION 's3://mydata-aws-bucket/'
tblproperties ("parquet.compress"="SNAPPY")
Upvotes: 4
Reputation: 28684
I would point out that the author of the speed comparison is also the author of pyarrow :) I can speak about the fastparquet case.
From your point of view, the most important thing to know is compatibility. Athena is not one of the test targets for fastparquet (or pyarrow), so you should test thoroughly before making your choice. There are a number of options that you may want to invoke (docs) for datetime representation, nulls and types that may be important to you.
Writing to S3 using dask is certainly a test case for fastparquet, and I believe pyarrow should have no problem with that either.
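For reference, a minimal sketch of the dask-to-S3 write the question describes (the bucket name is made up; s3fs must be installed for the s3:// path to work, and recent dask versions use the pyarrow engine by default):
import dask.dataframe as dd
import pandas as pd

# build a small dask dataframe for demonstration
df = pd.DataFrame({"a": range(100), "b": ["x", "y"] * 50})
ddf = dd.from_pandas(df, npartitions=4)

# write partitioned parquet files to S3
ddf.to_parquet("s3://mydata-aws-bucket/output/", engine="pyarrow", compression="snappy")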
Upvotes: 8