Reputation: 21644
Both are columnar (disk) storage formats for use in data analysis systems. Both are integrated within Apache Arrow (the pyarrow package for Python) and are designed to correspond with Arrow as a columnar in-memory analytics layer.
How do both formats differ?
Should you always prefer Feather when working with pandas, whenever possible?
What are the use cases where Feather is more suitable than Parquet, and vice versa?
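For context, a minimal sketch (assuming a recent pyarrow) showing that both formats are written from the same Arrow in-memory Table:
import pyarrow as pa
import pyarrow.feather as feather
import pyarrow.parquet as pq

# the same in-memory Arrow Table backs both on-disk formats
table = pa.table({'one': [-1.0, None, 2.5], 'two': ['foo', 'bar', 'baz']})
feather.write_feather(table, 'example.feather')  # raw Arrow columnar layout
pq.write_table(table, 'example.parquet')         # encoded + compressed layout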
Appendix
I found some hints here https://github.com/wesm/feather/issues/188, but given the young age of this project, they are possibly a bit out of date.
This is not a serious speed test, because I'm just dumping and loading a whole DataFrame, but it should give you some impression if you've never heard of these formats before:
# IPython
import numpy as np
import pandas as pd
import pyarrow as pa
import pyarrow.feather as feather
import pyarrow.parquet as pq
import fastparquet as fp
df = pd.DataFrame({'one': [-1, np.nan, 2.5],
                   'two': ['foo', 'bar', 'baz'],
                   'three': [True, False, True]})
print("pandas df to disk ####################################################")
print('example_feather:')
%timeit feather.write_feather(df, 'example_feather')
# 2.62 ms ± 35.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
print('example_parquet:')
%timeit pq.write_table(pa.Table.from_pandas(df), 'example.parquet')
# 3.19 ms ± 51 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
print()
print("for comparison:")
print('example_pickle:')
%timeit df.to_pickle('example_pickle')
# 2.75 ms ± 18.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
print('example_fp_parquet:')
%timeit fp.write('example_fp_parquet', df)
# 7.06 ms ± 205 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
print('example_hdf:')
%timeit df.to_hdf('example_hdf', 'key_to_store', mode='w', format='table')
# 24.6 ms ± 4.45 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
print()
print("pandas df from disk ##################################################")
print('example_feather:')
%timeit feather.read_feather('example_feather')
# 969 µs ± 1.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
print('example_parquet:')
%timeit pq.read_table('example.parquet').to_pandas()
# 1.9 ms ± 5.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
print("for comparison:")
print('example_pickle:')
%timeit pd.read_pickle('example_pickle')
# 1.07 ms ± 6.21 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
print('example_fp_parquet:')
%timeit fp.ParquetFile('example_fp_parquet').to_pandas()
# 4.53 ms ± 260 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
print('example_hdf:')
%timeit pd.read_hdf('example_hdf')
# 10 ms ± 43.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# pandas version: 0.22.0
# fastparquet version: 0.1.3
# numpy version: 1.13.3
# pyarrow version: 0.8.0
# sys.version: 3.6.3
# example DataFrame taken from https://arrow.apache.org/docs/python/parquet.html
Upvotes: 215
Views: 88333
Reputation: 581
I made a systematic comparison of the pandas file formats, compression methods, and compression levels, based on compression ratio and save/load times.
Considering the compression method together with the compression level, zstd is the best option, especially at compression levels 10 to 12.
In terms of data format, Feather seems to be the best choice. Feather achieves a better compression ratio than Parquet; up to compression level 12 the save times of Parquet and Feather are practically the same, while Feather's load times are consistently and significantly better than Parquet's.
For these reasons, Feather in combination with zstd at compression level 10 to 12 seems to be the best choice.
Full article can be found here: https://philipmay.org/blog/2024/pandas-data-format-and-compression.html
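For reference, a minimal sketch of how to select the zstd level from pandas (the keyword arguments are forwarded to pyarrow.feather.write_feather; this assumes a reasonably recent pandas and a pyarrow build with zstd support):
import pandas as pd

df = pd.DataFrame({'x': range(1000)})
# compression and compression_level are passed through to pyarrow
df.to_feather('example.feather', compression='zstd', compression_level=10)
df2 = pd.read_feather('example.feather')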
Upvotes: 2
Reputation: 695
I would also extend the comparison between Parquet and Feather with different compression methods, checking import/export speeds and how much storage each uses.
I advocate for two options for the average user who wants a better CSV alternative; both beat plain CSV files in all categories (I/O speed and storage).
I analysed the following formats:
import zipfile
import pandas as pd
folder_path = r"...\intraday"
zip_path = zipfile.ZipFile(folder_path + "\\AAPL.zip")
test_data = pd.read_csv(zip_path.open('AAPL.csv'))
# EXPORT, STORAGE AND IMPORT TESTS
# ------------------------------------------
# - FORMAT .csv
# export
%%timeit
test_data.to_csv(folder_path + "\\AAPL.csv", index=False)
# 12.8 s ± 399 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# storage
# AAPL.csv exported using python.
# 169.034 KB
# import
%%timeit
test_data = pd.read_csv(folder_path + "\\AAPL.csv")
# 1.56 s ± 14.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# ------------------------------------------
# - FORMAT zipped .csv
# export
%%timeit
test_data.to_csv(folder_path + "\\AAPL.csv")
# 12.8 s ± 399 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# OBSERVATION: this does not include the time I spent manually zipping the .csv
# (pandas can also zip directly on export; see the sketch at the end of this section)
# storage
# AAPL.csv zipped with .zip "normal" compression using 7-zip software.
# 36.782 KB
# import
zip_path = zipfile.ZipFile(folder_path + "\\AAPL.zip")
%%timeit
test_data = pd.read_csv(zip_path.open('AAPL.csv'))
# 2.31 s ± 43.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
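# Aside: pandas can zip the CSV directly on export, avoiding the manual
# 7-zip step. A sketch, assuming a pandas version that accepts a
# compression dict with an 'archive_name' key:
test_data.to_csv(folder_path + "\\AAPL.csv.zip", index=False,
                 compression={'method': 'zip', 'archive_name': 'AAPL.csv'})
test_data = pd.read_csv(folder_path + "\\AAPL.csv.zip")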
# ------------------------------------------
# - FORMAT .feather using "zstd" compression.
# export
%%timeit
test_data.to_feather(folder_path + "\\AAPL.feather", compression='zstd')
# 460 ms ± 13.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# storage
# AAPL.feather exported with python using zstd
# 54.924 KB
# import
%%timeit
test_data = pd.read_feather(folder_path + "\\AAPL.feather")
# 310 ms ± 11.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# ------------------------------------------
# - FORMAT .feather using "lz4" compression.
# "lz4" only worked when installed with pip, not with conda. Bad sign.
# export
%%timeit
test_data.to_feather(folder_path + "\\AAPL.feather", compression='lz4')
# 392 ms ± 14.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# storage
# AAPL.feather exported with python using "lz4"
# 79.668 KB
# import
%%timeit
test_data = pd.read_feather(folder_path + "\\AAPL.feather")
# 255 ms ± 4.79 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# ------------------------------------------
# - FORMAT .parquet using compression "snappy"
# export
%%timeit
test_data.to_parquet(folder_path + "\\AAPL.parquet", compression='snappy')
# 2.82 s ± 47.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# storage
# AAPL.parquet exported with python using "snappy"
# 62.383 KB
# import
%%timeit
test_data = pd.read_parquet(folder_path + "\\AAPL.parquet")
# 701 ms ± 19.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# ------------------------------------------
# - FORMAT .parquet using compression "gzip"
# export
%%timeit
test_data.to_parquet(folder_path + "\\AAPL.parquet", compression='gzip')
# 10.8 s ± 77.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# storage
# AAPL.parquet exported with python using "gzip"
# 37.595 KB
# import
%%timeit
test_data = pd.read_parquet(folder_path + "\\AAPL.parquet")
# 1.18 s ± 80.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# ------------------------------------------
# - FORMAT .parquet using compression "brotli"
# export
%%timeit
test_data.to_parquet(folder_path + "\\AAPL.parquet", compression='brotli')
# around 5min each loop. I did not run %%timeit on this one.
# storage
# AAPL.parquet exported with python using "brotli"
# 29.425 KB
# import
%%timeit
test_data = pd.read_parquet(folder_path + "\\AAPL.parquet")
# 1.04 s ± 72 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Observations:
- "lz4" compression only worked when installed with pip, not with conda.
- conda worked fine for the "zstd" compression method.
Upvotes: 27
Reputation: 105471
Parquet format is designed for long-term storage, whereas Arrow is more intended for short-term or ephemeral storage (Arrow may be more suitable for long-term storage once the 1.0.0 release happens, since the binary format will be stable then).
Parquet is more expensive to write than Feather as it features more layers of encoding and compression. Feather is unmodified raw columnar Arrow memory. We will probably add simple compression to Feather in the future.
Due to dictionary encoding, RLE encoding, and data page compression, Parquet files will often be much smaller than Feather files.
Parquet is a standard storage format for analytics that's supported by many different systems: Spark, Hive, Impala, various AWS services, in the future by BigQuery, etc. So if you are doing analytics, Parquet is a good option as a reference storage format for querying by multiple systems.
The benchmarks you showed are going to be very noisy, since the data you read and wrote is very small. You should try compressing at least 100 MB or upwards of 1 GB of data to get more informative benchmarks (a sketch follows below); see e.g. http://wesmckinney.com/blog/python-parquet-multithreading/
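To illustrate that last point, here is a minimal sketch of a larger benchmark in the spirit of this answer (the row count, file names, and the low-cardinality string column are illustrative assumptions, and pyarrow is assumed to be installed; the repetitive string column is where Parquet's dictionary encoding should pay off versus raw Feather):
import os
import numpy as np
import pandas as pd

n = 10_000_000  # roughly hundreds of MB uncompressed
df = pd.DataFrame({
    'value': np.random.randn(n),
    'category': np.random.choice(['foo', 'bar', 'baz'], n),  # low cardinality
})
df.to_feather('big.feather')
df.to_parquet('big.parquet')
for path in ('big.feather', 'big.parquet'):
    print(path, round(os.path.getsize(path) / 1e6, 1), 'MB')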
Upvotes: 270