Reputation: 4553
I am converting large CSV files into Parquet files for further analysis. I read the CSV data into Pandas and specify the column dtypes as follows:
_dtype = {"column_1": "float64",
"column_2": "category",
"column_3": "int64",
"column_4": "int64"}
df = pd.read_csv("data.csv", dtype=_dtype)
I then do some more data cleaning and write the data out into Parquet for downstream use.
_parquet_kwargs = {"engine": "pyarrow",
                   "compression": "snappy",
                   "index": False}
df.to_parquet("data.parquet", **_parquet_kwargs)
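As a sanity check, the pandas metadata that to_parquet stores in the file can be inspected (pq.read_schema and the pandas_metadata property are existing pyarrow APIs); it should show column_2 recorded as categorical:

import pyarrow.parquet as pq

# Inspect the pandas metadata written into the file; the
# "pandas_type" entry for column_2 should read "categorical".
print(pq.read_schema("data.parquet").pandas_metadata)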
But when I read the data back into Pandas for further analysis using pd.read_parquet, I cannot seem to recover the category dtypes. The following
df = pd.read_parquet("data.parquet")
results in a DataFrame with object dtypes in place of the desired category dtypes.
The following seems to work as expected
import pyarrow.parquet as pq
_table = (pq.ParquetFile("data.parquet")
          .read(use_pandas_metadata=True))
df = _table.to_pandas(strings_to_categorical=True)
however, I would like to know how this can be done using pd.read_parquet.
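As a stopgap I can re-cast the columns by hand after reading, but that requires knowing the categorical column names in advance:

import pandas as pd

df = pd.read_parquet("data.parquet")
# Manually restore the category dtype for the known column(s)
df["column_2"] = df["column_2"].astype("category")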
Upvotes: 14
Views: 14925
Reputation: 1332
We had a similar problem. When working with a multi-file Parquet dataset, our workaround, based on the Table.to_pandas() documentation, is as follows:
import pyarrow.parquet as pq

dft = pq.read_table('path/to/data_parquet/', use_pandas_metadata=True)
df = dft.to_pandas(categories=['column_2'])
The use_pandas_metadata flag works for the datetime64[ns] dtype.
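For reference, here is a minimal sketch of the multi-file case (the tiny DataFrame, the part column, and the data_parquet path are illustrative only):

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

# Write a small partitioned (multi-file) dataset
df = pd.DataFrame({"column_2": pd.Categorical(["a", "a", "b"]),
                   "part": [0, 0, 1]})
pq.write_to_dataset(pa.Table.from_pandas(df),
                    root_path="data_parquet",
                    partition_cols=["part"])

# Read it back; categories= forces column_2 back to category dtype
dft = pq.read_table("data_parquet", use_pandas_metadata=True)
print(dft.to_pandas(categories=["column_2"]).dtypes)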
Upvotes: 3
Reputation: 3497
This is fixed in Arrow 0.15; the following code now keeps the columns as categories (and performance is significantly faster):
import pandas
df = pandas.DataFrame({'foo': list('aabbcc'),
                       'bar': list('xxxyyy')}).astype('category')
df.to_parquet('my_file.parquet')
df = pandas.read_parquet('my_file.parquet')
df.dtypes
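With Arrow 0.15+ installed, the dtypes output should report category for both columns:

foo    category
bar    category
dtype: object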
Upvotes: 16