Della

Reputation: 1614

How to read an ORC file stored locally in Python Pandas?

Can I think of an ORC file as similar to a CSV file with column headings and row labels containing data? If so, can I somehow read it into a simple pandas dataframe? I am not that familiar with tools like Hadoop or Spark, but is it necessary to understand them just to see the contents of a local ORC file in Python?

The filename is someFile.snappy.orc

I can see online that spark.read.orc('someFile.snappy.orc') works, but even after import pyspark it still throws an error.

Upvotes: 15

Views: 33310

Answers (6)

Rafal Janik

Reputation: 309

I haven't been able to find any great options; there are a few dead projects that try to wrap the Java reader. However, pyarrow does have an ORC reader that doesn't require PySpark. It's a bit limited, but it works.

import pandas as pd
import pyarrow.orc as orc

# Read the whole ORC file into an Arrow Table, then convert it to pandas
with open(filename, 'rb') as file:
    data = orc.ORCFile(file)
    df = data.read().to_pandas()

Upvotes: 12

Sam_Ste

Reputation: 454

The easiest way is to use pyorc:

import pyorc
import pandas as pd

with open(r"my_orc_file.orc", "rb") as orc_file:
    reader = pyorc.Reader(orc_file)
    orc_data = reader.read()          # list of row tuples
    orc_schema = reader.schema

columns = list(orc_schema.fields)     # column names from the ORC schema
df = pd.DataFrame(data=orc_data, columns=columns)

Upvotes: 2

success malla

Reputation: 90

I did not want to submit a Spark job just to read local ORC files, and I did not want to pull in pandas either. This worked for me.

import pyarrow.orc as orc

# Read the ORC file into an Arrow Table, then convert it to a plain dict
data_reader = orc.ORCFile("/path/to/orc/part_file.zstd.orc")
data = data_reader.read()
source = data.to_pydict()

Upvotes: 0

Gabe

Reputation: 6045

Starting with pandas 1.0.0, there is a built-in function for this: pandas.read_orc().

https://pandas.pydata.org/docs/reference/api/pandas.read_orc.html

import pandas as pd
import pyarrow.orc 

df = pd.read_orc('/tmp/your_df.orc')

Be sure to read this warning about dependencies; the function might not work on Windows: https://pandas.pydata.org/docs/getting_started/install.html#install-warn-orc

If you want to use read_orc(), it is highly recommended to install pyarrow using conda
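For example, with conda (the conda-forge channel is an assumption here, but it is the usual source for an ORC-enabled Arrow build):

```shell
# pyarrow from conda-forge ships an Arrow build with ORC support enabled
conda install -c conda-forge pyarrow
```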

Upvotes: 2

Duy Tran

Reputation: 322

In case import pyarrow.orc as orc does not work (it did not work for me on Windows 10), you can read the file into a Spark DataFrame and then convert it to a pandas DataFrame:

import findspark
from pyspark.sql import SparkSession

findspark.init()
spark = SparkSession.builder.getOrCreate()
df_spark = spark.read.orc('example.orc')
df_pandas = df_spark.toPandas()

Upvotes: 4

Andrea

Reputation: 4473

ORC, like Avro and Parquet, is a format designed specifically for massive storage. You can think of these files as being "like a CSV": they all contain tabular data, each with its own particular structure (different from CSV, or JSON of course!).

Reading an ORC file with pyspark should be easy, as long as your environment has Hive support. To answer your question: I'm not sure you will be able to read it in a local environment without Hive; I've never done it (you can run a quick test with the following code). From the docs:

Loads ORC files, returning the result as a DataFrame.

Note: Currently ORC support is only available together with Hive support.

>>> df = spark.read.orc('python/test_support/sql/orc_partitioned')

Hive is a data warehouse system that lets you query data on HDFS (a distributed file system) through MapReduce, much like a traditional relational database (the queries are SQL-like, but it does not support 100% of the standard SQL features!).

Edit: Try the following to create a new Spark session. Not to be rude, but I suggest you follow one of the many PySpark tutorials to understand the basics of this "world". Everything will be much clearer.

import findspark
findspark.init()
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('Test').getOrCreate()

Upvotes: 1
