LearneR

Reputation: 2531

How to get the schema (columns and their types) of ORC files stored in HDFS?

I have ORC files stored in different folders on HDFS as follows:

/DATA/UNIVERSITY/DEPT/STUDENT/part-00000.orc
/DATA/UNIVERSITY/DEPT/CREDIT/part-00000.orc

I do not know how many columns are present in each of those tables (STUDENT, CREDIT, etc.). Is there a way to get the schemas from these files? I'm looking to obtain the column names and their data types so that I can write CREATE statements for Hive EXTERNAL tables.

Upvotes: 2

Views: 7681

Answers (3)

Ravikumar K

Reputation: 11

/DATA/UNIVERSITY/DEPT/STUDENT/part-00000.orc

You can create a DataFrame from this file and print its schema (note: the "header" option only applies to CSV sources, so it is unnecessary for ORC):

val df = spark.read.orc("/DATA/UNIVERSITY/DEPT/STUDENT/part-00000.orc")

df.printSchema()
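
If you are using PySpark rather than Scala, a minimal equivalent sketch (assuming a SparkSession is already available as `spark`):

# Read the ORC file and print its schema.
df = spark.read.orc("/DATA/UNIVERSITY/DEPT/STUDENT/part-00000.orc")
df.printSchema()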

Upvotes: 1

Strick

Reputation: 1642

Hive's ORC file dump command will serve your purpose:

hive --orcfiledump /DATA/UNIVERSITY/DEPT/STUDENT/part-00000.orc

You will get the columns, their types, min/max values, the count of records, and many other stats, as shown below:

Rows: 6
Compression: ZLIB
Compression size: 262144
Type: struct<_col0:string,_col1:string>

Stripe Statistics:
  Stripe 1:
    Column 0: count: 6
    Column 1: count: 6 min: abc max: mno sum: 17
    Column 2: count: 6 min: def max: tre sum: 18

File Statistics:
  Column 0: count: 6
  Column 1: count: 6 min: abc max: mno sum: 17
  Column 2: count: 6 min: def max: tre sum: 18

Stripes:
  Stripe: offset: 3 data: 58 rows: 6 tail: 49 index: 67
    Stream: column 0 section ROW_INDEX start: 3 length 9
    Stream: column 1 section ROW_INDEX start: 12 length 29
    Stream: column 2 section ROW_INDEX start: 41 length 29
    Stream: column 1 section DATA start: 70 length 20
    Stream: column 1 section LENGTH start: 90 length 12
    Stream: column 2 section DATA start: 102 length 21
    Stream: column 2 section LENGTH start: 123 length 5
    Encoding column 0: DIRECT
    Encoding column 1: DIRECT_V2
    Encoding column 2: DIRECT_V2
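
If you want to turn that Type line into column definitions programmatically, here is a rough sketch (it assumes a flat struct like the one above; nested types such as struct/map/array would need a real parser):

import re

# Type line copied from the dump above. The generic _colN names appear
# when the writer did not record real column names in the ORC file.
type_line = "struct<_col0:string,_col1:string>"

inner = re.fullmatch(r"struct<(.*)>", type_line).group(1)
for field in inner.split(","):  # naive split; breaks on nested types
    name, _, ftype = field.partition(":")
    print(name, ftype)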

Upvotes: 2

LearneR

Reputation: 2531

Found a way to get the details (column names and data types) I wanted, via Spark:

data = sqlContext.sql("SELECT * FROM orc.`<HDFS_path>`")
data.printSchema()

This will print the output in the below format, which is exactly the information I want to extract from the ORC files on HDFS:

root
 |-- <column_name1>: <datatype> (nullable = <true/false>)
 |-- <column_name2>: <datatype> (nullable = <true/false>)
 |-- <column_name3>: <datatype> (nullable = <true/false>)
 |-- <column_name4>: <datatype> (nullable = <true/false>)
 |-- <column_name5>: <datatype> (nullable = <true/false>)
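
To go one step further and generate the CREATE statement itself, here is a rough sketch (it reuses the `data` DataFrame from above, assumes Spark's simpleString() type names map onto Hive types, which holds for common primitives like string/int/bigint/double, and uses the STUDENT table from the question as a hypothetical target):

# Build column definitions from the DataFrame schema.
cols = ",\n  ".join(
    "`%s` %s" % (f.name, f.dataType.simpleString())
    for f in data.schema.fields
)
ddl = (
    "CREATE EXTERNAL TABLE student (\n"
    "  " + cols + "\n"
    ")\n"
    "STORED AS ORC\n"
    "LOCATION '/DATA/UNIVERSITY/DEPT/STUDENT'"
)
print(ddl)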

Upvotes: 1
