Aditya

Reputation: 2301

Creating a PySpark DataFrame without any alterations to column names

I am creating tables using Spark SQL with the CTAS command below.

CREATE TABLE TBL2
STORED AS ORC 
LOCATION "dbfs:/loc"  
TBLPROPERTIES("orc.compress" = "SNAPPY")
AS
SELECT Col1
       , ColNext2
       , ColNext3
       , ... 
FROM TBL1  

After that, I read the files underlying the newly created location (TBL2) using the PySpark code below. However, the resulting DataFrame is created with all column names in lowercase, whereas I expect the mixed-case names I used in the CTAS above.

# note: 'inferSchema' and 'header' are CSV reader options; ORC files carry their own schema
df = spark.read.format('ORC') \
     .option('inferSchema', True) \
     .option('header', True) \
     .load('dbfs:/loc')

df.show()

Actual output:

col1 colnext2 colnext3 ...

Expected Output:

Col1 ColNext2 ColNext3 ...

Upvotes: 1

Views: 276

Answers (1)

Steven

Reputation: 15258

This is expected behavior, documented in the Spark SQL migration guide (upgrading from 2.3 to 2.4):

"In version 2.3 and earlier, when reading from a Parquet data source table, Spark always returns null for any column whose column names in Hive metastore schema and Parquet schema are in different letter cases, no matter whether spark.sql.caseSensitive is set to true or false. Since 2.4, when spark.sql.caseSensitive is set to false, Spark does case insensitive column name resolution between Hive metastore schema and Parquet schema, so even column names are in different letter cases, Spark returns corresponding column values. An exception is thrown if there is ambiguity, i.e. more than one Parquet column is matched. This change also applies to Parquet Hive tables when spark.sql.hive.convertMetastoreParquet is set to true." (source: Spark SQL migration guide)
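The quoted note concerns Parquet while the question reads ORC, so treat it as indicative rather than exact: the names you get back come from the file or metastore schema, not from how you typed them in the CTAS. Below is a minimal sketch of two possible workarounds, assuming Spark 2.4+; the spark.table call presumes TBL2 is registered in the metastore, and expected_names is an illustrative stand-in for the real column list.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Option 1: read through the metastore table instead of the raw files.
# If Spark recorded a case-preserved schema as a table property when the
# table was created, this returns the original mixed-case names.
df = spark.table('TBL2')

# Option 2: read the files directly and restore the desired casing by hand.
# expected_names is illustrative (copied from the question) and must match
# the actual column order in the files.
expected_names = ['Col1', 'ColNext2', 'ColNext3']
df = spark.read.orc('dbfs:/loc').toDF(*expected_names)

df.printSchema()

Note that toDF renames columns positionally, so it only restores the desired casing; it does not change how Spark resolves names against the metastore the way spark.sql.caseSensitive does.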

Upvotes: 4
