Shasu

Reputation: 501

How to increase the default precision and scale while loading data from Oracle using spark-sql

I am trying to load data from an Oracle table in which a few columns hold floating-point values, sometimes with up to 20 digits after the decimal point (i.e. DecimalType(40,20)). Currently I load those columns using

// ora_df is assumed to be a DataFrameReader already configured with the
// Oracle JDBC url and credentials.
var local_ora_df: DataFrameReader = ora_df
local_ora_df
  .option("partitionColumn", "FISCAL_YEAR")
  .option("schema", schema)
  .option("dbtable", query)
  .load()

After loading, the column only keeps 10 digits after the decimal point, i.e. decimal(38,10) (nullable = true). What should I do to increase the number of digits after the point while reading from Oracle using spark-sql?

Upvotes: 1

Views: 1166

Answers (1)

Shasu

Reputation: 501

We can use .option("customSchema", "data DECIMAL(38, 15)") to increase it to 15 digits after the decimal point.
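For context, applied to a full JDBC read the option looks like this (a minimal sketch; the connection details and the AMOUNT column name are placeholders, not taken from the question):

// customSchema overrides Spark's default JDBC type mapping for the
// listed columns; columns not listed keep their inferred types.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB")  // placeholder URL
  .option("dbtable", "SALES")                                // placeholder table
  .option("user", "scott")
  .option("password", "tiger")
  .option("customSchema", "AMOUNT DECIMAL(38, 15)")
  .load()

df.printSchema()  // AMOUNT: decimal(38,15) (nullable = true)

Note that Spark's DecimalType caps precision at 38, so you cannot go beyond DECIMAL(38, s); raising the scale (e.g. DECIMAL(38, 20)) keeps more fractional digits at the cost of fewer digits before the point.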

Upvotes: 1
