Reputation: 453
I am trying to define a schema for a CSV file using a case class, like this:
final case class AadharData(date: String, registrar: String, agency: String, state: String, district: String, subDistrict: String, pinCode: String, gender: String, age: String, aadharGenerated: String, rejected: String, mobileNo: Double, email: String)
An extra column is added automatically while assigning the schema to the CSV file:
val colNames = classOf[AadharData].getDeclaredFields.map(x=>x.getName)
val df = spark.read.option("header", false).csv("/home/harsh/Hunny/HadoopPractice/Spark/DF/AadharAnalysis/aadhaar_data.csv").toDF(colNames:_*).as[AadharData]
This is what I am getting for colNames:
Array(date, registrar, agency, state, district, subDistrict, pinCode, gender, age, aadharGenerated, rejected, mobileNo, email, $outer)
And error for df variable:
java.lang.IllegalArgumentException: requirement failed: The number of columns doesn't match.
Old column names (13): _c0, _c1, _c2, _c3, _c4, _c5, _c6, _c7, _c8, _c9, _c10, _c11, _c12
New column names (14): date, registrar, agency, state, district, subDistrict, pinCode, gender, age, aadharGenerated, rejected, mobileNo, email, $outer
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.sql.Dataset.toDF(Dataset.scala:376)
... 54 elided
Upvotes: 1
Views: 111
Reputation: 7336
It looks like the schema you specified in colNames is different from the schema your original DataFrame has. You can compare the column names you pass to
toDF(colNames:_*)
with the output of df.printSchema. Good luck
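The `$outer` entry in the error message suggests the case class was declared inside another class (or in the REPL) rather than at the top level; in that situation `getDeclaredFields` also returns the compiler-generated reference to the enclosing instance. A minimal sketch, using a hypothetical nested case class `Nested` to reproduce the effect and then filter it out:

```scala
// Minimal sketch (assumption: the case class in the question was declared
// inside another class or the REPL, which produces the $outer field).
object Demo {
  class Wrapper {
    // A case class nested inside a class keeps a reference to its
    // enclosing instance in a compiler-generated field named $outer.
    case class Nested(date: String, email: String)
  }

  def main(args: Array[String]): Unit = {
    val all = classOf[Wrapper#Nested].getDeclaredFields.map(_.getName)
    println(all.sorted.mkString(", "))   // prints: $outer, date, email

    // Dropping compiler-generated names leaves only the real columns.
    val clean = all.filterNot(_.contains("$"))
    println(clean.sorted.mkString(", ")) // prints: date, email
  }
}
```

Declaring AadharData at the top level (or filtering out names containing `$`, as above) should leave exactly the 13 column names the CSV has. Alternatively, Spark can derive the column names itself via Encoders.product[AadharData].schema.fieldNames.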
Upvotes: 1