Karam

Reputation: 691

How to load a CSV file with records spanning multiple lines in Spark Scala?

I have a CSV file with a multi-line field, which I am trying to load through Spark as a DataFrame.

Cust_id, cust_address, city,zip
1, "1289 cobb parkway
Bufford", "ATLANTA",34343
2, "1234 IVY lane
Decatur", "ATLANTA",23435


val df = spark.read.format("csv")
              .option("multiLine", true)
              .option("header", true)
              .option("escape", "\"")
              .load("/home/SPARK/file.csv")

df.show()

This gives me a DataFrame like:

+--------+-------------------+-----+----+
| id     | address           | city| zip|
+--------+-------------------+-----+----+
|       1| "1289 cobb parkway| null|null|
|Bufford"|          "ATLANTA"|34343|null|
|       2|     "1234 IVY lane| null|null|
|Decatur"|          "ATLANTA"|23435|null|
+--------+-------------------+-----+----+

I want output like:

+---+--------------------+-------+-----+
| id|             address|   city|  zip|
+---+--------------------+-------+-----+
|  1|1289 cobb parkway...|ATLANTA|34343|
|  2|1234 IVY lane Dec...|ATLANTA|23435|
+---+--------------------+-------+-----+

Upvotes: 2

Views: 1378

Answers (1)

Karam

Reputation: 691

// Assumes `delimiter` and `file_name` are defined elsewhere, e.g.:
//   val delimiter = ","
//   val file_name = "/home/SPARK/file.csv"
val df = sqlContext.read.format("com.databricks.spark.csv")
  .option("delimiter", delimiter)
  .option("header", true)
  .option("quote", "\"")
  .option("multiLine", "true")               // let quoted fields span lines
  .option("inferSchema", "true")
  .option("parserLib", "UNIVOCITY")          // univocity parser handles multiLine
  .option("ignoreTrailingWhiteSpace", "true")
  .option("ignoreLeadingWhiteSpace", true)
  .load(file_name)
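The key is that `multiLine` together with `quote` tells the parser that a newline inside a quoted field belongs to the current record rather than starting a new one. As a minimal sketch of that idea (plain Scala, no Spark; `MultiLineCsvSketch` and `splitRecords` are hypothetical names, not part of any library):

```scala
object MultiLineCsvSketch {
  // Split raw CSV text into logical records, treating a newline inside
  // double quotes as part of the current field rather than a record break.
  def splitRecords(csv: String): List[String] = {
    val records = scala.collection.mutable.ListBuffer.empty[String]
    val current = new StringBuilder
    var inQuotes = false
    for (c <- csv) c match {
      case '"' =>
        inQuotes = !inQuotes
        current += c
      case '\n' if !inQuotes =>
        if (current.nonEmpty) { records += current.toString; current.clear() }
      case other =>
        current += other
    }
    if (current.nonEmpty) records += current.toString
    records.toList
  }

  def main(args: Array[String]): Unit = {
    val raw =
      """Cust_id, cust_address, city,zip
        |1, "1289 cobb parkway
        |Bufford", "ATLANTA",34343
        |2, "1234 IVY lane
        |Decatur", "ATLANTA",23435""".stripMargin
    val recs = splitRecords(raw)
    // header + 2 data records, even though the file has 5 physical lines
    println(recs.length)
    recs.foreach(println)
  }
}
```

With the sample file above this yields 3 logical records (one header plus two customers) from five physical lines, which is exactly the grouping the univocity parser performs when `multiLine` is enabled.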

Upvotes: 2
