Baptiste Merliot

Reputation: 861

Spark add column to dataframe when reading csv

I have a CSV with data shaped like this:

0,0;1,0;2,0;3,0;4,0;6,0;8,0;9,1
4,0;2,1;2,0;1,0;1,0;0,1;3,0;1,0;"BC"
4,0;2,1;2,0;1,0;1,0;0,1;4,0;1,0;"BC"
4,0;2,1;2,0;1,0;1,0;0,1;5,0;1,0;"BC"
4,0;2,1;2,0;1,0;1,0;0,1;6,0;1,0;"BC"

I want to convert it into a DataFrame with the last column named "value". I already wrote this code in Scala:

val rawdf = spark.read.format("csv")
                 .option("header", "true")
                 .option("delimiter", ";")
                 .load(CSVPATH)

But I get this result with rawdf.show(numRows = 4):

+---+---+---+---+---+---+---+---+
|0,0|1,0|2,0|3,0|4,0|6,0|8,0|9,1|
+---+---+---+---+---+---+---+---+
|4,0|2,1|2,0|1,0|1,0|0,1|3,0|1,0|
|4,0|2,1|2,0|1,0|1,0|0,1|4,0|1,0|
|4,0|2,1|2,0|1,0|1,0|0,1|5,0|1,0|
|4,0|2,1|2,0|1,0|1,0|0,1|6,0|1,0|
+---+---+---+---+---+---+---+---+

How can I add the last column in Spark? Or should I just write it into the CSV file?

Upvotes: 1

Views: 4269

Answers (3)

Simon

Reputation: 6363

Here's a way to do it without changing the CSV file: set the schema in your code.

import org.apache.spark.sql.types.{StructType, StructField, StringType}

val schema = StructType(
    Array(
        StructField("0,0", StringType),
        StructField("1,0", StringType),
        StructField("2,0", StringType),
        StructField("3,0", StringType),
        StructField("4,0", StringType),
        StructField("6,0", StringType),
        StructField("8,0", StringType),
        StructField("9,1", StringType),
        StructField("X", StringType)
    )
)

val rawdf = 
    spark.read.format("csv")
        .option("header", "true")
        .option("delimiter", ";")
        .schema(schema)
        .load("tmp.csv")

Upvotes: 4

Ramesh Maharjan

Reputation: 41957

If you don't know the length of the lines in the data, you can read the file as an RDD, do some parsing, and then build a schema to form a DataFrame, as below:

import scala.util.Try
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StructType, StructField, StringType}

//read the data as an RDD and split the lines
val rddData = spark.sparkContext.textFile(CSVPATH)
    .map(_.split(";", -1))

//get the maximum row length from the data and create the schema
val maxlength = rddData.map(_.length).max
val schema = StructType((1 to maxlength).map(x => StructField(s"col_${x}", StringType, true)))

//pad each row to maxlength, filling "null" where data is missing, and apply the schema
val rawdf = spark.createDataFrame(
    rddData.map(x => Row.fromSeq((0 until maxlength).map(index => Try(x(index)).getOrElse("null")))),
    schema)

rawdf.show(false)

which should give you

+-----+-----+-----+-----+-----+-----+-----+-----+-----+
|col_1|col_2|col_3|col_4|col_5|col_6|col_7|col_8|col_9|
+-----+-----+-----+-----+-----+-----+-----+-----+-----+
|0,0  |1,0  |2,0  |3,0  |4,0  |6,0  |8,0  |9,1  |null |
|4,0  |2,1  |2,0  |1,0  |1,0  |0,1  |3,0  |1,0  |"BC" |
|4,0  |2,1  |2,0  |1,0  |1,0  |0,1  |4,0  |1,0  |"BC" |
|4,0  |2,1  |2,0  |1,0  |1,0  |0,1  |5,0  |1,0  |"BC" |
|4,0  |2,1  |2,0  |1,0  |1,0  |0,1  |6,0  |1,0  |"BC" |
+-----+-----+-----+-----+-----+-----+-----+-----+-----+
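
If you then want the asker's "value" column, the last generated column can be renamed. A small follow-up sketch, assuming maxlength is 9 as above so the last column is col_9:

//rename the last auto-generated column to the desired name
val df = rawdf.withColumnRenamed(s"col_${maxlength}", "value")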

I hope the answer is helpful.

Upvotes: 0

Constantine

Reputation: 1406

Spark maps the data columns to the available header columns when you set:

.option("header", "true")

You can resolve this issue in one of two ways:

  1. Setting header = false.
  2. Adding a header column for the last data column, or just adding a semicolon (;) at the end of the header line (a read sketch follows the examples below).

e.g.:

0,0;1,0;2,0;3,0;4,0;6,0;8,0;9,1;
4,0;2,1;2,0;1,0;1,0;0,1;3,0;1,0;"BC"
4,0;2,1;2,0;1,0;1,0;0,1;4,0;1,0;"BC"
4,0;2,1;2,0;1,0;1,0;0,1;5,0;1,0;"BC"
4,0;2,1;2,0;1,0;1,0;0,1;6,0;1,0;"BC"

OR

0,0;1,0;2,0;3,0;4,0;6,0;8,0;9,1;col_end
4,0;2,1;2,0;1,0;1,0;0,1;3,0;1,0;"BC"
4,0;2,1;2,0;1,0;1,0;0,1;4,0;1,0;"BC"
4,0;2,1;2,0;1,0;1,0;0,1;5,0;1,0;"BC"
4,0;2,1;2,0;1,0;1,0;0,1;6,0;1,0;"BC"
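
With the second header fix above, a minimal read sketch, assuming the asker's original options and renaming col_end to the desired "value":

val rawdf = spark.read.format("csv")
    .option("header", "true")
    .option("delimiter", ";")
    .load(CSVPATH)
    .withColumnRenamed("col_end", "value")  //col_end is the header name added above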

Upvotes: 0
