Arvind

Reputation: 117

How to split in Spark?

I have data in one RDD and the data is as follows:

scala> c_data
res31: org.apache.spark.rdd.RDD[String] = /home/t_csv MapPartitionsRDD[26] at textFile at <console>:25

scala> c_data.count()
res29: Long = 45212                                                             

scala> c_data.take(2).foreach(println)
age;job;marital;education;default;balance;housing;loan;contact;day;month;duration;campaign;pdays;previous;poutcome;y
58;management;married;tertiary;no;2143;yes;no;unknown;5;may;261;1;-1;0;unknown;no

I want to split the data into another RDD, and I am using:

scala> val csv_data = c_data.map{x=>
 | val w = x.split(";")
 | val age = w(0)
 | val job = w(1)
 | val marital_stat = w(2)
 | val education = w(3)
 | val default = w(4)
 | val balance = w(5)
 | val housing = w(6)
 | val loan = w(7)
 | val contact = w(8)
 | val day = w(9)
 | val month = w(10)
 | val duration = w(11)
 | val campaign = w(12)
 | val pdays = w(13)
 | val previous = w(14)
 | val poutcome = w(15)
 | val Y = w(16)
 | }

That returns:

csv_data: org.apache.spark.rdd.RDD[Unit] = MapPartitionsRDD[28] at map at <console>:27

When I query csv_data, it returns Array((),...). How can I get the data with the first row as a header and the rest as data? Where am I going wrong?

Thanks in advance.

Upvotes: 0

Views: 589

Answers (1)

Harald Gliebe

Reputation: 7564

Your mapping function returns Unit, so you map to an RDD[Unit]. You can get a tuple of your values by changing your code to

 val csv_data = c_data.map{x=>
   val w = x.split(";")
   ...
   val Y = w(16)
   (age, job, marital_stat, education, default, balance, housing, loan, contact, day, month, duration, campaign, pdays, previous, poutcome, Y)
}
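
To also handle the header row, a common pattern is to take the first line with first() and filter it out before mapping, and to parse each remaining line into a case class rather than a large tuple. Below is a minimal sketch in plain Scala (no Spark dependency needed to run it); the Record field names and numeric types are assumptions based on the header row in the question, and the commented lines show how the same parse function would plug into the RDD:

```scala
// Field names/types are assumptions inferred from the header row:
// age;job;marital;education;default;balance;housing;loan;contact;
// day;month;duration;campaign;pdays;previous;poutcome;y
case class Record(age: Int, job: String, marital: String, education: String,
                  default: String, balance: Int, housing: String, loan: String,
                  contact: String, day: Int, month: String, duration: Int,
                  campaign: Int, pdays: Int, previous: Int, poutcome: String,
                  y: String)

// Parse one semicolon-separated data line into a Record.
def parse(line: String): Record = {
  val w = line.split(";")
  Record(w(0).toInt, w(1), w(2), w(3), w(4), w(5).toInt, w(6), w(7),
         w(8), w(9).toInt, w(10), w(11).toInt, w(12).toInt, w(13).toInt,
         w(14).toInt, w(15), w(16))
}

// With the RDD from the question, you would drop the header first:
//   val header  = c_data.first()
//   val records = c_data.filter(_ != header).map(parse)

// Local check against the sample row from the question:
val sample = "58;management;married;tertiary;no;2143;yes;no;unknown;5;may;261;1;-1;0;unknown;no"
val r = parse(sample)
println(r.age)      // 58
println(r.balance)  // 2143
```

Since filter(_ != header) compares every line against the header string, it also removes any repeated header lines that appear mid-file (which can happen when several CSV files are concatenated).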

Upvotes: 1
