Tomáš Tonhajzer

Reputation: 25

Spark transform RDD

I have a CSV file like this as input:

time,col1,col2,col3  
0,5,8,9 
1,6,65,3 
2,5,8,465,4 
3,85,45,8

The number of columns is unknown, and I expect a result RDD in the format:

(constant,column,time,value) 

that means: ((car1,col1,0,5), (car1,col2,0,8), ...)

I have built RDDs for the header, rows, and time:

class SimpleCSVHeader(header: Array[String]) extends Serializable {
  val index = header.zipWithIndex.toMap
  def apply(array: Array[String], key: String): String = array(index(key))
}

val constant = "car1"

val csv = sc.textFile("C:\\file.csv")

val data = csv.map(line => line.split(",").map(elem => elem.trim))

val header = new SimpleCSVHeader(data.take(1)(0)) // build the header from the first line
val rows = data.filter(line => header(line, "time") != "time") // filter the header row out
val time = rows.map(row => header(row, "time"))

but I'm not sure how to create the result RDD from that.

Upvotes: 1

Views: 265

Answers (1)

Balaji Reddy

Reputation: 5700

My suggestion is to use a DataFrame rather than an RDD for your scenario, but I have tried to give you a working solution; note that it is subject to the volume of your data.

val lines = Array("time,col1,col2,col3", "0,5,8,9", "1,6,65,3", "2,5,8,465,4")

val sc = prepareConfig()
val baseRDD = sc.parallelize(lines)
val columnList = baseRDD.take(1)

// Prepare the column-index map; this code can be avoided if you use DataFrames
val map = scala.collection.mutable.Map[Int, String]()
columnList.foreach { line =>
  var index: Int = 0
  line.split(",").foreach { col =>
    index += 1
    map += (index -> col)
  }
}

// Filter the header line out before transforming the data rows
val mapRDD = baseRDD.filter(line => !line.startsWith("time")).flatMap { line =>
  val splits = line.split(",")

  // Replace the tuples with your case classes
  Array(("car1", map(2), splits(0), splits(1)),
        ("car1", map(3), splits(0), splits(2)),
        ("car1", map(4), splits(0), splits(3)))
}

mapRDD.collect().foreach(println)

Result:

(car1,col1,0,5)
(car1,col2,0,8)
(car1,col3,0,9)
(car1,col1,1,6)
(car1,col2,1,65)
(car1,col3,1,3)
(car1,col1,2,5)
(car1,col2,2,8)
(car1,col3,2,465)
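Since the number of columns is unknown, the per-row transposition can also be written generically by zipping each row with the header instead of hard-coding `map(2)`, `map(3)`, `map(4)`. Here is a minimal sketch of that logic in plain Scala (no Spark needed to try it; in Spark the same body would go inside `rows.flatMap`, and the sample header and rows below are taken from the question):

```scala
// Pair each data cell with its column name and the row's time value.
val constant = "car1"
val header = Array("time", "col1", "col2", "col3")
val rows = Array(
  Array("0", "5", "8", "9"),
  Array("1", "6", "65", "3")
)

// For every row, zip the value columns with their header names,
// carrying along the constant and the row's time (column 0).
val result = rows.flatMap { row =>
  val time = row(0)
  header.drop(1).zip(row.drop(1)).map { case (col, value) =>
    (constant, col, time, value)
  }
}

result.foreach(println)
```

This produces tuples in the `(constant, column, time, value)` shape the question asks for, for however many columns the header declares.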

Upvotes: 0
