Naveen

Reputation: 687

Spark: Process multiline input blob

I'm new to Hadoop/Spark and I'm trying to process a multi-line input blob into a CSV or tab-delimited format for further processing.

Example Input

------------------------------------------------------------------------
AAA=someValueAAA1
BBB=someValueBBB1
CCC=someValueCCC1
DDD=someValueDDD1
EEE=someValueEEE1
FFF=someValueFFF1
ENDOFRECORD
------------------------------------------------------------------------
AAA=someValueAAA2
BBB=someValueBBB2
CCC=someValueCCC2
DDD=someValueDDD2
EEE=someValueEEE2
FFF=someValueFFF2
ENDOFRECORD
------------------------------------------------------------------------
AAA=someValueAAA3
BBB=someValueBBB3
CCC=someValueCCC3
DDD=someValueDDD3
EEE=someValueEEE3
FFF=someValueFFF3
GGG=someValueGGG3
HHH=someValueHHH3
ENDOFRECORD
------------------------------------------------------------------------

Needed output

someValueAAA1, someValueBBB1, someValueCCC1, someValueDDD1, someValueEEE1, someValueFFF1
someValueAAA2, someValueBBB2, someValueCCC2, someValueDDD2, someValueEEE2, someValueFFF2
someValueAAA3, someValueBBB3, someValueCCC3, someValueDDD3, someValueEEE3, someValueFFF3

Code I've tried so far:

// inputRDD
val inputRDD = sc.textFile("/somePath/someFile.gz")

// transform
val singleRDD = inputRDD.map(x=>x.split("ENDOFRECORD")).filter(x=>x.trim.startsWith("AAA"))


val logData = singleRDD.map(x=>{
  val rowData = x.split("\n")

  var AAA = ""
  var BBB = ""
  var CCC = ""
  var DDD = ""
  var EEE = ""
  var FFF = ""

  for (data <- rowData){
    if(data.trim().startsWith("AAA")){
      AAA = data.split("AAA=")(1)
    }else if(data.trim().startsWith("BBB")){
      BBB = data.split("BBB=")(1)
    }else if(data.trim().startsWith("CCC=")){
      CCC = data.split("CCC=")(1)
    }else if(data.trim().startsWith("DDD=")){
      DDD = data.split("DDD=")(1)
    }else if(data.trim().startsWith("EEE=")){
      EEE = data.split("EEE=")(1)
    }else if(data.trim().startsWith("FFF=")){
      FFF = data.split("FFF=")(1)
    }
  }
  (AAA,BBB,CCC,DDD,EEE,FFF)
})

logData.take(10).foreach(println)

This does not seem to work, and I get output such as:

AAA,,,,,,
,BBB,,,,,
,,CCC,,,,
,,,DDD,,,

I can't seem to figure out what's wrong here. Do I have to write a custom InputFormat to solve this?

Upvotes: 0

Views: 98

Answers (1)

V Sree Harissh

Reputation: 663

To process the data as per your requirement:

  1. Load the dataset with wholeTextFiles; this gives you the dataset as (filename, content) key-value pairs.
  2. flatMap over the file content, splitting on ENDOFRECORD, so that each element holds the lines of a single record. For example:

    AAA=someValueAAA1
    BBB=someValueBBB1
    CCC=someValueCCC1
    DDD=someValueDDD1
    EEE=someValueEEE1
    FFF=someValueFFF1

  3. Split each record block on \n and pick the individual fields out of the resulting lines.

Try the code below:

// Load the dataset; wholeTextFiles gives (filename, fileContent) pairs
val data = sc.wholeTextFiles("file:///path/to/file")

// Split each file's content into record blocks on the ENDOFRECORD marker
val data1 = data.flatMap(x => x._2.split("ENDOFRECORD"))

// Parse each record block line by line and pull out the six fields
val logData = data1.map(x => {
  val rowData = x.split("\n")

  var AAA = ""
  var BBB = ""
  var CCC = ""
  var DDD = ""
  var EEE = ""
  var FFF = ""

  for (data <- rowData) {
    if (data.trim().contains("AAA=")) {
      AAA = data.split("AAA=")(1)
    } else if (data.trim().contains("BBB=")) {
      BBB = data.split("BBB=")(1)
    } else if (data.trim().contains("CCC=")) {
      CCC = data.split("CCC=")(1)
    } else if (data.trim().contains("DDD=")) {
      DDD = data.split("DDD=")(1)
    } else if (data.trim().contains("EEE=")) {
      EEE = data.split("EEE=")(1)
    } else if (data.trim().contains("FFF=")) {
      FFF = data.split("FFF=")(1)
    }
  }
  (AAA, BBB, CCC, DDD, EEE, FFF)
})

// Print the parsed records (collect first if running on a cluster)
logData.foreach(println)

OUTPUT:

(someValueAAA1,someValueBBB1,someValueCCC1,someValueDDD1,someValueEEE1,someValueFFF1)
(someValueAAA2,someValueBBB2,someValueCCC2,someValueDDD2,someValueEEE2,someValueFFF2)
(someValueAAA3,someValueBBB3,someValueCCC3,someValueDDD3,someValueEEE3,someValueFFF3)
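
Two optional follow-ups, in case they help. First, if the end goal is the comma-separated file from the question rather than printed tuples, a minimal sketch (the output path below is just a placeholder):

// Join the six fields into one comma-separated line per record
val csvLines = logData.map { case (aaa, bbb, ccc, ddd, eee, fff) =>
  Seq(aaa, bbb, ccc, ddd, eee, fff).mkString(", ")
}

// Write the lines out as plain text (placeholder path)
csvLines.saveAsTextFile("file:///path/to/output")

Second, on the InputFormat question: you shouldn't need a custom one. As an alternative to wholeTextFiles (which loads each file's whole content as a single value), you can set the standard Hadoop property textinputformat.record.delimiter so that each Spark record is already one ENDOFRECORD block; a sketch along those lines:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat

// Treat ENDOFRECORD (instead of newline) as the record delimiter
val conf = new Configuration(sc.hadoopConfiguration)
conf.set("textinputformat.record.delimiter", "ENDOFRECORD")

// Each value is now one whole record block, which you can parse as above
val records = sc
  .newAPIHadoopFile("/somePath/someFile.gz", classOf[TextInputFormat],
    classOf[LongWritable], classOf[Text], conf)
  .map(_._2.toString)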

Upvotes: 1
