I am using Scala and Spark to analyze some data. Sorry, I am an absolute novice in this area.
I have data in the format shown below, and I want to create an RDD to filter, group, and transform the data.
Currently I have an RDD of unparsed strings, created from rawData, a ListBuffer[String]:
val rawData: ListBuffer[String] = ... // list of unparsed lines
val rdd = sc.parallelize(rawData)
How can I create a dataset to manipulate this data? I want the RDD to contain objects with named fields, like obj.name, obj.year, and so on. What is the right approach?
Should I create a DataFrame for this?
The raw data strings look like this (a list of strings with space-separated values):
Column meaning: "name", "year", "month", "tmax", "tmin", "afdays", "rainmm", "sunhours"
aberporth 1941 10 --- --- --- 106.2 ---
aberporth 1941 11 --- --- --- 92.3 ---
aberporth 1941 12 --- --- --- 86.5 ---
aberporth 1942 1 5.8 2.1 --- 114.0 58.0
aberporth 1942 2 4.2 -0.6 --- 13.8 80.3
aberporth 1942 3 9.7 3.7 --- 58.0 117.9
aberporth 1942 4 13.1 5.3 --- 42.5 200.1
aberporth 1942 5 14.0 6.9 --- 101.1 215.1
aberporth 1942 6 16.2 9.9 --- 2.3 269.3
aberporth 1942 7 17.4 11.3 12 70.2* 185.0
aberporth 1942 8 18.7 12.3 5- 78.5 141.9
aberporth 1942 9 16.4 10.7 123 146.8 129.1#
aberporth 1942 10 13.1 8.2 125 131.1 82.1l
--- means no data; I think I can put 0 in that column.
In values like 70.2*, 129.1# and 82.1l, the trailing *, # and l characters should be filtered out.
Please point me in the right direction.
I have found one possible solution here: https://medium.com/@mrpowers/manually-creating-spark-dataframes-b14dae906393
This example looks good:
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._

val someData = Seq(
  Row(8, "bat"),
  Row(64, "mouse"),
  Row(-27, "horse")
)
val someSchema = List(
  StructField("number", IntegerType, true),
  StructField("word", StringType, true)
)
val someDF = spark.createDataFrame(
  spark.sparkContext.parallelize(someData),
  StructType(someSchema)
)
How can I transform a list of strings into a Seq of Row?
You can read the data as a text file, replace --- with 0, and remove the special characters (or filter those rows out; I have replaced them in the example below).
Create a case class to represent the data:
case class Data(
  name: String, year: String, month: Int, tmax: Double,
  tmin: Double, afdays: Int, rainmm: Double, sunhours: Double
)
Read the file:
import spark.implicits._ // encoders for the typed map calls below

val data = spark.read.textFile("file path") // read as a text file
  // replace "---" with 0 and strip the -, # and * characters
  // (note: this also removes the minus sign from negative values such as -0.6)
  .map(_.replace("---", "0").replaceAll("-|#|\\*", ""))
  .map(_.split("\\s+")) // split each line on whitespace
  .map(x => // create a Data object for each record
    Data(x(0), x(1), x(2).toInt, x(3).toDouble, x(4).toDouble,
      x(5).toInt, x(6).toDouble, x(7).replace("l", "").toDouble) // strip the stray "l"
  )
Now you have a Dataset[Data], parsed from the text.
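From here you can filter, group, and transform it, which is what the question asks for. A minimal sketch of that kind of query (the specific aggregation here is just an illustration, not from the original post):

import org.apache.spark.sql.functions._

// keep only months that actually recorded some rainfall
val rainy = data.filter(_.rainmm > 0)

// average monthly rainfall per station and year
val avgRain = rainy
  .groupBy($"name", $"year")
  .agg(avg($"rainmm").as("avg_rainmm"))

avgRain.show(false)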
Output:
+---------+----+-----+----+----+------+------+--------+
|name |year|month|tmax|tmin|afdays|rainmm|sunhours|
+---------+----+-----+----+----+------+------+--------+
|aberporth|1941|10 |0.0 |0.0 |0 |106.2 |0.0 |
|aberporth|1941|11 |0.0 |0.0 |0 |92.3 |0.0 |
|aberporth|1941|12 |0.0 |0.0 |0 |86.5 |0.0 |
|aberporth|1942|1 |5.8 |2.1 |0 |114.0 |58.0 |
|aberporth|1942|2 |4.2 |0.6 |0 |13.8 |80.3 |
|aberporth|1942|3 |9.7 |3.7 |0 |58.0 |117.9 |
|aberporth|1942|4 |13.1|5.3 |0 |42.5 |200.1 |
|aberporth|1942|5 |14.0|6.9 |0 |101.1 |215.1 |
|aberporth|1942|6 |16.2|9.9 |0 |2.3 |269.3 |
|aberporth|1942|7 |17.4|11.3|12 |70.2 |185.0 |
|aberporth|1942|8 |18.7|12.3|5 |78.5 |141.9 |
|aberporth|1942|9 |16.4|10.7|123 |146.8 |129.1 |
|aberporth|1942|10 |13.1|8.2 |125 |131.1 |82.1 |
+---------+----+-----+----+----+------+------+--------+
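If you specifically want the Seq of Row approach asked about in the question instead of a case class, here is a minimal sketch (the sample lines are hypothetical; it reuses the same cleaning rules as above, with the same caveat about minus signs):

import scala.collection.mutable.ListBuffer
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._

val rawData = ListBuffer(
  "aberporth 1941 10 --- --- --- 106.2 ---",
  "aberporth 1942 7 17.4 11.3 12 70.2* 185.0"
)

// clean each line, split on whitespace, and build one Row per record
val rows: Seq[Row] = rawData.toSeq.map { line =>
  val x = line.replace("---", "0").replaceAll("-|#|\\*", "").split("\\s+")
  Row(x(0), x(1), x(2).toInt, x(3).toDouble, x(4).toDouble,
    x(5).toInt, x(6).toDouble, x(7).replace("l", "").toDouble)
}

val schema = StructType(List(
  StructField("name", StringType, true),
  StructField("year", StringType, true),
  StructField("month", IntegerType, true),
  StructField("tmax", DoubleType, true),
  StructField("tmin", DoubleType, true),
  StructField("afdays", IntegerType, true),
  StructField("rainmm", DoubleType, true),
  StructField("sunhours", DoubleType, true)
))

val rowDF = spark.createDataFrame(spark.sparkContext.parallelize(rows), schema)

This gives you the same columns as the Dataset[Data] above, just untyped.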
I hope this helps!