Below is my code from Spark 1.6. I am trying to convert it to Spark 2.3, but I am getting an error when using split.
Spark 1.6 code:
val file = spark.textFile(args(0))
val mapping = file.map(_.split('\t')).map(a => a(1))
mapping.saveAsTextFile(args(1))
Spark 2.3 code:
val file = spark.read.text(args(0))
val mapping = file.map(_.split('\t')).map(a => a(1)) // Getting the error here
mapping.write.text(args(1))
Error Message:
value split is not a member of org.apache.spark.sql.Row
Answer:
Unlike spark.textFile, which returns an RDD, spark.read.text returns a DataFrame, which is essentially an RDD[Row]. You could perform map with a partial function, as shown in the following example:
// /path/to/textfile (fields separated by tabs):
// a b c
// d e f
import org.apache.spark.sql.Row
import spark.implicits._ // encoders needed by map; already in scope in spark-shell
val df = spark.read.text("/path/to/textfile")
df.map{ case Row(s: String) => s.split("\\t") }.map(_(1)).show
// +-----+
// |value|
// +-----+
// |    b|
// |    e|
// +-----+
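Alternatively, if you would rather keep the RDD-style one-liner from 1.6, Spark 2.x also provides spark.read.textFile, which returns a Dataset[String] rather than a DataFrame, so split works on each line directly. Here is a minimal sketch of the full program on that basis (the object name and the tab delimiter are assumptions mirroring the question):

import org.apache.spark.sql.SparkSession

object SecondColumn { // hypothetical app name
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("SecondColumn").getOrCreate()
    import spark.implicits._ // String encoder for the map below

    // textFile on DataFrameReader yields Dataset[String], one element per line
    val lines   = spark.read.textFile(args(0))
    val mapping = lines.map(_.split("\t")(1)) // keep the second tab-separated field
    mapping.write.text(args(1))

    spark.stop()
  }
}

Since mapping is a Dataset[String] with a single value column, write.text writes it back out as plain text, matching the behavior of saveAsTextFile in the 1.6 version.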