Giannis Stas

Reputation: 75

Creating an array from a txt file using spark-shell and Scala

I need some help with a transformation I want to make to a txt file for a homework assignment in Scala, using the Spark shell. Here is the array I created from a DataFrame:

scala> tempDF.collect()
res6: Array[org.apache.spark.sql.Row] = Array([bid|gender|department], [1|M|Informatics], [2|M|Low], [3|M|BusinessAdministration], [5|M|Mathematics], [6|M|Low], [7|M|Economics], [8|M|Economics], [9|M|Economics], [10|M|Economics], [11|M|Informatics], [13|M|Physics], [14|M|Informatics], [15|M|Informatics], [16|M|Economics], [17|M|Informatics], [18|M|Economics], [19|M|BusinessAdministration], [20|M|Mathematics], [21|M|Mathematics], [22|M|Economics], [23|M|Economics], [24|M|BusinessAdministration], [25|M|Informatics], [26|M|Statistics], [27|M|BusinessAdministration], [28|M|Economics], [29|M|Physics], [30|M|Physics], [31|M|Informatics], [32|M|Mathematics], [33|M|Economics], [34|M|BusinessAdministration], [35|M|Economics], [36|M|BusinessAdministration], [37|M|Mathema...

Now, how can I turn "bid|gender|department" into column names, with tuples like "1|M|Informatics" as the values of each column? In the txt file, "bid|gender|department" is row 0.
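To make the goal concrete, here is a plain-Scala sketch (sample lines made up to mirror the file, no Spark involved) of the split I'm after: row 0 becomes the column names and every later row becomes a tuple of values.

```scala
// Made-up sample lines mirroring the txt file; row 0 is the header.
val lines = Seq(
  "bid|gender|department",
  "1|M|Informatics",
  "2|M|Low"
)

// Row 0 becomes the column names; every later row becomes a value tuple.
// Split on the '|' character (as a Char, to avoid regex interpretation).
val columns = lines.head.split('|').toSeq
val rows    = lines.tail.map(_.split('|').toSeq)
```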

Upvotes: 1

Views: 323

Answers (2)

mck

Reputation: 42422

Write the dataframe in csv format, specifying that you want a header:

tempDF.write.format("csv").option("header", "true").save("file.txt")
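Note that `.save` writes a directory of part files rather than a single `file.txt`. As for what ends up in the file, a header-plus-rows CSV is just the column names followed by the delimited values; a plain-Scala sketch (made-up values, using the original `|` separator) of the resulting lines:

```scala
// Sketch (plain Scala, made-up values): the lines a header + rows
// produce once written out with the '|' separator.
val columns = Seq("bid", "gender", "department")
val rows = Seq(
  Seq("1", "M", "Informatics"),
  Seq("2", "M", "Low")
)

// Prepend the header, then join each row's fields with the delimiter.
val csvLines = (columns +: rows).map(_.mkString("|"))
csvLines.foreach(println)
// bid|gender|department
// 1|M|Informatics
// 2|M|Low
```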

Upvotes: 0

notNull

Reputation: 31540

Spark supports reading delimited files using the .csv method.

Try reading the file with the delimiter specified, as shown below:

spark.
read.
option("header","true").
option("delimiter","|").
csv("<txt file path>").
show()
//+---+------+-----------+
//|bid|gender| department|
//+---+------+-----------+
//|  1|     M|Informatics|
//|  2|     M|        Low|
//+---+------+-----------+
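With header-only reading, every column comes back as a String. A sketch (assuming an active spark-shell session) of additionally asking Spark to infer types, so that bid becomes an integer column:

```scala
// Sketch, assuming an active spark-shell session (spark is the
// SparkSession). inferSchema makes Spark guess column types, so
// "bid" is read as an integer instead of a string.
spark.read
  .option("header", "true")
  .option("delimiter", "|")
  .option("inferSchema", "true")
  .csv("<txt file path>")
  .printSchema()
```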

Upvotes: 2
