user3555287

Reputation: 11

Working with non-English characters in columns of Spark Scala DataFrames

Here is part of a file I am trying to load into a dataframe:

alphabet|Sentence|Comment1
è|Small e|None
Ü|Capital U|None
ã|Small a|
Ç|Capital C|None

When I load this file into a DataFrame, all the non-English characters get converted into boxes. I tried passing option("encoding","UTF-8"), but there is no change.

val nonEnglishDF = spark.read
  .format("com.databricks.spark.csv")
  .option("delimiter", "|")
  .option("header", true)
  .option("encoding", "UTF-8")
  .load(hdfs file path)

Please let me know if there is any solution for this. I eventually need to save the file with the non-English characters unchanged. Currently, when the file is saved, it contains boxes or question marks instead of the non-English characters.

Upvotes: 0

Views: 3149

Answers (2)

user3555287

Reputation: 11

It works with option("encoding","ISO-8859-1"), e.g.:

val nonEnglishDF = spark.read
  .format("com.databricks.spark.csv")
  .option("delimiter", "|")
  .option("header", true)
  .option("encoding", "ISO-8859-1")
  .load(hdfs file path)
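For completeness, the saved file also needs an explicit charset. Below is a minimal read/write sketch, assuming the input really is Latin-1 (ISO-8859-1) encoded; the input and output paths are hypothetical placeholders. Once Spark has decoded the bytes correctly, the write-side encoding option controls the charset of the saved CSV.

// Minimal sketch; paths are placeholders, charsets are assumptions
val df = spark.read
  .format("csv")
  .option("delimiter", "|")
  .option("header", "true")
  .option("encoding", "ISO-8859-1")
  .load("/path/to/input")

df.write
  .format("csv")
  .option("delimiter", "|")
  .option("header", "true")
  .option("encoding", "UTF-8") // or "ISO-8859-1" to keep the original bytes
  .save("/path/to/output")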

Upvotes: 1

jayrythium

Reputation: 767

Use the decode function on that column:

decode(col("column_name"), "US-ASCII")

// It should work with one of these: 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16'
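A minimal sketch of how that call might be wired into a DataFrame, assuming the column from the question ("alphabet") and the nonEnglishDF from above; the charset to pass depends on how the file was actually encoded, so try the ones listed until the text renders correctly.

import org.apache.spark.sql.functions.{col, decode}

// Re-decode the column's bytes with an explicit charset and keep the result in a new column
val decodedDF = nonEnglishDF.withColumn("alphabet_decoded", decode(col("alphabet"), "ISO-8859-1"))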

Upvotes: 1
