PythonRookie

Reputation: 311

Splitting a column in pyspark

I am trying to split a column in a PySpark dataframe. This is the data I have:

from pyspark.sql.functions import split

df = sc.parallelize([[1, 'Foo|10'], [2, 'Bar|11'], [3, 'Car|12']]).toDF(['Key', 'Value'])
df = df.withColumn('Splitted', split(df['Value'], '|')[0])

I got:

+---+------+--------+
|Key| Value|Splitted|
+---+------+--------+
|  1|Foo|10|       F|
|  2|Bar|11|       B|
|  3|Car|12|       C|
+---+------+--------+

But I want:

+---+-----+--------+
|Key|Value|Splitted|
+---+-----+--------+
|  1|   10|     Foo|
|  2|   11|     Bar|
|  3|   12|     Car|
+---+-----+--------+

Can anyone please point out what I am doing wrong?

What if I have a unique situation like this?

df = sc.parallelize([[1, 'Foo|10|we'], [2, 'Bar|11|we'], [3, 'Car|12|we']]).toDF(['Key', 'Value'])

+---+---------+
|Key|    Value|
+---+---------+
|  1|Foo|10|we|
|  2|Bar|11|we|
|  3|Car|12|we|
+---+---------+

Upvotes: 8

Views: 18124

Answers (1)

Ramesh Maharjan

Reputation: 41987

You forgot the escape character. The second argument to split is a regular expression, and | is the regex alternation operator, so you need to escape the pipe:

df = df.withColumn('Splitted', split(df['Value'], r'\|')[0])
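
To see why the escape matters, here is a quick sketch against the same df as in the question. A bare '|' is an alternation of two empty patterns, so it matches between every pair of characters, and index [0] picks up only the first character:

    from pyspark.sql import functions as F

    # Unescaped: '|' matches the empty string at every position,
    # so [0] is just the first character of the value.
    df.select(F.split(df['Value'], '|')[0]).show()    # F, B, C

    # Escaped: matches the literal pipe, so [0] is the full first field.
    df.select(F.split(df['Value'], r'\|')[0]).show()  # Foo, Bar, Car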

If you want the output as

+---+-----+--------+
|Key|Value|Splitted|
+---+-----+--------+
|1  |10   |Foo     |
|2  |11   |Bar     |
|3  |12   |Car     |
+---+-----+--------+

You should do

from pyspark.sql import functions as F

df = df.withColumn('Splitted', F.split(df['Value'], r'\|')) \
       .withColumn('Value', F.col('Splitted')[1]) \
       .withColumn('Splitted', F.col('Splitted')[0])
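
For the edited three-part case ('Foo|10|we'), the same escaped split still works; you just index more elements out of the array. A minimal sketch, where the column names Name, Number, and Suffix are chosen purely for illustration:

    from pyspark.sql import functions as F

    df = sc.parallelize([[1, 'Foo|10|we'], [2, 'Bar|11|we'], [3, 'Car|12|we']]).toDF(['Key', 'Value'])

    # Split once on the literal pipe, then pull each field out of the
    # resulting array by index. (Column names here are illustrative.)
    parts = F.split(df['Value'], r'\|')
    df = (df.withColumn('Name', parts.getItem(0))
            .withColumn('Number', parts.getItem(1))
            .withColumn('Suffix', parts.getItem(2)))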

Upvotes: 13
