getaway22

Reputation: 199

replace values of one column in a spark df by dictionary key-values (pyspark)

I'm stuck on a data transformation task in PySpark. I want to replace all values of one column in a DataFrame with the key-value pairs specified in a dictionary.

dict = {'A':1, 'B':2, 'C':3}

My df looks like this:

+----+----+
|col1|col2|
+----+----+
|   B|   A|
|   A|   A|
|   A|   A|
|   C|   B|
|   A|   A|
+----+----+

Now I want to replace all values of col1 with the key-value pairs defined in dict.

Desired Output:

+----+----+
|col1|col2|
+----+----+
|   2|   A|
|   1|   A|
|   1|   A|
|   3|   B|
|   1|   A|
+----+----+

I tried

df.na.replace(dict, 1).show()

but that also replaces the values in col2, which should stay untouched.

Thank you for your help. Greetings :)

Upvotes: 8

Views: 22129

Answers (3)

titiro89

Reputation: 2108

Your data:

print(df)
DataFrame[col1: string, col2: string]
df.show()
+----+----+
|col1|col2|
+----+----+
|   B|   A|
|   A|   A|
|   A|   A|
|   C|   B|
|   A|   A|
+----+----+

diz = {"A":1, "B":2, "C":3}

Convert the values of your dictionary from integer to string, so that you don't get a type-mismatch error when replacing:

diz = {k:str(v) for k,v in diz.items()}

print(diz)
{'A': '1', 'C': '3', 'B': '2'}

Replace the values of col1:

df2 = df.na.replace(diz, 1, "col1")
print(df2)
DataFrame[col1: string, col2: string]

df2.show()
+----+----+
|col1|col2|
+----+----+
|   2|   A|
|   1|   A|
|   1|   A|
|   3|   B|
|   1|   A|
+----+----+

If you need to cast the values from string to integer:

from pyspark.sql.types import IntegerType

df3 = df2.select(df2["col1"].cast(IntegerType()), df2["col2"])
print(df3)
DataFrame[col1: int, col2: string]

df3.show()
+----+----+
|col1|col2|
+----+----+
|   2|   A|
|   1|   A|
|   1|   A| 
|   3|   B|
|   1|   A|
+----+----+
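
As a compact alternative (a sketch, assuming the same df and diz as above), the replace and the cast can be chained in one expression:

from pyspark.sql.functions import col
from pyspark.sql.types import IntegerType

# replace in col1 only, then cast the replaced column to int
df3 = (df.na.replace(diz, 1, "col1")
         .withColumn("col1", col("col1").cast(IntegerType())))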

Upvotes: 14

vikrant rana

Reputation: 4674

You can also wrap a simple lambda in a UDF that looks up the dictionary values and updates your DataFrame column.

+----+----+
|col1|col2|
+----+----+
|   B|   A|
|   A|   A|
|   A|   A|
|   A|   A|
|   C|   B|
|   A|   A|
+----+----+

mapping = {'A': 1, 'B': 2, 'C': 3}  # renamed from dict to avoid shadowing the builtin

from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

user_func = udf(lambda x: mapping.get(x), IntegerType())
newdf = df.withColumn('col1', user_func(df.col1))

newdf.show()
+----+----+
|col1|col2|
+----+----+
|   2|   A|
|   1|   A|
|   1|   A|
|   1|   A|
|   3|   B|
|   1|   A|
+----+----+

I hope this also works!
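
Note that mapping.get(x) returns None for any key that is missing from the dictionary, which Spark renders as null. A sketch of the same UDF with a fallback default (the -1 sentinel is just an assumption for illustration):

user_func = udf(lambda x: mapping.get(x, -1), IntegerType())  # -1 for unmapped keys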

Upvotes: 4

Grant Shannon

Reputation: 5055

Before replacing the values of col1 in my df, I had to automate the generation of my dictionary (given the many keys). This was done as follows:

keys = sorted(df.select('col1').rdd.flatMap(lambda x: x).distinct().collect())

keys
['A', 'B', 'C']

import numpy

maxval = len(keys)
values = list(numpy.array(list(range(maxval)))+1)

values
[1, 2, 3]

Making sure (as titiro89 mentions above) that the type of the 'new' values matches the type of the 'old' values (string in this case):

dct = {k:str(v) for k,v in zip(keys,values)}
print(dct)

{'A': '1', 'B': '2', 'C': '3'}

df2 = df.replace(dct, 1, "col1")
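
The key and value generation can also be collapsed into a single dict comprehension (a sketch, equivalent to the numpy version above):

# build {'A': '1', 'B': '2', 'C': '3'} directly from the sorted keys
dct = {k: str(i + 1) for i, k in enumerate(keys)}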

Upvotes: 0
