pylearner

Reputation: 1460

Python to PySpark: converting the pivot to PySpark

I have the DataFrames below and achieved the desired output in Python, but I want to convert the same logic to PySpark.

import pandas as pd

d = {'user': ['A', 'A', 'B','B','C', 'D', 'C', 'E', 'D', 'E', 'F', 'F'], 'songs' : [11,22,99,11,11,44,66,66,33,55,11,77]}
data = pd.DataFrame(data = d)

e = {'user': ['A', 'B','C', 'D',  'E', 'F','A'], 'cluster': [1,2,3,1,2,3,2]}
clus = pd.DataFrame(data= e)

Desired output: for each user, I want all the songs of that user's cluster which the user has not listened to. A belongs to cluster 1, and cluster 1 has songs [11, 22, 33, 44], so A hasn't listened to [33, 44]. I achieved that with the Python code below.

user
A    [33, 44]
B    [55, 66]
C        [77]
D    [11, 22]
E    [11, 99]
F        [66]

PYTHON CODE:

import numpy as np

# merge songs with clusters and drop duplicate user/song pairs
df = pd.merge(data, clus, on='user', how='left').drop_duplicates(['user','songs'])

# per cluster, pivot user x songs: True where a song of the cluster was not listened to by the user
df1 = (df.groupby(['cluster']).apply(lambda x: x.pivot(index='user', columns='songs', values='cluster').isnull())
        .fillna(False)
        .reset_index(level=0, drop=True)
        .sort_index())

s = np.where(df1, ['{}, '.format(x) for x in df1.columns], '')

# remove empty values
s1 = pd.Series([''.join(x).strip(', ') for x in s], index=df1.index)
print(s1)

How can I achieve the same in distributed PySpark code?

Upvotes: 2

Views: 506

Answers (1)

mayank agrawal

Reputation: 2545

There could be a better solution than this, but it works.

Assuming that each user belongs to only one cluster,

import pyspark.sql.functions as F
from pyspark.sql import SparkSession
from pyspark.sql.types import ArrayType, IntegerType

sql = SparkSession.builder.getOrCreate()   # Spark entry point, used as `sql` below

d = list(zip(['A', 'A', 'B','B','C', 'D', 'C', 'E', 'D', 'E', 'F', 'F'],[11,22,99,11,11,44,66,66,33,55,11,77]))
data = sql.createDataFrame(d).toDF('user','songs')

This gives,

+----+-----+
|user|songs|
+----+-----+
|   A|   11|
|   A|   22|
|   B|   99|
|   B|   11|
|   C|   11|
|   D|   44|
|   C|   66|
|   E|   66|
|   D|   33|
|   E|   55|
|   F|   11|
|   F|   77|
+----+-----+

Creating the clusters, again assuming each user belongs to only one cluster (so the extra mapping of A to cluster 2 from the question is dropped),

c = list(zip(['A', 'B','C', 'D',  'E', 'F'],[1,2,3,1,2,3]))
clus = sql.createDataFrame(c).toDF('user','cluster')
clus.show()

+----+-------+
|user|cluster|
+----+-------+
|   A|      1|
|   B|      2|
|   C|      3|
|   D|      1|
|   E|      2|
|   F|      3|
+----+-------+

Now, we get all songs heard by a user along with their cluster,

all_combine = data.groupBy('user').agg(F.collect_list('songs').alias('songs'))\
                  .join(clus, data.user==clus.user).select(data['user'],'songs','cluster')
all_combine.show()

+----+--------+-------+
|user|   songs|cluster|
+----+--------+-------+
|   F|[11, 77]|      3|
|   E|[66, 55]|      2|
|   B|[99, 11]|      2|
|   D|[44, 33]|      1|
|   C|[11, 66]|      3|
|   A|[11, 22]|      1|
+----+--------+-------+

Finally, we calculate all songs heard in each cluster and, from that, the songs not heard by each user in that cluster,

# cluster's songs minus the user's own songs
not_listened = F.udf(lambda song, all_: list(set(all_) - set(song)), ArrayType(IntegerType()))

grouped_clusters = data.join(clus, data.user==clus.user).select(data['user'],'songs','cluster')\
                    .groupby('cluster').agg(F.collect_list('songs').alias('all_songs'))\
                    .join(all_combine, ['cluster']).select('user', all_combine['cluster'], 'songs', 'all_songs')\
                    .select('user', not_listened(F.col('songs'), F.col('all_songs')).alias('not_listened'))
grouped_clusters.show()

We get the output as,

+----+------------+                                                             
|user|not_listened|
+----+------------+
|   D|    [11, 22]|
|   A|    [33, 44]|
|   F|        [66]|
|   C|        [77]|
|   E|    [99, 11]|
|   B|    [66, 55]|
+----+------------+
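Alternatively, if you are on Spark 2.4 or later, the Python UDF can be avoided with the built-in array_except function. A minimal sketch along the same lines, reusing the data and clus DataFrames from above:

# Songs available in each cluster (collect_set drops duplicate listens).
cluster_songs = data.join(clus, 'user')\
                    .groupBy('cluster').agg(F.collect_set('songs').alias('all_songs'))

# Songs each user has listened to, together with the user's cluster.
user_songs = data.groupBy('user').agg(F.collect_set('songs').alias('songs'))\
                 .join(clus, 'user')

# Cluster songs minus the user's songs, computed without a Python UDF.
user_songs.join(cluster_songs, 'cluster')\
          .select('user', F.array_except('all_songs', 'songs').alias('not_listened'))\
          .show()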

Upvotes: 1
