Urian

Reputation: 85

pyspark operations not scaling up

I have code in a notebook that works fine on small data, but on bigger data the computation runs endlessly and eventually fails with a java.lang.OutOfMemoryError: Java heap space.

The process is as follows:

mocking pyspark data

I start with a dataframe with three columns, User, Time and Item, as mocked in the code below:

    from pyspark.sql.types import *
    from pyspark.context import SparkContext
    from pyspark.sql.session import SparkSession
    import pandas as pd
    sc = SparkContext.getOrCreate()
    spark = SparkSession(sc)

    df_schema = StructType([ StructField("User", StringType(), True)\
                       ,StructField("Time", IntegerType(), True)\
                       ,StructField("Item", StringType(), True)])
    pddf = pd.DataFrame([["u1",1,"A"],
                    ["u1",1,"A"],
                    ["u1",2,"A"],
                    ["u1",3,"B"],
                    ["u1",3,"C"],
                    ["u1",4,"B"],
                    ["u2",1,"D"],
                    ["u2",2,"D"],
                    ["u2",2,"A"],
                    ["u2",2,"F"],
                    ["u2",3,"D"],
                    ["u2",3,"A"],],columns=["User", "Time", "Item"])

    df = spark.createDataFrame(pddf,schema=df_schema)
    df.show()

which gives

+----+----+----+
|User|Time|Item|
+----+----+----+
|  u1|   1|   A|
|  u1|   1|   A|
|  u1|   2|   A|
|  u1|   3|   B|
|  u1|   3|   C|
|  u1|   4|   B|
|  u2|   1|   D|
|  u2|   2|   D|
|  u2|   2|   A|
|  u2|   2|   F|
|  u2|   3|   D|
|  u2|   3|   A|
+----+----+----+

intermediate step

Then I compute the topn most common items for each user and create a dataframe with a new column uc (for uncommon), which is set to 0 if the item is in the user's topn list and to 1 otherwise.

    import pyspark.sql.functions as F
    from pyspark.sql import Window
    ArrayOfTupleType = ArrayType(StructType([
        StructField("itemId", StringType(), False),
        StructField("count", IntegerType(), False)
    ]))

    @F.udf(returnType=ArrayOfTupleType)
    def most_common(x, topn=2):
        from collections import Counter
        c = Counter(x)
        mc = c.most_common(topn)
        return mc
    topn=2
    w0 = Window.partitionBy("User")
    dfd = (df.withColumn("Item_freq", most_common(F.collect_list("Item").over(w0), F.lit(topn)))
             .select("User", "Time" , "Item" , "Item_freq")
             .withColumn("mcs", F.col("Item_freq.itemId"))
             .withColumn("uc", F.when(F.expr("array_contains(mcs, Item)"), 0).otherwise(1)).cache())

    dfd.select("User", "Time", "Item" , "mcs" , "uc").show()

which gives the intermediate dataframe below

+----+----+----+------+---+
|User|Time|Item|mcs   |uc |
+----+----+----+------+---+
|u1  |1   |A   |[A, B]|0  |
|u1  |1   |A   |[A, B]|0  |
|u1  |2   |A   |[A, B]|0  |
|u1  |3   |B   |[A, B]|0  |
|u1  |3   |C   |[A, B]|1  |
|u1  |4   |B   |[A, B]|0  |
|u2  |1   |D   |[D, A]|0  |
|u2  |2   |D   |[D, A]|0  |
|u2  |2   |A   |[D, A]|0  |
|u2  |2   |F   |[D, A]|1  |
|u2  |3   |D   |[D, A]|0  |
|u2  |3   |A   |[D, A]|0  |
+----+----+----+------+---+

aggregating step

Then I finally group by user and time, which is the operation that fails on real data:

    uncommon = dfd.groupBy("User", "Time").agg(F.sum(F.col("uc")).alias("UncommonItem"))
    uncommon.orderBy("User", "Time", ascending=True).show()

which gives the expected results on the dummy data

+----+----+------------+
|User|Time|UncommonItem|
+----+----+------------+
|u1  |1   |0           |
|u1  |2   |0           |
|u1  |3   |1           |
|u1  |4   |0           |
|u2  |1   |0           |
|u2  |2   |1           |
|u2  |3   |0           |
+----+----+------------+

but it fails with a java.lang.OutOfMemoryError: Java heap space on the real data.

Increasing spark.driver.memory from 6G to 60G only delays the crash: it runs much longer but eventually fills the 60G as well. My real data has 1907505 input samples.
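
For reference, a minimal sketch of how the driver memory can be passed when the session is built; note that spark.driver.memory only takes effect if it is set before the driver JVM starts, so changing it inside an already-running notebook session has no effect:

    from pyspark.sql import SparkSession

    # spark.driver.memory must be set before the driver JVM starts;
    # on an already-running session this config has no effect
    spark = (SparkSession.builder
             .config("spark.driver.memory", "60g")
             .getOrCreate())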

I am not very experienced with pyspark and I am not sure where the problem comes from. Many other groupby/agg operations elsewhere are fast and do not fail on the same kind of data, so I suspect the issue comes from the way I build the dataframe dfd in the intermediate step above.
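
As a quick inspection step (not part of the pipeline itself), the extended plan of dfd can be printed to see where the window with collect_list and the Python UDF end up:

    # prints the parsed, analyzed, optimized and physical plans of dfd;
    # the Window (collect_list) and the Python UDF appear as separate plan nodes
    dfd.explain(True)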

Any idea on how to optimize the code?

Upvotes: 1

Views: 126

Answers (1)

anky

Reputation: 75100

If you are okay with changing the approach, you can give the below a shot:

    import pyspark.sql.functions as F
    from pyspark.sql import Window

    topn = 2

    # count of each Item per User
    w = Window.partitionBy('User', 'Item')
    df1 = df.withColumn("Counts", F.count('Item').over(w))

    # rank the items of each User by that count, most frequent first
    w1 = Window.partitionBy(df1["User"]).orderBy(df1['Counts'].desc())

    (df1.withColumn("dummy", F.when(F.dense_rank().over(w1) <= topn, 0).otherwise(1))
        .groupBy('User', 'Time').agg(F.max("dummy").alias('UncommonItem'))).show()

+----+----+------------+
|User|Time|UncommonItem|
+----+----+------------+
|  u1|   1|           0|
|  u1|   2|           0|
|  u1|   3|           1|
|  u1|   4|           0|
|  u2|   1|           0|
|  u2|   2|           1|
|  u2|   3|           0|
+----+----+------------+

Steps followed in the answer (the sketch after this list shows the intermediate columns):

  1. get the Counts of each Item over a window of User and Item
  2. get dense_rank over User, ordered by the Counts from step 1
  3. wherever the rank is within topn (2), return 0, else 1, and name it dummy
  4. group on User and Time and take the max of dummy
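
If it helps to see what steps 1-3 produce before the final aggregation, here is a small sketch on the question's df (the rank column is added only for illustration):

    import pyspark.sql.functions as F
    from pyspark.sql import Window

    topn = 2
    w = Window.partitionBy('User', 'Item')
    w1 = Window.partitionBy('User').orderBy(F.col('Counts').desc())

    (df.withColumn("Counts", F.count('Item').over(w))                       # step 1
       .withColumn("rank", F.dense_rank().over(w1))                         # step 2
       .withColumn("dummy", F.when(F.col("rank") <= topn, 0).otherwise(1))  # step 3
       .select("User", "Time", "Item", "Counts", "rank", "dummy")
       .show())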

Upvotes: 1
