Eden

Reputation: 98

Why does orderBy() modify the result of aggregation on a DataFrame with PySpark?

I have to use PySpark to find the products most often sold together, using data stored in a DataFrame named sales_NY, a sample of which looks like this:

+-------+--------------------+--------+------+-------------------+--------------------+-------------+-----+-----+----+
|OrderID|             Product|Quantity| Price|          OrderDate|        StoreAddress|         City|State|Month|Hour|
+-------+--------------------+--------+------+-------------------+--------------------+-------------+-----+-----+----+
| 295665|  Macbook Pro Laptop|       1|1700.0|2019-12-30 00:01:00|136 Church St, Ne...|New York City|   NY|   12|   0|
| 295666|  LG Washing Machine|       1| 600.0|2019-12-29 07:03:00|562 2nd St, New Y...|New York City|   NY|   12|   7|
| 295667|USB-C Charging Cable|       1| 11.95|2019-12-12 18:21:00|277 Main St, New ...|New York City|   NY|   12|  18|
| 295670|AA Batteries (4-p...|       1|  3.84|2019-12-31 22:58:00|200 Jefferson St,...|New York City|   NY|   12|  22|
| 295698|     Vareebadd Phone|       1| 400.0|2019-12-13 14:32:00|175 1st St, New Y...|New York City|   NY|   12|  14|
| 295698|USB-C Charging Cable|       2| 11.95|2019-12-13 14:32:00|175 1st St, New Y...|New York City|   NY|   12|  14|
| 295700|Bose SoundSport H...|       1| 99.99|2019-12-25 19:02:00|363 Hickory St, N...|New York City|   NY|   12|  19|
| 295704|    Wired Headphones|       1| 11.99|2019-12-12 00:20:00|457 8th St, New Y...|New York City|   NY|   12|   0|
| 295705|    Wired Headphones|       1| 11.99|2019-12-25 10:41:00|133 Jackson St, N...|New York City|   NY|   12|  10|
| 295712|  Macbook Pro Laptop|       1|1700.0|2019-12-10 20:02:00|331 Madison St, N...|New York City|   NY|   12|  20|
| 295713|Bose SoundSport H...|       1| 99.99|2019-12-24 07:55:00|490 Spruce St, Ne...|New York City|   NY|   12|   7|
| 295720|AA Batteries (4-p...|       1|  3.84|2019-12-17 22:52:00|298 Ridge St, New...|New York City|   NY|   12|  22|
| 295728|    27in FHD Monitor|       1|149.99|2019-12-21 19:21:00|366 Washington St...|New York City|   NY|   12|  19|
| 295735|              iPhone|       1| 700.0|2019-12-22 18:25:00|374 Lincoln St, N...|New York City|   NY|   12|  18|
| 295735|Apple Airpods Hea...|       1| 150.0|2019-12-22 18:25:00|374 Lincoln St, N...|New York City|   NY|   12|  18|
| 295735|    Wired Headphones|       1| 11.99|2019-12-22 18:25:00|374 Lincoln St, N...|New York City|   NY|   12|  18|
| 295740|USB-C Charging Cable|       1| 11.95|2019-12-01 20:36:00|102 Cedar St, New...|New York City|   NY|   12|  20|
| 295742|Apple Airpods Hea...|       1| 150.0|2019-12-09 23:45:00|368 Sunset St, Ne...|New York City|   NY|   12|  23|
| 295743|USB-C Charging Cable|       1| 11.95|2019-12-03 11:52:00|346 South St, New...|New York City|   NY|   12|  11|
| 295745|       Flatscreen TV|       1| 300.0|2019-12-24 10:38:00|124 Lakeview St, ...|New York City|   NY|   12|  10|
+-------+--------------------+--------+------+-------------------+--------------------+-------------+-----+-----+----+

To find the products most often sold together, I start with a common block of code that loads the data into memory (sorry for the lack of reproducibility, but the full dataset is too large to include here):

Common part:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, collect_list, size, hour

spark = (SparkSession.builder.appName('SalesAnalytics').getOrCreate())

### This is the local path to my data:
file_path = './data/output/sales/ReportYear=2019'
sales_raw_df = (spark.read.format('parquet')
                 .option('header', 'True')
                 .option('inferSchema', 'True')
                 .load(file_path))
sales_raw_df = sales_raw_df.withColumn('Hour', hour(sales_raw_df.OrderDate))

sales_NY = (sales_raw_df.where(col('State') == 'NY'))

Now I can follow two different versions of the solution, which I believed to be fully equivalent, but their outputs are slightly different. The versions differ in that the second one adds an intermediate orderBy('OrderID', 'Product') step before the aggregation.

Version 1:

sales_q4_df = (sales_NY.groupBy('OrderID', 'State')
                       .agg(collect_list('Product').alias('ProductList')))
sales_q4_df = (sales_q4_df.withColumn('ProductListSize', size('ProductList')))

### discards the orders with a single product (the OrderID just appears once)
sales_q4_df = sales_q4_df.filter(col('ProductListSize') > 1).orderBy('ProductList', ascending=True)
most_prods_together = sales_q4_df.groupBy('ProductList').count().orderBy('count', ascending=False).show(10, False)

Output 1:

+------------------------------------------------------+-----+
|ProductList                                           |count|
+------------------------------------------------------+-----+
|[iPhone, Lightning Charging Cable]                    |126  |
|[Google Phone, USB-C Charging Cable]                  |124  |
|[Google Phone, Wired Headphones]                      |52   |
|[Vareebadd Phone, USB-C Charging Cable]               |49   |
|[iPhone, Wired Headphones]                            |46   |
|[iPhone, Apple Airpods Headphones]                    |43   |
|[Google Phone, Bose SoundSport Headphones]            |23   |
|[Vareebadd Phone, Wired Headphones]                   |17   |
|[Apple Airpods Headphones, Wired Headphones]          |12   |
|[Google Phone, USB-C Charging Cable, Wired Headphones]|11   |
+------------------------------------------------------+-----+

Version 2:

sales_q4_df = (sales_NY.orderBy('OrderID', 'Product')
                       .groupBy('OrderID', 'State')
                       .agg(collect_list('Product').alias('ProductList')))
sales_q4_df = (sales_q4_df.withColumn('ProductListSize', size('ProductList')))

### discards the orders with a single product (the OrderID just appears once)
sales_q4_df = sales_q4_df.filter(col('ProductListSize') > 1).orderBy('ProductList', ascending=True)
most_prods_together = sales_q4_df.groupBy('ProductList').count().orderBy('count', ascending=False).show(10, False)

Output 2:

+-------------------------------------------------+-----+
|ProductList                                      |count|
+-------------------------------------------------+-----+
|[Google Phone, USB-C Charging Cable]             |127  |
|[Lightning Charging Cable, iPhone]               |126  |
|[Google Phone, Wired Headphones]                 |53   |
|[USB-C Charging Cable, Vareebadd Phone]          |50   |
|[Wired Headphones, iPhone]                       |46   |
|[Apple Airpods Headphones, iPhone]               |45   |
|[Bose SoundSport Headphones, Google Phone]       |24   |
|[Apple Airpods Headphones, Wired Headphones]     |19   |
|[Vareebadd Phone, Wired Headphones]              |17   |
|[AA Batteries (4-pack), Lightning Charging Cable]|16   |
+-------------------------------------------------+-----+

Can anyone explain to me why the results are different? Is this a bug in PySpark?

I am working in notebooks with JupyterLab v3.4.2, PySpark v3.0.1 and Java v15.

PS: I should add that I also tried the sort() method (which I understood to be more efficient than orderBy() because it works over several partitions), but the result was the same.
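
For reference, the sort() variant was essentially Version 2 with sort() in place of orderBy() (a sketch of what I ran):

### Same as Version 2, but using sort() instead of orderBy()
sales_q4_df = (sales_NY.sort('OrderID', 'Product')
                       .groupBy('OrderID', 'State')
                       .agg(collect_list('Product').alias('ProductList')))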

Upvotes: 1

Views: 388

Answers (2)

walking

Reputation: 960

As Emma mentioned, using groupBy on a list column whose element order is effectively random can produce odd results: ['a', 'b'] and ['b', 'a'] are two different values.

You can apply array_sort to the result of collect_list, and then the groupBy will return the same results every time.
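
For example, a minimal sketch reusing sales_NY and the column names from your question (untested against your data):

from pyspark.sql.functions import array_sort, collect_list, size, col

# Sort each collected list so the groupBy key no longer depends on the
# (non-deterministic) order in which collect_list gathered the products.
pairs_df = (sales_NY.groupBy('OrderID', 'State')
            .agg(array_sort(collect_list('Product')).alias('ProductList')))

(pairs_df.filter(size('ProductList') > 1)
         .groupBy('ProductList')
         .count()
         .orderBy(col('count').desc())
         .show(10, False))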

That being said, a groupBy on a list is not the most accurate way to answer the question "what are the products most often sold together?". ['iphone', 'iphone charger', 'chewing gum'] and ['iphone', 'iphone charger'] are two different rows in your result DataFrame, yet the association between iphone and iphone charger is probably what you are looking for.

You can use FP-Growth from pyspark.ml.fpm, which can give you association rules and the pairs of items (or itemsets of other sizes) that commonly occur together.

from pyspark.ml.fpm import FPGrowth
from pyspark.sql.functions import size, col

df = spark.createDataFrame(
    [
     [1, ["tomato", "cucumber", "onion"]],
     [2, ["cucumber", "avocado", "tomato", "olive oil"]],
     [3, ["cucumber", "tomato", "lettuce", "onion"]],
     [4, ["lettuce", "onion"]],
     [5, ["olive oil", "bread"]],
     [6, ["onion", "olive oil", "lettuce"]]
    ], ["order_id", "products"]
)

fpGrowth = FPGrowth(itemsCol="products", minSupport=0.2, minConfidence=0.75)
model = fpGrowth.fit(df)

model.freqItemsets.filter(size("items") > 1).orderBy(col("freq").desc()).show()

+--------------------+----+
|               items|freq|
+--------------------+----+
|    [lettuce, onion]|   3|
|  [tomato, cucumber]|   3|
|   [cucumber, onion]|   2|
|[tomato, cucumber...|   2|
|     [tomato, onion]|   2|
+--------------------+----+

model.associationRules.show() might also interest you.
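
For instance, on the toy data above you could sort the rules by confidence (a small sketch; the exact rows depend on the data):

from pyspark.sql.functions import col

# Association rules derived from the frequent itemsets; the result includes
# columns such as antecedent, consequent and confidence.
model.associationRules.orderBy(col("confidence").desc()).show(truncate=False)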

Upvotes: 3

Emma

Reputation: 9308

This is because Spark DataFrames are unordered.

In Version 1 you don't have an orderBy, so collect_list can collect the Product values in any order.

sales_q4_df = (sales_NY.groupBy('OrderID', 'State')
               .agg(collect_list('Product').alias('ProductList')))

Check this item in the list from Version 1: [Google Phone, Bose SoundSport Headphones], where "G"oogle comes before "B"ose, with a count of 23.

I am guessing you also have an entry of [Bose SoundSport Headphones, Google Phone] with a count of 1 if you look at the full output (beyond the 10 rows shown).
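
One way to check that guess (a sketch against the Version 1 sales_q4_df from your question, before the final groupBy; the product pair is the one discussed above):

from pyspark.sql.functions import array, lit, col

# Count the orders whose list was collected in the reversed order.
reversed_pair = array(lit('Bose SoundSport Headphones'), lit('Google Phone'))
sales_q4_df.filter(col('ProductList') == reversed_pair).count()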

In Version 2 you added .orderBy('OrderID', 'Product'), which makes the order of the collected lists deterministic, so you are now counting the complete set of [Bose SoundSport Headphones, Google Phone] orders, with a count of 24.

Upvotes: 0
