Reputation: 341
I have a dataset that contains sellout data for some weeks. I want to calculate a moving average over e.g. 3 weeks, while also taking into account the weeks that have no sales.
Let's consider the following data:
|------|-------|
|wk_id |sellout|
|------|-------|
|201801|    1.0|
|201802|    5.0|
|201803|    3.0|
|201805|    1.0|
|201806|    5.0|
|------|-------|
My expected result is:
|------|-------|-------------|
|wk_id |sellout|moving_avg_3w|
|------|-------|-------------|
|201801|    1.0|        0.333| <- (0+0+1)/3
|201802|    5.0|        2.000| <- (0+1+5)/3
|201803|    3.0|        3.000| <- (1+5+3)/3
|201805|    1.0|        1.333| <- (3+0+1)/3
|201806|    5.0|        2.000| <- (5+1+0)/3
|------|-------|-------------|
My naive solution would be to fill the missing weeks with 0 and then use the approach provided here: pyspark: rolling average using timeseries data
But if one has a huge amount of data, this does not seem to be the most performant approach. Does anyone have a better solution?
This question is about PySpark
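For reference, the naive version would look roughly like this (just a sketch; 'all_weeks' and 'filled' are only placeholder names, and I divide by 3 instead of taking a plain avg so that the first weeks behave as if they were padded with zeros):

import pyspark.sql.functions as F
from pyspark.sql.window import Window

# the original data from above
df = spark.createDataFrame([(201801, 1.0), (201802, 5.0), (201803, 3.0),
                            (201805, 1.0), (201806, 5.0)],
                           ['wk_id', 'sellout'])

# placeholder helper: one row per week of the period, including the weeks with no sales
all_weeks = spark.createDataFrame([(wk,) for wk in [201801, 201802, 201803, 201804, 201805, 201806]],
                                  ['wk_id'])

# fill the gaps with 0, then a plain rows-based window is enough
filled = (all_weeks.join(df, 'wk_id', 'left')
                   .fillna(0.0, subset=['sellout']))

w = Window.orderBy('wk_id').rowsBetween(-2, 0)
filled = filled.withColumn('moving_avg_3w', F.sum('sellout').over(w) / 3)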
Upvotes: 0
Views: 2438
Reputation: 29635
You can actually use the method from the link you posted, with rangeBetween over a window, once you have converted 'wk_id' to a Unix timestamp so that consecutive weeks are spaced correctly.
import pyspark.sql.functions as F
from pyspark.sql.window import Window

# create the df: some wk_id span a year boundary to check it also works when the year changes
df = spark.createDataFrame([(201801, 1.0), (201802, 5.0), (201804, 3.0),
                            (201851, 3.0), (201852, 1.0), (201901, 5.0)],
                           ['wk_id', 'sellout'])

# number of weeks you want to roll over
nb_wk = 3

# convert a number of weeks into a number of seconds
wk_to_sec = lambda i: i * 7 * 86400

# create the window of nb_wk weeks
w = Window().orderBy(F.col("sec")).rangeBetween(-wk_to_sec(nb_wk - 1), 0)

# add the column with the number of seconds, then the moving average as a sum divided by nb_wk
# (the mean function does not work here because of the missing weeks)
df = df.withColumn('sec', F.unix_timestamp(F.col('wk_id').cast('string'), format="YYYYww"))\
       .withColumn('moving_avg_{}w'.format(nb_wk), F.sum('sellout').over(w) / nb_wk)
df.show()
+------+-------+----------+------------------+
| wk_id|sellout|       sec|     moving_avg_3w|
+------+-------+----------+------------------+
|201801|    1.0|1514696400|0.3333333333333333|
|201802|    5.0|1515301200|               2.0|
|201804|    3.0|1516510800|2.6666666666666665| # here it is (5+0+3)/3
|201851|    3.0|1544936400|               1.0|
|201852|    1.0|1545541200|1.3333333333333333|
|201901|    5.0|1546146000|               3.0| # here it is (3+1+5)/3
+------+-------+----------+------------------+
You can drop the column 'sec' afterwards, or, if you don't want to create this column at all, you can do everything at once:
# create the window of nb_wk weeks, with unix_timestamp directly inside it
w = Window().orderBy(F.unix_timestamp(F.col('wk_id').cast('string'), format="YYYYww"))\
            .rangeBetween(-wk_to_sec(nb_wk - 1), 0)

df = df.withColumn('moving_avg_{}w'.format(nb_wk), F.sum('sellout').over(w) / nb_wk)
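One caveat on Spark versions (an assumption on my part, not something you need on Spark 2.x): the week-based pattern "YYYYww" is only understood by the legacy SimpleDateFormat parser, so on Spark 3.x you may have to switch the time parser policy before calling unix_timestamp:

# assumption: only needed on Spark 3.x, where week-based datetime patterns are
# rejected by the new parser unless the legacy policy is enabled
spark.conf.set("spark.sql.legacy.timeParserPolicy", "LEGACY")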
EDIT: for a moving standard deviation, I think you can do it like this, but I am not sure about the performance:
df = df.withColumn('std', F.sqrt((F.sum((F.col('sellout') - F.last('moving_avg_3w').over(w))**2).over(w)
                                  + (nb_wk - F.count('sellout').over(w)) * F.last('moving_avg_3w').over(w)**2)
                                 / nb_wk))
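The extra (nb_wk - F.count('sellout').over(w)) * mean**2 term is there because a missing week counts as a zero sale, and the squared deviation of 0 from the rolling mean is exactly mean**2. If Spark complains about the window expression nested inside the windowed sum, an equivalent sketch (my rewrite, also not tested for performance) uses the identity variance = E[x^2] - mean^2, with missing weeks contributing nothing to the sum of squares:

# equivalent population std over the fixed nb_wk-week window, written as
# sqrt(E[x^2] - mean^2); missing weeks add 0 to the sum of squares but still
# count in the nb_wk denominator
df = df.withColumn('std', F.sqrt(F.sum(F.col('sellout') ** 2).over(w) / nb_wk
                                 - (F.sum('sellout').over(w) / nb_wk) ** 2))

Both formulas give the same result; for example, for the 201804 row (window values 5.0, 0.0, 3.0, rolling mean 8/3 ≈ 2.67) the std comes out to about 2.05.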
Upvotes: 4