Fish357

Reputation: 97

Pyspark Group and Order by Sum for Group Divide by parts

I am new to PySpark and am confused about how to group data by a couple of columns, order it by another column, sum a column within each group, and then use that sum as the denominator for each row in the group to calculate a per-row weight.

This is being done in JupyterLab using a PySpark 3 notebook. There's no way to get around that.

Here is an example of the data...

+-------+-----+-----------+------------+------+--------+
| ntwrk | zip | zip-ntwrk | event-date | hour | counts |
+-------+-----+-----------+------------+------+--------+
| A     | 1   | 1-A       | 2019-10-10 | 1    | 12362  |
| B     | 3   | 3-B       | 2019-10-10 | 1    | 100    |
| C     | 5   | 5-C       | 2019-10-10 | 1    | 17493  |
| B     | 3   | 3-B       | 2019-10-10 | 4    | 4873   |
| A     | 2   | 2-A       | 2019-10-11 | 1    | 28730  |
| C     | 6   | 6-C       | 2019-10-11 | 1    | 728    |
| C     | 5   | 5-C       | 2019-10-10 | 2    | 9827   |
| A     | 1   | 1-A       | 2019-10-10 | 9    | 13245  |
| B     | 4   | 4-B       | 2019-10-11 | 1    | 3765   |
+-------+-----+-----------+------------+------+--------+

I'd like to group this by ntwrk, zip, zip-ntwrk, and event-date, and then order it by event-date and hour. There are 24 hours in each date, so for each zip-ntwrk combo I want to see the date and the hours in order. Something like this...

+-------+-----+-----------+------------+------+--------+
| ntwrk | zip | zip-ntwrk | event-date | hour | counts |
+-------+-----+-----------+------------+------+--------+
| A     | 1   | 1-A       | 2019-10-10 | 1    | 12362  |
| A     | 1   | 1-A       | 2019-10-10 | 9    | 13245  |
| A     | 2   | 2-A       | 2019-10-11 | 1    | 28730  |
| B     | 3   | 3-B       | 2019-10-10 | 1    | 100    |
| B     | 3   | 3-B       | 2019-10-10 | 4    | 4873   |
| B     | 4   | 4-B       | 2019-10-11 | 1    | 3765   |
| C     | 5   | 5-C       | 2019-10-10 | 1    | 17493  |
| C     | 5   | 5-C       | 2019-10-10 | 2    | 9827   |
| C     | 6   | 6-C       | 2019-10-11 | 1    | 728    |
+-------+-----+-----------+------------+------+--------+
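Here is a sketch of the sort I have in mind (assuming the data is already loaded into a DataFrame; df is just a placeholder name):

# sort by the grouping columns, then by date and hour within each group
df = df.orderBy("ntwrk", "zip", "zip-ntwrk", "event-date", "hour")
df.show()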

Now that everything is in order, I need to calculate, for each hour, the ratio of that hour's count to the total of counts for the whole day (all hours combined). The day total will serve as the denominator: dividing the hourly count by it gives the share of the day's counts that falls in each hour. So something like this...

+-------+-----+-----------+------------+------+--------+-------+
| ntwrk | zip | zip-ntwrk | event-date | hour | counts | total |
+-------+-----+-----------+------------+------+--------+-------+
| A     | 1   | 1-A       | 2019-10-10 | 1    | 12362  | 25607 |
| A     | 1   | 1-A       | 2019-10-10 | 9    | 13245  | 25607 |
| A     | 2   | 2-A       | 2019-10-11 | 1    | 28730  | 28730 |
| B     | 3   | 3-B       | 2019-10-10 | 1    | 100    | 4973  |
| B     | 3   | 3-B       | 2019-10-10 | 4    | 4873   | 4973  |
| B     | 4   | 4-B       | 2019-10-11 | 1    | 3765   | 3765  |
| C     | 5   | 5-C       | 2019-10-10 | 1    | 17493  | 27320 |
| C     | 5   | 5-C       | 2019-10-10 | 2    | 9827   | 27320 |
| C     | 6   | 6-C       | 2019-10-11 | 1    | 728    | 728   |
+-------+-----+-----------+------------+------+--------+-------+
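If I understand window functions correctly, the total column would come from something like this (just a sketch on my part, assuming the data is in df as above; I haven't confirmed this is the right approach):

from pyspark.sql import Window
from pyspark.sql import functions as F

# sum counts over every hour belonging to the same ntwrk/zip/event-date
day_window = Window.partitionBy("ntwrk", "zip", "event-date")
with_total = df.withColumn("total", F.sum("counts").over(day_window))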

And now that we have the denominator, we can divide counts by total for each row to get the factor (counts / total = factor), which would end up looking like...

+-------+-----+-----------+------------+------+--------+-------+--------+
| ntwrk | zip | zip-ntwrk | event-date | hour | counts | total | factor |
+-------+-----+-----------+------------+------+--------+-------+--------+
| A     | 1   | 1-A       | 2019-10-10 | 1    | 12362  | 25607 | 0.483  |
| A     | 1   | 1-A       | 2019-10-10 | 9    | 13245  | 25607 | 0.517  |
| A     | 2   | 2-A       | 2019-10-11 | 1    | 28730  | 28730 | 1.000  |
| B     | 3   | 3-B       | 2019-10-10 | 1    | 100    | 4973  | 0.020  |
| B     | 3   | 3-B       | 2019-10-10 | 4    | 4873   | 4973  | 0.980  |
| B     | 4   | 4-B       | 2019-10-11 | 1    | 3765   | 3765  | 1.000  |
| C     | 5   | 5-C       | 2019-10-10 | 1    | 17493  | 27320 | 0.640  |
| C     | 5   | 5-C       | 2019-10-10 | 2    | 9827   | 27320 | 0.360  |
| C     | 6   | 6-C       | 2019-10-11 | 1    | 728    | 728   | 1.000  |
+-------+-----+-----------+------------+------+--------+-------+--------+
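Presumably the last step is just a per-row division, something like this (again a sketch, building on the hypothetical with_total frame above):

# divide each hourly count by that day's total to get the per-row weight
with_factor = with_total.withColumn("factor", F.col("counts") / F.col("total"))
with_factor.show()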

That's what I'm trying to do, and any advice on how to get this done would be greatly appreciated.

Thanks

Upvotes: 2

Views: 606

Answers (3)

Prateek Jain

Reputation: 587

PySpark runs on a distributed architecture, so it may not retain row order. You should always order the data the way you need before showing the records.

Now, on your point about getting the % of records at various levels: you can achieve this with a window function, partitioning by the levels at which you want the data.

Like (with Window imported from pyspark.sql and pyspark.sql.functions imported as F; note F.count needs a column argument):

# count the rows in each zip-ntwrk/hour partition
w = Window.partitionBy("zip-ntwrk", "hour")
df = df.withColumn("hourly_recs", F.count("*").over(w))

Also, you can refer to this tutorial on YouTube - https://youtu.be/JEBd_4wWyj0

Upvotes: 0

derp

Reputation: 11

You must have reticulated the splines

Upvotes: 1

notNull

Reputation: 31490

Use the window sum function: sum counts over a window partitioned by ntwrk, zip, and event-date, then divide counts by total to get the factor.

Example:

from pyspark.sql import Window
from pyspark.sql import functions as F

# partition by ntwrk/zip/event-date so the sum covers every hour of that day
w = Window.partitionBy("ntwrk", "zip", "event-date")

df1.withColumn("total", F.sum(F.col("counts")).over(w).cast("int")) \
   .orderBy("ntwrk", "zip", "event-date", "hour") \
   .withColumn("factor", F.format_number(F.col("counts") / F.col("total"), 3)) \
   .show()

#+-----+---+---------+----------+----+------+-----+------+
#|ntwrk|zip|zip-ntwrk|event-date|hour|counts|total|factor|
#+-----+---+---------+----------+----+------+-----+------+
#|    A|  1|      1-A|2019-10-10|   1| 12362|25607| 0.483|
#|    A|  1|      1-A|2019-10-10|   9| 13245|25607| 0.517|
#|    A|  2|      2-A|2019-10-11|   1| 28730|28730| 1.000|
#|    B|  3|      3-B|2019-10-10|   1|   100| 4973| 0.020|
#|    B|  3|      3-B|2019-10-10|   4|  4873| 4973| 0.980|
#|    B|  4|      4-B|2019-10-11|   1|  3765| 3765| 1.000|
#|    C|  5|      5-C|2019-10-10|   1| 17493|27320| 0.640|
#|    C|  5|      5-C|2019-10-10|   2|  9827|27320| 0.360|
#|    C|  6|      6-C|2019-10-11|   1|   728|  728| 1.000|
#+-----+---+---------+----------+----+------+-----+------+

Upvotes: 3
