keerthi007

Reputation: 223

Groupby function on Dataframe using conditions in Pyspark

I am new to Spark and I need some help applying a condition-based groupby function. Below is my current output:

+----------+------------------+-----------------+----------+---------+------------+------+----------+--------+----------------+
|account_id|credit_card_Number|credit_card_limit|first_name|last_name|phone_number|amount|      date|    shop|transaction_code|
+----------+------------------+-----------------+----------+---------+------------+------+----------+--------+----------------+
|     12345|      123456789123|           100000|       abc|      xyz|  1234567890|  1000|01/06/2020|  amazon|             buy|
|     12345|      123456789123|           100000|       abc|      xyz|  1234567890|  1100|02/06/2020|    ebay|             buy|
|     12345|      123456789123|           100000|       abc|      xyz|  1234567890|   500|02/06/2020|  amazon|            sell|
|     12345|      123456789123|           100000|       abc|      xyz|  1234567890|   200|03/06/2020|flipkart|             buy|
|     12345|      123456789123|           100000|       abc|      xyz|  1234567890|  4000|04/06/2020|    ebay|             buy|
|     12345|      123456789123|           100000|       abc|      xyz|  1234567890|   900|05/06/2020|  amazon|             buy|
+----------+------------------+-----------------+----------+---------+------------+------+----------+--------+----------------+

I need to group by date and, in addition, create an additional column holding the balance left for that date, based on whether the transaction code is 'buy' or 'sell'.

For example, for the first row the amount is 1000 and the transaction code is 'buy', so I subtract 1000 from the credit limit (100000) and get a new value of 99000 in the new column.

For the second date we have 2 values, one buy (1100) and one sell (500). Here I should subtract 1100 from the previous row's output (i.e. 99000) and add 500 to that, so the output for 02/06/2020 is 98400.

Expected output: an additional column attached to the above dataframe

Credit_left
99000
98400
98200
94200
93300

Below is the schema of this table

root
 |-- account_id: long (nullable = true)
 |-- credit_card_Number: long (nullable = true)
 |-- credit_card_limit: long (nullable = true)
 |-- first_name: string (nullable = true)
 |-- last_name: string (nullable = true)
 |-- phone_number: long (nullable = true)
 |-- amount: long (nullable = true)
 |-- date: string (nullable = true)
 |-- shop: string (nullable = true)
 |-- transaction_code: string (nullable = true)

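In case it helps to reproduce the problem, here is a minimal sketch of how this sample data can be built; the values are copied from the output above, and the SparkSession setup is an assumption on my part.

# Minimal sketch to rebuild the sample data shown above (values copied from the output;
# the SparkSession setup is assumed)
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

rows = [
    (12345, 123456789123, 100000, 'abc', 'xyz', 1234567890, 1000, '01/06/2020', 'amazon', 'buy'),
    (12345, 123456789123, 100000, 'abc', 'xyz', 1234567890, 1100, '02/06/2020', 'ebay', 'buy'),
    (12345, 123456789123, 100000, 'abc', 'xyz', 1234567890, 500, '02/06/2020', 'amazon', 'sell'),
    (12345, 123456789123, 100000, 'abc', 'xyz', 1234567890, 200, '03/06/2020', 'flipkart', 'buy'),
    (12345, 123456789123, 100000, 'abc', 'xyz', 1234567890, 4000, '04/06/2020', 'ebay', 'buy'),
    (12345, 123456789123, 100000, 'abc', 'xyz', 1234567890, 900, '05/06/2020', 'amazon', 'buy'),
]

columns = ['account_id', 'credit_card_Number', 'credit_card_limit', 'first_name', 'last_name',
           'phone_number', 'amount', 'date', 'shop', 'transaction_code']

# Python ints and strings infer long/string types, matching the schema above
df = spark.createDataFrame(rows, columns)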
This task is complex enough that I couldn't find an existing answer for it. Please help me solve this problem. Thanks a lot!

Upvotes: 1

Views: 556

Answers (1)

Shubham Jain

Reputation: 5526

The solution can be implemented as follows:

from pyspark.sql import Window
import pyspark.sql.functions as f

# Running-total window ordered by date (a single account here, so no partitioning needed)
w = Window.orderBy('date')

# Net movement per date: buys count as negative, sells as positive,
# then the credit limit plus the running sum of those movements gives the balance left
df.groupBy('date', 'credit_card_limit', 'credit_card_Number') \
  .agg(f.sum(f.when(f.col('transaction_code') == 'buy', -f.col('amount'))
              .otherwise(f.col('amount'))).alias('expenses')) \
  .select('*', (f.col('credit_card_limit') + f.sum(f.col('expenses')).over(w)).alias('Credit_left')) \
  .show()

+----------+-----------------+------------------+--------+-----------+
|      date|credit_card_limit|credit_card_Number|expenses|Credit_left|
+----------+-----------------+------------------+--------+-----------+
|01/06/2020|           100000|      123456789123| -1000.0|    99000.0|
|02/06/2020|           100000|      123456789123|  -600.0|    98400.0|
|03/06/2020|           100000|      123456789123|  -200.0|    98200.0|
|04/06/2020|           100000|      123456789123| -4000.0|    94200.0|
|05/06/2020|           100000|      123456789123|  -900.0|    93300.0|
+----------+-----------------+------------------+--------+-----------+
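One caveat: date is stored as a string in dd/MM/yyyy format, so ordering the window by it only works here by coincidence (all rows fall in the same month). Below is a sketch of a more robust variant that parses the date first and also joins the per-date balance back onto the original transaction-level rows, as the question asked; df is assumed to be the original DataFrame, and the names daily, balances and txn_date are illustrative.

# Sketch, assuming the original transaction-level DataFrame is named `df`.
# Parse the dd/MM/yyyy string into a real date so the window orders chronologically.
daily = (df.withColumn('txn_date', f.to_date('date', 'dd/MM/yyyy'))
           .groupBy('txn_date', 'date', 'credit_card_limit', 'credit_card_Number')
           .agg(f.sum(f.when(f.col('transaction_code') == 'buy', -f.col('amount'))
                       .otherwise(f.col('amount'))).alias('expenses')))

w2 = Window.orderBy('txn_date')

balances = daily.select('date', 'credit_card_Number',
                        (f.col('credit_card_limit') + f.sum('expenses').over(w2)).alias('Credit_left'))

# Attach Credit_left to every original row for its date
df.join(balances, on=['date', 'credit_card_Number'], how='left').show()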

Hope it helps :)

Upvotes: 4
