Syed Saad

Reputation: 157

Pandas: Get count of rows which satisfy certain conditions, for each row

I'm trying to implement the following paper: Melville invoice to cash

Page 3 of the paper has the list of features used.

The dataset has one entry per invoice, and each entry has the following fields:

creation_date  payment_date   customer_id
 2016-01-01     2016-01-03       0
 2016-01-02     2016-01-02       1
 2016-01-02     2016-01-02       1 
 2016-01-04     2016-01-05       0 

Now, for each invoice, I need to calculate the number of invoices that have been paid prior to the creation date of the current invoice, for that customer. So the result will be:

 creation_date  payment_date   customer_id    no_invoice_paid 
 2016-01-01     2016-01-03       0                  0
 2016-01-02     2016-01-02       1                  0 
 2016-01-02     2016-01-02       1                  0 
 2016-01-04     2016-01-05       0                  1
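For reference, the sample above can be built as a DataFrame like this (dates parsed with pd.to_datetime so date comparisons work on timestamps):

```python
import pandas as pd

# Sample data from the tables above
data = pd.DataFrame({
    'creation_date': pd.to_datetime(['2016-01-01', '2016-01-02', '2016-01-02', '2016-01-04']),
    'payment_date':  pd.to_datetime(['2016-01-03', '2016-01-02', '2016-01-02', '2016-01-05']),
    'customer_id':   [0, 1, 1, 0],
})
```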

I came up with a naive solution:

data_customer = data.groupby(by='customer_id')

final_df = pd.DataFrame()

for group, group_data in data_customer:
    group_data = group_data.assign(no_invoice_paid=count_paid_invoice)
    final_df = pd.concat([final_df, group_data])  # DataFrame.append is deprecated

The count_paid_invoice function is as follows:

def count_paid_invoice(group_data):
    counts = pd.Series(0, index=group_data.index)
    for index, row in group_data.iterrows():
        # Invoices for this customer both created and paid strictly before
        # the current invoice's creation date
        counts[index] = ((group_data['creation_date'] < row['creation_date']) &
                         (group_data['payment_date'] < row['creation_date'])).sum()
    return counts

But this is very slow. Is there a way this can be done more efficiently?

Upvotes: 0

Views: 683

Answers (2)

Jan Zeiseweis

Reputation: 3738

Assuming your dataframe is called df, this should give you the desired result.

df['no_invoice_paid'] = df.apply(lambda row:
    df[(df.customer_id == row['customer_id']) &
       (df.payment_date < row['creation_date'])].shape[0], axis=1)

Or even shorter:

df['no_invoice_paid'] = df.apply(lambda row:
    ((df.customer_id == row['customer_id']) &
     (df.payment_date < row['creation_date'])).sum(), axis=1)
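As a quick sanity check, this apply-based version reproduces the expected column on the question's sample (assuming the dates have been parsed as timestamps):

```python
import pandas as pd

df = pd.DataFrame({
    'creation_date': pd.to_datetime(['2016-01-01', '2016-01-02', '2016-01-02', '2016-01-04']),
    'payment_date':  pd.to_datetime(['2016-01-03', '2016-01-02', '2016-01-02', '2016-01-05']),
    'customer_id':   [0, 1, 1, 0],
})

# For each row, count this customer's invoices paid strictly before its creation date
df['no_invoice_paid'] = df.apply(lambda row:
    ((df.customer_id == row['customer_id']) &
     (df.payment_date < row['creation_date'])).sum(), axis=1)

print(df['no_invoice_paid'].tolist())  # [0, 0, 0, 1]
```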

Another approach (similar to the one from Spandan):

df = df.sort_values(['customer_id', 'payment_date'])
payment_lookup = df.groupby(['customer_id', 'payment_date']).count().groupby(level=0).cumsum()

from functools import lru_cache
@lru_cache(maxsize=1024)
def get_customer_payments(customer_id):
    return payment_lookup.loc[customer_id]

@lru_cache(maxsize=1024)
def lookup_payments(customer_id, payment_date):
    customer_payments = get_customer_payments(customer_id)
    payments_before_current = customer_payments[customer_payments.index <
                                                payment_date]
    try:
        return payments_before_current.values[-1][0]
    except IndexError:
        return 0


df['no_invoice_paid'] = df.apply(lambda row:
                                 lookup_payments(row['customer_id'],
                                                 row['creation_date']), axis=1)
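Assembled into a self-contained check on the question's sample (the only change is passing the groupby keys as a list rather than a tuple), the lookup version also produces the expected counts:

```python
import pandas as pd
from functools import lru_cache

df = pd.DataFrame({
    'creation_date': pd.to_datetime(['2016-01-01', '2016-01-02', '2016-01-02', '2016-01-04']),
    'payment_date':  pd.to_datetime(['2016-01-03', '2016-01-02', '2016-01-02', '2016-01-05']),
    'customer_id':   [0, 1, 1, 0],
})

df = df.sort_values(['customer_id', 'payment_date'])
# Cumulative number of payments per customer, indexed by (customer_id, payment_date)
payment_lookup = df.groupby(['customer_id', 'payment_date']).count().groupby(level=0).cumsum()

@lru_cache(maxsize=1024)
def get_customer_payments(customer_id):
    return payment_lookup.loc[customer_id]

@lru_cache(maxsize=1024)
def lookup_payments(customer_id, payment_date):
    customer_payments = get_customer_payments(customer_id)
    before = customer_payments[customer_payments.index < payment_date]
    try:
        return before.values[-1][0]
    except IndexError:
        return 0  # no payments before this date

df['no_invoice_paid'] = df.apply(
    lambda row: lookup_payments(row['customer_id'], row['creation_date']), axis=1)

# Restore the original row order to compare with the expected column
print(df.sort_index()['no_invoice_paid'].tolist())  # [0, 0, 0, 1]
```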

I didn't test the performance. Let me know if it works.

Upvotes: 1

Spandan Brahmbhatt

Reputation: 4044

You can use a combination of cumsum() and pd.concat.

Try the following:

data['is_invoice_paid'] = 1  # creating a dummy variable
count_invoice = data.groupby('customer_id')['is_invoice_paid'].cumsum()
count_invoice.name = 'no_invoice_paid'
final_df = pd.concat([data, count_invoice], axis=1)
final_df['no_invoice_paid'] = final_df['no_invoice_paid'] - 1  # to set the count correct
final_df = final_df.drop('is_invoice_paid', axis=1)

I am assuming two things here:

  1. Your payment date cannot be before the creation date.
  2. The payment dates for every customer are sorted.
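One caveat worth noting: running this on the question's sample (dates parsed as timestamps) shows that an invoice paid on the same day as another's creation is still counted, so customer 1's two same-day rows come out as 0 and 1 rather than the expected 0 and 0:

```python
import pandas as pd

data = pd.DataFrame({
    'creation_date': pd.to_datetime(['2016-01-01', '2016-01-02', '2016-01-02', '2016-01-04']),
    'payment_date':  pd.to_datetime(['2016-01-03', '2016-01-02', '2016-01-02', '2016-01-05']),
    'customer_id':   [0, 1, 1, 0],
})

data['is_invoice_paid'] = 1  # dummy column to accumulate
count_invoice = data.groupby('customer_id')['is_invoice_paid'].cumsum()
count_invoice.name = 'no_invoice_paid'
final_df = pd.concat([data, count_invoice], axis=1)
final_df['no_invoice_paid'] = final_df['no_invoice_paid'] - 1  # exclude the current invoice
final_df = final_df.drop('is_invoice_paid', axis=1)

print(final_df['no_invoice_paid'].tolist())  # [0, 0, 1, 1]
```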

Upvotes: 0
