Ahsan

Reputation: 561

Pandas: Build Paths Efficiently

I have a Pandas dataframe like the one below, with two arbitrary customers, two months of data (there are more months), and ATL_Flag values, which are marketing channels (there are more of those too):

| App_Flag | ATL_Flag | Cust_No | month1 | month2 |
| 0        | TV       | 1       | 1      | 0      |
| 0        | FB       | 1       | 0      | 0      |
| 0        | OOH      | 1       | 1      | 1      |
| 1        | RAD      | 2       | 1      | 1      |
| 1        | TV       | 2       | 1      | 0      |
| 1        | FB       | 2       | 1      | 0      |
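
For reproducibility, here is a minimal sketch that builds this sample frame (column names and values taken straight from the table above):

import pandas as pd

df = pd.DataFrame({
    'App_Flag': [0, 0, 0, 1, 1, 1],
    'ATL_Flag': ['TV', 'FB', 'OOH', 'RAD', 'TV', 'FB'],
    'Cust_No':  [1, 1, 1, 2, 2, 2],
    'month1':   [1, 0, 1, 1, 1, 1],
    'month2':   [0, 0, 1, 1, 0, 0],
})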

My goal is to construct ATL_Flag paths such that:

1) Where the month value is 1 for a given customer, concatenate the ATL_Flag values. E.g. from the example above, for month1 and customer 1 the string should be TVOOH, and for month2 and customer 1 it should be OOH (the month2 column has only a single 1, corresponding to OOH).

2) Then combine the resulting strings for the two (or more) months like so: TVOOH->OOH (the plain-Python sketch after the result table below spells this out).

The end result should look like this:

| App_Flag | Cust_No | Path         |
| 0        | 1       | TVOOH->OOH   |
| 1        | 2       | RADTVFB->RAD |
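
To spell the logic out, here is a plain-Python sketch of the two steps for customer 1 (the channel-to-flag dicts are just an illustration, not my real data structure):

# Step 1: per month, concatenate the channels whose flag is 1
month1 = {'TV': 1, 'FB': 0, 'OOH': 1}
month2 = {'TV': 0, 'FB': 0, 'OOH': 1}
per_month = [''.join(ch for ch, hit in m.items() if hit) for m in (month1, month2)]
# Step 2: chain the per-month strings
print('->'.join(per_month))  # TVOOH->OOH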

I have tried the following method, but it seems too slow and too convoluted:

import pandas as pd

def str_sum(channel):
    # Chain the per-month path fragments for one customer with '>'
    return '>'.join(channel['c_path'])

# Reshape to long format: one row per (customer, channel, month)
wrk_data_temp = pd.melt(work_data_temp[['cust_no', 'ATL_Flag', 'max_exp_1_mnth', 'max_exp_2_mnth']], id_vars=['cust_no', 'ATL_Flag'], value_vars=['max_exp_1_mnth', 'max_exp_2_mnth'], value_name='key')
# Pull the month number out of the melted column name
wrk_data_temp['variable'] = wrk_data_temp['variable'].str.extract(r'(\d+)').astype(int)
# Concatenate channel names per (customer, month) where the exposure flag is 1
wrk_data_temp['c_path'] = wrk_data_temp.sort_values(['cust_no', 'variable', 'ATL_Flag'])[wrk_data_temp.key == 1][['cust_no', 'ATL_Flag', 'variable']].groupby(['cust_no', 'variable']).transform('sum')
wrk_data_temp2 = wrk_data_temp[['cust_no', 'variable', 'c_path']].drop_duplicates()
wrk_data_temp3 = wrk_data_temp2.dropna()
# Join the per-month strings into one path per customer
final = pd.DataFrame(wrk_data_temp3[['cust_no', 'c_path']].groupby('cust_no').apply(str_sum))

Upvotes: 1

Views: 23

Answers (1)

jezrael

Reputation: 862731

First get all month columns, replace the 1 values with the ATL_Flag column, aggregate with join per group, and then join the per-month columns together with another join:

import numpy as np

# Month columns only
c = df.filter(like='month').columns
# Replace each 1 with the row's channel name, each 0 with an empty string
df[c] = np.where(df[c].astype(bool), df['ATL_Flag'].values[:, None], '')

# Concatenate channel names per group and month, then chain months with '>'
df1 = (df.groupby(['App_Flag','Cust_No'])[c]
         .agg(''.join)
         .apply('>'.join, axis=1)
         .reset_index(name='Path'))
print(df1)

   App_Flag  Cust_No         Path
0         0        1    TVOOH>OOH
1         1        2  RADTVFB>RAD
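
The key step is the broadcasting in np.where: values[:, None] reshapes the 1-D channel array to a column, so one call fills every month column at once. A minimal sketch with toy arrays (assumed, not from the question):

import numpy as np

flags = np.array(['TV', 'FB', 'OOH'])
months = np.array([[1, 0],
                   [0, 0],
                   [1, 1]], dtype=bool)
# The (3, 1) channel column broadcasts against the (3, 2) month matrix
print(np.where(months, flags[:, None], ''))
[['TV' '']
 ['' '']
 ['OOH' 'OOH']]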

EDIT: To ignore 0 values (empty month strings) in groups:

print (df)
   App_Flag ATL_Flag  Cust_No  month1  month2  month3
0         0       TV        0       0       0       0
1         0       FB        1       0       0       0
2         0      OOH        1       0       1       1
3         1      RAD        2       1       1       0
4         1       TV        2       1       0       0
5         1       FB        3       1       0       1

c = df.filter(like='month').columns
df[c] = np.where(df[c].astype(bool), df['ATL_Flag'].values[:, None], '')

df1 = (df.groupby(['App_Flag','Cust_No'])[c]
         .agg(''.join)
         # skip months with no exposure so empty strings add no separators
         .apply(lambda x: '>'.join(y for y in x if y != ''), axis=1)
         .reset_index(name='Path')
         )

print(df1)
   App_Flag  Cust_No       Path
0         0        0           
1         0        1    OOH>OOH
2         1        2  RADTV>RAD
3         1        3      FB>FB
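
The generator inside the join is what drops the empty month strings; a plain '>'.join would keep them and produce stray separators. A tiny comparison with assumed strings:

parts = ['RADTV', '', 'RAD']
print('>'.join(parts))                        # RADTV>>RAD  (stray separator)
print('>'.join(p for p in parts if p != ''))  # RADTV>RAD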

Upvotes: 2
