shitoto

Reputation: 3

Conditionally Increment String ID based on groupby results

I am new to python I have the following df

    ClientID    DOB   LostDate   Category   ReportedDate
    APJ5L9C     1975  3/13/2017  Ungrouped  3/23/2017
    APJ5L9C     1993  7/25/2014  Ungrouped  3/5/2017
    BKL1N9C     1981  3/22/2017  Ungrouped  3/29/2017
    BKL1N9C     1981  1/31/2017  Ungrouped  3/31/2017
    BMO3K9C     1982  3/15/2017  Ungrouped  3/27/2017
    BOM1N9C     1981  3/16/2017  Ungrouped  3/27/2017
    K9E6JSC     2000  3/15/2017  Ungrouped  4/3/2017
    K9E6JSC     1994  1/14/2017  Ungrouped  3/24/2017
    M12L0A93    1986  3/16/2017  Ungrouped  3/23/2017
    M12L0A93    1981  1/17/2017  Ungrouped  3/29/2017
    M12L0A94    1981  3/17/2017  Ungrouped  3/29/2017
    MCI6A92     1993  3/24/2017  Ungrouped  3/24/2017
    N9E4HSC     2000  3/30/2017  Ungrouped  4/3/2017

The following code runs, but I am not able to put it in a loop so that the Category is written with an incremental ID (basically a concatenation of ClientID with _1, _2, etc.). The desired outcome is that the first difference between LostDate and ReportedDate in any group is logged as ClientID_1, and any subsequent difference within a group already classified increments to the next unused ID: if ID_2 exists, it goes to ID_3; if ID_5 is the last, it goes to ID_6, and so on.

    # Finding the earliest lost date reported in a group
    mask = df['Category'] == 'Ungrouped'

    df.loc[mask, 'LostDatef'] = df.loc[mask].groupby(['ClientID', 'DOB'])['LostDate'].transform('min')

    df['TimeDiffinDAYS'] = (df['ReportedDate'] - df['LostDatef']).dt.days

    # Iterate and group, incrementally defining the ClientID-based category
    for row in df['TimeDiffinDAYS']:
        if row <= 7:
            df['Category'] = df['ClientID'].apply(lambda x: '{}_1'.format(x))
        else:
            df.loc[df.TimeDiffinDAYS > 50, 'Category'] = 'Ungrouped'

    print(df)

My desired result:

    ClientID    DOB   LostDate   Category    ReportedDate
    APJ5L9C     1975  3/13/2017  APJ5L9C_1   3/23/2017
    APJ5L9C     1993  7/25/2014  APJ5L9C_2   3/5/2017
    BKL1N9C     1981  3/22/2017  BKL1N9C_1   3/29/2017
    BKL1N9C     1981  1/31/2017  BKL1N9C_2   3/31/2017
    BMO3K9C     1982  3/15/2017  BMO3K9C_1   3/27/2017
    BOM1N9C     1981  3/16/2017  BOM1N9C_1   3/27/2017
    K9E6JSC     2000  3/15/2017  K9E6JSC_1   4/3/2017
    K9E6JSC     1994  1/14/2017  K9E6JSC_2   3/24/2017
    M12L0A93    1986  3/16/2017  M12L0A93_1  3/23/2017
    M12L0A93    1981  1/17/2017  M12L0A93_2  3/29/2017
    M12L0A94    1981  3/17/2017  M12L0A94_1  3/29/2017
    MCI6A92     1993  3/24/2017  MCI6A92_1   3/24/2017
    N9E4HSC     2000  3/30/2017  N9E4HSC_1   4/3/2017

Is this possible?

Upvotes: 0

Views: 135

Answers (1)

yatu

Reputation: 88226

You could group by ClientID and take the cumcount, then concatenate this value to ClientID using str.cat:

    g = df.groupby('ClientID').cumcount() + 1
    df['Category'] = df.ClientID.str.cat('_' + g.astype(str))

       ClientID   DOB   LostDate    Category ReportedDate
    0    APJ5L9C  1975  3/13/2017   APJ5L9C_1    3/23/2017
    1    APJ5L9C  1993  7/25/2014   APJ5L9C_2     3/5/2017
    2    BKL1N9C  1981  3/22/2017   BKL1N9C_1    3/29/2017
    3    BKL1N9C  1981  1/31/2017   BKL1N9C_2    3/31/2017
    4    BMO3K9C  1982  3/15/2017   BMO3K9C_1    3/27/2017
    5    BOM1N9C  1981  3/16/2017   BOM1N9C_1    3/27/2017
    6    K9E6JSC  2000  3/15/2017   K9E6JSC_1     4/3/2017
    7    K9E6JSC  1994  1/14/2017   K9E6JSC_2    3/24/2017
    8   M12L0A93  1986  3/16/2017  M12L0A93_1    3/23/2017
    9   M12L0A93  1981  1/17/2017  M12L0A93_2    3/29/2017
    10  M12L0A94  1981  3/17/2017  M12L0A94_1    3/29/2017
    11   MCI6A92  1993  3/24/2017   MCI6A92_1    3/24/2017
    12   N9E4HSC  2000  3/30/2017   N9E4HSC_1     4/3/2017
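For reference, here is the same idea as a minimal self-contained sketch. The toy DataFrame below is an assumption, built from a few of the ClientID values in the question, just to show the mechanics of cumcount and str.cat in isolation:

```python
import pandas as pd

# Toy frame with the repeated-ClientID pattern from the question
df = pd.DataFrame({
    'ClientID': ['APJ5L9C', 'APJ5L9C', 'BKL1N9C', 'BMO3K9C'],
    'DOB': [1975, 1993, 1981, 1982],
})

# cumcount() numbers the rows within each ClientID group starting at 0,
# so adding 1 yields the 1-based suffix
g = df.groupby('ClientID').cumcount() + 1

# str.cat joins each ClientID with its suffix element-wise
df['Category'] = df['ClientID'].str.cat('_' + g.astype(str))

print(df['Category'].tolist())
# ['APJ5L9C_1', 'APJ5L9C_2', 'BKL1N9C_1', 'BMO3K9C_1']
```

Because cumcount restarts at 0 for every group, each ClientID automatically gets its own _1, _2, ... sequence with no explicit loop.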

Upvotes: 1
