LuminosityXVII

Reputation: 133

Cleaner way to select subset of minimum values per group?

Given a dataframe like the one below, here's what I want: within only the rows containing the earliest date for each serial number, locate those where Location is null and update them with a specified default value.

import numpy as np
import pandas as pd

df = pd.DataFrame([['123456',pd.to_datetime('1/1/2019'),'Location A'],
                   ['123456',pd.to_datetime('1/2/2019'),np.nan],
                   ['123456',pd.to_datetime('1/3/2019'),np.nan],
                   ['123456',pd.to_datetime('5/1/2019'),np.nan],
                   ['654321',pd.to_datetime('2/1/2019'),'Location B'],
                   ['654321',pd.to_datetime('2/2/2019'),'Location B'],
                   ['654321',pd.to_datetime('2/3/2019'),'Location C'],
                   ['112233',pd.to_datetime('3/1/2019'),np.nan],
                   ['112233',pd.to_datetime('3/2/2019'),'Location D'],
                   ['112233',pd.to_datetime('3/3/2019'),np.nan],
                   ['445566',pd.to_datetime('4/1/2019'),'Location E'],
                   ['445566',pd.to_datetime('4/2/2019'),'Location E'],
                   ['445566',pd.to_datetime('4/3/2019'),'Location E'],
                   ['778899',pd.to_datetime('5/1/2019'),np.nan],
                   ['778899',pd.to_datetime('5/2/2019'),np.nan],
                   ['778899',pd.to_datetime('5/3/2019'),np.nan],
                   ['332211',pd.to_datetime('6/1/2019'),np.nan],
                   ['332211',pd.to_datetime('6/2/2019'),'Location F'],
                   ['332211',pd.to_datetime('6/3/2019'),'Location F'],
                   ['665544',pd.to_datetime('7/1/2019'),'Location G'],
                   ['665544',pd.to_datetime('7/2/2019'),'Location G'],
                   ['665544',pd.to_datetime('7/3/2019'),'Location G'],
                   ['998877',pd.to_datetime('8/1/2019'),'Location H'],
                   ['998877',pd.to_datetime('8/2/2019'),'Location I'],
                   ['998877',pd.to_datetime('8/2/2019'),'Location I'],
                   ['147258',pd.to_datetime('9/1/2019'),np.nan],
                   ['147258',pd.to_datetime('9/2/2019'),np.nan],
                   ['147258',pd.to_datetime('9/3/2019'),'Location J']],
                   columns=['Serial','Date','Location'])

df
Out[498]: 
    Serial       Date    Location
0   123456 2019-01-01  Location A
1   123456 2019-01-02         NaN
2   123456 2019-01-03         NaN
3   123456 2019-05-01         NaN
4   654321 2019-02-01  Location B
5   654321 2019-02-02  Location B
6   654321 2019-02-03  Location C
7   112233 2019-03-01         NaN
8   112233 2019-03-02  Location D
9   112233 2019-03-03         NaN
10  445566 2019-04-01  Location E
11  445566 2019-04-02  Location E
12  445566 2019-04-03  Location E
13  778899 2019-05-01         NaN
14  778899 2019-05-02         NaN
15  778899 2019-05-03         NaN
16  332211 2019-06-01         NaN
17  332211 2019-06-02  Location F
18  332211 2019-06-03  Location F
19  665544 2019-07-01  Location G
20  665544 2019-07-02  Location G
21  665544 2019-07-03  Location G
22  998877 2019-08-01  Location H
23  998877 2019-08-02  Location I
24  998877 2019-08-02  Location I
25  147258 2019-09-01         NaN
26  147258 2019-09-02         NaN
27  147258 2019-09-03  Location J

So in the above example, only rows 7, 13, 16, and 25 should be selected and updated. I've got this working with the line below, which grabs the index of each serial's earliest row via groupby(...).idxmin(), tests each row's index for membership in that list, and combines the result with a null check on Location:

df.loc[pd.Series(df.index).isin(df.groupby('Serial')['Date'].idxmin().tolist()) & df['Location'].isnull(), 'Location'] = 'XXXX'

While functional, this feels clunky and roundabout. Is there a better way?

df
Out[502]: 
    Serial       Date    Location
0   123456 2019-01-01  Location A
1   123456 2019-01-02         NaN
2   123456 2019-01-03         NaN
3   123456 2019-05-01         NaN
4   654321 2019-02-01  Location B
5   654321 2019-02-02  Location B
6   654321 2019-02-03  Location C
7   112233 2019-03-01        XXXX
8   112233 2019-03-02  Location D
9   112233 2019-03-03         NaN
10  445566 2019-04-01  Location E
11  445566 2019-04-02  Location E
12  445566 2019-04-03  Location E
13  778899 2019-05-01        XXXX
14  778899 2019-05-02         NaN
15  778899 2019-05-03         NaN
16  332211 2019-06-01        XXXX
17  332211 2019-06-02  Location F
18  332211 2019-06-03  Location F
19  665544 2019-07-01  Location G
20  665544 2019-07-02  Location G
21  665544 2019-07-03  Location G
22  998877 2019-08-01  Location H
23  998877 2019-08-02  Location I
24  998877 2019-08-02  Location I
25  147258 2019-09-01        XXXX
26  147258 2019-09-02         NaN
27  147258 2019-09-03  Location J

EDIT: Added a new row 3 to the sample df to clarify that dates are unique within each serial number's group, but may not be unique across serials. The row at index 3 shares its date with another serial's minimum date, but should not be selected. I dealt with this by matching indices instead of the dates themselves, but the way I did so feels messy.
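
Written out in named steps, the same index-matching idea looks like this (a sketch equivalent to the one-liner above, assuming the same df):

# Row index of each serial's earliest date; matching on index rather than
# date means duplicate dates across serials cannot collide.
min_idx = df.groupby('Serial')['Date'].idxmin()

# Earliest-per-serial rows whose Location is still null
mask = df['Location'].isnull() & df.index.isin(min_idx)
df.loc[mask, 'Location'] = 'XXXX'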

Upvotes: 0

Views: 51

Answers (1)

Erfan

Reputation: 42886

I think your solution is okay-ish, but you can make it a bit tighter and speed it up by using numpy.

You can use GroupBy.min() for this, together with Series.isnull().

After that, you conditionally fill the Location column with 'XXXX' using np.where:

# Minimum date per serial (indexed by Serial)
min_date = df.groupby('Serial')['Date'].min()
cond = df['Location'].isnull()

# Fill 'XXXX' where the row's date is one of the minimum dates and Location is null
df['Location'] = np.where(df['Date'].isin(min_date) & cond, 'XXXX', df['Location'])

print(df)
    Serial       Date    Location
0   123456 2019-01-01  Location A
1   123456 2019-01-02         NaN
2   123456 2019-01-03         NaN
3   654321 2019-02-01  Location B
4   654321 2019-02-02  Location B
5   654321 2019-02-03  Location C
6   112233 2019-03-01        XXXX
7   112233 2019-03-02  Location D
8   112233 2019-03-03         NaN
9   445566 2019-04-01  Location E
10  445566 2019-04-02  Location E
11  445566 2019-04-03  Location E
12  778899 2019-05-01        XXXX
13  778899 2019-05-02         NaN
14  778899 2019-05-03         NaN
15  332211 2019-06-01        XXXX
16  332211 2019-06-02  Location F
17  332211 2019-06-03  Location F
18  665544 2019-07-01  Location G
19  665544 2019-07-02  Location G
20  665544 2019-07-03  Location G
21  998877 2019-08-01  Location H
22  998877 2019-08-02  Location I
23  998877 2019-08-02  Location I
24  147258 2019-09-01        XXXX
25  147258 2019-09-02         NaN
26  147258 2019-09-03  Location J

Edit after OP's comment about duplicate dates:

The solution above matches on Date alone, so a date that happens to equal another serial's minimum would be flagged incorrectly (note the output above was produced on the original sample, before row 3 was added). Instead, we can merge the min_date frame back in on both Serial and Date and use indicator=True while merging:

# Minimum date per serial, as a frame with Serial and Date columns
min_date = df.groupby('Serial')['Date'].min().reset_index()
cond = df['Location'].isnull()

# Left merge on both keys; indicator=True adds a '_merge' column that is
# 'both' only where the row matches its own serial's minimum date
df = df.merge(min_date, on=['Serial', 'Date'], how='left', indicator=True)

df['Location'] = np.where((df['_merge'] == 'both') & cond, 'XXXX', df['Location'])
df = df.drop('_merge', axis=1)
print(df)

    Serial       Date    Location
0   123456 2019-01-01  Location A
1   123456 2019-01-02         NaN
2   123456 2019-01-03         NaN
3   123456 2019-05-01         NaN
4   654321 2019-02-01  Location B
5   654321 2019-02-02  Location B
6   654321 2019-02-03  Location C
7   112233 2019-03-01        XXXX
8   112233 2019-03-02  Location D
9   112233 2019-03-03         NaN
10  445566 2019-04-01  Location E
11  445566 2019-04-02  Location E
12  445566 2019-04-03  Location E
13  778899 2019-05-01        XXXX
14  778899 2019-05-02         NaN
15  778899 2019-05-03         NaN
16  332211 2019-06-01        XXXX
17  332211 2019-06-02  Location F
18  332211 2019-06-03  Location F
19  665544 2019-07-01  Location G
20  665544 2019-07-02  Location G
21  665544 2019-07-03  Location G
22  998877 2019-08-01  Location H
23  998877 2019-08-02  Location I
24  998877 2019-08-02  Location I
25  147258 2019-09-01        XXXX
26  147258 2019-09-02         NaN
27  147258 2019-09-03  Location J
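
As a sketch of one more option (assuming the same df and numpy import as above), a per-group transform sidesteps the duplicate-date issue without a merge, since each row's date is compared only against its own serial's minimum:

# True where the row's Date equals its own serial's minimum date
is_min = df['Date'].eq(df.groupby('Serial')['Date'].transform('min'))
df['Location'] = np.where(is_min & df['Location'].isnull(), 'XXXX', df['Location'])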

Upvotes: 1
