Aleš Juvančič

Reputation: 59

Calculate duration between events with pandas

I have a dataframe

import pandas as pd

df = pd.DataFrame([['2018-07-02', 'B'],
 ['2018-07-03', 'A'],
 ['2018-07-06', 'B'],
 ['2018-07-08', 'B'],
 ['2018-07-09', 'A'],
 ['2018-07-09', 'A'],
 ['2018-07-10', 'A'],
 ['2018-07-12', 'B'],
 ['2018-07-15', 'A'],
 ['2018-07-16', 'A'],
 ['2018-07-18', 'B'],
 ['2018-07-22', 'A'],
 ['2018-07-25', 'B'],
 ['2018-07-25', 'B'],
 ['2018-07-27', 'A'],
 ['2018-07-28', 'A']], columns = ['DateEvent','Event'])

where a period starts with event A and ends with event B. A period can start on more than one consecutive day (repeated A rows) and end on more than one consecutive day (repeated B rows).

I have already matched each start date with the first end date that follows it:

df = df.set_index('DateEvent')
begin = df.loc[df['Event'] == 'A'].index    # dates of all A events
cutoffs = df.loc[df['Event'] == 'B'].index  # dates of all B events

# for every start date, find the first end date that is not earlier
idx = cutoffs.searchsorted(begin)
# drop starts that have no end date after them
mask = idx < len(cutoffs)
idx = idx[mask]
begin = begin[mask]
end = cutoffs[idx]

pd.DataFrame({'begin':begin, 'end':end})

but I also get a separate pair for each of the repeated starts and ends:

        begin         end
0  2018-07-03  2018-07-06
1  2018-07-09  2018-07-12
2  2018-07-09  2018-07-12
3  2018-07-10  2018-07-12
4  2018-07-15  2018-07-18
5  2018-07-16  2018-07-18
6  2018-07-22  2018-07-25

The desired output should keep only the first occurrence of event A and the last occurrence of event B for each period; in other words, I am looking for the maximum duration.

I could loop before or after to delete the unnecessary events, but is there a nicer, more pythonic way?
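For illustration only, one loop-free way to collapse the repeated starts in the intermediate table above is to keep the earliest begin for each end and then subtract (this still pairs each A with the first following B, not the last one, so it is not yet the full desired output):

pairs = pd.DataFrame({'begin': pd.to_datetime(begin), 'end': pd.to_datetime(end)})
# keep only the earliest start for every end date, then compute the span
pairs = pairs.groupby('end', as_index=False)['begin'].min()
pairs['duration'] = pairs['end'] - pairs['begin']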

Thank you,

Aleš

EDIT:

I've been using the code successfully as a function in a groupby (a rough sketch of that per-group call is shown after the data below), but it's not clean and it takes some time. How can I rewrite the code so that it handles the Group column directly in the df?

df = pd.DataFrame([['2.07.2018', 1, 'B'],
['3.07.2018', 1, 'A'],
['3.07.2018', 2, 'A'],
['6.07.2018', 2, 'B'],
['8.07.2018', 2, 'B'],
['9.07.2018', 2, 'A'],
['9.07.2018', 2, 'A'],
['9.07.2018', 2, 'B'],
['9.07.2018', 3, 'A'],
['10.07.2018', 3, 'A'],
['10.07.2018', 3, 'B'],
['12.07.2018', 3, 'B'],
['15.07.2018', 3, 'A'],
['16.07.2018', 4, 'A'],
['16.07.2018', 4, 'B'],
['18.07.2018', 4, 'B'],
['18.07.2018', 4, 'A'],
['22.07.2018', 5, 'A'],
['25.07.2018', 5, 'B'],
['25.07.2018', 7, 'B'],
['25.07.2018', 7, 'A'],
['25.07.2018', 7, 'B'],
['27.07.2018', 9, 'A'],
['28.07.2018', 9, 'A'],
['28.07.2018', 9, 'B']], columns = ['DateEvent','Group','Event'])
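Roughly, the per-group call looks something like this (only a sketch; the helper name find_periods is made up and it just wraps the searchsorted pairing from above):

# assumes DateEvent still holds the raw strings like '2.07.2018' and that the
# rows are already in chronological order within each Group
df['DateEvent'] = pd.to_datetime(df['DateEvent'], format='%d.%m.%Y')

def find_periods(g):
    # pair every A date in the group with the first B date that is not earlier
    begin = g.loc[g['Event'] == 'A', 'DateEvent'].values
    cutoffs = g.loc[g['Event'] == 'B', 'DateEvent'].values
    idx = cutoffs.searchsorted(begin)
    mask = idx < len(cutoffs)
    return pd.DataFrame({'begin': begin[mask], 'end': cutoffs[idx[mask]]})

periods = df.groupby('Group').apply(find_periods).reset_index(level=0)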

I'm trying to combine the cumsum approach with the Group column somehow, but cannot get the desired results.

Thank you!

Upvotes: 0

Views: 880

Answers (1)

Scott Boston

Reputation: 153460

Let's try:

df = pd.DataFrame([['2018-07-02', 'B'],
 ['2018-07-03', 'A'],
 ['2018-07-06', 'B'],
 ['2018-07-08', 'B'],
 ['2018-07-09', 'A'],
 ['2018-07-09', 'A'],
 ['2018-07-10', 'A'],
 ['2018-07-12', 'B'],
 ['2018-07-15', 'A'],
 ['2018-07-16', 'A'],
 ['2018-07-18', 'B'],
 ['2018-07-22', 'A'],
 ['2018-07-25', 'B'],
 ['2018-07-25', 'B'],
 ['2018-07-27', 'A'],
 ['2018-07-28', 'A']], columns = ['DateEvent','Event'])

# running count of the B rows; a run of A rows shares the value of the B row
# just before it
a = (df['Event'] != 'A').cumsum()
# position of each row within its run of equal counts: the first A after a
# (run of) B gets position 1
a = a.groupby(a).cumcount()
# every such "first A" opens a new event group
df['Event Group'] = (a == 1).cumsum()

# keep only groups that contain both an A and a B, then take the first and
# last date of each group
df_out = df.groupby('Event Group').filter(lambda x: set(x['Event']) == set(['A','B']))\
           .groupby('Event Group')['DateEvent'].agg(['first','last'])\
           .rename(columns={'first':'start','last':'end'})\
           .reset_index()

print(df_out)

Output:

   Event Group       start         end
0            1  2018-07-03  2018-07-08
1            2  2018-07-09  2018-07-12
2            3  2018-07-15  2018-07-18
3            4  2018-07-22  2018-07-25
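Since the question is about durations: start and end above are still strings, so one could convert them and subtract to get timedeltas, e.g.:

df_out['start'] = pd.to_datetime(df_out['start'])
df_out['end'] = pd.to_datetime(df_out['end'])
# length of each event group as a timedelta
df_out['duration'] = df_out['end'] - df_out['start']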

Edit

# alternative labelling: count the B rows, keep the count only on the A rows,
# then forward-fill so the B rows that follow inherit the label of the A-run
# before them; rows before the first A stay NaN and drop out of the groupby
a = (df['Event'] != 'A').cumsum().mask(df['Event'] != 'A')
df['Event Group'] = a.ffill()
df_out = df.groupby('Event Group').filter(lambda x: set(x['Event']) == set(['A','B']))\
           .groupby('Event Group')['DateEvent'].agg(['first','last'])\
           .rename(columns={'first':'start','last':'end'})\
           .reset_index()
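One possible way (just a sketch) to apply the same labelling per Group for the question's edit; the label_runs helper and the explicit date format are assumptions:

# assumes df is the second example with the Group column and DateEvent still
# holding strings like '2.07.2018', already in chronological order per Group
df['DateEvent'] = pd.to_datetime(df['DateEvent'], format='%d.%m.%Y')

def label_runs(s):
    # count B rows, keep the count only on A rows, then forward-fill so each
    # A...B block inside the group shares one label; rows before the first A
    # stay NaN and are ignored by the groupby below
    return (s != 'A').cumsum().mask(s != 'A').ffill()

df['Event Group'] = df.groupby('Group')['Event'].transform(label_runs)

df_out = df.groupby(['Group','Event Group']).filter(lambda x: set(x['Event']) == set(['A','B']))\
           .groupby(['Group','Event Group'])['DateEvent'].agg(['first','last'])\
           .rename(columns={'first':'start','last':'end'})\
           .reset_index()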

Upvotes: 2
