zbinsd

Reputation: 4214

Dataframe groupby - return delta time for log entries

I've got some log data that I'd like to first group by user_id and then pick out, say, the 2nd entry. That's done below. The missing step is computing the age of each entry relative to the group's first entry.

import pandas as pd

dd = pd.DataFrame({'item_id': {0: 0, 1: 4, 2: 6, 3: 8, 4: 9, 5: 1},
                   'date': {0: '2013-12-29T17:56:01Z', 1: '2013-12-29T19:44:09Z',
                            2: '2013-12-29T19:58:05Z', 3: '2013-12-29T20:00:09Z',
                            4: '2013-12-29T20:13:35Z', 5: '2013-12-29T20:19:56Z'},
                   'user_id': {0: 6, 1: 8, 2: 3, 3: 3, 4: 6, 5: 6}})
print "Step 1: Original DataFrame, sorted by date:\n",  dd

g = dd.groupby(by='user_id', sort=False)
print "\nStep 2: Grouped by User ID:\n", g.head()

# Print the 2nd entry (if it exists)
print "\nStep 3: The 2nd entry for each user:\n", g.nth(1).dropna(how='all')

# age?

This returns:

Step 1: Original DataFrame, sorted by date:
                   date  item_id  user_id
0  2013-12-29T17:56:01Z        0        6
1  2013-12-29T19:44:09Z        4        8
2  2013-12-29T19:58:05Z        6        3
3  2013-12-29T20:00:09Z        8        3
4  2013-12-29T20:13:35Z        9        6
5  2013-12-29T20:19:56Z        1        6

Step 2: Grouped by User ID:
                           date  item_id  user_id
user_id                                          
6       0  2013-12-29T17:56:01Z        0        6
        4  2013-12-29T20:13:35Z        9        6
        5  2013-12-29T20:19:56Z        1        6
8       1  2013-12-29T19:44:09Z        4        8
3       2  2013-12-29T19:58:05Z        6        3
        3  2013-12-29T20:00:09Z        8        3

Step 3: The 2nd entry for each user:
                         date  item_id
user_id                               
6        2013-12-29T20:13:35Z        9
3        2013-12-29T20:00:09Z        8

But at Step 2 I'd like to print the age of each entry (in, say, decimal days) relative to the first item_id consumed by that user, so I can judge the age of the log entries in Step 3. Is there a Pythonic way to do this without iteration?

The desired output is:

   user_id                 date  item_id      age
0        3  2013-12-29 20:00:09        8  0:02:04
1        6  2013-12-29 20:13:35        9  2:17:34

Upvotes: 1

Views: 200

Answers (1)

Jeff

Reputation: 128948

First convert the date column from strings to datetime64[ns] dtype

In [21]: dd['date'] = pd.to_datetime(dd['date'])

In [22]: dd
Out[22]: 
                 date  item_id  user_id
0 2013-12-29 17:56:01        0        6
1 2013-12-29 19:44:09        4        8
2 2013-12-29 19:58:05        6        3
3 2013-12-29 20:00:09        8        3
4 2013-12-29 20:13:35        9        6
5 2013-12-29 20:19:56        1        6

[6 rows x 3 columns]

sort by the date

In [23]: dd.sort_index(by='date')
Out[23]: 
                 date  item_id  user_id
0 2013-12-29 17:56:01        0        6
1 2013-12-29 19:44:09        4        8
2 2013-12-29 19:58:05        6        3
3 2013-12-29 20:00:09        8        3
4 2013-12-29 20:13:35        9        6
5 2013-12-29 20:19:56        1        6

[6 rows x 3 columns]

define a function to diff on that column (and just return the rest of the group)

In [4]: def f(x):
   ...:     x['diff'] = x['date']-x['date'].iloc[0]
   ...:     return x
   ...: 

In [5]: dd.sort_index(by='date').groupby('user_id').apply(f)
Out[5]: 
                 date  item_id  user_id     diff
0 2013-12-29 17:56:01        0        6 00:00:00
1 2013-12-29 19:44:09        4        8 00:00:00
2 2013-12-29 19:58:05        6        3 00:00:00
3 2013-12-29 20:00:09        8        3 00:02:04
4 2013-12-29 20:13:35        9        6 02:17:34
5 2013-12-29 20:19:56        1        6 02:23:55

[6 rows x 4 columns]

the diff column is now timedelta64[ns]; see the pandas timedelta documentation for how to convert/round it to a specific frequency (e.g. days).
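For example, a minimal sketch of that conversion (res is just my name for the result of the apply above; dividing the timedelta column by np.timedelta64(1, 'D') yields decimal days, and on recent pandas res['diff'].dt.total_seconds() / 86400 gives the same result):

import numpy as np

res = dd.sort_index(by='date').groupby('user_id').apply(f)

# divide the timedelta64[ns] column by one day to get decimal days
res['age_days'] = res['diff'] / np.timedelta64(1, 'D')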

This is with pandas 0.13 (releasing in the next day or two). Most of this will work in 0.12 as well.
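To get the exact shape asked for in the question (the 2nd entry per user together with its age), the same .nth(1) selection from the question's Step 3 can be applied once the diff column exists. A sketch, assuming res from the snippet above and the nth(1) behaviour shown in Step 3 (group key as the index); the name second and the age column are my additions:

# 2nd entry per user (if it exists), with its age relative to that user's first entry
second = res.groupby('user_id').nth(1).dropna(how='all')
second = second.reset_index().rename(columns={'diff': 'age'})

# 'second' now has columns user_id, date, item_id, age; divide age by
# np.timedelta64(1, 'D') if decimal days are wanted instead of a timedelta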

Upvotes: 5
