Dror

Reputation: 13051

In pandas, group by date from DatetimeIndex

Consider the following synthetic example:

import pandas as pd
import numpy as np
np.random.seed(42)
ix = pd.date_range('2017-01-01', '2017-01-15', freq='1H')
df = pd.DataFrame(
    {
        'val': np.random.random(size=ix.shape[0]),
        'cat': np.random.choice(['foo', 'bar'], size=ix.shape[0])
    },
    index=ix
)

Which yields a table of the following form:

                    cat val
2017-01-01 00:00:00 bar 0.374540
2017-01-01 01:00:00 foo 0.950714
2017-01-01 02:00:00 bar 0.731994
2017-01-01 03:00:00 bar 0.598658
2017-01-01 04:00:00 bar 0.156019

Now, I want to count the number of instances and compute the average value for each category and date.

The following groupby is almost perfect:

df.groupby(['cat',df.index.date]).agg({'val': ['count', 'mean']})

returning:

                val
                count   mean
cat         
bar 2017-01-01  16  0.437941
    2017-01-02  16  0.456361
    2017-01-03  9   0.514388...

The problem with this one is that the second level of the index turned into plain date objects rather than proper datetimes. First question: why is this happening, and how can I avoid it?

Next, I tried a combination of groupby and resample:

df.groupby('cat').resample('1d').agg({'val': 'mean'})

Here, the index is correct, but I fail to run both the mean and count aggregations. This is the second question: why doesn't

df.groupby('cat').resample('1d').agg({'val': ['mean', 'count']})

work?

Last question: what is the clean way to get an aggregated view (using both functions) with a date type for the index?

Upvotes: 5

Views: 12279

Answers (1)

jezrael

Reputation: 862511

For the first question, you need to convert to datetimes with no time component, e.g.:

df1 = df.groupby(['cat',df.index.floor('d')]).agg({'val': ['count', 'mean']})
#df1 = df.groupby(['cat',df.index.normalize()]).agg({'val': ['count', 'mean']})

#df1 = df.groupby(['cat',pd.to_datetime(df.index.date)]).agg({'val': ['count', 'mean']})

print (df1.index.get_level_values(1))


DatetimeIndex(['2017-01-01', '2017-01-02', '2017-01-03', '2017-01-04',
               '2017-01-05', '2017-01-06', '2017-01-07', '2017-01-08',
               '2017-01-09', '2017-01-10', '2017-01-11', '2017-01-12',
               '2017-01-13', '2017-01-14', '2017-01-01', '2017-01-02',
               '2017-01-03', '2017-01-04', '2017-01-05', '2017-01-06',
               '2017-01-07', '2017-01-08', '2017-01-09', '2017-01-10',
               '2017-01-11', '2017-01-12', '2017-01-13', '2017-01-14',
               '2017-01-15'],
              dtype='datetime64[ns]', freq=None)

The original approach fails because df.index.date returns plain python date objects, not a DatetimeIndex:

df1 = df.groupby(['cat',df.index.date]).agg({'val': ['count', 'mean']})
print (type(df1.index.get_level_values(1)[0]))
<class 'datetime.date'>
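
A quick check (a small sketch, not part of the original answer) makes the difference between the two index builders explicit:

print (type(df.index.date[0]))     # plain python object: <class 'datetime.date'>
print (df.index.floor('d').dtype)  # proper datetime dtype: datetime64[ns]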

Second question - in my opinion it is a bug or not yet implemented, because only a single function name works in agg:

df2 = df.groupby('cat').resample('1d')['val'].agg('mean')
#df2 = df.groupby('cat').resample('1d')['val'].mean()
print (df2)
cat            
bar  2017-01-01    0.437941
     2017-01-02    0.456361
     2017-01-03    0.514388
     2017-01-04    0.580295
     2017-01-05    0.426841
     2017-01-06    0.642465
     2017-01-07    0.395970
     2017-01-08    0.359940
...
... 

but the old way with apply works:

df2 = df.groupby('cat').apply(lambda x: x.resample('1d')['val'].agg(['mean','count']))
print (df2)
                    mean  count
cat                            
bar 2017-01-01  0.437941     16
    2017-01-02  0.456361     16
    2017-01-03  0.514388      9
    2017-01-04  0.580295     12
    2017-01-05  0.426841     12
    2017-01-06  0.642465      7
    2017-01-07  0.395970     11
    2017-01-08  0.359940      9
    2017-01-09  0.564851     12
    ...
    ...

Upvotes: 4
