Reputation: 1793
I have a dataframe, df, which is as follows:
| date | Revenue |
|-----------|---------|
| 6/2/2017 | 100 |
| 5/23/2017 | 200 |
| 5/20/2017 | 300 |
| 6/22/2017 | 400 |
| 6/21/2017 | 500 |
I need to group the above data by month to get the following output:
| date | SUM(Revenue) |
|------|--------------|
| May | 500 |
| June | 1000 |
I tried this code, but it did not work:
df.groupby(month('date')).agg({'Revenue': 'sum'})
I want to use only Pandas or NumPy, with no additional libraries.
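For reference, here is the same data as a reproducible frame (a minimal sketch of the table above):
import pandas as pd

# 'date' is stored as strings here, which is why the month-based groupby
# needs a datetime conversion first.
df = pd.DataFrame({
    'date': ['6/2/2017', '5/23/2017', '5/20/2017', '6/22/2017', '6/21/2017'],
    'Revenue': [100, 200, 300, 400, 500],
})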
Upvotes: 41
Views: 116943
Reputation: 203
Try this:
Change the date column into datetime format:
df['date'] = pd.to_datetime(df['date'])
Add a new column to the data frame that holds the month name, like ['May', 'June']:
df['months'] = df['date'].apply(lambda x: x.strftime('%B'))
Here x is each date taken from the date column of the data frame.
Now group by that months column and sum the revenue:
response_data_frame = df.groupby('months')['Revenue'].sum()
print(response_data_frame)
Output:
| months | Revenue |
|--------|---------|
| May    | 500     |
| June   | 1000    |
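If you want the result as a two-column frame shaped like the output in the question (a small sketch, not part of the steps above):
# reset_index() turns the 'months' index back into a column, and rename()
# gives the revenue column the requested label.
result = response_data_frame.reset_index().rename(columns={'Revenue': 'SUM(Revenue)'})
print(result)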
Upvotes: 0
Reputation: 41
This will work better.
Try this:
# Explicitly convert to date
df['Date'] = pd.to_datetime(df['Date'])
# Set your date column as index
df.set_index('Date',inplace=True)
# Use 'M' for monthly frequency; change it if you need a different frequency.
df['Revenue'].resample('M').sum()
This code gives the same result as shivsn's answer, but resampling makes it easy to run further operations on the grouped data, for example several aggregations at once (see the sketch after the output below). Here is the same approach on a larger dataset with a 'withdrawal' column:
>>> df['Date'] = pd.to_datetime(df['Date'])
>>> df.set_index('Date',inplace=True)
>>> df['withdrawal'].resample('M').sum().sort_values()
Date
2019-10-31 28710.00
2019-04-30 31437.00
2019-07-31 39728.00
2019-11-30 40121.00
2019-05-31 46495.00
2020-02-29 57751.10
2019-12-31 72469.13
2020-01-31 76115.78
2019-06-30 76947.00
2019-09-30 79847.04
2020-03-31 97920.18
2019-08-31 205279.45
Name: withdrawal, dtype: float64
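For example, several summary statistics can be computed in one resample pass (a small sketch using the same 'withdrawal' column):
# One monthly resample, several aggregations at once.
df['withdrawal'].resample('M').agg(['sum', 'mean', 'count'])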
while shivsn's approach gives the same totals:
>>> df.groupby(df['Date'].dt.strftime('%B'))['withdrawal'].sum().sort_values()
Date
October 28710.00
April 31437.00
July 39728.00
November 40121.00
May 46495.00
February 57751.10
December 72469.13
January 76115.78
June 76947.00
September 79847.04
March 97920.18
August 205279.45
Name: withdrawal, dtype: float64
Upvotes: 3
Reputation: 4652
Try a groupby using a pandas Grouper:
df = pd.DataFrame({'date':['6/2/2017','5/23/2017','5/20/2017','6/22/2017','6/21/2017'],'Revenue':[100,200,300,400,500]})
df.date = pd.to_datetime(df.date)
dg = df.groupby(pd.Grouper(key='date', freq='1M')).sum() # groupby each 1 month
dg.index = dg.index.strftime('%B')
Output:
Revenue
May 500
June 1000
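On newer pandas releases (2.2 and later) the 'M' frequency alias is deprecated in favour of 'ME' (month end), so the same grouping would be written as:
dg = df.groupby(pd.Grouper(key='date', freq='ME')).sum()  # month-end bins on pandas >= 2.2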
Upvotes: 34
Reputation: 7828
Try this:
In [6]: df['date'] = pd.to_datetime(df['date'])
In [7]: df
Out[7]:
date Revenue
0 2017-06-02 100
1 2017-05-23 200
2 2017-05-20 300
3 2017-06-22 400
4 2017-06-21 500
In [59]: df.groupby(df['date'].dt.strftime('%B'))['Revenue'].sum().sort_values()
Out[59]:
date
May 500
June 1000
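Note that grouping on strftime('%B') gives plain string labels, so without sort_values() the months come out in alphabetical order. One way to restore calendar order (a sketch, assuming the same df as above):
import calendar

monthly = df.groupby(df['date'].dt.strftime('%B'))['Revenue'].sum()
# calendar.month_name[1:] is January..December; reindexing puts the groups in
# calendar order, and dropna() drops months that are absent from the data.
monthly = monthly.reindex(calendar.month_name[1:]).dropna()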
Upvotes: 80
Reputation: 23
df['Month'] = pd.DatetimeIndex(df['date']).month_name()
Using this you should get
| date      | Revenue | Month |
|-----------|---------|-------|
| 6/2/2017  | 100     | June  |
| 5/23/2017 | 200     | May   |
| 5/20/2017 | 300     | May   |
| 6/22/2017 | 400     | June  |
| 6/21/2017 | 500     | June  |
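To get the monthly totals the question asks for, you would then group on that new column (a small sketch building on the line above):
# Sum the revenue per month name.
df.groupby('Month')['Revenue'].sum()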
Upvotes: 0
Reputation: 371
For a DataFrame with many rows, using strftime takes noticeably more time. If the date column already has dtype datetime64[ns] (use pd.to_datetime() to convert, or specify parse_dates during CSV import, etc.), you can group directly on a datetime property such as dt.month for the labels (the third timing below). The speedup is substantial.
import numpy as np
import pandas as pd
T = pd.date_range(pd.Timestamp(0), pd.Timestamp.now()).to_frame(index=False)
T = pd.concat([T for i in range(1,10)])
T['revenue'] = pd.Series(np.random.randint(1000, size=T.shape[0]))
T = T.rename(columns={0: 'date'})
print(T.shape) #(159336, 2)
print(T.dtypes) #date: datetime64[ns], revenue: int32
%timeit -n 10 -r 7 T.groupby(T['date'].dt.strftime('%B'))['revenue'].sum()
1.47 s ± 10.1 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit -n 10 -r 7 T.groupby(pd.Grouper(key='date', freq='1M')).sum()
#NOTE The index here is month-end timestamps; convert them to month names afterwards if needed
56.9 ms ± 2.88 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit -n 10 -r 7 T.groupby(T['date'].dt.month)['revenue'].sum()
#NOTE The index here is month numbers 1-12; map them to names afterwards if needed (see the sketch below)
34 ms ± 3.34 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
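Since the fastest variant labels the groups with month numbers, here is one way to map them back to names afterwards (a sketch, not part of the timings above):
import calendar

# The dt.month groupby gives an integer index (1..12); calendar.month_name
# maps each number to its English name.
monthly = T.groupby(T['date'].dt.month)['revenue'].sum()
monthly.index = [calendar.month_name[m] for m in monthly.index]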
Upvotes: 12