Reputation: 13395
I have a dataframe with columns created_at and entities that looks like this:
created_at entities
2017-10-29 23:06:28 {'hashtags': [{'text': 'OPEC', 'indices': [0, ...
2017-10-29 22:28:20 {'hashtags': [{'text': 'Iraq', 'indices': [21,...
2017-10-29 20:01:37 {'hashtags': [{'text': 'oil', 'indices': [58, ...
2017-10-29 20:00:14 {'hashtags': [{'text': 'oil', 'indices': [38, ...
2017-10-27 08:44:30 {'hashtags': [{'text': 'Iran', 'indices': [19,...
2017-10-27 08:44:10 {'hashtags': [{'text': 'Oil', 'indices': [17, ...
2017-10-27 08:43:13 {'hashtags': [{'text': 'Oil', 'indices': [0, 4...
2017-10-27 08:43:00 {'hashtags': [{'text': 'Iran', 'indices': [19,...
I want to calculate the count of entities for each day. Basically I want to receive something like
created_at number_of_entities
2017-10-29 4
2017-10-27 4
How can I do that? I am using pandas 0.23.4.
Upvotes: 2
Views: 128
Reputation: 14689
Given your data:
In [3]: df
Out[3]:
created_at entities
0 2017-10-29 23:06:28 {'hashtags': [{'text': 'OPEC', 'indices': [0, ...
1 2017-10-29 22:28:20 {'hashtags': [{'text': 'Iraq', 'indices': [21,...
2 2017-10-29 20:01:37 {'hashtags': [{'text': 'oil', 'indices': [58, ...
3 2017-10-29 20:00:14 {'hashtags': [{'text': 'oil', 'indices': [38, ...
4 2017-10-27 08:44:30 {'hashtags': [{'text': 'Iran', 'indices': [19,...
5 2017-10-27 08:44:10 {'hashtags': [{'text': 'Oil', 'indices': [17, ...
6 2017-10-27 08:43:13 {'hashtags': [{'text': 'Oil', 'indices': [0, 4...
7 2017-10-27 08:43:00 {'hashtags': [{'text': 'Iran', 'indices': [19,...
You can use groupby(..).count() as follows to get what you want:
In [4]: df[["created_at"]].groupby(pd.to_datetime(df["created_at"]).dt.date).count().rename(columns={"created_at":"number_of_entities"}).reset_index()
Out[4]:
created_at number_of_entities
0 2017-10-27 4
1 2017-10-29 4
Notice that if the created_at column is already a datetime dtype, you can simply use the following:
df[["created_at"]].groupby(df.created_at.dt.date).count().rename(columns={"created_at":"number_of_entities"}).reset_index()
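Put together as a runnable sketch (the sample timestamps and entity dicts below are assumed, abbreviated from the question's data):

```python
import pandas as pd

# Hypothetical sample matching the question's shape: timestamps as strings
# and one entities dict per row (contents abbreviated).
df = pd.DataFrame({
    "created_at": ["2017-10-29 23:06:28", "2017-10-29 22:28:20",
                   "2017-10-27 08:44:30", "2017-10-27 08:44:10"],
    "entities": [{"hashtags": []}] * 4,
})

# Group the created_at column by its calendar date and count rows per day.
out = (df[["created_at"]]
       .groupby(pd.to_datetime(df["created_at"]).dt.date)
       .count()
       .rename(columns={"created_at": "number_of_entities"})
       .reset_index())
print(out)
```

Since count() counts non-null values per group, counting the created_at column itself is equivalent to counting rows per day here.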
Upvotes: 2
Reputation: 78650
Given
>>> df
created_at entities
0 2017-10-29 23:06:28 1
1 2017-10-29 22:28:20 2
2 2017-10-29 20:01:37 3
3 2017-10-29 20:00:14 4
4 2017-10-27 08:44:30 5
5 2017-10-27 08:44:10 6
6 2017-10-27 08:43:13 7
7 2017-10-27 08:43:00 8
with
>>> df.dtypes
created_at datetime64[ns]
entities int64
dtype: object
you can issue:
>>> pd.PeriodIndex(df['created_at'], freq='D').value_counts()
2017-10-29 4
2017-10-27 4
Freq: D, Name: created_at, dtype: int64
A nicer approach, without the PeriodIndex constructor, has been suggested by jezrael in the comments:
>>> df['created_at'].dt.to_period('D').value_counts()
2017-10-27 4
2017-10-29 4
With some additional renaming to match your output it starts to look suspiciously like jezrael's solution. ;)
>>> datecol = 'created_at'
>>> df[datecol].dt.to_period('D').value_counts().rename_axis(datecol).reset_index(name='number_of_entities')
created_at number_of_entities
0 2017-10-27 4
1 2017-10-29 4
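As a self-contained sketch of the to_period route (the sample timestamps below are assumed):

```python
import pandas as pd

# Minimal sketch, assuming a datetime64 column as in the answer above.
df = pd.DataFrame({
    "created_at": pd.to_datetime([
        "2017-10-29 23:06:28", "2017-10-29 22:28:20",
        "2017-10-27 08:44:30", "2017-10-27 08:44:10",
    ])
})

# Convert each timestamp to a daily Period, count occurrences, then
# reshape into the two-column frame requested in the question.
counts = (df["created_at"].dt.to_period("D")
          .value_counts()
          .rename_axis("created_at")
          .reset_index(name="number_of_entities"))
print(counts)
```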
Alternatively, you can set the index to the dates and then resample:
>>> df.set_index('created_at').resample('D').size()
created_at
2017-10-27 4
2017-10-28 0
2017-10-29 4
Freq: D, dtype: int64
... and if converting to your exact output is necessary:
>>> resampled = df.set_index('created_at').resample('D').size()
>>> resampled[resampled != 0].reset_index().rename(columns={0: 'number_of_entities'})
created_at number_of_entities
0 2017-10-27 4
1 2017-10-29 4
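The resample route, end to end, on assumed sample timestamps (resample fills in missing days with 0, so those rows are filtered out afterwards):

```python
import pandas as pd

# Assumed sample data spanning two days with a gap in between.
df = pd.DataFrame({
    "created_at": pd.to_datetime([
        "2017-10-29 23:06:28", "2017-10-27 08:44:30", "2017-10-27 08:43:13",
    ])
})

# Count rows per calendar day; days with no rows appear with size 0.
resampled = df.set_index("created_at").resample("D").size()

# Drop the empty days and convert back to the requested two-column frame.
out = (resampled[resampled != 0]
       .reset_index()
       .rename(columns={0: "number_of_entities"}))
print(out)
```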
Some more context: resample is especially useful for arbitrary time intervals, for example "five minutes". The following example is taken directly from Wes McKinney's book "Python for Data Analysis".
>>> N = 15
>>> times = pd.date_range('2017-05-20 00:00', freq='1min', periods=N)
>>> df = pd.DataFrame({'time': times, 'value': np.arange(N)})
>>>
>>> df
time value
0 2017-05-20 00:00:00 0
1 2017-05-20 00:01:00 1
2 2017-05-20 00:02:00 2
3 2017-05-20 00:03:00 3
4 2017-05-20 00:04:00 4
5 2017-05-20 00:05:00 5
6 2017-05-20 00:06:00 6
7 2017-05-20 00:07:00 7
8 2017-05-20 00:08:00 8
9 2017-05-20 00:09:00 9
10 2017-05-20 00:10:00 10
11 2017-05-20 00:11:00 11
12 2017-05-20 00:12:00 12
13 2017-05-20 00:13:00 13
14 2017-05-20 00:14:00 14
>>>
>>> df.set_index('time').resample('5min').size()
time
2017-05-20 00:00:00 5
2017-05-20 00:05:00 5
2017-05-20 00:10:00 5
Freq: 5T, dtype: int64
Upvotes: 3
Reputation: 862441
You can use floor or date to remove the times, then value_counts for counting, and finally rename_axis and reset_index to get a two-column DataFrame:
df = (df['created_at'].dt.floor('d')
.value_counts()
.rename_axis('created_at')
.reset_index(name='number_of_entities'))
print (df)
created_at number_of_entities
0 2017-10-29 4
1 2017-10-27 4
Or:
df = (df['created_at'].dt.date
.value_counts()
.rename_axis('created_at')
.reset_index(name='number_of_entities'))
If you want to avoid the default sorting in value_counts, pass the parameter sort=False:
df = (df['created_at'].dt.floor('d')
.value_counts(sort=False)
.rename_axis('created_at')
.reset_index(name='number_of_entities'))
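As a runnable sketch of the sort=False variant (sample timestamps assumed; without sorting by count, groups come out in order of first appearance rather than descending frequency):

```python
import pandas as pd

# Assumed sample: two rows on 2017-10-29, one on 2017-10-27.
df = pd.DataFrame({
    "created_at": pd.to_datetime([
        "2017-10-29 23:06:28", "2017-10-29 22:28:20", "2017-10-27 08:44:30",
    ])
})

# floor('d') truncates each timestamp to midnight of its day.
out = (df["created_at"].dt.floor("d")
       .value_counts(sort=False)
       .rename_axis("created_at")
       .reset_index(name="number_of_entities"))
print(out)
```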
Upvotes: 2
Reputation: 4233
Using groupby.size
# Convert to datetime dtype if you haven't.
df1.created_at = pd.to_datetime(df1.created_at)
df2 = df1.groupby(df1.created_at.dt.date).size().reset_index(name='number_of_entities')
print (df2)
created_at number_of_entities
0 2017-10-27 4
1 2017-10-29 4
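The same approach as a self-contained sketch (sample timestamps assumed, mirroring the question's data):

```python
import pandas as pd

# Assumed sample with string timestamps, as in the question.
df1 = pd.DataFrame({
    "created_at": ["2017-10-29 23:06:28", "2017-10-27 08:44:30",
                   "2017-10-27 08:43:13"],
})

# Convert to datetime dtype if you haven't.
df1["created_at"] = pd.to_datetime(df1["created_at"])

# size() counts rows per group, including any rows with nulls elsewhere.
df2 = (df1.groupby(df1.created_at.dt.date)
       .size()
       .reset_index(name="number_of_entities"))
print(df2)
```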
Upvotes: 2
Reputation: 7186
You can group by day using df.groupby(df.created_at.dt.day). Note that dt.day gives the day of the month, so use dt.date instead if your data spans more than one month.
As for the function that calculates the counts, we would need a full row for that; your data structure looks rather unusual.
Upvotes: 1