Reputation: 4478
I have a dataframe of values read from a file, which I have grouped by two columns to get a count of the aggregation. Now I want to sort by that count value, but I get the following error:
KeyError: 'count'
It looks like the groupby agg count column is some sort of index, so I'm not sure how to do this; I'm a beginner to Python and pandas. Here's the actual code, please let me know if you need more detail:
def answer_five():
    df = census_df#.set_index(['STNAME'])
    df = df[df['SUMLEV'] == 50]
    df = df[['STNAME','CTYNAME']].groupby(['STNAME']).agg(['count']).sort(['count'])
    #df.set_index(['count'])
    print(df.index)
    # get sorted count max item
    return df.head(5)
Upvotes: 51
Views: 183360
Reputation: 328
To sort the rows by the count of a column, you can do this:
sorted_index = df['col'].value_counts().index
df.set_index('col').loc[sorted_index].reset_index()
If you want to keep the old index, do this:
sorted_index = df['col'].value_counts().index
df['index'] = df.index
df.set_index('col', drop=True).loc[sorted_index].reset_index().set_index('index', drop=True)
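As a quick illustration of how this reordering works, here is a minimal sketch on made-up data (the 'col' and 'val' names are placeholders, not from the question):
import pandas as pd

# Made-up data purely for illustration.
df = pd.DataFrame({'col': ['b', 'a', 'b', 'c', 'b', 'a'],
                   'val': [10, 20, 30, 40, 50, 60]})

# value_counts() orders the distinct values by frequency (most common first),
# and .loc with that index pulls the rows back out in that order.
sorted_index = df['col'].value_counts().index
out = df.set_index('col').loc[sorted_index].reset_index()
print(out)   # all 'b' rows first (3 of them), then 'a' (2), then 'c' (1)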
Upvotes: 0
Reputation: 9658
Some of the existing answers are outdated. The following solution works for listing a column and the frequency of its distinct values:
df = df[col].value_counts(ascending=False).reset_index()
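A small sketch of what that produces, on made-up data ('city' is only a placeholder column name; note that the labels of the two resulting columns differ between pandas versions, e.g. newer releases name them 'city' and 'count', while older ones use 'index' and 'city'):
import pandas as pd

# Made-up data; 'city' stands in for your column of interest.
df = pd.DataFrame({'city': ['NY', 'LA', 'NY', 'SF', 'NY', 'LA']})

# One row per distinct value, most frequent first.
counts = df['city'].value_counts(ascending=False).reset_index()
print(counts)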
Upvotes: 7
Reputation: 9300
I agree with @Christoph Schranz about slicing a Series from the DataFrame:
df[['STNAME','CTYNAME']].groupby('STNAME')['CTYNAME'].count().nlargest(3)
Upvotes: 2
Reputation: 862481
I think you need to add reset_index, then pass the parameter ascending=False to sort_values, because sort raises:
FutureWarning: sort(columns=....) is deprecated, use sort_values(by=.....)
  .sort_values(['count'], ascending=False)
df = df[['STNAME','CTYNAME']].groupby(['STNAME'])['CTYNAME'] \
.count() \
.reset_index(name='count') \
.sort_values(['count'], ascending=False) \
.head(5)
Sample:
df = pd.DataFrame({'STNAME':list('abscscbcdbcsscae'),
'CTYNAME':[4,5,6,5,6,2,3,4,5,6,4,5,4,3,6,5]})
print (df)
CTYNAME STNAME
0 4 a
1 5 b
2 6 s
3 5 c
4 6 s
5 2 c
6 3 b
7 4 c
8 5 d
9 6 b
10 4 c
11 5 s
12 4 s
13 3 c
14 6 a
15 5 e
df = df[['STNAME','CTYNAME']].groupby(['STNAME'])['CTYNAME'] \
.count() \
.reset_index(name='count') \
.sort_values(['count'], ascending=False) \
.head(5)
print (df)
STNAME count
2 c 5
5 s 4
1 b 3
0 a 2
3 d 1
But it seems you need Series.nlargest:
df = df[['STNAME','CTYNAME']].groupby(['STNAME'])['CTYNAME'].count().nlargest(5)
or:
df = df[['STNAME','CTYNAME']].groupby(['STNAME'])['CTYNAME'].size().nlargest(5)
The difference between size and count is: size counts NaN values, count does not.
Sample:
df = pd.DataFrame({'STNAME':list('abscscbcdbcsscae'),
'CTYNAME':[4,5,6,5,6,2,3,4,5,6,4,5,4,3,6,5]})
print (df)
CTYNAME STNAME
0 4 a
1 5 b
2 6 s
3 5 c
4 6 s
5 2 c
6 3 b
7 4 c
8 5 d
9 6 b
10 4 c
11 5 s
12 4 s
13 3 c
14 6 a
15 5 e
df = df[['STNAME','CTYNAME']].groupby(['STNAME'])['CTYNAME'] \
.size() \
.nlargest(5) \
.reset_index(name='top5')
print (df)
STNAME top5
0 c 5
1 s 4
2 b 3
3 a 2
4 d 1
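To make that size/count difference concrete, here is a small sketch with a made-up NaN value (not part of the sample above):
import numpy as np
import pandas as pd

df2 = pd.DataFrame({'STNAME': ['a', 'a', 'b'],
                    'CTYNAME': [1, np.nan, 2]})

# size counts every row in the group, count only the non-NaN values.
print(df2.groupby('STNAME')['CTYNAME'].size())   # a    2, b    1
print(df2.groupby('STNAME')['CTYNAME'].count())  # a    1, b    1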
Upvotes: 104
Reputation: 915
I don't know exactly what your df looks like. But if you have to sort the frequencies of several categories by their counts, it is easier to slice a Series from the df and sort that Series:
series = df.count().sort_values(ascending=False)
series.head()
Note that this Series will use the category names as its index!
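If I understand the layout this answer assumes (one column per category, with NaN where the category is absent), a minimal sketch would look like this; the 'cat_*' names are made up:
import numpy as np
import pandas as pd

df = pd.DataFrame({'cat_a': [1, 1, np.nan, 1],
                   'cat_b': [1, np.nan, np.nan, 1],
                   'cat_c': [np.nan, np.nan, np.nan, 1]})

# count() gives the number of non-NaN values per column; the column (category)
# names become the index of the resulting Series.
series = df.count().sort_values(ascending=False)
print(series.head())   # cat_a 3, cat_b 2, cat_c 1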
Upvotes: 21