Reputation: 2269
I have a Series produced by a groupby:
g = dfchurn.groupby('ID')['isconfirm'].value_counts().groupby(level=0).apply(lambda x: x / float(x.sum()))
type(g)
Out[230]: pandas.core.series.Series
g.head(5)
Out[226]:
ID isconfirm
0000 0 0.985981
1 0.014019
0064 0 0.996448
1 0.003552
0080 0 0.997137
My goal is to get the top 100 IDs, sorted in descending order by the ratio (the right-most column), where isconfirm=0.
To do this, I thought about getting a DataFrame with well-named columns, so that I could query for the top IDs by that ratio when isconfirm=0.
I tried, e.g.,
gdf = g.to_frame()
gdf.unstack(level=1)
gdf.head(5)
isconfirm
isconfirm 0 1
ID
0000 0.985981 0.014019
0064 0.996448 0.003552
gdf.columns
Out[227]: Index([u'isconfirm'], dtype='object')
That didn't lead anywhere. There must be a clean and terse way to do this.
Upvotes: 0
Views: 58
Reputation: 2269
I found a hint in a related question:
gdf = g.to_frame()                            # as in the question
gdf.unstack(level=1)                          # not assigned, so it has no effect; the reset_index below is what matters
gdf = gdf.add_suffix('_ratio').reset_index()  # KEY STEP: name the ratio column and lift 'ID'/'isconfirm' out of the index
gdf.columns  # friendly columns now
Index([u'ID', u'isconfirm', u'isconfirm_ratio'], dtype='object')
gdf[gdf['isconfirm_ratio'] > 0.999]  # e.g. a filter like this works now, or a sort
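From there, a sketch of the final step toward the stated goal, assuming gdf has the ['ID', 'isconfirm', 'isconfirm_ratio'] columns shown above (top100 is just an illustrative name):
top100 = gdf[gdf['isconfirm'] == 0].sort_values('isconfirm_ratio', ascending=False).head(100)
top100['ID']  # the 100 IDs with the highest ratio for isconfirm=0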
Upvotes: 0
Reputation: 880339
You can select all rows where isconfirm is 0 using g.loc:
In [90]: g.loc[:, 0]
Out[90]:
ID
0 0.827957
1 0.911111
2 0.944954
3 0.884956
4 0.931373
5 0.869048
6 0.941176
7 0.884615
8 0.901961
9 0.930693
Name: isconfirm, dtype: float64
The 0 in [:, 0] refers to the value in the second level of the index.
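An equivalent selection, as a sketch assuming the same two-level (ID, isconfirm) index, is a cross-section with xs:
g.xs(0, level=1)  # same rows as g.loc[:, 0]; level=1 is the isconfirm level, which is dropped from the result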
Thus, you could find the IDs corresponding to the top 100 values using:
In [93]: g.loc[:, 0].sort_values(ascending=False).head(100)
Out[93]:
ID
2 0.944954
6 0.941176
4 0.931373
9 0.930693
1 0.911111
8 0.901961
3 0.884956
7 0.884615
5 0.869048
0 0.827957
Name: isconfirm, dtype: float64
In [94]: g.loc[:, 0].sort_values(ascending=False).head(100).index
Out[94]: Int64Index([2, 6, 4, 9, 1, 8, 3, 7, 5, 0], dtype='int64', name='ID')
To produce the result above, I defined g this way:
import numpy as np
import pandas as pd
np.random.seed(2017)
N = 1000
dfchurn = pd.DataFrame({'ID': np.random.randint(10, size=N),
                        'isconfirm': np.random.choice(2, p=[0.9, 0.1], size=N)})
g = dfchurn.groupby('ID')['isconfirm'].value_counts().groupby(level=0).apply(lambda x: x / float(x.sum()))
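As an aside, the sort-and-head step can also be written with Series.nlargest (a sketch, same setup as above; top_ids is just an illustrative name):
top_ids = g.loc[:, 0].nlargest(100).index  # equivalent to sort_values(ascending=False).head(100).index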
Upvotes: 1