Reputation:
Suppose I have a DataFrame df with columns 'A', 'B', and 'C'. I would like to count the number of null values in column 'B', grouped by 'A', and turn the result into a dictionary.
I tried the following, but it failed:
df.groupby('A')['B'].isnull().sum().to_dict()
Any help will be appreciated.
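As a side note, this attempt most likely fails with an AttributeError, since isnull is not available on the grouped object; the null check has to run on the column itself, either before grouping or inside apply, which is essentially what the answers below do. A minimal sketch of the apply variant, assuming the same df:
df.groupby('A')['B'].apply(lambda s: s.isnull().sum()).to_dict()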
Upvotes: 15
Views: 9413
Reputation: 323316
Alternatively, use the difference between count and size (see the link):
(df.groupby('A')['B'].size()-df.groupby('A')['B'].count()).to_dict()
Out[119]: {1: 2, 2: 1}
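For reference, size counts every row in each group (NaN included) while count counts only non-null values, which is why their difference gives the per-group null count. With the example data from the Setup in the answer below, the two pieces look like this:
df.groupby('A')['B'].size().to_dict()
{1: 3, 2: 3}
df.groupby('A')['B'].count().to_dict()
{1: 1, 2: 2}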
Upvotes: 1
Reputation: 294488
Setup
import numpy as np
import pandas as pd
df = pd.DataFrame(dict(A=[1, 2] * 3, B=[1, 2, None, 4, None, None]))
df
A B
0 1 1.0
1 2 2.0
2 1 NaN
3 2 4.0
4 1 NaN
5 2 NaN
Option 1
df['B'].isnull().groupby(df['A']).sum().to_dict()
{1: 2.0, 2: 1.0}
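Note that the values come back as floats here, since the grouped boolean sum is upcast in this pandas version. If plain integer counts are wanted in the dictionary, one small (assumed) tweak is to cast before converting:
df['B'].isnull().groupby(df['A']).sum().astype(int).to_dict()
{1: 2, 2: 1}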
Option 2
df.groupby('A')['B'].apply(lambda x: x.isnull().sum()).to_dict()
{1: 2, 2: 1}
Option 3
Getting creative
df.A[df.B.isnull()].value_counts().to_dict()
{1: 2, 2: 1}
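For clarity, the piece that Options 3, 4, and 6 share is the selection of the A values on the rows where B is null; with the Setup above it looks like this:
df.A[df.B.isnull()]
2    1
4    1
5    2
Name: A, dtype: int64
Counting how often each A value appears in that selection gives the answer directly.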
Option 4
from collections import Counter
dict(Counter(df.A[df.B.isnull()]))
{1: 2, 2: 1}
Option 5
from collections import defaultdict
d = defaultdict(int)
for t in df.itertuples():
    d[t.A] += pd.isnull(t.B)
dict(d)
{1: 2, 2: 1}
Option 6
Unnecessarily complicated
(lambda t: dict(zip(t[1], np.bincount(t[0]))))(df.A[df.B.isnull()].factorize())
{1: 2, 2: 1}
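A rough breakdown of what the lambda does, assuming the same Setup: factorize returns a (codes, uniques) pair, np.bincount tallies the codes, and zip pairs the tallies back with the unique A values.
codes, uniques = df.A[df.B.isnull()].factorize()
codes                   # array([0, 0, 1])
np.bincount(codes)      # array([2, 1])
dict(zip(uniques, np.bincount(codes)))
{1: 2, 2: 1}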
Option 7
df.groupby([df.B.isnull(), 'A']).size().loc[True].to_dict()
{1: 2, 2: 1}
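To see why .loc[True] picks out the null counts, the intermediate size() result (again with the Setup above) is indexed by the null flag and A:
df.groupby([df.B.isnull(), 'A']).size()
B      A
False  1    1
       2    2
True   1    2
       2    1
dtype: int64
Selecting the True level keeps only the groups where B is null.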
Upvotes: 23