Reputation: 4638
My Dataframe:
Name fav_fruit
0 justin apple
1 bieber justin apple
2 Kris Justin bieber apple
3 Kim Lee orange
4 lee kim orange
5 mary barnet orange
6 tom hawkins pears
7 Sr Tom Hawkins pears
8 Jose Hawkins pears
9 Shanita pineapple
10 Joe pineapple
import pandas as pd

df = pd.DataFrame({'Name': ['justin', 'bieber justin', 'Kris Justin bieber', 'Kim Lee',
                            'lee kim', 'mary barnet', 'tom hawkins', 'Sr Tom Hawkins',
                            'Jose Hawkins', 'Shanita', 'Joe'],
                   'fav_fruit': ['apple', 'apple', 'apple',
                                 'orange', 'orange', 'orange',
                                 'pears', 'pears', 'pears',
                                 'pineapple', 'pineapple']})
I want to count the number of common words in the Name column after a groupby on the fav_fruit column. For apple the count is 2 (justin, bieber), for orange it is 2 (kim, lee), and for pineapple it is 0.
Expected Output:
Name fav_fruit count
0 justin apple 2
1 bieber justin apple 2
2 Kris Justin bieber apple 2
3 Kim Lee orange 2
4 lee kim orange 2
5 mary barnet orange 2
6 tom hawkins pears 2
7 Sr Tom Hawkins pears 2
8 Jose Hawkins pears 2
9 Shanita pineapple 0
10 Joe pineapple 0
Upvotes: 1
Views: 83
Reputation: 863146
I think you need transform
with a custom function: first join the values of each group into one big string, convert it to lowercase and split it into words, then use collections.Counter
and keep only the words that appear more than once:
from collections import Counter

def f(x):
    # join all names in the group, lowercase, and split into words
    a = ' '.join(x).lower().split()
    # count how many distinct words occur more than once
    return len([k for k, v in Counter(a).items() if v != 1])

df['count'] = df.groupby('fav_fruit')['Name'].transform(f)
print(df)
Name fav_fruit count
0 justin apple 2
1 bieber justin apple 2
2 Kris Justin bieber apple 2
3 Kim Lee orange 2
4 lee kim orange 2
5 mary barnet orange 2
6 tom hawkins pears 2
7 Sr Tom Hawkins pears 2
8 Jose Hawkins pears 2
9 Shanita pineapple 0
10 Joe pineapple 0
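Detail: you can inspect the per-group values that transform broadcasts back by applying f to each group directly, e.g. print(df.groupby('fav_fruit')['Name'].apply(f)), which should give 2 for apple, orange and pears and 0 for pineapple.

Another option is a more pandas-native sketch (assuming pandas 0.25+ for DataFrame.explode): split the names into words, keep only the words that are duplicated within each fruit group, count the unique duplicated words per group, and map the result back to the original rows:

# one row per (fav_fruit, word), words lowercased
s = df.assign(word=df['Name'].str.lower().str.split()).explode('word')
# keep only words that occur more than once within a fruit group
dup = s[s.duplicated(subset=['fav_fruit', 'word'], keep=False)]
# number of distinct repeated words per fruit
counts = dup.groupby('fav_fruit')['word'].nunique()
# groups with no repeated words are missing from counts, so fill with 0
df['count'] = df['fav_fruit'].map(counts).fillna(0).astype(int)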
Upvotes: 1