Reputation: 25
I have a dataframe that contains the same values in different places: they can appear in different rows and in different columns. For example, it has the same email in two different columns, and I want to get the ids of the two different rows containing that email.
import pandas as pd

test1 = pd.DataFrame([{'id': 'iii1', 'phone': 'aaa1', 'email': 'qqq1', 'phone2': 'bbb1', 'email2': 'sss1'},
                      {'id': 'iii2', 'phone': 'aaa2', 'email': 'qqq2', 'phone2': 'aaa1', 'email2': 'sss2'},
                      {'id': 'iii3', 'phone': 'aaa3', 'email': 'qqq3', 'phone2': 'bbb3', 'email2': 'sss3'},
                      {'id': 'iii4', 'phone': 'aaa4', 'email': 'qqq4', 'phone2': 'bbb4', 'email2': 'qqq3'},
                      {'id': 'iii5', 'phone': 'aaa5', 'email': 'qqq5', 'phone2': 'bbb5', 'email2': 'sss5'},
                      {'id': 'iii6', 'phone': 'aaa6', 'email': 'qqq6', 'phone2': 'bbb6', 'email2': 'qqq1'}])
I tried to do it with these steps:
test2 = pd.melt(
    test1, id_vars=['id'],
    value_vars=['phone', 'email', 'phone2', 'email2']
).sort_values(by=['id'], ascending=False).reset_index(drop=True)
def testf(ser):
    # collect the unique ids that share this value
    uniqs = pd.unique(ser.values.ravel()).tolist()
    if len(uniqs) > 1:
        return uniqs
    else:
        return 'only 1, doesnt interesting'
test3 = test2.groupby('value')['id'].apply(testf).reset_index()
So finally after these steps I got a row per value with the list of ids that share it, which is almost what I want, but the expected result should be:
[iii1,iii2,iii6]; [iii3,iii4]
I think another way could be a merge, but I don't know how to implement that.
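A rough sketch of that merge idea, assuming the melted test2 from the steps above; it only pairs ids directly, it does not yet chain the aaa1 and qqq1 links into one group:

# self-merge on `value` to pair up ids that share a value
# (direct links only; each pair shows up in both directions)
pairs = test2.merge(test2, on='value')
pairs = pairs.loc[pairs['id_x'] != pairs['id_y'], ['value', 'id_x', 'id_y']]
# e.g. aaa1 links iii1/iii2, qqq1 links iii1/iii6, qqq3 links iii3/iii4,
# but turning these pairwise links into [iii1, iii2, iii6] still needs chaining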
Upvotes: 1
Views: 1245
Reputation: 150745
Your problem is a network problem. Try networkx:
import networkx as nx

test2 = (test1.melt('id')
         .loc[lambda x: x.duplicated('value', keep=False)]
         )

# merge on `value` to connect the ids with the same `value`
G = nx.from_pandas_edgelist(test2.merge(test2, on=['value']),
                            source='id_x', target='id_y')

# output
list(nx.connected_components(G))
Output:
[{'iii1', 'iii2', 'iii6'}, {'iii3', 'iii4'}]
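If the exact bracketed format from the question is needed, the sets can be sorted and joined (component order is not guaranteed, so the groups may come out in a different order):

# sort each component and join to match the expected output format
groups = [sorted(c) for c in nx.connected_components(G)]
print('; '.join('[' + ','.join(g) + ']' for g in groups))
# [iii1,iii2,iii6]; [iii3,iii4]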
Upvotes: 2