costisst

Reputation: 391

Fastest way to compare all rows of a DataFrame

I have written a program (in Python 3.6) that tries to map the columns of a user's CSV/Excel file to a template XLS I have. So far so good, but part of this process has to be processing the user's data, which are contacts. For example, I want to delete duplicates, merge data, etc. To do this I need to compare every row to all the other rows, which is costly. Every user CSV I read has ~2000-4000 rows, but I want it to be efficient for even more rows. I have stored the data in a pd.DataFrame.

Is there a more efficient way to do the comparisons beside brute force?

Thanks

Upvotes: 0

Views: 289

Answers (1)

MattR

Reputation: 5126

First, what code have you tried?

But deleting duplicates is very easy in pandas. Example below:

import pandas as pd
import numpy as np
# Creating the Test DataFrame below -------------------------------
dfp = pd.DataFrame({'A' : [np.NaN,np.NaN,3,4,5,5,3,1,5,np.NaN], 
                    'B' : [1,0,3,5,0,0,np.NaN,9,0,0], 
                    'C' : ['AA1233445','A9875', 'rmacy','Idaho Rx','Ab123455','TV192837','RX','Ohio Drugs','RX12345','USA Pharma'], 
                    'D' : [123456,123456,1234567,12345678,12345,12345,12345678,123456789,1234567,np.NaN],
                    'E' : ['Assign','Unassign','Assign','Ugly','Appreciate','Undo','Assign','Unicycle','Assign','Unicorn',]})
print(dfp)

#Output Below----------------

     A    B           C            D           E
0  NaN  1.0   AA1233445     123456.0      Assign
1  NaN  0.0       A9875     123456.0    Unassign
2  3.0  3.0       rmacy    1234567.0      Assign
3  4.0  5.0    Idaho Rx   12345678.0        Ugly
4  5.0  0.0    Ab123455      12345.0  Appreciate
5  5.0  0.0    TV192837      12345.0        Undo
6  3.0  NaN          RX   12345678.0      Assign
7  1.0  9.0  Ohio Drugs  123456789.0    Unicycle
8  5.0  0.0     RX12345    1234567.0      Assign
9  NaN  0.0  USA Pharma          NaN     Unicorn


# Select the rows whose value in column A is a duplicate.
# keep='first' marks the first occurrence as NOT duplicated,
# so this returns the later occurrences:

df2 = dfp[dfp.duplicated(['A'], keep='first')]
#output
     A    B           C           D         E
1  NaN  0.0       A9875    123456.0  Unassign
5  5.0  0.0    TV192837     12345.0      Undo
6  3.0  NaN          RX  12345678.0    Assign
8  5.0  0.0     RX12345   1234567.0    Assign
9  NaN  0.0  USA Pharma         NaN   Unicorn
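
If the goal is to remove those rows rather than just view them, drop_duplicates does it directly. A minimal sketch, reusing the test DataFrame dfp from above:

# Keep only the first occurrence of each value in column A
# and drop the later duplicates:
df_unique = dfp.drop_duplicates(subset=['A'], keep='first')
print(df_unique)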

If you want a new DataFrame with no dupes, checked across all columns, use the tilde. The ~ operator inverts a boolean mask (a logical NOT), so the line below keeps only the rows that duplicated() did not flag. See the official documentation for DataFrame.duplicated.

df2 = dfp[~dfp.duplicated(keep='first')]
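
On the efficiency question: duplicated() and drop_duplicates() hash the rows instead of comparing every pair, so they scale roughly linearly with the number of rows rather than quadratically, which is what you want for files of 2000-4000+ rows. For merging contact records, the same idea applies: group on a normalized key and combine within each group, rather than comparing all rows. A sketch under assumed column names (email, name, phone are hypothetical, not from your data):

import pandas as pd

# Hypothetical contacts table; the column names are illustrative only.
contacts = pd.DataFrame({
    'email': ['a@x.com', 'A@X.COM', 'b@y.com'],
    'name':  ['Alice', None, 'Bob'],
    'phone': [None, '555-0100', '555-0199'],
})

# Normalize the key, then merge rows within each group by taking the
# first non-null value per column -- one grouping pass, no all-pairs loop.
contacts['key'] = contacts['email'].str.lower()
merged = contacts.groupby('key', as_index=False).first()
print(merged)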

Upvotes: 1
