MattR

Reputation: 5126

Pandas: checking for equality too slow to use

I have a requirement to check which records change from one DataFrame to another. The comparison must match on all columns.

One DataFrame comes from an Excel file (new_df), the other from a SQL query (sql_df). The shape is ~20,000 rows by 39 columns. I thought this would be a good job for df.equals(other_df)

Currently I am using the following:

import pandas as pd
import numpy as np
new_df = pd.DataFrame({'ID': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
                       'B': [1, 0, 3, 5, 0, 0, np.nan, 9, 0, 0],
                       'C': [10, 0, 30, 50, 0, 0, 4, 10, 1, 3],
                       'D': [1, 0, 3, 4, 0, 0, 7, 8, 0, 1],
                       'E': ['Universtiy of New York', 'New Hampshire University', 'JMU',
                             'Oklahoma State', 'Penn State', 'New Mexico Univ', 'Rutgers',
                             'Indiana State', 'JMU', 'University of South Carolina']})

sql_df = pd.DataFrame({'ID': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
                       'B': [1, 0, 3, 5, 0, 0, np.nan, 9, 0, 0],
                       'C': [10, 0, 30, 50, 0, 0, 4, 10, 1, 0],
                       'D': [5, 0, 3, 4, 0, 0, 7, 8, 0, 1],
                       'E': ['Universtiy of New York', 'New Hampshire University', 'NYU',
                             'Oklahoma State', 'Penn State', 'New Mexico Univ', 'Rutgers',
                             'Indiana State', 'NYU', 'University of South Carolina']})

# create an empty list to append to
differences = []
# for all the IDs in the dataframe that should not change, check whether the record
# is the same in the database; reset_index() is needed so equals() works as expected;
# if it is not the same, append the failing Aspn ID along with the columns that changed
for unique_id in new_df['ID'].tolist():
    # filter both the sql and new dfs down to this one record
    if not new_df.loc[new_df['ID'] == unique_id].reset_index(drop=True).equals(
            sql_df.loc[sql_df['ID'] == unique_id].reset_index(drop=True)):
        bad_columns = []
        # the record differs somewhere; find which columns using the same logic
        for column in new_df.columns.tolist():
            if not new_df.loc[new_df['ID'] == unique_id][column].reset_index(drop=True).equals(
                    sql_df.loc[sql_df['ID'] == unique_id][column].reset_index(drop=True)):
                bad_columns.append(column)
        differences.append([unique_id, bad_columns])

I later take differences (which holds each failing ID together with its bad_columns) and do other tasks with it.

There are many loops here, which I suspect are the cause of my performance problem: it currently takes over 5 minutes for 20,000 records (hardware-dependent), which is abysmal. I was thinking of concatenating all the columns into one long string per row and comparing those instead, but that seems like another inefficient approach. What would be a better way to solve this, and how can I avoid this messy appending-to-an-empty-list solution?
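For concreteness, here is a rough sketch of that string idea (assuming both frames share the same row order and columns):

# rough sketch of the "one long string per row" idea; astype(str) renders
# NaN as the literal 'nan', so NaN compares equal to NaN, like equals() does
new_keys = new_df.astype(str).agg('|'.join, axis=1)
sql_keys = sql_df.astype(str).agg('|'.join, axis=1)
changed_ids = new_df.loc[new_keys != sql_keys, 'ID'].tolist()

But that only tells me which rows changed, not which columns.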

Upvotes: 1

Views: 869

Answers (2)

FabienP

Reputation: 3138

Get a filtered dataframe showing only the rows with differences:

result_df = new_df[new_df != sql_df].dropna(how='all')

>>> result_df
Out[]:
    B    C    D    E  ID
0 NaN  NaN  1.0  NaN NaN
2 NaN  NaN  NaN  JMU NaN
8 NaN  NaN  NaN  JMU NaN
9 NaN  3.0  NaN  NaN NaN

Get tuples of ID and column names where there is a difference, which is the output you were trying to produce. This should work even if several columns differ for the same ID.

result_df = result_df.set_axis(new_df.ID[result_df.index], axis=0)

>>> result_df.apply(lambda x: (x.name, result_df.columns[x.notnull()]), axis=1)
Out[]:
ID
0    (0, [D])
2    (2, [E])
8    (8, [E])
9    (9, [C])
dtype: object

Please note that apply is close to a for loop, so the second part will likely take more time than the first.
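As a sketch of a fully vectorized variant of the same idea (treating NaN == NaN as equal, so it matches equals() semantics), you can stack the comparison mask and group the differing cells by row label instead of using apply:

# build the cell-level mask, stack it so each (row, column) pair is one
# entry, keep the True cells, then collect the differing columns per row
mask = (new_df != sql_df) & ~(new_df.isna() & sql_df.isna())
cells = mask.stack()      # Series indexed by (row label, column name)
cells = cells[cells]      # keep only the cells that actually differ

cols_per_row = pd.Series(cells.index.get_level_values(1),
                         index=cells.index.get_level_values(0))
result = cols_per_row.groupby(level=0).agg(list)

Here result maps each row label to its list of differing columns: 0 → ['D'], 2 → ['E'], 8 → ['E'], 9 → ['C'].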

Upvotes: 2

MaxU - stand with Ukraine

Reputation: 210832

In [26]: new_df.ne(sql_df)
Out[26]:
       B      C      D      E     ID
0  False  False   True  False  False
1  False  False  False  False  False
2  False  False  False   True  False
3  False  False  False  False  False
4  False  False  False  False  False
5  False  False  False  False  False
6   True  False  False  False  False
7  False  False  False  False  False
8  False  False  False   True  False
9  False   True  False  False  False

Show dissimilar columns:

In [27]: new_df.ne(sql_df).any(axis=0)
Out[27]:
B      True
C      True
D      True
E      True
ID    False
dtype: bool

Show dissimilar rows:

In [28]: new_df.ne(sql_df).any(axis=1)
Out[28]:
0     True
1    False
2     True
3    False
4    False
5    False
6     True
7    False
8     True
9     True
dtype: bool
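To map the dissimilar-row mask back to the IDs themselves, a small usage sketch (assuming ID uniquely identifies each record):

new_df.loc[new_df.ne(sql_df).any(axis=1), 'ID'].tolist()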

UPDATE:

Showing dissimilar cells:

In [86]: x = new_df.ne(sql_df)

In [87]: new_df[x].loc[x.any(axis=1)]
Out[87]:
    B    C    D    E  ID
0 NaN  NaN  1.0  NaN NaN
2 NaN  NaN  NaN  JMU NaN
6 NaN  NaN  NaN  NaN NaN
8 NaN  NaN  NaN  JMU NaN
9 NaN  3.0  NaN  NaN NaN

In [88]: sql_df[x].loc[x.any(axis=1)]
Out[88]:
    B    C    D    E  ID
0 NaN  NaN  5.0  NaN NaN
2 NaN  NaN  NaN  NYU NaN
6 NaN  NaN  NaN  NaN NaN
8 NaN  NaN  NaN  NYU NaN
9 NaN  0.0  NaN  NaN NaN
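One caveat: ne reports NaN as different from NaN, which is why row 6 shows up above with all-NaN cells even though both frames hold NaN there. A quick sketch to exclude those cells, if you want equals()-style NaN semantics:

# treat NaN == NaN as equal, matching DataFrame.equals() semantics
x = new_df.ne(sql_df) & ~(new_df.isna() & sql_df.isna())

new_df[x].loc[x.any(axis=1)]   # same view as above, minus the all-NaN row 6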

Upvotes: 4
