Reputation: 474
I have a list in Python that contains duplicate DataFrames. The goal is to remove these duplicate DataFrames as whole objects. Here is some code:
import pandas as pd
import numpy as np
##Creating Dataframes
data1_1 = [[1, 2018, 80], [2, 2018, 70]]
data1_2 = [[1, 2017, 77], [3, 2017, 62]]
df1 = pd.DataFrame(data1_1, columns=['ID', 'Year', 'Score'])
df2 = pd.DataFrame(data1_2, columns=['ID', 'Year', 'Score'])
###Creating list with duplicates
all_df_list = [df1, df1, df1, df2, df2, df2]
The desired result is this:
###Desired results
desired_list = [df1,df2]
Is there a way to remove any duplicated DataFrames within a Python list?
Thank you
Upvotes: 3
Views: 1934
Reputation: 583
I find this easier to read, understand, and debug.
DISCLAIMER: If you are planning to work with large lists, you need to consider a different solution.
def remove_duplicate_dataframes(dfs: list) -> list:
    if len(dfs) < 2:
        return dfs
    unique_dfs = []
    for df in dfs:
        # Keep df only if no already-kept frame compares equal to it.
        if not any(df.equals(kept) for kept in unique_dfs):
            unique_dfs.append(df)
    return unique_dfs
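For the large-list case the disclaimer mentions, here is a rough sketch of a hash-based alternative that replaces the quadratic equals() comparisons with one hashing pass per frame. It relies on pd.util.hash_pandas_object from the public pandas API; the function name remove_duplicate_dataframes_fast is just an illustrative choice.
import pandas as pd

def remove_duplicate_dataframes_fast(dfs: list) -> list:
    seen = set()
    unique_dfs = []
    for df in dfs:
        # hash_pandas_object returns one uint64 per row; bundle the column
        # labels in as well so frames with the same values but different
        # headers are not treated as duplicates.
        key = (tuple(df.columns), tuple(pd.util.hash_pandas_object(df, index=True)))
        if key not in seen:
            seen.add(key)
            unique_dfs.append(df)
    return unique_dfs
This keeps the first occurrence of each frame and runs in roughly linear time in the number of frames.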
Upvotes: 0
Reputation: 6325
There's a new Python library, pyoccur, that makes this easy:
from pyoccur import pyoccur
pyoccur.remove_dup(all_df_list)
Output:
[ ID Year Score
0 1 2018 80
1 2 2018 70, ID Year Score
0 1 2017 77
1 3 2017 62]
Upvotes: 1
Reputation: 25239
You just need to pass the list of duplicate DataFrames to pd.Series, drop the duplicates, and convert it back to a list:
In [229]: desired_list = pd.Series(all_df_list).drop_duplicates().tolist()
In [230]: desired_list
Out[230]:
[ ID Year Score
0 1 2018 80
1 2 2018 70, ID Year Score
0 1 2017 77
1 3 2017 62]
The final desired_list holds 2 DataFrames, equal to df1 and df2:
In [231]: desired_list[0] == df1
Out[231]:
ID Year Score
0 True True True
1 True True True
In [232]: desired_list[1] == df2
Out[232]:
ID Year Score
0 True True True
1 True True True
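Equivalently, DataFrame.equals collapses the element-wise comparisons above into a single boolean each, so the result can be checked in two lines (a small verification sketch):
assert desired_list[0].equals(df1)
assert desired_list[1].equals(df2)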
Upvotes: 0
Reputation: 42916
We can use pandas DataFrame.equals in a list comprehension with enumerate to compare each item in the list with its predecessor:
desired_list = [df for x, df in enumerate(all_df_list) if not df.equals(all_df_list[x - 1])]
print(desired_list)
[ ID Year Score
0 1 2018 80
1 2 2018 70, ID Year Score
0 1 2017 77
1 3 2017 62]
DataFrame.equals returns True if the compared DataFrames are equal:
df1.equals(df1)
True
df1.equals(df2)
False
Note
As Wen-Ben noted in the comments, this only works when the list is sorted so that duplicates are adjacent, like [df1, df1, df1, df2, df2, df2], or with more DataFrames: [df1, df1, df2, df2, df3, df3].
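A quick sketch of why adjacency matters, reusing df1 and df2 from the question: when equal frames never sit next to each other, the neighbour comparison removes nothing.
mixed = [df1, df2, df1, df2]  # duplicates exist, but none are adjacent
result = [df for x, df in enumerate(mixed) if not df.equals(mixed[x - 1])]
print(len(result))  # 4 -- no frame equals its predecessor, so nothing is dropped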
Upvotes: 1
Reputation: 323236
I am doing it with numpy.unique:
_, idx = np.unique(np.array([x.values for x in all_df_list]), axis=0, return_index=True)
desired_list = [all_df_list[x] for x in idx]
desired_list
Out[829]:
[ ID Year Score
0 1 2017 77
1 3 2017 62, ID Year Score
0 1 2018 80
1 2 2018 70]
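Note that np.unique returns its results sorted by value, which is why df2's rows appear first above; it also assumes every frame has the same shape, since their values are stacked into one array. If the original list order matters, sorting the recovered indices restores it:
desired_list = [all_df_list[x] for x in sorted(idx)]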
Upvotes: 3
Reputation: 12918
My first thought was to use a set, but dataframes are mutable and thus not hashable. Do you still need individual dataframes in your list, or is it useful to merge all of these into a single dataframe with all unique values?
You can pd.merge() them all into a single DataFrame with unique values, using reduce from functools:
from functools import reduce
reduced_df = reduce(lambda left, right: pd.merge(left, right, on=None, how='outer'),
                    all_df_list)
print(reduced_df)
# ID Year Score
# 0 1 2018 80
# 1 2 2018 70
# 2 1 2017 77
# 3 3 2017 62
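Under the same interpretation, i.e. merging everything into one frame of unique rows, a concat-based sketch produces the same result and may read more directly:
# Stack all frames, then keep only the first occurrence of each row.
reduced_df = (pd.concat(all_df_list, ignore_index=True)
                .drop_duplicates()
                .reset_index(drop=True))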
Upvotes: 1