Reputation: 417
I am attempting to drop all records which have a duplicate from the DataFrame df shown below.
sales_id sales_line
100 1
100 1
200 1
300 2
300 2
400 3
500 1
500 1
600 5
The expected output is shown below.
sales_id sales_line
200 1
400 3
600 5
Any assistance that anyone could provide would be greatly appreciated.
Upvotes: 2
Views: 1370
Reputation: 2646
df.drop_duplicates(keep=False, inplace=True)
This would give you the expected output.
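For example, rebuilding the question's DataFrame (a minimal sketch; the DataFrame construction below is assumed from the question's sample data, not part of this answer), the call behaves like this:
import pandas as pd

# DataFrame reconstructed from the question's sample data
df = pd.DataFrame({
    'sales_id':   [100, 100, 200, 300, 300, 400, 500, 500, 600],
    'sales_line': [1, 1, 1, 2, 2, 3, 1, 1, 5],
})

# keep=False drops every row that has a duplicate; inplace=True modifies df directly
df.drop_duplicates(keep=False, inplace=True)
print(df)
#    sales_id  sales_line
# 2       200           1
# 5       400           3
# 8       600           5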
Upvotes: 1
Reputation: 863166
Use DataFrame.drop_duplicates with keep=False to remove rows that are duplicated across all columns:
df = df.drop_duplicates(keep=False)
print(df)
sales_id sales_line
2 200 1
5 400 3
8 600 5
Upvotes: 6
Reputation: 3490
You can try drop_duplicates(subset=None, keep="first", inplace=False).
In your case, the important bit is keep=False.
import pandas as pd

data = {
    'sales_id':   [100, 100, 200, 300, 300, 400, 500, 500, 600],
    'sales_line': [1, 1, 1, 2, 2, 3, 1, 1, 5],
}
df = pd.DataFrame(data)
print('Source DataFrame:\n', df)

# keep=False removes every row that is duplicated across the given columns
df_dropped = df.drop_duplicates(subset=['sales_id', 'sales_line'], keep=False)
print('Result DataFrame:\n', df_dropped)
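For reference, keep accepts 'first' (the default, which keeps the first occurrence), 'last', or False; only keep=False drops every copy of a duplicated row, which is what the question asks for.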
Upvotes: 1