Reputation: 141
I have a Series with duplicates which I am trying to get rid of:
0 RWAY001
1 RWAY001
2 RWAY002
3 RWAY002
...
112 RWAY057
113 RWAY057
114 RWAY058
115 RWAY058
Length: 116
drop_duplicates() cuts the length to 58 as expected, but the index still spans the original 0 to 115 range, just skipping the duplicates:
0 RWAY001
2 RWAY002
...
112 RWAY057
114 RWAY058
Length: 58
So it seems the rows in between still exist with NaN values. I tried dropna(), but it has no effect on the data.
This is the code I have:
df = pd.read_csv(path + flnm)
fields = df.file
fields = fields.drop_duplicates()
print(fields)
Would appreciate any help. Thanks.
Upvotes: 1
Views: 457
Reputation: 862511
drop_duplicates removes the duplicate rows entirely; the surviving rows simply keep their original index labels, so there are no hidden NaN rows and dropna() has nothing to remove. To renumber the index, I think you need reset_index with the parameter drop=True:
fields.reset_index(inplace=True, drop=True)
Or:
fields = fields.reset_index(drop=True)
Sample:
import pandas as pd
df = pd.DataFrame({'file': {0: 'RWAY001', 1: 'RWAY001', 2: 'RWAY002', 3: 'RWAY002', 112: 'RWAY057', 113: 'RWAY057', 114: 'RWAY058', 115: 'RWAY058'}})
print(df)
file
0 RWAY001
1 RWAY001
2 RWAY002
3 RWAY002
112 RWAY057
113 RWAY057
114 RWAY058
115 RWAY058
print(df.file.drop_duplicates())
0 RWAY001
2 RWAY002
112 RWAY057
114 RWAY058
Name: file, dtype: object
print(df.file.drop_duplicates().reset_index(drop=True))
0 RWAY001
1 RWAY002
2 RWAY057
3 RWAY058
Name: file, dtype: object
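For completeness: newer pandas versions can fold both steps into one call, since drop_duplicates accepts ignore_index=True (available on DataFrame since pandas 1.0; on Series only in more recent releases, so treat the Series form as version-dependent). A minimal sketch with made-up data:
import pandas as pd

s = pd.Series(['RWAY001', 'RWAY001', 'RWAY002', 'RWAY002'], name='file')
# drops the duplicates and renumbers the index from 0 in one call,
# equivalent to s.drop_duplicates().reset_index(drop=True)
print(s.drop_duplicates(ignore_index=True))
0    RWAY001
1    RWAY002
Name: file, dtype: object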
Upvotes: 1