BadBayesian

Reputation: 344

Select rows from a Pandas DataFrame with the same value in one column but different values in another column

Say I have the pandas DataFrame below:

   A      B     C   D
1  foo    one   0   0
2  foo    one   2   4
3  foo    two   4   8
4  cat    one   8   4
5  bar    four  6  12
6  bar    three 7  14
7  bar    four  7  14

I would like to select all rows that have equal values in A but differing values in B, so the output of my code would be:

   A      B    C   D
1  foo    one  0   0
3  foo    two  4   8
5  bar  three  7  14
6  bar   four  7  14

What's the most efficient way to do this? I have approximately 11,000 rows with a lot of variation in the column values, and this situation comes up a lot. In my dataset, if two elements in column A are equal, then the corresponding values in column B should also be equal; due to mislabeling, however, this is not always the case, and I would like to find the offending rows so I can fix them. Doing this one by one would be impractical.
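
For reference, the frame above can be reproduced with:

import pandas as pd

df = pd.DataFrame({
    'A': ['foo', 'foo', 'foo', 'cat', 'bar', 'bar', 'bar'],
    'B': ['one', 'one', 'two', 'one', 'four', 'three', 'four'],
    'C': [0, 2, 4, 8, 6, 7, 7],
    'D': [0, 4, 8, 4, 12, 14, 14],
}, index=[1, 2, 3, 4, 5, 6, 7])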

Upvotes: 11

Views: 29878

Answers (3)

Karn Kumar

Reputation: 8816

You can try groupby() + filter + drop_duplicates():

>>> df.groupby('A').filter(lambda g: len(g) > 1).drop_duplicates(subset=['A', 'B'], keep="first")
     A      B  C   D
0  foo    one  0   0
2  foo    two  4   8
4  bar   four  6  12
5  bar  three  7  14

Or, if you just want to drop duplicates over the subset of columns A and B, you can use the below, but note that it will keep the cat row as well:

>>> df.drop_duplicates(subset=['A', 'B'], keep="first")
     A      B  C   D
0  foo    one  0   0
2  foo    two  4   8
3  cat    one  8   4
4  bar   four  6  12
5  bar  three  7  14

Upvotes: 12

Ali Faizan

Reputation: 502

The current answers are correct and may be more sophisticated too. If you have complex criteria, the filter function is very useful. If you are like me and want to keep things simple, I feel the following is a more beginner-friendly way:

>>> import pandas as pd
>>> df = pd.DataFrame({
...     'A': ['foo', 'foo', 'foo', 'cat', 'bar', 'bar', 'bar'],
...     'B': ['one', 'one', 'two', 'one', 'four', 'three', 'four'],
...     'C': [0, 2, 4, 8, 6, 7, 7],
...     'D': [0, 4, 8, 4, 12, 14, 14]
... }, index=[1, 2, 3, 4, 5, 6, 7])

>>> df = df.drop_duplicates(['A', 'B'], keep='last')
>>> df
    A       B       C   D
2   foo     one     2   4
3   foo     two     4   8
4   cat     one     8   4
6   bar     three   7   14
7   bar     four    7   14


>>> df = df[df.duplicated(['A'], keep=False)]
>>> df
    A       B       C   D
2   foo     one     2   4
3   foo     two     4   8
6   bar     three   7   14
7   bar     four    7   14

keep='last' is optional here; keep='first' would work just as well, since all we need is one surviving row per (A, B) pair.
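
For example, starting again from the original df (before the reassignments above), the same two steps can be chained with keep='first'; this is just a sketch, where .loc with a callable applies the boolean mask to the deduplicated frame:

>>> (df.drop_duplicates(['A', 'B'], keep='first')
...    .loc[lambda d: d.duplicated('A', keep=False)])
     A      B  C   D
1  foo    one  0   0
3  foo    two  4   8
5  bar   four  6  12
6  bar  three  7  14

Only the surviving row within each (A, B) pair changes.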

Upvotes: 1

Dani Mesejo

Reputation: 61910

Use groupby + filter + head:

result = df.groupby('A').filter(lambda g: len(g) > 1).groupby(['A', 'B']).head(1)
print(result)

Output

     A      B  C   D
0  foo    one  0   0
2  foo    two  4   8
4  bar   four  6  12
5  bar  three  7  14

The first groupby + filter removes the rows whose A value is not duplicated (i.e. cat); the second groups rows with the same A and B and takes the first element of each group.
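
Since the question asks about efficiency at roughly 11,000 rows: the Python-level lambda in the filter step can be replaced with a vectorized group-size count. This is just a sketch of the same logic (the mask keeps exactly the rows the filter keeps; any speedup is an assumption, not benchmarked here):

# transform('size') broadcasts each A-group's row count back onto its rows,
# so the mask marks the rows whose A value occurs more than once
mask = df.groupby('A')['A'].transform('size') > 1
result = df[mask].groupby(['A', 'B']).head(1)
print(result)

The output is the same as above.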

Upvotes: 4
