A H

Reputation: 2580

Find Duplicates limited to multiple ranges - pandas

Suppose our problem can be simplified like this:

import pandas as pd

df = pd.DataFrame()
df['C_rows'] = ['C1', 'C2', 'C3', 'C2', 'C1', 'C2', 'C3', 'C1', 'C2', 'C3', 'C4', 'C1']
df['values'] = ['customer1', 4321, 1266, 5671, 'customer2', 123, 7344, 'customer3', 4321, 4444, 5674, 'customer4']

with the table:

    C_rows  values
0   C1      customer1
1   C2      4321
2   C3      1266
3   C2      5671
4   C1      customer2
5   C2      123
6   C3      7344
7   C1      customer3
8   C2      4321
9   C3      4444
10  C4      5674
11  C1      customer4

How can we vectorise finding duplicate C_rows between each C1? For example, row 3 has a duplicate C2, which already occurs in row 1. The dataset I am working with has 50,000 rows, with about 15 rows between each C1.

e.g. check duplicates like this:

    C_rows  values
0   C1      customer1
1   C2      4321
2   C3      1266
3   C2      5671

C2 is duplicated

4   C1      customer2
5   C2      123
6   C3      7344

No duplicates

7   C1      customer3
8   C2      4321
9   C3      4444
10  C4      5674

No duplicates

without using for loops, and quickly (vectorised).

Upvotes: 2

Views: 293

Answers (3)

jezrael

Reputation: 863166

For a very fast vectorized solution, create a new column that numbers the consecutive blocks between each C1 (with cumsum), then check duplicated:

# number the C1 blocks with cumsum, then flag rows duplicated within their block
df['dupe'] = df.assign(dupe=df['C_rows'].eq('C1').cumsum()).duplicated(['C_rows','dupe'])
print(df)
   C_rows     values   dupe
0      C1  customer1  False
1      C2       4321  False
2      C3       1266  False
3      C2       5671   True
4      C1  customer2  False
5      C2        123  False
6      C3       7344  False
7      C1  customer3  False
8      C2       4321  False
9      C3       4444  False
10     C4       5674  False
11     C1  customer4  False
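
The helper column is simply a running count of the C1 markers, so every block between consecutive C1 rows gets its own number. A quick sketch of that intermediate step (the name groups is just for illustration):

groups = df['C_rows'].eq('C1').cumsum()
print(groups.tolist())
# [1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 4]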

And if you need to filter:

df = df[df.assign(dupe=df['C_rows'].eq('C1').cumsum()).duplicated(['C_rows','dupe'])]
print(df)
  C_rows values
3     C2   5671

If you want to flag whole groups that contain duplicates:

df = df.assign(dupe=df['C_rows'].eq('C1').cumsum())
# collect the group numbers that contain at least one duplicated C_row
a = df.loc[df.duplicated(['C_rows','dupe']), 'dupe']
# flag every row whose group appears in that set
df['dupe'] = df['dupe'].isin(a)
print(df)
   C_rows     values   dupe
0      C1  customer1   True
1      C2       4321   True
2      C3       1266   True
3      C2       5671   True
4      C1  customer2  False
5      C2        123  False
6      C3       7344  False
7      C1  customer3  False
8      C2       4321  False
9      C3       4444  False
10     C4       5674  False
11     C1  customer4  False

Upvotes: 3

cs95

Reputation: 402814

Seems like groupby + apply (with duplicated) would do it.

# group by the blocks between C1 markers, then flag duplicates within each block
df.groupby(df.C_rows.eq('C1').cumsum()).C_rows.apply(pd.Series.duplicated)

0     False
1     False
2     False
3      True
4     False
5     False
6     False
7     False
8     False
9     False
10    False
11    False
Name: C_rows, dtype: bool

Use the mask to filter df down to the duplicated rows.
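
A minimal sketch of that last step, assuming the mask keeps the original index (as in the output above):

mask = df.groupby(df.C_rows.eq('C1').cumsum()).C_rows.apply(pd.Series.duplicated)
print(df[mask])
#   C_rows values
# 3     C2   5671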

Upvotes: 3

Bharath M Shetty

Reputation: 30605

You can use transform and duplicated, i.e.

# start a new group at each row whose value contains a letter (the customer rows)
df['g'] = df['values'].astype(str).str.contains('[A-Za-z]').cumsum()
df['is_dup'] = df.groupby('g')['C_rows'].transform(lambda x: x.duplicated().any())

  C_rows     values  g  is_dup
0      C1  customer1  1    True
1      C2       4321  1    True
2      C3       1266  1    True
3      C2       5671  1    True
4      C1  customer2  2   False
5      C2        123  2   False
6      C3       7344  2   False
7      C1  customer3  3   False
8      C2       4321  3   False
9      C3       4444  3   False
10     C4       5674  3   False
11     C1  customer4  4   False

If you want to flag only the duplicated rows themselves, remove the any():

df['is_dup'] = df.groupby('g')['C_rows'].transform(lambda x: x.duplicated())
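
Either way, the boolean column can then be used to pull out the offending rows, e.g. (a quick sketch):

print(df[df['is_dup']])
# only row 3, the second C2 in the first block, is flagged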

Upvotes: 3
