ahoosh

Reputation: 1360

How to remove consecutive bad data points in Pandas

I have a Pandas dataframe that looks like:

import pandas as pd
import numpy as np
df = pd.DataFrame({"Dummy_Var": [1]*12, 
                   "B": [6, 143.3, 143.3, 143.3, 3, 4, 93.9, 93.9, 93.9, 2, 2, 7],
                   "C": [4.1, 23.2, 23.2, 23.2, 4.3, 2.5, 7.8, 7.8, 2, 7, 7, 7]})


    B       C       Dummy_Var
0   6.0     4.1     1
1   143.3   23.2    1
2   143.3   23.2    1
3   143.3   23.2    1
4   3.0     4.3     1
5   4.0     2.5     1
6   93.9    7.8     1
7   93.9    7.8     1
8   93.9    2.0     1
9   2.0     7.0     1
10  2.0     7.0     1
11  7.0     7.0     1

Whenever the same number shows up three or more times in a row, those values should be replaced with NaN. So the result should be:

    B       C       Dummy_Var
0   6.0     4.1     1
1   NaN     NaN     1
2   NaN     NaN     1
3   NaN     NaN     1
4   3.0     4.3     1
5   4.0     2.5     1
6   NaN     7.8     1
7   NaN     7.8     1
8   NaN     2.0     1
9   2.0     NaN     1
10  2.0     NaN     1
11  7.0     NaN     1

I have written a function that does that:

def non_sense_remover(df, examined_columns, allowed_repeating):
    def count_each_group(grp, column):
        # size of the run of identical consecutive values this group represents
        grp['Count'] = grp[column].count()
        return grp
    for col in examined_columns:
        # label runs of identical consecutive values, then flag runs longer than allowed
        runs = (df[col] != df[col].shift(1)).cumsum()
        sel = df.groupby(runs).apply(count_each_group, column=col)["Count"] > allowed_repeating
        df.loc[sel, col] = np.nan

    return df

df = non_sense_remover(df, ["B", "C"], 2)

However, my real dataframe has 2M rows and 18 columns, and running this function on it is very slow. Is there a more efficient way to do this? Am I missing something? Thanks in advance.

Upvotes: 4

Views: 1025

Answers (3)

jakevdp

Reputation: 86328

Constructing a boolean mask in this situation will be far more efficient than a solution based on apply(), particularly for large datasets. Here is an approach:

cols = df[['B', 'C']]
# True for rows that sit in the middle of a run of three identical values
mask = (cols.shift(-1) == cols) & (cols.shift(1) == cols)

# widen the mask to the neighbouring rows so the whole run is replaced
df[mask | mask.shift(1).fillna(False) | mask.shift(-1).fillna(False)] = np.nan

Edit:

For a more general approach, replacing runs of N or more repeated values with NaN, you could do something like this:

from functools import reduce
from operator import or_, and_

def replace_sequential_duplicates_with_nan(df, N):
    # `cols` is the frame of examined columns defined above, e.g. df[['B', 'C']]
    # True where a row ends a run of at least N identical consecutive values
    mask = reduce(and_, [cols.shift(i) == cols.shift(i + 1)
                         for i in range(N - 1)])
    # spread the flag back over the preceding rows so every member of the run is marked
    full_mask = reduce(or_, [mask.shift(-i).fillna(False)
                             for i in range(N)])
    df[full_mask] = np.nan
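
For example, on the question's DataFrame (a minimal usage sketch; it assumes the cols = df[['B', 'C']] frame from the first snippet is in scope, since the function reads cols from the enclosing scope):

cols = df[['B', 'C']]                          # examined columns, read inside the function
replace_sequential_duplicates_with_nan(df, 3)  # runs of 3 or more become NaN, in place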

Upvotes: 3

Michael Vienneau

Reputation: 1

From this link, it appears that using apply/transform (in your case, apply) is causing the biggest bottleneck here. The linked page goes into much more detail about why this is and how to solve it.
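
As a rough illustration of that point (a sketch of my own, not taken from the linked page): the per-run count the question computes with a custom apply can be obtained with a built-in transform, which keeps the work in pandas' optimized code path instead of calling a Python function once per group:

run_id = (df['B'] != df['B'].shift(1)).cumsum()      # new id whenever the value changes
run_len = df['B'].groupby(run_id).transform('size')  # run length per row, no custom apply
df.loc[run_len > 2, 'B'] = np.nan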

Upvotes: 0

BENY

Reputation: 323306

Using groupby + mask:

m = df[['B', 'C']]
df[['B', 'C']] = m.mask(
    m.apply(lambda x: x.groupby(x.diff().ne(0).cumsum()).transform('count')) > 2)
df
Out[1245]: 
      B    C  Dummy_Var
0   6.0  4.1          1
1   NaN  NaN          1
2   NaN  NaN          1
3   NaN  NaN          1
4   3.0  4.3          1
5   4.0  2.5          1
6   NaN  7.8          1
7   NaN  7.8          1
8   NaN  2.0          1
9   2.0  NaN          1
10  2.0  NaN          1
11  7.0  NaN          1
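
Unpacked for readability (a sketch equivalent to the one-liner above): x.diff().ne(0).cumsum() labels each run of identical consecutive values, and transform('count') gives the length of the run each cell belongs to, so anything sitting in a run longer than 2 gets masked.

m = df[['B', 'C']]

def run_lengths(s):
    run_id = s.diff().ne(0).cumsum()             # new id whenever the value changes
    return s.groupby(run_id).transform('count')  # length of the run each value is in

df[['B', 'C']] = m.mask(m.apply(run_lengths) > 2)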

Upvotes: 3
