nkldtd

Reputation: 31

Check if column values in variable time range are unique

I have a DataFrame similar to this but with > 10000000 rows:

import pandas as pd

data = {'timestamp': ['1970-01-01 00:27:00', '1970-01-01 00:27:10', '1970-01-01 00:27:20',
                      '1970-01-01 00:27:30', '1970-01-01 00:27:40', '1970-01-01 00:27:50',
                      '1970-01-01 00:28:00', '1970-01-01 00:28:10', '1970-01-01 00:28:20',
                      '1970-01-01 00:28:30', '1970-01-01 00:28:40', '1970-01-01 00:28:50'],
        'label': [0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0]}
df = pd.DataFrame(data, columns=['label'], index=data['timestamp'])
df.index = pd.to_datetime(df.index)


Index                 label
1970-01-01 00:27:00   0
1970-01-01 00:27:10   0
1970-01-01 00:27:20   1
1970-01-01 00:27:30   1
1970-01-01 00:27:40   1
1970-01-01 00:27:50   1
1970-01-01 00:28:00   0
1970-01-01 00:28:10   0
1970-01-01 00:28:20   1
1970-01-01 00:28:30   1
1970-01-01 00:28:40   1
1970-01-01 00:28:50   0

The goal is to keep all rows where the column 'label' equals 0, and to keep rows where 'label' equals 1 only if the value stays 1 over a given time range. For example, besides the 0 values, I only want to keep the rows where the label is 1 constantly for at least 30 seconds. The result should look like this:

Index                 label
1970-01-01 00:27:00   0
1970-01-01 00:27:10   0
1970-01-01 00:27:20   1
1970-01-01 00:27:30   1
1970-01-01 00:27:40   1
1970-01-01 00:27:50   1
1970-01-01 00:28:00   0
1970-01-01 00:28:10   0
1970-01-01 00:28:50   0

The following code does the job, but for huge datasets like mine it is impractically slow.

from datetime import timedelta

valid_range = 30
valid_df = df[df['label'] == 1].index.values.size
df_temp = df.copy()
drop_list = []

while valid_df != 0:
    # Take the earliest remaining row labelled 1 and look 30 s ahead
    begin = df_temp[df_temp['label'] == 1].index[0]
    end = begin + timedelta(seconds=valid_range)

    if df_temp['label'].loc[begin:end].nunique() == 1:
        # The whole window is 1s: keep it and continue after the window
        df_temp = df_temp.loc[df_temp.index > end]
    else:
        # The window contains a 0: mark this row for dropping
        df_temp.drop(begin, axis=0, inplace=True)
        drop_list.append(begin)

    valid_df = df_temp[df_temp['label'] == 1].index.values.size

df.drop(drop_list, axis=0, inplace=True)

Any suggestions on how to do this better/faster/with less memory consumption?


EDIT: My DataFrame may have time gaps and is not continuous, so I can't use the proposed answer to this question.
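To illustrate the gap issue with made-up timestamps: three consecutive rows can span far more than 30 seconds, so row-count or fixed-frequency approaches won't work here.

```python
import pandas as pd

# Hypothetical irregular timestamps: three consecutive rows with a gap
idx = pd.to_datetime(['1970-01-01 00:27:00', '1970-01-01 00:27:10',
                      '1970-01-01 00:30:00'])
s = pd.Series([1, 1, 1], index=idx)
# Three consecutive samples, yet the real elapsed time is 180 s
elapsed = (s.index[-1] - s.index[0]).total_seconds()
```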

Upvotes: 3

Views: 284

Answers (3)

nkldtd

Reputation: 31

I figured out a solution that works for my situation. I extended the DataFrame with a few more 'challenging' data points.

import pandas as pd
from datetime import timedelta

data = {'timestamp': ['1970-01-01 00:27:00', '1970-01-01 00:27:10', '1970-01-01 00:27:20',
                      '1970-01-01 00:27:30', '1970-01-01 00:27:40', '1970-01-01 00:27:50',
                      '1970-01-01 00:28:00', '1970-01-01 00:28:10', '1970-01-01 00:28:20',
                      '1970-01-01 00:28:30', '1970-01-01 00:28:40', '1970-01-01 00:28:50',
                      '1970-01-01 00:32:10', '1970-01-01 00:33:50', '1970-01-01 00:34:58',
                      '1970-01-01 00:34:59', '1970-01-01 00:35:20', '1970-01-01 00:35:25',
                      '1970-01-01 00:35:30', '1970-01-01 00:35:56', '1970-01-01 00:35:59',
                      '1970-01-01 00:36:24'],
        'label': [0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1]}
df = pd.DataFrame(data, columns=['label'], index=data['timestamp'])
df.index = pd.to_datetime(df.index)

Function:

def check_time_range(df, column, valid_range=30):
    df['diff'] = df[column].diff()
    # Rows where the label switches from 0 to 1 start a candidate range
    begin_points = df.index[df['diff'] == 1].tolist()
    drop_list = []
    for begin in begin_points:
        end = begin + timedelta(seconds=valid_range)
        # Drop the range if the window is not all 1s, or contains only a single 1
        if df[column].loc[begin:end].nunique() != 1 or \
           df[column][(df[column] == 1) & (df.index >= begin) & (df.index < end)].sum() <= 1:
            try:
                # Get the index where 'label' changes back to 0
                changed_back = df[(df['diff'] == -1) & (df.index >= begin)].index[0]
                index_list = df.index[(df.index >= begin) & (df.index < changed_back)].tolist()
            except IndexError:
                # The label never changes back: drop everything from begin on
                index_list = df.index[df.index >= begin].tolist()
            drop_list.append(index_list)
    flatten_drop_list = [item for sublist in drop_list for item in sublist]
    df_new = df.drop(flatten_drop_list, axis=0)
    return df_new

Timing:

In [1]: %timeit df_new = check_time_range(df, 'label', 30)
12.8 ms ± 497 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Upvotes: 0

Roelant

Reputation: 5119

I guess there are many ways to do this; this is just one method I would take. On your sample it's significantly faster (100 loops, best of 3: 16.3 ms per loop instead of 10 loops, best of 3: 46.6 ms per loop). You can probably optimise it further, but I spelled out all the steps to be clear.

df['group'] = (df['label'] != df['label'].shift()).cumsum()  # group consecutive equal labels
df['first'] = df.groupby('group')['timestamp'].transform('first')  # first time of each group
df['duration'] = (df['timestamp'] - df['first']).dt.seconds  # elapsed time within the group
df['max_duration'] = df.groupby('group')['duration'].transform('last')  # total duration of the group
df[(df['max_duration'] >= 30) | (df['label'] == 0)]  # filter

I changed the input data a bit:

import pandas as pd

data = {'timestamp': ['1970-01-01 00:27:00', '1970-01-01 00:27:10', '1970-01-01 00:27:20',
                      '1970-01-01 00:27:30', '1970-01-01 00:27:40', '1970-01-01 00:27:50',
                      '1970-01-01 00:28:00', '1970-01-01 00:28:10', '1970-01-01 00:28:20',
                      '1970-01-01 00:28:30', '1970-01-01 00:28:40', '1970-01-01 00:28:50'],
        'label': [0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0]}
df = pd.DataFrame(data, columns=['timestamp', 'label'])
df['timestamp'] = pd.to_datetime(df['timestamp'])
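End to end (selecting the `timestamp` column explicitly when taking each group's first time), the steps filter the sample down to the expected nine rows:

```python
import pandas as pd

data = {'timestamp': ['1970-01-01 00:27:00', '1970-01-01 00:27:10', '1970-01-01 00:27:20',
                      '1970-01-01 00:27:30', '1970-01-01 00:27:40', '1970-01-01 00:27:50',
                      '1970-01-01 00:28:00', '1970-01-01 00:28:10', '1970-01-01 00:28:20',
                      '1970-01-01 00:28:30', '1970-01-01 00:28:40', '1970-01-01 00:28:50'],
        'label': [0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0]}
df = pd.DataFrame(data, columns=['timestamp', 'label'])
df['timestamp'] = pd.to_datetime(df['timestamp'])

df['group'] = (df['label'] != df['label'].shift()).cumsum()          # group consecutive labels
df['first'] = df.groupby('group')['timestamp'].transform('first')    # first time of each group
df['duration'] = (df['timestamp'] - df['first']).dt.seconds          # elapsed time in the group
df['max_duration'] = df.groupby('group')['duration'].transform('last')
result = df[(df['max_duration'] >= 30) | (df['label'] == 0)]         # keep 0s and long 1-runs
```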

Upvotes: 0

Varsha Venkatesh

Reputation: 320

You can try a combination of groupby and filtering on the group results:

import pandas as pd

data = {'timestamp': ['1970-01-01 00:27:00', '1970-01-01 00:27:10', '1970-01-01 00:27:20',
                      '1970-01-01 00:27:30', '1970-01-01 00:27:40', '1970-01-01 00:27:50',
                      '1970-01-01 00:28:00', '1970-01-01 00:28:10', '1970-01-01 00:28:20',
                      '1970-01-01 00:28:30', '1970-01-01 00:28:40', '1970-01-01 00:28:50'],
        'label': [0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0]}
df = pd.DataFrame(data, columns=['label'], index=data['timestamp'])
df["time"] = df.index
df["time"] = pd.to_datetime(df["time"], errors='coerce')
# Elapsed seconds since the previous row
df["delta"] = (df["time"] - df["time"].shift()).dt.total_seconds()
# Group consecutive rows with the same label
gp = df.groupby([(df.label != df.label.shift()).cumsum()])
# Keep only groups whose deltas span more than 30 seconds
rem = gp.filter(lambda g: g.delta.sum() > 30)
new_df = pd.concat([rem[rem.label == 1], df[df.label == 0]], axis=0).sort_index()
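One subtlety worth noting: `g.delta.sum()` also counts the step *into* the group from the preceding row, so a run of four samples at 10 s spacing sums to 40 s rather than 30 s, which is why the strict `> 30` comparison cuts exactly where intended. A small illustration with hypothetical values:

```python
import pandas as pd

# A run of four 1s sampled every 10 s, preceded by one 0
t = pd.to_datetime(['1970-01-01 00:27:10', '1970-01-01 00:27:20',
                    '1970-01-01 00:27:30', '1970-01-01 00:27:40',
                    '1970-01-01 00:27:50'])
s = pd.Series([0, 1, 1, 1, 1], index=t)
delta = pd.Series(t, index=t).diff().dt.total_seconds()
# The delta of the run's first row (10 s) is included in the sum
run_sum = delta[s == 1].sum()  # 10 + 10 + 10 + 10
```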

Upvotes: 1
