Reputation: 8996
I'm looking to filter a large dataframe (millions of rows) based on another much smaller dataframe that has only three columns: ID, Start, End.
The following is what I put together (which works), but it seems like a groupby() or np.where might be faster.
SETUP:
import pandas as pd
import io
csv = io.StringIO(u'''
time id num
2018-01-01 00:00:00 A 1
2018-01-01 01:00:00 A 2
2018-01-01 02:00:00 A 3
2018-01-01 03:00:00 A 4
2018-01-01 04:00:00 A 5
2018-01-01 05:00:00 A 6
2018-01-01 06:00:00 A 6
2018-01-03 07:00:00 B 10
2018-01-03 08:00:00 B 11
2018-01-03 09:00:00 B 12
2018-01-03 10:00:00 B 13
2018-01-03 11:00:00 B 14
2018-01-03 12:00:00 B 15
2018-01-03 13:00:00 B 16
2018-05-29 23:00:00 C 111
2018-05-30 00:00:00 C 122
2018-05-30 01:00:00 C 133
2018-05-30 02:00:00 C 144
2018-05-30 03:00:00 C 155
''')
df = pd.read_csv(csv, sep = '\t')
df['time'] = pd.to_datetime(df['time'])
csv_filter = io.StringIO(u'''
id start end
A 2018-01-01 01:00:00 2018-01-01 02:00:00
B 2018-01-03 09:00:00 2018-01-03 12:00:00
C 2018-05-30 00:00:00 2018-05-30 08:00:00
''')
df_filter = pd.read_csv(csv_filter, sep = '\t')
df_filter['start'] = pd.to_datetime(df_filter['start'])
df_filter['end'] = pd.to_datetime(df_filter['end'])
WORKING CODE:
# backward asof join: keep only rows whose time is at or after the id's start
df = pd.merge_asof(df, df_filter, left_on='time', right_on='start', by='id').dropna(subset=['start']).drop(['start', 'end'], axis=1)
# forward asof join: keep only rows whose time is at or before the id's end
df = pd.merge_asof(df, df_filter, left_on='time', right_on='end', by='id', direction='forward').dropna(subset=['end']).drop(['start', 'end'], axis=1)
OUTPUT:
time id num
0 2018-01-01 01:00:00 A 2
1 2018-01-01 02:00:00 A 3
6 2018-01-03 09:00:00 B 12
7 2018-01-03 10:00:00 B 13
8 2018-01-03 11:00:00 B 14
9 2018-01-03 12:00:00 B 15
11 2018-05-30 00:00:00 C 122
12 2018-05-30 01:00:00 C 133
13 2018-05-30 02:00:00 C 144
14 2018-05-30 03:00:00 C 155
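For reference, a rough sketch of the id-wise mask approach hinted at above (assuming each id appears at most once in df_filter; not benchmarked):
# Look up each row's window by id, then keep rows whose time falls inside it.
# Rows whose id is absent from df_filter get NaT bounds and are dropped by the comparison.
bounds = df_filter.set_index('id')
start = df['id'].map(bounds['start'])
end = df['id'].map(bounds['end'])
result = df[df['time'].between(start, end)]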
Any thoughts on a more elegant / faster solution?
Upvotes: 0
Views: 33
Reputation: 323316
Why not merge before filtering? Notice that this will eat up your memory when the data set is very big.
# inner-join on the shared id column, then keep only rows whose time falls within [start, end]
newdf = df.merge(df_filter)
newdf = newdf.loc[newdf.time.between(newdf.start, newdf.end), df.columns.tolist()]
newdf
Out[480]:
time id num
1 2018-01-01 01:00:00 A 2
2 2018-01-01 02:00:00 A 3
9 2018-01-03 09:00:00 B 12
10 2018-01-03 10:00:00 B 13
11 2018-01-03 11:00:00 B 14
12 2018-01-03 12:00:00 B 15
15 2018-05-30 00:00:00 C 122
16 2018-05-30 01:00:00 C 133
17 2018-05-30 02:00:00 C 144
18 2018-05-30 03:00:00 C 155
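If the merged frame is too big to hold in memory, a groupby-style variant of the same filter is one possible workaround (a sketch only, assuming one start/end row per id; not part of the answer above):
# Filter each id's group against its own window instead of materializing the full merge.
windows = df_filter.set_index('id')
parts = []
for i, g in df.groupby('id'):
    if i in windows.index:  # skip ids that have no window in df_filter
        s, e = windows.loc[i, 'start'], windows.loc[i, 'end']
        parts.append(g[g['time'].between(s, e)])
newdf = pd.concat(parts)
This avoids the row blow-up of the merge at the cost of a Python-level loop over the (much smaller) set of ids.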
Upvotes: 1