BML91

Reputation: 3170

Pandas time series - join by closest time

I have two dataframes which can be represented by the following MWE:

import pandas as pd
from datetime import datetime
import numpy as np

df_1 = pd.DataFrame(np.random.randn(9), columns = ['A'], index= [
                                                datetime(2015,1,1,19,30,1,20),
                                                datetime(2015,1,1,20,30,2,12),
                                                datetime(2015,1,1,21,30,3,50),
                                                datetime(2015,1,1,22,30,5,43),
                                                datetime(2015,1,1,22,30,52,11),
                                                datetime(2015,1,1,23,30,54,8),
                                                datetime(2015,1,1,23,40,14,2),
                                                datetime(2015,1,1,23,41,13,33),
                                                datetime(2015,1,1,23,50,21,32),
                                                ])

df_2 = pd.DataFrame(np.random.randn(9), columns = ['B'], index= [
                                                datetime(2015,1,1,18,30,1,20),
                                                datetime(2015,1,1,21,0,2,12),
                                                datetime(2015,1,1,21,31,3,50),
                                                datetime(2015,1,1,22,34,5,43),
                                                datetime(2015,1,1,22,35,52,11),
                                                datetime(2015,1,1,23,0,54,8),
                                                datetime(2015,1,1,23,41,14,2),
                                                datetime(2015,1,1,23,42,13,33),
                                                datetime(2015,1,1,23,56,21,32),
                                                ])

I want to merge the two dataframes into one. I'm aware I can do this with the following code:

In [21]: df_1.join(df_2, how='outer')
Out[21]: 
                                   A         B
2015-01-01 18:30:01.000020       NaN -1.411907
2015-01-01 19:30:01.000020  0.109913       NaN
2015-01-01 20:30:02.000012 -0.440529       NaN
2015-01-01 21:00:02.000012       NaN -1.277403
2015-01-01 21:30:03.000050 -0.194020       NaN
2015-01-01 21:31:03.000050       NaN -0.042259
2015-01-01 22:30:05.000043  1.445220       NaN
2015-01-01 22:30:52.000011 -0.341176       NaN
2015-01-01 22:34:05.000043       NaN  0.905912
2015-01-01 22:35:52.000011       NaN -0.167559
2015-01-01 23:00:54.000008       NaN  1.289961
2015-01-01 23:30:54.000008 -0.929973       NaN
2015-01-01 23:40:14.000002  0.077622       NaN
2015-01-01 23:41:13.000033 -1.688719       NaN
2015-01-01 23:41:14.000002       NaN  0.178439
2015-01-01 23:42:13.000033       NaN -0.911314
2015-01-01 23:50:21.000032 -0.750953       NaN
2015-01-01 23:56:21.000032       NaN  0.092930

This isn't quite what I want to achieve.

I want to merge df_2 into df_1 against the time series index of df_1 alone: for each row of df_1, the 'B' column should take the df_2 value whose timestamp is closest to that row's index.

I've achieved this in the past using iterrows, like the following:

for i, row in df_1.iterrows():
    df_2_temp = df_2.copy()
    df_2_temp['Timestamp'] = df_2_temp.index
    # total_seconds() is needed here: .seconds only returns the seconds
    # component and silently drops whole days from the difference.
    df_2_temp['Time Delta'] = abs(df_2_temp['Timestamp'] - row.name).apply(lambda x: x.total_seconds())
    closest_value = df_2_temp.sort_values('Time Delta').iloc[0]['B']
    df_1.loc[row.name, 'B'] = closest_value

This works, but this is slow and I have very large dataframes I want to perform this on.

Is there a faster solution? Perhaps a Pandas built in?

Upvotes: 4

Views: 2052

Answers (2)

Phil S

Reputation: 185

Pandas now provides the functionality I believe you are looking for in pd.merge_asof. For frames keyed on a DatetimeIndex, as in the question:

pd.merge_asof(df_1, df_2, left_index=True, right_index=True, direction='nearest')

See merge_asof docs
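Applied to frames shaped like the ones in the question (a minimal sketch with fixed values in place of np.random.randn so the output is reproducible; note merge_asof requires both indexes to be sorted):

```python
import pandas as pd
from datetime import datetime

df_1 = pd.DataFrame({'A': [1.0, 2.0, 3.0]},
                    index=[datetime(2015, 1, 1, 19, 30),
                           datetime(2015, 1, 1, 20, 30),
                           datetime(2015, 1, 1, 21, 30)])
df_2 = pd.DataFrame({'B': [10.0, 20.0, 30.0]},
                    index=[datetime(2015, 1, 1, 18, 30),
                           datetime(2015, 1, 1, 20, 29),
                           datetime(2015, 1, 1, 21, 35)])

# One output row per df_1 row; 'B' holds the df_2 value whose
# timestamp is closest to each df_1 timestamp.
merged = pd.merge_asof(df_1, df_2,
                       left_index=True, right_index=True,
                       direction='nearest')
print(merged['B'].tolist())  # [20.0, 20.0, 30.0]
```

Unlike the outer join in the question, this keeps exactly the rows of df_1 and never introduces NaNs in 'B' (the nearest match is always taken, however far away).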

Example: I have two devices, with one DataFrame per device, each with a dt column of dtype "datetime64[ns, UTC]":

t_df[['dt', 'mode', 'state']]:
                                dt  mode  state
0 2020-09-23 22:10:36.508000+00:00     1      0
1 2020-09-23 22:10:57.463000+00:00     1      0
2 2020-09-23 22:11:18.815000+00:00     1      0
3 2020-09-23 22:12:16.806000+00:00     1      0
4 2020-09-23 22:12:22.512000+00:00     1      0
5 2020-09-23 22:12:43.469000+00:00     1      0
6 2020-09-23 22:13:04.776000+00:00     1      0
7 2020-09-23 22:13:25.948000+00:00     1      0
8 2020-09-23 22:13:47.223000+00:00     1      0

v_df[['dt', 'temperature', 'pressure']]: 
                              dt  temperature  pressure
0 2020-09-23 22:12:04.204000+00:00        74.85   1004.50
1 2020-09-23 22:12:18.203000+00:00        74.82   1004.67
2 2020-09-23 22:12:30.358000+00:00        74.85   1004.71
3 2020-09-23 22:12:44.601000+00:00        74.82   1004.46
4 2020-09-23 22:12:59.158000+00:00        74.82   1004.67
5 2020-09-23 22:13:10.443000+00:00        74.82   1004.67
6 2020-09-23 22:13:24.577000+00:00        74.82   1004.67
7 2020-09-23 22:13:37.544000+00:00        74.82   1004.67
8 2020-09-23 22:13:50.106000+00:00        74.78   1004.63
9 2020-09-23 22:14:03.377000+00:00        74.78   1004.42

I used:

new_df = pd.merge_asof(v_df[['dt', 'temperature', 'pressure']], t_df[['dt', 'mode', 'state']], direction='nearest')

and my result:

                                dt  temperature  pressure  mode  state
0 2020-09-23 22:12:04.204000+00:00        74.85   1004.50     1      0
1 2020-09-23 22:12:18.203000+00:00        74.82   1004.67     1      0
2 2020-09-23 22:12:30.358000+00:00        74.85   1004.71     1      0
3 2020-09-23 22:12:44.601000+00:00        74.82   1004.46     1      0
4 2020-09-23 22:12:59.158000+00:00        74.82   1004.67     1      0
5 2020-09-23 22:13:10.443000+00:00        74.82   1004.67     1      0
6 2020-09-23 22:13:24.577000+00:00        74.82   1004.67     1      0
7 2020-09-23 22:13:37.544000+00:00        74.82   1004.67     1      0
8 2020-09-23 22:13:50.106000+00:00        74.78   1004.63     1      0
9 2020-09-23 22:14:03.377000+00:00        74.78   1004.42     1      0

Note: this example shows just the last 10 rows of each DataFrame, the tops of which are minutes apart. Here's a look at the last 10 rows after running the merge on the full DataFrames ('date' and 'time' columns were added in the merge operations for df1 and df2, respectively, for reference):

combo_df.iloc[-10:][['dt', 'date', 'time', 'pressure', 'temperature', 'mode', 'state']]

                                   dt                      date                      time  pressure  temperature  mode  state
4440 2020-09-23 22:12:04.204000+00:00  2020-09-23T22:12:04.204Z  2020-09-23T22:12:16.806Z   1004.50        74.85     1      0
4441 2020-09-23 22:12:18.203000+00:00  2020-09-23T22:12:18.203Z  2020-09-23T22:12:16.806Z   1004.67        74.82     1      0
4442 2020-09-23 22:12:30.358000+00:00  2020-09-23T22:12:30.358Z  2020-09-23T22:12:22.512Z   1004.71        74.85     1      0
4443 2020-09-23 22:12:44.601000+00:00  2020-09-23T22:12:44.601Z  2020-09-23T22:12:43.469Z   1004.46        74.82     1      0
4444 2020-09-23 22:12:59.158000+00:00  2020-09-23T22:12:59.158Z  2020-09-23T22:13:04.776Z   1004.67        74.82     1      0
4445 2020-09-23 22:13:10.443000+00:00  2020-09-23T22:13:10.443Z  2020-09-23T22:13:04.776Z   1004.67        74.82     1      0
4446 2020-09-23 22:13:24.577000+00:00  2020-09-23T22:13:24.577Z  2020-09-23T22:13:25.948Z   1004.67        74.82     1      0
4447 2020-09-23 22:13:37.544000+00:00  2020-09-23T22:13:37.544Z  2020-09-23T22:13:47.223Z   1004.67        74.82     1      0
4448 2020-09-23 22:13:50.106000+00:00  2020-09-23T22:13:50.106Z  2020-09-23T22:13:47.223Z   1004.63        74.78     1      0
4449 2020-09-23 22:14:03.377000+00:00  2020-09-23T22:14:03.377Z  2020-09-23T22:14:08.981Z   1004.42        74.78     1      0

Upvotes: 3

IanS

Reputation: 16241

This could be faster, even though apply is still a loop behind the scenes.

def find_idxmin(dt):
    # Absolute time difference between dt and every df_2 timestamp;
    # reset_index makes idxmin return a row number usable with iloc.
    return (df_2.index - dt).to_series().reset_index(drop=True).abs().idxmin()

df_1.apply(lambda row: df_2.iloc[find_idxmin(row.name)], axis=1)

I transform the DatetimeIndex to a series in order to apply abs and idxmin. I reset the index so that idxmin returns a row number that I can feed into iloc.
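A fully vectorized variant (my addition, not part of the original answer) pushes the whole nearest-timestamp lookup into a single call to Index.get_indexer with method='nearest', assuming df_2's index is sorted:

```python
import pandas as pd

df_1 = pd.DataFrame({'A': [0.1, 0.2]},
                    index=pd.DatetimeIndex(['2015-01-01 19:30',
                                            '2015-01-01 22:30']))
df_2 = pd.DataFrame({'B': [1.0, 2.0, 3.0]},
                    index=pd.DatetimeIndex(['2015-01-01 18:30',
                                            '2015-01-01 21:00',
                                            '2015-01-01 22:34']))

# For each df_1 timestamp, the position of the closest df_2 timestamp.
pos = df_2.index.get_indexer(df_1.index, method='nearest')
df_1['B'] = df_2['B'].to_numpy()[pos]
print(df_1['B'].tolist())  # [1.0, 3.0]
```

This avoids the per-row apply entirely, so it should scale much better on the large frames mentioned in the question.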


EDIT: This appears to be just as fast (5 ms) as the numpy-based answer linked in the comments:

def find_idxmin(dt):
    # Position of the smallest absolute time difference, computed on a
    # numpy array of datetimes.
    return np.argmin(np.abs(df_2.index.to_pydatetime() - dt))

For comparison, the iterrows solution from the question runs in about 30 ms, versus 5 ms here.
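One more built-in route worth noting (my addition, not part of the original answer): Series.reindex with method='nearest' does the same nearest-value alignment in one call, again assuming df_2's index is sorted:

```python
import pandas as pd

df_1 = pd.DataFrame({'A': [0.1, 0.2]},
                    index=pd.DatetimeIndex(['2015-01-01 19:30',
                                            '2015-01-01 22:30']))
df_2 = pd.DataFrame({'B': [1.0, 2.0, 3.0]},
                    index=pd.DatetimeIndex(['2015-01-01 18:30',
                                            '2015-01-01 21:00',
                                            '2015-01-01 22:34']))

# Align df_2['B'] onto df_1's index, filling each slot with the value
# at the nearest df_2 timestamp.
df_1['B'] = df_2['B'].reindex(df_1.index, method='nearest')
print(df_1['B'].tolist())  # [1.0, 3.0]
```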

Upvotes: 0
