asymon

Reputation: 187

Combine two or more lambda functions into one

Below is code that calculates a moving average of two variables, filtered by several criteria (DATE_G, ID1_G, ID_C_T). There are many such blocks. Is it possible to combine them into a single operation to speed up the calculation, since the filtering is the same?

df['RES1_2Y'] = df.apply(
    lambda x: (df.loc[
        (
            (df.DATE_G < x.DATE_G)
            & (df.DATE_G >= (x.DATE_G + pd.DateOffset(days=-730)))
            & (df.ID1_G == x.ID1_G)
            & (df.ID_C_T == x.ID_C_T)
        ),
        "RES",
    ].mean()) if x.DATE_G > startdate else x.RES1_2Y,
    axis=1,
)


df['C1_2Y'] = df.apply(
    lambda x: (df.loc[
        (
            (df.DATE_G < x.DATE_G)
            & (df.DATE_G >= (x.DATE_G + pd.DateOffset(days=-730)))
            & (df.ID1_G == x.ID1_G)
            & (df.ID_C_T == x.ID_C_T)
        ),
        "S1",
    ].mean()) if x.DATE_G > startdate else x.C1_2Y,
    axis=1,
)

Result table (startdate = 31.12.2018)

DATE_G        ID1_G   ID_C_T   RES   S1   RES1_2Y   C1_2Y
01.01.2019        1        1     1    5
01.01.2019        2        2     1    6
01.01.2019        1        1     1    7      1.00    5.00
02.01.2019        2        2     0    5      1.00    6.00
03.01.2019        1        1     0    4      1.00    6.00
04.01.2019        2        2     1    6      0.50    5.50
04.01.2019        1        1     0    4      0.67    5.33
04.01.2019        2        2     1    6      0.67    5.67
05.01.2019       12        3     1    8
06.01.2019        1        1     0    6      0.50    5.00
07.01.2019        2        2     0    5      0.75    5.75
08.01.2019        1        3     1    4
09.01.2019        2        1     0    5
10.01.2019        2        2     1    3      0.60    5.60
10.01.2019        2        3     0    5
10.01.2019        2        1     0    6      0.00    5.00
10.01.2019        2        2     0    3      0.67    5.17
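
For reference, a minimal sketch of how the sample above might be set up for testing (my addition, not part of the question): dates are day-first and parsed to datetime, only the first few rows of the table are reproduced, and startdate is assumed to be a pd.Timestamp.

import pandas as pd

# Only the first few rows of the table above, to illustrate the expected dtypes
df = pd.DataFrame({
    "DATE_G": pd.to_datetime(
        ["01.01.2019", "01.01.2019", "01.01.2019", "02.01.2019", "03.01.2019"],
        dayfirst=True,
    ),
    "ID1_G":  [1, 2, 1, 2, 1],
    "ID_C_T": [1, 2, 1, 2, 1],
    "RES":    [1, 1, 1, 0, 0],
    "S1":     [5, 6, 7, 5, 4],
})
df["RES1_2Y"] = float("nan")   # target columns, filled in by the code above
df["C1_2Y"] = float("nan")
startdate = pd.Timestamp("2018-12-31")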

Upvotes: 0

Views: 436

Answers (3)

GZ0

Reputation: 4268

This is an alternative approach to solving the problem, using groupby and rolling; it should be more efficient on large dataframes.

start_date = pd.Timestamp("2018-12-31")
window_size = pd.offsets.Day(730)

group_cols = ["ID1_G", "ID_C_T", "DATE_G"]
dfg = df[df["DATE_G"] >= (start_date - window_size)].groupby(group_cols).agg({
   "DATE_G": "size", "RES": "sum", "S1": "sum"
})
dfg.columns = ["num_units", "RES_sum", "S1_sum"]  # Rename column names for clarity
dfg["date"] = dfg.index.get_level_values("DATE_G") # Repeat date values as a column for the rolling function

# Group by "ID1_G" and "ID_C_T", then compute time window statistics for each group
dfg_summary = dfg.groupby(["ID1_G", "ID_C_T"]).apply(
    lambda g: g.rolling(window_size, on="date", closed="left").sum()
)

# Compute rolling mean based on rolling sums and total number of units
dfg_summary = dfg_summary[["RES_sum", "S1_sum"]].div(dfg_summary["num_units"], axis=0)

# Join output with the original dataframe
df_to_update = df.join(dfg_summary, on=group_cols, how="inner")[["RES_sum", "S1_sum"]]

# Update the original dataframe
df_to_update.columns = ["RES1_2Y", "C1_2Y"]
df.update(df_to_update)

Side note: the solution would be much simpler if pandas' time-based rolling statistics provided better support for duplicate timestamps (see this issue).
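
For illustration only (not part of the original answer), the "much simpler" formulation alluded to would look roughly like the sketch below. Whether its window semantics are right for rows that share the same DATE_G, and how the MultiIndexed result gets mapped back onto those duplicate rows, is exactly where the limitation bites, so treat it as a sketch rather than a drop-in replacement.

# Sketch of the simpler rolling formulation the side note refers to;
# assumes DATE_G is a datetime column.
rolled = (
    df.set_index("DATE_G")
      .sort_index()
      .groupby(["ID1_G", "ID_C_T"])[["RES", "S1"]]
      .rolling("730D", closed="left")
      .mean()
)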

Upvotes: 0

GZ0

Reputation: 4268

A direct answer to your question is the following (with a minor optimization: the comparison against startdate is moved out of the lambda function).

df_to_update = df[df.DATE_G > startdate].apply(
    lambda x: (df.loc[
        (
            (df.DATE_G < x.DATE_G)
            & (df.DATE_G >= (x.DATE_G + pd.DateOffset(days=-730)))
            & (df.ID1_G == x.ID1_G)
            & (df.ID_C_T == x.ID_C_T)
        ),
        ["RES", "S1"],
    ].mean()),
    axis=1,
)

df_to_update.columns = ["RES1_2Y", "C1_2Y"]
df.update(df_to_update)
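
One detail worth noting (my addition, not part of the answer): df.update aligns on the index and only overwrites with non-NA values from the other frame, so rows with DATE_G <= startdate, which are absent from df_to_update, keep their existing RES1_2Y / C1_2Y. A toy illustration:

import pandas as pd

base = pd.DataFrame({"a": [1.0, 2.0, 3.0]})
patch = pd.DataFrame({"a": [10.0, None]}, index=[0, 2])  # NA entries are skipped
base.update(patch)                                       # in-place, aligned on index
print(base["a"].tolist())                                # [10.0, 2.0, 3.0]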

Upvotes: 2

Raphael

Reputation: 1811

Does this help you? You need to replace "# calculate your value here" with your logic.

def your_function(x):
    if x.DATE_G <= startdate:
        C1_2Y = x.C1_2Y
        RES1_2Y = x.RES1_2Y
    else:
        # calculate your value here
        C1_2Y = 0 # dummy
        RES1_2Y = 0 # dummy

    return pd.Series({
            'C1_2Y': C1_2Y,
            'RES1_2Y': RES1_2Y})


df[['C1_2Y', 'RES1_2Y']] = df.apply(your_function, axis=1)
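
For instance, a sketch of what "# calculate your value here" could look like with the question's own filter, building the boolean mask once per row and reusing it for both columns (assumes df, startdate and pandas are in scope as above):

def your_function(x):
    if x.DATE_G <= startdate:
        return pd.Series({'C1_2Y': x.C1_2Y, 'RES1_2Y': x.RES1_2Y})

    # Build the filter once and reuse it for both columns
    mask = (
        (df.DATE_G < x.DATE_G)
        & (df.DATE_G >= x.DATE_G - pd.DateOffset(days=730))
        & (df.ID1_G == x.ID1_G)
        & (df.ID_C_T == x.ID_C_T)
    )
    return pd.Series({
        'C1_2Y': df.loc[mask, 'S1'].mean(),
        'RES1_2Y': df.loc[mask, 'RES'].mean(),
    })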

Upvotes: 1
