Maria

Reputation: 1297

Apply function with two arguments to columns

Can you make a pandas function with values in two different columns as arguments?

I have a function that returns 1 if two columns have values in the same range and 0 otherwise:

def segmentMatch(RealTime, ResponseTime):
    if RealTime <= 566 and ResponseTime <= 566:
        matchVar = 1
    elif 566 < RealTime <= 1132 and 566 < ResponseTime <= 1132:
        matchVar = 1
    elif 1132 < RealTime <= 1698 and 1132 < ResponseTime <= 1698:
        matchVar = 1
    else:
        matchVar = 0
    return matchVar

I want the first argument, RealTime, to be a column in my DataFrame, so that the function takes the value of each row in that column, e.g. RealTime is df['TimeCol'], and the second argument is df['ResponseCol']. I'd like the result to be a new column in the DataFrame. I came across several threads answering a similar question, but in those the arguments were plain variables, not values in the rows of a DataFrame.

I tried the following but it didn't work:

df['NewCol'] = df.apply(segmentMatch, args=(df['TimeCol'], df['ResponseCol']), axis=1)

Upvotes: 73

Views: 121637

Answers (4)

rdmtinez

Reputation: 93

At my current workplace the use of lambda functions is frowned upon, and perhaps you've encountered the same issue at your workplace. So I came up with the decorator below, which should work for any number of input or output columns, as long as your own function's logic is sound.

import functools  # functools.wraps below preserves the wrapped function's name and docstring
import pandas as pd

def unpack_df_columns(func):
    """
    A general-use decorator that unpacks a row of a df[subset]
    into the positional arguments the wrapped function expects.
    """

    @functools.wraps(func)
    def _unpack_df_columns(*args, **kwargs):

        # args[0] is the row that apply(axis=1) passes in: a Series
        # with one entry per column of df[subset]
        series = args[0]

        # series.values holds those entries in column order, so unpacking
        # them calls func with len(df[subset].columns) arguments
        return func(*series.values)

    return _unpack_df_columns

@unpack_df_columns
def two_arg_func(a, b):
    return pd.Series((a+b, a*b))

@unpack_df_columns
def three_arg_func(x, y, z):
    return x+y+z

df["x_y_z_sum"] = df[['x', 'y', 'z']].apply(three_arg_func, axis=1)

df[["a_b_sum", "a_b_prod"]] = df[['a', 'b']].apply(two_arg_func, axis=1)

Upvotes: 1

Nelewout

Reputation: 6554

Why not just do this?

df['NewCol'] = df.apply(lambda x: segmentMatch(x['TimeCol'], x['ResponseCol']), 
                        axis=1)

Rather than trying to pass the columns themselves as arguments, as in your example, we simply pass the appropriate entries of each row as arguments and store the result in 'NewCol'.
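
For illustration, a minimal end-to-end sketch with made-up times, reusing segmentMatch exactly as defined in the question:

import pandas as pd

# hypothetical sample data: rows 0 and 2 land in the same bucket,
# row 1 straddles the 566 boundary
df = pd.DataFrame({'TimeCol': [100, 600, 1200],
                   'ResponseCol': [200, 500, 1500]})

df['NewCol'] = df.apply(lambda x: segmentMatch(x['TimeCol'], x['ResponseCol']),
                        axis=1)
print(df)
#    TimeCol  ResponseCol  NewCol
# 0      100          200       1
# 1      600          500       0
# 2     1200         1500       1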

Upvotes: 120

Artem Sokolov

Reputation: 13691

A chain-friendly way to perform this operation is via assign(). Since segmentMatch compares scalar values, it still has to be applied row by row inside the lambda (assign hands the whole DataFrame to it):

df.assign(NewCol=lambda d: d.apply(
    lambda x: segmentMatch(x['TimeCol'], x['ResponseCol']), axis=1))
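
Because assign() returns a new DataFrame rather than modifying df in place, it slots straight into a method chain; here is a small sketch where the trailing query() filter is an invented follow-up step, just to show the chaining:

matched = (
    df.assign(NewCol=lambda d: d.apply(
          lambda x: segmentMatch(x['TimeCol'], x['ResponseCol']), axis=1))
      .query('NewCol == 1')  # hypothetical next step in the chain
)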

Upvotes: 5

rahul

Reputation: 351

You don't really need a lambda function if you define segmentMatch to take the whole row instead:

def segmentMatch(vec):
    RealTime = vec.iloc[0]
    ResponseTime = vec.iloc[1]
    if RealTime <= 566 and ResponseTime <= 566:
        matchVar = 1
    elif 566 < RealTime <= 1132 and 566 < ResponseTime <= 1132:
        matchVar = 1
    elif 1132 < RealTime <= 1698 and 1132 < ResponseTime <= 1698:
        matchVar = 1
    else:
        matchVar = 0
    return matchVar

df['NewCol'] = df[['TimeCol', 'ResponseCol']].apply(segmentMatch, axis=1)

If "segmentMatch" were to return a vector of 2 values instead, you could do the following:

def segmentMatch(vec):
    ......
    return pd.Series((matchVar1, matchVar2)) 

df[['NewCol', 'NewCol2']] = df[['TimeCol','ResponseCol']].apply(segmentMatch, axis=1)
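
To make the multi-output pattern concrete without guessing at the elided body, here is a minimal sketch with an invented two-value function (min and max of the row, purely for illustration) showing how the row-wise Series becomes two columns:

import pandas as pd

def two_outputs(vec):
    # hypothetical stand-in for a two-value segmentMatch
    return pd.Series((min(vec), max(vec)))

df = pd.DataFrame({'TimeCol': [100, 1200], 'ResponseCol': [200, 900]})
df[['NewCol', 'NewCol2']] = df[['TimeCol', 'ResponseCol']].apply(two_outputs, axis=1)
print(df)
#    TimeCol  ResponseCol  NewCol  NewCol2
# 0      100          200     100      200
# 1     1200          900     900     1200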

Upvotes: 24
