TaterTots

Reputation: 99

Using Pandas to Iteratively Add Columns to a Dataframe

I have some relatively simple code that I'm struggling to put together. I have a CSV that I've read into a dataframe. The CSV is panel data (i.e., unique company and year observations for each row). I have two columns that I want to perform a function on and then I want to create new variables based on the output of the function.

Here's what I have so far with code:

#Loop through rows in a CSV file
for index, rows in df.iterrows():
    #Start at column 6 and go to the end of the file
    for row in rows[6:]:
        data = perform_function1( row )
        output =  perform_function2(data)    
        df.ix[index, 'new_variable'] = output
        print output

I want this code to iterate starting at column 6 and continue to the end of the file (e.g., I have two columns I want to perform the function on, Column6 and Column7), and then create new columns based on the functions that were performed (e.g., Output6 and Output7). The code above returns the output for Column7, but I can't figure out how to create a variable that lets me capture the outputs from both columns (i.e., a new variable that isn't overwritten by the loop). I searched Stack Overflow and didn't see anything that immediately related to my question (maybe because I'm too big of a noob?). I would really appreciate your help.

Thanks,

TT

P.S. I'm not sure if I've provided enough detail. Please let me know if I need to provide more.

Upvotes: 5

Views: 20644

Answers (3)

Alex Huszagh

Reputation: 14594

Pandas is quite slow when operating row by row: you're much better off using append, concat, merge, or join operations on the whole DataFrame.

To give some idea why, let's consider a random DataFrame example:

import numpy as np
import pandas as pd
dates = pd.date_range('20130101', periods=6)
df = pd.DataFrame(np.random.randn(6,4), index=dates, columns=list('ABCD'))
df2 = df.copy()
# operation to concatenate two dataframes
%timeit pd.concat([df2, df])
1000 loops, best of 3: 737 µs per loop
# single row operation
%timeit df.loc['2013-01-01']
1000 loops, best of 3: 251 µs per loop
# single element operation
%timeit df.loc['2013-01-01', 'A'] = 3
1000 loops, best of 3: 218 µs per loop

Notice how efficiently Pandas handles whole-DataFrame operations, and how inefficiently it handles operations on single elements?

If we scale this up, the same tendency holds, only much more pronounced:

df = pd.DataFrame(np.random.randn(200, 300))
# single element operation
%timeit df.loc[1,1] = 3
10000 loops, best of 3: 74.6 µs per loop
df2 = df.copy()
# full dataframe operation
%timeit pd.concat([df2, df])
1000 loops, best of 3: 830 µs per loop

Per element, the whole-DataFrame concat (830 µs for all 60,000 elements of the 200x300 DataFrame) is roughly 5,000 times faster than writing elements one at a time (74.6 µs each). In short, element-by-element iteration defeats the whole purpose of using Pandas. If you really need to build up data element by element, consider using a dictionary instead.
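As a rough sketch of that advice (the data and the functions `f1`/`f2` here are invented stand-ins for the asker's `perform_function1`/`perform_function2`): compute each output column in full first, then attach the results with whole-column assignments rather than per-cell writes.

```python
import numpy as np
import pandas as pd

# Toy frame standing in for the asker's panel data
df = pd.DataFrame(np.arange(12).reshape(6, 2), columns=['Column6', 'Column7'])

# Hypothetical stand-ins for perform_function1/perform_function2
def f1(x):
    return x * 2

def f2(x):
    return x + 1

# Compute every output column up front, keyed by its new name,
# then attach each one in a single whole-column assignment
results = {'Output' + name[-1]: df[name].apply(f1).apply(f2)
           for name in ['Column6', 'Column7']}
for name, series in results.items():
    df[name] = series
```

Each `df[name] = series` call touches the frame once per column instead of once per cell, which is where the speedup above comes from.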

Upvotes: 0

GeauxEric

Reputation: 3050

If you want to apply a function to certain columns in a dataframe:

# Get the Series (column 6 sits at position 5)
column6 = df.iloc[:, 5]
# perform_function1 applied to each element of the column
output6 = column6.apply(perform_function1)
df["new_variable"] = output6
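To cover both of the asker's columns, the same pattern extends to a loop over column names; a sketch, where `perform_function1` and the sample data are placeholders:

```python
import pandas as pd

# Placeholder for the asker's real function
def perform_function1(value):
    return value * 10

df = pd.DataFrame({'Company': ['A', 'B'],
                   'Column6': [1, 2],
                   'Column7': [3, 4]})

# Snapshot the original column names first, then add one
# "Output..." column per input column
for col in list(df.columns[1:]):
    df['Output' + col[-1]] = df[col].apply(perform_function1)
```

This yields one new Output6/Output7 column per input column, each built with a single whole-column `apply` rather than a row loop.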

Upvotes: 1

ASGM

Reputation: 11381

Operating iteratively doesn't take advantage of Pandas' capabilities. Pandas' strength is in applying operations efficiently across the whole dataframe, rather than in iterating row by row. It's great for a task like this where you want to chain a few functions across your data. You should be able to accomplish your whole task in a single line.

df["new_variable"] = df.iloc[:, 6].apply(perform_function1).apply(perform_function2)

perform_function1 will be applied to each element of the column, and perform_function2 will be applied to the results of the first function.
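A runnable sketch of this chained approach, with dummy stand-ins for the two functions (the function bodies and sample data are invented for illustration):

```python
import pandas as pd

# Dummy stand-ins for the asker's perform_function1/perform_function2
def perform_function1(x):
    return x + 1

def perform_function2(x):
    return x * 2

df = pd.DataFrame({'Column6': [1, 2, 3], 'Column7': [4, 5, 6]})

# applymap runs a function element-wise over the whole frame;
# chaining the calls feeds the first function's results into the second
outputs = df.applymap(perform_function1).applymap(perform_function2)
outputs.columns = ['Output6', 'Output7']
df = df.join(outputs)
```

Renaming before the `join` avoids pandas' column-label alignment, so the results land under the new Output6/Output7 names.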

Upvotes: 4
