Reputation: 190
I want to speed up a process on a dataframe in which every row holds a set of points (red points in the image), and I fit each row to a polynomial (blue points in the image).
My dataframe looks like this:
0 21.357071 21.357071 NaN 29.240519 20.909416 23.884323 NaN NaN 21.533360 19.145000 NaN
1 29.373487 29.373487 NaN 32.593994 26.423960 29.623251 NaN NaN 30.685534 29.297455 20.411913
2 19.116655 19.116655 NaN 27.120478 18.723265 19.857676 NaN NaN 20.249647 18.867172 NaN
I already did this with the following code:
import numpy as np
from numpy.polynomial import polynomial as poly

for index, row in df.iterrows():
    dataR = row.dropna()
    x = dataR.index.to_numpy(dtype=float)  # x = column index
    y = dataR.to_numpy(dtype=float)        # y = value
    coefs = poly.polyfit(x, y, deg=4)
    ffit = poly.polyval(np.arange(0, maxColumns, 1), coefs)
    df.loc[index, 0:maxColumns] = ffit
But my dataframe is very big so this is slow. I wonder if I can vectorize this code.
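Partial vectorization is possible here, under one assumption: rows that share the same NaN pattern share the same x values, and `numpy.polynomial.polynomial.polyfit` accepts a 2-D `y`, so every such group of rows can be fitted in a single call. A minimal sketch (the toy dataframe, `maxColumns`, and the degree guard are illustrative assumptions, not from the question):

```python
import numpy as np
import pandas as pd
from numpy.polynomial import polynomial as poly

# Toy frame standing in for the real data (assumption: integer column labels).
df = pd.DataFrame(
    [[1.0, 2.0, np.nan, 10.0, 17.0],
     [2.0, 3.0, np.nan, 11.0, 18.0],
     [0.0, 1.0, 2.0, 3.0, 4.0]],
    columns=range(5),
)
maxColumns = df.shape[1]
grid = np.arange(maxColumns, dtype=float)

fitted = pd.DataFrame(index=df.index, columns=df.columns, dtype=float)
# Group rows by their NaN mask; each group shares one x vector, so a
# single 2-D polyfit handles every row of the group at once.
for mask, group in df.groupby(df.isna().apply(tuple, axis=1)):
    valid = ~np.array(mask)
    x = grid[valid]
    y = group.values[:, valid].T            # shape (n_points, n_rows)
    deg = min(4, len(x) - 1)                # guard against short rows
    coefs = poly.polyfit(x, y, deg=deg)     # shape (deg + 1, n_rows)
    fitted.loc[group.index] = poly.polyval(grid, coefs)
```

With few distinct NaN patterns this replaces one `polyfit` call per row with one call per pattern, which is where the speedup comes from.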
Upvotes: 2
Views: 452
Reputation: 1804
Since each row is handled independently and its curve fit does not depend on any other row, you can simply parallelize the code using joblib:
from joblib import Parallel, delayed

def fit_curve(row):
    dataR = row.dropna()
    x = dataR.index.to_numpy(dtype=float)
    y = dataR.to_numpy(dtype=float)
    coefs = poly.polyfit(x, y, deg=4)
    return poly.polyval(np.arange(0, maxColumns, 1), coefs)

fitted_curves = Parallel(n_jobs=N)(delayed(fit_curve)(row) for index, row in df.iterrows())
df.loc[:, :] = fitted_curves
where N is the number of workers, i.e. the number of cores you want to use for this.
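A self-contained sketch of this approach, assuming joblib is installed; the toy dataframe, `n_jobs=2`, and the degree guard are placeholders, not from the answer:

```python
import numpy as np
import pandas as pd
from joblib import Parallel, delayed
from numpy.polynomial import polynomial as poly

df = pd.DataFrame(
    [[1.0, 2.0, np.nan, 10.0, 17.0],
     [0.0, 1.0, 2.0, 3.0, 4.0]],
    columns=range(5),
)
maxColumns = df.shape[1]

def fit_curve(row):
    dataR = row.dropna()
    x = dataR.index.to_numpy(dtype=float)
    y = dataR.to_numpy(dtype=float)
    deg = min(4, len(x) - 1)          # guard against rows with few points
    coefs = poly.polyfit(x, y, deg=deg)
    return poly.polyval(np.arange(maxColumns, dtype=float), coefs)

# Each row is fitted independently, so the loop parallelizes trivially.
fitted_curves = Parallel(n_jobs=2)(
    delayed(fit_curve)(row) for _, row in df.iterrows()
)
df.loc[:, :] = fitted_curves
```

Note that for cheap per-row work, process startup and pickling overhead can dominate; benchmarking against the plain loop is worthwhile before committing to a worker count.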
Upvotes: 1