Reputation: 5660
I have a DataFrame in pandas as shown below:
import numpy as np
import pandas as pd

df = pd.DataFrame({'origin_dte': ['2009-08-01', '2009-08-01', '2009-08-01', '2009-08-01', '2009-09-01', '2009-09-01', '2009-09-01'],
                   'date': ['2009-08-01', '2009-08-02', '2009-08-03', '2009-08-04', '2009-09-01', '2009-09-02', '2009-09-03'],
                   'bal_pred': [10., 11., 12., 13., 21., 22., 23.],
                   'dbal_pred': [np.nan, .25, .3, .5, np.nan, .4, .45]})
bal_pred date dbal_pred origin_dte
0 10 2009-08-01 NaN 2009-08-01
1 11 2009-08-02 0.25 2009-08-01
2 12 2009-08-03 0.30 2009-08-01
3 13 2009-08-04 0.50 2009-08-01
4 21 2009-09-01 NaN 2009-09-01
5 22 2009-09-02 0.40 2009-09-01
6 23 2009-09-03 0.45 2009-09-01
I want to loop through and replace each observation of bal_pred where dbal_pred is not NaN with dbal_pred[i] * bal_pred[i-1]. For example, the second value of bal_pred would become 0.25 * 10 = 2.5. When origin_dte changes, dbal_pred is again NaN, so the calculation skips that row (leaving its bal_pred unchanged) and continues with the next bal_pred. So df would look as shown below. I have a while loop that does this (a simplified sketch follows the expected output), but it takes a very long time on large data frames. I'd really appreciate a simpler/faster way to do this!
bal_pred date dbal_pred origin_dte
0 10.000 2009-08-01 NaN 2009-08-01
1 2.500 2009-08-02 0.25 2009-08-01
2 0.750 2009-08-03 0.30 2009-08-01
3 0.375 2009-08-04 0.50 2009-08-01
4 21.000 2009-09-01 NaN 2009-09-01
5 8.400 2009-09-02 0.40 2009-09-01
6 3.780 2009-09-03 0.45 2009-09-01
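For reference, the kind of loop I have now looks roughly like this (a simplified sketch, not my exact code):

i = 1
while i < len(df):
    if not np.isnan(df['dbal_pred'].iloc[i]):
        # multiply by the bal_pred just written on the previous row
        df.loc[i, 'bal_pred'] = df['dbal_pred'].iloc[i] * df['bal_pred'].iloc[i - 1]
    i += 1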
Upvotes: 3
Views: 947
Reputation: 61957
A different approach is to label each group of rows, fill the NaN in dbal_pred with the starting bal_pred, and then take the cumulative product within each group:
# label each run of rows that starts at a NaN in dbal_pred
group = df['dbal_pred'].isnull().cumsum()
# use bal_pred as the starting value where dbal_pred is NaN
filled = df['dbal_pred'].fillna(df['bal_pred'])
# cumulative product within each group gives the new bal_pred
df['bal_pred'] = filled.groupby(group).cumprod()
Output:
bal_pred date dbal_pred origin_dte
0 10.000 2009-08-01 NaN 2009-08-01
1 2.500 2009-08-02 0.25 2009-08-01
2 0.750 2009-08-03 0.30 2009-08-01
3 0.375 2009-08-04 0.50 2009-08-01
4 21.000 2009-09-01 NaN 2009-09-01
5 8.400 2009-09-02 0.40 2009-09-01
6 3.780 2009-09-03 0.45 2009-09-01
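If every origin_dte block starts with a NaN in dbal_pred, as in your sample data, you could also group on origin_dte directly; this is just a variant of the same idea:

filled = df['dbal_pred'].fillna(df['bal_pred'])
df['bal_pred'] = filled.groupby(df['origin_dte']).cumprod()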
Upvotes: 3
Reputation: 294258
# fillna with 1 so we can cumprod
c = df.dbal_pred.fillna(1).cumprod()
# track where null
n = df.dbal_pred.isnull()
# take cumprod where null and forward fill
d = c.where(n).ffill()
# cumprods divided by cumprod where last null
# gets us a grouped cumprod that starts over
# at every null.
# multiply this by `bal_pred` where null forward filled
# and voila
df.assign(bal_pred=c.div(d) * df.bal_pred.where(n).ffill())
bal_pred date dbal_pred origin_dte
0 10.000 2009-08-01 NaN 2009-08-01
1 2.500 2009-08-02 0.25 2009-08-01
2 0.750 2009-08-03 0.30 2009-08-01
3 0.375 2009-08-04 0.50 2009-08-01
4 21.000 2009-09-01 NaN 2009-09-01
5 8.400 2009-09-02 0.40 2009-09-01
6 3.780 2009-09-03 0.45 2009-09-01
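If the ratio trick isn't obvious, printing the intermediate series for the sample frame (assuming c, n, d from the snippet above and df from the question) may help:

print(pd.DataFrame({'c': c, 'n': n, 'd': d,
                    'c / d': c.div(d),
                    'bal at last null': df.bal_pred.where(n).ffill()}))

c / d restarts the cumulative product at every null, and multiplying by the forward-filled bal_pred scales each group back to its starting balance.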
Upvotes: 2