Reputation: 439
I have an input data frame for daily grocery spend which looks like this:
input_df1
Date Potatoes Spinach Lettuce
01/01/22 10 47 0
02/01/22 0 22 3
03/01/22 11 0 3
04/01/22 3 9 2
...
I need to apply a function that takes input_df1 + (previous inflated_df2 row * inflation%)
to get inflated_df2
(except for the first row - the first day of the month has no inflation effect, so it stays the same as input_df1). A plain-loop sketch of this recurrence follows the expected output below.
inflated_df2
inflation% 0.01 0.05 0.03
Date Potatoes Spinach Lettuce
01/01/22 10 47 0
02/01/22 0.10 24.35 3
03/01/22 11.0 1.218 3.09
04/01/22 3.11 9.06 2.093
...
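To make the recurrence explicit: for each column, inflated[i] = input[i] + inflated[i-1] * rate, with the first row copied unchanged. A minimal sketch with a plain Python loop (using the rates 0.01, 0.05, 0.03 from the table above) reproduces the expected values:
inflation_rates = {'Potatoes': 0.01, 'Spinach': 0.05, 'Lettuce': 0.03}
input_rows = [
    [10, 47, 0],
    [0, 22, 3],
    [11, 0, 3],
    [3, 9, 2],
]

# first row is unaffected by inflation
inflated_rows = [input_rows[0]]
for row in input_rows[1:]:
    prev = inflated_rows[-1]
    inflated_rows.append([v + p * r for v, p, r in zip(row, prev, inflation_rates.values())])

for row_out in inflated_rows:
    print([round(v, 6) for v in row_out])
# [10, 47, 0]
# [0.1, 24.35, 3.0]
# [11.001, 1.2175, 3.09]
# [3.11001, 9.060875, 2.0927]   (matches inflated_df2 above, up to rounding)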
This is what I attempted in order to get inflated_df2:
inflated_df2.iloc[2:3,:] = input_df1.iloc[0:1,:]
inflated_df2.iloc[3:,:] = inflated_df2.apply(lambda x: input_df1[x] + (x.shift(periods=1, fill_value=0)) * x['inflation%'])
Upvotes: 0
Views: 61
Reputation: 120429
You can use accumulate from itertools:
from itertools import accumulate

# per-column inflation rates
rates = {'Potatoes': 0.01, 'Spinach': 0.05, 'Lettuce': 0.03}
c = list(rates.keys())    # column names
r = list(rates.values())  # rates, aligned with the columns

# running accumulation: each new row = current values + previous accumulated row * rates
df[c] = list(accumulate(df[c].to_numpy(), lambda bal, val: val + bal * r))
Output:
>>> df
Date Potatoes Spinach Lettuce
0 01/01/22 10.00000 47.000000 0.0000
1 02/01/22 0.10000 24.350000 3.0000
2 03/01/22 11.00100 1.217500 3.0900
3 04/01/22 3.11001 9.060875 2.0927
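If you want to keep the raw daily spend untouched and produce a separate inflated frame, as in the question, a minimal sketch of the same idea (assuming the question's variable names input_df1 and inflated_df2) is:
from itertools import accumulate

rates = {'Potatoes': 0.01, 'Spinach': 0.05, 'Lettuce': 0.03}
cols = list(rates.keys())
r = list(rates.values())

# work on a copy so input_df1 keeps the original values
inflated_df2 = input_df1.copy()
inflated_df2[cols] = list(
    accumulate(input_df1[cols].to_numpy(), lambda bal, val: val + bal * r)
)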
Upvotes: 1