Reputation: 36313
I have a sample of data pairs in two arrays. For example:
times = [0, 1, 3, 3.5, 5, 6]
values = [1, 2, 3, 4, 5, 6]
So at time 0 the value is 1, at time 1 it is 2, and so on. As you can see, the time values are not regularly spaced (though they are ordered ascending in all cases). I'm looking for an efficient way to convert the above into
times1 = [0, 1, 2, 3, 4, 5, 6]
values1 = [1, 2, 2.5, 3, 4.333, 5, 6]
These values are calculated by linear interpolation between the original data points, as shown in this plot:
Of course I could make a loop to find these values and stuff them into a target array. But I wonder if numpy has something to do that "at once".
NB: This is similar to what I want (though a bit more trivial), so I guess that there is nothing out of the box. But who knows.
Upvotes: 0
Views: 53
Reputation:
With scipy, you can use interp1d:
from scipy.interpolate import interp1d

f = interp1d(times, values)  # linear interpolation by default
f(times1)
Out:
array([ 1.        ,  2.        ,  2.5       ,  3.        ,  4.33333333,  5.        ,  6.        ])
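Since the question asks about numpy specifically, it is also worth noting that numpy's own np.interp does the same linear interpolation in a single call. A minimal sketch, reusing the arrays from the question:

import numpy as np

times = [0, 1, 3, 3.5, 5, 6]
values = [1, 2, 3, 4, 5, 6]
times1 = [0, 1, 2, 3, 4, 5, 6]

# np.interp(x, xp, fp) evaluates the piecewise-linear interpolant
# defined by (xp, fp) at the points x; xp must be increasing.
np.interp(times1, times, values)
# -> array([1., 2., 2.5, 3., 4.33333333, 5., 6.])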
With pandas, this is also possible:
import pandas as pd

ser = pd.Series(values, index=times)
ser2 = pd.Series(index=times1, dtype=float)  # empty series on the target time grid
ser.combine_first(ser2).interpolate(method='index').reindex(ser2.index)
Out:
0 1.000000
1 2.000000
2 2.500000
3 3.000000
4 4.333333
5 5.000000
6 6.000000
dtype: float64
combine_first takes the union of both indices. interpolate is the main method that does the job; since you are doing a linear interpolation on the index values, you need to pass method='index'.
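If combine_first feels indirect, the same idea can be written as a plain index union followed by interpolate. This is a rough equivalent sketch (not part of the original answer), again assuming the arrays from the question:

import pandas as pd

ser = pd.Series(values, index=times)
union = ser.index.union(pd.Index(times1))  # all time points, original and target
ser.reindex(union).interpolate(method='index').reindex(times1)

The final reindex drops the original irregular time points and keeps only the target grid; the same holds for the combine_first version above.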
Upvotes: 1