Reputation: 77
I have a 3-vector c = [0.7, 0.5, 0.2]
and I want to multiply it with everything in an n-vector x = rand((-1,1),n)
such that I get a resulting n+2-vector y where y[i] == x[i]*c[3] + x[i-1]*c[2] + x[i-2]*c[1]
How should I do this in Julia? I feel like there should be a way to broadcast the smaller 3-vector to all the values in the n-vector. And for the edge cases, if i-1 or i-2 is out of bounds I just want zero for those components.
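For concreteness, here is a plain-loop version of what I mean (just a sketch to pin down the edge cases; the function name is only illustrative, and any x index outside 1:n contributes zero):
function slide_weights(x, c)          # illustrative reference implementation, not from any package
    n = length(x)
    y = zeros(n + 2)
    for i in 1:n+2, k in 1:3
        j = i - 3 + k                 # c[3] lines up with x[i], c[2] with x[i-1], c[1] with x[i-2]
        if 1 <= j <= n
            y[i] += x[j] * c[k]
        end
    end
    return y
end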
Upvotes: 4
Views: 411
Reputation: 2580
I have a package, Tullio.jl, for doing such things. The simplest use is like this:
julia> c = [0.7, 0.5, 0.2]; # from question
julia> x = [10, 100, 1000, 10_000]; # from another answer
julia> using Tullio, OffsetArrays
julia> @tullio y[i] := x[i]*c[3] + x[i-1]*c[2] + x[i-2]*c[1]
2-element OffsetArray(::Vector{Float64}, 3:4) with eltype Float64 with indices 3:4:
257.0
2570.0
julia> @tullio y[i] := x[i+k-3] * c[k] # sum over all k, range of i that's safe
2-element OffsetArray(::Array{Float64,1}, 3:4) with eltype Float64 with indices 3:4:
257.0
2570.0
Since eachindex(c) == 1:3, that's the range of k-values which this sums over, and the range of i is as big as it can be so that i+k-3 stays inside eachindex(x) == 1:4.
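Written out as an ordinary comprehension (not Tullio, just to make the inferred ranges explicit), that second form computes:
julia> [sum(x[i+k-3] * c[k] for k in 1:3) for i in 3:4]
2-element Array{Float64,1}:
 257.0
2570.0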
To extend the range of i by padding x with two zeros in each direction, write pad(i+k-3, 2). And to compute the shift of i needed to produce an ordinary 1-based Array, write i+_ on the left (and then the -3 makes no difference). Then:
julia> @tullio y[i+_] := x[pad(i+k, 2)] * c[k]
6-element Array{Float64,1}:
2.0
25.0
257.0
2570.0
5700.0
7000.0
On larger arrays, this won't be very fast (at the moment) as it must check at every step whether it is inside x or out in the padding. It's very likely that DSP.conv is a bit smarter about this. (Edit: DSP.jl never seems to be faster for this short c; with a kernel of length 1000 it becomes faster once x has about 10^6 elements.)
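For a rough comparison on a big input, something like this with BenchmarkTools should work (a sketch, not from the original answer; the function names are just for the benchmark and the timings depend on your machine):
using Tullio, DSP, BenchmarkTools

tullio_pad(x, c) = @tullio y[i+_] := x[pad(i+k, 2)] * c[k]   # padded Tullio form from above; pad width 2 fits length(c) == 3
dsp_pad(x, c) = conv(x, reverse(c))                          # DSP.jl form from the other answer

xbig = rand(10^6)
@btime tullio_pad($xbig, $c)
@btime dsp_pad($xbig, $c)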
Upvotes: 2
Reputation: 69869
If I understand your question correctly, you want a convolution, with the twist that in a standard convolution the vector c would be reversed. You can use e.g. DSP.jl for this. Is this what you want?
julia> using DSP
julia> c = [0.7, 0.5, 0.2]
3-element Array{Float64,1}:
0.7
0.5
0.2
julia> conv([10, 100, 1000, 10000], reverse(c))
6-element Array{Float64,1}:
1.9999999999996967
25.0
257.0000000000003
2569.9999999999995
5700.0
6999.999999999998
You can also implement it manually using dot from the LinearAlgebra module like this:
julia> using LinearAlgebra
julia> x = [10, 100, 1000, 10000]
4-element Array{Int64,1}:
10
100
1000
10000
julia> y = [0;0;x;0;0]
8-element Array{Int64,1}:
0
0
10
100
1000
10000
0
0
julia> [dot(@view(y[i:i+2]), c) for i in 1:length(x)+2]
6-element Array{Float64,1}:
2.0
25.0
257.0
2570.0
5700.0
7000.0
Upvotes: 5
Reputation: 7674
Here's one approach that uses ShiftedArrays.jl.
using ShiftedArrays
c = [0.7, 0.5, 0.2]
Create lagged versions of x, with initial zeros:
x = 1:5
xminus1 = lag(x, 1, default=0)
xminus2 = lag(x, 2, default=0)
Horizontally concatenate the vectors and use matrix multiplication with c:
X = [xminus2 xminus1 x]
X * c
Here's what X and X * c look like at the REPL:
julia> X = [xminus2 xminus1 x]
5×3 Array{Int64,2}:
0 0 1
0 1 2
1 2 3
2 3 4
3 4 5
julia> X * c
5-element Array{Float64,1}:
0.2
0.9
2.3
3.7
5.1
Note that this produces an output vector of length length(x), not length(x) + 2. I'm not sure how it would make sense for the output to be of length length(x) + 2, as you requested in the question.
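If you do want the length(x) + 2 output from the question, one option (a sketch, not part of the original answer) is to append two zeros to x before building the lagged columns, so the last two rows pick up the trailing terms:
xpad = [x; 0; 0]                                        # let i-1 and i-2 run past the end of x
Xpad = [lag(xpad, 2, default=0) lag(xpad, 1, default=0) xpad]
Xpad * c                                                # ≈ [0.2, 0.9, 2.3, 3.7, 5.1, 5.3, 3.5]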
Upvotes: 3