Reputation: 6651
I have a time series of vector data -- each point being a 2D vector. I would like to calculate an autocorrelation (or something like it -- excuse me if I'm misusing the language here). Let's say the vector at time t is v(t). What I want is to calculate vector dot products so that my correlation looks like:
C(T) = ∑ v⃗(t) · v⃗(t+T)
summed over all t such that both v(t) and v(t+T) exist.
Is there a clean, compact way to do this with numpy? (Would be happy to give a try to answers from scipy etc. too.) Thanks.
Upvotes: 1
Views: 660
I will assume v has the following format:
v = numpy.array( [[1,2], [4,2], [15,34], [2,3]] )
Extract the two components:
v1 = v[:,0]
v2 = v[:,1]
Then use numpy.correlate on each component and add the results; summing the per-component autocorrelations is the same as summing the dot products:
C = numpy.correlate(v1,v1,'full') + numpy.correlate(v2,v2,'full')
You only need half of the result, since the autocorrelation of a real signal is symmetric. The half corresponding to non-negative lags is:
C = C[len(C) // 2:]
Note the integer division: in Python 3, len(C)/2 is a float and cannot be used as an index.
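For reference, here is the whole recipe as a self-contained sketch, checked against a direct evaluation of the sum from the question (the example data is just an illustration):

```python
import numpy as np

# Example input: a time series of 2D vectors, one per row.
v = np.array([[1, 2], [4, 2], [15, 34], [2, 3]], dtype=float)

# Per-component autocorrelation, summed over the two components.
# This equals sum_t v(t) . v(t+T), because the dot product is itself
# a sum over components.
C = np.correlate(v[:, 0], v[:, 0], 'full') + np.correlate(v[:, 1], v[:, 1], 'full')
C = C[len(C) // 2:]  # keep lags T >= 0 (integer division for Python 3)

# Sanity check: evaluate C(T) = sum_t v(t) . v(t+T) directly with a loop.
n = len(v)
C_direct = np.array([sum(np.dot(v[t], v[t + T]) for t in range(n - T))
                     for T in range(n)])
assert np.allclose(C, C_direct)
```

The result has one entry per lag T = 0, 1, ..., n-1; C[0] is the sum of squared vector norms.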
Upvotes: 2