Reputation: 18242
In a lecture on YouTube, a professor said a Markov chain's state after n steps can be computed as S * Q^n, where S is the starting state and Q is the transition matrix.
I'm trying to replicate this using numpy.
import numpy as np
S = np.zeros(shape=(1,2))
Q = np.zeros(shape=(2,2))
#starting state
S[0] = [.2,.8]
#transition matrix
Q[0] = [.9, .1]
Q[1] = [.7, .3]
If I do print(S.dot(Q).dot(Q)), it gives me [[0.848 0.152]], which appears to be the correct answer (two steps into the future). However, this doesn't look quite the same as S * Q^n, so I tried print(S.dot(np.power(Q, 2))), but that gives me [[0.554 0.074]]. Where am I going wrong, or what don't I understand here?
Upvotes: 2
Views: 1470
Reputation: 1426
The expressions S.dot(Q).dot(Q) and S.dot(np.power(Q, 2)) are not the same thing. The first is the behaviour you want: a true matrix product. The second raises each element of Q to the second power individually (see the numpy.power documentation), which is not matrix multiplication.
For a more compact notation than repeatedly chaining .dot(Q), use:
S.dot(np.linalg.matrix_power(Q, n))
where n is the desired power.
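A quick sanity check, using the S and Q from the question, confirms that matrix_power agrees with the chained dot calls:

```python
import numpy as np

S = np.array([[0.2, 0.8]])      # starting distribution (row vector)
Q = np.array([[0.9, 0.1],
              [0.7, 0.3]])      # transition matrix

chained = S.dot(Q).dot(Q)                      # two explicit steps
powered = S.dot(np.linalg.matrix_power(Q, 2))  # S * Q^2 as one call
print(chained)
print(np.allclose(chained, powered))  # True — the two forms agree
```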
Upvotes: 7
Reputation: 6389
Stick with dot. Both np.power and the * operator perform elementwise multiplication, not matrix multiplication. Look at:
print(Q.dot(Q))
prints:
[[ 0.88 0.12]
[ 0.84 0.16]]
which is what I get by hand. Whereas
print(np.power(Q, 2))
print(Q * Q)
both print:
[[ 0.81 0.01]
[ 0.49 0.09]]
So S.dot(Q).dot(Q) is correct.
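One way to see why only the matrix product is right here: every row of a transition matrix is a probability distribution, so its entries must sum to 1, and a true matrix power preserves that property while an elementwise power does not. A short sketch, using the Q from this answer:

```python
import numpy as np

Q = np.array([[0.9, 0.1],
              [0.7, 0.3]])

matrix_sq = Q.dot(Q)    # true matrix product: rows remain probability distributions
elementwise_sq = Q * Q  # same as np.power(Q, 2): each entry squared independently

# Row sums: 1.0 for the matrix square, but not for the elementwise square.
print(np.allclose(matrix_sq.sum(axis=1), 1.0))       # True
print(np.allclose(elementwise_sq.sum(axis=1), 1.0))  # False
```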
Upvotes: 4