Marlowe

Reputation: 41

How can I convert a transition dictionary into a transition matrix in a Markov chain?

I want to convert a transition dictionary into a transition matrix for a Markov chain. I have a dictionary in which each key's value lists the states reachable from that state (e.g. from A, I can go to B or E). I want to convert it into a matrix in which each row gives the probabilities of moving from that state to every other state.

dictionary = {'A': 'BE', 'B': 'AFC', 'C': 'BGD', 'D': 'CH', 'E': 'AF', 'F': 'EBG', 'G': 'FCH', 'H': 'GD'}

What I expect:

mat = [[0.5, 0, 0, 0, 0.5, 0, 0, 0],  # state A
       [0.333, 0, 0.333, 0, 0, 0.333, 0, 0],  # state B
       ...]  # until state H (8x8 matrix)

Upvotes: 1

Views: 545

Answers (1)

Stanislas Morbieu

Reputation: 1827

Here is how you convert the dictionary to the transition matrix:

import numpy as np


dictionary = {'A': 'BE', 'B': 'AFC', 'C': 'BGD', 'D': 'CH', 'E': 'AF', 'F': 'EBG', 'G': 'FCH', 'H': 'GD'}


# Map each state letter to a row/column index
# (dicts preserve insertion order in Python 3.7+).
letter_to_index = {letter: i for i, letter in enumerate(dictionary)}

n = len(dictionary)
mat = np.zeros((n, n))

# Each reachable state gets an equal share of the probability mass.
for start, ends in dictionary.items():
    for end in ends:
        mat[letter_to_index[start],
            letter_to_index[end]] += 1. / len(ends)
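To sanity-check the result, you can verify that every row of mat sums to 1 (each row is a probability distribution) and inspect a single row by state letter. This is a minimal self-contained sketch reusing the dictionary and the construction from the snippet above:

```python
import numpy as np

dictionary = {'A': 'BE', 'B': 'AFC', 'C': 'BGD', 'D': 'CH',
              'E': 'AF', 'F': 'EBG', 'G': 'FCH', 'H': 'GD'}

letter_to_index = {letter: i for i, letter in enumerate(dictionary)}
n = len(dictionary)
mat = np.zeros((n, n))
for start, ends in dictionary.items():
    for end in ends:
        mat[letter_to_index[start], letter_to_index[end]] += 1. / len(ends)

# Every row must sum to 1 to be a valid transition matrix.
assert np.allclose(mat.sum(axis=1), 1.0)

# Row A: probability 0.5 in the columns for B and E, 0 elsewhere.
print(mat[letter_to_index['A']])
```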

The values you gave as the expected result don't seem correct, however: the probabilities in the first row (state A) are in the wrong columns. From A you can only reach B and E, so row A should be [0, 0.5, 0, 0, 0.5, 0, 0, 0], matching the entries given in dictionary.

Upvotes: 1

Related Questions