I'm trying to increase the initial state vector of a discrete Markov chain at each step in order to solve for a state vector element at some future point in time, and the process is quite cumbersome.
For a simple example, assume a corporation tracks entry level employees across 3 states (entry level, promoted, quit company). The initial state vector, with 1,000 entry level employees, is defined as:
initialstate <- t(as.matrix(c(1000,0,0)))
The transition matrix for remaining entry level, getting promoted, or leaving the company is given by:
transitionmatrix <- matrix(c(0.5,0,0,0.2,1,0,0.3,0,1),nrow = 3,ncol = 3)
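Note that matrix() fills column-wise by default, so the rows of transitionmatrix are (0.5, 0.2, 0.3), (0, 1, 0) and (0, 0, 1); each row holds the outgoing probabilities for one state. A quick sanity check that the matrix is row-stochastic:
rowSums(transitionmatrix)
(returns 1 1 1, so each row sums to 1)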
After two iterations:
step1state <- initialstate%*%transitionmatrix
(500 entry level employees remaining)
step2state <- step1state%*%transitionmatrix
(250 entry level employees remaining)
After step2state, I have 250 entry level employees remaining, but suppose I want 1,300 entry level employees after Step 1 and 2,000 entry level employees after Step 2. To do this, I hire additional employees by increasing the state vectors. This becomes a cumbersome process: I increase the initialstate vector to account for new hires in Step 1, observe the number of entry level employees in Step 2, and then increase step1state to achieve the Step 2 target.
For example, after running the previous Markov chain, I run it again and add 800 new hires in Step 1:
step1state <- initialstate%*%transitionmatrix + t(as.matrix(c(800,0,0)))
step2state <- step1state%*%transitionmatrix
Step 1 now has 1,300 entry level employees as desired, but Step 2 requires 1,350 new hires (down from the 1,750 that would be needed in the initial run). The following satisfies my hiring targets in each period:
step1state <- initialstate%*%transitionmatrix + t(as.matrix(c(800,0,0)))
step2state <- step1state%*%transitionmatrix + t(as.matrix(c(1350,0,0)))
If my entry level employee target changes at any step, I have to re-run the Markov chain for every subsequent step, because the number of entry level employees depends on the counts in previous periods (the transition probabilities do not). The markovchain package in R didn't seem able to solve the state vector for specific values at each step, so I'm simply running a baseline Markov chain and iteratively adding new employees to each step's state vector to hit the target at each step.
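To make the back-and-forth reproducible, this is roughly what I'm doing, written as a loop (hires_needed is just my own hypothetical helper, not from any package):
hires_needed <- function(initialstate, transitionmatrix, targets) {
  state <- initialstate
  hires <- numeric(length(targets))
  for (i in seq_along(targets)) {
    state <- state %*% transitionmatrix   # natural promotion/attrition
    hires[i] <- targets[i] - state[1, 1]  # shortfall in entry level
    state[1, 1] <- targets[i]             # hire up to the target
  }
  hires
}
hires_needed(initialstate, transitionmatrix, targets = c(1300, 2000))
(returns 800 1350, matching the hires found by hand above)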
Is there a better way to do this? Is a Markov Chain the right choice of model for what I'm trying to achieve?
I'm not sure what you mean by "Is a Markov Chain the right choice of model for what I'm trying to achieve?", because I don't actually understand what you are trying to achieve, nor is it entirely clear what you mean by a "Markov Chain", although I'm guessing you want to iterate a series of "Markov" transitions from an initial state vector. For a starting state vector of 1000 new hires, the state after n iterations will be:
library(expm)
initialstate %*% (transitionmatrix %^% n)
The expm package implements a matrix power operator %^% which lets you calculate the outcome of n iterations of a Markov transition matrix in one step.
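As a quick check against the question's manual two-step calculation, n = 2 reproduces step2state:
initialstate %*% (transitionmatrix %^% 2)
(returns 250 300 450, the same values as step2state)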
So after 25 iterations there will be:
> initialstate %*% (transitionmatrix %^% 25)
[,1] [,2] [,3]
[1,] 2.980232e-05 400 600
That is, 400 promoted individuals and 600 "leavers" (the residual 2.98e-05 is the vanishing entry level count). This is all linear, so if you started with 2000 you would have 800 and 1200 in those states respectively. And it's not surprising that their ratio is 2:3, since that is also the ratio of the second and third elements of the first row of the transition matrix (0.2 and 0.3). Note that the 1's on the diagonal for states 2 and 3 make those "absorbing states". Once you are "promoted" you will remain promoted. There is no exit. Just like Hotel California, if you're an Eagles fan. Once you "leave", there's also no return possible.
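For instance, the linearity claim is easy to verify by doubling the starting cohort:
t(as.matrix(c(2000,0,0))) %*% (transitionmatrix %^% 25)
(returns roughly 6e-05 entry level, 800 promoted and 1200 leavers, exactly double the run above)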