We now investigate a finite-state stochastic process in which
the defining random variables are observed at discrete points
in time. When the future probabilistic behavior of the process
depends only on the present state, regardless of when the present
state is measured, the resultant model is called a Markov
chain. When time is measured in discrete intervals, the model
is called a Discrete Time Markov Chain (DTMC). The
term Markov chain is more general than DTMC because it also
includes continuous time processes called Continuous Time
Markov Chains (CTMC). We consider CTMCs in a later section.
We use the term Markov chain when a comment refers to both CTMCs
and DTMCs.
Unlike most stochastic processes, Markov chains have very agreeable
properties that make them easy to study. They are often used to approximate
quite complex physical systems, even when it is clear that the
actual behavior of the system being analyzed may depend on more
than just the present state, or when the number of states is
not really finite.
To develop a model of a DTMC we need to define the set of system states
S and specify the one-step transition matrix P.
Given this information, computational procedures are available
to answer questions related to the steady-state behavior of
the DTMC. In particular, we can compute: the t-step transition
matrix, transient and steady-state probability
vectors, absorbing state probabilities, and first
passage probability distributions. Integrating these results
with economic data leads directly to a framework for systems
design and optimal decision making under uncertainty.
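As a brief sketch of two of these computations, the t-step transition matrix is simply the matrix power P^t, and the steady-state probability vector pi satisfies pi P = pi with its components summing to one. The two-state chain below and its transition probabilities are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical two-state chain: state 0 = "operating", state 1 = "down".
# Row i gives the one-step transition probabilities out of state i.
P = np.array([[0.8, 0.2],
              [0.4, 0.6]])

# t-step transition matrix: entry (i, j) of P^t is Pr{X_t = j | X_0 = i}.
t = 5
Pt = np.linalg.matrix_power(P, t)

# Steady-state vector pi solves pi P = pi, sum(pi) = 1.
# Rewrite as the overdetermined system (P^T - I) pi = 0 plus a
# normalization row, and solve by least squares.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(Pt)   # rows of P^t approach pi as t grows
print(pi)   # here pi = (2/3, 1/3)
```

For this chain the balance equation 0.2 pi_0 = 0.4 pi_1 gives pi = (2/3, 1/3), which the rows of P^t approach as t increases.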
