Operations Research Models and Methods
Computation Section
Subunit: Dynamic Programming Models - Markov Decision Processes


The Markov Decision Process (MDP) adds actions to the Markov chain. The model consists of states, actions, events, and decisions; optionally, state blocks and decision blocks may also be included. The first three pages of this DP Models section describe an MDP model, so we do not repeat the development here. Further examples can be found by following the links in the table below, which lead to pages in the DP Examples and DP Data sections of the dynamic programming collection.


The features of the MDP model are described in the first three sections of this models discussion.
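To illustrate the kind of computation these models involve, the sketch below solves a small discounted MDP by value iteration. The two-state, two-action data here are invented for illustration and are not taken from any of the spreadsheet models listed below; the transition matrices, rewards, and discount factor are all assumptions.

```python
# Minimal value-iteration sketch for a discounted MDP.
# The numbers are an assumed toy example, not one of the course models.
import numpy as np

# P[a] is the transition matrix under action a: P[a][s, s'] = Prob(s -> s').
P = {
    0: np.array([[0.9, 0.1],
                 [0.4, 0.6]]),
    1: np.array([[0.2, 0.8],
                 [0.5, 0.5]]),
}
# R[a][s] is the expected immediate reward for taking action a in state s.
R = {
    0: np.array([1.0, 0.0]),
    1: np.array([2.0, 3.0]),
}
gamma = 0.9  # discount factor (assumed)

def value_iteration(P, R, gamma, tol=1e-8):
    """Iterate the Bellman optimality operator to a fixed point."""
    V = np.zeros(2)
    while True:
        # Q[a, s] = immediate reward plus discounted expected future value.
        Q = np.array([R[a] + gamma * P[a] @ V for a in P])
        V_new = Q.max(axis=0)          # best achievable value in each state
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)  # values and greedy policy
        V = V_new

V, policy = value_iteration(P, R, gamma)
print("values:", V, "policy:", policy)
```

Howard's classic alternative is policy iteration, which the spreadsheet models can also apply; value iteration is shown here only because it is the shorter sketch.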

Model (Data File): Description

Cab (dp_cab.xls): A small model from Howard
Baseball (dp_baseball.xls): A model of an inning of baseball (Howard)
Replacement (dp_replace.xls): An equipment replacement problem (Howard)
Sequence (dp_well_sequence.xls): Sequencing well drilling (Bickel)
Birth-Death with Purge (dp_birth_death.xls): The birth-death process with the purge decision
Queue (dp_book.xls): A model to determine the optimum number of servers, where the decision depends on the state
Doors (doors.xls): A model structured on the random walk problem, in which the action is to block movement out of a state (Dimitrov and Morton)


Operations Research Models and Methods
by Paul A. Jensen
Copyright 2004 - All rights reserved