Approximate dynamic programming based approach to process control and scheduling

Bibliographic Details
Published in: Computers & Chemical Engineering, Vol. 30, No. 10, pp. 1603-1618
Main Authors: Lee, Jay H.; Lee, Jong Min
Format: Journal Article
Language: English
Published: Elsevier Ltd, 12.09.2006
ISSN: 0098-1354, 1873-4375
Description
Summary: Multi-stage decision problems under uncertainty are abundant in the process industries. The Markov decision process (MDP) is a general mathematical formulation of such problems. Whereas stochastic programming and dynamic programming are the standard methods for solving MDPs, their unwieldy computational requirements limit their usefulness in real applications. Approximate dynamic programming (ADP) combines simulation and function approximation to alleviate the 'curse of dimensionality' associated with the traditional dynamic programming approach. In this paper, we present ADP as a viable way to solve MDPs arising in process control and scheduling. We bring forth some key issues for its successful application to these types of problems, including the choice of function approximator and the use of a penalty function to guard against over-extending the value function approximation during value iteration. Application studies involving a number of well-known control and scheduling problems, including dual control, multiple-controller scheduling, and resource-constrained project scheduling, point to the promising potential of ADP.
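The value iteration that the abstract refers to can be illustrated on a toy problem small enough for exact dynamic programming. The sketch below is not the authors' code; the two-state MDP, its rewards, and the discount factor are invented for illustration. ADP replaces the exact per-state table `V` with a fitted function approximator when the state space is too large to enumerate.

```python
# Minimal sketch: exact value iteration on a hypothetical 2-state MDP.
# P[s][u] maps state s and action u to a list of (next_state, probability);
# R[s][u] is the immediate reward. All numbers are invented for illustration.
P = {
    0: {"a": [(0, 0.5), (1, 0.5)], "b": [(1, 1.0)]},
    1: {"a": [(0, 1.0)],           "b": [(1, 1.0)]},
}
R = {
    0: {"a": 1.0, "b": 0.0},
    1: {"a": 0.0, "b": 2.0},
}
GAMMA = 0.9  # discount factor (assumed, not from the paper)

def value_iteration(tol=1e-8):
    """Apply the Bellman optimality operator until the update is below tol."""
    V = {s: 0.0 for s in P}
    while True:
        V_new = {
            s: max(R[s][u] + GAMMA * sum(p * V[t] for t, p in P[s][u])
                   for u in P[s])
            for s in P
        }
        if max(abs(V_new[s] - V[s]) for s in P) < tol:
            return V_new
        V = V_new
```

Each sweep costs work proportional to the number of states times actions, which is what becomes intractable for large MDPs; ADP instead simulates trajectories and fits an approximation of `V` over a parameterized family, with the penalty function mentioned in the abstract discouraging the iteration from trusting the approximator far from the sampled states.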
DOI: 10.1016/j.compchemeng.2006.05.043