Approximate dynamic programming for the aeromedical evacuation dispatching problem: Value function approximation utilizing multiple level aggregation

Highlights:
• We consider the dispatching of aerial military medical evacuation (MEDEVAC) assets.
• A Markov decision process model of the MEDEVAC dispatching problem is formulated.
• We develop a scalable, approximate dynamic programming algorithm to attain high-quality dispatch policies.
• Our algorithm provides dispatching policies more easily employed by military medical practitioners.
• Results inform the development of procedures governing the dispatching of MEDEVAC assets.


Bibliographic Details
Published in:Omega (Oxford) Vol. 91; p. 102020
Main Authors: Robbins, Matthew J., Jenkins, Phillip R., Bastian, Nathaniel D., Lunday, Brian J.
Format: Journal Article
Language:English
Published: Elsevier Ltd 01.03.2020
Subjects:
ISSN:0305-0483, 1873-5274
Description
Summary:
• We consider the dispatching of aerial military medical evacuation (MEDEVAC) assets.
• A Markov decision process model of the MEDEVAC dispatching problem is formulated.
• We develop a scalable, approximate dynamic programming algorithm to attain high-quality dispatch policies.
• Our algorithm provides dispatching policies more easily employed by military medical practitioners.
• Results inform the development of procedures governing the dispatching of MEDEVAC assets.

Sequential resource allocation decision-making for the military medical evacuation of wartime casualties consists of identifying which available aeromedical evacuation (MEDEVAC) assets to dispatch in response to each casualty event. These sequential decisions are complicated by uncertainty in casualty demand (i.e., severity, number, and location) and in service times. In this research, we present a Markov decision process model solved using a hierarchical aggregation value function approximation scheme within an approximate policy iteration algorithmic framework. The model seeks to determine, under this uncertainty, how best to dispatch MEDEVAC assets to calls for service. The policies determined via our approximate dynamic programming (ADP) approach are compared to optimal military MEDEVAC dispatching policies for two small-scale problem instances and to the closest-available MEDEVAC dispatching policy typically implemented in practice for a large-scale problem instance. Results indicate that our proposed approximation scheme provides high-quality, scalable dispatching policies that are more easily employed by military medical planners in the field. The identified ADP policies attain 99.8% and 99.5% of optimal for the 6- and 12-zone problem instances, respectively, and yield 9.6%, 9.2%, and 12.4% improvements over the closest-available MEDEVAC policy for the 6-, 12-, and 34-zone problem instances.
DOI:10.1016/j.omega.2018.12.009
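The abstract's core technique, a value function approximation that blends estimates maintained at multiple levels of state-space aggregation, can be illustrated with a minimal sketch. This is not the authors' code: the two-level zone/precedence aggregation, the harmonic stepsize, and the visit-count weighting are simplifying assumptions standing in for the paper's hierarchical aggregation scheme.

```python
# Illustrative sketch of value function approximation via multi-level
# state-space aggregation. The aggregation maps, stepsize rule, and
# weighting scheme are hypothetical simplifications.
from collections import defaultdict

class AggregatedValueEstimator:
    """Maintains a value estimate at each aggregation level and combines
    them into a single estimate for a detailed state."""

    def __init__(self, aggregation_maps):
        # aggregation_maps[g] maps a detailed state to its level-g aggregate key
        self.maps = aggregation_maps
        self.n = [defaultdict(int) for _ in aggregation_maps]       # visit counts
        self.vbar = [defaultdict(float) for _ in aggregation_maps]  # running means

    def update(self, state, observed_value):
        """Smooth an observed value into every aggregation level."""
        for g, agg in enumerate(self.maps):
            key = agg(state)
            self.n[g][key] += 1
            step = 1.0 / self.n[g][key]  # harmonic stepsize (assumption)
            self.vbar[g][key] += step * (observed_value - self.vbar[g][key])

    def estimate(self, state):
        """Combine levels with weights proportional to visit counts, a
        simple stand-in for bias/variance-based aggregation weights."""
        keys = [agg(state) for agg in self.maps]
        weights = [self.n[g][k] for g, k in enumerate(keys)]
        total = sum(weights)
        if total == 0:
            return 0.0
        return sum(w * self.vbar[g][k]
                   for g, (k, w) in enumerate(zip(keys, weights))) / total

# Usage with hypothetical (zone, precedence) states and two levels:
# full detail, and zone-only (coarser, so it pools scarce observations).
est = AggregatedValueEstimator([
    lambda s: s,     # level 0: (zone, precedence)
    lambda s: s[0],  # level 1: zone only
])
est.update((3, "urgent"), 10.0)
est.update((3, "priority"), 6.0)
print(round(est.estimate((3, "urgent")), 2))  # → 8.67
```

Coarser levels accumulate observations faster, so early estimates lean on them; as a detailed state is visited more often, its own (less biased) estimate gains weight, which is the intuition behind hierarchical aggregation in ADP.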