Convergent Numerical Scheme for Singular Stochastic Control with State Constraints in a Portfolio Selection Problem

Bibliographic Details
Published in: SIAM Journal on Control and Optimization, Vol. 45, No. 6, pp. 2169-2206
Main authors: Budhiraja, Amarjit; Ross, Kevin
Format: Journal Article
Language: English
Published: Philadelphia, PA: Society for Industrial and Applied Mathematics, 01.01.2007
ISSN: 0363-0129, 1095-7138
Online access: Full text
Description

Abstract: We consider a singular stochastic control problem with state constraints that arises in problems of optimal consumption and investment under transaction costs. Numerical approximations for the value function using the Markov chain approximation method of Kushner and Dupuis are studied. The main result of the paper shows that the value function of the Markov decision problem (MDP) corresponding to the approximating controlled Markov chain converges to that of the original stochastic control problem as various parameters in the approximation approach suitable limits. All our convergence arguments are probabilistic; the main assumption that we make is that the value function be finite and continuous. In particular, uniqueness of the solutions of the associated HJB equations is neither needed nor available (in the generality under which the problem is considered). Specific features of the problem that make the convergence analysis nontrivial include unboundedness of the state and control space and the cost function; degeneracies in the dynamics; mixed boundary (Dirichlet-Neumann) conditions; and presence of both singular and absolutely continuous controls in the dynamics. Finally, schemes for computing the value function and optimal control policies for the MDP are presented and illustrated with a numerical study.
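The Markov chain approximation method mentioned in the abstract can be illustrated with a minimal sketch: value iteration on a locally consistent Markov chain for a toy one-dimensional controlled diffusion dX = u dt + sigma dW on [0, 1]. The grid, cost function, control set, and all names below are illustrative assumptions for exposition only; they are not the paper's actual constrained portfolio model, which additionally involves singular controls, state constraints, and mixed Dirichlet-Neumann boundary conditions.

```python
import numpy as np

def mcam_value_iteration(h=0.05, sigma=0.3, beta=0.1,
                         controls=(-1.0, 0.0, 1.0),
                         cost=lambda x, u: x**2 + 0.5 * u**2,
                         n_iter=2000):
    """Value iteration for a Markov chain approximation of the toy
    controlled diffusion dX = u dt + sigma dW on [0, 1], discounted
    at rate beta, with reflecting boundaries (an illustrative
    stand-in, not the paper's model)."""
    xs = np.arange(0.0, 1.0 + h / 2, h)   # spatial grid of mesh h
    n = len(xs)
    V = np.zeros(n)
    idx = np.arange(n)
    up = np.minimum(idx + 1, n - 1)       # reflect at the right edge
    dn = np.maximum(idx - 1, 0)           # reflect at the left edge
    for _ in range(n_iter):
        V_new = np.full(n, np.inf)
        for u in controls:
            # Locally consistent transition probabilities: mean and
            # variance of one chain step match u*dt and sigma^2*dt.
            Q = sigma**2 + h * abs(u)             # normalizer
            dt = h**2 / Q                         # interpolation interval
            p_up = (sigma**2 / 2 + h * max(u, 0.0)) / Q
            p_dn = (sigma**2 / 2 + h * max(-u, 0.0)) / Q
            cand = cost(xs, u) * dt + np.exp(-beta * dt) * (
                p_up * V[up] + p_dn * V[dn])
            V_new = np.minimum(V_new, cand)       # minimize over controls
        if np.max(np.abs(V_new - V)) < 1e-10:
            V = V_new
            break
        V = V_new
    return xs, V
```

The key property is local consistency: for each control u, p_up + p_dn = 1 and the one-step mean and variance of the chain match the drift and diffusion of the continuous dynamics over the interval dt, which is what drives the probabilistic convergence argument described in the abstract.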
DOI: 10.1137/050640515