Convergent Numerical Scheme for Singular Stochastic Control with State Constraints in a Portfolio Selection Problem


Detailed bibliography
Published in: SIAM Journal on Control and Optimization, Vol. 45, No. 6, pp. 2169-2206
Main authors: Budhiraja, Amarjit; Ross, Kevin
Format: Journal Article
Language: English
Published: Philadelphia, PA: Society for Industrial and Applied Mathematics, 01.01.2007
ISSN:0363-0129, 1095-7138
Description
Abstract: We consider a singular stochastic control problem with state constraints that arises in problems of optimal consumption and investment under transaction costs. Numerical approximations for the value function using the Markov chain approximation method of Kushner and Dupuis are studied. The main result of the paper shows that the value function of the Markov decision problem (MDP) corresponding to the approximating controlled Markov chain converges to that of the original stochastic control problem as various parameters in the approximation approach suitable limits. All our convergence arguments are probabilistic; the main assumption that we make is that the value function be finite and continuous. In particular, uniqueness of the solutions of the associated HJB equations is neither needed nor available (in the generality under which the problem is considered). Specific features of the problem that make the convergence analysis nontrivial include unboundedness of the state and control space and the cost function; degeneracies in the dynamics; mixed boundary (Dirichlet-Neumann) conditions; and presence of both singular and absolutely continuous controls in the dynamics. Finally, schemes for computing the value function and optimal control policies for the MDP are presented and illustrated with a numerical study.
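The abstract describes solving a Markov decision problem obtained by discretizing the controlled diffusion. The paper's actual scheme handles singular controls, state constraints, and reflecting boundary behavior; as a much simpler illustration of the underlying computational idea, the following is a minimal value-iteration sketch for a generic discounted finite-state, finite-action MDP (all names, transition data, and costs here are hypothetical, not taken from the paper):

```python
import numpy as np

def value_iteration(P, c, beta, tol=1e-10, max_iter=10_000):
    """Value iteration for a finite-state, finite-action discounted MDP.

    P[a] -- transition matrix under action a, shape (n, n)
    c[a] -- one-step cost vector under action a, shape (n,)
    beta -- discount factor in (0, 1)
    Returns the approximate value function V and a greedy policy.
    """
    n = P.shape[1]
    V = np.zeros(n)
    for _ in range(max_iter):
        # Bellman operator: minimize expected discounted cost over actions.
        Q = c + beta * (P @ V)      # shape (num_actions, n)
        V_new = Q.min(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    policy = Q.argmin(axis=0)       # greedy action in each state
    return V, policy

# Toy 2-state, 2-action example (illustrative numbers only).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.5, 0.5]]])
c = np.array([[1.0, 2.0],
              [1.5, 1.5]])
V, policy = value_iteration(P, c, beta=0.9)
```

In the Markov chain approximation method, the transition probabilities and interpolation intervals are chosen so that the chain is locally consistent with the diffusion's drift and covariance; the convergence result summarized above concerns the limit of such MDP value functions as the discretization is refined.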
DOI:10.1137/050640515