We consider a classical finite-horizon optimal control problem for continuous-time pure jump Markov processes described by means of a rate transition measure depending on a control parameter and controlled by a feedback law. For this class of problems the value function can often be characterized as the unique solution to the corresponding Hamilton–Jacobi–Bellman equation. We prove a probabilistic representation for the value function, known as a nonlinear Feynman–Kac formula. It relates the value function to a backward stochastic differential equation (BSDE) driven by a random measure and with a sign constraint on its martingale part. We also prove existence and uniqueness results for this class of constrained BSDEs. The connection of the control problem with the constrained BSDE uses a control randomization method recently developed by several authors. This approach also allows us to prove that the value function of the original non-dominated control problem coincides with the value function of an auxiliary dominated control problem, expressed in terms of equivalent changes of probability measures.
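For orientation, the generic shape of the objects mentioned in the abstract can be sketched as follows. The notation here (state space $E$, control set $A$, rate measure $\lambda$, costs $f$ and $g$, solution triple $(Y,Z,K)$, random measure $\mu$) is illustrative and not taken verbatim from the paper:

```latex
% Illustrative sketch only; symbols are assumptions, not the paper's notation.
% Controlled pure jump Markov process on a state space E, with rate
% transition measure \lambda(x,a,dy), control set A, running cost f,
% terminal cost g, and finite horizon T.

% Hamilton–Jacobi–Bellman equation for the value function v:
\partial_t v(t,x)
  + \sup_{a \in A} \Big\{ \int_E \big( v(t,y) - v(t,x) \big)\,
      \lambda(x,a,\mathrm{d}y) + f(t,x,a) \Big\} = 0,
\qquad v(T,x) = g(x).

% Control randomization (schematically): the control is replaced by an
% exogenous pure jump process I, and v(t,x) is represented as Y_t, where
% (Y,Z,K) is the maximal solution of a BSDE driven by the random measure
% \mu associated with (X,I), with a nondecreasing process K and a sign
% constraint on the component of the martingale part coming from the
% jumps of the randomized control:
Y_t = g(X_T) + \int_t^T f(X_s, I_s)\,\mathrm{d}s
      + K_T - K_t
      - \int_t^T \!\!\int Z_s(y)\, \mu(\mathrm{d}s,\mathrm{d}y),
\qquad Z_s(y) \le 0 \ \text{on the control jumps}.
```

The sign constraint is what encodes the supremum over controls: the nondecreasing process $K$ pushes the solution up so that the constrained BSDE yields the value of the optimization problem rather than the cost of a fixed randomized control.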
|Title:||Constrained BSDEs representation of the value function in optimal control of pure jump Markov processes|
|Keywords:||backward stochastic differential equations; marked point processes; optimal control problems; pure jump Markov processes; randomization; statistics and probability; modeling and simulation; applied mathematics|
|Academic Discipline:||MAT/06 - Probability and Mathematical Statistics|
|Publication Date:||May 2017|
|Digital Object Identifier (DOI):||10.1016/j.spa.2016.08.005|
|Document Type:||01 - Journal article|