Backward SDEs and infinite horizon stochastic optimal control / F. Confortola, A. Cosso, M. Fuhrman. - In: ESAIM: COCV. - ISSN 1292-8119. - 25 (2019 Aug 09).

Backward SDEs and infinite horizon stochastic optimal control

Abstract

We study an optimal control problem on infinite horizon for a controlled stochastic differential equation driven by Brownian motion, with a discounted reward functional. The equation may have memory or delay effects in the coefficients, both with respect to state and control, and the noise can be degenerate. We prove that the value, i.e., the supremum of the reward functional over all admissible controls, can be represented by the solution of an associated backward stochastic differential equation (BSDE) driven by the Brownian motion and an auxiliary independent Poisson process, with a sign constraint on jumps. In the Markovian case, when the coefficients depend only on the present values of the state and the control, we prove that the BSDE can be used to construct the solution, in the sense of viscosity theory, of the corresponding Hamilton-Jacobi-Bellman partial differential equation of elliptic type on the whole space, so that it provides a Feynman-Kac representation in this fully nonlinear context. The method of proof consists in showing that the value of the original problem coincides with the value of an auxiliary optimal control problem (called randomized), in which the control process is replaced by a fixed pure jump process and maximization is taken over a class of absolutely continuous changes of measure that affect the stochastic intensity of the jump process but leave the law of the driving Brownian motion unchanged.
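As an informal notational sketch of the setting described in the abstract (the symbols b, sigma, f, beta, V, I and the exact form of the generator are our own illustrative labels, following the standard randomization literature, and may differ from the paper's precise formulation):

```latex
% Controlled SDE (possibly with memory/delay in the coefficients) and
% discounted reward over the infinite horizon; V is the value, i.e. the
% supremum of J over all admissible controls \alpha.
dX_t = b(X_t,\alpha_t)\,dt + \sigma(X_t,\alpha_t)\,dW_t,
\qquad
J(\alpha) = \mathbb{E}\int_0^\infty e^{-\beta t} f(X_t,\alpha_t)\,dt,
\qquad
V = \sup_{\alpha} J(\alpha).

% Schematic constrained BSDE of the randomized problem: driven by the
% Brownian motion W and the compensated random measure \tilde{\mu} of the
% auxiliary Poisson-driven pure jump control process I, with a sign
% constraint on the jump component U.  The value is recovered from the
% minimal solution of such a constrained BSDE.
Y_t = Y_T + \int_t^T \bigl(f(X_s, I_s) - \beta\,Y_s\bigr)\,ds
      - \int_t^T Z_s\,dW_s
      - \int_t^T\!\!\int_A U_s(a)\,\tilde{\mu}(ds,da),
\qquad
U_s(a) \le 0.
```

This is only a schematic reminder of the objects involved; the paper itself supplies the rigorous infinite-horizon formulation and the precise constraint.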
Keywords: stochastic optimal control; backward SDEs; randomization of controls
Disciplinary sector MAT/06 - Probability and Mathematical Statistics
Deterministic and stochastic evolution equations
Article (author)
Files in this record:

orizzonte_infinito_finale_revised.pdf

Open access

Type: Pre-print (manuscript submitted to the publisher)
Size: 191.35 kB

cocv170159.pdf

Restricted access

Type: Publisher's version/PDF
Size: 546.38 kB
Use this identifier to cite or link to this record: `https://hdl.handle.net/2434/714720`