
Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection / M. Papini, A. Tirinzoni, A. Pacchiano, M. Restelli, A. Lazaric, M. Pirotta. - In: Advances in Neural Information Processing Systems: 35th Conference on Neural Information Processing Systems (NeurIPS 2021, held online) / edited by M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, J. Wortman Vaughan. - [s.l.]: Neural Information Processing Systems Foundation, 2021. - ISBN 9781713845393. - pp. 16371-16383.

Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection

M. Papini (first author)
2021

Abstract

We study the role of the representation of state-action value functions in regret minimization in finite-horizon Markov Decision Processes (MDPs) with linear structure. We first derive a necessary condition on the representation, called universally spanning optimal features (UNISOFT), to achieve constant regret in any MDP with linear reward function. This result encompasses the well-known settings of low-rank MDPs and, more generally, zero inherent Bellman error (also known as the Bellman closure assumption). We then demonstrate that this condition is also sufficient for these classes of problems by deriving a constant regret bound for two optimistic algorithms (LSVI-UCB and ELEANOR). Finally, we propose an algorithm for representation selection and we prove that it achieves constant regret when one of the given representations, or a suitable combination of them, satisfies the UNISOFT condition.
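To make the setting concrete, here is a minimal, self-contained sketch of the optimism principle that algorithms such as LSVI-UCB build on: a ridge estimate of a linear reward plus an elliptical confidence bonus. This is a hypothetical illustration in a one-step (linear bandit) toy problem, not the paper's finite-horizon algorithm; the feature map `phi`, the parameters `lam` and `beta`, and the noise level are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-step problem: the reward of action a is phi(a)^T theta_star plus noise.
d, n_actions, T = 3, 5, 500
phi = rng.normal(size=(n_actions, d))    # assumed feature map phi(a)
theta_star = rng.normal(size=d)          # unknown reward parameter
best = (phi @ theta_star).max()
lam, beta = 1.0, 1.0                     # ridge and bonus parameters (assumed)

Lam = lam * np.eye(d)                    # regularized Gram matrix
b = np.zeros(d)
regret = 0.0
for t in range(T):
    Lam_inv = np.linalg.inv(Lam)
    theta_hat = Lam_inv @ b              # ridge (regularized least-squares) estimate
    # Optimistic index: estimate plus elliptical bonus beta * ||phi(a)||_{Lam^{-1}}.
    bonus = np.sqrt(np.einsum('ad,dk,ak->a', phi, Lam_inv, phi))
    a = int(np.argmax(phi @ theta_hat + beta * bonus))
    r = phi[a] @ theta_star + 0.1 * rng.normal()   # noisy observed reward
    Lam += np.outer(phi[a], phi[a])      # rank-one update of the Gram matrix
    b += r * phi[a]
    regret += best - phi[a] @ theta_star
```

The bonus shrinks along directions that have already been explored, which is what drives sublinear regret; the paper's UNISOFT condition on the representation characterizes when optimistic algorithms of this kind can achieve constant rather than sublinear regret.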
Settore IINF-05/A - Information Processing Systems
Settore INFO-01/A - Computer Science
2021
Book Part (author)
Files in this record:

NeurIPS-2021-reinforcement-learning-in-linear-mdps-constant-regret-and-representation-selection-Paper.pdf
  Access: restricted (copy available on request)
  Type: Publisher's version/PDF
  License: no license
  Size: 458.65 kB, Adobe PDF

2110.14798v1.pdf
  Access: open
  Type: Pre-print (manuscript submitted to the publisher)
  License: Creative Commons
  Size: 1.16 MB, Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1226138
Citations
  • PMC: n/a
  • Scopus: 14
  • Web of Science: 3
  • OpenAlex: n/a