Delayed Bandits: When Do Intermediate Observations Help? / E. Esposito, S. Masoudian, H. Qiu, D. van der Hoeven, N. Cesa-Bianchi, Y. Seldin. - In: Proceedings of the 40th International Conference on Machine Learning (ICML 2023) / edited by A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, J. Scarlett. - Proceedings of Machine Learning Research, vol. 202. - PMLR, 2023. - pp. 9374-9395. Paper presented at the 40th International Conference on Machine Learning, held in Honolulu, 23-29 July 2023.

Delayed Bandits: When Do Intermediate Observations Help?

E. Esposito (co-first author); H. Qiu (second author); D. van der Hoeven (second-to-last author); N. Cesa-Bianchi (co-last author)
2023

Abstract

We study a $K$-armed bandit with delayed feedback and intermediate observations. We consider a model in which intermediate observations take the form of a finite state, observed immediately after taking an action, whereas the loss is observed only after an adversarially chosen delay. We show that whether the mapping of states to losses is stochastic or adversarial determines the complexity of the problem, irrespective of whether the mapping of actions to states is stochastic or adversarial. If the mapping of states to losses is adversarial, then the regret rate is of order $\sqrt{(K+d)T}$ (within log factors), where $T$ is the time horizon and $d$ is a fixed delay. This matches the regret rate of a $K$-armed bandit with delayed feedback and without intermediate observations, implying that intermediate observations are not helpful in this regime. However, if the mapping of states to losses is stochastic, we show that the regret grows at a rate of $\sqrt{\bigl(K+\min\{|\mathcal{S}|,d\}\bigr)T}$ (within log factors), implying that if the number $|\mathcal{S}|$ of states is smaller than the delay, then intermediate observations do help. We also provide refined high-probability regret upper bounds for non-uniform delays, together with experimental validation of our algorithms.
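To make the feedback protocol concrete, the following is a minimal simulation sketch of the interaction loop, assuming a stochastic action-to-state mapping, a stochastic state-to-loss mapping, and a fixed delay $d$. All names and numeric values (e.g., `state_dist`, `mean_loss`, the uniform placeholder policy) are illustrative assumptions, not taken from the paper.

```python
import random
from collections import deque

K, S, d, T = 4, 3, 5, 1000  # arms, states, fixed delay, horizon (illustrative values)

# Hypothetical stochastic action -> state mapping: each arm induces a distribution over states.
raw = [[random.random() for _ in range(S)] for _ in range(K)]
state_dist = [[p / sum(row) for p in row] for row in raw]

# Hypothetical stochastic state -> loss mapping: a mean loss per state.
mean_loss = [random.random() for _ in range(S)]

pending = deque()  # feedback in flight, revealed d rounds after it is generated

for t in range(T):
    action = random.randrange(K)  # placeholder policy; the paper's algorithms would go here

    # Intermediate observation: the state is revealed immediately after the action.
    state = random.choices(range(S), weights=state_dist[action])[0]

    # The loss is incurred now but only observed d rounds later.
    loss = min(1.0, max(0.0, mean_loss[state] + random.uniform(-0.1, 0.1)))
    pending.append((t, action, state, loss))

    # Reveal all feedback whose delay has elapsed.
    while pending and pending[0][0] + d <= t:
        t_sent, a_sent, s_sent, l_sent = pending.popleft()
        # A learner would update its loss estimate for state s_sent (and arm a_sent) here.
```

The sketch also illustrates why the immediate state observation can help when losses depend only on the state: delayed feedback about a state informs every arm that can reach that state, which is how the $d$ term in the regret can be replaced by $\min\{|\mathcal{S}|,d\}$. For instance, with $K=10$, $d=100$, and $|\mathcal{S}|=5$, the bound improves from $\sqrt{110\,T}$ to $\sqrt{15\,T}$, a factor of about $2.7$.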
Sector INF/01 - Computer Science
Funding:
  • Algorithms, Games, and Digital Markets (ALGADIMAR), Ministero dell'Istruzione e del Merito, grant 2017R9FHSR_006
  • European Learning and Intelligent Systems Excellence (ELISE), European Commission, H2020, grant 951847
2023
https://proceedings.mlr.press/v202/esposito23a.html
Book Part (author)
Files in this record:
  • esposito23a.pdf (open access) - Type: Publisher's version/PDF; Format: Adobe PDF; Size: 6.88 MB

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1024138
Citations
  • Scopus: 0