Nonstochastic Bandits and Experts with Arm-Dependent Delays / D. van der Hoeven, N. Cesa-Bianchi (PROCEEDINGS OF MACHINE LEARNING RESEARCH). - In: International Conference on Artificial Intelligence and Statistics / [edited by] G. Camps-Valls, F.J.R. Ruiz, I. Valera. - [s.l.] : PMLR, 2022. - pp. 2022-2044 ((presented at the International Conference on Artificial Intelligence and Statistics, held virtually in 2022.

Nonstochastic Bandits and Experts with Arm-Dependent Delays

D. van der Hoeven (first author); N. Cesa-Bianchi (last author)
2022

Abstract

We study nonstochastic bandits and experts in a delayed setting where delays depend on both time and arms. While the setting in which delays depend only on time has been extensively studied, the arm-dependent delay setting better captures real-world applications at the cost of introducing new technical challenges. In the full information (experts) setting, we design an algorithm with a first-order regret bound that reveals an interesting trade-off between delays and losses. We prove a similar first-order regret bound for the bandit setting as well, when the learner is allowed to observe how many losses are missing. Our bounds are the first in the delayed setting that depend only on the losses and delays of the best arm. In the bandit setting, when no information other than the losses is observed, we still prove a regret bound through a modification to the algorithm of Zimmert and Seldin (2020). Our analyses hinge on a novel bound on the drift, which measures how much better an algorithm can perform when given a look-ahead of one round.
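
To make the feedback model concrete, below is a minimal Python sketch of the interaction protocol the abstract describes: the loss of the arm pulled at round t is revealed only after a delay that depends on both the round and the arm. All names here (run_delayed_bandit, uniform_policy, the loss and delay arrays) are hypothetical illustrations of the setting, not the paper's algorithms.

import random
from collections import defaultdict

def run_delayed_bandit(policy, loss, delay, n_arms, horizon):
    """Simulate the arm-dependent delayed feedback protocol.

    loss[t][i] and delay[t][i] are the loss and delay of arm i at round t,
    both fixed in advance by an adversary in the nonstochastic model.
    """
    pending = defaultdict(list)   # arrival round -> list of (round, arm, loss)
    observed = []                 # feedback the learner has seen so far
    total_loss = 0.0
    for t in range(horizon):
        observed.extend(pending.pop(t, []))   # deliver feedback due at round t
        arm = policy(observed, n_arms)        # decide using past feedback only
        total_loss += loss[t][arm]
        # The loss of the pulled arm is revealed only delay[t][arm] rounds later.
        pending[t + delay[t][arm]].append((t, arm, loss[t][arm]))
    return total_loss

def uniform_policy(observed, n_arms):
    # Placeholder baseline: a real bandit algorithm (e.g. an importance-weighted
    # exponential-weights update) would exploit the contents of `observed`.
    return random.randrange(n_arms)

For instance, run_delayed_bandit(uniform_policy, [[0.1, 0.9], [0.2, 0.8]], [[0, 2], [1, 1]], 2, 2) simulates two rounds with two arms; pulling arm 1 at round 0 would push its feedback past the horizon, so that loss is never observed.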
Sector INF/01 - Computer Science
Project: European Learning and Intelligent Systems Excellence (ELISE)
Acronym: ELISE
Funder: EUROPEAN COMMISSION
Programme: H2020
Grant number: 951847
2022
https://proceedings.mlr.press/v151/van-der-hoeven22a.html
Book Part (author)
Files in this record:
File: van-der-hoeven22a.pdf (open access)
Type: Publisher's version/PDF
Size: 409.15 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/939310
Citations
  • PMC: n/a
  • Scopus: 1
  • Web of Science: 0