
A Regret-Variance Trade-Off in Online Learning / D. van der Hoeven, N. Zhivotovskiy, N. Cesa-Bianchi. - In: Advances in Neural Information Processing Systems (NeurIPS 2022) / [edited by] S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh. - [s.l.] : Curran Associates, 2022. - ISBN 9781713871088. - pp. 35188-35200 ((Paper presented at the 36th Conference on Neural Information Processing Systems, held in New Orleans, Monday November 28 through Friday December 9, 2022.

A Regret-Variance Trade-Off in Online Learning

D. van der Hoeven (First author); N. Cesa-Bianchi (Last author)
2022

Abstract

We consider prediction with expert advice for strongly convex and bounded losses, and investigate trade-offs between regret and “variance” (i.e., the squared difference between the learner's predictions and the best expert's predictions). With K experts, the Exponentially Weighted Average (EWA) algorithm is known to achieve O(log K) regret. We prove that a variant of EWA either achieves negative regret (i.e., the algorithm outperforms the best expert), or guarantees an O(log K) bound on both variance and regret. Building on this result, we show several examples of how the variance of predictions can be exploited in learning. In the online-to-batch analysis, we show that a large empirical variance allows us to stop the online-to-batch conversion early and outperform the risk of the best predictor in the class. We also recover the optimal rate of model selection aggregation when we do not consider early stopping. In online prediction with corrupted losses, we show that the effect of corruption on the regret can be compensated for by a large variance. In online selective sampling, we design an algorithm that samples less when the variance is large, while guaranteeing the optimal regret bound in expectation. In online learning with abstention, we use a term similar to the variance to derive the first high-probability O(log K) regret bound in this setting. Finally, we extend our results to the setting of online linear regression.
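For concreteness, the minimal sketch below implements standard EWA with exponential weights under the squared loss (one example of a strongly convex, bounded loss when predictions and outcomes lie in a bounded interval), and computes the two quantities the abstract trades off: the regret against the best expert and the cumulative squared difference between the learner's predictions and the best expert's predictions. This is an illustrative sketch only, not the paper's modified variant of EWA; the learning rate eta and the synthetic data in the usage comment are assumptions.

```python
import numpy as np

def ewa_with_variance(expert_preds, labels, eta=1.0):
    """Illustrative sketch of standard EWA for prediction with expert advice
    under the squared loss (not the paper's modified variant).

    expert_preds: array of shape (T, K), predictions of K experts over T rounds
    labels:       array of shape (T,), observed outcomes
    eta:          learning rate (an assumed tuning parameter)
    Returns the regret against the best expert and the "variance" term,
    i.e. the cumulative squared difference between the learner's predictions
    and the best expert's predictions.
    """
    T, K = expert_preds.shape
    cum_loss = np.zeros(K)          # cumulative squared loss of each expert
    learner_preds = np.zeros(T)
    learner_loss = 0.0

    for t in range(T):
        w = np.exp(-eta * cum_loss)
        w /= w.sum()                               # exponential weights
        learner_preds[t] = w @ expert_preds[t]     # weighted-average prediction
        learner_loss += (learner_preds[t] - labels[t]) ** 2
        cum_loss += (expert_preds[t] - labels[t]) ** 2

    best = np.argmin(cum_loss)                     # best expert in hindsight
    regret = learner_loss - cum_loss[best]
    variance = np.sum((learner_preds - expert_preds[:, best]) ** 2)
    return regret, variance

# Example usage with synthetic data (assumed, for illustration only):
# rng = np.random.default_rng(0)
# preds = rng.uniform(size=(100, 5)); y = rng.uniform(size=100)
# regret, variance = ewa_with_variance(preds, y, eta=2.0)
```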
Academic discipline: INF/01 - Informatica (Computer Science)
   European Learning and Intelligent Systems Excellence (ELISE)
   ELISE
   EUROPEAN COMMISSION
   H2020
   951847

   One Health Action Hub: University task force for the resilience of territorial ecosystems (1H_Hub), Strategic Line 3, Theme: One health, one earth
   1H_Hub
   UNIVERSITA' DEGLI STUDI DI MILANO

   Algorithms, Games, and Digital Markets (ALGADIMAR)
   ALGADIMAR
   MINISTERO DELL'ISTRUZIONE E DEL MERITO
   2017R9FHSR_006
2022
Curran Associates
https://proceedings.neurips.cc/paper_files/paper/2022/file/e473f29459a4a006d4e968537b135e40-Paper-Conference.pdf
Book Part (author)
Files in this record:
File: NeurIPS-2022-a-regret-variance-trade-off-in-online-learning-Paper-Conference.pdf
Access: restricted (copy available on request)
Type: Publisher's version/PDF
Size: 325.58 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/961316
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: not available