
Temporal variability in implicit online learning / N. Campolongo, F. Orabona. In: Advances in Neural Information Processing Systems (NeurIPS 2020). Neural Information Processing Systems Foundation, 2020. ISBN 978-1-7138-2954-6. Presented at the 34th Conference on Neural Information Processing Systems, December 6-12, 2020, held in Vancouver BC (Canada).

Temporal variability in implicit online learning

N. Campolongo (first author); F. Orabona (last author)
2020

Abstract

In the setting of online learning, implicit algorithms turn out to be highly successful from a practical standpoint. However, the tightest regret analyses show only marginal improvements over Online Mirror Descent. In this work, we shed light on this behavior by carrying out a careful regret analysis. We prove a novel static regret bound that depends on the temporal variability of the sequence of loss functions, a quantity often encountered when considering dynamic competitors. We show, for example, that the regret can be constant if the temporal variability is constant and the learning rate is tuned appropriately, without requiring smooth losses. Moreover, we present an adaptive algorithm that achieves this regret bound without prior knowledge of the temporal variability, and we prove a matching lower bound. Finally, we validate our theoretical findings on classification and regression datasets.
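As a rough illustration of the implicit updates the abstract refers to: a standard (explicit) online gradient step linearizes the loss at the current point, whereas an implicit update solves a proximal subproblem against the full loss. The sketch below contrasts the two for a squared loss, where the proximal subproblem happens to have a closed form. The function names and the choice of loss are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def explicit_update(x, a, b, eta):
    # Online gradient descent step on the squared loss
    # l_t(u) = 0.5 * (a @ u - b)**2, gradient evaluated at x_t.
    grad = (a @ x - b) * a
    return x - eta * grad

def implicit_update(x, a, b, eta):
    # Implicit (proximal) step:
    #   x_{t+1} = argmin_u  eta * l_t(u) + 0.5 * ||u - x||^2.
    # For the squared loss above this minimizer has the closed form below;
    # for general losses the subproblem is solved numerically.
    return x - eta * (a @ x - b) / (1.0 + eta * (a @ a)) * a
```

Note that the implicit step scales the correction by 1/(1 + eta * ||a||^2), so it remains stable even for large learning rates, which is one intuition behind the practical robustness of implicit algorithms.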
Academic sector: INFO-01/A - Computer Science
Funding:
  • AF: Small: Collaborative Research: New Representations for Learning Algorithms and Secure Computation. National Science Foundation, Directorate for Computer & Information Science & Engineering, award 1908111.
  • Collaborative Research: TRIPODS Institute for Optimization and Learning. National Science Foundation, Directorate for Computer & Information Science & Engineering, award 1925930.
Book Part (author)
Files in this record:
NeurIPS-2020-temporal-variability-in-implicit-online-learning-Paper.pdf - Publisher's version/PDF, Adobe PDF, 336.24 kB (restricted access)

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1145275
Citations:
  • Scopus: 13
  • Web of Science: 1