
Dynamic Regret Reduces to Kernelized Static Regret / A. Jacobsen, A. Rudi, F. Orabona, N. Cesa-Bianchi. - In: Advances in Neural Information Processing Systems 38 / edited by D. Belgrave, C. Zhang, H. Lin, R. Pascanu, P. Koniusz, M. Ghassemi, N. Chen. - Curran Associates, 2025. - pp. 172984-173028.

Dynamic Regret Reduces to Kernelized Static Regret

F. Orabona (second-to-last author); N. Cesa-Bianchi (last author)
2025

Abstract

We study dynamic regret in online convex optimization, where the objective is to achieve low cumulative loss relative to an arbitrary benchmark sequence. By observing that competing with an arbitrary sequence of comparators u_1, …, u_T in W ⊆ R^d can be reframed as competing with a fixed comparator function u : [1, T] → W, we cast dynamic regret minimization as a static regret problem in a function space. By carefully constructing a suitable function space in the form of a Reproducing Kernel Hilbert Space (RKHS), our reduction enables us to recover the optimal R_T(u_1, …, u_T) = O(√(Σ_t ∥u_t − u_{t−1}∥ T)) dynamic regret guarantee in the setting of linear losses, and yields new scale-free and directionally-adaptive dynamic regret guarantees. Moreover, unlike prior dynamic-to-static reductions—which are valid only for linear losses—our reduction holds for any sequence of losses, allowing us to recover O(∥u∥²_H + d_eff(λ) ln T) bounds when the losses have meaningful curvature, where d_eff(λ) is a measure of complexity of the RKHS. Despite working in an infinite-dimensional space, the resulting reduction leads to algorithms that are computable in practice, due to the reproducing property of RKHSs.
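As a concrete illustration of the quantities discussed in the abstract (this is a generic numerical sketch, not the paper's kernelized algorithm): for linear losses ℓ_t(w) = ⟨g_t, w⟩, dynamic regret compares the learner's plays w_t against a drifting comparator sequence u_t, whose difficulty is captured by the path length P_T = Σ_t ∥u_t − u_{t−1}∥. The helper names below are hypothetical.

```python
import numpy as np

def dynamic_regret(grads, plays, comparators):
    """Dynamic regret for linear losses:
    sum_t <g_t, w_t> - sum_t <g_t, u_t>."""
    return float(sum(g @ w for g, w in zip(grads, plays))
                 - sum(g @ u for g, u in zip(grads, comparators)))

def path_length(comparators):
    """P_T = sum_t ||u_t - u_{t-1}||: the complexity of the
    benchmark sequence (zero for a static comparator)."""
    return float(sum(np.linalg.norm(u - v)
                     for u, v in zip(comparators[1:], comparators[:-1])))

# A static comparator sequence recovers ordinary (static) regret:
grads = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
plays = [np.zeros(2), np.zeros(2)]
comps = [np.array([1.0, 0.0]), np.array([1.0, 0.0])]
print(path_length(comps))                        # 0.0 (no drift)
print(dynamic_regret(grads, plays, comps))       # -1.0
```

When the comparators drift, P_T grows, and guarantees of the form O(√(P_T · T)) degrade gracefully with the amount of drift.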
Subject area: INFO-01/A - Computer Science
   Learning in Markets and Society
   MINISTERO DELL'UNIVERSITA' E DELLA RICERCA
   2022EKNE5K_001

   European Lighthouse of AI for Sustainability (ELIAS)
   ELIAS
   EUROPEAN COMMISSION
   101120237

   One Health Action Hub: university task force for the resilience of territorial ecosystems (1H_Hub) - ONE HEALTH ACTION HUB
   UNIVERSITA' DEGLI STUDI DI MILANO
2025
https://proceedings.neurips.cc/paper_files/paper/2025/file/fc7f90ed14864bef5cafa89ba38f1993-Paper-Conference.pdf
Book Part (author)
Files in this record:
File: NeurIPS-2025-dynamic-regret-reduces-to-kernelized-static-regret-Paper-Conference.pdf
Access: open access
Type: Post-print, accepted manuscript (version accepted by the publisher)
License: Creative Commons
Size: 556.34 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1242463