Dynamic Regret Reduces to Kernelized Static Regret / A. Jacobsen, A. Rudi, F. Orabona, N. Cesa-Bianchi. - In: Advances in Neural Information Processing Systems 38 (NeurIPS 2025) / edited by D. Belgrave, C. Zhang, H. Lin, R. Pascanu, P. Koniusz, M. Ghassemi, and N. Chen. - Curran Associates, 2025. - pp. 172984-173028.
Dynamic Regret Reduces to Kernelized Static Regret
F. Orabona (penultimate author); N. Cesa-Bianchi (last author)
2025
Abstract
We study dynamic regret in online convex optimization, where the objective is to achieve low cumulative loss relative to an arbitrary benchmark sequence. By observing that competing with an arbitrary sequence of comparators $u_1, \dots, u_T$ in $W \subseteq \mathbb{R}^d$ can be reframed as competing with a fixed comparator function $u : [1, T] \to W$, we cast dynamic regret minimization as a static regret problem in a function space. By carefully constructing a suitable function space in the form of a Reproducing Kernel Hilbert Space (RKHS), our reduction enables us to recover the optimal $R_T(u_1, \dots, u_T) = O\big(\sqrt{T \sum_t \|u_t - u_{t-1}\|}\big)$ dynamic regret guarantee in the setting of linear losses, and yields new scale-free and directionally-adaptive dynamic regret guarantees. Moreover, unlike prior dynamic-to-static reductions, which are valid only for linear losses, our reduction holds for any sequence of losses, allowing us to recover $O\big(\|u\|_{\mathcal{H}}^2 + d_{\mathrm{eff}}(\lambda) \ln T\big)$ bounds when the losses have meaningful curvature, where $d_{\mathrm{eff}}(\lambda)$ is a measure of the complexity of the RKHS. Despite working in an infinite-dimensional space, the resulting reduction leads to algorithms that are computable in practice, due to the reproducing property of RKHSs.
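To make the reframing concrete, here is a minimal sketch of the reduction as described in the abstract, written in the scalar case for readability (the paper's $W \subseteq \mathbb{R}^d$ setting would use a vector-valued construction); the symbols $\ell_t$, $w_t$, $R_T$, $K$, and $\mathcal{H}$ follow the abstract's notation, and the displays below are an illustrative sketch rather than the paper's exact statement.

```latex
% Sketch of the dynamic-to-static reduction described in the abstract.
% Dynamic regret of iterates w_1,...,w_T against comparators u_1,...,u_T:
\[
  R_T(u_1,\dots,u_T) \;=\; \sum_{t=1}^{T} \ell_t(w_t) - \sum_{t=1}^{T} \ell_t(u_t).
\]
% Identify the sequence with a single function u : [1,T] \to W with u(t) = u_t.
% If u lies in an RKHS \mathcal{H} with kernel K, the reproducing property
% u(t) = \langle u, K(t,\cdot)\rangle_{\mathcal{H}} rewrites every comparator
% evaluation as an inner product with the one fixed element u of \mathcal{H}:
\[
  R_T(u_1,\dots,u_T)
  \;=\; \sum_{t=1}^{T} \ell_t(w_t)
        - \sum_{t=1}^{T} \ell_t\big(\langle u, K(t,\cdot)\rangle_{\mathcal{H}}\big),
\]
% i.e., the static regret against the fixed comparator u in \mathcal{H}; any
% static-regret algorithm over \mathcal{H} therefore yields a dynamic-regret
% guarantee, with kernel evaluations keeping the updates computable.
```

Under this identification, a static bound such as the $O\big(\|u\|_{\mathcal{H}}^2 + d_{\mathrm{eff}}(\lambda) \ln T\big)$ guarantee quoted above transfers directly to dynamic regret, with the RKHS norm $\|u\|_{\mathcal{H}}$ serving, roughly speaking, as a complexity measure of the comparator sequence under the chosen kernel.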
| File | Size | Format |
|---|---|---|
| NeurIPS-2025-dynamic-regret-reduces-to-kernelized-static-regret-Paper-Conference.pdf | 556.34 kB | Adobe PDF |

Access: open access
Type: post-print, accepted manuscript (version accepted by the publisher)
License: Creative Commons