Distribution-Dependent Analysis of Gibbs-ERM Principle / I. Kuzborskij, N. Cesa-Bianchi, C. Szepesvari (Proceedings of Machine Learning Research). In: Conference on Learning Theory / edited by A. Beygelzimer, D. Hsu. PMLR, 2019, pp. 2028-2054. Paper presented at the 32nd Conference on Learning Theory (COLT), Phoenix, 2019.
Distribution-Dependent Analysis of Gibbs-ERM Principle
I. Kuzborskij; N. Cesa-Bianchi
2019
Abstract
Gibbs-ERM learning is a natural idealized model of learning with stochastic optimization algorithms (such as SGLD and, to some extent, SGD), while it also arises in other contexts, including PAC-Bayesian theory and sampling mechanisms. In this work we study the excess risk suffered by a Gibbs-ERM learner that uses non-convex, regularized empirical risk, with the goal of understanding the interplay between the data-generating distribution and learning in large hypothesis spaces. Our main results are *distribution-dependent* upper bounds on several notions of excess risk. We show that, in all cases, the distribution-dependent excess risk is essentially controlled by the *effective dimension* $\mathrm{tr}\left(\boldsymbol{H}^{\star} (\boldsymbol{H}^{\star} + \lambda \boldsymbol{I})^{-1}\right)$ of the problem, where $\boldsymbol{H}^{\star}$ is the Hessian matrix of the risk at a local minimum. This is a well-established notion of effective dimension appearing in several previous works, including the analyses of SGD and ridge regression, but ours is the first work that brings this dimension to the analysis of learning with Gibbs densities. The distribution-dependent view we advocate here improves upon earlier results of Raginsky et al. (2017), and can yield much tighter bounds depending on the interplay between the data-generating distribution and the loss function. The first part of our analysis focuses on the *localized excess risk* in the vicinity of a fixed local minimizer. This result is then extended to bounds on the *global excess risk*, by characterizing probabilities of local minima (and their complement) under Gibbs densities, a result which might be of independent interest.

File | Size | Format |
---|---|---|
kuzborskij19a.pdf (restricted access; post-print / accepted manuscript, version accepted by the publisher) | 420.67 kB | Adobe PDF |
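As an illustration of the quantity driving the bounds in the abstract, the effective dimension $\mathrm{tr}\left(\boldsymbol{H}^{\star} (\boldsymbol{H}^{\star} + \lambda \boldsymbol{I})^{-1}\right)$ can be computed directly from the spectrum of the Hessian. The sketch below (NumPy; the random Hessian `H` is a hypothetical stand-in, not data from the paper) shows how a fast-decaying spectrum makes the effective dimension much smaller than the ambient dimension:

```python
import numpy as np

def effective_dimension(H, lam):
    """Effective dimension tr(H (H + lam*I)^{-1}) of a symmetric PSD matrix H.

    Since H = Q diag(s) Q^T, the trace reduces to sum_i s_i / (s_i + lam).
    """
    eigvals = np.linalg.eigvalsh(H)
    return float(np.sum(eigvals / (eigvals + lam)))

# Hypothetical 100x100 Hessian with eigenvalues decaying as 1/i^2.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((100, 100)))  # random orthogonal basis
spectrum = 1.0 / (1 + np.arange(100)) ** 2
H = (Q * spectrum) @ Q.T  # Q diag(spectrum) Q^T

d_eff = effective_dimension(H, lam=0.1)
print(d_eff)  # much smaller than the ambient dimension 100
```

With a flat spectrum the effective dimension approaches the ambient dimension (e.g. for the identity matrix and $\lambda = 1$ it is exactly $d/2$), which is why the distribution-dependent bounds can be much tighter when the risk is curved only in a few directions.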
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.