A statistical mechanics framework for Bayesian deep neural networks beyond the infinite-width limit / R. Pacelli, S. Ariosto, M. Pastore, F. Ginelli, M. Gherardi, P. Rotondo. - In: NATURE MACHINE INTELLIGENCE. - ISSN 2522-5839. - 5:12(2023 Dec 18), pp. 1497-1507. [10.1038/s42256-023-00767-6]
A statistical mechanics framework for Bayesian deep neural networks beyond the infinite-width limit
M. Pastore; M. Gherardi (penultimate author); P. Rotondo (last author)
2023
Abstract
Despite the practical success of deep neural networks, a comprehensive theoretical framework that can predict practically relevant scores, such as the test accuracy, from knowledge of the training data is currently lacking. Huge simplifications arise in the infinite-width limit, in which the number of units Nℓ in each hidden layer (ℓ = 1, …, L, where L is the depth of the network) far exceeds the number P of training examples. This idealization, however, blatantly departs from the reality of deep learning practice. Here we use the toolset of statistical mechanics to overcome these limitations and derive an approximate partition function for fully connected deep neural architectures, which encodes information on the trained models. The computation holds in the thermodynamic limit, where both Nℓ and P are large and their ratio αℓ = P/Nℓ is finite. This advance allows us to obtain: (1) a closed formula for the generalization error associated with a regression task in a one-hidden-layer network with finite α1; (2) an approximate expression of the partition function for deep architectures (via an effective action that depends on a finite number of order parameters); and (3) a link between deep neural networks in the proportional asymptotic limit and Student's t-processes.
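As a minimal sketch of the central object the abstract refers to, the LaTeX block below writes the standard Bayesian partition function for a network trained with a quadratic loss, together with the classical scale-mixture identity that underlies the connection between Gaussian processes and Student's t-processes. The notation (β for the inverse temperature, f(x; θ) for the network output, K for a kernel, ν for the degrees of freedom) is conventional and assumed here, not quoted from the paper.

```latex
% Bayesian partition function over network parameters \theta, for a
% training set \{(x^\mu, y^\mu)\}_{\mu=1}^P with quadratic loss and
% inverse temperature \beta (conventional notation, assumed here):
\[
  Z \;=\; \int \mathrm{d}\theta \, P(\theta)\,
  \exp\!\left( -\frac{\beta}{2} \sum_{\mu=1}^{P}
  \bigl( f(x^\mu; \theta) - y^\mu \bigr)^{2} \right),
\]
% evaluated in the proportional (thermodynamic) limit
% $P, N_\ell \to \infty$ with $\alpha_\ell = P/N_\ell$ held finite.

% Classical scale-mixture identity behind Student's t-processes:
% a Gaussian process whose overall variance carries an inverse-gamma
% prior is marginally a Student's t-process with \nu degrees of freedom.
\[
  f \mid \sigma^{2} \sim \mathcal{GP}\bigl(0, \sigma^{2} K\bigr),
  \qquad
  \sigma^{2} \sim \mathrm{Inv\mbox{-}Gamma}\!\left(\tfrac{\nu}{2}, \tfrac{\nu}{2}\right)
  \;\;\Longrightarrow\;\;
  f \sim \mathcal{TP}\bigl(\nu, 0, K\bigr).
\]
```

Both displays are standard and serve only to fix notation; the paper's actual effective action and its finite-αℓ corrections are derived in the full text.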
File | Access | Type | Size | Format
---|---|---|---|---
s42256-023-00767-6.pdf | Restricted access | Publisher's version/PDF | 1.97 MB | Adobe PDF
2209.04882.pdf | Open access | Preprint (manuscript submitted to the publisher) | 1.64 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.