
R. Pacelli, S. Ariosto, M. Pastore, F. Ginelli, M. Gherardi, P. Rotondo. A statistical mechanics framework for Bayesian deep neural networks beyond the infinite-width limit. Nature Machine Intelligence, 5(12):1497-1507, 18 Dec 2023. ISSN 2522-5839. DOI: 10.1038/s42256-023-00767-6.

A statistical mechanics framework for Bayesian deep neural networks beyond the infinite-width limit

M. Pastore; M. Gherardi (penultimate author); P. Rotondo (last author)
2023

Abstract

Despite the practical success of deep neural networks, a comprehensive theoretical framework that can predict practically relevant scores, such as the test accuracy, from knowledge of the training data is currently lacking. Huge simplifications arise in the infinite-width limit, in which the number of units Nℓ in each hidden layer (ℓ = 1, …, L, where L is the depth of the network) far exceeds the number P of training examples. This idealization, however, blatantly departs from the reality of deep learning practice. Here we use the toolset of statistical mechanics to overcome these limitations and derive an approximate partition function for fully connected deep neural architectures, which encodes information on the trained models. The computation holds in the thermodynamic limit, where both Nℓ and P are large and their ratio αℓ = P/Nℓ is finite. This advance allows us to obtain: (1) a closed formula for the generalization error associated with a regression task in a one-hidden-layer network with finite α1; (2) an approximate expression of the partition function for deep architectures (via an effective action that depends on a finite number of order parameters); and (3) a link between deep neural networks in the proportional asymptotic limit and Student's t-processes.
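For orientation only, the lines below sketch in LaTeX a conventional Bayesian partition function for a regression task, together with the proportional scaling regime named in the abstract. The quadratic (Gaussian-likelihood) loss, the inverse temperature β, and the generic weight prior P(θ) are standard textbook conventions used here for illustration; they are assumptions of this sketch, not details reproduced from the paper.

% Schematic Gibbs/Bayesian partition function of a fully connected network
% f(x; \theta) with weights \theta, prior P(\theta), trained on P pairs (x^\mu, y^\mu):
Z \;=\; \int \mathrm{d}\theta \, P(\theta)\,
        \exp\!\Big[ -\tfrac{\beta}{2} \sum_{\mu=1}^{P} \big( f(x^{\mu};\theta) - y^{\mu} \big)^{2} \Big]
% Proportional (thermodynamic) limit considered in the abstract:
% N_\ell \to \infty, \quad P \to \infty, \quad \alpha_\ell = P / N_\ell \ \text{finite}.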
Settore FIS/02 - Theoretical Physics, Mathematical Models and Methods
   Funding: FELLINI - FELLowship for Innovation at INFN, European Commission, Horizon 2020 Framework Programme, grant no. 754496
18 Dec 2023
Article (author)
Files in this record:

s42256-023-00767-6.pdf
   Access: restricted (copy available on request)
   Type: Publisher's version/PDF
   Size: 1.97 MB
   Format: Adobe PDF

2209.04882.pdf
   Access: open access
   Type: Pre-print (manuscript submitted to the publisher)
   Size: 1.64 MB
   Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1031546
Citations
  • PubMed Central: not available
  • Scopus: 3
  • Web of Science: not available