O. Velarde, L. Parra, P. Boldi, H. Makse. The Role of Fibration Symmetries in Geometric Deep Learning. 28 Aug 2024. doi:10.48550/arXiv.2408.15894

The Role of Fibration Symmetries in Geometric Deep Learning

P. Boldi (penultimate author); 2024

Abstract

Geometric Deep Learning (GDL) unifies a broad class of machine learning techniques from the perspective of symmetry, offering a framework for introducing problem-specific inductive biases, as in Graph Neural Networks (GNNs). However, the current formulation of GDL is limited to global symmetries, which are rarely found in real-world problems. We propose to relax GDL to allow for local symmetries, specifically fibration symmetries in graphs, to leverage the regularities of realistic instances. We show that GNNs apply the inductive bias of fibration symmetries and derive a tighter upper bound on their expressive power. Additionally, by identifying symmetries in networks, we collapse network nodes, thereby increasing computational efficiency during both inference and training of deep neural networks. The mathematical extension introduced here applies beyond graphs to manifolds, bundles, and grids, supporting the development of models with inductive biases induced by local symmetries that can lead to better generalization.
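
The following is a minimal, illustrative sketch in Python of the node-collapsing idea described in the abstract; it is not the authors' code. It groups the nodes of a directed graph into "fibers" by iteratively refining colors over the multisets of in-neighbor colors: nodes that end with the same color have isomorphic input trees, so a message-passing GNN computes identical embeddings for them, and each fiber can be collapsed to a single node of a smaller base graph. The function names, the refinement-based procedure, and the toy graph are assumptions made for illustration.

# Minimal sketch (not the authors' code): compute fibers by color refinement over
# in-neighborhoods and collapse each fiber to one node of a smaller base multigraph.
from collections import defaultdict

def fiber_partition(nodes, edges, rounds=None):
    """Map each node to a fiber id by refining colors over in-neighbor multisets."""
    in_neighbors = defaultdict(list)
    for u, v in edges:                     # edge u -> v: u is an in-neighbor of v
        in_neighbors[v].append(u)

    color = {v: 0 for v in nodes}          # start with a single color
    max_rounds = rounds if rounds is not None else len(nodes)
    for _ in range(max_rounds):
        # New signature = old color plus the sorted multiset of in-neighbor colors.
        signature = {
            v: (color[v], tuple(sorted(color[u] for u in in_neighbors[v])))
            for v in nodes
        }
        relabel = {s: i for i, s in enumerate(sorted(set(signature.values())))}
        new_color = {v: relabel[signature[v]] for v in nodes}
        if new_color == color:             # partition is stable
            break
        color = new_color
    return color

def collapse(nodes, edges, color):
    """Build the base multigraph: one node per fiber, in-edges copied from one representative."""
    rep = {}
    for v in sorted(nodes):
        rep.setdefault(color[v], v)        # first node seen represents its fiber
    in_neighbors = defaultdict(list)
    for u, v in edges:
        in_neighbors[v].append(u)
    base_nodes = sorted(rep)
    base_edges = [(color[u], fiber) for fiber, v in rep.items() for u in in_neighbors[v]]
    return base_nodes, base_edges

if __name__ == "__main__":
    # Toy graph: b and c receive the same input from a, so they fall into one fiber.
    nodes = ["a", "b", "c", "d"]
    edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
    color = fiber_partition(nodes, edges)
    print(color)                           # {'a': 0, 'b': 1, 'c': 1, 'd': 2}
    print(collapse(nodes, edges, color))   # ([0, 1, 2], [(0, 1), (1, 2), (1, 2)])

In this toy example the four-node graph collapses to a three-node base, and a GNN run on the base reproduces the embeddings of the original graph while doing less work, which is the efficiency gain the abstract refers to.
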
Computer Science - Learning
Academic field INFO-01/A - Computer Science
28 Aug 2024
http://arxiv.org/abs/2408.15894v1
Files in this item:
File: 2408.15894v1.pdf (open access)
Type: Post-print / accepted manuscript (version accepted by the publisher)
Size: 11.97 MB
Format: Adobe PDF

Use this identifier to cite or link to this item: https://hdl.handle.net/2434/1133396