
Deep Neural Networks for Structured Data / M. Bianchini, G.M. Dimitri, M. Maggini, F. Scarselli (Studies in Computational Intelligence). - In: Computational Intelligence for Pattern Recognition / edited by W. Pedrycz and S.-M. Chen. - Berlin: Springer-Verlag, 2018. - ISBN 978-3-319-89628-1. - pp. 29-51 [10.1007/978-3-319-89629-8_2]

Deep Neural Networks for Structured Data

G.M. Dimitri
2018

Abstract

Learning machines for pattern recognition, such as neural networks or support vector machines, are usually conceived to process real-valued vectors with a predefined dimensionality, even though, in many real-world applications, the relevant information is inherently organized into entities and the relationships between them. Graph Neural Networks (GNNs), instead, can directly process structured data, guaranteeing universal approximation of a broad class of practically useful functions on graphs. GNNs, which do not strictly meet the definition of deep architectures, are based on an unfolding mechanism during learning that, in practice, yields networks with the same depth as the data structures they process. However, GNNs may be hindered by the long-term dependency problem, i.e., the difficulty in taking into account information coming from peripheral nodes of a graph, due to the local nature of the procedures for updating the state and the weights. To overcome this limitation, GNNs may be cascaded to form layered architectures, called Layered GNNs (LGNNs). Each GNN in the cascade is trained on the original graph "enriched" with the information computed by the previous layer, implementing a sort of incremental learning framework able to take progressively more distant information into account. The applicability of LGNNs is illustrated both on a classical problem in graph theory and on pattern recognition problems in bioinformatics.
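As an illustrative sketch only (not code from the chapter), the layered cascade described above can be summarized in Python as follows; train_gnn is a hypothetical placeholder for training a single GNN and collecting its per-node outputs, which are concatenated to the original node labels before the next layer is trained:

    import numpy as np

    def train_gnn(node_features, adjacency, targets):
        # Hypothetical placeholder: train one GNN on the graph and return
        # its per-node outputs (random values stand in for real outputs).
        rng = np.random.default_rng(0)
        return rng.standard_normal((node_features.shape[0], targets.shape[1]))

    def train_lgnn(node_features, adjacency, targets, num_layers=3):
        # Layered GNN cascade: each GNN is trained on the original graph
        # "enriched" with the outputs computed by the previous layer.
        enriched = node_features
        outputs = None
        for _ in range(num_layers):
            outputs = train_gnn(enriched, adjacency, targets)
            enriched = np.concatenate([node_features, outputs], axis=1)
        return outputs

The enrichment step is what allows later layers to exploit information that a single GNN, limited by its local state and weight updates, struggles to propagate from peripheral nodes.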
Graph Neural Networks; Deep Neural Networks; Protein Structure Prediction
Sector INFO-01/A - Computer Science
Sector IINF-05/A - Information Processing Systems
Book Part (author)
Files in this record:
456097_1_En_2_Chapter_Author.pdf
Open access
Description: Author proof
Type: Post-print, accepted manuscript, etc. (version accepted by the publisher)
License: Not specified
Size: 870.92 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1187533
Citations
  • PMC: ND
  • Scopus: 19
  • Web of Science: 16
  • OpenAlex: 21