
KA-GCN: Kernel-Attentive Graph Convolutional Network for 3D face analysis / F. Agnelli, G. Facchi, G. Grossi, R. Lanzarotti. - In: ARRAY. - ISSN 2590-0056. - 26:(2025 Jul), pp. 100392.1-100392.9. [10.1016/j.array.2025.100392]

KA-GCN: Kernel-Attentive Graph Convolutional Network for 3D face analysis

F. Agnelli (first)
G. Facchi (second)
G. Grossi (penultimate)
R. Lanzarotti (last)
2025

Abstract

Graph Structure Learning (GSL) methods address the limitations of real-world graphs by refining their structure and representation, allowing Graph Neural Networks (GNNs) to be applied to broader unstructured domains such as 3D face analysis. GSL can be viewed as dynamically learning the connection weights within a message-passing layer of a GNN, and in particular of a Graph Convolutional Network (GCN). A significant challenge for GSL methods arises when large datasets are unavailable, a common issue in 3D face analysis and especially in medical applications. This constraint limits the applicability of data-intensive GNN models, such as Graph Transformers, which, despite their effectiveness, require large amounts of training data. To address this limitation, we propose the Kernel-Attentive Graph Convolutional Network (KA-GCN). Our key finding is that integrating kernel-based and attention-based mechanisms to dynamically refine distances and learn the adjacency matrix within a GSL framework enhances the model's adaptability, making it particularly effective for 3D face analysis tasks and delivering strong performance in data-scarce scenarios. Comprehensive experiments on the FaceScape, Headspace, and Florence datasets, covering age, sexual dimorphism, and emotion recognition, demonstrate that our approach outperforms state-of-the-art models in both effectiveness and robustness, achieving an average accuracy improvement of 2%. The project page is available on GitHub.
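To make the mechanism described in the abstract more concrete, the following is a minimal PyTorch sketch of one way a layer could combine a learnable Gaussian kernel over pairwise distances with dot-product attention to produce a learned adjacency matrix for GCN-style message passing. This is an illustration under stated assumptions, not the authors' implementation: the class and parameter names (KernelAttentiveGSLLayer, log_sigma, proj) are hypothetical, and the actual KA-GCN architecture may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F


class KernelAttentiveGSLLayer(nn.Module):
    """Illustrative sketch (not the paper's code): learns an adjacency
    matrix from node features by combining a Gaussian kernel over pairwise
    distances with dot-product attention, then applies one GCN-style
    message-passing step over the learned graph."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.query = nn.Linear(in_dim, out_dim)
        self.key = nn.Linear(in_dim, out_dim)
        self.proj = nn.Linear(in_dim, out_dim)
        # Learnable log-bandwidth: lets the model refine how distances
        # translate into edge strengths.
        self.log_sigma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features, e.g. descriptors of 3D face points.
        dist2 = torch.cdist(x, x).pow(2)                        # (N, N) squared distances
        log_kernel = -dist2 / (2 * self.log_sigma.exp() ** 2)   # log of a Gaussian kernel

        q, k = self.query(x), self.key(x)
        scores = (q @ k.t()) / (q.size(-1) ** 0.5)              # scaled dot-product attention

        # Learned adjacency: attention logits biased by the kernel, so nearby
        # nodes are favoured while attention can still rewire the graph;
        # softmax row-normalises each node's incoming edge weights.
        adj = F.softmax(scores + log_kernel, dim=-1)            # (N, N)

        # One GCN-style propagation step over the learned adjacency.
        return F.relu(adj @ self.proj(x))


# Toy usage on random features for a 64-node face graph.
x = torch.randn(64, 16)
layer = KernelAttentiveGSLLayer(16, 32)
out = layer(x)   # shape (64, 32)

Note that adding the log-kernel to the attention logits before the softmax is equivalent to multiplying the attention weights by the Gaussian kernel prior to normalisation, which is one simple way to couple a distance-refining kernel with a learned attention mechanism as the abstract describes.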
3D face analysis; Action unit recognition; Attention network; Expression recognition; FaceScape dataset; Florence dataset; Graph Structure Learning (GSL); Headspace dataset; Sexual dimorphism recognition; Small dataset
Settore INFO-01/A - Informatica (Computer Science)
Jul 2025
7 Apr 2025
Article (author)
Files in this record:
File: 1-s2.0-S2590005625000190-main.pdf
Access: open access
Type: Publisher's version/PDF
License: Creative Commons
Size: 2.15 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1159795
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: 0
  • OpenAlex: 0