
Enhancing 3D Face Analysis Using Graph Convolutional Networks with Kernel-Attentive Filters / F. Agnelli, O. Ghezzi, G. Blandano, J. Burger, G. Facchi, L. Schmid. In: SAC '25: Proceedings of the 40th ACM/SIGAPP Symposium on Applied Computing / edited by J. Hong, S. Battiato, C. Esposito. ACM, 14 May 2025. ISBN 979-8-4007-0629-5, pp. 638-644. Presented at the 40th ACM/SIGAPP Symposium on Applied Computing, Catania, 2025 [10.1145/3672608.3707910].

Enhancing 3D Face Analysis Using Graph Convolutional Networks with Kernel-Attentive Filters

F. Agnelli (first author); G. Blandano; J. Burger; G. Facchi (penultimate author); L. Schmid (last author)
2025

Abstract

Graph Structure Learning (GSL) techniques improve real-world graph representations, enabling Graph Neural Networks (GNNs) to be applied to unstructured domains like 3D face analysis. GSL dynamically learns connection weights within the message-passing layers of a GNN, particularly in Graph Convolutional Networks (GCNs). This becomes crucial when working with small-sample datasets, where methods like Transformers, which require large amounts of training data, are less effective. In this paper, we introduce a kernel-attentive Graph Convolutional Network (KA-GCN) designed to integrate a positional bias through attention mechanisms into small-scale 3D face analysis tasks. This approach combines kernel- and attention-based mechanisms to adjust different distance measures and learn the adjacency matrix. Extensive experiments on the FaceScape public dataset and a private dataset featuring facial dysmorphisms show that our method outperforms state-of-the-art models in both effectiveness and robustness. The code is available at https://github.com/phuselab/KA-CONV.
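The abstract describes combining a distance kernel with attention to learn the adjacency matrix. The following is a minimal illustrative sketch of that general idea, not the authors' implementation (which is at the GitHub URL above): a Gaussian kernel over 3D vertex positions is added as a positional bias to feature-based attention logits, and the resulting row-normalized matrix drives one GCN-style message-passing step. All function names, the choice of kernel, and the bandwidth parameter are assumptions made for illustration.

```python
# Hedged sketch (not the paper's code): a kernel-attentive adjacency
# built from a Gaussian distance kernel plus dot-product attention,
# followed by one GCN-style message-passing step.
import numpy as np

def kernel_attentive_adjacency(pos, feats, sigma=1.0):
    """pos: (N, 3) vertex coordinates; feats: (N, F) node features;
    sigma: kernel bandwidth (would be learnable in a trained model)."""
    # Pairwise squared Euclidean distances between vertices.
    diff = pos[:, None, :] - pos[None, :, :]
    d2 = (diff ** 2).sum(-1)                      # (N, N)
    kernel = np.exp(-d2 / (2.0 * sigma ** 2))     # Gaussian distance kernel

    # Scaled dot-product attention logits on node features.
    logits = feats @ feats.T / np.sqrt(feats.shape[1])

    # Add the kernel as a positional bias, then row-softmax.
    scores = logits + np.log(kernel + 1e-9)
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    A = np.exp(scores)
    return A / A.sum(axis=1, keepdims=True)       # each row sums to 1

def gcn_layer(A, X, W):
    """One message-passing step: aggregate neighbours, then project."""
    return np.maximum(A @ X @ W, 0.0)             # ReLU activation

rng = np.random.default_rng(0)
pos = rng.normal(size=(5, 3))    # 5 toy "face" vertices
X = rng.normal(size=(5, 8))      # node features
W = rng.normal(size=(8, 4))      # layer weights
A = kernel_attentive_adjacency(pos, X)
H = gcn_layer(A, X, W)
print(A.shape, H.shape)          # (5, 5) (5, 4)
```

In a trained model the bandwidth and attention projections would be learned end-to-end; here they are fixed only to keep the sketch self-contained.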
Sector INFO-01/A - Computer Science
14 May 2025
Book Part (author)
Files in this record:
3672608.3707910.pdf — Publisher's version/PDF, open access, 1.55 MB, Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1164795
Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science: 0
  • OpenAlex: 0