
Biomarker Investigation Using Multiple Brain Measures from MRI Through Explainable Artificial Intelligence in Alzheimer’s Disease Classification / D. Coluzzi, V. Bordin, M.W. Rivolta, I. Fortel, L. Zhan, A. Leow, G. Baselli. - In: BIOENGINEERING. - ISSN 2306-5354. - 12:1(2025 Jan 17), pp. 82.1-82.28. [10.3390/bioengineering12010082]

Biomarker Investigation Using Multiple Brain Measures from MRI Through Explainable Artificial Intelligence in Alzheimer’s Disease Classification

D. Coluzzi (co-first author); M.W. Rivolta
2025

Abstract

As the leading cause of dementia worldwide, Alzheimer’s Disease (AD) has prompted significant interest in developing Deep Learning (DL) approaches for its classification. However, it currently remains unclear whether these models rely on established biological indicators. This work compares a novel DL model using structural connectivity (namely, BC-GCN-SE adapted from functional connectivity tasks) with an established model using structural magnetic resonance imaging (MRI) scans (namely, ResNet18). Unlike most studies primarily focusing on performance, our work places explainability at the forefront. Specifically, we define a novel Explainable Artificial Intelligence (XAI) metric, based on gradient-weighted class activation mapping. It aims to quantitatively measure how closely these models adhere to established AD biomarkers in their decision-making. The XAI assessment was conducted across 132 brain parcels. Results were compared to AD-relevant regions to measure adherence to domain knowledge. Then, differences in explainability patterns between the two models were assessed to explore the insights offered by each data type (i.e., MRI vs. connectivity). Classification performance was satisfactory in terms of both the median true positive (ResNet18: 0.817, BC-GCN-SE: 0.703) and true negative rates (ResNet18: 0.816; BC-GCN-SE: 0.738). Statistical tests (p < 0.05) and ranking of the 15% most relevant parcels revealed the involvement of target areas: the medial temporal lobe for ResNet18 and the default mode network for BC-GCN-SE. Additionally, our findings suggest that different imaging modalities provide complementary information to DL models. This lays the foundation for bioengineering advancements in developing more comprehensive and trustworthy DL models, potentially enhancing their applicability as diagnostic support tools for neurodegenerative diseases.
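The per-parcel XAI assessment described above (aggregating a Grad-CAM relevance map over 132 atlas parcels and ranking the top 15%) can be sketched as follows. This is a minimal illustration, not the authors' code: the function name `parcel_relevance`, the synthetic volume shape, and the use of mean relevance per parcel are assumptions for demonstration only.

```python
import numpy as np

def parcel_relevance(cam, atlas, n_parcels=132):
    """Average a Grad-CAM relevance volume within each atlas parcel.

    cam   : 3D array of voxel-wise relevance (heatmap in image space)
    atlas : 3D integer array of parcel labels, 1..n_parcels (0 = background)
    """
    scores = np.zeros(n_parcels)
    for p in range(1, n_parcels + 1):
        mask = atlas == p
        if mask.any():
            scores[p - 1] = cam[mask].mean()
    return scores

# Synthetic example: random relevance map and a random 132-parcel atlas.
rng = np.random.default_rng(0)
cam = rng.random((64, 64, 64))               # stand-in Grad-CAM heatmap
atlas = rng.integers(1, 133, (64, 64, 64))   # parcel labels 1..132

scores = parcel_relevance(cam, atlas)
top_k = max(1, int(round(0.15 * 132)))       # 15% most relevant parcels
top_parcels = np.argsort(scores)[::-1][:top_k] + 1  # 1-based parcel IDs
```

In practice the heatmap would first be resampled to the atlas space, and the resulting ranking compared against AD-relevant regions (e.g., medial temporal lobe parcels).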
English
Alzheimer’s disease; explainable artificial intelligence; magnetic resonance imaging; neuroimaging biomarkers; structural connectivity;
Sector INFO-01/A - Computer Science
Sector IBIO-01/A - Bioengineering
Article
Anonymous peer review
Scientific publication
Goal 3: Good health and well-being
   MUSA - Multilayered Urban Sustainability Action
   MUSA
   Ministero dell'Università e della Ricerca (Italian Ministry of University and Research)
17 Jan 2025
MDPI
Volume: 12
Issue: 1
Article number: 82
Pages: 1-28 (28 pages)
Published
Journal with international relevance
crossref
I adhere
info:eu-repo/semantics/article
open
Research products::01 - Journal article
7
262
Article (author)
Journal with Impact Factor
D. Coluzzi, V. Bordin, M.W. Rivolta, I. Fortel, L. Zhan, A. Leow, G. Baselli
Files in this record:

J33_Bioengineering.pdf
Access: open access
Type: Publisher's version/PDF
License: Creative Commons
Size: 11.31 MB
Format: Adobe PDF

J33_Bioengineering_compressed.pdf
Access: open access
Description: Compressed
Type: Publisher's version/PDF
License: Creative Commons
Size: 625.25 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1177015
Citations
  • PMC: ND
  • Scopus: 7
  • Web of Science: 7
  • OpenAlex: ND