M. Abukmeil, S. Ferrari, A. Genovese, V. Piuri, F. Scotti, "Grad2VAE: An explainable variational autoencoder model based on online attentions preserving curvatures of representations", in: Image Analysis and Processing, S. Sclaroff, C. Distante, M. Leo, G.M. Farinella, F. Tombari (Eds.), Lecture Notes in Computer Science, Springer, 2022, ISBN 978-3-031-06427-2, pp. 670-681. Presented at the 21st ICIAP conference, Lecce, 2022 [10.1007/978-3-031-06427-2_56].

Grad2VAE: An explainable variational autoencoder model based on online attentions preserving curvatures of representations

M. Abukmeil (first author); S. Ferrari (second author); A. Genovese; V. Piuri (penultimate author); F. Scotti (last author)
2022

Abstract

Unsupervised learning (UL) is a class of machine learning (ML) that learns data representations, reduces dimensionality, and visualizes decisions without labels. Among UL models, the variational autoencoder (VAE) is regularized by variational inference to approximate the posterior distribution of large datasets. In this paper, we propose a novel explainable artificial intelligence (XAI) method to visually explain the VAE's behavior based on the second-order derivative of the latent space with respect to the encoding layers, which reflects the amount of acceleration required to map the encoding space to the decoding space. Our model, termed Grad2VAE, captures the local curvatures of the representations to build online attention maps that visually explain the model's behavior. Besides explaining the VAE, we employ our method for anomaly detection, where it outperforms recent deep UL models when generalized to large-scale anomaly data.
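The abstract derives attention from the second-order derivative of the latent space with respect to the encoding layers. As a rough illustration only, and not the authors' implementation, the following PyTorch sketch differentiates a toy encoder's latent code twice (with respect to the input rather than an intermediate layer, for brevity) and normalizes the absolute second-order gradient into an attention map. TinyEncoder and curvature_attention are hypothetical names; the smooth Tanh activations are an assumption chosen so that second derivatives do not vanish, as they would almost everywhere with piecewise-linear activations such as ReLU.

# Hypothetical sketch of second-order gradient attention (not the authors' code).
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Stand-in for a VAE encoder that outputs the latent mean for 28x28 inputs."""
    def __init__(self, latent_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.Tanh(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.Tanh(),
            nn.Flatten(),
            nn.Linear(16 * 7 * 7, latent_dim),  # latent mean
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def curvature_attention(encoder: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Attention map from the second-order derivative of the latent code,
    a rough proxy for the local curvature of the learned representation."""
    x = x.clone().requires_grad_(True)
    z = encoder(x)
    # First-order gradient, kept in the graph so it can be differentiated again.
    g1 = torch.autograd.grad(z.sum(), x, create_graph=True)[0]
    # Second-order gradient: the "acceleration" of the encoding.
    g2 = torch.autograd.grad(g1.sum(), x)[0]
    attn = g2.abs()
    # Normalize each map to [0, 1] for overlaying on the input image.
    return attn / (attn.amax(dim=(1, 2, 3), keepdim=True) + 1e-8)

# Example: attention maps for a batch of four 28x28 grayscale images.
encoder = TinyEncoder()
maps = curvature_attention(encoder, torch.rand(4, 1, 28, 28))  # (4, 1, 28, 28)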
Unsupervised Learning; VAE; XAI; Anomaly Detection
Academic discipline INF/01 - Computer Science
Academic discipline ING-INF/05 - Information Processing Systems
2022
Book Part (author)
Files in this record:

File: iciap21.pdf
Access: open access
Type: Post-print, accepted manuscript (version accepted by the publisher)
Size: 5.27 MB
Format: Adobe PDF

File: Grad2VAE - An Explainable Variational Autoencoder Model Based on Online Attentions Preserving Curvatures of Representations (final published).pdf
Access: open access
Type: Publisher's version/PDF
Size: 1.72 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/883271
Citations
  • PubMed Central: ND
  • Scopus: 0
  • Web of Science: 0