Grad2VAE: An explainable variational autoencoder model based on online attentions preserving curvatures of representations / M. Abukmeil, S. Ferrari, A. Genovese, V. Piuri, F. Scotti. - In: Image Analysis and Processing / edited by S. Sclaroff, C. Distante, M. Leo, G.M. Farinella, F. Tombari. - Springer, 2022. - (Lecture Notes in Computer Science). - ISBN 978-3-031-06427-2. - pp. 670-681. Presented at the 21st ICIAP conference, Lecce, 2022 [10.1007/978-3-031-06427-2_56].
Grad2VAE: An explainable variational autoencoder model based on online attentions preserving curvatures of representations
M. Abukmeil; S. Ferrari; A. Genovese; V. Piuri; F. Scotti
2022
Abstract
Unsupervised learning (UL) is a class of machine learning (ML) that learns from data, reduces dimensionality, and visualizes decisions without labels. Among UL models, the variational autoencoder (VAE) is a UL model regularized by variational inference to approximate the posterior distribution of large datasets. In this paper, we propose a novel explainable artificial intelligence (XAI) method to visually explain the VAE's behavior based on the second-order derivative of the latent space with respect to the encoding layers, which reflects the acceleration required to map the encoding space to the decoding space. Our model, termed Grad2VAE, captures the local curvatures of the representations to build online attention maps that visually explain the model's behavior. Besides explaining the VAE, we employ our method for anomaly detection, where our model outperforms recent UL deep models when generalized to large-scale anomaly data.
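The core idea of the abstract can be illustrated with a minimal sketch. Everything here is a hypothetical assumption, not the paper's implementation: a toy one-layer tanh encoder stands in for the VAE encoder, and the second-order derivative of the latent code with respect to each input dimension is estimated by central finite differences; the normalized magnitude of that curvature then plays the role of an attention map in the spirit of Grad2VAE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy encoder: input x (4,) -> latent mean z (2,) via a tanh layer.
W = rng.normal(size=(2, 4))

def encode(x):
    return np.tanh(W @ x)

def second_derivative(x, j, eps=1e-3):
    """Central finite-difference estimate of d^2 z / d x_j^2 (one value per latent dim)."""
    e = np.zeros_like(x)
    e[j] = eps
    return (encode(x + e) - 2.0 * encode(x) + encode(x - e)) / eps**2

x = rng.normal(size=4)

# Curvature-based "attention": magnitude of the second derivative, summed over
# latent dimensions and normalized to [0, 1].
curv = np.array([np.abs(second_derivative(x, j)).sum() for j in range(x.size)])
attention = curv / curv.max()
print(attention)
```

In a real deep model one would use automatic differentiation (e.g. a double backward pass) rather than finite differences, but the sketch shows the quantity being measured: how sharply the latent representation bends around the current input, per input dimension.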
Files:
- iciap21.pdf: open access; type: accepted manuscript (post-print); 5.27 MB; Adobe PDF
- Grad2VAE - An Explainable Variational Autoencoder Model Based on Online Attentions Preserving Curvatures of Representations (final published).pdf: open access; type: publisher's version/PDF; 1.72 MB; Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.