M. Abukmeil, S. Ferrari, A. Genovese, V. Piuri, F. Scotti, "Unsupervised learning from limited available data by β-NMF and dual autoencoder", in Proceedings of the 27th IEEE International Conference on Image Processing (ICIP 2020), Abu Dhabi, 2020, IEEE, pp. 81-85, ISBN 9781728163956. (Paper presented at the 27th IEEE International Conference on Image Processing, ICIP 2020, held in Abu Dhabi in 2020.)
Unsupervised learning from limited available data by β-NMF and dual autoencoder
M. Abukmeil;S. Ferrari;A. Genovese;V. Piuri;F. Scotti
2020
Abstract
Unsupervised Learning (UL) models are a class of Machine Learning (ML) models concerned with dimensionality reduction, data factorization, disentanglement, and learning representations of the data. UL models owe their popularity to their ability to learn without any predefined labels and to reduce noise and redundancy among data samples. However, generalizing UL models to different applications, including image generation, compression, encoding, and recognition, faces several challenges due to the limited data available for learning, data diversity, and high dimensionality. To overcome such challenges, we propose a partial learning procedure based on the β-Non-negative Matrix Factorization (β-NMF), which maps the data into two complementary subspaces that constitute generalized data-driven priors. Moreover, we employ a dual shallow Autoencoder (AE) to learn the subspaces separately or jointly for image reconstruction and visualization tasks. Our model achieves results superior to those reported in the literature when trained on a small amount of data and generalized to large-scale unseen data.
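A minimal sketch of the two-stage pipeline the abstract describes (β-NMF factorization into two complementary subspaces, then one shallow autoencoder per subspace) is given below. The specific choices here are my assumptions, not the authors' configuration: the KL β-divergence, 64 NMF components, the hidden-layer sizes, separate (rather than joint) training, the max-value normalization, and the final recombination step are all illustrative.

```python
# Illustrative sketch only: beta-NMF factorization followed by a dual shallow
# autoencoder, roughly mirroring the pipeline described in the abstract.
# Hyperparameters and normalization are assumptions, not the paper's settings.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import NMF

# Toy data: rows are flattened non-negative images (e.g., 28x28 pixels in [0, 1]).
X = np.random.rand(500, 784).astype(np.float32)

# beta-NMF: X ~= W @ H, solved with multiplicative updates for a beta-divergence
# (here the KL divergence, one common choice of beta).
nmf = NMF(n_components=64, beta_loss='kullback-leibler', solver='mu',
          init='nndsvda', max_iter=300)
W = nmf.fit_transform(X)          # (500, 64) per-sample coefficients
H = nmf.components_               # (64, 784) basis images

def shallow_ae(dim, hidden):
    """One-hidden-layer (shallow) autoencoder."""
    return nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, dim), nn.Sigmoid())

def train(ae, data, epochs=50, lr=1e-3):
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    x = torch.tensor(data, dtype=torch.float32)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(ae(x), x)
        loss.backward()
        opt.step()
    return ae

# Dual shallow AEs: one per NMF subspace, trained separately here
# (the paper also considers learning the subspaces jointly).
w_scale, h_scale = W.max() + 1e-8, H.max() + 1e-8
ae_W = train(shallow_ae(W.shape[1], 32), W / w_scale)
ae_H = train(shallow_ae(H.shape[1], 128), H / h_scale)

# Reconstructing unseen data: project onto the learned basis, refine both
# subspaces through their AEs, and recombine (an assumed recombination step).
X_new = np.random.rand(10, 784).astype(np.float32)
W_new = nmf.transform(X_new)
with torch.no_grad():
    W_hat = ae_W(torch.tensor(W_new / w_scale, dtype=torch.float32)).numpy() * w_scale
    H_hat = ae_H(torch.tensor(H / h_scale, dtype=torch.float32)).numpy() * h_scale
X_rec = W_hat @ H_hat            # (10, 784) reconstructed images
```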
| File | Type | Access | Size | Format |
|---|---|---|---|---|
| icip20.pdf | Post-print / accepted manuscript (version accepted by the publisher) | Open access | 8.92 MB | Adobe PDF |
| 09191252.pdf | Publisher's version/PDF | Open access | 5.44 MB | Adobe PDF |