Robust single-sample face recognition by sparsity-driven sub-dictionary learning using deep features / V. Cuculo, A. D'Amelio, G. Grossi, R. Lanzarotti, J. Lin. - In: SENSORS. - ISSN 1424-8220. - 19:1(2019 Jan 03), pp. 146.1-146.19. [10.3390/s19010146]

Robust single-sample face recognition by sparsity-driven sub-dictionary learning using deep features

V. Cuculo; A. D'Amelio; G. Grossi; R. Lanzarotti; J. Lin
2019

Abstract

Face recognition from a single reference image per subject is challenging, especially when the gallery of subjects is large. The difficulty increases further when images are acquired in unconstrained conditions. In this paper we address the challenging Single Sample Per Person (SSPP) problem on large datasets of images acquired in the wild, which may therefore exhibit variations in illumination, pose, and facial expression, as well as partial occlusions and low resolution. The proposed technique alternates a sparse dictionary learning step based on the method of optimal directions (MOD) with the iterative ℓ0-norm minimization algorithm k-LIMAPS. It operates on robust deep-learned features, with the image variability extended by standard augmentation techniques. Experiments show the effectiveness of our method against the difficulties introduced above: first, we report extensive experiments on the unconstrained LFW dataset with large galleries of up to 1680 subjects; second, we present experiments on very low-resolution test images down to 8 × 8 pixels; third, tests on the AR dataset are analyzed against specific challenges such as partial occlusions (disguises), facial expressions, and illumination variations. In all three scenarios our method outperforms state-of-the-art approaches adopting similar configurations.
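To make the pipeline described in the abstract concrete, the following is a minimal sketch of the alternating scheme it names: a MOD dictionary update interleaved with a sparse-coding step over deep feature vectors. It is illustrative only. The paper's k-LIMAPS ℓ0-norm solver is replaced here by orthogonal matching pursuit (scikit-learn's orthogonal_mp) as a generic stand-in, and the function name and parameters below are hypothetical, not taken from the paper.

import numpy as np
from sklearn.linear_model import orthogonal_mp

def mod_sparse_learning(Y, n_atoms, sparsity, n_iter=20, seed=0):
    # Y: (d, n) matrix of deep feature vectors, one column per (augmented) sample.
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)                      # unit-norm atoms
    for _ in range(n_iter):
        # Sparse-coding step: OMP as a stand-in for k-LIMAPS (the paper's l0 solver).
        X = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)
        # MOD update: least-squares dictionary fit, D = Y X^T (X X^T)^(-1).
        D = Y @ X.T @ np.linalg.pinv(X @ X.T)
        D /= np.linalg.norm(D, axis=0) + 1e-12          # re-normalize atoms
    return D, X

At test time, a probe feature vector would be sparsely coded over the learned (sub-)dictionaries and assigned to the subject whose atoms best reconstruct it, in the spirit of sparse-representation classification; the exact identification rule used in the paper is not reproduced here.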
face recognition; single sample per person; dictionary learning; optimal directions (MOD); Deep Convolutional Neural Network (DCNN) features; sparse recovery
Settore INF/01 - Informatica (Computer Science)
Settore ING-INF/05 - Sistemi di Elaborazione delle Informazioni (Information Processing Systems)
   Facial expressions and the interpretation of emotions: a computational approach integrating image acquisition and physiological signals, based on shape analysis and Bayesian networks
   MINISTERO DELL'ISTRUZIONE E DEL MERITO
   RBFR12VHR7_003
3 Jan 2019
Article (author)
Files in this item:
File: sensors-19-00146 (1).pdf
Access: open access
Type: Publisher's version/PDF
Size: 4.34 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/613254
Citations
  • PubMed Central: 2
  • Scopus: 28
  • Web of Science: 17