Majoration-Minimization for Sparse SVMs / A. Benfenati, E. Chouzenoux, G. Franchini, S. Latva-Äijö, D. Narnhofer, J. Pesquet, S. J. Scott, M. Yousefi (Springer INdAM Series). - In: Advanced Techniques in Optimization for Machine Learning and Imaging / edited by A. Benfenati, F. Porta, T. A. Bubba, M. Viola. - [s.l.]: Springer Nature, 2024. - ISBN 9789819767687. - pp. 31-54. Proceedings of the conference Advanced Techniques in Optimization for Machine Learning and Imaging, held in Rome. DOI: 10.1007/978-981-97-6769-4_3

Majoration-Minimization for Sparse SVMs

A. Benfenati
2024

Abstract

Several decades ago, Support Vector Machines (SVMs) were introduced for performing binary classification tasks under a supervised framework. Nowadays, they often outperform other supervised methods and remain one of the most popular approaches in the machine learning arena. In this work, we investigate the training of SVMs through the minimization of a smooth, sparsity-promoting regularized squared hinge loss. This choice paves the way to the application of fast training methods built on majorization-minimization approaches, benefiting from the Lipschitz differentiability of the loss function. Moreover, the proposed approach allows us to handle sparsity-preserving regularizers that promote the selection of the most significant features, thus enhancing performance. Numerical tests and comparisons conducted on three different datasets demonstrate the good performance of the proposed methodology in terms of qualitative metrics (accuracy, precision, recall, and F score) as well as computational cost.
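
To make the abstract's idea concrete, here is a minimal sketch of one majorization-minimization (MM) instance for this kind of problem: the Lipschitz-differentiable squared hinge loss is upper-bounded by a quadratic majorant at each iterate, and minimizing that majorant plus an l1 sparsity penalty reduces to a soft-thresholding step. The l1 penalty, the function names (mm_sparse_svm, soft_threshold), and the parameters (lam, n_iter) are illustrative assumptions, not the chapter's exact algorithm or regularizer.

import numpy as np

def squared_hinge_grad(w, X, y):
    # Gradient of sum_i max(0, 1 - y_i <x_i, w>)^2, which is
    # Lipschitz-differentiable with constant 2 * ||X||_2^2.
    active = np.maximum(1.0 - y * (X @ w), 0.0)
    return -2.0 * X.T @ (y * active)

def soft_threshold(v, tau):
    # Proximity operator of tau * ||.||_1 (closed form).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def mm_sparse_svm(X, y, lam=0.1, n_iter=500):
    # MM training loop: each iteration minimizes a quadratic majorant of
    # the loss plus lam * ||w||_1, i.e. a gradient step followed by
    # soft-thresholding. Labels y must be in {-1, +1}; no bias term here
    # (append a constant column to X if one is needed).
    L = 2.0 * np.linalg.norm(X, 2) ** 2  # curvature of the majorant
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        w = soft_threshold(w - squared_hinge_grad(w, X, y) / L, lam / L)
    return w

Predictions are then np.sign(X @ w), and the zero entries of w mark the features that the sparsity penalty discarded.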
Academic field: MATH-05/A - Numerical Analysis
2024
Book Part (author)
Files in this record:

2308.16858v1.pdf
Access: open access
Type: Pre-print (manuscript submitted to the publisher)
Size: 828.94 kB
Format: Adobe PDF

978-981-97-6769-4_3.pdf
Access: restricted access
Type: Publisher's version/PDF
Size: 926.36 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1116484