
Deep Feature Learning for Medical Acoustics / A.M. Poire, F. Simonetta, S. Ntalampiras (Lecture Notes in Computer Science). - In: ICANN 2022: Artificial Neural Networks and Machine Learning / edited by E. Pimenidis, P. Angelov, C. Jayne, A. Papaleonidas, M. Aydin. - First edition. - [s.l.]: Springer, 2022. - ISBN 978-3-031-15936-7. - pp. 39-50. Paper presented at the 31st International Conference on Artificial Neural Networks, held in Bristol, September 6-9, 2022 [10.1007/978-3-031-15937-4_4].

Deep Feature Learning for Medical Acoustics

F. Simonetta; S. Ntalampiras
2022

Abstract

The purpose of this paper is to compare different learnable frontends in medical acoustics tasks. A framework has been implemented to classify human respiratory sounds and heartbeats into two categories, i.e. healthy or affected by pathologies. After obtaining two suitable datasets, we proceeded to classify the sounds using two learnable state-of-the-art frontends – LEAF and nnAudio – plus a non-learnable baseline frontend, i.e. Mel-filterbanks. The computed features are then fed into two different CNN models, namely VGG16 and EfficientNet. The frontends are carefully benchmarked in terms of the number of parameters, computational resources, and effectiveness. This work demonstrates how the integration of learnable frontends in neural audio classification systems may improve performance, especially in the field of medical acoustics. However, the usage of such frontends makes the amount of data needed even larger. Consequently, they are useful only if the amount of data available for training is large enough to support the feature learning process.
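The non-learnable baseline frontend mentioned in the abstract, Mel-filterbanks, can be sketched in plain NumPy. This is a minimal illustration of the general technique (windowed power spectrum, triangular mel filters, log compression); the sample rate, FFT size, and number of mel bands below are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    """Triangular mel filters mapping power-spectrum bins to n_mels bands."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):          # rising edge of the triangle
            fb[i, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):          # falling edge of the triangle
            fb[i, k] = (r - k) / max(r - c, 1)
    return fb

def log_mel_features(x, sr=4000, n_fft=256, hop=128, n_mels=40):
    """Frame the signal, take the power spectrum, apply mel filters, log-compress."""
    window = np.hanning(n_fft)
    frames = [x[s:s + n_fft] * window
              for s in range(0, len(x) - n_fft + 1, hop)]
    spec = np.abs(np.fft.rfft(np.stack(frames), axis=1)) ** 2
    return np.log(spec @ mel_filterbank(n_mels, n_fft, sr).T + 1e-6)

# Example: one second of a synthetic low-frequency tone at a 4 kHz sample rate
t = np.linspace(0, 1, 4000, endpoint=False)
feats = log_mel_features(np.sin(2 * np.pi * 60 * t))
print(feats.shape)  # (n_frames, n_mels)
```

In the pipeline described above, a feature matrix like `feats` would be treated as a single-channel image and passed to a CNN classifier such as VGG16 or EfficientNet; the learnable frontends (LEAF, nnAudio) replace the fixed filterbank with parameters optimized jointly with the classifier.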
Subject area: INF/01 - Computer Science
Book Part (author)
Files in this product:

978-3-031-15937-4_4.pdf
  Access: restricted
  Type: Publisher's version/PDF
  Size: 650.59 kB
  Format: Adobe PDF

2208.03084.pdf
  Access: under embargo until 07/09/2023
  Type: Post-print, accepted manuscript (version accepted by the publisher)
  Size: 727.59 kB
  Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/939593
Citations
  • Scopus: 0