Joint Learning of Emotions in Music and Generalized Sounds / F. Simonetta, F. Certo, S. Ntalampiras. In: AM '24: Proceedings / edited by L. A. Ludovico, D. A. Mauro. [s.l.]: ACM, 2024. ISBN 979-8-4007-0968-5. pp. 302-307. Paper presented at the 19th Audio Mostly conference, held in Milan in 2024 [10.1145/3678299.3678328].

Joint Learning of Emotions in Music and Generalized Sounds

F. Simonetta; S. Ntalampiras
2024

Abstract

In this study, we aim to determine if generalized sounds and music can share a common emotional space, improving predictions of emotion in terms of arousal and valence. We propose the use of multiple datasets as a multi-domain learning technique. Our approach involves creating a common space encompassing features that characterize both generalized sounds and music, as they can evoke emotions in a similar manner. To achieve this, we utilized two publicly available datasets, namely IADS-E and PMEmo, following a standardized experimental protocol. We employed a wide variety of features that capture diverse aspects of the audio structure, including key parameters of spectrum, energy, and voicing. Subsequently, we performed joint learning on the common feature space, leveraging heterogeneous model architectures. Interestingly, this synergistic scheme outperforms the state-of-the-art in both sound and music emotion prediction. The code enabling full replication of the presented experimental pipeline is available at https://github.com/LIMUNIMI/MusicSoundEmotions
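The abstract outlines the core idea: pool the two datasets into one shared feature space and fit a single model that predicts arousal and valence for both generalized sounds and music. The sketch below illustrates that joint-learning setup in minimal form; the file names, column layout, and the random-forest regressor are assumptions made for illustration, not the paper's actual pipeline, which (including its AutoML-based model selection) is available in the linked repository.

```python
# Minimal sketch of joint (multi-domain) learning on a shared feature space.
# Assumptions (hypothetical, not from the paper): two feature tables,
# "iads_e_features.csv" and "pmemo_features.csv", with identical acoustic
# descriptor columns plus "arousal" and "valence" targets.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Load per-dataset feature tables and tag their domain.
iads = pd.read_csv("iads_e_features.csv")    # generalized sounds
pmemo = pd.read_csv("pmemo_features.csv")    # music excerpts
iads["domain"], pmemo["domain"] = "sound", "music"

# Common feature space: the acoustic descriptors shared by both tables.
targets = ["arousal", "valence"]
feature_cols = [c for c in iads.columns if c not in targets + ["domain"]]
data = pd.concat([iads, pmemo], ignore_index=True)

X = data[feature_cols].to_numpy()
y = data[targets].to_numpy()
X_tr, X_te, y_tr, y_te, dom_tr, dom_te = train_test_split(
    X, y, data["domain"].to_numpy(), test_size=0.2, random_state=0
)

# One joint regressor trained on the merged (sound + music) data.
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_tr, y_tr)

# Evaluate separately per domain to check that joint training helps both.
pred = model.predict(X_te)
for dom in ("sound", "music"):
    mask = dom_te == dom
    print(dom, r2_score(y_te[mask], pred[mask], multioutput="raw_values"))
```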
music; emotions; generalized sounds; affective computing; automl
Academic field INF/01 - Computer Science
Academic field INFO-01/A - Computer Science
2024
https://dl.acm.org/doi/10.1145/3678299.3678328
Book Part (author)
Files in this record:
2024_Joint_Learning_of_Emotions_in_Music_and_Generalized_Sounds.pdf
Type: Publisher's version/PDF, open access, 733.76 kB, Adobe PDF

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/1097070
Citations
  • PubMed Central: n/a
  • Scopus: 0
  • Web of Science: 0