Search and retrieval of multimedia content based on the evoked emotion is an interesting scientific field with numerous applications. This paper proposes a method that fuses two heterogeneous modalities, i.e. music and electroencephalographic (EEG) signals, both for predicting emotional dimensions in the valence-arousal plane and for addressing four binary classification tasks, namely high/low arousal, positive/negative valence, high/low dominance, and high/low liking. The proposed solution exploits Mel-scaled and EEG spectrograms feeding a k-medoids clustering scheme based on canonical correlation analysis. A thorough experimental campaign carried out on a publicly available dataset confirms the efficacy of this approach: despite its low computational cost, it surpasses state-of-the-art results and, most importantly, does so in a user-independent manner.
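A minimal illustrative sketch of the pipeline described in the abstract, assuming per-trial Mel-scaled and EEG spectrogram features, a CCA projection into a shared space, and k-medoids clustering of the fused representation. Feature dimensions, component counts, the synthetic data, and the mapping of clusters to the binary emotion labels are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch: fuse Mel-scaled audio and EEG spectrogram features
# via canonical correlation analysis (CCA), then group trials with k-medoids.
import numpy as np
import librosa
from scipy.signal import spectrogram
from sklearn.cross_decomposition import CCA
from sklearn_extra.cluster import KMedoids  # pip install scikit-learn-extra


def mel_features(audio, sr=44100, n_mels=64):
    """Time-averaged Mel-scaled log-spectrogram for one music excerpt."""
    S = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(S).mean(axis=1)            # shape: (n_mels,)


def eeg_features(eeg, fs=128, nperseg=128):
    """Time-averaged EEG log-power spectrogram, concatenated over channels."""
    feats = []
    for channel in eeg:                                    # eeg: (n_channels, n_samples)
        _, _, Sxx = spectrogram(channel, fs=fs, nperseg=nperseg)
        feats.append(10 * np.log10(Sxx + 1e-12).mean(axis=1))
    return np.concatenate(feats)                           # shape: (n_channels * n_freqs,)


# Synthetic stand-in for a set of trials (real data would come from a
# DEAP-style recording of music listening with simultaneous EEG).
rng = np.random.default_rng(0)
n_trials = 40
X_audio = np.stack([mel_features(rng.standard_normal(44100)) for _ in range(n_trials)])
X_eeg = np.stack([eeg_features(rng.standard_normal((32, 1280))) for _ in range(n_trials)])

# Project both modalities into a shared canonical space, fuse, and cluster.
cca = CCA(n_components=8)
A, B = cca.fit_transform(X_audio, X_eeg)
fused = np.hstack([A, B])

labels = KMedoids(n_clusters=2, random_state=0).fit_predict(fused)
print(labels)  # cluster indices, later mapped to e.g. high/low arousal
```

The two-cluster setting mirrors one of the binary tasks (e.g. high/low arousal); the same fused representation could be reused for the other three binary tasks or for regression in the valence-arousal plane.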
Title: | Fusing Acoustic and Electroencephalographic Modalities for User-Independent Emotion Prediction |
Authors: | |
Keywords: | music emotion prediction; EEG emotion prediction; music EEG fusion; canonical correlation analysis; k-medoids clustering algorithm |
Scientific Disciplinary Sector: | INF/01 - Informatica (Computer Science) |
Publication Date: | Jul-2019 |
Conference-related organizations: | IEEE |
Digital Object Identifier (DOI): | 10.1109/ICCC.2019.00018 |
URL: | https://conferences.computer.org/serviceswp/2019/pdfs/ICCC2019-2zMzc10H4Ll2R40yDgIcLN/6OMU5r4LCJEvOUQA2xHDGt/36sHdhRzScA9BlnQRMmzzB.pdf |
Type: | Book Part (author) |
Appears in collections: | 03 - Contribution in volume |
Files in this item:
File | Description | Type | Licence
---|---|---|---
ICCC2019.pdf | Publisher's version/PDF | | Administrator (request a copy)