Listen to your Mind’s (He)Art: A System for Affective Music Generation via Brain-Computer Interface / M. Tiraboschi, F. Avanzini, G. Boccignone. In: Proceedings of the 18th Sound and Music Computing Conference / edited by D.A. Mauro, S. Spagnol, A. Valle. SMC, 2021. ISBN 9788894541540. pp. 146-153. Paper presented at the 18th Sound and Music Computing Conference, held in Torino, 2021.

Listen to your Mind’s (He)Art: A System for Affective Music Generation via Brain-Computer Interface

M. Tiraboschi (first author); F. Avanzini (second author); G. Boccignone (last author)
2021

Abstract

We present an approach to the real-time generation of music driven by the affective state of the user, estimated from their electroencephalogram (EEG). This work explores strategies for real-time music generation applications using sensor data; applications range from responsive music for extended reality (XR) to art installations and music generation as feedback in pedagogical contexts. We developed a Brain-Computer Interface in the open-source platform OpenViBE, which manages communication with the EEG device and computes the relevant features. A benchmark dataset is used to evaluate the performance of supervised learning methods on the binary classification of valence and arousal. We also assessed performance with a reduced number of electrodes and frequency bands, to address the problems of lower budgets and noisy environments. We then address the requirements of a real-time music generation model and propose a modification to Magenta's MusicVAE, introducing a parameter for controlling inter-batch memory. Finally, we discuss possible strategies for mapping desired music features to a model's native input features: we present a Probabilistic Graphical Model of the mapping from valence/arousal to MusicVAE's latent variables, and we address dataset dimensionality problems with three probabilistic solutions.
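As a rough illustration of the inter-batch memory idea mentioned above, the minimal Python sketch below shows one plausible interpretation: blending each newly sampled latent vector with the previous batch's latent state via a memory coefficient, so that consecutive generated batches evolve smoothly rather than independently. The function name step_latent, the memory parameter, and the blending rule are illustrative assumptions, not the paper's actual implementation; MusicVAE configurations commonly use 512-dimensional latents.

    import numpy as np

    def step_latent(z_prev, z_new, memory=0.8):
        """Blend the previous batch's latent vector with a freshly
        sampled one. memory in [0, 1] controls inter-batch memory:
        0 -> independent batches, 1 -> frozen latent state.
        (Hypothetical sketch; not MusicVAE's actual API.)
        """
        return memory * z_prev + (1.0 - memory) * z_new

    # Example: a smooth trajectory through latent space across batches.
    rng = np.random.default_rng(0)
    z = rng.standard_normal(512)  # assumed 512-dim latent vector
    for _ in range(4):
        z = step_latent(z, rng.standard_normal(512), memory=0.8)
        # each z would be passed to the decoder to generate the next batch

Under this reading, memory trades off musical continuity against responsiveness to new affective input; the paper's actual mechanism should be consulted for the real formulation.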
bci; affective; music; bcmi; generation; ann; vae; magenta; model; smc; eeg; openViBE; pgm
Academic discipline: Settore INF/01 - Informatica (Computer Science)
2021
https://zenodo.org/record/5054146/files/SMC2021.pdf
Book Part (author)
Files in this record:
SMC_2021_paper_65.pdf (open access)
  Description: Main article
  Type: Publisher's version/PDF
  Size: 478.63 kB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/855057
Citations
  • PubMed Central: not available
  • Scopus: 1
  • Web of Science: not available