
Acoustic selfies for extraction of external ear features in mobile audio augmented reality / M. Geronazzo, J. Fantin, G. Sorato, G. Baldovino, F. Avanzini - In: VRST '16: Proceedings / [edited by] S.N. Spencer. - [s.l.]: ACM, 2016. - ISBN 9781450344913. - pp. 23-26. (Paper presented at the 22nd Conference on Virtual Reality Software and Technology, held in Munich in 2016 [10.1145/2993369.2993376].)

Acoustic selfies for extraction of external ear features in mobile audio augmented reality

F. Avanzini
2016

Abstract

Virtual and augmented realities are expected to become increasingly important in everyday life in the near future, and spatial audio technologies over headphones will play a pivotal role in application scenarios that involve mobility. This paper introduces the SelfEar project, aimed at low-cost acquisition and personalization of Head-Related Transfer Functions (HRTFs) on mobile devices. This first version focuses on capturing the individual spectral features that characterize external ear acoustics, through a self-adjustable procedure that guides users in collecting such information: the mobile device must be held at arm's length and positioned at several specific elevation points, while acoustic data are acquired by an audio augmented reality headset that embeds a pair of microphones at the listener's ear canals. A preliminary measurement session assesses the ability of the system to capture spectral features that are crucial for elevation perception. Moreover, a virtual experiment using a computational auditory model predicts clear vertical localization cues in the measured features.
binaural audio; head-related transfer function; headphones; mobile augmented reality; computational auditory model
Settore INF/01 - Informatica
Settore ING-INF/05 - Sistemi di Elaborazione delle Informazioni
2016
Book Part (author)
Files in this record:
File: geronazzo_vrst16.pdf
Access: restricted
Type: Publisher's version/PDF
Size: 551.89 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/656658
Citations
  • PMC: ND
  • Scopus: 10
  • Web of Science: 6
  • OpenAlex: ND