Model-based customized binaural reproduction through headphones / M. Geronazzo, S. Spagnol, D. Rocchesso, F. Avanzini. In: Atti del XIX Colloquio di Informatica Musicale. [S.l.]: Associazione Informatica Musicale Italiana, 2012. ISBN 9788890341304. pp. 186-187. Paper presented at the 19th Colloquio di Informatica Musicale, held in Trieste in 2012.

Model-based customized binaural reproduction through headphones

D. Rocchesso; F. Avanzini
2012

Abstract

Generalized head-related transfer functions (HRTFs) represent a cheap and straightforward means of providing 3D rendering in headphone reproduction. However, they are known to produce evident sound localization errors, including incorrect perception of elevation, front-back reversals, and lack of externalization, especially when head tracking is not used during reproduction. Individual anthropometric features therefore play a key role in characterizing HRTFs. On the other hand, measuring HRTFs on a significant number of subjects is both expensive and inconvenient. This short paper briefly presents a structural HRTF model that, when rendered through the proposed hardware (wireless headphones augmented with motion and vision sensors), can be used for efficient and immersive sound reproduction. Particular attention is devoted to the contribution of the external ear to the HRTF: data and results collected to date by the authors allow the model to be parametrized according to individual anthropometric data, which in turn can be estimated automatically through straightforward image analysis. The proposed hardware and software can be used to render scenes with multiple audiovisual objects in a number of contexts, such as computer games, cinema, edutainment, and many others.
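
As a rough illustration of the structural approach outlined above, the sketch below (Python, using NumPy and SciPy) cascades an interaural time delay, a Brown-Duda-style spherical-head shadow filter, and a single pinna notch whose centre frequency is derived from an ear measurement. This is a minimal sketch under stated assumptions, not the model presented in the paper: the parameter names (head_radius, pinna_depth) and the notch mapping are illustrative placeholders for the anthropometric parametrization discussed in the abstract.

# Minimal illustrative sketch (NOT the model from the paper): a structural
# binaural renderer combining an interaural time delay, a spherical-head
# shadow filter (Brown-Duda style), and one pinna notch whose frequency is
# derived from a hypothetical ear-cavity depth. Parameter names are
# illustrative assumptions.
import numpy as np
from scipy.signal import bilinear, iirnotch, lfilter

C = 343.0  # speed of sound in air, m/s


def itd_samples(azimuth_rad, head_radius, fs):
    """Woodworth ray-tracing interaural time difference, in whole samples.
    Valid for sources in the frontal half-plane (|azimuth| <= pi/2)."""
    itd = (head_radius / C) * (azimuth_rad + np.sin(azimuth_rad))
    return int(round(itd * fs))


def head_shadow(x, theta_ear_rad, head_radius, fs):
    """First-order head-shadow filter; theta_ear_rad is the angle between
    the source direction and the ear axis (0 = source facing that ear)."""
    theta_min = np.deg2rad(150.0)   # angle of maximum shadowing
    alpha_min = 0.1
    alpha = (1 + alpha_min / 2) + (1 - alpha_min / 2) * np.cos(np.pi * theta_ear_rad / theta_min)
    beta = 2 * C / head_radius      # analog corner frequency, rad/s
    b, a = bilinear([alpha, beta], [1.0, beta], fs)  # H(s) = (alpha*s + beta) / (s + beta)
    return lfilter(b, a, x)


def pinna_notch(x, pinna_depth, fs):
    """Single spectral notch at the first reflection frequency of a cavity
    of depth pinna_depth (metres) -- an illustrative mapping only."""
    f_notch = C / (2.0 * pinna_depth)   # must stay below fs/2
    b, a = iirnotch(f_notch, Q=10.0, fs=fs)
    return lfilter(b, a, x)


def render(mono, azimuth_rad, head_radius=0.0875, pinna_depth=0.02, fs=44100):
    """Return (left, right) signals for a source at the given azimuth
    (0 = front, positive = to the right, |azimuth| <= pi/2)."""
    delay = itd_samples(abs(azimuth_rad), head_radius, fs)
    # Angle between the source direction and each ear axis (+/- 90 degrees).
    theta_r = abs(azimuth_rad - np.pi / 2)
    theta_l = abs(azimuth_rad + np.pi / 2)
    right = pinna_notch(head_shadow(mono, theta_r, head_radius, fs), pinna_depth, fs)
    left = pinna_notch(head_shadow(mono, theta_l, head_radius, fs), pinna_depth, fs)
    # Delay the ear farther from the source.
    pad = np.zeros(delay)
    if azimuth_rad >= 0:
        left = np.concatenate([pad, left])[:len(mono)]
    else:
        right = np.concatenate([pad, right])[:len(mono)]
    return left, right

In this sketch the anthropometric customization reduces to two scalars; the model described in the paper instead parametrizes the external-ear contribution from individual anthropometric data estimated through image analysis.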
Settore INF/01 - Informatica (Computer Science)
Settore ING-INF/05 - Sistemi di Elaborazione delle Informazioni (Information Processing Systems)
2012
Book Part (author)
Files in this record:
File: geronazzo_cim12.pdf
Access: open access
Type: Publisher's version/PDF
Size: 50.38 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/657921