An architectural model for combining spatial-based and object-based information for attentive video analysis / G. Boccignone, V. Caggiano, A. Marcelli, P. Napoletano, G.D. Fiore. - In: Computer Architecture for Machine Perception, 2005. CAMP 2005. Proceedings of the Seventh International Workshop. - [s.l.] : IEEE, 2005. - ISBN 0769522556. - pp. 116-121. (Paper presented at the 7th International Workshop on Computer Architecture for Machine Perception, held in Palermo in 2005.)

An architectural model for combining spatial-based and object-based information for attentive video analysis

G. Boccignone
2005

Abstract

We present an architectural model for the interaction between top-down, object-based information and bottom-up, spatial-based information in determining visual attention shifts. We focus in particular on how the attentive process can take into account the processing of faces and multiple moving objects. To validate the model, experiments with eye-tracked human subjects are presented and discussed.
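
As a rough illustration of the idea described in the abstract, the sketch below fuses a bottom-up saliency map with top-down maps for faces and moving objects into a single priority map and shifts attention to its maximum. This is a minimal sketch of the general technique, not the paper's actual architecture; the function names, map inputs, and weights are all assumptions for illustration.

    import numpy as np

    # Hypothetical sketch: combine bottom-up (spatial) saliency with
    # top-down (object-based) face and motion maps into one priority
    # map; the weighting scheme is an assumption, not the paper's.
    def next_fixation(saliency, face_map, motion_map,
                      w_bu=0.5, w_face=0.3, w_motion=0.2):
        """Return (row, col) of the next attended location."""
        priority = w_bu * saliency + w_face * face_map + w_motion * motion_map
        return np.unravel_index(np.argmax(priority), priority.shape)

    # Toy usage: random maps stand in for real feature extraction.
    rng = np.random.default_rng(0)
    maps = [rng.random((48, 64)) for _ in range(3)]
    print(next_fixation(*maps))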
visual attention; scene analysis; vision
Settore ING-INF/05 - Information Processing Systems
2005
IEEE Computer Society Technical Committee on Pattern Analysis and Machine Intelligence (TCPAMI)
IEEE Computer Society Technical Committee on Parallel Processing
IEEE Computer Society Technical Committee on Computer Architecture
Book Part (author)
Files in this item:
01508174.pdf - Publisher's version/PDF, 359.61 kB, Adobe PDF (restricted access)

Documents in IRIS are protected by copyright and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2434/494341
Citations
  • PubMed Central: n/a
  • Scopus: 1
  • Web of Science (ISI): 0