In this paper we show how to achieve more effective Query-By-Example processing by using active mechanisms of biological vision, such as saccadic eye movements and fixations. In particular, we discuss how to generate two fixation sequences from a query image Iq and a test image It of the data set, respectively, and how to compare the two sequences in order to compute a similarity measure between the two images. We also show how the approach can be used to discover and represent the hidden semantic associations among images, in terms of categories, which in turn drive the query process.
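The abstract does not specify which sequence-comparison metric is used; as a minimal illustrative sketch, one common way to compare two ordered fixation sequences of possibly different lengths is dynamic time warping (DTW) over the fixation coordinates. The function name and data layout below are hypothetical, not taken from the paper.

```python
import math

def dtw_distance(seq_q, seq_t):
    """DTW distance between two fixation sequences, each a list of
    (x, y) fixation points; a lower value means more similar scanpaths.
    This is a generic DTW sketch, not the paper's actual measure."""
    n, m = len(seq_q), len(seq_t)
    INF = float("inf")
    # D[i][j] = cost of best alignment of first i and first j fixations
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(seq_q[i - 1], seq_t[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # skip a query fixation
                                 D[i][j - 1],      # skip a test fixation
                                 D[i - 1][j - 1])  # match the two fixations
    return D[n][m]

# Identical scanpaths align with zero cost:
print(dtw_distance([(0, 0), (1, 1)], [(0, 0), (1, 1)]))  # → 0.0
```

A similarity measure can then be derived from the distance, e.g. by a decreasing function such as 1 / (1 + distance).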
|Title:||Context-sensitive Queries for Image Retrieval in Digital Libraries|
BOCCIGNONE, GIUSEPPE (First author)
|Keywords:||Animate vision; Image indexing; Image retrieval|
|Publication date:||Aug-2008|
|Digital Object Identifier (DOI):||10.1007/s10844-007-0040-5|
|Appears in types:||01 - Journal article|