Exploiting Visual Concepts to Improve Text-Based Image Retrieval
Sabrina Tollari(1), Marcin Detyniecki(1), Christophe Marsala(1), Ali Fakeri(1), Massih-Reza Amini(2), Patrick Gallinari(1)
(1) Laboratoire d'Informatique Paris 6, 104, avenue du Président Kennedy, 75016 Paris
(2) National Research Council Canada, 123, boulevard Alexandre Taché, Gatineau, Canada
In this paper, we study how to automatically exploit
visual concepts in a text-based image retrieval task. First, we use
Forest of Fuzzy Decision Trees (FFDTs) to automatically annotate
images with visual concepts. Second, optionally using WordNet, we
match visual concepts with the textual query. Finally, we filter the
text-based image retrieval result list using the FFDTs. This study is
performed in the context of two tasks of the CLEF2008 international
campaign: the Visual Concept Detection Task (VCDT) (17
visual concepts) and the photographic retrieval task (ImageCLEFphoto)
(39 queries and 20k images). Our best VCDT run is the 4th
best of the 53 submitted runs. The ImageCLEFphoto results show
that there is a clear improvement, in terms of precision at 20, when
using the visual concepts explicitly appearing in the query.
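The filtering step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the score threshold, and the dictionaries of FFDT concept scores are all hypothetical, and the sketch simply demotes images whose detected query concepts fall below the threshold.

```python
# Hypothetical sketch of concept-based filtering of a text retrieval list.
# `concept_scores` stands in for per-image FFDT annotation scores; the
# threshold value 0.5 is an illustrative assumption.

def filter_by_concepts(ranked_images, concept_scores, query_concepts,
                       threshold=0.5):
    """Keep text-retrieved images whose score for every query concept
    reaches the threshold; demote the rest to the tail of the list."""
    kept, demoted = [], []
    for image_id in ranked_images:
        scores = concept_scores.get(image_id, {})
        if all(scores.get(c, 0.0) >= threshold for c in query_concepts):
            kept.append(image_id)
        else:
            demoted.append(image_id)
    return kept + demoted

# Toy example: a query matched to the visual concept "sky".
ranked = ["img1", "img2", "img3"]
scores = {"img1": {"sky": 0.9}, "img2": {"sky": 0.2}, "img3": {"sky": 0.7}}
print(filter_by_concepts(ranked, scores, ["sky"]))  # → ['img1', 'img3', 'img2']
```

Demoting rather than deleting low-scoring images is a conservative choice: it preserves recall while still favoring images that the concept detectors agree with.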