This paper reports our multimedia information retrieval experiments carried out for the ImageCLEF track (ImageCLEFwiki). We propose a new multimedia model that combines textual and/or visual information and supports textual, visual, and multimedia queries. We evaluate the model on ImageCLEF data and compare the results obtained with the different modalities.
Our multimedia document model is based on a vector of textual and visual terms. Textual terms correspond to textual words, while visual terms are computed from local colour features. We obtain good results using only the textual part, and we show that the visual information is useful in some particular cases.
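The document model above can be sketched as a single bag-of-terms vector over two vocabularies. The following is a minimal illustration, not the authors' implementation: the tokenizer, the colour quantization scheme, and the `vis_*` naming are all assumptions standing in for the paper's actual local colour features.

```python
from collections import Counter

def textual_terms(text):
    # Simplified tokenization into lowercase word terms; the paper's
    # actual textual preprocessing is not specified in the abstract.
    return text.lower().split()

def visual_terms(local_colours, n_bins=8):
    # Map each local colour feature (r, g, b in 0..255) to a discrete
    # "visual word" by coarse quantization -- a hypothetical stand-in
    # for the local colour descriptors the paper uses.
    terms = []
    for r, g, b in local_colours:
        q = (r * n_bins // 256, g * n_bins // 256, b * n_bins // 256)
        terms.append(f"vis_{q[0]}_{q[1]}_{q[2]}")
    return terms

def document_vector(text, local_colours):
    # One term-frequency vector over both vocabularies, so textual,
    # visual, or mixed (multimedia) queries share the same representation.
    return Counter(textual_terms(text) + visual_terms(local_colours))

doc = document_vector("red sports car", [(250, 10, 10), (200, 20, 30)])
```

A textual query then matches on word terms, a visual query on `vis_*` terms, and a multimedia query on both, within the same vector space.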