2009 | OriginalPaper | Chapter
CLEF 2008 Ad-Hoc Track: Comparing and Combining Different IR Approaches
Authors : Jens Kürsten, Thomas Wilhelm, Maximilian Eibl
Published in: Evaluating Systems for Multilingual and Multimodal Information Access
Publisher: Springer Berlin Heidelberg
This article describes post-workshop experiments conducted after our first participation in the TEL@CLEF task. We used the Xtrieval framework [5], [4] to prepare and execute the experiments. We ran 69 experiments in the setting of the CLEF 2008 task, of which 39 were monolingual and 30 were cross-lingual. We investigated the capabilities of the current version of Xtrieval, which now supports two retrieval cores, Lucene and Lemur. Our main goal was to compare and combine the results from these retrieval engines. The topics for the cross-lingual experiments were translated with a plug-in that accesses the Google AJAX Language API. Our monolingual experiments performed better than the best experiments we submitted during the evaluation campaign. Our cross-lingual experiments performed very well for all target collections, achieving between 87% and 100% of the monolingual retrieval effectiveness. Combining the results from the Lucene and Lemur retrieval cores yielded very consistent performance.
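The abstract does not specify how the Lucene and Lemur result lists were merged. A common approach for this kind of data fusion is CombSUM over normalized scores; the sketch below illustrates the idea under that assumption (the run data and document IDs are invented for the example, not taken from the paper).

```python
# Illustrative sketch: fusing two ranked lists with min-max-normalized CombSUM.
# The method and the example runs are assumptions, not the paper's actual setup.

def min_max_normalize(scores):
    """Scale one engine's scores into [0, 1] so the engines are comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

def combine(run_a, run_b):
    """CombSUM: sum the normalized scores; missing documents count as 0."""
    a, b = min_max_normalize(run_a), min_max_normalize(run_b)
    docs = set(a) | set(b)
    fused = {d: a.get(d, 0.0) + b.get(d, 0.0) for d in docs}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical runs: raw scores from two engines on different scales.
lucene_run = {"d1": 12.0, "d2": 9.5, "d3": 4.0}
lemur_run = {"d2": 0.8, "d3": 0.7, "d4": 0.2}
print(combine(lucene_run, lemur_run))  # d2 ranks first: it scores well in both runs
```

Normalizing per run before summing matters here because Lucene and Lemur produce scores on different scales; without it, the engine with numerically larger scores would dominate the fused ranking.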