2015 | Original Paper | Book Chapter
Question Answering Track Evaluation in TREC, CLEF and NTCIR
Authors: María-Dolores Olvera-Lobo, Juncal Gutiérrez-Artacho
Published in: New Contributions in Information Systems and Technologies
Question Answering (QA) systems are put forward as a real alternative to Information Retrieval systems, as they provide the user with a fast and comprehensible answer to his or her information need. It has been 15 years since TREC introduced the first QA track. Since then, the principal campaigns in Information Retrieval evaluation have run specific tracks devoted to the development and evaluation of this type of system. This study is a brief review of the TREC, CLEF and NTCIR conferences from the QA perspective. We present a historical overview of 15 years of QA evaluation tracks using the method of systematic review. We have identified the different tasks or specific labs created in each QA track, the types of evaluation question used, and the evaluation measures applied in the competitions analyzed. Of the three conferences, CLEF has applied the greatest variety of question types (factoid, definition, list, causal, and yes/no, among others). NTCIR, held on 13 occasions, is the conference that has made use of the greatest number of different evaluation measures. Accuracy, precision and recall have been the three evaluation measures most used across the three campaigns.