2014 | Original Paper | Book Chapter
Agreement between Crowdsourced Workers and Expert Assessors in Making Relevance Judgment for System Based IR Evaluation
Authors: Parnia Samimi, Sri Devi Ravana
Published in: Recent Advances on Soft Computing and Data Mining
Creating a gold-standard dataset of relevance judgments for IR evaluation is a costly and time-consuming task. Recently, crowdsourcing, a low-cost and fast approach, has drawn considerable attention as a way of creating relevance judgments. This study investigates the agreement on relevance judgments between crowdsourced workers and expert assessors (e.g., TREC assessors) to validate the use of crowdsourcing for creating relevance judgments. Agreement is calculated at both the individual and group level using percentage agreement and kappa statistics. The results show high agreement between crowdsourced workers and expert assessors at the group level, while individual-level agreement is not acceptable. In addition, we investigate how the rank ordering of systems changes when expert assessors' judgments are replaced with crowdsourced judgments under different evaluation metrics. The conclusion, supported by the results, is that relevance judgments generated through crowdsourcing produce a more reliable system ranking when measuring low-performing systems.
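The agreement measures named in the abstract, percentage agreement and Cohen's kappa, can be computed directly from two sets of relevance labels. The sketch below is a minimal illustration in Python; the judgment lists and function names are hypothetical examples, not data or code from the paper.

```python
from collections import Counter

def percentage_agreement(labels_a, labels_b):
    """Fraction of documents on which the two assessors gave the same label."""
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(labels_a)
    p_o = percentage_agreement(labels_a, labels_b)
    counts_a = Counter(labels_a)
    counts_b = Counter(labels_b)
    # Expected chance agreement from each assessor's marginal label distribution
    p_e = sum((counts_a[label] / n) * (counts_b[label] / n)
              for label in set(labels_a) | set(labels_b))
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

# Hypothetical binary relevance judgments (1 = relevant, 0 = not relevant)
expert = [1, 0, 1, 1, 0, 0, 1, 0]
crowd  = [1, 0, 0, 1, 0, 1, 1, 0]

print(f"Percentage agreement: {percentage_agreement(expert, crowd):.2f}")
print(f"Cohen's kappa:        {cohens_kappa(expert, crowd):.2f}")
```

Kappa discounts the agreement that two assessors would reach by labeling at random with their own label frequencies, which is why it is reported alongside raw percentage agreement.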