DOI: 10.1145/2911451.2914756

Identifying Careless Workers in Crowdsourcing Platforms: A Game Theory Approach

Published: 07 July 2016

ABSTRACT

In this paper we introduce a game scenario for crowdsourcing (CS) that uses incentives as bait for careless (gambler) workers, who respond to them in a characteristic way. We hypothesise that careless workers are risk-inclined and can be detected in the game scenario by their use of time, and we test this hypothesis in two steps. First, we formulate and prove a theorem stating that a risk-inclined worker will react to competition with a shorter Task Completion Time (TCT) than a risk-neutral or risk-averse worker. Second, we check whether the game scenario introduces a link between TCT and performance by carrying out a crowdsourced evaluation using 35 topics from the TREC-8 collection. Experimental evidence confirms our hypothesis, showing that TCT can be used as a powerful discrimination factor to detect careless workers. This is a valuable result in the quest for quality assurance in CS-based micro tasks such as relevance assessment.
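To make the idea of TCT as a discrimination signal concrete, the sketch below flags workers whose median completion time is far below the crowd-wide median. This is a minimal illustration, not the decision rule from the paper: the function name `flag_careless_workers`, the per-worker TCT log, and the `ratio` threshold are assumptions introduced here for the example.

```python
from statistics import median

def flag_careless_workers(tct_log, ratio=0.5):
    """Flag workers whose median task completion time (TCT) is
    unusually short relative to the crowd.

    tct_log: dict mapping worker id -> list of TCTs in seconds.
    ratio:   a worker is flagged when their median TCT falls below
             ratio * the crowd-wide median TCT (illustrative threshold,
             not a value taken from the paper).
    """
    # Crowd-wide median over all recorded completion times.
    all_times = [t for times in tct_log.values() for t in times]
    crowd_median = median(all_times)

    flagged = set()
    for worker, times in tct_log.items():
        if median(times) < ratio * crowd_median:
            flagged.add(worker)
    return flagged

# Example usage with made-up data: worker "w3" rushes through tasks.
log = {
    "w1": [42.0, 55.0, 48.0],
    "w2": [60.0, 47.0, 52.0],
    "w3": [9.0, 7.5, 11.0],
}
print(flag_careless_workers(log))  # {'w3'}
```

In practice such a time-based flag would be combined with other quality signals rather than used on its own; the point of the sketch is only that a simple threshold on TCT already separates rushed workers from the rest.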


Published in
SIGIR '16: Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval
July 2016, 1296 pages
ISBN: 9781450340694
DOI: 10.1145/2911451

          Copyright © 2016 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

          Publisher

          Association for Computing Machinery

          New York, NY, United States


          Qualifiers

          • short-paper
