DOI: 10.1145/2063576.2063860
Poster

Worker types and personality traits in crowdsourcing relevance labels

Published: 24 October 2011

ABSTRACT

Crowdsourcing platforms offer unprecedented opportunities for creating evaluation benchmarks, but suffer from varied output quality from crowd workers who possess different levels of competence and aspiration. This raises new challenges for quality control and requires an in-depth understanding of how workers' characteristics relate to the quality of their work.

In this paper, we use behavioral observations (HIT completion time, fraction of useful labels, label accuracy) to define five worker types: Spammer, Sloppy, Incompetent, Competent, Diligent. Using data collected from workers engaged in the crowdsourced evaluation of the INEX 2010 Book Track Prove It task, we relate the worker types to label accuracy and personality trait information along the 'Big Five' personality dimensions.

We expect that these new insights about the types of crowd workers and the quality of their work will inform how to design HITs to attract the best workers to a task and explain why certain HIT designs are more effective than others.
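The worker types above are derived from simple behavioral signals rather than from personality scores. As a purely illustrative sketch (the thresholds, field names, and decision order below are hypothetical placeholders; the paper does not prescribe them), such a mapping might look like:

```python
# Hypothetical illustration only: the thresholds and rule ordering are invented
# placeholders, not the classification rules used in the paper.
from dataclasses import dataclass


@dataclass
class WorkerStats:
    completion_time_s: float  # median HIT completion time, in seconds
    useful_fraction: float    # fraction of submitted labels that are usable (0..1)
    accuracy: float           # agreement with gold-standard labels (0..1)


def classify_worker(w: WorkerStats) -> str:
    """Map behavioral signals to one of the five worker types."""
    if w.useful_fraction < 0.2 and w.completion_time_s < 10:
        return "Spammer"      # answers almost instantly, labels mostly unusable
    if w.accuracy < 0.5 and w.completion_time_s < 30:
        return "Sloppy"       # rushes through HITs and is often wrong
    if w.accuracy < 0.5:
        return "Incompetent"  # spends time on HITs but is still inaccurate
    if w.accuracy < 0.8:
        return "Competent"    # reasonable accuracy, unremarkable effort
    return "Diligent"         # careful work with high label accuracy


# Example: a slow, thorough worker with high accuracy is labelled "Diligent".
print(classify_worker(WorkerStats(completion_time_s=45.0, useful_fraction=0.9, accuracy=0.85)))
```

The rule ordering mirrors the intuition behind the typology: spam and sloppiness are screened first on speed and label usefulness, and the remaining workers are separated by label accuracy.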


Published in

CIKM '11: Proceedings of the 20th ACM international conference on Information and knowledge management
October 2011, 2712 pages
ISBN: 9781450307178
DOI: 10.1145/2063576
Copyright © 2011 ACM

      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Acceptance Rates

Overall Acceptance Rate: 1,861 of 8,427 submissions, 22%
