DOI: 10.1145/2499149.2499168
Research article

Keep it simple: reward and task design in crowdsourcing

Published: 16 September 2013

ABSTRACT

Crowdsourcing is emerging as an effective method for performing tasks that require human abilities, such as tagging photos, transcribing handwriting and categorising data. Crowd workers perform small chunks of larger tasks in return for a reward, which is generally monetary. Reward is one factor that can motivate workers to produce higher-quality results. Yet, as previous research has highlighted, task design, in terms of instructions and user interface, can also affect workers' perception of the task and, in turn, the quality of the results. In this study we investigate both factors, reward and task design, to better understand their roles in the quality of crowdsourced work. In Experiment 1 we test a variety of reward schemas, while in Experiment 2 we measure the effects of task and interface complexity on attention. The long-term goal is to establish guidelines for designing tasks that maximize workers' performance.
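
To make the reward manipulation of Experiment 1 concrete, the sketch below shows one way a requester could post the same photo-tagging task at several reward levels on Amazon Mechanical Turk via the boto3 requester API. This is a minimal illustration under assumed parameters, not the authors' actual setup: the task URL, reward amounts and worker counts are hypothetical.

    import boto3

    # Sandbox endpoint so test HITs cost nothing; swap for the production
    # endpoint when running a real study.
    mturk = boto3.client(
        "mturk",
        region_name="us-east-1",
        endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
    )

    # ExternalQuestion embeds a (hypothetical) task page in the worker's view.
    QUESTION_XML = """<ExternalQuestion
      xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
      <ExternalURL>https://example.com/photo-tagging-task</ExternalURL>
      <FrameHeight>600</FrameHeight>
    </ExternalQuestion>"""

    # One HIT group per reward condition; everything else is held constant
    # so that reward is the only variable across conditions.
    for reward_usd in ["0.05", "0.10", "0.25"]:  # illustrative amounts
        hit = mturk.create_hit(
            Title="Tag 10 photos",
            Description="Add descriptive tags to a set of photographs.",
            Keywords="images, tagging, labelling",
            Reward=reward_usd,                   # per-assignment payment, as a string
            MaxAssignments=50,                   # workers per condition
            AssignmentDurationInSeconds=600,     # time allowed per assignment
            LifetimeInSeconds=86400,             # HIT visible for one day
            Question=QUESTION_XML,
        )
        print(reward_usd, hit["HIT"]["HITId"])

Holding the title, instructions and interface constant across conditions isolates reward as the manipulated variable; a complexity manipulation in the spirit of Experiment 2 would instead change the task page behind the ExternalURL rather than the Reward parameter.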


Published in

CHItaly '13: Proceedings of the Biannual Conference of the Italian Chapter of SIGCHI
September 2013
215 pages
ISBN: 9781450320610
DOI: 10.1145/2499149

      Copyright © 2013 ACM


      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Acceptance Rates

Overall acceptance rate: 109 of 242 submissions, 45%
