
Incentivizing high quality crowdwork

Published: 16 March 2016

Abstract

We study the causal effects of financial incentives on the quality of crowdwork. We focus on performance-based payments (PBPs), bonus payments awarded to workers for producing high quality work. We design and run randomized behavioral experiments on the popular crowdsourcing platform Amazon Mechanical Turk with the goal of understanding when, where, and why PBPs help, identifying properties of the payment, payment structure, and the task itself that make them most effective. We provide examples of tasks for which PBPs do improve quality. For such tasks, the effectiveness of PBPs is not too sensitive to the threshold for quality required to receive the bonus, while the magnitude of the bonus must be large enough to make the reward salient. We also present examples of tasks for which PBPs do not improve quality. Our results suggest that for PBPs to improve quality, the task must be effort-responsive: the task must allow workers to produce higher quality work by exerting more effort. We also give a simple method to determine if a task is effort-responsive a priori. Furthermore, our experiments suggest that all payments on Mechanical Turk are, to some degree, implicitly performance-based in that workers believe their work may be rejected if their performance is sufficiently poor. In the full version of this paper, we propose a new model of worker behavior that extends the standard principal-agent model from economics to include a worker's subjective beliefs about his likelihood of being paid, and show that the predictions of this model are in line with our experimental findings. This model may be useful as a foundation for theoretical studies of incentives in crowdsourcing markets.
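To make the modeling idea concrete, here is a minimal sketch of how subjective payment beliefs could enter a principal-agent formulation. This is an illustration based on the abstract's description, not the paper's exact model: assume a worker chooses an effort level $e$ at cost $c(e)$, faces a base payment $p$ and a bonus $b$, and holds subjective beliefs $\pi_{\text{base}}(e)$ and $\pi_{\text{bonus}}(e)$ about the probabilities of receiving each, both weakly increasing in effort. The worker then solves

$$\max_{e \ge 0} \; \pi_{\text{base}}(e)\, p \;+\; \pi_{\text{bonus}}(e)\, b \;-\; c(e).$$

Under this sketch, even a task with no explicit bonus is implicitly performance-based whenever $\pi_{\text{base}}(e)$ increases with effort, which is consistent with the experimental observation that workers believe sufficiently poor work may be rejected.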

