ABSTRACT
We present a model of workers supplying labor to paid crowdsourcing projects. We also introduce a novel method for estimating a worker's reservation wage: the key parameter in our labor supply model. We tested our model by presenting experimental subjects with real-effort work scenarios that varied in the offered payment and difficulty. As predicted, subjects worked less when the pay was lower. However, they did not work less when the task was more time-consuming. Interestingly, at least some subjects appear to be "target earners," contrary to the assumptions of the rational model. The strongest evidence for target earning is an observed preference for earning total amounts evenly divisible by 5, presumably because these amounts make good targets. Despite its predictive failures, we calibrate our model with data pooled from both experiments. We find that the reservation wages of our sample are approximately log-normally distributed, with a median wage of $1.38/hour. We discuss how to use our calibrated model in applications.
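The calibration result above can be turned into a back-of-the-envelope labor supply curve. A log-normal distribution's median is exp(mu), so the reported median of $1.38/hour pins down mu = ln(1.38); the dispersion parameter sigma is not reported in the abstract, so the value below is a placeholder assumption for illustration only. This is a minimal sketch, not the paper's calibration code:

```python
import math

# Reservation wages are approximately log-normal with median $1.38/hour
# (per the abstract). A log-normal's median is exp(mu), so mu = ln(1.38).
MEDIAN_WAGE = 1.38
MU = math.log(MEDIAN_WAGE)
SIGMA = 1.0  # assumed for illustration; the abstract does not report sigma

def share_willing_to_work(offered_wage: float) -> float:
    """Log-normal CDF: the share of workers whose reservation wage is at
    or below the offered hourly wage."""
    z = (math.log(offered_wage) - MU) / (SIGMA * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z))

# At the median wage, exactly half the workers are willing to work.
print(share_willing_to_work(1.38))  # -> 0.5
```

Under these assumptions, a requester could read off the expected share of the worker pool willing to accept a task at any posted effective hourly wage; only sigma (and any shift in the median over time) would need to be re-estimated.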
Index Terms
- The labor economics of paid crowdsourcing