ABSTRACT
Crowdsourcing via paid microtasks has been successfully applied in a plethora of domains and tasks. Previous efforts to make such crowdsourcing more effective have considered aspects as diverse as task and workflow design, spam detection, quality control, and pricing models. Our work expands upon these efforts by examining the potential of adding gamification to microtask interfaces as a means of improving both worker engagement and effectiveness. We run a series of experiments in image labeling, one of the most common use cases for microtask crowdsourcing, and analyse worker behavior in terms of the number of images completed, the quality of annotations compared against a gold standard, and the response to financial and game-specific rewards. Each experiment studies these parameters in two settings: one based on a state-of-the-art, non-gamified task on CrowdFlower and the other using an alternative interface incorporating several game elements. Our findings show that gamification leads to better accuracy and lower costs than conventional approaches that use only monetary incentives. In addition, it seems to make paid microtask work more rewarding and engaging, especially when sociality features are introduced. Following these initial insights, we define a predictive model for estimating the most appropriate incentives for individual workers, based on their previous contributions. This allows us to build a personalised game experience, with gains in both the volume and quality of work completed.
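The abstract's final idea, estimating the most suitable incentive for each worker from their contribution history, can be illustrated with a minimal sketch. This is not the paper's actual model: the incentive names, quality scores, and the empirical-mean selection rule below are all illustrative assumptions, standing in for whatever features and predictor the full system uses.

```python
# Hypothetical sketch: pick the incentive with the highest mean past
# annotation quality for a worker, falling back to a default for workers
# with no history (the cold-start case).
from collections import defaultdict


class IncentiveSelector:
    """Recommends an incentive type (e.g. 'bonus', 'badge', 'leaderboard')
    per worker based on the mean quality of their past annotations."""

    def __init__(self):
        # history[worker][incentive] -> list of quality scores in [0, 1]
        self.history = defaultdict(lambda: defaultdict(list))

    def record(self, worker, incentive, quality):
        """Log one completed annotation and its quality score."""
        self.history[worker][incentive].append(quality)

    def recommend(self, worker, default="bonus"):
        """Return the incentive with the best empirical mean quality."""
        scores = self.history.get(worker)
        if not scores:
            return default  # no history yet: use a monetary bonus
        return max(scores, key=lambda inc: sum(scores[inc]) / len(scores[inc]))


sel = IncentiveSelector()
sel.record("w1", "bonus", 0.6)
sel.record("w1", "badge", 0.9)
sel.record("w1", "badge", 0.8)
print(sel.recommend("w1"))  # badge (mean 0.85 beats bonus's 0.6)
print(sel.recommend("w2"))  # bonus (cold start, no history)
```

A production model would replace the empirical mean with a learned predictor over richer worker features, but the selection logic, choosing the incentive expected to yield the best contributions per worker, is the same.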
Index Terms
- Improving Paid Microtasks through Gamification and Adaptive Furtherance Incentives