DOI: 10.1145/2736277.2741639
Research article

Improving Paid Microtasks through Gamification and Adaptive Furtherance Incentives

Published: 18 May 2015

ABSTRACT

Crowdsourcing via paid microtasks has been successfully applied in a plethora of domains and tasks. Previous efforts for making such crowdsourcing more effective have considered aspects as diverse as task and workflow design, spam detection, quality control, and pricing models. Our work expands upon such efforts by examining the potential of adding gamification to microtask interfaces as a means of improving both worker engagement and effectiveness. We run a series of experiments in image labeling, one of the most common use cases for microtask crowdsourcing, and analyse worker behavior in terms of number of images completed, quality of annotations compared against a gold standard, and response to financial and game-specific rewards. Each experiment studies these parameters in two settings: one based on a state-of-the-art, non-gamified task on CrowdFlower and another one using an alternative interface incorporating several game elements. Our findings show that gamification leads to better accuracy and lower costs than conventional approaches that use only monetary incentives. In addition, it seems to make paid microtask work more rewarding and engaging, especially when sociality features are introduced. Following these initial insights, we define a predictive model for estimating the most appropriate incentives for individual workers, based on their previous contributions. This allows us to build a personalised game experience, with gains seen on the volume and quality of work completed.
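
The adaptive incentive idea described above can be sketched in a few lines of code. The snippet below is an illustrative Python sketch, not the paper's actual predictive model: the WorkerHistory fields, the incentive names ("bonus", "badge", "leaderboard"), and the expected-value rule are assumptions introduced here purely to show how a worker's previous contributions might drive the choice of incentive.

```python
# Hypothetical sketch: choosing a furtherance incentive for a worker from
# their previous contributions. All field names, incentive labels, and the
# selection rule are illustrative assumptions, not the paper's model.
from dataclasses import dataclass, field

@dataclass
class WorkerHistory:
    images_labeled: int = 0
    gold_correct: int = 0   # labels matching gold-standard answers
    gold_seen: int = 0      # gold-standard items shown to the worker
    # uptake[k]: average extra images completed after incentive k was offered
    uptake: dict = field(default_factory=lambda: {
        "bonus": 0.0, "badge": 0.0, "leaderboard": 0.0})

def accuracy(h: WorkerHistory) -> float:
    """Laplace-smoothed accuracy against the gold standard."""
    return (h.gold_correct + 1) / (h.gold_seen + 2)

def choose_incentive(h: WorkerHistory) -> str:
    """Pick the incentive with the highest expected extra correct labels."""
    acc = accuracy(h)
    expected = {name: acc * extra for name, extra in h.uptake.items()}
    return max(expected, key=expected.get)

# Example: a worker who has responded best to game-style rewards so far.
history = WorkerHistory(images_labeled=120, gold_correct=18, gold_seen=20,
                        uptake={"bonus": 4.0, "badge": 7.5, "leaderboard": 6.0})
print(choose_incentive(history))   # -> "badge"
```

A real system would learn the uptake estimates from observed behaviour rather than assume them, updating the choice as each worker contributes more annotations.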


Published in
WWW '15: Proceedings of the 24th International Conference on World Wide Web
May 2015, 1460 pages
ISBN: 9781450334693
Copyright © 2015 International World Wide Web Conference Committee (IW3C2)
Publisher: International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland


Acceptance Rates
WWW '15 paper acceptance rate: 131 of 929 submissions (14%)
Overall acceptance rate: 1,899 of 8,196 submissions (23%)
