Abstract
Paid microtask crowdsourcing has traditionally been approached as an individual activity, with units of work created and completed independently by members of the crowd. Other forms of crowdsourcing have, however, embraced more varied models that allow for a greater level of participant interaction and collaboration. This article studies the feasibility and uptake of such an approach in the context of paid microtasks. Specifically, we compare engagement, task output, and task accuracy in a paired-worker model against the traditional, single-worker version. Our experiments indicate that collaboration leads to better accuracy and more output, which, in turn, translates into lower costs. We then explore the role of social flow and social pressure generated by collaborating partners as sources of incentives for improved performance. We utilise a Bayesian method, applied to interface interaction behaviours, to detect when one of the workers in a pair tries to exit the task. When an exit attempt is detected, the remaining worker is given the opportunity to ask the exiting partner to stay: either for personal financial reasons (i.e., the remaining worker has not yet completed enough tasks to qualify for payment) or for fun (i.e., they are enjoying the task). The findings reveal that: (1) these socially motivated incentives can act as furtherance mechanisms that help workers attain and exceed their task requirements and produce better results than baseline collaborations; (2) microtask crowd workers are empathic (as opposed to selfish) agents, willing to go the extra mile to help their partners get paid; and (3) social furtherance incentives create a win-win scenario for the requester and for the workers by re-engaging workers before they drop out and thereby helping more of them get paid.
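The abstract does not spell out how the Bayesian exit-detection step works. The sketch below is a minimal illustration only, not the authors' implementation: it assumes hypothetical interface signals (idle time, mouse-leave, tab blur) and hand-picked prior and likelihood values, and combines them in a naive-Bayes fashion to estimate the probability that a worker is about to abandon the task.

```python
# Minimal sketch (assumed, not the paper's implementation): naive-Bayes-style
# estimate of whether a worker is about to exit, from interface interaction
# signals. Signal names and probability values are illustrative assumptions.

PRIOR_EXIT = 0.15  # assumed prior probability that a worker attempts to exit

# (P(signal observed | exit), P(signal observed | stay)) -- assumed values
LIKELIHOODS = {
    "long_idle":   (0.70, 0.20),  # no keyboard/mouse input for an extended period
    "mouse_leave": (0.60, 0.25),  # cursor left the task pane
    "tab_blur":    (0.55, 0.15),  # browser tab lost focus
}

def exit_probability(observed_signals):
    """Posterior P(exit | observed signals) under a naive independence assumption."""
    p_exit, p_stay = PRIOR_EXIT, 1.0 - PRIOR_EXIT
    for signal, (p_given_exit, p_given_stay) in LIKELIHOODS.items():
        if signal in observed_signals:
            p_exit *= p_given_exit
            p_stay *= p_given_stay
        else:
            p_exit *= 1.0 - p_given_exit
            p_stay *= 1.0 - p_given_stay
    return p_exit / (p_exit + p_stay)

if __name__ == "__main__":
    # If the posterior crosses a threshold, the partner could be prompted to
    # ask the exiting worker to stay (the social furtherance step).
    p = exit_probability({"long_idle", "tab_blur"})
    print(f"P(exit) = {p:.2f}", "-> prompt partner" if p > 0.5 else "")
```

In such a scheme, the threshold trades off false alarms (prompting a partner when no exit was intended) against missed exits; the paper's actual model and thresholds may differ.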