DOI: 10.1145/2441776.2441923
research-article

The future of crowd work

Published: 23 February 2013

ABSTRACT

Paid crowd work offers remarkable opportunities for improving productivity, social mobility, and the global economy by engaging a geographically distributed workforce to complete complex tasks on demand and at scale. But it is also possible that crowd work will fail to achieve its potential, focusing on assembly-line piecework. Can we foresee a future crowd workplace in which we would want our children to participate? This paper frames the major challenges that stand in the way of this goal. Drawing on theory from organizational behavior and distributed computing, as well as direct feedback from workers, we outline a framework that will enable crowd work that is complex, collaborative, and sustainable. The framework lays out research challenges in twelve major areas: workflow, task assignment, hierarchy, real-time response, synchronous collaboration, quality control, crowds guiding AIs, AIs guiding crowds, platforms, job design, reputation, and motivation.


Published in

CSCW '13: Proceedings of the 2013 conference on Computer supported cooperative work
February 2013, 1594 pages
ISBN: 9781450313315
DOI: 10.1145/2441776
Copyright © 2013 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall Acceptance Rate: 2,235 of 8,521 submissions, 26%
