ABSTRACT
Crowdsourcing makes it possible to build hybrid online platforms that combine scalable information systems with the power of human intelligence to complete tasks that are difficult for current algorithms to tackle. Examples include hybrid database systems that use the crowd to fill in missing values or to sort items along subjective dimensions such as picture attractiveness. Current approaches to crowdsourcing adopt a pull methodology: tasks are published on specialized Web platforms where workers pick their preferred tasks on a first-come-first-served basis. While this approach has many advantages, such as simplicity and short completion times, it does not guarantee that a task is performed by the most suitable worker. In this paper, we propose and extensively evaluate a different crowdsourcing approach based on a push methodology. Our system carefully selects which workers should perform a given task based on worker profiles extracted from social networks. Workers and tasks are automatically matched using an underlying categorization structure that exploits entities extracted from the task descriptions on the one hand, and categories liked by the user on social platforms on the other hand. We experimentally evaluate our approach on tasks of varying complexity and show that our push methodology consistently yields better results than the usual pull strategies.
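The matching idea the abstract describes — representing each task by entities extracted from its description, each worker by categories liked on a social platform, and pushing the task to the closest workers — can be sketched as a simple set-similarity ranking. This is a minimal illustration, not the paper's actual model; the function and profile names are hypothetical, and the paper's categorization structure is richer than raw set overlap.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two sets of category labels."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def rank_workers(task_entities: set, worker_profiles: dict) -> list:
    """Return worker ids ordered by similarity to the task, best first.

    task_entities  -- entities extracted from the task description
    worker_profiles -- worker id -> set of categories liked on a social platform
    """
    scores = {w: jaccard(task_entities, cats)
              for w, cats in worker_profiles.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative profiles (hypothetical data):
profiles = {
    "alice": {"Photography", "Travel"},
    "bob":   {"Databases", "Machine Learning"},
    "carol": {"Photography", "Fashion"},
}
task = {"Photography", "Fashion", "Portraits"}
print(rank_workers(task, profiles))  # ['carol', 'alice', 'bob']
```

In a push system, the task would then be assigned to the top-k workers of this ranking instead of being posted to an open marketplace.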
Pick-a-crowd: Tell Me What You Like, and I'll Tell You What to Do