DOI: 10.1145/2872427.2883070

research-article

Using Hierarchical Skills for Optimized Task Assignment in Knowledge-Intensive Crowdsourcing

Published: 11 April 2016

ABSTRACT

Beyond simple human intelligence tasks such as image labeling, crowdsourcing platforms increasingly offer tasks that require very specific skills, especially in participatory science projects. In this context, there is a need to reason about the skills required by a task and the set of skills available in the crowd, in order to increase the quality of the results. Most existing solutions model skills with unstructured tags (a vector of skills). In this paper we propose to finely model tasks and participants using a skill tree, that is, a taxonomy of skills equipped with a similarity distance between skills. This skill model makes it possible to map participants to tasks in a way that exploits the natural hierarchy among the skills. We illustrate the effectiveness of our model and algorithms through extensive experimentation with synthetic and real data sets.
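The idea of a skill taxonomy with a similarity distance can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the skill names and the toy taxonomy are invented, the distance shown is a simple edge-count through the lowest common ancestor (one common choice for tree-based similarity), and an exhaustive search stands in for a scalable assignment method such as the Hungarian algorithm.

```python
from itertools import permutations

# Hypothetical skill taxonomy as a child -> parent map (illustrative names only).
parent = {
    "image-labeling": "vision", "vision": "science",
    "species-id": "biology", "biology": "science",
    "bird-id": "species-id", "plant-id": "species-id",
    "science": None,
}

def path_to_root(skill):
    """List of nodes from the skill up to the taxonomy root."""
    path = []
    while skill is not None:
        path.append(skill)
        skill = parent[skill]
    return path

def tree_distance(a, b):
    """Edge-count distance between two skills via their lowest common ancestor."""
    depth_in_a = {s: i for i, s in enumerate(path_to_root(a))}
    for j, s in enumerate(path_to_root(b)):
        if s in depth_in_a:           # first shared ancestor = lowest common ancestor
            return depth_in_a[s] + j
    raise ValueError("skills are not in the same taxonomy")

def assign(task_skills, worker_skills):
    """Exhaustive min-cost matching of tasks to workers (fine for tiny
    instances; a Hungarian-style solver would scale to larger ones)."""
    best = min(
        permutations(worker_skills),
        key=lambda ws: sum(tree_distance(t, w) for t, w in zip(task_skills, ws)),
    )
    return list(zip(task_skills, best))

print(tree_distance("bird-id", "plant-id"))  # siblings under species-id -> 2
print(assign(["bird-id", "image-labeling"], ["vision", "plant-id"]))
```

In this sketch, a worker whose declared skill is close in the tree to a task's required skill is preferred, so the bird-identification task goes to the plant identifier (a near sibling) rather than to the generic vision worker; this is the sense in which the hierarchy among skills is exploited.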


Published in

WWW '16: Proceedings of the 25th International Conference on World Wide Web
April 2016, 1482 pages
ISBN: 9781450341431

Copyright © 2016 is held by the International World Wide Web Conference Committee (IW3C2).

Publisher: International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland


Acceptance Rates

WWW '16 paper acceptance rate: 115 of 727 submissions (16%). Overall acceptance rate: 1,899 of 8,196 submissions (23%).
