DOI: 10.1145/3269206.3272030

Towards Deep and Representation Learning for Talent Search at LinkedIn

Published: 17 October 2018

ABSTRACT

Talent search and recommendation systems at LinkedIn strive to match potential candidates to the hiring needs of a recruiter or hiring manager, expressed as a search query or a job posting. Recent work in this domain has mainly focused on linear models, which do not take complex relationships between features into account, and on ensemble tree models, which introduce non-linearity but are still insufficient for exploring all potential feature interactions and strictly separate feature generation from modeling. In this paper, we present the results of applying deep and representation learning models to LinkedIn Recruiter. Our key contributions include: (i) learning semantic representations of sparse entities within the talent search domain, such as recruiter ids, candidate ids, and skill entity ids, using neural network models that take advantage of the LinkedIn Economic Graph, and (ii) deep models for learning recruiter engagement and candidate response in talent search applications. We also explore learning-to-rank approaches applied to deep models and show their benefits for the talent search use case. Finally, we present offline and online evaluation results for LinkedIn talent search and recommendation systems, and discuss potential challenges along the path to a fully deep model architecture. The challenges and approaches discussed generalize to any multi-faceted search engine.
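To make the two ideas named above concrete, here is a minimal, hypothetical sketch in TensorFlow/Keras of a deep scorer that embeds sparse entity ids (skill ids, in this sketch) and is trained with a pairwise learning-to-rank objective. The paper does not publish its code, so the vocabulary size, feature counts, layer widths, and input names below are illustrative assumptions, not LinkedIn's production architecture.

```python
import tensorflow as tf

# Hypothetical sizes; the paper does not publish its vocabularies or
# layer widths, so all of these are illustrative placeholders.
NUM_SKILLS = 50_000   # size of the skill-id vocabulary
EMBED_DIM = 64        # width of each entity embedding
NUM_DENSE = 20        # number of precomputed dense features per pair

def build_scoring_tower() -> tf.keras.Model:
    """Scores one (query, candidate) pair: skill-id embeddings are
    mean-pooled and concatenated with dense features, then fed to an MLP."""
    skill_ids = tf.keras.Input(shape=(None,), dtype=tf.int32, name="skill_ids")
    dense = tf.keras.Input(shape=(NUM_DENSE,), name="dense_features")

    emb = tf.keras.layers.Embedding(NUM_SKILLS, EMBED_DIM, mask_zero=True)(skill_ids)
    pooled = tf.keras.layers.GlobalAveragePooling1D()(emb)  # masked mean over ids

    x = tf.keras.layers.Concatenate()([pooled, dense])
    x = tf.keras.layers.Dense(128, activation="relu")(x)
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    score = tf.keras.layers.Dense(1)(x)  # unnormalized relevance score
    return tf.keras.Model([skill_ids, dense], score)

tower = build_scoring_tower()  # one tower, shared by both members of a pair

# Pairwise (RankNet-style) training: the candidate the recruiter engaged
# with should outscore the one that was skipped, for the same search query.
pos_skills = tf.keras.Input(shape=(None,), dtype=tf.int32, name="pos_skills")
pos_dense = tf.keras.Input(shape=(NUM_DENSE,), name="pos_dense")
neg_skills = tf.keras.Input(shape=(None,), dtype=tf.int32, name="neg_skills")
neg_dense = tf.keras.Input(shape=(NUM_DENSE,), name="neg_dense")

s_pos = tower([pos_skills, pos_dense])
s_neg = tower([neg_skills, neg_dense])
diff = tf.keras.layers.Subtract()([s_pos, s_neg])  # logit of P(pos beats neg)

ranker = tf.keras.Model([pos_skills, pos_dense, neg_skills, neg_dense], diff)
# Every training label is 1: the positive member of each pair should win.
ranker.compile(optimizer="adam",
               loss=tf.keras.losses.BinaryCrossentropy(from_logits=True))
```

At serving time only the shared tower is needed: candidates are scored independently and sorted by score. In a setting like the one the abstract describes, training pairs would plausibly be derived from recruiter actions, e.g. a candidate who was contacted versus one who was shown but skipped; the specific pair-construction scheme here is an assumption, not the paper's.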


Published in

CIKM '18: Proceedings of the 27th ACM International Conference on Information and Knowledge Management
October 2018, 2362 pages
ISBN: 9781450360142
DOI: 10.1145/3269206
Copyright © 2018 ACM
Publisher: Association for Computing Machinery, New York, NY, United States

Acceptance Rates

CIKM '18 paper acceptance rate: 147 of 826 submissions (18%). Overall CIKM acceptance rate: 1,861 of 8,427 submissions (22%).
