
2023 | OriginalPaper | Chapter

Resume Shortlisting and Ranking with Transformers

Authors: Vinaya James, Akshay Kulkarni, Rashmi Agarwal

Published in: Intelligent Systems and Machine Learning

Publisher: Springer Nature Switzerland


Abstract

The study presented in this paper helps the human resources domain eliminate a time-consuming recruitment task. Resume screening is among the most critical and challenging duties of human resources personnel. Natural Language Processing (NLP) gives computers the ability to understand spoken and written language. Nowadays, online recruitment platforms are as active as consultancies, and a single job opening can attract hundreds of applications. To discover the best candidate for a position, Human Resources (HR) employees must devote considerable time to candidate selection; shortlisting the best fit for a job is slow, and finding an apt person is laborious. The proposed study shortlists the candidates who best match a job on the basis of the skills provided in their resumes. Because the process is automated, a candidate's personal favor and soft skills do not bias the hiring process. Sentence-BERT (SBERT) is a Siamese and triplet network-based variant of the Bidirectional Encoder Representations from Transformers (BERT) architecture that can generate semantically meaningful sentence embeddings. The result is an end-to-end tool for the HR domain that takes hundreds of resumes, along with the skills required for the job, as input and outputs a ranked list of the candidates who best fit the job. SBERT is compared with BERT and shown to be superior.
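The chapter itself does not publish its code, but the pipeline the abstract describes (embed each resume and the job's required skills with SBERT, then rank candidates by semantic similarity) can be sketched in a few lines. The model name, variable names, and sample data below are illustrative assumptions, not the authors' implementation:

```python
from sentence_transformers import SentenceTransformer, util

# Load a pretrained SBERT model (hypothetical choice; the paper's exact
# checkpoint is not specified on this page).
model = SentenceTransformer("all-MiniLM-L6-v2")

# Skills required for the job opening (illustrative).
job_skills = "Python, machine learning, NLP, transformers, SQL"

# Toy stand-ins for parsed resume texts (illustrative).
resumes = {
    "candidate_a": "Data scientist with 4 years of Python, NLP, and BERT fine-tuning.",
    "candidate_b": "Front-end developer experienced in React, CSS, and UX design.",
    "candidate_c": "ML engineer: scikit-learn, SQL pipelines, transformer models.",
}

# Encode the job requirements and all resumes into sentence embeddings.
job_emb = model.encode(job_skills, convert_to_tensor=True)
names = list(resumes)
resume_embs = model.encode([resumes[n] for n in names], convert_to_tensor=True)

# Rank candidates by cosine similarity between resume and job embeddings.
scores = util.cos_sim(job_emb, resume_embs)[0]
ranking = sorted(zip(names, scores.tolist()), key=lambda p: p[1], reverse=True)

for name, score in ranking:
    print(f"{name}: {score:.3f}")
```

Because SBERT is a bi-encoder, each resume is embedded once and compared by a cheap cosine similarity, which is what makes ranking hundreds of resumes tractable compared with scoring every resume-job pair through BERT directly.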


Metadata
Title
Resume Shortlisting and Ranking with Transformers
Authors
Vinaya James
Akshay Kulkarni
Rashmi Agarwal
Copyright Year
2023
DOI
https://doi.org/10.1007/978-3-031-35081-8_8
