Published in: Automated Software Engineering 1/2024

01.05.2024

Large language models for qualitative research in software engineering: exploring opportunities and challenges

Authors: Muneera Bano, Rashina Hoda, Didar Zowghi, Christoph Treude


Abstract

The recent surge in the integration of Large Language Models (LLMs) like ChatGPT into qualitative research in software engineering, much as in other professional domains, demands closer inspection. This vision paper explores the opportunities of using LLMs in qualitative research to address many of its legacy challenges, as well as the potential new concerns and pitfalls arising from their use. We share our vision for the evolving role of the qualitative researcher in the age of LLMs and contemplate how they may utilize LLMs at various stages of their research.


Metadata
Title: Large language models for qualitative research in software engineering: exploring opportunities and challenges
Authors: Muneera Bano, Rashina Hoda, Didar Zowghi, Christoph Treude
Publication date: 01.05.2024
Publisher: Springer US
Published in: Automated Software Engineering, Issue 1/2024
Print ISSN: 0928-8910
Electronic ISSN: 1573-7535
DOI: https://doi.org/10.1007/s10515-023-00407-8
