
2024 | Original Paper | Book Chapter

Ethical Considerations in the Implementation and Usage of Large Language Models

Authors: Radu Stefan, George Carutasu, Marian Mocan

Published in: The 17th International Conference Interdisciplinarity in Engineering

Publisher: Springer Nature Switzerland


Abstract

This paper investigates the ethical considerations surrounding the implementation and usage of large language models (LLMs). LLMs have shown tremendous advances in natural language processing and have the potential to revolutionize various fields, including education, healthcare, and many more. The wide adoption of LLMs and the rapid increase in their usage are also due to their ease of use (via chat interfaces) and their remarkably fast, coherent responses relative to the input prompt. However, their widespread use raises ethical questions that have not yet been fully considered. This study explores the ethical implications of LLMs and their impact on society, focusing on the need for transparency, fairness, and responsible usage. We analyze the current state of research in this area and provide recommendations for future research and development of LLMs that prioritize ethical considerations.


Metadata
DOI: https://doi.org/10.1007/978-3-031-54671-6_10
