2.1 Ethics in ICT and Software Engineering
The topic of ethics has been well researched and widely discussed in the field of ICT for a long time. Over recent years, various IT professional organisations worldwide, such as the Association for Computing Machinery (ACM), the Institute for Certification of IT Professionals (ICCP), and AITP, have developed their own codes of ethics (Payne and Landry 2006). These codes of ethics in the ICT domain are created to motivate and steer the ethical behaviour of all computing professionals, including those currently working in the field, those who aspire to do so, teachers, students, influencers, and anyone who makes significant use of computer technology, as defined by the ACM.
Gotterbarn (1991) expressed concern about the insufficient emphasis placed on professional ethics in guiding the daily activities of computing professionals within their respective roles. Subsequently, he actively engaged in various initiatives advocating ethical codes and fostering a sense of professional responsibility in the field. Studies have explored how these codes of ethics affect the decision-making of professionals in the ICT sector. Professional ethics can significantly aid ICT professionals in their decision-making (Allen et al. 2011), and these codes have been observed to influence the conduct of ICT professionals (Harrington 1996). Van den Bergh and Deschoolmeester (2010) surveyed 276 ICT professionals to explore the potential value of ethical codes of conduct for the ICT industry in dealing with contentious issues. They concluded that having a policy on ICT ethics does, in some cases, significantly influence how professionals assess ethical or unethical situations. Fleischmann et al. (2017) conducted a mixed-method study with ICT professionals on the role of codes of ethics and the relationship between practitioners’ experiences and their attitudes towards those codes.
Likewise, studies have investigated the impact of ethics in the area of Software Engineering. Rashid et al. (2009) concluded that ethics has long been an important part of software engineering and discussed the ethical challenges facing software engineers who design systems for the digital world. Aydemir and Dalpiaz (2018) introduced an analytical framework to aid stakeholders, including users and developers, in capturing and analysing ethical requirements to foster ethical alignment within software artifacts and development processes. In a similar vein, according to Pierce and Henry (1996), personal ethical principles, workplace ethics, and adherence to formal codes of conduct all play a significant role in shaping the ethical conduct of software professionals; the same study also examines the extent of influence exerted by each of these three factors. On a related note, Hall (2009) examines ethical conduct in the context of software engineers, emphasising the importance of good professional ethics. Furthermore, Fraga (2022) surveyed software engineering professionals to explore the role of ethics in their field; the findings suggest that ethical leadership among systems engineers can be promoted when they adhere to established standards, codes, and ethical principles. Together, these studies indicate that ethics in ICT and Software Engineering has been of significant importance for a long time, and that there has been a sustained effort to improve ethical considerations in these fields.
In summary, there is a recognised need for a stronger focus on professional ethics in guiding the daily activities of computing professionals. Multiple studies consistently demonstrate the substantial influence of ethical codes on decision-making in ICT and Software Engineering, shaping behaviour and ethical assessments. These collective findings underscore the importance of ethical considerations in both fields.
2.2 Secondary Studies on AI Ethics
A number of secondary studies have investigated the ethical principles and guidelines related to AI. For example, Khan et al. (2022) conducted a Systematic Literature Review (SLR) to investigate the agreement on the significance of AI ethics principles and to identify potential challenges to their adoption. They found that the most common AI ethics principles are transparency, privacy, accountability, and fairness, while significant challenges to incorporating ethics into AI include a lack of ethical knowledge and vague principles. Likewise, Ryan and Stahl (2020) conducted a review study providing a comprehensive analysis of the normative consequences of current AI ethics guidelines, specifically targeting AI developers and organisational users. Lu et al. (2022) conducted an SLR to identify the responsible AI principles discussed in the existing literature and to uncover potential solutions for responsible AI; they also outlined a research roadmap for software engineering with a focus on responsible AI.
Likewise, review studies have investigated the ethical concerns raised by the use of AI in different domains. Möllmann et al. (2021) conducted an SLR to explore which ethical considerations of AI are being investigated in digital health and classified the relevant literature based on five ethical principles of AI: beneficence, non-maleficence, autonomy, justice, and explicability. Similarly, Royakkers et al. (2018) conducted an SLR to explore the social and ethical issues arising from digitisation across six technologies: the Internet of Things, robotics, biometrics, persuasive technology, virtual and augmented reality, and digital platforms. The review uncovered recurring themes such as privacy, security, autonomy, justice, human dignity, control of technology, and the balance of powers.
Studies have also explored methods and approaches to enhance the ethical development of AI. For example, Wiese et al. (2023) conducted an SLR to explore methods to promote and engage practice on the front end of ethical and responsible AI; the study was guided by an adaptation of the PRISMA framework and Hess and Fore’s 2017 methodological approach. Morley et al. (2020) conducted a review study exploring publicly accessible AI ethics tools, methods, and research for translating ethical principles into practice.
Most of these secondary studies have focused on specific AI ethical principles, the ethical consequences of AI systems, or approaches to enhance the ethical development of AI. A review study that identifies and analyses primary empirical research on AI practitioners’ perspectives on AI ethics is therefore important for understanding the ethical landscape in the field. It can also inform practical interventions, contribute to policy development, and guide educational initiatives aimed at promoting responsible and ethical practices in the development and deployment of AI technologies.
2.3 Ethics in AI
There are numerous and divergent views on the topic of ethics in AI (Vakkuri et al. 2020b; Mittelstadt 2019; Hagendorff 2020), as AI is increasingly applied in various contexts and industries (Kessing 2021). AI practitioners and researchers hold mixed perspectives on AI ethics. Some believe there is no rush to consider AI-related ethical issues because AI is still a long way from being comparable to human capabilities and behaviours (Siau and Wang 2020), while others conclude that AI systems must be developed with ethics in mind because they can have an enormous societal impact (Bostrom and Yudkowsky 2018; Bryson and Winfield 2017). Although viewpoints vary from practitioner to practitioner, most conclude that AI ethics is an emerging, widely discussed, and currently relevant real-world issue (Vainio-Pekka 2020). This indicates that while opinions on the importance of AI ethics may differ, there is a consensus that the subject is highly relevant in the present context.
A number of studies in the area of AI ethics have been conceptual and theoretical in nature (Seah and Findlay 2021). Critically, there are copious guidelines on AI ethics, making it challenging for AI practitioners to decide which to follow. Unsurprisingly, studies have been conducted to analyse the ever-growing list of AI principles (Kelley 2021; Mark and Anya 2019; Siau and Wang 2020). For example, Jobin et al. (2019) reviewed 84 ethical AI principles and guidelines and concluded that only five AI ethical principles (transparency, fairness, non-maleficence, responsibility, and privacy) are mainly discussed and followed. Fjeld et al. (2020) reviewed 36 AI ethical principles and reported eight key themes of AI ethics: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. Likewise, Hagendorff (2020) analysed and compared 22 AI ethical guidelines to examine their implementation in the practice of research, development, and application of AI systems. Some review studies focused on the challenges and potential solutions in the area of AI ethics, for example, Jameel et al. (2020) and Khan et al. (2022). The desire to set ethical guidelines in AI has intensified due to increased competition between organisations to develop robust AI tools (Vainio-Pekka 2020). However, only a few guidelines indicate an oversight or enforcement mechanism (Inv 2019). Overall, recent research has dedicated significant attention to the analysis and comparison of various sets of ethical principles and guidelines for AI.
Similarly, AI practitioners have expressed various concerns regarding public policies and ethical guidelines related to AI. For example, while the ACM Code of Ethics places responsibility on practitioners creating AI-based systems, a research study revealed that these practitioners generally believe that only physical harm caused by AI systems is crucial and should be taken into account (Veale et al. 2018). In November 2021, the UN Educational, Scientific, and Cultural Organisation (UNESCO) signed a historic agreement outlining the shared values needed to ensure the development of Responsible AI (UN 2021). Varanasi and Goyal (2023) interviewed 23 AI practitioners from 10 organisations to investigate the challenges they encounter when collaborating on the Responsible AI (RAI) principles defined by UNESCO. The findings revealed that practitioners felt overwhelmed by the responsibility of adhering to specific RAI principles (non-maleficence, trustworthiness, privacy, equity, transparency, and explainability), leading to an uneven distribution of their workload. Moreover, implementing certain RAI principles (accuracy, diversity, fairness, privacy, and interoperability) in real-world scenarios proved difficult due to conflicts with personal and team values. Similarly, Rothenberger et al. (2019) conducted an empirical study with AI experts to evaluate several AI ethics guidelines, among them Microsoft’s AI Ethical Principles. The participants considered ‘Responsibility’ to be the foremost and most significant ethical principle in the realm of AI, followed by ‘Privacy protection’ as the second most crucial. This emphasises the perspective of these AI experts, who consider prioritising responsible AI practices and safeguarding user privacy to be fundamental to the ethical advancement and implementation of AI, without regarding other principles as equally crucial. Likewise, Sanderson et al. (2023) carried out an empirical investigation involving AI practitioners and designers to assess the Australian Government’s high-level AI principles and to examine how these ethical guidelines were understood and applied by practitioners and designers within their professional contexts. The results indicated that implementing certain AI ethical principles, such as those related to ‘Privacy and security’, ‘Transparency’ and ‘Explainability’, and ‘Accuracy’, posed significant challenges. This suggests that studies have explored the relationship between AI practitioners and the guidelines established by public organisations, as well as practitioners’ sentiments towards each guideline.
Another prominent area of focus has been the existing gap between research and practice in the field of AI ethics. Smith et al. (2020) conducted a review study to identify gaps in the research and practice of ethical data-driven software development and highlighted how ethics can be integrated into the development of modern software. Similarly, Shneiderman (2020) provided 15 recommendations to bridge the gap between ethical principles of AI and practical steps for ethical governance. Likewise, there are solution-oriented papers discussing models, frameworks, and methods for AI developers to enhance their implementation of AI ethics. For example, Vakkuri et al. (2021) present an AI maturity model for AI software, while Vakkuri et al. (2020a) discuss the ECCOLA method for implementing ethically aligned AI systems. There are also papers presenting a toolkit to address fairness in ML algorithms (Castelnovo et al. 2020) and a transparency model for designing transparent AI systems (Felzmann et al. 2020). In general, recent studies have centred on addressing the gap between research and practical application in AI ethics, including the development of various tools and methods aimed at improving the ethical implementation of AI.
Overall, existing studies primarily focus on analysing the plethora of ethical AI principles, bridging the gap between research and practice, or discussing toolkits and methods. However, compared to the number of papers on AI ethics describing ethical guidelines, principles, tools, and methods, there is a relative lack of studies focusing on the views and experiences of AI practitioners regarding AI ethics (Vakkuri et al. 2020b). Furthermore, the literature underscores the need for review studies that evaluate and synthesise the existing primary research on AI practitioners’ views and experiences of AI ethics (Khan et al. 2022; Leikas et al. 2019). To assimilate, analyse, and present the empirical evidence spread across the literature, we conducted a Grounded Theory Literature Review (GTLR), with some adaptations to the original framework, to investigate AI practitioners’ viewpoints on ethics in AI, drawing data from papers whose primary focus may not have been understanding practitioners’ viewpoints but which nonetheless contained relevant information.