Although the guidelines show various parallels and several recurring topics, which issues do they discuss only very occasionally or not at all? Here, I want to give a (non-exhaustive) overview of such missing issues. Two things should be considered in this context. First, the sampling method used to select the AI ethics guidelines affects the list of issues and omissions: deliberately excluding, for instance, robot ethics guidelines means that the list of entries lacks issues connected with robotics. Second, not all omissions can be treated equally. Some issues are missing or severely underrepresented without any good reason, for instance the aspect of political abuse or the “hidden” social and ecological costs of AI systems, whereas other omissions can be justified, for instance deliberations on artificial general intelligence or machine consciousness, since those technologies are purely speculative.
Nevertheless, given that significant parts of the AI community consider the emergence of artificial general intelligence, along with the associated dangers for humanity or even existential threats, a likely scenario (Müller and Bostrom 2016; Bostrom 2014; Tegmark 2017; Omohundro 2014), one could argue that these topics should be discussed in ethics guidelines under the umbrella of potential prohibitions on pursuing certain research strands in this area (Hagendorff 2019). That artificial general intelligence is not discussed in the guidelines may be because most of the guidelines are written not by research groups from philosophy or other speculative disciplines, but by researchers with a background directly in computer science or its applications. In this context, it is noteworthy that the fear of an emerging superintelligence is more frequently expressed by people who lack technical experience in the field of AI (one need only think of Stephen Hawking, Elon Musk, or Bill Gates), while “real” experts generally regard the idea of a strong AI as rather absurd (Calo 2017, 26). Perhaps the same holds true for the question of machine consciousness and the ethical problems associated with it (Lyons 2018), as this topic is likewise omitted from all examined ethics guidelines.
It is also striking that only the Montréal Declaration for Responsible Development of Artificial Intelligence (2018) and the AI Now 2019 Report (2019) explicitly address the aspect of democratic control, governance, and political deliberation of AI systems. These two documents are also the only guidelines that explicitly prohibit using AI systems to impose certain lifestyles or concepts of “good living” on people, as demonstrated, for example, by the Chinese scoring system (Engelmann et al. 2019). The former document further criticizes the use of AI systems in ways that reduce social cohesion, for example by isolating people in echo chambers (Flaxman et al. 2016). In addition, hardly any guideline discusses the possibility of political abuse of AI systems in the context of automated propaganda, bots, fake news, deepfakes, microtargeting, election fraud, and the like.
What is also largely absent from most guidelines is the lack of diversity within the AI community. This lack of diversity prevails in the field of artificial intelligence research and development as well as in the workplace cultures shaping the technology industry. In the end, a relatively small group of predominantly white men determines how AI systems are designed, for what purposes they are optimized, what engineers attempt to realize technically, and so on. To name just one example, the famous AI startup “nnaisense”, run by Jürgen Schmidhuber and aiming at generating an artificial general intelligence, employs only two women (one scientist and one office manager) but 21 men.
Another matter that is covered not at all or only very rarely in the guidelines is robot ethics. As mentioned in the methods chapter, specific guidelines for robot ethics exist, most prominently represented by Asimov’s three laws of robotics (Asimov 2004), but those guidelines were intentionally excluded from the analysis. Nonetheless, advances in AI research contribute, for instance, to increasingly anthropomorphized technical devices. The ethical question that arises in this context echoes Immanuel Kant’s “brutalization argument”: the abuse of anthropomorphized agents, as is the case, for example, with language assistants (Brahnam 2006), also increases the likelihood of violent actions between people (Darling 2016). Apart from that, the examined ethics guidelines pay little attention to the rather popular trolley problems (Awad et al. 2018) and their alleged relation to ethical questions surrounding self-driving cars or other autonomous vehicles. Relatedly, no guideline deals in detail with the obvious question of where systems of algorithmic decision making are superior or inferior to human decision routines.
Furthermore, virtually no guideline deals with the “hidden” social and ecological costs of AI systems. At several points, the guidelines emphasize the importance of AI systems for achieving a sustainable society (Rolnick et al. 2019). What they omit, however (with the exception of the AI Now 2019 Report (2019)), is that producer and consumer practices in the context of AI technologies may in themselves contradict sustainability goals. Relevant issues here include lithium mining, e-waste, the one-way use of rare earth minerals, energy consumption, and low-wage “clickworkers” who create labels for data sets or do content moderation (Crawford and Joler 2018; Irani 2016; Veglis 2014; Fang 2019; Casilli 2017). Although “clickwork” is a necessary prerequisite for the application of supervised machine learning methods, it is associated with numerous social problems (Silberman et al. 2018; Irani 2015; Graham et al. 2017), such as low wages, poor working conditions, and adverse psychological consequences, which tend to be ignored by the AI community.
Finally, not a single guideline raises the issue of public–private partnerships and industry-funded research in the field of AI. Despite the massive lack of transparency regarding the allocation of research funds, it is no secret that large parts of university AI research are financed by corporate partners. In light of this, it remains questionable to what extent the ideal of freedom of research can be upheld, or whether a gradual “buyout” of research institutes is taking place.