Having introduced our public interest AI framework and examined it via two public examples, we conclude the paper by looking at its broader implications vis-à-vis ethical AI principles and by discussing some remaining challenges and open questions.
5.1 The relation between public interest AI and the broader AI ethics debate
In the last decade, much has been published on the question of how to design AI to serve certain ethical principles (to name just a few: AI HLEG 2019; Floridi et al. 2020; Leslie 2019; ‘AI for People’). At the center of these approaches is the engineer, who bears the responsibility to follow ethical values and to ensure their embedding in the technology (Simon et al. 2020; Umbrello and van de Poel 2021). Generally, there is a strong focus on the values of fairness, accountability, and transparency as safeguards against biases (Eiband et al. 2018; or works presented at the ACM FAccT conference 2020). Another widely discussed topic is the design of explainability (e.g., Arya et al. 2019; Miller 2019; Wolf 2019; Liao et al. 2020). While acknowledging that the values embedded in AI have a crucial impact on how systems affect society (see van de Poel 2020), we nevertheless believe it is necessary to shift the focus from predefined values to the procedure of AI development and deployment, as we have laid out in this paper.
Generally, we agree with most of the values and principles that ethical guidelines for AI argue for (for an overview, see Floridi and Cowls 2019; Jobin et al. 2019). Nevertheless, like other authors, we doubt that ethical guidelines based on principles alone can provide a binding framework for trustworthy and ethical AI in which compliance and impact can be monitored and validated (see Mittelstadt 2019, p. 505). We believe that an approach focused on the public interest, with its strong connection to the rule of law and its orientation toward process and governance, is more promising for bridging the gap between values, principles, and concrete AI implementations that lead to democratic outcomes.
One question that is often underrepresented in AI ethics guidelines is whether AI should be used at all. Powell (2021) and Gürses et al. (2020) highlight the societal consequences of the paradigm of optimization (in which AI plays a driving role). Powell (2021) argues for a right to minimal viable datafication, which means “seeking to employ decision-making strategies that may appear to be more costly on the surface but that leave space for different kinds of knowledge, as well as for data to decay over time, for frictions to be identified and addressed, and for different forms of democratic participation and accountability, including but not limited to data audit, sensing citizenship, and autonomous networking” (p. 177). In agreement with these authors, the public interest principles we identified call for a public justification of whether AI should be used in a given case, and include an imperative to serve equality and human rights.
Although we advocate for a deliberative approach, we highly appreciate the internationally coordinated attempts to set boundaries and a clear legal framework for AI, for instance with the European Commission’s (2021) proposed AI Regulation or the Council of Europe’s CAHAI (2020) reports. The rule of law, after all, is itself in the public interest. But the use of AI in administrative practice shows that laws alone are not enough to ensure effective and democratically accepted outcomes. To achieve this, it is essential to understand the meaning of the public interest concept and to bring it to the forefront of AI projects aiming to serve the public.
It is important to highlight again the difference in scope between projects that fall under the public interest and the broader ethical AI discourse. In short, AI projects that primarily serve profit maximization do not fall under the public interest, even when they are (hopefully) non-maleficent in nature, have positive effects on society, and follow ethical AI guidelines. This is because, as we argued, public interest projects need to serve equality, which often runs counter to private, profit-oriented interests. Additionally, profit-driven objectives are often counterproductive to a truly participatory design approach. As Sloane et al. (2020) point out, “[in a corporate setting] justice can almost be seen as an oxymoron: given the extractive and oppressive capitalist logics and contexts of ML systems, it appears impossible to design ML products that are genuinely ‘just’ and ‘equitable’.” Specifically in those cases where AI is not designed to serve the public interest but with profit-oriented interests at heart, general ethical guidelines are a necessary addition to upcoming regulations. In agreement with other scholars (Jobin et al. 2019, p. 96), we believe that in such cases, AI ethics should be further harmonized in a collaborative effort among stakeholders to give it a binding character, and should be embedded in a broader framework of ethical action within organizations (Lauer 2021).
The hype around artificial intelligence for social good is still ongoing and requires further debunking. In many discussions, the conclusion that a project is “for good” is reached too quickly, without proper consideration of important details and without the help of any established theoretical analysis. Even though the ‘good’ or the ‘public interest’ cannot be defined universally, democracies have established political agreements and institutions to define exactly this. As we have hopefully exemplified with this article, there are existing concepts, theories, discourses, and deliberative procedures available to guide us to pragmatic conclusions.
5.2 Open questions for further research
The concept of public interest AI raises interesting new questions that require further research.
First of all, we think more work needs to be done to determine what degree and what type of deliberation and co-design are necessary for AI projects to deliver on the promise of serving the public interest. Similar to the position articulated by Sloane et al. (2020), we believe that more attention needs to be paid to successful and appropriate methods of participatory design overall; the step towards implementing the results, in particular, is hard. As many attempts have shown, “design by committee” does not necessarily go well with creation. How to bridge this gap is therefore an important question for further research. On a more detailed level, we are interested in learning more about tools and methods that translate between participatory design and technical implementation.
Another related and important question concerns a better understanding of the gap between the vision and the reality of open-source software for public interest (and in particular public sector) AI. While we do hear voices in general agreement that open source should be the goal for public service infrastructures, the reality seems to impose obstacles to actual adoption that are, to the best of our knowledge, under-researched. It thus also remains an open question in which scenarios, under which licenses, and to what degree a commitment to free and open-source software is necessary for public interest AI.
Finally, as a basis for more extensive research, we are releasing a survey and creating a dataset of public interest AI cases. We aim to identify cases in broad areas, including public administration, and to test the (so far theoretical) guiding principles we have presented in this paper.