
Open Access 25-02-2022 | Open Forum

The problem with trust: on the discursive commodification of trust in AI

Authors: Steffen Krüger, Christopher Wilson

Published in: AI & SOCIETY | Issue 4/2023


Abstract

This commentary draws critical attention to the ongoing commodification of trust in policy and scholarly discourses of artificial intelligence (AI) and society. Based on an assessment of publications discussing the implementation of AI in governmental and private services, our findings indicate that this discursive trend towards commodification is driven by the need for a trusting population of service users to harvest data at scale and leads to the discursive construction of trust as an essential good on a par with data as raw material. This discursive commodification is marked by a decreasing emphasis on trust understood as the expected reliability of a trusted agent, and increased emphasis on instrumental and extractive framings of trust as a resource. This tendency, we argue, does an ultimate disservice to developers, users, and systems alike, insofar as it obscures the subtle mechanisms through which trust in AI systems might be built, making it less likely that it will be.

1 Introduction

Trust is all the rage in thinking about artificial intelligence (AI) and society. It dominates international principles and guidelines developed by the private sector (IBM 2018) and by multilateral organizations (OECD 2019). It takes center stage in national strategies for AI (Misuraca and Van Noordt 2020) and in the mandates for proposed national regulatory institutions (Mulgan 2016). The idea of trust grounds scholarly proposals for AI governance regimes (Janssen et al. 2020) and experimental tinkering in the programming of algorithms (Ribeiro et al. 2016; Toreini et al. 2020). It frames civil society calls for ethical AI (Balaram et al. 2018), is extolled as “essential” by social commentators (Larsson et al. 2019), and spurs the launch of boutique consultancies.1 Indeed, a recent mapping of efforts to strengthen the governance of AI finds a proliferation of principles “from all sectors, including governments, corporations, and civil society,” and notes that the “stated purpose of many of the principles is to forge trust: between governments and citizens, between corporations and consumers or users, and between AI researchers and the general public” (Cussins Newman 2020, p. 11).
Critical voices in this discourse are few and far between. After all, who would want to argue against the importance of trust and trustworthiness? Yet, some stakeholders have pointed towards the undue strain that is being put on the concept. A disillusioned member of the High-Level Expert Group responsible for developing EU ethics guidelines, for example, criticizes the notion of trustworthy AI as “conceptual nonsense”:
Machines are not trustworthy; only humans can be trustworthy (or untrustworthy). If, in the future, an untrustworthy corporation or government behaves unethically and possesses good, robust AI technology, this will enable more effective unethical behaviour (Metzinger 2019).
Warnings such as Metzinger’s seem to go mostly unheard in a discourse that is dominated by enthusiasm for the societal and market benefits assumed to follow from AI technologies. Instead, the literature considered in this commentary suggests that the concept of trust has a near-universal utility. Trust in AI is imagined to help governments gain legitimacy (Freuler and Iglesias 2018; Kuziemski and Misuraca 2020); to improve customer satisfaction with services (Al-Mushayt 2019; Berscheid and Roewer-Despres 2019; Matheny et al. 2019; Mehr 2017); to help societies realize their potential for social good (Mikhaylov et al. 2018; Tomašev et al. 2020); and to unlock market value (Chakravorti and Shankar 2017; Gordon Myers and Nejkov 2020; Orr and Davis 2020). Simultaneously, trust is widely applied as a quality seal for “good AI” and is often collocated with “excellence” or “cutting-edge” technology, as in the European Commission’s “White Paper on artificial intelligence: a European approach to excellence and trust” (European Commission 2020, our emphasis).
A complementary strain of literature, meanwhile, warns of the risk that AI can fail to achieve its objectives when it is not trusted. Villani (2018), for example, states that “widespread distrust of AI on the part of the general public, […] in the long run is liable to curb its development and all the benefits it could bring” (p. 116). Just as social media platforms wither away when they are not being fed with the interactions of a critical mass of people, AI will not develop sufficient intelligence—will remain dumb and unreliable—when left unused. Such “bad AI” can then lead to a vicious circle in which already scarce public trust is damaged further by the insufficient number of interactions the system receives (Reisman et al. 2018).
These hopes and fears must be understood in light of the market forces which drive them. The emergence of AI technologies corresponds with the emergence of new markets, as indicated by stiff competition between private companies, and efforts by governments to strengthen national competitiveness (European Strategy for Data 2020; Misuraca and Van Noordt 2020). The coupling of strong incentives to advance AI systems with the inherent uncertainty surrounding AI’s societal impact then leads to a polarized dynamic in which a skeptical public (Anderson et al. 2018) is matched by “AI cheerleaders” working to normalize socio-technical systems on behalf of private interests (Bourne 2019). The risk that this polarization can slide into paternalistic models of technological governance is very high indeed (Cardullo and Kitchin 2019; Oravec 2019) and should be taken seriously. As Cath (2018) notes when considering AI governance regimes:
…industry efforts [to develop best practices] are laudable, but it is important to position them in light of three important questions. First, who sets the agenda for AI governance? Second, what cultural logic is instantiated by that agenda and, third, who benefits from it? Answering these questions is important because it highlights the risks of letting industry drive the agenda and reveals blind spots in current research efforts (Cath 2018, pp. 3–4).
It is in this vein that we question the prominence of trust in the contemporary discourse on AI and society. To be clear, we do not assert that trust is in any way unimportant. Efforts that address widely held public concerns about AI’s societal impacts are laudable, including efforts to build trust. We are concerned, however, that this near-universal attention to the concept obscures exactly the challenges that make trust in AI problematic in the first place.
This analysis is not an exploration of trust in AI or its conditions, whether they are present or ought to be, or whether trust is commodified in an economic sense of specific user interactions or market dynamics. Indeed, we are not concerned with trust per se, but the way in which the notion of trust is asserted and leveraged within a very specific discourse. This commentary is concerned with the commodification of trust as an important aspect, or moment, in the public discourse on AI and society, and the implications that moment might have for how AI is understood and implemented. Specifically, we are concerned by the increasing practical and instrumental attention to the general public’s trust in AI as a precondition for generating the vast quantities of data required by machine-learning systems. Hence, trust becomes asserted as something that should be provided blindly, as a resource to be extracted, and as an instrument for unlocking value. Under these circumstances, the discursive commodification of trust is closely associated with the eagerness of private and government stakeholders to promote the implementation of AI systems in the name of trust, while turning a blind eye to the question of whether those systems are actually deserving of people’s trust.
Our argument proceeds with a brief description of the theoretical background and methods that inform this commentary. A third section then outlines the relevant conceptions and levels of functioning of trust in the context of AI, before a fourth section addresses the problems associated with the existing perspectives. The fifth section concludes with an assessment of what trust has come to mean in the contemporary discourse and offers critical suggestions about how to counter the biases that this discourse produces, in order to safeguard the common good and public well-being.

2 Theoretical background and method

Methodologically, what follows is best understood as a critical discourse analysis aligned with the understanding of socio-linguistic conventions as expressions of power relationships and power struggles (Fairclough 2010). In our application of the method, we take as our point of departure Michel Foucault’s (1972) definition of discourse as relations between statements, groups of statements, and other events, even if these statements and events have no prior connection to one another. In contradistinction to Foucault, however, we make room for the social effectiveness of non-discursive spheres, in line with Norman Fairclough (1995), as well as for a notion of subjectivity irreducible to social and discursive positions (e.g., Henriques et al. 1984; Carolan and Bell 2003). We believe that this is a productive approach to exploring the ways in which ideologies and hegemonic values are asserted and accepted in the discourses surrounding new technologies.
This approach is generally aligned with the tradition of critical social theory that understands these dynamics in terms of power and power relationships between groups and individuals (see Stoddart 2007). While the power relationships at play in this discourse are not the main focus of our analysis, this framing is particularly important in the context of AI’s simultaneous opacity and ubiquity. We hope that our analysis can direct attention to the powerful interests that are driving the current discourse on AI governance, or, in Cath’s (2018) language, ask who is setting the “agenda.” This approach is also well suited to an analysis of the role of trust in such a discourse. Not only is trust widely recognized to be a discursive phenomenon, constructed and situated within social systems (Carolan and Bell 2003), but critical discourse theory also provides a productive framework for understanding how notions of trust weigh in on and influence governance paradigms, precisely because “trust-related and trust-bearing issues are central to our understanding of how the conduct of professional practices impacts on human relationships in social life” (Candlin and Crichton 2013, p. 1). In this respect, our approach departs from the analysis often applied to the micro-level of communications in Information Systems research (Cukier et al. 2009) and remains focused on the macro-level of public discourse and on how the social reality inscribed in it is indicative of the balances and imbalances of power in society. This also follows the theoretical orientation of critical policy studies in understanding that the language used in public documents with a scholarly or policy orientation plays a key role in staking out what is politically possible and desirable in regard to AI governance as a policy problem (Mulderrig et al. 2019).
Furthermore, our theoretical orientation is reflected in our use of the term “commodification”. There is a significant body of economic research conceptualizing trust as a commodity, which can be traced from Zucker’s (1986) seminal work on the production of trust in macro-economic structures to game-theoretical work on the commodification of trust in specific economic interactions (Dasgupta 1989). Recently, there has also been increased interest in the commodification of trust in regard to new technologies; Bodó (2021), for example, explores how blockchain technologies “transform trust and trustworthiness, a form of social capital, into a commodity, an industrially produced asset that can be quantified, traded, enclosed, and sanctioned” (pp. 2678–2679). Our critical discourse analysis departs from this economic approach in favor of a Marxist understanding of commodification and a critique of neoliberal practices that leverage new technologies to assert and reinforce power relationships (Cardullo and Kitchin 2019). Unlike comparable critical readings (e.g., Arvanitakis 2007), however, we are attuned to this process at the level of discourse, rather than at that of individuals’ relationships or social realities.
Our sample for assessing the academic and policy discourse on AI and society is selective and does not aspire to be strictly representative, inviting a critique that is often levied at critical discourse analysis (Breeze 2011). We nevertheless believe that a partial review of the literature is justified, given the highly diffuse and fragmented nature of the discourse (Cussins Newman 2020) and the imprecise nature of trust as a concept. Our sample is still broad, comprising over 100 publications addressing AI, society, and trust. It includes academic journal articles (n = 65), government strategies and white papers (n = 29), analyses from non-governmental organizations (NGOs) and think tanks (n = 10), reports from industry consultancies (n = 2), and journalistic opinion pieces and blog posts (n = 5).

3 What we talk about when we talk about trust

For all its commonsensical appeal, trust is a surprisingly slippery concept, and as Hardin (2002) notes, there is “no Platonically essential notion of trust. Ordinary-language usages of the term trust are manifold and ill articulated” (p. xx). This ambiguity makes sense to a degree, if one understands trust as a social phenomenon, discursively constructed, bound by context, and continuously negotiated as a basis for social interaction (Candlin and Crichton 2013, p. 9). Ambiguity in this context is also what makes trust such a powerful social phenomenon, as a fundamental condition for complex social action and cooperation (Trust: Making and Breaking Cooperative Relations 1989) and as underpinning the social construction of truth itself (Carolan and Bell 2003).
A common-language understanding of trust often emphasizes trust cultivated through a process of learning from experience in interpersonal relations over time; this type of trust has been the focus of a significant strain of psychological research (e.g., Berzoff 2011; Erikson 1963). Such trust, situated in individual interactions and expectations, has been integrated with sociological conceptions of “system trust”, in which trust is marked by increasing complexity and by the challenge of anticipating the behaviour of others, including complex social systems (Giddens 1990; Granovetter 1985; Luhmann 2017). Though sociological research has increasingly focused on the de-personalized character of trust in such complex systems (Shapiro 1987), there is widespread agreement among scholars that individuals’ trust—be it in other people, institutions, or systems—is always a matter of both prediction and normativity and is “always associated with expectations about the behaviour of others” (Candlin and Crichton 2013, p. 2). This resonates with Hamm et al.'s (2016) cross-disciplinary review of what they call institutional trust, which emphasizes perceptions that institutions are protecting people’s welfare as the primary antecedent of this type of trust.
For this commentary, it is not necessary to trace the messy debate about micro- and macro-level dynamics that contribute to building and eroding trust (e.g., Bachmann and Inkpen 2011). Indeed, there is every reason to believe that different types of trust are operating simultaneously, as has been demonstrated in experimental research on individuals’ trust in online financial interactions (Pennington et al. 2003). To explore how the notion of trust is manifest in the discourse on AI and society, it is nevertheless useful to note that there are several ways in which trust might be cultivated in novel technologies like AI. While, in keeping with Hamm et al. (2016) and critics like Bodó (2021), trust might be earned with the exhibition of trustworthy behaviour, other studies indicate that, under certain conditions, people are willing to trust in emerging technologies even if these technologies have by no means earned their trust (e.g., Mazey and Wingreen 2017).
Furthermore, dynamics of trust in AI occur in imperfect information systems, and complex systems introduce multiple vectors for mistrust (Shapiro 1987). This has led to several proposals for how trust can be created and vested through proxy, including notions of trust mediators (Bodó 2021), data stewards (Janssen et al. 2020), or algorithmic social contracts to embed societal values in AI design (Rahwan 2017). Other research has emphasized the importance of institutional mechanisms to foster trust and mutual accountability between the different sectors involved in AI development (Brundage et al. 2020), and suggests that it might also be necessary to introduce macro-level systemic changes to cultivate trust at scale. This aligns with Steedman et al.’s (2020) exploration of trust in data-driven systems: “particularized solutions to generalized problems are unlikely to be effective. We need collective, ecosystem solutions, for example, better regulation of data-driven systems, in order for them to be perceived as more trustworthy” (p. 829).
Because trust is a matter of perception and prediction, however, trust might also be cultivated without the performance of trustworthy behaviour. It is in this vein that Bourne and Edwards (2012) describe the discursive strategies deployed to cultivate system trust in the hedge fund industry. In the same vein, critics of AI systems lament the emphasis on the public relations campaigns and marketing that surround AI technologies and technology vendors (Bourne 2019; Cardullo and Kitchin 2019; Oravec 2019). For the present commentary and analysis, then, the question becomes which of these responses is most prominent in the contemporary scholarly and policy discourse on AI and society.

4 Problems with trust

What becomes apparent from the discussion so far is that researchers and stakeholders are confronted with a challenge: to a significant degree, the object one is asked to invest one’s trust in is still in the process of emerging. The very notion of AI is under public construction, both in the expert discourse reviewed here and in the experiences and perspectives of the individuals who become AI users. Shifts and changes in sociocultural meanings affect the conditions for trust, and the global discourse sets the stage for how (and whether) those conditions will be manifest. It is from this constructionist vantage point, which conceives of objects as being shaped by the ways in which they are understood, that we want to draw critical attention to the meanings we see emerging from the recent literature, where trust is frequently reduced to blind trust, to a resource, or to an instrument.

4.1 Trust is blind

“[W]hether or not we are comfortable with AI may already be moot”, states the World Economic Forum’s report assessing the risks of AI systems (World Economic Forum 2017, p. 48). Its dry gesturing toward the inevitability of AI’s implementation illustrates just how much the question of trust in these systems is interwoven with the power relationships that support and are reinforced by them (Doteveryone 2018; Steedman et al. 2020). These power relations come even more clearly to the fore when the report explains that “To ensure that AI stays within the boundaries that we set for it, we must continue to grapple with building trust in systems that will transform our social, political and business environments, make decisions for us, and become an indispensable faculty for interpreting the world around us” (p. 51). To put the gist of this argument bluntly: since AI already is a social reality, we had better find ways to trust it.
Indeed, this rhetoric takes on a coercive tenor in some articulations, as for example in Polonski’s (2018) assertion that trust in AI systems is necessary in order for users to avoid being left behind:
[G]iven the unrelenting pace of technological progress, refusing to partake in the advantages offered by AI could place a large group of people at a serious disadvantage. As AI is reported and represented more and more in popular culture and in the media, it could contribute to a deeply divided society, split between those who believe in (and consequently benefit from) AI and those who reject it.
From this perspective, mistrust is an obstacle, a speedbump on the highway of inevitable progress, and skeptical users are pitted in opposition to the evocation of a greater public good. Not only is their skepticism detrimental to the latter; it will also be their own fault if they end up as the losers of the technosocial paradigm shift that AI is bringing about.
Such antagonistic positions, however, are less common in the discourse we reviewed, where reference to a decisively symbiotic relationship between AI systems, regulators, developers, and users is dominant. Rana el Kaliouby, CEO of an AI development firm specializing in human emotion detection, for example, writes on the World Economic Forum’s website that “we’re forging a new kind of partnership with technology. And with that partnership comes a new social contract: one that’s built on mutual trust, empathy and ethics” (el Kaliouby 2019). This “new social contract”, we hold, rests on a misleading premise of equality between the actors who design and implement AI and the people who use it. The latter usually have very little knowledge about the system and its functioning; the system, by contrast, constantly gains information about the people—a problem that Zuboff (2019) has identified, in a similar context, as the decline of “reciprocity” (pp. 499–504).
Ideals of close partnership and symbiosis are further evoked when intergovernmental organizations recommend inclusive multi-stakeholder collaborations (UNESCO 2018) or prominent academics envision interdisciplinary collaborations across the public, private, and non-profit sectors driving the development of AI for social good (Tomašev et al. 2020). Common to all these is a tendency to obscure the power relations that underpin AI systems by conflating efforts for building and managing trust with the status of deserving trust.
The consequent outcome of such rhetorical maneuvers is the fostering of blind trust among the public: trust is desired without attention to the ways in which it might be built, the reasons it might not be merited, or what might go wrong. Indeed, this inherent opacity might also be the reason why, for all the emphasis on ethical AI in this discourse, attention to the potential harm AI can cause is remarkably muted. While there are numerous examples of discriminatory and biased outcomes of AI use in the public sector, reinforcing socio-economic disparities and inequalities of justice (e.g., Eubanks 2018; Grace 2019; O’Neil 2016; Park and Humphry 2019), the ways in which harm was caused are often unclear, because it remains an open question to what extent “algorithmic predictions influence human decision-makers” (Eubanks 2018, quoted in Balaram et al. 2018, p. 10).

4.2 Trust is a resource

Closely aligned with notions of blind trust is the rendering of trust into a resource and raw material—a process most readily apparent in the AI strategies of national governments, particularly of countries with high levels of trust. The Nordic Council of Ministers (2017), for example, states proudly that “The Nordic region is regarded as a world leader when it comes to social trust among its population”, adding that public trust in government is “perhaps the most important resource in the Nordic societies” (p. 10). The report pushes the issue even further by stating that “trust can be regarded as a type of gold for the Nordic countries” (Nordic Council of Ministers 2017, p. 7, our emphasis). In this way, trust, previously presumed to be an outcome of and reward for virtuous governance, becomes commodified and hollowed out: framed as a resource to be manufactured, grown and managed, acquired and supplied, and then extracted and refined in the service of other aims.
True to this goldrush spirit, the Norwegian national strategy for AI (Norway 2020) frames the country’s high levels of public trust as part of its national brand (cf. p. 2) and states its intention to lead “the way in developing human-friendly and trustworthy artificial intelligence” that, it hopes, “may prove a key advantage in today's global competition” (p. 2). The Danish national strategy joins in: “[A]lmost all Danes use the internet on a daily basis, and there is a high degree of mutual trust and confidence. This means that Denmark is adaptable, and there is a good basis for implementing artificial intelligence” (Denmark 2019, p. 16).
Nations blessed with less of this ‘Nordic gold’, however, have also come to understand trust as a resource. While Luxembourg (2019) intends to create “an innovative and trusted regulatory environment in order to attract data-driven and data-centric services and businesses” (p. 13), the German national strategy (Germany 2018) articulates the hope that “the high level of data protection and privacy standards achieved in the EU build citizens’ trust in new AI technologies and can therefore give German and European companies a competitive advantage internationally” (p. 16). To briefly restate our core argument: while we regard achievements such as high levels of data protection as virtuous and welcome, the envisioned use of people’s trust as a competitive advantage presents a slippery slope.
Just how much a social-engineering attitude towards trust as a resource has suffused the discourse on AI can be gathered from neologisms such as “undertrust” and “overtrust” (e.g., US National Science and Technology Council 2019), which imply that trust can be measured and weighed like the ingredients for a cake. This is disingenuous at best, in a discourse that equivocates between notions of trust as an input and as an output of AI systems. In actuality, trust is far more entangled with other socio-economic and governance processes and outcomes (Wirtz et al. 2019) than such neologisms suggest. Ignoring those entanglements risks not only governance shortcomings, but also the normalization of the particular agendas and interests that are served by the implementation of AI systems (Cath 2018).

4.3 Trust is an instrument

The commodification of trust becomes especially salient where trust is treated as having no inherent value in itself. The identification of technologically robust AI with ethically robust AI that we find in significant parts of the discourse is characterized by just such a strong instrumental undercurrent.
In these instances, trust is described as “essential” to “reap the full potential public and economic benefits from new technologies” (Mulgan 2016, p. 1), or to “fulfill the promise and value that AI can bring in sectors such as retail, finance, health care, and more” (Larsson et al. 2019, p. v). “The digital economy,” states the Industrial Strategy of the UK’s Department for Business (United Kingdom 2018), “relies on trust to work effectively” (159). And the strategy, “AI 4 Belgium”, agrees: “After all, public trust is the cornerstone of any AI and data strategy. When it is lacking, innovation is off the table” (AI4Belgium Coalition 2019, p. 8).
Myers and Nejkov (2020) describe this trend uncritically, noting that “ensuring trust has emerged as a precondition to realizing the social, commercial, and public benefits of implementing AI technologies” (p. 1). While this statement is almost certainly true, it is indicative of a worrying trend that turns the public’s trust into an instrument for unlocking other benefits: ensuring the uptake and legitimacy of AI as well as its societal and market potentials. Indeed, this instrumental rationale makes analogies with “Nordic gold” and other commodities fall short. Rather than being a commodity in itself, trust in AI is desirable because it catalyzes and enables the mining effort. It is not the final product, but—to present one last metaphor applied to it—a “lubricant” (Andreasson and Stende 2019, p. 28).

5 Conclusion

Our critical analysis of the discourse on AI and society suggests that there is an increasing trend towards the discursive commodification of trust. We find an overwhelming emphasis on the conditions for developing AI systems and unlocking markets, which conflates the interests and agendas of AI users, experts, and regulators, with those of technology vendors, venture capitalists, and financial backers, whose interests in advancing AI might be diametrically opposed. More critically, this emphasis has the effect of obfuscating the conditions under which trust between individuals and AI systems is developed and earned.
The insistence that governments focus on building public trust in AI systems obscures questions about what might be required to actually build trustworthy systems. When trust is commodified in this discourse, there is no space to ask whether macro- or micro-level approaches are more appropriate, whether trust is justified, or who or what precisely users are asked to trust in. This might be dismissed as mere rhetorical emphasis; we hold, by contrast, that it is of critical importance, because the policy and scholarly discourse on AI and society is actively staking out the policy options for AI governance, and those governance structures will have far-reaching and profound implications for society (Cath 2018; Cussins Newman 2020; Mulderrig et al. 2019). When appeals to build trust fail to acknowledge the intricacies of how trust gets built, they make the notion ever more elusive for both the designers and the users of AI systems.
Countering this discourse’s gravitation toward blind, commodified, and instrumental trust, we would encourage experts, politicians, and stakeholders to focus instead on the work and the investments that remain preconditions for the public’s trust. Though not prominent in the discourse, there is solid research on which to base such an effort. Trustworthy AI may well require programming and design that is technologically robust (Bellamy et al. 2019; Etzioni and Etzioni 2016; Harrison et al. 2019; Kroll et al. 2017; Liao and Muller 2019; Sokol et al. 2020; Veale et al. 2018), processes and policy frameworks that protect citizens in the slightest case of doubt (Balaram et al. 2018; Brundage et al. 2020; Dignum 2019; Janssen and Kuk 2016; Kemper and Kolkman 2019; Kolkman 2020; Lee 2018; Mulgan 2016; Reisman et al. 2018; Vassilakopoulou 2020), or public-sector routines and protocols that allow AI and human service provision to run side by side for the foreseeable future, questioning and learning from each other (Berscheid and Roewer-Despres 2019; Janssen et al. 2020; Katell et al. 2020; Rahwan 2017; Vestby and Vestby 2019; Yeung and Lodge 2019).
These might not be necessary mechanisms for building trust in AI, or the most appropriate or effective. But until the scholarly and policy discourse assessed here becomes actively engaged in an exploration of which mechanisms are the right mechanisms for building trust, the discourse on trust in AI is a shell game, baiting with market value and technological glitz, ignoring the hard questions about why trust matters. With the current discourse, governments and businesses are in no position to approach citizens and customers with a plea for their trust in shiny new systems. As Dignum (2019) writes: “Ensuring ethically aligned AI systems requires more than designing systems whose result can be trusted. It is about the way we design them, why we design them, and who is involved in designing them” (v). We would add that it is also about fostering a discourse that actively acknowledges and engages with how this should be done.

Declarations

Conflict of interest

The authors declare that they have no conflicts of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Literature
Bellamy RKE, Mojsilovic A, Nagar S, Ramamurthy KN, Richards J, Saha D, Sattigeri P, Singh M, Varshney KR, Zhang Y, Dey K, Hind M, Hoffman SC, Houde S, Kannan K, Lohia P, Martino J, Mehta S (2019) AI fairness 360: an extensible toolkit for detecting and mitigating algorithmic bias. IBM J Res Dev. https://doi.org/10.1147/JRD.2019.2942287
Berzoff J (2011) Psychosocial ego development: the theory of Erik Erikson. In: Berzoff J, Melano L, Hertz P (eds) Inside out and outside in: psychodynamic clinical theory and psychopathology in contemporary multicultural contexts, 3rd edn. Rowman & Littlefield, Lanham, pp 97–117
Brundage M, Avin S, Wang J, Belfield H, Krueger G, Hadfield G, Khlaaf H, Yang J, Toner H, Fong R, Maharaj T, Koh PW, Hooker S, Leung J, Trask A, Bluemke E, Lebensold J, O’Keefe C, Koren M, Anderljung M (2020) Toward trustworthy AI development: mechanisms for supporting verifiable claims. http://arxiv.org/abs/2004.07213
Candlin CN, Crichton J (2013) From ontology to methodology: exploring the discursive landscape of trust. In: Candlin CN, Crichton J (eds) Discourses of trust. Palgrave Macmillan, London, pp 1–20
Chakravorti B, Shankar R (2017) Digital planet 2017: how competitiveness and trust in digital economies vary across the world. The Fletcher School, Tufts University, Medford
Cussins Newman J (2020) Decision points in AI governance (CLTC White Paper Series)
Denmark (2019) National strategy for artificial intelligence of Denmark (Issue March)
Dignum V (2019) Responsible artificial intelligence: how to develop and use AI in a responsible way. Springer, Switzerland
Doteveryone (2018) People, power and technology: the 2020 digital attitudes report
Erikson EH (1963) Childhood and society, 2nd edn. Norton, New York
Etzioni A, Etzioni O (2016) Keeping AI legal. Vanderbilt J Entertain Technol Law 19(1):133–146
Eubanks V (2018) Automating inequality: how high-tech tools profile, police, and punish the poor. St. Martin’s Press, New York
European Strategy for Data (2020) A European strategy for data
Fairclough N (1995) Critical discourse analysis: the critical study of language. Longman, London
Foucault M (1972) The archaeology of knowledge & the discourse on language. Pantheon Books, New York
Germany (2018) Artificial intelligence strategy
Giddens A (1990) The consequences of modernity. Polity Press, Cambridge
Grace J (2019) Machine learning technologies and their inherent human rights issues in criminal justice contexts. Sheffield Hallam University, UK, pp 1–19
Henriques J, Hollway W, Urwin C, Venn C, Walkerdine V (1984) Changing the subject: psychology, social regulation and subjectivity. Routledge, London
Katell M, Young M, Dailey D, Herman B, Guetler V, Tam A, Binz C, Raz D, Krafft PM (2020) Toward situated interventions for algorithmic equity: lessons from the field. In: FAT* 2020—Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp 45–55. https://doi.org/10.1145/3351095.3372874
Liao QV, Muller M (2019) Enabling value sensitive AI systems through participatory design fictions. Preprint
Luhmann N (2017) Trust and power. Polity Press, New York
Nordic Council of Ministers (2017) Trust—the Nordic gold
Norway (2020) Nasjonal strategi for kunstig intelligens [National strategy for artificial intelligence]
O’Neil C (2016) Weapons of math destruction: how big data increases inequality and threatens democracy. Crown Random House, New York
Stoddart MCJ (2007) Ideology, hegemony, discourse: a critical review of theories of knowledge and power. Soc Thought Res 28:191–225
Tomašev N, Cornebise J, Hutter F, Mohamed S, Picciariello A, Connelly B, Belgrave DCM, Ezer D, van der Haert FC, Mugisha F, Abila G, Arai H, Almiraat H, Proskurnia J, Snyder K, Otake-Matsuura M, Othman M, Glasmachers T, de Wever W, Clopath C (2020) AI for social good: unlocking the opportunity for positive impact. Nat Commun 11(1):2468. https://doi.org/10.1038/s41467-020-15871-z
Toreini E, Aitken M, Coopamootoo KPL, Elliott K, Zelaya VG, Missier P, Ng M, van Moorsel A (2020) Technologies for trustworthy machine learning: a survey in a socio-technical context. arXiv:2007.08911. http://arxiv.org/abs/2007.08911
Vassilakopoulou P (2020) Sociotechnical approach for accountability by design in AI systems
Villani C (2018) For a meaningful artificial intelligence: towards a French and European strategy
Zuboff S (2019) The age of surveillance capitalism. Profile
Zucker LG (1986) Production of trust: institutional sources of economic structure, 1840–1920. Res Organ Behav 8:53–111
Metadata
Title
The problem with trust: on the discursive commodification of trust in AI
Authors
Steffen Krüger
Christopher Wilson
Publication date
25-02-2022
Publisher
Springer London
Published in
AI & SOCIETY / Issue 4/2023
Print ISSN: 0951-5666
Electronic ISSN: 1435-5655
DOI
https://doi.org/10.1007/s00146-022-01401-6
