Published in: AI & SOCIETY 4/2023

10-01-2022 | Original Article

Varieties of transparency: exploring agency within AI systems

Authors: Gloria Andrada, Robert W. Clowes, Paul R. Smart

Abstract

AI systems play an increasingly important role in shaping and regulating the lives of millions of human beings across the world. Calls for greater transparency from such systems have been widespread. However, there is considerable ambiguity concerning what “transparency” actually means, and therefore what greater transparency might entail. While some debates assume that transparency requires seeing through the artefact or device, widespread calls for transparency imply seeing into different aspects of AI systems. These two notions stand in apparent tension, and they feature in two lively but largely disconnected debates. In this paper, we analyse what these calls for transparency entail, and in so doing clarify the sorts of transparency that we should want from AI systems. We do so by offering a taxonomy that classifies different notions of transparency. After a careful exploration of these varieties of transparency, we show how the taxonomy can help us to navigate various domains of human–technology interaction and to discuss more fruitfully the relationship between technological transparency and human agency. We conclude by arguing that all of these notions of transparency should be taken into account when designing more ethically adequate AI systems.


Footnotes
1. See Ferreira et al. (2021).

2. Wang (2008).

3. See Bernard Marr, The Revolutionary Way of Using Artificial Intelligence in Hedge Funds: https://www.forbes.com/sites/bernardmarr/2019/02/15/the-revolutionary-way-of-using-artificial-intelligence-in-hedge-funds-the-case-of-aidyia/#17eb640157ca (last accessed: 13 June 2021). Also, in extreme cases, such systems may be used for the algorithmic regulation of society (Cristianini and Scantamburlo 2020).

6. See Lupton (2016).

7. See Floridi et al. (2018).

8. For important discussions of these themes, however, see Coeckelbergh (2020), de Fine Licht and de Fine Licht (2020) and Walmsley (2020).

9. For an overview, see Müller (2020). See also de Fine Licht and de Fine Licht (2020).

10. In their influential work, Turilli and Floridi write: “In the disciplines of computer science and IT studies, however, ‘transparency’ is more likely to refer to a condition of information invisibility, such as when an application or computational process is said to be transparent to the user” (Turilli and Floridi 2009, p. 105).

11. More on this in Sect. 3.

12. Wheeler (2019) identifies the problem in the following quote: “Sometimes, technology is described as being transparent when a specified class of users is able to understand precisely how it functions. This is a perfectly reasonable notion of transparency, but note that a device which is transparent in this sense may be broken or malfunctioning, and so will not be transparent in the phenomenological sense, and that a device which is phenomenologically transparent-in-use may be impenetrable in its inner workings, and so will not be transparent in the ‘open to understanding’ sense. Therefore, there is a double dissociation between the two concepts” (Wheeler 2019, p. 859).

13. Walmsley (2020) classifies different notions of transparency, distinguishing between “outward” transparency, which targets various epistemic and ethical features of AI systems, and functional transparency. We do not have sufficient space to address the difference between Walmsley’s taxonomy and ours. However, we wish to highlight that these varieties of transparency correspond to different forms of what we call reflective transparency. We thank an anonymous reviewer for bringing this to our attention.

14. See de Fine Licht and de Fine Licht (2020).

15. This is sometimes referred to as “opening the black box”. See Zednik (2021).

17. See, for instance, the report that Amnesty International and Afrewatch have presented on child labour and cobalt mines: https://www.amnesty.org/en/latest/news/2016/01/child-labour-behind-smart-phone-and-electric-car-batteries/ (last accessed: 2 June 2021).

18. See Smart et al. (2017), pp. 77–78.

19. Clowes (2020).

20. Classic examples of this sort of transparency in the literature include Heidegger (1927) and Merleau-Ponty (1945). For more contemporary approaches, see Clark (2008), Heersmink (2015), Wheeler (2019) and Andrada (2020).

21. It is usually argued that transparency entails a lack of conscious thought or reflection during the artefact’s proficient use. See Heersmink (2015), Andrada (2020).

22. This is empirically supported by experiments such as those performed by Maravita and Iriki (2004).

23. Transparency has played an important role in the hypothesis of the extended mind, where some form of transparency-in-use is an indicator of mental or cognitive extension (Clark and Chalmers 1998; Clark 2008; Andrada 2020). Here we do not want to enter into the debate concerning the plausibility or otherwise of the extended mind thesis. Nevertheless, as will become clear, we do consider a certain degree of transparency-in-use to be central to successful human–technology interaction.

24. Heersmink (2015) refers to representational transparency as “informational transparency”. To avoid confusion with our notion of information transparency, we have chosen to speak of “representational transparency”.

25. Note that this distinction holds for informational devices, but not for all cases of transparency-in-use.

26. See, for instance, https://calmtech.com/ (last accessed: 2 August 2021).

27. See Wheeler (2019) for an account of the risks that transparency entails for some of our cognitive processes. See also Andrada (2021) for more on the connection between transparency and an agent’s epistemic standing.

28. See Clowes (2020).

29. See Carter (2020).

30. Thanks to an anonymous reviewer for bringing this to our attention.

31. This relates to the so-called control problem (Bostrom 2014; Russell 2019), which can be viewed as a problem of (collective) human agency gaining control over (in this case) a super-intelligent AI, in order to avoid an existential threat.

32. Thanks to an anonymous reviewer for encouraging us to develop this point further.

33. The link between transparency-in-use and agency also highlights the importance of accessible and inclusive technologies. See Andrada (2020) for more on the relationship between phenomenological transparency, technologies and diverse embodiments.

34. We wish to warn the reader that we are not saying that this distinction is correct. In fact, there might be good reasons to think that, even if we can make such a distinction for certain theoretical purposes, the relationship between such forms of agency is much more dynamic and intertwined. Nevertheless, the distinction may help to clarify our proposed analysis.

35. It is important to point out that the situation would be different if we wanted to determine the correct degree and type of transparency required for enhancing trust and trustworthiness in AI systems. From a practical perspective, theorists such as O’Neill (2020) have questioned the extent to which transparency is effective in supporting assessments of trustworthiness, and there are additional reasons to doubt the assumed relationship between transparency and trust; that is, the idea that (reflective) transparency always plays a positive role in cultivating trust or supporting assessments of system trustworthiness (see also Nguyen 2021). On the other hand, users trust AI systems in order to engage in various actions. They do not want to constantly check on well-functioning equipment, because doing so impedes their ability to act with it. That is why some degree of transparency-in-use seems necessary for trustworthiness. The crucial point is that the type and degree of transparency required for promoting trust and trustworthiness in AI systems might turn out to be different from the level of transparency required for promoting agency. We hope to come back to this issue in future work, but this is already enough to show how applying our taxonomy to different normative frameworks can help to illuminate different dimensions of human–technology interactions and AI ethics.

36. See Clowes (2019a, b) for examples of how the use of Fitbit and personal tracking systems is often a way of practising agency.
 
Literature
Andrada G (2020) Transparency and the phenomenology of extended cognition. LÍMITE Interdiscipl J Philos Psychol 15(20):1–17
Bostrom N (2014) Superintelligence: paths, dangers, strategies. Oxford University Press, Oxford
Bratman ME (2000) Reflection, planning, and temporally extended agency. Philos Rev 109(1):35–61
Bucher T (2012) Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media Soc 14(7):1164–1180
Carter JA (2020) Intellectual autonomy, epistemic dependence and cognitive enhancement. Synthese 197(7):2937–2961
Clark A (2008) Supersizing the mind: embodiment, action, and cognitive extension. Oxford University Press, New York
Clowes RW (2019a) Immaterial engagement: human agency and the cognitive ecology of the Internet. Phenomenol Cogn Sci 18(1):259–279
Clowes RW (2019b) Screen reading and the creation of new cognitive ecologies. AI Soc 34:705–720
Clowes RW (2020) The internet extended person: exoself or doppelganger? LÍMITE Interdiscipl J Philos Psychol 15(22):1–23
Cristianini N, Scantamburlo T (2020) On social machines for algorithmic regulation. AI Soc 35:645–662
de Fine Licht K, de Fine Licht J (2020) Artificial intelligence, transparency, and public decision-making. AI Soc 35(4):917–926
Diakopoulos N (2020) Transparency. In: Dubber MD, Pasquale F, Das S (eds) The Oxford handbook of ethics of AI. Oxford University Press, New York, pp 197–213
Dreyfus SE, Dreyfus HL (1980) A five-stage model of the mental activities involved in directed skill acquisition. Operations Research Center, University of California, Berkeley
Ferreira FGDC, Gandomi AH, Cardoso RTN (2021) Artificial intelligence applied to stock market trading: a review. IEEE Access 9:30898–30917
Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Luetge C, Madelin R, Pagallo U, Rossi F (2018) AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Mind Mach 28(4):689–707
Gallagher S (2005) How the body shapes the mind. Oxford University Press, Oxford
Gillett AJ, Heersmink R (2019) How navigation systems transform epistemic virtues: knowledge, issues and solutions. Cogn Syst Res 56:36–49
Heersmink R (2013) A taxonomy of cognitive artifacts: function, information, and categories. Rev Philos Psychol 4(3):465–481
Heersmink R (2015) Dimensions of integration in embedded and extended cognitive systems. Phenomenol Cogn Sci 14(3):577–598
Heersmink R, Sutton J (2020) Cognition and the web: extended, transactive, or scaffolded? Erkenntnis 85:139–164
Heidegger M (1927) Being and time. Basil Blackwell, Oxford
Lupton D (2016) Digital health technologies and digital data: new ways of monitoring, measuring and commodifying human bodies. In: Olleros FX, Zhegu M (eds) Research handbook on digital transformations. Edward Elgar Publishing, Cheltenham
Maravita A, Iriki A (2004) Tools for the body (schema). Trends Cogn Sci 8(2):79–86
Merleau-Ponty M (1945) Phenomenology of perception. Routledge, London
O’Neill O (2020) Questioning trust. In: Simon J (ed) The Routledge handbook of trust and philosophy. Routledge, New York, pp 17–27
Russell SJ (2019) Human compatible: AI and the problem of control. Viking Press, New York
Smart PR, Heersmink R, Clowes RW (2017) The cognitive ecology of the internet. In: Cowley SJ, Vallée-Tourangeau F (eds) Cognition beyond the brain: computation, interactivity and human artifice, 2nd edn. Springer, Cham, pp 251–282
Turilli M, Floridi L (2009) The ethics of information transparency. Ethics Inf Technol 11(2):105–112
Wang F-Y (2008) Toward a revolution in transportation operations: AI for complex systems. IEEE Intell Syst 23(6):8–13
Weller A (2019) Transparency: motivations and challenges. In: Samek W, Montavon G, Vedaldi A, Hansen LK, Müller K-R (eds) Explainable AI: interpreting, explaining and visualizing deep learning. Springer, Cham, pp 23–40
Wheeler M (2019) The reappearing tool: transparency, smart technology, and the extended mind. AI Soc 34(4):857–866
Zednik C (2021) Solving the black box problem: a normative framework for explainable artificial intelligence. Philos Technol 34:265–288
Metadata
Title: Varieties of transparency: exploring agency within AI systems
Authors: Gloria Andrada, Robert W. Clowes, Paul R. Smart
Publication date: 10-01-2022
Publisher: Springer London
Published in: AI & SOCIETY, Issue 4/2023
Print ISSN: 0951-5666
Electronic ISSN: 1435-5655
DOI: https://doi.org/10.1007/s00146-021-01326-6
