10-05-2022 | OPEN FORUM

Machine agency and representation

Authors: Beba Cibralic, James Mattingly

Published in: AI & SOCIETY | Issue 1/2024


Abstract

Theories of action tend to require agents to have mental representations. A common trope in discussions of artificial intelligence (AI) is that AI systems have no such representations, and so cannot be agents. Properly understood, there may be something to the requirement, but the trope is badly misguided. Here we provide an account of representation for AI that is sufficient to underwrite attributions to these systems of ownership, action, and responsibility. Existing accounts of mental representation tend to be too demanding and unparsimonious. We offer instead a minimalist account of representation that ascribes only those features necessary for explaining action, trimming the “extra” features in existing accounts (e.g., representation as a “mental” phenomenon). Our account makes ‘representation’ whatever it is that, for example, the thermostat is doing with the thermometer: the thermostat is disposed to act whenever the thermometer’s reading falls outside a given range of parameters. Our account allows us to offer a new perspective on the ‘responsibility gap’, a problem raised by the actions of sophisticated machines: because nobody has enough control over the machine’s actions to be able to assume responsibility, conventional approaches to responsibility ascription are inappropriate. We argue that there is a distinction between finding responsible and holding responsible and that, in order to resolve the responsibility gap, we must first clarify the conceptual terrain concerning which agent is in fact responsible.
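To make the thermostat example concrete, here is a minimal sketch in Python. It is our illustration rather than anything given in the paper, and the names (Thermometer, Thermostat, low, high) are hypothetical: the only ‘representation’ is the controller’s record of the sensor reading, and the disposition to act is a comparison of that reading against a target range.

class Thermometer:
    """Stands in for the sensor; it simply holds a reading."""
    def __init__(self, reading_celsius: float):
        self.reading_celsius = reading_celsius


class Thermostat:
    """Disposed to act whenever the thermometer's reading is outside [low, high]."""
    def __init__(self, low: float, high: float):
        self.low, self.high = low, high
        self.heating_on = False

    def step(self, thermometer: Thermometer) -> str:
        temp = thermometer.reading_celsius  # the minimal 'representation'
        if temp < self.low:
            self.heating_on = True
            return "heating on"
        if temp > self.high:
            self.heating_on = False
            return "heating off"
        return "no action"


stat = Thermostat(low=18.0, high=22.0)
for reading in (15.0, 20.0, 25.0):
    print(reading, "->", stat.step(Thermometer(reading)))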


Footnotes
1
Machine learning is a subfield of artificial intelligence involving the development and evaluation of algorithms that enable a computer to extract (or learn) functions from a dataset (sets of examples). Deep learning is a type of machine learning that focuses on deep neural network models that are capable of making accurate data-driven decisions. This type of machine learning is particularly suited to contexts where the data is complex and where there are large datasets available. Although most of the examples referred to will be specifically of deep learning systems, we use the general term ‘machine learning’. For more, see Kelleher (2019).
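As a concrete illustration of ‘extracting a function from a dataset of examples’, the following minimal sketch (ours, not drawn from Kelleher 2019) fits a linear function to noisy (x, y) pairs by ordinary least squares; deep learning replaces the linear form with a deep neural network, but the learn-a-function-from-examples structure is the same.

import numpy as np

# Synthetic dataset of examples: the unknown target is y = 3x + 0.5 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=200)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=200)

# 'Learn' a function f(x) = a*x + b from the examples via least squares.
A = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"learned f(x) = {a:.2f}*x + {b:.2f}")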
 
2
For a discussion of mental states in the context of algorithmic fairness, see Binns (2018). As Binns explains, many philosophers, like Richard Arneson, Thomas Scanlon, and Annabelle Lever, would find it difficult to call a machine “discriminatory” per se because their accounts of what makes discrimination wrong place emphasis on the intentions, beliefs, and values of the decision-maker. Thus “a decision-maker with no such intent, who nonetheless accidentally or unwittingly created such disparities, would arguably not be guilty of wrongful discrimination” (3).
 
3
Cf Danto (1979) “Causality, representations, and the explanation of actions,” Tulane Studies in Philosophy, 28: 1–19.
 
4
It’s not our purpose to explain these behaviors using our minimalist approach. We note that scientists trying to explain such things do so without appealing to humanlike mental representations in the case of octopodes, or even mental states at all in the case of termites, for example. See, for example, Ocko et al. (2019).
 
5
We take for granted that action requires representation but recognize that there may be a way of telling this same story without using the term ‘representation’. We think there are merits to using ‘representation’ in the sense we have just described; nevertheless, there may be views that do not rely on any notion of representation and are still compatible with ours.
 
6
See for example, Callender and Cohen (2006).
 
7
Our favorite textbook for teaching these issues, Computability and Logic, proceeds in just this way.
 
8
See for example, The Behavioural and Brain Sciences (1980) 3, 417–457 for the beginning of the argument and its critiques. The terrain is broadly similar to what it was then, though more deeply entrenched on either side. We hope we’re mapping a way out of the conflict.
 
9
Some scholars conceive of the responsibility gap as a problem that will impact us in the future. That is, they frame the responsibility gap as the kind of problem that will occur once we introduce certain kinds of autonomous machines that do not currently exist. On our view, we are already facing conceptual questions about how to attribute responsibility. This problem will likely worsen over time as more sophisticated learning machines are integrated into various social systems. It is worth recognizing, however, that the responsibility gap is not merely a problem that arises from speculation about future machines; it faces us now.
 
10
For example, Sparrow (2007), writing on autonomous weapons systems, poses an argument similar to that of Matthias and ultimately argues that because we cannot adequately attribute responsibility, we ought to cease developing these weapons systems.
 
11
Scholars have carved up the literature in other ways. Recently, Tigard (2020) distinguished between techno-optimists and techno-pessimists. The techno-optimists believe that the responsibility gap can be bridged; even if they disagree on how responsibility can or should be found, they typically endorse the view that responsibility can be located somewhere and prefer to proceed with the development of AI. The techno-pessimists reject this view, believing that the use of AI leaves us without anyone to hold to account for harms done and that we should scale back production. Note that Tigard characterizes these camps using both descriptive and normative claims. It is plausible, however, to commit to one of the descriptive claims about the responsibility gap (yes, it can be bridged, or no, it cannot be bridged) without committing to the normative claim (yes, let’s keep developing AI, or no, let’s halt development).
 
12
Moreover, there is good reason to think that we need a theory here, because we have some evidence that people are biased and inconsistent in how they apportion responsibility and blame. Madeleine Clare Elish refers to the ‘moral crumple zone’, where humans in human–robot interactions are perceived as blameworthy when problems occur, even if the human operator is not completely at fault. A significant problem is that our perception of fault in these situations often diverges from where the fault actually lies. For more, see Elish (2019).
 
13
Although it is true that we have heard machines praised and blamed by their delighted or frustrated users.
 
14
There is an interesting but distinct issue regarding the role that consciousness and attention play in holding someone responsible. A woman who acts in a particular way while sleepwalking may not be held responsible for her actions in the same manner as she would be if she had acted in that way while awake. See Bello and Bridewell (2017).
 
Literature
Anscombe GEM (1957) Intention. Blackwell, Oxford
Arkin R (2010) The case for ethical autonomy in unmanned systems. J Mil Ethics 9(4):332–341
Asaro PM (2007) Robots and responsibility from a legal perspective. In: Proceedings of the IEEE
Bello P, Bridewell W (2017) There is no agency without attention. AI Mag 38(4):27–34
Binns R (2018) Fairness in machine learning: lessons from political philosophy. Proc Mach Learn Res 81:149–159
Callender C, Cohen J (2006) There is no special problem about scientific representation. THEORIA Int J Theory Hist Found Sci 21(1):67–85
Danto AC (1979) Causality, representations, and the explanation of actions. Tulane Stud Philos 28:1–19
Darling K (2021) The New Breed. Henry Holt and Company, New York
Davidson D (1963) Actions, reasons, and causes. J Philos (American Philosophical Association, Eastern Division, Sixtieth Annual Meeting) 60(23):685–700
Dennett D (1987) The intentional stance. Bradford, Boston
Dretske F (1991) Explaining behavior: reasons in a world of causes. MIT Press, Boston
Elish MC (2019) Moral crumple zones: cautionary tales in human-robot interaction. Engag Sci Technol Soc 5:40–60
Frankfurt HG (1978) The problem of action. Am Philos Q 15(2):157–162
Hanson FA (2009) Beyond the skin bag: on the moral responsibility of extended agencies. Ethics Inf Technol 11(1):91–99
Hellström T (2013) On the moral responsibility of military robots. Ethics Inf Technol 15(12):99–107
Johnson DG (2014) Technology with no human responsibility? J Bus Ethics 127(4):707–715
Marino D, Tamburrini G (2006) Learning robots and human responsibility. Int Rev Inf Ethics 6(12):46–51
Matthias A (2004) The responsibility gap: ascribing responsibility for the actions of learning automata. Ethics Inf Technol 6(3):175–183
Nagenborg M, Capurro R, Weber J, Pingel C (2008) Ethical regulations on robotics in Europe. AI Soc 22(3):349–366
Rahwan I (2018) Society-in-the-loop: programming the algorithmic contract. Ethics Inf Technol 20(1):5–14
Santoro M, Marino D, Tamburrini G (2008) Learning robots interacting with humans: from epistemic risk to responsibility. AI Soc 22(3):301–314
Tigard DW (2020) There is no techno-responsibility gap. Philos Technol 34:589–607
Metadata
Title
Machine agency and representation
Authors
Beba Cibralic
James Mattingly
Publication date
10-05-2022
Publisher
Springer London
Published in
AI & SOCIETY / Issue 1/2024
Print ISSN: 0951-5666
Electronic ISSN: 1435-5655
DOI
https://doi.org/10.1007/s00146-022-01446-7
