
26.09.2016 | Original Paper

Artificial agents among us: Should we recognize them as agents proper?

Author: Migle Laukyte

Published in: Ethics and Information Technology | Issue 1/2017

Abstract

In this paper, I discuss whether in a society where the use of artificial agents is pervasive, these agents should be recognized as having rights like those we accord to group agents. This kind of recognition I understand to be at once social and legal, and I argue that in order for an artificial agent to be so recognized, it will need to meet the same basic conditions in light of which group agents are granted such recognition. I then explore the implications of granting recognition in this manner. The thesis I will be defending is that artificial agents that do meet the conditions of agency in light of which we ascribe rights to group agents should thereby be recognized as having similar rights. The reason for bringing group agents into the picture is that, like artificial agents, they are not self-evidently agents of the sort to which we would naturally ascribe rights, or at least that is what the historical record suggests if we look, for example, at what it took for corporations to gain legal status as group agents entitled to rights and, consequently, as entities subject to responsibilities. This is an example of agency ascribed to a nonhuman agent, and just as a group agent can be described as nonhuman, so can an artificial agent. Therefore, if these two kinds of nonhuman agents can be shown to be sufficiently similar in relevant ways, the agency ascribed to one can also be ascribed to the other—this despite the fact that neither is human, a major impediment when it comes to recognizing an entity as an agent proper, and hence as a bearer of rights.


Footnotes
1
In each of these specifications, a right gives one a normative ability to do or not do something: This can be the ability to demand something from someone (rights as claims), or the freedom to do something that is not prohibited (rights as privileges), or the ability to modify a legal situation (rights as powers) or not be subject to the powers of others (rights as immunities). For a discussion, see Hohfeld 1917 and Jones 1994.
 
2
A caveat before we proceed is that the thermostat example just introduced should not be taken to mean that an artificial device is rational just because it correctly executes the instructions it is designed to execute. Nor should the motivational states we attribute to it be taken to mean that it somehow “wants” or “intends” to do what it does. The example is rather intended to illustrate that we can explain an agent’s actions as if it were rational and intentional, without saying that it is a rational agent driven by actual intentions.
 
3
I should note that the parallel between group agents and artificial agents is not new (see Solum 1992; Singer 2013). List and Pettit (2011) and Pettit (2007) seem to reject that parallel, since they consider the agency of a “bare-bones” artificial agent (a very stripped-down robotic device) in contrast to the full agency of group agents. But as can be appreciated from the way artificial agents were just defined, I understand them to comprise a class much more inclusive than that of robots.
 
4
This is a standard position on moral responsibility: See Himma (2009).
 
5
I should point out, as previously suggested, that while a capacity for normative judgment is an essential condition for ascribing responsibility to an agent, we also have to look at the roles agents play in the environment in which they interact, for this is essential in figuring out the kinds of responsibilities that can be ascribed to them and the consequences that should follow if the agent fails to fulfil those responsibilities. The question of roles is discussed at the end of the “Fourth condition of agency: personhood” section.
 
6
For other criticisms of the fitness to be held responsible, see Tuomela (2011).
 
7
Another example where List and Pettit’s three conditions of responsibility find a counterpart in the law is in the legal concept of force majeure, which excuses a party from responsibility for nonperformance ascribable to events beyond that party’s control.
 
8
This structural difference will be taken up in “Structural difference” section.
 
9
I should note here that this parallel between neurons and individuals, on the one hand, and individuals and groups, on the other, is itself up for debate. It would be rejected on an incompatibilist view such as hard determinism or metaphysical libertarianism. The former would argue that there is no free will in virtue of which an individual or group agent might control its actions—for that control is only mechanistic (Illes 2005, 45)—such that the question of responsibility wouldn’t arise in the first place. The latter, for its part, would grant that responsibility is an issue, but only for human beings and only if they have “a freedom to originate action uncaused by prior events and influences” (ibid.).
 
10
Although it is a fallacy to proceed on the basis of likeness to human beings in ascribing personhood to an agent, there is no denying that humans do react differently in their interaction with a robot when the robot looks human. As the roboticist Daniel Wilson observes (Singer 2009, 405), we unconsciously make judgments based on a robot’s form and “care differently about a humanoid robot versus a dog robot versus a robot that doesn’t look like anything alive.”
 
11
Yet another approach to personhood is the interest-based one offered by Briggs (2012), who takes List and Pettit’s view of personhood to mean that “a person is the sort of thing to which it is appropriate to assign conventional rights” (Briggs 2012, 289) and thus suggests that we look to interests as the basis on which to assign those rights. The idea is that it makes no sense to ascribe rights to something (say, a rock) if that thing “cannot benefit from those rights” and so cannot be said to have an interest in them. This idea that something ought to have rights to the extent that it can benefit from them calls up the competence approach, because implicit in it is an underlying capacity, or ability, to benefit from the rights in question. At the same time, an interest-based approach would be more restrictive in its ascription of rights than the inter-relational approach I will be introducing shortly, for if we take interests as a basis of ascription, we may not be able to contemplate the idea of the environment, for example, as having any interest in protection, and so as a subject of rights.
 
12
Four such approaches are those of Hubbard (2011) (in which the x variable is personhood itself), Rothblatt (2014) (consciousness), Dennett (2013) (intelligence), and Nussbaum (2006, 2011) (rights). What they all have in common is that, in testing for a quality or property x, they do not ask us to imagine what it would be like to enter into the “mind” of the entity we think it might be ascribable to, but only ask us to consider whether this entity is functionally or operationally capable of acting consistently with what it means to have that quality or property.
 
13
The approach “allows for the fact that agency develops over time and shifts the focus to the future appropriate behaviour of complex systems, with moral responsibility being more a matter of rational and socially efficient policy that is largely outcomes-focused” (Galliott 2015, 224).
 
14
Interestingly for our purposes, this very same reasoning was anticipated by Chief Justice John Marshall in the landmark case Trustees of Dartmouth College v. Woodward (1819), where it was applied to the concept of a business corporation: “From the nature of things, the artificial person called a corporation, must be created, before it can be capable of taking any thing. When, therefore, a charter is granted, and it brings the corporation into existence without any act of the natural persons who compose it, and gives such corporation any privileges, franchises, or property, the law deems the corporation to be first brought into existence, and then clothes it with the granted liberties and property” (italics added).
 
15
Another parallel that can be drawn is between a group agent and a multi-agent system (MAS), a system composed of interacting individual agents (computer systems) acting to achieve a common goal (for an introduction to MASs, see Wooldridge 2009). This parallel will not be addressed here because the artificial agents making up an MAS are different from the kinds of agents discussed in this paper.
 
16
As Tuomela (2011) has pointed out, the authors do not address the grounds of supervenience—causal, conceptual, or epistemic—and my own discussion of supervenience suffers from the same defect.
 
17
There are a number of other theories that take this approach: See the table in List and Pettit (2011, 7).
 
18
This is List and Pettit’s way of striking a middle ground between two views of group agency which they term “emergentist” and “eliminativist”: “Where emergentism makes group agents into hyper-realities, eliminativism makes them into non-realities” (List and Pettit 2011, 75). It is not entirely clear, however, how this middle-of-the-road view (epistemological autonomy) can be distinguished from the emergentist view, since List and Pettit use the very same language to describe both: “From the emergentist tradition,” they note, “it went without saying that group agents were agents in their own right, over and above their members” (ibid. 73); compare that with their own approach, on which “we must think of group agents as relatively autonomous entities—agents in their own right” (ibid., 77), thus defending “the idea that group agents can be agents over and above their individual members” (ibid., 78).
 
19
Non-redundant realism is criticized by Sylvan (2012), who argues that group agents can be seen through the lens of a redundant realism.
 
20
Consider in this regard the opinion expressed by the computer scientist and inventor Ray Kurzweil (quoted in Greenemeier 2010): “Machines will follow a path that mirrors the evolution of humans. Ultimately, however, self-aware, self-improving machines will evolve beyond humans’ ability to control or even understand them.”
 
21
Rawls would later be criticized by Habermas (1995, 114) for assimilating rights and liberties to goods—which are more like property, or things you own—but that is a matter that would take us on a long detour, so it cannot be taken up here.
 
22
On the historical context in which that judgment and recognition came to be, see Friedman 2005, 136–37. For a broader discussion of corporations as rights-holders, see Clements (2012).
 
23
For an overview of the roboethics debate see, for instance, Lin et al. (2012).
 
24
The important point here is the emphasis on reasons: As previously mentioned, I am not suggesting that because history or the law evolved as it did in regard to corporations, then we should mimic the same line of development in dealing with artificially intelligent agents. Rather, I am saying that the analogies that group agents (and corporations among them) can be shown to have to artificial agents warrant an investigation aimed at exploring whether the justifications for one development (in the past) are sound and might also justify another development (in the future).
 
25
For a critique of Nussbaum’s cosmopolitanism, see Ayaz Naseem and Hyslop-Margison (2006).
 
References
Ayaz Naseem, M., & Hyslop-Margison, E. J. (2006). Nussbaum’s concept of cosmopolitanism: Practical possibility or academic delusion? Paideusis, 15(2), 51–60.
Briggs, R. (2012). The normative standing of group agents. Episteme, 9(3), 283–291.
Clements, J. D. (2012). Corporations are not people: Why they have more rights than you do and what you can do about it. San Francisco, CA: Berrett-Koehler Publishers.
Dennett, D. C. (2009). Intentional systems theory. In B. McLaughlin, A. Beckermann, & S. Walter (Eds.), The Oxford handbook of philosophy of mind (pp. 339–350). Oxford: Oxford University Press.
Dennett, D. C. (2013). Intuition pumps and other tools for thinking. New York: W. W. Norton & Company.
Dietrich, E. (2011). Homo sapiens 2.0: Building the better robots of our future. In M. Anderson & S. Anderson (Eds.), Machine ethics (pp. 531–541). Cambridge: Cambridge University Press.
Emerson, R., & Hardwicke, J. W. (1997). Business law. Hauppauge, NY: Barron’s Educational Series.
Fischer, M. (2007). A pragmatist cosmopolitan moment: Reconfiguring Nussbaum’s cosmopolitan concentric circles. The Journal of Speculative Philosophy (new series), 21(3), 151–165.
Friedman, L. M. (2005). A history of American law. New York, NY: Touchstone.
Galliott, J. (2015). Military robots: Mapping the moral landscape. Farnham: Ashgate.
Greene, J. D., et al. (2009). Pushing moral buttons: The interaction between personal force and intention in moral judgment. Cognition, 111(3), 364–371.
Greenemeier, L. (Ed.). (2010). 12 events that will change the world. Scientific American, 302(6), 36–50.
Habermas, J. (1995). Reconciliation through the public use of reason: Remarks on John Rawls’s political liberalism. The Journal of Philosophy, 92(3), 109–131.
Hartmann, T. (2010). Unequal protection: How corporations became “people”—and how you can fight back. San Francisco, CA: Berrett-Koehler Publishers.
Himma, K. (2009). Artificial agency, consciousness and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Ethics and Information Technology, 11(1), 19–29.
Hubbard, P. E. (2011). “Do androids dream?”: Personhood and intelligent artifacts. Temple Law Review, 83, 404–474.
Hughes, J. (2004). Citizen cyborg. Cambridge, MA: Westview.
Illes, J. (2005). Neuroethics: Defining the issues in theory, practice and policy. New York: Oxford University Press.
Krishnan, A. (2009). Killer robots: Legality and ethicality of autonomous weapons. Farnham: Ashgate.
Laukyte, M. (2012). Artificial and autonomous: A person? In G. Dodig-Crnkovic, A. Rotolo, et al. (Eds.), Social computing, social cognition, social networks and multiagent systems social turn (SNAMAS 2012) (pp. 73–78). Birmingham: AISB.
Lin, P., Abney, K., & Bekey, G. A. (Eds.). (2012). Robot ethics: The ethical and social implications of robotics. Cambridge, MA: The MIT Press.
List, C., & Pettit, P. (2008). Group agency and supervenience. In J. Hohwy & J. Kallestrup (Eds.), Being reduced: New essays on reduction, explanation, and causation (pp. 75–92). New York: Oxford University Press.
List, C., & Pettit, P. (2011). Group agency: The possibility, design, and status of corporate agents. Oxford: Oxford University Press.
Naess, A. (2010). The ecology of wisdom: Writings by Arne Naess. Berkeley: Counterpoint.
Nussbaum, M. (1994). The therapy of desire: Theory and practice in Hellenistic ethics. Princeton, NJ: Princeton University Press.
Nussbaum, M. (1997). Cultivating humanity: A classical defense of reform in liberal education. Cambridge, MA: Harvard University Press.
Nussbaum, M. C. (2006). Frontiers of justice: Disability, nationality, species membership. Cambridge, MA: Harvard University Press.
Nussbaum, M. C. (2011). Creating capabilities: The human development approach. Cambridge and London: The Belknap Press of Harvard University Press.
Pettit, P. (2001). A theory of freedom: From the psychology to the politics of agency. Cambridge: Polity Press.
Pettit, P. (2007). Responsibility incorporated. Ethics, 117, 171–201.
Rawls, J. (1971). A theory of justice. Cambridge, MA: The Belknap Press of Harvard University Press.
Rothblatt, M. (2014). Virtually human: The promise—and the peril—of digital immortality. New York: St. Martin’s Press.
Sandbach, F. H. (1989). The Stoics (2nd ed.). Cambridge: Hackett.
Singer, P. W. (2009). Wired for war: The robotics revolution and conflict in the 21st century. London: Penguin Books.
Singer, A. E. (2013). Corporate moral agency and artificial intelligence. International Journal of Social and Organizational Dynamics in IT, 3(1), 1–13.
Solum, L. B. (1992). Legal personhood for artificial intelligences. North Carolina Law Review, 70, 1231–1287.
Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.
Sylvan, K. L. (2012). How to become a redundant realist? Episteme, 9(3), 271–282.
Westra, L. (2013). The supranational corporation: Beyond the multinationals. Leiden: Brill.
Wooldridge, M. (2009). An introduction to multiagent systems (2nd ed.). Chichester: Wiley.
Metadata
Title
Artificial agents among us: Should we recognize them as agents proper?
Author
Migle Laukyte
Publication date
26.09.2016
Publisher
Springer Netherlands
Published in
Ethics and Information Technology / Issue 1/2017
Print ISSN: 1388-1957
Electronic ISSN: 1572-8439
DOI
https://doi.org/10.1007/s10676-016-9411-3
