
AI Systems Under Criminal Law: a Legal Analysis and a Regulatory Perspective

  • Research Article
  • Published in: Philosophy & Technology

Abstract

Criminal liability for acts committed by AI systems has recently become a hot legal topic. This paper makes three contributions. The first is an analysis of the extent to which an AI system can satisfy the requirements for criminal liability: accomplishing an actus reus, having the corresponding mens rea, and possessing the cognitive capacities needed for responsibility. The second is a discussion of criminal activity accomplished by an AI entity, with reference to a recent case involving an online bot, the Random Darknet Shopper. This discussion provides the context for an analysis of the commonalities and differences between criminal activities by humans and by artificial systems. The third concerns the evaluation of different ways of addressing criminal activities by AI systems from a regulatory perspective.


Notes

  1. P8_TA(2017)0051, European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics.

  2. On 25 April 2018, the EU Commission set up three different groups of experts on (i) the ethics of AI; (ii) whether and to what extent to amend the directive on liability for defective products; and (iii) liability and new technologies (https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence). See also the Commission’s document Artificial intelligence: Commission outlines a European approach to boost investment and set ethical guidelines, IP/18/3362. For an extensive literature analysis of the foreseeable threats of AI crimes, see King et al. (2018).

  3. For instance, potent drugs, manipulation of the brain, brain lesions, neurological disorders, phobias, drug addiction, coercive threats.

  4. As Bentham observed, law’s proper role is “to address the wills of citizens and thus to guide their actions through their understanding” (Postema 2001, 494).

  5. From Boethius’s idea of a person as an “individual substance of rational nature” to Kant’s view of personality as the “freedom of a rational being under moral law”; see Brozek (2017).

  6. The issue of whether AI systems can bear legal entitlements, i.e. rights and powers, has been addressed relative to civil law (see Pagallo 2013, 102) and relative to constitutional law (see Solum 1991, 1255).

  7. Most European drug laws penalize many acts involving hard drugs: illegal cultivation, production, manufacture, extraction, preparation, acquisition, and possession, offering, offering for sale, distribution, purchase, sale, delivery on any terms whatsoever, brokerage, dispatch, dispatch for transit, transport, importation and exportation of illegal drugs.

  8. On the notion of criminal act, and in particular willed and unwilled acts in criminal law, see for example Murphy (1979), Hart (1968), Austin and Austin (2000).

  9. This theory dates back to nineteenth-century authors such as O. W. Holmes and J. Austin. In particular, Holmes’s view is that “An act is always a voluntary muscular contraction, and nothing else. The chain of physical sequences which it sets in motion or directs to the plaintiff’s harm is no part of it, and very generally a long train of such sequences intervenes”. According to this author, “An act […] imports intention […] A spasm is not an act. The contraction of the muscles must be willed” (Holmes 2009, 63). Similarly, for Austin, “Most of the names which seem to be names of acts, are names of acts coupled with certain of their circumstances. For example: If I kill you with a gun or pistol, I shoot you. And the long train of incidents which are denoted by that brief expression, are considered (or spoken of) as if they constituted an act, perpetrated by me. In truth, the only parts of the train which are my act or acts, are the muscular motions by which I raise the weapon, point it at your head or body, and pull the trigger” (Austin 1875, 202). Generally, see also Ormerod et al. (2011), Herring (2014), and Duff (1990, 96–99).

  10. According to Holmes, to understand the working of the law we have to consider the psychology of the bad man, namely, the individual who complies with the law only in order to avoid sanctions, and only to the extent that the disutility of the sanctions outweighs the benefit to be obtained through the violation (see Holmes 1897, 459).

  11. For some examples of the harm that could be caused by an AI system that misunderstands a moral imperative, see Bostrom (2014). For instance, a “perverse instantiation” of the utilitarian imperative of making people as happy as possible could be implemented by “implanting electrodes into the pleasure centers of our brains” (158).

  12. See !MEDIENGRUPPE BITNIK website (https://motherboard.vice.com/en_us/article/mgbwg4/in-europe-robots-can-legally-buy-drugs-online-for-art), last accessed 22 May 2018.

  13. For an overview of variations in state statutes for strict liability for dog bites, see generally Miller (1987), Wisch (2012), and Walden (2017). In extreme cases these issues fall within the scope of criminal law, whenever dog owners violate legal restrictions on keeping dangerous dogs or the owner’s failure to control the animal is reckless or criminally negligent (e.g. under the Dangerous Dog Laws).


  14. Bayern, S., et al. (2017). Company law and autonomous systems: A blueprint for lawyers, entrepreneurs, and regulators. Hastings Sci. & Tech. L.J., 9, 135; Pagallo (2013), The laws of robots, 103.

References

  • Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, 12, 251.

  • Asaro, P. M. (2016). The liability problem for autonomous artificial agents. Ethical and moral considerations in non-human agents. AAAI Spring Symposium Series.

  • Ashworth, A., & Horder, J. (2013). Principles of criminal law. Oxford University Press.

  • Austin, J. (1875). Lectures on jurisprudence: Or the philosophy of positive law. J. Murray.

  • Austin, J. and Austin, S. (2000). The province of jurisprudence determined. J. Murray.

  • Bengio, Y., et al. (2003). A neural probabilistic language model. Journal of Machine Learning Research, 3, 1137.

  • Bhuta, N., Beck, S., Geiss, R., Kress, C., & Liu, H. Y. (2015). Autonomous weapons systems: Law, ethics, policy. Cambridge University Press.

  • Bird, K. R. (2006). Natural and probable consequences doctrine: ‘Your acts are my acts’. W. St. U. L. Rev., 34, 43.


  • Boella, G. and Van Der Torre, L. (2007). A game-theoretic approach to normative multi-agent systems. IEEE Transactions on Systems, Man, and Cybernetics, 68–79.

  • Bostrom, N. (2014). Superintelligence. Oxford University Press.

  • Bratman, M. (1987). Intention, plans, and practical reason. Harvard University Press.

  • Bronitt, S., and McSherry, B. (2017). Principles of criminal law (4th ed.). Thomson Reuters.

  • Brozek, B. (2017). The troublesome ‘person’. In Kurki, V. A. and Pietrzykowski, T., editors, Legal personhood: Animals, artificial intelligence and the unborn. Springer.

  • Calverley, D. J. (2008). Imagining a non-biological machine as a legal person. AI & SOCIETY, 22, 523.

  • Castelfranchi, C., Dignum, F., Jonker, C. M., and Treur, J. (1999). Deliberative normative agents: Principles and architecture, in International Workshop on Agent Theories, Architectures, and Languages. Springer.

  • Chopra, S., & White, L. F. (2011). A legal theory for autonomous artificial agents. University of Michigan Press.

  • Doshi-Velez, F., Kortz, M., Budish, R., Bavitz, C., Gershman, S., O'Brien, D., Shieber, S., Waldo, J., Weinberger, D., Wood, A. (2017). Accountability of AI under the law: The role of explanation. Preprint available at arXiv:1711.01134.

  • Duff, R. A. (1990). Intention, agency and criminal liability: Philosophy of action and the criminal law. Blackwell.

  • Duff, R. A. (2007). Answering for crime: Responsibility and liability in the criminal law. Bloomsbury Publishing.

  • Endsley, M. R. and Garland, D. (2000). Situation awareness: A critical review. In Situation awareness analysis and measurement. CRC Press.

  • Ferzan, K. K. (2000). Opaque recklessness. J. Crim. L. & Criminology, 91, 597.

  • Fischer, J. M., & Ravizza, M. (2000). Responsibility and control. Cambridge University Press.

  • Floridi, L. (2016). The method of abstraction. In The Routledge handbook of philosophy of information. Routledge.

  • Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14, 349.

  • Freitas, P. M., Andrade, F., and Novais, P. (2014). Criminal liability of autonomous agents: From the unthinkable to the plausible. In AI Approaches to the Complexity of Legal Systems. Springer.

  • Ghahramani, Z. (2015). Probabilistic machine learning and artificial intelligence. Nature, 521, 452.

  • Gillies, P. (1980). The law of criminal complicity. Law Book Company.

  • Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., and Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys (CSUR).

  • Hallevy, G. (2010). The criminal liability of artificial intelligence entities—From science fiction to legal social control. Akron Intell. Prop. J., 4, 171.

  • Hallevy, G. (2011). Unmanned vehicles: Subordination to criminal law under the modern concept of criminal liability. J. L. Inf. & Sci., 21, 200.

  • Hallevy, G. (2012). The matrix of derivative criminal liability. Springer Science & Business Media.

  • Hallevy, G. (2013). When robots kill: Artificial intelligence under criminal law. UPNE.

  • Hart, H. L. A. (1968). Punishment and responsibility: Essays in the philosophy of law. Clarendon Press.

  • Herring, J. (2014). Criminal law: Text, cases, and materials. Oxford University Press.

  • Heyman, M. G. (2010). The natural and probable consequences doctrine: A case study in failed law reform. Berkeley J. Crim. L., 15, 388.

  • Hinton, G., Deng, L., Yu, D., Dahl, G., Mohamed, A., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., and Kingsbury, B. (2012). Deep neural networks for acoustic modelling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29, 82.

  • Hollander, C. D., & Wu, A. S. (2011). The current state of normative agent-based systems. Journal of Artificial Societies and Social Simulation, 14, 6.

  • Holmes, O. W. (2009). The common law. Harvard University Press.

  • Holmes, O. W. (1897). The path of the law. Harvard Law Review, 10, 457.

  • Hsu, F.-H. (2002). Behind deep blue: Building the computer that defeated the world chess champion. Princeton University Press.

  • Jennings, N. R. (1993). Commitments and conventions: The foundation of coordination in multi-agent systems. Knowledge Engineering Review, 8, 223.

  • Kasperkevic, J. (2015). Swiss police release robot that bought ecstasy online, The Guardian, (https://www.theguardian.com/world/2015/apr/22/swiss-police-release-robot-random-darknet-shopper-ecstasy-deep-web) last accessed 30 March 2018.

  • Kelley, R., Schaerer, E., Gomez, M., & Nicolescu, M. (2010). Liability in robotics: An international perspective on robots as animals. Advanced Robotics, 24, 1861.

  • Kelsen, H. (1967). The pure theory of law. University of California Press.

  • Kenny, A. J. (1978). Freewill and responsibility. Routledge.

  • Kharpal, A. (2015). Robot with $100 bitcoin buys drugs, gets arrested. CNBC (http://www.cnbc.com/2015/04/21/robot-with-100-bitcoin-buys-drugs-gets-arrested.html), last accessed 30 March 2018.

  • King, T., Aggarwal, N., Taddeo, M., Floridi, L. (2018). Artificial intelligence crime: An interdisciplinary analysis of foreseeable threats and solutions. SSRN: https://ssrn.com/abstract=3183238 or https://doi.org/10.2139/ssrn.3183238.

  • Kinny, D., Georgeff, M., and Rao, A. (1996). A methodology and modelling technique for systems of BDI agents. European Workshop on Modelling Autonomous Agents in a Multi-Agent World. Springer.

  • Kurki, V. A. (2017). Why things can hold rights: Reconceptualizing the legal person. In Kurki, V. A. and Pietrzykowski, T., editors, Legal personhood: Animals, artificial intelligence and the unborn. Springer.

  • Kurki, V. A. and Pietrzykowski, T. (2017). Legal personhood: Animals, artificial intelligence and the unborn. Springer.

  • Kurzweil, R., et al. (1990). The age of intelligent machines. Cambridge, MA: MIT Press.

  • Lepora, C., & Goodin, R. E. (2013). On complicity and compromise. Oxford University Press.

  • Litton, P. (2013). Criminal responsibility and psychopathy: Do psychopaths have a right to excuse? In Handbook on psychopathy and law, 275.

  • !MEDIENGRUPPE BITNIK (2018). Website (https://motherboard.vice.com/en_us/article/mgbwg4/in-europe-robots-can-legally-buy-drugs-online-for-art), last accessed 22 May 2018.

  • Miller, W. (1987). Annotation, modern status of rule of absolute or strict liability for dogbite. American Law Reports, 51, 446.

  • Morse, S. J. (2008). Psychopathy and criminal responsibility. Neuroethics, 1, 205.

  • Murphy, J. G. (1979). Retribution, justice, and therapy: Essays in the philosophy of law. Springer.

  • Neumann, M. (2010). Norm internalisation in human and artificial intelligence. Journal of Artificial Societies and Social Simulation, 13, 12.

  • North, P. (2012). Civil liability for animals. Oxford University Press.

  • Ormerod, D., Smith, J. C., and Hogan, B. (2011). Smith and Hogan’s criminal law. Oxford University Press.

  • P8_TA(2017)0051. European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics.

  • Pagallo, U. (2013). The laws of robots. Springer.

  • Pagallo, U. (2017). AI and bad robots: The criminology of automation. In The Routledge Handbook of Technology, Crime and Justice. Routledge.

  • Poole, D., Mackworth, A., & Goebel, R. (1998). Computational intelligence: A logical approach. New York: Oxford University Press.

  • Postema, G. (2001). Law as command: The model of command in modern jurisprudence. Philosophical Issues, 11, 470.

  • Power, M. (2014). What happens when a software bot goes on a darknet shopping spree? The Guardian (https://www.theguardian.com/technology/2014/dec/05/software-bot-darknet-shopping-spree-random-shopper), last accessed 30 March 2018.

  • Robinson, T. B. (1997). A question of intent: Aiding and abetting law and the rule of accomplice liability under section 924 (c). Michigan Law Review, 96, 783.

  • Russell, S., & Norvig, P. (2010). Artificial intelligence: A modern approach. Prentice Hall.

  • Sartor, G. (2009). Cognitive automata and the law: Electronic contracting and the intentionality of software agents. Artificial Intelligence and Law, 17, 253.

  • Schaerer, E., Kelley, R., and Nicolescu, M. (2009). Robots as animals: A framework for liability and responsibility in human-robot interactions. In RO-MAN 2009. The 18th IEEE international symposium on Robot and human interactive communication. IEEE.

  • Schmitt, M. N., & Thurnher, J. S. (2012). Out of the loop: Autonomous weapon systems and the law of armed conflict. Harv. Nat'l Sec. J., 4, 231.


  • Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., & LeCun, Y. (2013). Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229.

  • Shoham, Y. and Tennenholtz, M. (1992a). Emergent conventions in multi-agent systems: Initial experimental results and observations. In KR-92.

  • Shoham, Y. and Tennenholtz, M. (1992b). On the synthesis of useful social laws for artificial agent societies (preliminary report).

  • Shute, S. (2002). Knowledge and belief in the criminal law. In Criminal law theory: Doctrines of the general part. Oxford University Press.

  • Slobogin, C. (2003). The integrationist alternative to the insanity defense: Reflections on the exculpatory scope of mental illness in the wake of the Andrea Yates trial. American Journal of Criminal Law, 30, 315.

  • Solum, L. (1991). Legal personhood for artificial intelligences. North Carolina Law Review, 70, 1231.

  • US Defense Science Board (DSB) Task Force on the Role of Autonomy (2011). The role of autonomy in DoD systems.

  • Walden, C. (2017). State Dangerous Dog Laws, Animal Legal & Historical Center. Michigan State University.

  • Weyns, D., Steegmans, E., & Holvoet, T. (2004). Towards active perception in situated multi-agent systems. Applied Artificial Intelligence, 18, 867.

  • Wisch, R. F. (2012). Quick overview of dog bite strict liability statutes. Animal Legal & Historical Center, available at: https://www.animallaw.info/article/brief-summary-dog-bite-laws.


Author information

Correspondence to Francesca Lagioia.



About this article


Cite this article

Lagioia, F., Sartor, G. AI Systems Under Criminal Law: a Legal Analysis and a Regulatory Perspective. Philos. Technol. 33, 433–465 (2020). https://doi.org/10.1007/s13347-019-00362-x

