1 Introduction

With the advent of big data analytics, machine learning and artificial intelligence systems (henceforth ‘AI systems’),Footnote 1 both the assessment of the risk of crime and the operation of criminal justice systems are becoming increasingly technologically sophisticated. While authors disagree about whether these technologies represent a panacea for criminal justice systems (for example, by reducing case backlogs) or will further exacerbate social divisions and endanger fundamental liberties, both camps agree that such new technologies have important consequences for criminal justice systems. The automation brought about by AI systems challenges us to take a step back and reconsider fundamental questions of criminal justice: What does it mean to explain the grounds of a judgment? When is the process of adopting a judicial decision transparent? Who should be accountable for (semi-)automated decisions, and how should responsibility be allocated within the chain of actors when the final decision is facilitated by the use of AI? What is a fair trial? And is the due process of law denied to the accused when AI systems are used at some stage of the criminal procedure?

The technical sophistication of the new AI systems used in decision-making processes in criminal justice settings often leads to a ‘black box’ effect.Footnote 2 The intermediate phases in the process of reaching a decision are, owing to the technical complexity involved, effectively hidden from human oversight. Several areas of applied machine learning illustrate how methods such as unsupervised learning or active learning operate with minimal human intervention. In active learning approaches used for natural language processing, for instance, the learning algorithm has access to a large corpus of unlabelled samples and, in a series of iterations, selects some of those samples and asks a human annotator for the appropriate labels. The approach is called active because the algorithm itself decides, on the basis of its current hypothesis, which samples should be annotated by the human. The core idea of active learning is thus to minimise the human role by letting the algorithm determine what it needs to learn from. Artificial neural networks (hereafter ‘ANN’), moreover, learn to perform tasks by considering examples, generally without being programmed with task-specific rules. As such, they can be extremely useful in many areas, such as computer vision, natural language processing, ocean modelling in geoscience, or distinguishing legitimate from malicious activities in cybersecurity. In unsupervised settings they do not even require labelled samples, e.g., in order to recognise cats in images or pedestrians in traffic, but can build up a representation of what a cat looks like on their own. The internal operations of such systems are not transparent even to the researchers who built them. While this may not be problematic in many areas of applied machine learning, as the examples below show, AI systems must be transparent when used in judicial settings, where the explainability of decisions and the transparency of the reasoning are of significant, even civilizational, value. A decision-making process that lacks transparency and comprehensibility cannot be regarded as legitimate in a non-autocratic polity. Due to their inherently opaque nature, the new AI tools used in criminal justice settings may thus be at variance with fundamental liberties.
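To make the active learning loop described above concrete, the following minimal sketch (in Python, assuming the scikit-learn library and purely synthetic data; the variable names and number of iterations are illustrative choices, not taken from any system discussed in this article) shows how the algorithm, rather than the human, decides which samples are to be labelled:

```python
# Minimal sketch of pool-based active learning with uncertainty sampling.
# Synthetic data stand in for the unlabelled corpus; a human annotator would
# supply the label for each queried sample in a real setting.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

X_pool = rng.normal(size=(1000, 5))                       # unlabelled "pool"
oracle_labels = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)  # stand-in annotator

# A tiny labelled seed set containing both classes.
labelled = list(np.flatnonzero(oracle_labels == 1)[:5]) + \
           list(np.flatnonzero(oracle_labels == 0)[:5])

model = LogisticRegression()
for _ in range(20):                                  # a few query iterations
    model.fit(X_pool[labelled], oracle_labels[labelled])
    probs = model.predict_proba(X_pool)[:, 1]
    uncertainty = np.abs(probs - 0.5)                # 0 = most uncertain
    uncertainty[labelled] = np.inf                   # skip already-labelled samples
    query = int(np.argmin(uncertainty))              # the algorithm picks the sample
    labelled.append(query)                           # the human merely answers the query
```

The human annotator appears in the loop only to answer queries the algorithm has already chosen, which is precisely what makes the intermediate steps difficult to oversee.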

Lawyers must also be aware of the supra-legal context and the background rationale for implementing AI systems. While some reasons may be legitimate, e.g., when AI systems facilitate access to courts for individuals who might otherwise be left on the sidelines of justice, others may be disputable and require a wider social debate that can only be held outside the judicial system. Shrinking budgets, the decreased legitimacy of the judiciary, and an overload of cases may all lead to the implementation of new solutions that information technology companies are ready to offer to governments. However, proposals for outsourcing a public service to private-sector providers will trigger (or at least should trigger) a major political discussion, which must be held in more democratic fora.

Following this introductory contextualisation of the automation of criminal justice, the article outlines the automation of crime control, asking what is being automated and who is being replaced (Part 2). It then analyses encounters between AI systems and the law through case law (Part 3) and through an analysis of the human rights that may be affected by AI systems, and of how they may be affected (Part 4). The article concludes by offering some thoughts on the proposed solutions to remedy the risks of AI systems in the criminal justice domain (Part 5).

2 How does automation change crime control?

2.1 The automation of policing

Through CompStat (COMPuter STATistics, or COMParative STATistics), geospatial modelling for predicting future crime concentrations, or ‘hot spots’,Footnote 3 developed into a paradigm of managerial policing that employs Geographic Information Systems (GIS) to map crime. CompStat has been advocated not merely as a computer programme but as a multi-layered, dynamic approach to crime reduction, quality-of-life improvement, and personnel and resource management. The idea is not solely to ‘see crime’ visually presented on a map, but rather to develop a comprehensive managerial approach, a police management philosophy. As a ‘human resource management tool’, it involves ‘weekly meetings where officers review recent metrics (crime reports, citations, and other data) and talk about how to improve those numbers.’Footnote 4

Compared to algorithmic prediction software, the CompStat system is calibrated less frequently. As a police officer from Santa Cruz (USA) reported: ‘I’m looking at a map from last week and the whole assumption is that next week is like last week […]’.Footnote 5 CompStat relies more on humans to recognise patterns. Nevertheless, it introduced for the first time the idea of watching how crime evolves and of focusing on ‘the surface’ rather than the causes of crime. In this context, Siegel argues with respect to predictive analytics: ‘We usually don’t know about causation, and we often don’t necessarily care […] the objective is more to predict than it is to understand the world […]. It just needs to work; prediction trumps explanation.’Footnote 6 Compared to AI analytics, its limiting factor is the depth of the information and the related breadth of analysis. The amount of data is not the problem, as agencies collect vast amounts of data every day; the challenge is rather the ability to pull operationally relevant knowledge from the data collected.

Computational methods of ‘predictive crime mapping’ began to enter crime control twelve years ago.Footnote 7 Predictive ‘big data’ policing instruments took a further evolutionary step. First, advances in AI promised to make sense of enormous amounts of data and to extract meaning from scattered data sets. Secondly, such instruments shifted from being decision support systems to being primary decision-makers. Thirdly, they aim at the regulation of society at large and not only at the fight against crime. (For an example of ‘function-creep’, see Singapore’s ‘total information awareness system programme’.)Footnote 8

Police are using AI tools both to penetrate deeply into the preparatory phase of crimes yet to be committed and to scrutinise crimes already committed. With regard to ex-ante preventive measures, automation tools are supposed to unearth, from large amounts of data, those plotting crimes which are yet to be committed. Hence, a distinction is made between tools focusing on ‘risky’ individuals (‘heat lists’, algorithm-generated lists identifying the people most likely to commit a crime)Footnote 9 and tools focusing on risky places (‘hot spot policing’).Footnote 10 With regard to the second, ex-post-facto use of automation tools, there have been many success stories in the fight against human trafficking. In Europe, Interpol manages the International Child Sexual Exploitation Image Database (ICSE DB) to fight child sexual abuse. The database can facilitate the identification of victims and perpetrators through the analysis of mundane items in the background of abusive images, e.g., by matching carpets, curtains, furniture, and room accessories, or of identifiable background noise in a video. Chatbots posing as real people are another advance in the fight against grooming and webcam ‘sex tourism’. In Europe, the Dutch children’s rights organisation Terre des Hommes was the first NGO to combat webcam child ‘sex tourism’ by using a virtual character called ‘Sweetie’.Footnote 11 The Sweetie avatar, posing as a ten-year-old Filipino girl, was operated by an agent of the organisation and used to identify offenders in chatrooms and online forums; the goal was to gather information on individuals who contacted Sweetie and solicited webcam sex. Terre des Hommes subsequently started engineering an AI system capable of depicting and acting as Sweetie without human intervention, in order not only to identify persistent perpetrators but also to deter first-time offenders.

Some other research on preventing crime with the help of computer vision and pattern recognition based on supervised machine learning seems outright dangerous.Footnote 12 Research on the automated inference of criminality from still facial images of 1,856 real persons (half of them convicted) concluded that merely three features suffice for predicting criminality: lip curvature, eye inner corner distance, and nose-mouth angle. The researchers’ implicit assumptions were, first, that the appearance of a person’s face is a function of innate properties, i.e., the understanding that people have an immutable core. Secondly, that ‘criminality’ is an innate property of certain (groups of) people, which can be identified merely by analysing their faces. And thirdly, in the event of the first two assumptions being correct, that the criminal justice system is actually able to reliably determine such ‘criminality’, which implies that courts are (or perhaps should become) laboratories for the precise measurement of people’s faces. The software promising to infer criminality from facial images Footnote 13 in fact illuminated some of the deep-rooted misconceptions about what crime is, and how it is defined, prosecuted, and adjudicated. The once ridiculed phrenology of the nineteenth century thus entered the twenty-first century in new clothes, as ‘algorithmic phrenology’, which can legitimise deep-rooted implicit biases about crime.Footnote 14 The two researchers, Wu and Zhang, later admitted that they ‘agree that the pungent word criminality should be put in quotation marks; a caveat about the possible biases in the input data should be issued. Taking a court conviction at its face value, i.e., as the ‘ground truth’ for machine learning, was indeed a serious oversight on our part.’Footnote 15 Nevertheless, their research reveals what further steps along this line of a corporal focus in crime control can reasonably be expected in the near future: from the analysis of walking patterns, posture, and facial recognition for identification purposes, to the analysis of facial expressions and handwriting patterns for emotion recognition and insight into psychological states.
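The ‘ground truth’ problem that Wu and Zhang concede can be illustrated with a deliberately artificial sketch (synthetic data only, assuming scikit-learn; nothing here reproduces their actual study): a supervised classifier trained on conviction labels learns whatever drives those labels, including bias that has nothing to do with any innate property of the person.

```python
# Illustrative only: a classifier trained on biased "conviction" labels
# recovers the labelling bias, not any underlying 'criminality'.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

# Two facial measurements that, by construction, carry no signal at all,
# plus a group attribute standing in for membership of an over-policed group.
face_features = rng.normal(size=(n, 2))
group = rng.integers(0, 2, size=n)

# "Conviction" labels depend only on the group attribute: a pure labelling bias.
convicted = (rng.random(n) < np.where(group == 1, 0.6, 0.2)).astype(int)

X = np.column_stack([face_features, group])
clf = LogisticRegression().fit(X, convicted)

# The classifier predicts convictions better than the base rate, but only
# because it has learnt the biased labelling process, not any property of
# the person; the weight on the group attribute dominates.
print("training accuracy:", round(clf.score(X, convicted), 2))
print("learned weights:", dict(zip(["feat_1", "feat_2", "group"],
                                   clf.coef_[0].round(2))))
```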

2.2 Automation in criminal courts

Courts use AI systems to assess the likelihood of recidivism and the likelihood of flight of those awaiting trial, or of offenders in bail and parole procedures. The most analysed and discussed examples come from the USA, which is also where most such software is currently being used.Footnote 16 The Arnold Foundation algorithm, which is being rolled out in 21 jurisdictions in the USA,Footnote 17 uses 1.5 million criminal cases to predict defendants’ behaviour in the pre-trial phase. Florida uses machine learning algorithms to set bail amounts.Footnote 18

A study of 1.36 million pre-trial detention cases showed that a computer could predict whether a suspect would flee or re-offend even better than a human judge.Footnote 19 However, while these data seem persuasive, it is important to consider that the resulting decisions may in fact be less just. There will always be additional facts in a particular case, perhaps unique ones going beyond the forty or so parameters considered by the algorithm in this study, which might crucially determine the outcome of the deliberation process. There is thus an inevitable need for ad infinitum improvements. Moreover, the problem of selective labelling needs to be considered: we see outcomes only for the sub-group that can be analysed, that is, only for people who have been released. The data that we see are data garnered on the basis of our own decisions as to whom to send to pre-trial detention. The researchers themselves pointed out that judges may have a broader set of preferences than the variables that the algorithm focuses on.Footnote 20 Finally, there is the question of what we want to achieve with AI systems, of what we would like to ‘optimise’: decreasing crime is an important goal, but not the only goal of criminal justice. The fairness of the procedure is equally significant.
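The selective labelling problem can be made concrete with a small synthetic simulation (illustrative numbers only, unrelated to the study’s actual data): because flight outcomes exist only for defendants whom judges chose to release, statistics computed from released cases say little about the detained, who are systematically riskier.

```python
# Toy illustration of selective labelling: outcomes are observed only for
# released defendants, so observed statistics misdescribe the full population.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

risk = rng.random(n)              # true (unobservable) probability of flight
released = rng.random(n) > risk   # judges release low-risk defendants more often
fled = rng.random(n) < risk       # in the simulation the outcome exists for everyone

print(f"flight rate among released (the only data we ever see): {fled[released].mean():.2f}")
print(f"flight rate in the full population (hidden in reality):  {fled.mean():.2f}")
print(f"flight rate among detained (never observed in reality):  {fled[~released].mean():.2f}")
```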

Several European countries use automated decision-making systems for the administration of justice, especially for the allocation of cases to judges, e.g., in Georgia, Poland, Serbia, and Slovakia, and to other public officials, such as enforcement officers in Serbia.Footnote 21 Although these are examples of only indirect automated decision-making, such systems may still significantly affect the right to a fair trial. The study ‘alGOVrithms—State of Play’ showed that none of the four countries using automated decision-making systems for case allocation allows access to the algorithm and/or the source code.Footnote 22 Independent monitoring and auditing of these systems is not possible, as they lack basic transparency. The main concern is how random these systems actually are, and whether they allow tinkering and can therefore be ‘fooled’. Even more worryingly, the automated decision-making systems used for court management purposes are not transparent even for the judges themselves.Footnote 23
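What verifiable randomness in case allocation could look like is sketched below (a hypothetical design in Python; the case identifier, seed, and judge names are invented for illustration and do not describe how any of the systems surveyed in the study actually work): deriving the assignment from a pre-published seed and the case number via a cryptographic hash would allow independent auditing of every allocation after the fact.

```python
# Hypothetical sketch of an auditable case-allocation scheme.
import hashlib

def allocate(case_id: str, public_seed: str, judges: list[str]) -> str:
    # The assignment is a deterministic function of a seed published in advance
    # and the case identifier, so anyone can re-compute and verify it.
    digest = hashlib.sha256(f"{public_seed}:{case_id}".encode()).hexdigest()
    return judges[int(digest, 16) % len(judges)]

judges = ["Judge A", "Judge B", "Judge C"]
print(allocate("K-2024-00123", "seed-published-before-case-intake", judges))
```

The crucial condition in such a design is that the seed is committed to before case numbers are known; otherwise the scheme could still be gamed.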

There are several other ongoing developments touching upon courtroom decision-making. In Estonia, the Ministry of Justice is financing a team to design a robot judge that could adjudicate small claims disputes of less than €7,000.Footnote 24 Conceptually, the two parties will upload documents and other relevant information, and the AI will issue a decision, against which an appeal may be lodged with a human judge.

2.3 Automation in prisons

New tools are used in various ways in the post-conviction stage. In prisons, AI is increasingly being used for the automation of security as well as for the rehabilitative aspect of prisonisation. A prison that houses some of China’s most high-profile criminals is reportedly installing an AI network that will be able to recognise and track every prisoner around the clock and alert guards if anything seems out of place.Footnote 25

These systems are also used to ascertain the criminogenic needs of offenders that can be changed through treatment, and to monitor interventions in sentencing procedures.Footnote 26 In Finnish prisons, inmates are put to work training AI algorithms.Footnote 27 The inmates help to classify and answer simple questions in user studies, e.g., reviewing pieces of content collected from social media and from around the internet. The work is supposed to benefit Vainu, the company organising the prison work, while also providing prisoners with new job-related skills that could help them successfully re-enter society after serving their sentences. Similarly, in England and Wales, the government has announced new funding for prisoners to be trained in coding as part of a £1.2m package to help under-represented groups get into such work.Footnote 28 Some scholars are even discussing the possibility of using AI to address the solitary confinement crisis in the USA by employing smart assistants, similar to Amazon’s Alexa, as ‘confinement companions’ for prisoners. While such ‘companions’ may alleviate some of the psychological stress for some prisoners, the focus on the ‘surface’ of the problem of solitary confinement conceals the debate about the aggravating harm of such confinement,Footnote 29 and in fact contributes to legitimising the penal policy of solitary confinement. This shift of attention away from the real problem is outrageous in itself.

3 Encounters between AI systems and the law

3.1 Due process of law and AI systems in the USA

In the American context, where most actual deployment of AI systems in criminal justice has so far occurred, the judgment in Loomis v. Wisconsin (2016), concerning a risk assessment algorithm entitled Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), was a sobering one.Footnote 30 The COMPAS algorithm identified Loomis as an individual presenting a high risk to society due to a high risk of re-offending, and the first instance court decided to refuse his request to be released on parole. On appeal, the Supreme Court of Wisconsin held that the recommendation of the COMPAS algorithm had not been the sole ground for refusing his request to be released on parole and that the decision of the court therefore did not violate Loomis’s due process right. By confirming the constitutionality of relying on the risk assessment algorithm’s recommendation, the Supreme Court of Wisconsin neglected the strength of the ‘automation bias’.Footnote 31 By claiming that the lower court had the possibility to depart from the proposed algorithmic risk assessment, the Court ignored the research in social psychology and human-computer interaction on the biases involved in all algorithmic decision-making systems, which shows that once a high-tech tool offers a recommendation it becomes extremely burdensome for a human decision-maker to refute such a ‘recommendation’.Footnote 32 Decision-makers regularly rate automated recommendations more positively than neutrally, despite being aware that such recommendations may be inaccurate, incomplete, or even wrong.Footnote 33

In the judgment in Kansas v. Walls (2017),Footnote 34 the Court of Appeals of the State of Kansas reached the opposite conclusion to that in Loomis and decided that the defendant had to be allowed access to the complete diagnostic Level of Service Inventory-Revised (LSI-R) assessment on which the court relied in deciding what probation conditions to impose on him. The Court of Appeals held that by refusing the defendant access to his LSI-R assessment, the district court denied him the opportunity to challenge the accuracy of the information that the court was required to rely on in determining the conditions of his probation. Referring to the judgment in Kansas v. Easterling,Footnote 35 the Court of Appeals decided that the district court’s failure to give the defendant a copy of the entire LSI-R deprived him of his constitutional right to procedural due process in the sentencing phase of his criminal proceedings.

3.2 Human rights compliance of AI systems in the EU

AI systems have significant impacts ‘that engage state obligations vis-à-vis human rights.’Footnote 36 Since the data deluge has reached all social domains and algorithmic systems increasingly permeate various aspects of contemporary life,Footnote 37 human rights compliance can no longer be seen as the exclusive domain of privacy and personal data protection Footnote 38 and non-discrimination and equality law.Footnote 39 Automated systems have been introduced to replace humans in the banking, insurance, education, and employment sectors, as well as in armed conflicts. They have influenced general elections and core democratic processes. The personal data protection regime is thus not sufficient to address all of the challenges of ensuring the compliance of AI systems with human rights. The human rights implications are necessarily manifold, as the Committee of Experts on Internet Intermediaries (MSI-NET) at the Council of EuropeFootnote 40 rightly acknowledges. The human rights that may be impacted through the use of automated processing techniques and algorithms include: (1) the right to a fair trial and due process, (2) privacy and data protection, (3) freedom of expression, (4) freedom of assembly and association, (5) the right to an effective remedy, (6) the prohibition of discrimination, (7) social rights and access to public services, and (8) the right to free elections. Moreover, as fundamental freedoms are interdependent and interrelated, all human rights are potentially impacted by the use of algorithmic technologies, e.g., in education, social welfare, democracy, and judicial systems. Developments in the AI used in social systems and domains may even ‘disrupt the very concept of human rights as protective shields against state interference.’Footnote 41

3.2.1 Equality and discrimination

Over-policing, the most visible example of discrimination stemming from predictive policing programmes, occurs when the police patrol areas where more crime is recorded, which in turn generates more recorded crime there and thus amplifies the apparent need to police areas already being policed. It is a prime example of the ‘vicious circle’ effect of the use of machine learning in the crime control domain.Footnote 42 However, under-policing is even more critical: the police do not scrutinise some areas as much as others, which leads to an unequal experience of justice. Some types of crime then become more likely to be prosecuted than others, and the central principle of legality (the requirement that all crimes be prosecuted ex officio, as opposed to the principle of opportunity, under which prosecutors decide on prosecution at their own discretion) is thus not respected.Footnote 43 The use of predictive software to ascertain the treatment of perpetrators of white-collar crimes may neglect the fact that the enculturation of such offenders did not fail in any meaningful way.Footnote 44 On the contrary, such offenders are typically distinguished and respected citizens, e.g., CEOs, physicians, judges, or university professors. Critical criminologists have shown how the definition of crime itself, and even more so the prosecution of crime, is inherently political: law enforcement agencies are forced to make ‘political’ decisions about which crimes to prosecute and investigate due to limited resources and personnel. They prioritise activities either explicitly or implicitly. Inequality in predictive policing then changes the perception of what counts as ‘serious crime’ in the first place. Hedge fund operations with sub-prime mortgages packaged into ‘derivatives’ are then reduced to unfortunate ‘bad luck’, despite impoverishing large parts of the population whose savings or equity evaporate. Predictive policing software has not been able to capture this important shift.
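The ‘vicious circle’ can be illustrated with a deliberately extreme toy simulation (synthetic numbers only, not drawn from any real deployment): if crime is recorded only where officers patrol and tomorrow’s patrols follow yesterday’s records, a marginal historical difference between two otherwise identical districts hardens into permanent over-policing of one and under-policing of the other.

```python
# Toy feedback-loop model: two districts with identical underlying crime;
# patrols are sent each day to wherever more crime has been recorded so far.
import numpy as np

true_crime = np.array([10, 10])   # identical underlying daily crime in both districts
recorded = np.array([6, 5])       # a small historical difference in recorded crime

for day in range(30):
    target = int(np.argmax(recorded))        # patrols go to the apparently 'hotter' district
    recorded[target] += true_crime[target]   # crime is recorded only where police look

print("recorded crime after 30 days:", recorded)   # [306, 5]: the gap has hardened
```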

3.2.2 Personal data protection

With regard to the implications of the use of AI systems for personal data protection, the set of barriers against the adverse impacts of AI systems includes safeguards such as the requirement of the explicit consent of data subjects to the processing of their personal data, the data minimisation principle, the principle of purpose limitation, and the set of rights relating to when automated decision-making is allowed. The General Data Protection Regulation (henceforth GDPR)Footnote 45 offers some points of reference here. In cases of automated processing, the data controller must implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, for instance by ensuring him or her the right to obtain human intervention on the part of the controller, to express his or her point of view, and to contest the decision (Art. 22, para. 3 of the GDPR). The GDPR also includes the right of the data subject to receive ‘meaningful information about the logic involved’ in automated processing (see Arts. 13, 14, and 15).

Automated decisions that produce adverse legal effects concerning the data subject or significantly affect him or her are prohibited pursuant to Article 11 of the Law Enforcement Directive,Footnote 46 unless they are authorised by Union or Member State law, which must also provide appropriate safeguards for the rights and freedoms of the data subject. In line with the provisions of the Law Enforcement Directive, judicial decisions made entirely by an algorithmic tool can never be legal.

3.2.3 The right to a fair trial

The use of algorithms in criminal justice systems raises serious concerns with regard to Article 6 (concerning the right to a fair trial) of the European Convention on Human Rights Footnote 47 and Article 47 of the Charter,Footnote 48 and the principle of the equality of arms and adversarial proceedings as established by the European Court of Human Rights.Footnote 49 The fair trial standards contained in Article 6 of the ECHR guarantee the accused the right to participate effectively in the trial and include the presumption of innocence, the right to be informed promptly of the cause and nature of the accusation, the right to a fair hearing and the right to defend oneself in person.

The right to effective participation may be violated in a variety of different situations, ranging from poor acoustics in the courtroom Footnote 50 to preventing the accused from being present at the trial or from examining a witness testifying against him or her.Footnote 51 The latter is also one of the minimal guarantees of a fair trial contained in Art. 6, para. 3 and normally requires that all evidence against the accused be produced in his or her presence at a public hearing, which gives the defendant an effective opportunity to challenge the evidence against him or her.Footnote 52 The right to confrontation does not apply merely to witnesses, as the term is usually understood under national law, since it has an autonomous meaning in the Convention system that goes beyond its ordinary meaning and also includes experts, expert witnesses, and victims. In any case in which the deposition serves to a material degree as the basis for the conviction of the defendant, it constitutes evidence for the prosecution to which the Convention guarantees apply.Footnote 53 The right enshrined in Article 6(3)(d) can even be applied to documentary evidence Footnote 54 and computer files Footnote 55 relevant to the criminal accusations against the defendant. Therefore, in order to ensure effective participation in a trial, the defendant must also be able to challenge the algorithmic score that is the basis of his or her conviction.

However, the right to confrontation is not absolute and may be restricted if certain conditions are met. The traditional approach of the European Court of Human Rights was that the right to a fair trial was violated if a conviction was based either solely or to a decisive degree on an uncontested statement (the ‘sole or decisive rule’).Footnote 56 However, in Al Khawaja and Tahery the Court partially departed from its previous jurisprudence, stating that the admission of untested evidence will not automatically result in a breach of Article 6 (1): when assessing the overall fairness of a trial, the European Court of Human Rights has to consider whether it was necessary to admit such evidence and whether there were sufficient counterbalancing factors, including strong procedural safeguards.Footnote 57

The problems posed by AI systems are very similar to those presented by anonymous witnesses or undisclosed documentary evidence, since AI systems are opaque (as discussed in the introduction). At least some degree of disclosure is necessary in order to ensure that a defendant has the opportunity to challenge the evidence against him or her and to counterbalance the handicap created by anonymity. Evidence from absent or anonymous witnesses, although not per se incompatible with the right to a fair trial, may be relied on only as a measure of last resort and under strict conditions ensuring that the defendant is not placed at a disadvantage. The same rule should apply to AI systems used in criminal justice settings. A fair balance should be struck between the right to participate effectively in the trial, on the one hand, and the use of opaque AI systems designed to help judges arrive at more accurate assessments of the defendant’s future conduct, on the other.Footnote 58 The right to cross-examine witnesses should be interpreted so as to also encompass the right to examine the data and the underlying rules of the risk-scoring methodology. In probation procedures, such a right should make it possible for a convicted person to question the model applied, from the data fed into the algorithm to the overall model design.

The use of algorithmic tools in criminal procedure could also violate some other aspects of the right to a fair trial, in particular the right to a randomly selected judge, the right to an independent and impartial tribunal, and the presumption of innocence.

3.2.4 Presumption of innocence

Besides affecting many dimensions of inequality, AI decision-making systems may collide with several other fundamental liberties. Similar to ‘redlining’, the ‘sleeping terrorist’ concept used in German anti-terrorist legislation infringed upon the presumption of innocence. The mere probability of a match between the attributes of known terrorists and a ‘sleeping’ one directs the watchful eye of the state to the individual. O’Neil offers the illustrative example of the case of Robert McDaniel, a twenty-two-year-old high school student who received increased police attention due to a predictive programme’s analysis of his social network and residence in a poor and dangerous neighbourhood: ‘… he was unlucky. He has been surrounded by crime, and many of his acquaintances have gotten caught up in it. And largely because of these circumstances—and not his own actions—he has been deemed dangerous. Now the police have their eye on him.’Footnote 59

3.2.5 Effective remedy

Automated techniques and algorithms used for crime prevention purposes facilitate forms of secret surveillance and ‘data-veillance’ that are impossible for the affected individual to know about. The European Court of Human Rights has underlined that the absence of notification at any point undermines the effectiveness of remedies against such measures.Footnote 60 The right to an effective remedy implies the right to a reasoned and individual decision. Article 13 of the European Convention on Human Rights stipulates that everyone whose rights have been violated shall have an effective remedy before a national authority. The available remedy should be effective in practice and in law. As noted in the Study on the Human Rights Dimensions of Automated Data Processing Techniques:

Automated decision-making processes lend themselves to particular challenges for individuals’ ability to obtain effective remedy. These include the opaqueness of the decision itself, its basis, and whether the individuals have consented to the use of their data in making this decision, or are even aware of the decision affecting them. The difficulty in assigning responsibility for the decision also complicates individuals’ understanding of whom to turn to [to] address the decision. The nature of decisions being made automatic, without or with little human input, and with a primacy placed on efficiency rather than human-contextual thinking, means that there is an even larger burden on the organisations employing such systems to provide affected individuals with a way to obtain [a] remedy.Footnote 61

3.2.6 Other rights

New notions in the pre-emptive crime paradigm, such as the ‘sleeping terrorist’, collide with the principle of legality, i.e., lex certa, which requires the legislature to define a criminal offence in a substantially specific manner. Standards of proof are thresholds for state interventions into individual rights, yet the new language of mathematics, which helps define new categories such as the ‘person of interest’, redirects law enforcement activities towards individuals not yet considered ‘suspects’. The new notions being invented thus contravene the established standards of proof in criminal procedure.

AI systems should also respect the set of rights pertaining to tribunals: the right to a randomly selected judge, which requires that the criteria determining which court, and which specific judge within it, is competent to hear a case be clearly established in advance (the rule governing the allocation of cases to a particular judge within the competent court, thus preventing ‘forum shopping’), and the right to an independent and impartial tribunal (as discussed in the section on automation in criminal courts).

4 Discussion: toward solutions

How should we design human-rights-compliant AI systems that respect the rule of law standards of the ‘analogue world’? The trend towards applying algorithms to everything has attracted the interest of policymakers, who share concerns over the impact of algorithms on fundamental liberties and over how to make algorithms accountable. In the European context, the Council of Europe’s European Commission for the Efficiency of Justice (CEPEJ) adopted the ‘European Charter on the Use of AI in Judicial Systems’ at the end of 2018 to mitigate the above-mentioned risks specifically in the justice sector.Footnote 62 Similar concerns can be observed elsewhere in the world, most notably in the USA, where the New York City Council was the first to pass a law on the transparency of algorithmic decision-making.Footnote 63 The law sets up a task force to monitor the fairness and validity of the algorithms used by municipal agencies.

The use of AI in criminal justice and policing potentially affects several criminal procedure rights: the presumption of innocence; the right to a fair trial (including the equality of arms in judicial proceedings and the right to cross-examine witnesses); the right to an independent and impartial tribunal (including the right to a randomly selected judge); the principle of non-discrimination and equality; and the principle of legality (lex certa). It also blurs the existing standards of proof.

AI is becoming even more complex with deep learning based on artificial neural networks. Further technological developments might improve the situation (e.g., ongoing research on ‘explainable AI’ may remedy the opacity of current AI approaches), but for the time being transparency is not much more than an illusion.

There is a sentiment that AI tools will dissolve the biases and mental shortcuts (heuristics) inherent in human judgment and reasoning. This is a powerful reason why AI technologies have too quickly been given too much power to tackle and solve essentially social (rather than technological) problems. Social scientists, including lawyers, must engage more intensively with computer and data scientists in order to build a human-rights-compliant approach.

Listing the relevant fundamental rights and analysing case studies may be of great benefit for assessing the human rights compliance of novel systems that may be used in the future. However, any such list may still prove inadequate. Where automated reasoning is used to aid or replace a decision-making process that would otherwise be performed by humans, any human right may be affected, depending on the social domain in which the systems are employed.

Listing the possible actors in the chain of building and employing AI systems may likewise lead to all-encompassing lists of state and private-sector actors. The deepening of the digital ecosystem has led to a situation in which responsibilities are increasingly spread across a number of interdependent actors. We can map responsibility in several ways: from the obligations of states to the obligations of the private sector; from data preparation to the writing of algorithmic code (how data is cleaned and prepared, which data is taken in and used, which data is left out of the calculus, etc.); and from algorithmic design and development to implementation processes. As the digital ecosystem deepens, it becomes much more burdensome to determine who is responsible for a given data intake or algorithmic output. The acts committed might not even reach the existing thresholds of accountability. It may even be unjust to hold an actor accountable for the consequences of activities that are generally of great benefit to society: an actor may merely be generating a risk that our societies are willing to accept as a ‘socially permissible risk’.Footnote 64

One way forward is to learn from experiments in domains other than justice. In her succinct analysis of automated welfare systems in the USA, EubanksFootnote 65 shows how removing human discretion from public assistance eligibility assessments seemed like a compelling solution to ending discrimination against African-Americans in the welfare system. If human decision-makers are biased, then eliminating humans from the decision-making loop seems logical. However, even though such a move towards automation may intuitively feel like the right one, the experiences that Eubanks uncovered show that it may well be counterproductive. What advocates of automated decision-making systems neglect is the importance of the ability to bend the rules and to re-interpret them according to social circumstances.Footnote 66 Removing human discretion is thus a double-edged sword: it can reduce human bias, but it can also exacerbate past injustices or produce new ones.

Similarly, in her analysis of the social acceptability of computerised decision-making systems, Turkle claims that even when a system is perceived as discriminatory and as creating racially disparate outcomes in sentencing, disadvantaged African Americans would still choose a computerised judge rather than a human judge.Footnote 67 After all, human judges tend to be white middle-aged men. The ‘tough on crime’ laws that established mandatory minimum sentences for many categories of crime and removed part of judges’ discretion were meant to make US criminal justice fairer, but all defendants were hit hard and prisons soon became overcrowded. Ironically, writes Eubanks,Footnote 68 the adoption of ‘tough on crime’ laws was the result of organising both by conservative ‘law-and-order’ types and by some progressive civil rights activists who saw the bias in judicial discretion. The evidence of the past thirty years, however, points the other way: racial disparity in the criminal justice system has worsened, and mandatory sentencing laws and guidelines have put sentencing on autopilot.Footnote 69

Lastly, the impacts of AI systems extend beyond human rights. They may also have distorting effects on the fundamental cornerstones and architecture of liberal democracies, i.e., on the principle of the separation of powers and on the limitation of political power by the rule of law.