Human Rights and the Non-derogable Right to Life
Fundamental human rights, enshrined under the Universal Declaration of Human Rights as well as the European Convention on Human Rights (ECHR), are broadly classified into the following three categories [ ]:
Non-derogable, absolute rights
Non-derogable, non-absolute rights
Qualified rights
Non-derogable, absolute rights are defined as those rights that “cannot be limited or infringed under any circumstance, not even during a declared state of emergency” [ ]. They also cannot be suspended [ ]. Non-derogable, non-absolute rights are those rights whose ordinary application may be limited under specific circumstances [ ] (e.g., the right to marry and start a family can be limited such that multiple marriages are outlawed).
Qualified rights are rights that “permit interferences subject to certain conditions” (emphasis supplied). However, such interferences must be “in accordance with the law and necessary in a democratic state for the requirements of public order, public health or morals, national security or public safety” [ ]. Thus, for example, the right to privacy is a qualified right, which can be interfered with if a search warrant has been granted by a court of law. The right to free speech can also be limited or interrupted if it encroaches upon another person’s rights (defamation) or if it threatens national security by inciting violence. Similarly, the right to intellectual property, as well as the right to innovate, are qualified rights, subject to several limitations, including limitations imposed in the interest of public order and morality [ ].
The right to life, although not universally recognized as absolute [ ], is universally recognized as a non-derogable right [ ]. Accordingly, its ordinary application can only be limited under specific circumstances. These circumstances are already defined under Article 2 of the ECHR, which states in paragraph (1):
Everyone’s right to life shall be protected by law. No one shall be deprived of his life intentionally save in the execution of a sentence of a court following his conviction of a crime for which this penalty is provided by law. (Emphasis supplied)
In addition to the limitation on the right to life effected by capital punishment (as envisaged in Article 2 paragraph (1)), the ECHR mentions only three further circumstances, under Article 2 paragraph (2), in which the use of force resulting in the “[d]eprivation of life shall not be regarded as inflicted in contravention of Article 2”, namely:
(a) In defense of any person from unlawful violence
(b) In order to effect a lawful arrest or to prevent the escape of a person lawfully detained
(c) In action lawfully taken for the purpose of quelling a riot or insurrection
Accordingly, the programming of AVs to always take the life of one category of persons, even in dilemma situations, is not envisaged as justifiable within the scope of Article 2.
More relevant in this regard are the opening words of Article 2(1) of the ECHR: “Everyone’s right to life shall be protected by law.” In this context, the provisions of Article 14 of the ECHR must be read in conjunction:
The enjoyment of the rights and freedoms set forth in this Convention shall be secured without discrimination on any ground such as sex, race, colour, language, religion, political or other opinion, national or social origin, association with a national minority, property, birth or other status.
Similar to the wording of Article 14 of the ECHR, Article 2 of the Universal Declaration of Human Rights (UDHR) states that every human being is entitled to all rights and freedoms (including the right to life) “without distinction of any kind, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status” (Article 2). Further, Article 7 of the UDHR provides that “all are equal before the law and are entitled without any discrimination to equal protection of the law” (Article 7).
In other words, when we read the right to life (Article 2 of the ECHR) together with the non-discrimination clauses (Article 14 of the ECHR and Article 2 of the UDHR), all human beings, irrespective of age, religion, race, gender, or nationality, are equally entitled to the right to life, and the law cannot endorse any practice (even one agreed to by the majority or followed for the sake of convenience) which, either in effect or at the outset, compromises the lives of one category of persons in preference to another.
In the context of dilemma situations in AVs, the fact that every human being has an equal right to life and is entitled to have the law protect this right requires that the law not permit AVs to be programmed in such a way that a specific category of persons is preferentially compromised or spared in any situation that arises or may arise in the future. Programming a vehicle, based on the findings of “The Moral Machine Experiment,” to always strike an elderly person in country A or always strike a child in country B would violate the right to life of elderly people in country A and of children in country B. Beyond human rights law, under civilized convention as well as age-old moral law, all lives are equally valuable [ ].
Further, in natural circumstances (i.e., when an automobile faced with a dilemma situation is not programmed to always hit a specific category of persons), each human being involved in any dilemma situation has an equal chance of surviving. Would it be legally or ethically justifiable to permit a program to reduce this 50–50 chance of survival to a 0% chance of survival for a specific category of persons?
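To make the arithmetic of this objection concrete, consider the following minimal Python sketch (our own illustration; the function names, the two-person scenario, and the equal-chance baseline are simplifying assumptions, not any manufacturer’s actual logic). It contrasts a chance-governed outcome with a rule that deterministically targets one category:

```python
import random

def unprogrammed_outcome(person_1: str, person_2: str) -> str:
    """Assumed baseline: absent any category-based rule, either party
    to a two-person dilemma is equally likely to be struck."""
    return random.choice([person_1, person_2])

def targeted_outcome(person_1: str, person_2: str, always_strike: str) -> str:
    """Hypothetical rule of the kind criticized above: whenever the
    targeted category is present, it is struck with certainty."""
    if always_strike in (person_1, person_2):
        return always_strike
    return random.choice([person_1, person_2])

TRIALS = 100_000

baseline = sum(
    unprogrammed_outcome("child", "elderly") == "child" for _ in range(TRIALS)
) / TRIALS
targeted = sum(
    targeted_outcome("child", "elderly", always_strike="child") == "child"
    for _ in range(TRIALS)
) / TRIALS

print(f"Child struck, no rule:       {baseline:.0%}")  # ~50%
print(f"Child struck, targeted rule: {targeted:.0%}")  # 100%
```

Under the targeted rule, the child’s survival probability in such encounters collapses from roughly 50% to 0%, which is precisely the shift whose legal and ethical justifiability is questioned above.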
It is in the context of human rights and the classification of the right to life as non-derogable that one must read Rule 9 of the German Ethics Code for Automated and Connected Driving. Indeed, Awad et al. refer to Rule 9 in their paper, but do not comprehensively examine its rationale and scope. Rule 9 is consistent with fundamental human rights, as well as with the manner in which these rights are legitimately categorized. It states:
In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited. It is also prohibited to offset victims against one another. General programming to reduce the number of personal injuries may be justifiable. Those parties involved in the generation of mobility risks must not sacrifice non-involved parties. [ ]
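To illustrate one way this constraint could be reflected in software, consider the following sketch (entirely our own construction; the PathRisk type and choose_path function are hypothetical and are not drawn from the Ethics Code or any manufacturer’s system). The decision logic is confined to counts of endangered persons, with personal features excluded from its inputs by construction:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PathRisk:
    """All the decision logic is allowed to see: a path identifier and a
    count of persons endangered on that path. Personal features (age,
    gender, constitution) are deliberately absent from this type, so no
    downstream code can condition on them."""
    path_id: str
    persons_at_risk: int

def choose_path(options: list[PathRisk]) -> PathRisk:
    """General programming to reduce the number of personal injuries
    (which Rule 9 says may be justifiable): pick the path endangering
    the fewest persons, breaking ties by path_id so the outcome never
    depends on who the endangered persons are."""
    return min(options, key=lambda p: (p.persons_at_risk, p.path_id))

# Example: swerving "left" endangers two persons, "right" endangers one.
print(choose_path([PathRisk("left", 2), PathRisk("right", 1)]).path_id)  # right
```

The design choice here is structural rather than procedural: because personal features never enter the data type, no later optimization can reintroduce the distinctions that Rule 9 prohibits.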
In effect, therefore, the recommendation of Awad et al. [ ] that their paper’s empirical data be used to guide AV decision-making in dilemma situations would, if adopted by any original equipment manufacturer (OEM), violate the basic principles of the UDHR and the German Ethics Code, as well as age-old moral law.
Tort Law, Empirical Data, and the “Reasonable Person” Standard
Torts, or civil wrongs, are remedied or compensated following basic principles of tort law. These principles have evolved over centuries, primarily through case law (decided court decisions). Torts are broadly classified as intentional and unintentional torts. Unintentional torts are also known as cases of “negligence.” Negligence is defined as “the omission to do something which a reasonable man would do, or doing something which a reasonable and prudent man would not do” (emphasis supplied). In any case of negligence, a person (which includes a real as well as an artificial person, e.g., a corporation or industry) “is liable for harm that is the foreseeable result of his or her actions” ([ ]).
When a person (the injured party) files a lawsuit (court case) for negligence, he/she must prove that (a) the alleged tortfeasor (defendant) owed a duty of care to the injured party (plaintiff); (b) the defendant breached this duty of care; (c) the plaintiff suffered injury; (d) the negligent act (or omission) caused the plaintiff’s injury; and (e) the defendant’s negligent act (or omission) was the proximate cause of the plaintiff’s injury ([ ]).
In the landmark negligence case, Donoghue v. Stevenson, Lord Atkin developed the “neighbor principle,” which significantly expanded the scope of the tort of negligence and the duty of care owed under it. The key finding of the court in this case was:
The rule that you are to love your neighbor becomes in law, you must not injure your neighbor; and the lawyer’s question, Who is my neighbor? receives a restricted reply. The answer seems to be – persons who are so closely and directly affected by my act that I ought reasonably to have them in contemplation as being so affected when I am directing my mind to the acts or omissions which are called in question. (emphasis supplied)
With this tort law primer in mind, let us imagine that a car C is programmed by a car manufacturer M according to the recommendations of Awad et al.’s research [ ]. Let us further presume that C is purchased by a person P in country B (see above, where we presume that, following Awad et al.’s recommendations, country B is one where AVs are programmed to ensure that in dilemma situations, elderly persons are always spared). Now imagine that car C, while driving on the roads of country B, faces a dilemma situation and, following its program diligently, hits a child and causes its death. Let us assume, as is likely to be the case, that the child’s parents sue the owner of the car, P, for negligence and that the car’s manufacturer, M, is sued alongside. What might the decision of the court be?
It is relevant to note at the outset that, to the authors’ knowledge, no court has so far had an opportunity to consider such a matter (or one with a similar fact scenario). We can safely say that there is currently a great deal of uncertainty about the manner in which the “duty of care” under tort law would be interpreted and distributed in cases involving fully automated vehicles, and more particularly, in cases involving AVs facing dilemma situations. However, applying the basic principles of tort law as described above, we expect the following to be considered by the courts in Europe (and perhaps also by courts of other common and civil law countries that follow basic principles of equity, fairness, and justice).
Coming back to our fact scenario, the first thing one may ask is: Is P liable for negligence? Probably not: P had no choice but to resign himself/herself to the pre-programmed car’s decision to hit the child. All (s)he could do in the circumstance was wait and watch.
Even if one were to argue that P had the choice of not buying an AV at all, once the described situation occurs, the court will look for the “proximate cause” (see discussion above). In our case, it is the manner in which the car was programmed, and not the fact of the car’s purchase, that is the proximate cause of the harm caused [ ] (referring to Polemis: In re Arbitration Between Polemis and Furness, Withy & Co., Ltd. [1921] 3 K.B. 560).
Is M then liable for negligence? To answer this question, first and most importantly, the court will consider the duty of care and who owed this duty to whom. As stated earlier, negligence is defined as “the omission to do something which a reasonable man would do, or doing something which a reasonable and prudent man would not do.” The question that arises, therefore, is who is a reasonable man, and how would he/she have behaved in a similar circumstance? Can an action/inaction be deemed reasonable (or otherwise) based on empirical evidence or the opinions of many (other reasonable persons)?
The “reasonable person,” or the person famously described as “the man on the Clapham omnibus,” is not a real person, but rather a creature of legal fiction. Applying the “reasonable person” standard or test has been likened to applying the “Golden Rule” in both common and civil law systems [ ]. A person’s behavior is deemed reasonable in the eyes of the law (in a manner similar to behavior that is deemed “ethical”) when it is “fair, just or equitable. The person must be honest, moderate, sane, sensible, and tolerable” [ ].
At the outset, it can be reiterated that no clear, universal consensus has emerged from any research conducted so far (including from Awad et al.’s research [ ]) as to what choice a reasonable person would make in a given (dilemma) circumstance. It is also not at all clear that the decision would remain the same in all, including similar, (dilemma) situations, if they were to be faced multiple times.
Secondly, the terminology (“reasonable person”) is rather vague, as is the term “moral person.” Indeed, perhaps because the term’s interpretation is heavily dependent on context, courts have warned against attempting to define what this hypothetical “reasonable person,” a construct of legal fiction, would do by leading empirical evidence in court through, for example, the testimony of other “reasonable persons.” In the recent case Healthcare at Home Limited v. The Common Services Agency [ ], the UK Supreme Court stated as follows:
It follows from the nature of the reasonable man, as a means of describing a standard applied by the court, that it would be misconceived for a party to seek to lead evidence from actual passengers on the Clapham omnibus as to how they would have acted in a given situation or what they would have foreseen, in order to establish how the reasonable man would have acted or what he would have foreseen. Even if the party offered to prove that his witnesses were reasonable men, the evidence would be beside the point. The behavior of the reasonable man is not established by the evidence of witnesses, but by the application of a legal standard by the court. The court may require to be informed by evidence of circumstances which bear on its application of the standard of the reasonable man in any particular case; but it is then for the court to determine the outcome, in those circumstances, of applying that impersonal standard. [ ]
Court decisions are valuable guideposts, including in the currently under-regulated AV domain [ ], not least because cases of human injury, whether caused by human or machine actors, will eventually boil down to questions of legal liability. Because legal liability in tort law, in both civil and common law countries, is based on the “reasonable person” standard, it may be safe to presume, based on the above court decision, that using empirical data to determine “socially acceptable principles” may be highly problematic in a court of law, especially when dealing with questions of legal liability.
The above court ruling, therefore, calls into question research (such as that of Awad et al. [ ]) aimed at recommending the programming of “reasonable” and even “moral” behavior in machines based on empirical survey data. Pre-determining “moral” or “reasonable” behavior based on such statistical or empirical data would also send uncomfortable, inaccurate, and illegal signals as to how much specific categories of life are valued by a region or country, potentially creating a chain of other undesirable socio-cultural consequences. Policy and codified law, including computer code, which increasingly plays the role of law, must seek to implement higher moral goals rather than perpetuate the aggregated individual preferences (which, in the worst of cases, may be reflective of individual prejudices) of any statistical majority.
Pre-determining the outcome of any accident by programming AVs in a specific way, based entirely on what a large number of presumably reasonable persons have said in response to an empirical or experimental survey/study, is therefore most likely to be considered a violation of the duty of care under tort law. Such programming would also not accurately track, and would perhaps also disincentivize, manufacturers’ overall duties vis-à-vis ensuring vehicular safety and readiness to face complex driving scenarios.
Once a duty of care and its violation are established, the other elements of negligence clearly follow in our hypothetical trolley dilemma situation. Because of the way the AV was programmed, the child (or the elderly person) was injured/sacrificed. If not solely responsible, the programming was at least foreseeably and directly responsible for causing the injury. Had it not been for the programming, the person would have had at least a 50% chance of survival. Undoubtedly, programming an AV in this manner would result in placing all liability in the hands of the car manufacturer; several perplexing questions associated with liability in cases of accidents involving AVs may then disappear [ ].
In fact, it has been opined that individuals may then prefer AVs at level 5 automation, in order to be completely absolved from any responsibility and, therefore, also from any liability [ ]. This is problematic for several ethical reasons, as discussed in the following section.
Such empirical data-based programming may even result in a finding of strict liability (for M—the equipment manufacturer), because such programming arguably results in the AV becoming “inherently dangerous” for specific categories of persons, for example, for children in country B. It would also be a willful violation of the non-derogable right to life of children (or of the elderly, or even of those with criminal records), as discussed above. This would be the case even if the majority of the surveyed people in country B were to find the programming and its resulting casualty “reasonable” or “moral.”
From a legal perspective, it is also noteworthy that in a trolley dilemma situation, no matter what split-second decision a human driver makes, (s)he is likely not to be held liable either for negligence or under principles of strict liability (unless mala fides or recklessness is proved, in which case we would move out of the purview of negligence under tort law and towards criminal liability).
If, however, the recommendations of Awad et al. [ ] are used to design AV policy and regulation that permits car manufacturers (OEMs) to program AVs in the above manner (e.g., in country B), the express legislation would override principles of tort law that have evolved over centuries in close compliance with fundamental principles of equity, fairness, and responsibility. Such legislation would result in making no one liable (neither the OEM nor the car owner) and leave the injured party, and indeed the entire class of persons affected by the policy-based programming, without remedy.
In this context, it is also relevant to highlight the paper’s statement that “In summary, the individual variations that we observe are theoretically important, but not essential information for policymakers” [ ]. Imposing an alleged “universal ethic” on individuals whose personal (and very justifiable) moral conscience would urge them to act differently in dilemma situations is also a violation of the individual’s right to freedom of conscience (Article 10 of the EU Charter of Fundamental Rights) and the right to conscientious objection. The armchair “conscience” of an alleged “majority” cannot be used to impose allegedly conscientious choices on the whole. Not everyone who buys or uses AVs programmed in such a way would consider them “moral machines.”
In the context of the human rights to “equality” and “liberty,” it has been said that:
This understanding of the equality of all human beings leads “naturally” to a political emphasis on autonomy. Personal liberty, especially the liberty to choose and pursue one’s own life, clearly is entailed by the idea of equal respect. For the state to interfere in matters of personal morality would be to treat the life plans and values of some as superior to others.
Autonomy (liberty) and equality are less a pair of guiding principles – let alone competing principles – than different manifestations of the central commitment to the equal worth and dignity of each and every person, whatever her social utility…. Equal and autonomous rights-bearing individuals…. have no right to force on one another ideas of what is right and proper, because to do so would treat those others as less than equal moral agents. Regardless of who they are or where they stand, individuals have an inherent dignity and worth for which the state must demonstrate an active and equal concern. And everyone is entitled to this equal concern and respect…
Awad et al. themselves recognize that their sample is close to the internet-connected, tech-savvy population that is interested in driverless car technology [ ]. While the authors seem to suggest that the opinions of this group are more important, as its members will be the early adopters of the technology, it must also be remembered that in implementing the choice of the majority in the manner recommended (e.g., against all old people in country A or all young people in country B), the morality of the tech-savvy will be imposed on the population as a whole, especially on the weak and the poor who, while not being inside the AV, may be the victims of its generalized programming. Legislation based on such statistical findings would be alarming at the very least, and would violate the right to equal treatment under the law (Article 7, UDHR). Here again, Rule 9 of the German Ethics Code, which states that “Those parties involved in the generation of mobility risks must not sacrifice non-involved parties,” is centrally relevant and must be borne in mind.