Introduction
Design of a Meta-Ethical Robot
Nine Modules
- PL: the set of all tautologies.
- \(\square{\text{PL}} \mathop{=}\limits^{\hbox{df}} \{\square A : A \in {\text{PL}}\}\).
- US: uniform substitution.
- \(\hbox{RRSE}_{\rm T}\): from \(A\leftrightarrow B\) and F to infer \(F^{A/B}\), where \(F^{A/B}\) is the result of replacing one occurrence of A in F by B.
- MP: from A and \(A\supset B\) to infer B.
- SMP: from A and \(A\rightarrow B\) to infer B.
- OA: it is obligatory that A.
- u: that which is unconditionally obligatory.
- \(\circledR_{A}B \mathop{=}\limits^{\hbox{df}} A \rightarrow OB\): A requires B.
Flexibility of Architecture
- The non-deontic modules can be turned on and off.
- The deontic and meta-ethical modules are fixed.
Modus Operandi
The Unix shell script displayed in Fig. 1 illustrates this process. The countermodel generator MaGIC (Slaney 2008) and the theorem prover Prover9 (McCune 2008) run concurrently in the background (as indicated by the ampersand & at the end of the respective command lines) until one of them terminates. The value of the argument of sleep is irrelevant, because only one of the two programs can terminate, provided that the inputs to the programs are equivalent, the logical systems are consistent and the programs work correctly.
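The control flow of the script in Fig. 1 can be sketched in Python. The two command lines below are hypothetical stand-ins (simulated with the shell's `sleep` and `echo`) for the actual MaGIC and Prover9 invocations:

```python
import subprocess
import time

# Run a countermodel generator and a theorem prover concurrently; the
# commands are stand-ins for the real MaGIC and Prover9 invocations.
magic = subprocess.Popen(['sh', '-c', 'sleep 30; echo countermodel found'],
                         stdout=subprocess.PIPE, text=True)
prover = subprocess.Popen(['sh', '-c', 'sleep 1; echo THEOREM PROVED'],
                          stdout=subprocess.PIPE, text=True)

# Poll until one of the two processes terminates; as with the shell
# script's sleep, the polling interval does not affect the outcome.
while magic.poll() is None and prover.poll() is None:
    time.sleep(0.2)

# At most one of the two can terminate when the inputs are equivalent
# and both programs work correctly, so the other is simply killed.
winner = magic if magic.poll() is not None else prover
loser = prover if winner is magic else magic
loser.kill()

output = winner.stdout.read().strip()
print(output)   # THEOREM PROVED
```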
Meta-Ethical Reasoning Process
The robot uses the theorem prover Prover9 (McCune 2008) and two countermodel generators, namely Mace4 (McCune 2008) and MaGIC (Slaney 2008). The latter program is especially suitable in the case of relevant systems, but it was useful for the refutation of T3 in H as well. Prover9 quickly derives theorems T1, T2, T3 and D. The countermodel generators produce no result. From this, the robot concludes that \(\boldsymbol{\mathfrak{S}}_{\rm T}^{\circ}\) plus Mally is unacceptable in the light of its meta-ethical standards (see Lokhorst 2010 and Lokhorst 2011 for the details). This concludes the case of \(\boldsymbol{\mathfrak{S}}_{\rm T}^{\circ}\). The case of \({\bf \L}_{\aleph_0}\) is more difficult. It takes Prover9 several hours to prove that T1 and T2 are theorems of \({\bf \L}_{\aleph_0}\) plus Mally's axioms; the derivations of these formulas are about a hundred lines long. The derivations of T1 and T2 are presented in the Appendix, if only to demonstrate that it is not advisable to do this kind of reasoning without computer assistance. The remaining cases (H and the relevant systems) are relatively easy, both for computers and humans (even though the refutation of T3 in H required some human ingenuity, as described in Lokhorst 2011).
The table below gives the status of each formula in D1–D5 plus the system named at the head of each column (+ = theorem, − = refuted):

| Formula | \(\boldsymbol{\mathfrak{S}}_{\rm T}^{\circ}\) | \({\bf \L}_{\aleph_0}\) | H | R | RM | RM3 | KR |
|---|---|---|---|---|---|---|---|
| **T1** \(A\rightarrow OA\) | + | + | + | − | − | − | − |
| **T2** \(OA\rightarrow A\) | + | + | − | − | − | + | − |
| **T3** \(O(A\lor B)\rightarrow(OA\lor OB)\) | + | + | − | − | + | + | − |
| **D** \(OA\rightarrow\sim O\sim A\) | + | + | + | − | − | + | + |
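The table can be encoded directly, so that verdicts of the kind described above are reproducible mechanically. The acceptability test below is an illustrative simplification, not the actual meta-ethical standards of Lokhorst (2010, 2011): it rejects any system in which T1 or T2 is derivable, since either formula collapses the distinction between truth and obligation.

```python
# Encoding of the table: '+' = theorem, '-' = refuted, one character
# per system column.  The acceptability criterion (reject a system
# that derives T1 or T2) is a simplified stand-in for the robot's
# actual meta-ethical standards.
systems = ['S_T', 'L_aleph0', 'H', 'R', 'RM', 'RM3', 'KR']
table = {
    'T1': '+++----',
    'T2': '++---+-',
    'T3': '++--++-',
    'D':  '+++--++',
}

def acceptable(system):
    i = systems.index(system)
    return table['T1'][i] == '-' and table['T2'][i] == '-'

print([s for s in systems if acceptable(s)])   # ['R', 'RM', 'KR']
```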
Conclusion
- Different logical systems, both deontic and non-deontic.
- Different theorem provers and model generators, for example the award-winning Vampire and Paradox (Rabe et al. 2009).
- Different meta-ethical criteria, for example, acceptability and unacceptability of alternative deontic principles. Freedom from “is-ought fallacies” would be an example. Is-ought fallacies are formulas of the form \(A\rightarrow O B\) or \(A\rightarrow\sim O B\), where A and B contain no occurrences of O and u. It can be proven that R plus Mally’s axioms is the only system on our list that avoids such fallacies, but we doubt whether the computer is clever enough to see this.
- Different ways of reasoning about (moral) reasoning, for example in terms of computational complexity and computational tractability.
- More expressive languages, for example languages with perception, knowledge, action, multiple agents, strategies, intention and time (see Hoven and Lokhorst 2002 and Lokhorst and Hoven 2011 for further discussion). In this context, it is interesting to know that large parts of multimodal correspondence theory have recently been mechanized, which makes it easier to study the interactions of modalities such as belief, seeing to it that, possibility, obligation and temporal notions (Georgiev et al. 2006).
- On the fringe of logic and beyond logic: probabilistic reasoning, moral belief revision, the moral frame problem, non-monotonic reasoning and non-inferential moral judgment, for example case-based judgment and pattern recognition with neural networks (see Gärdenfors 2005 for more about these topics).
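The syntactic side of the is-ought criterion lends itself to mechanical checking. A sketch, assuming a toy encoding of formulas as nested tuples (`'->'` implication, `'~'` negation, `'O'` obligation, `'u'` the deontic constant); deciding whether a whole *system* avoids such fallacies of course requires theoremhood tests, not just this check on individual formulas:

```python
def mentions_deontic(formula):
    """True if the formula contains an occurrence of O or u."""
    if formula == 'u':
        return True
    if isinstance(formula, tuple):
        return formula[0] == 'O' or any(mentions_deontic(p) for p in formula[1:])
    return False

def is_ought_form(formula):
    """A -> OB or A -> ~OB, where A and B contain no O and no u."""
    if not (isinstance(formula, tuple) and formula[0] == '->'):
        return False
    a, c = formula[1], formula[2]
    if isinstance(c, tuple) and c[0] == '~':
        c = c[1]   # strip the negation in the A -> ~OB case
    return (isinstance(c, tuple) and c[0] == 'O'
            and not mentions_deontic(a) and not mentions_deontic(c[1]))

print(is_ought_form(('->', 'A', ('O', 'B'))))         # True
print(is_ought_form(('->', ('O', 'A'), ('O', 'B'))))  # False: antecedent mentions O
```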