Stability in action selection
As with all human and other ape (Whiten and van Schaik 2007) behaviour, our ethics is rooted both in our biology and our culture. Nature is a scruffy designer with no motivation or capacity to cleanly discriminate between these two sources of behaviour, except that what must change more quickly should be represented more plastically (Depew 2003; Hinton and Nowlan 1987). As human cultural evolution has accelerated our societies’ paces of change, increasingly our ethical norms are represented in highly plastic forms such as legislation and policy (Ostas 2001).
The problem with a system of action selection as extremely plastic as explicit decision making is that it can be subject to dithering—switching from one goal to another so rapidly that little or no progress is made on either (Humphrys 1996; Rohlfshagen and Bryson 2010).
2010). Dithering is a problem potentially faced by any autonomous actor with multiple goals that at least partially conflict and must be maintained concurrently. Conflict is often resource-based, for example visually attending to two children at one time, or needing to both sleep and work. An example of dithering in early computers was
thrashing—a process of alternating between two programs on a single CPU where each required access to the majority of main memory. Poor system design could result in an operating system allocating a slice of time to each process shorter than the time it took to be read into main memory from disk, preventing either program from achieving any of its real functions. More generally, dithering implies changing goals—or even optimising processes—so frequently that more time is wasted in the transition than is gained in accomplishment.
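The cost structure of thrashing can be made concrete with a small sketch. The numbers below are hypothetical, but the logic follows the scenario just described: if each process's slice of CPU time is no longer than the time needed to reload it from disk, the fraction of time spent on useful work falls to zero.

```python
def useful_fraction(time_slice, load_cost):
    """Fraction of each scheduling slice spent on useful work, in a
    simplified model where every slice begins by reloading the process
    from disk. If the slice is shorter than the reload, no work is done.
    (A sketch with assumed parameters, not a real scheduler.)"""
    useful = max(0.0, time_slice - load_cost)
    return useful / time_slice

# A generous slice leaves most of the time for real work...
print(useful_fraction(time_slice=10.0, load_cost=2.0))   # 0.8
# ...but a slice shorter than the reload cost yields pure dithering.
print(useful_fraction(time_slice=1.0, load_cost=2.0))    # 0.0
```

The same accounting applies to any goal switch: whenever the transition cost approaches the time allocated to the goal, more is wasted in the transition than is gained in accomplishment.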
Perhaps to avoid dithering, we as humans prefer to regulate social behaviour even in an extremely dynamic present by planting norms in a “permanent,” bedrock past, like the anchoring of tall buildings built over a swamp. For example, American law is often debated in the context of the US constitution, despite being rooted in British Common Law and therefore a constantly changing set of precedents. Ethics is often debated in the context of holy ancient texts, even when the ethical questions at hand concern contemporary matters such as abortion or robots about which there is no reference or consideration in the original documents. Societies tend to believe that basic principles are rational, fixed, and universal. Enormous changes in social order such as universal suffrage or the end of legalised human slavery are simply viewed as corrections, bringing about the originally-intended rather than a newly-improved (or worse, locally-convenient) order.
In fact our ethical structures and morality do co-evolve with our society (Waal 1996). When the value of human life relative to other resources was lower, murder was more frequent and less sanctioned, and political empowerment was less widely distributed (Johnson and Monkkonen 1996; Pinker 2012). When women can support themselves and their children independently, infidelity is viewed less harshly (Price et al. 2014). What it means to be human changes, and our ethical systems have to accommodate that change.
Fundamental social behaviour
As I implied when defining ethics, an ethical system will contain components addressing two problems:
1. Defining a society—discriminating it from others, and
2. Maintaining a society internally.
The first problem may underpin our psychological obsession with ingroup–outgroup dynamics. I have suggested elsewhere that a society may be defined by the public goods it creates and defends, and thus that the scale of a coherent economy may limit the size of a society (Bryson et al. 2014, cf. Powers et al. 2011). The second problem, however, could at least in theory be universal, and as such could also be a candidate for describing how AI might become a moral subject. Maintaining a society internally is also the topic of the rest of this section.
I begin by considering the most basic component of social behaviour: whether that behaviour is for or against society—pro- or anti-social. Assessing morality is not trivial, even for the apparently trivial, ‘robotic’ behaviour of single-cell organisms, which also behave pro- and anti-socially. For example, MacLean et al. (2010) demonstrate the overall social utility of organisms behaving in a way that at first assessment seems obviously anti-social: free riding off pro-social agents that manufacture costly public goods. Single-cell organisms produce a wide array of shared goods ranging from shelter to instructions for combating antibiotics (Rankin et al. 2010). MacLean et al. (2010) focus on the production of digestive enzymes by the more ‘altruistic’ of two isogenic yeast strains. Having no stomachs, yeast must excrete such enzymes outside their bodies. The production of these enzymes is costly, requiring difficult-to-construct proteins, and the resulting pre-digested food benefits not only the excreting yeast but also any other yeast in its vicinity. The production of these enzymes thus meets the common anthropological and economic definition of altruism: paying a cost to express behaviour that benefits others (Fehr and Gächter 2000).
In the case of single-cell organisms there is no ‘choice’ as to whether to be free-riding or pro-social. This is genetically determined by their strain, but the two sorts of behaviour are accessible from each other during reproduction (the construction of new individuals) via common mutations (Kitano 2004; Youk and Lim 2014). For such systems, natural selection performs the ‘action selection’ between goals by determining what proportion of which strategy lives and dies. What MacLean et al. (2010) show is that selection can operate such that the lineage as a whole benefits from mixing both strategies (cf. Akçay and Cleve 2016). The ‘altruistic’ strain in fact overproduces the public good (the digestive enzymes) at a level that would be wasteful on its own, while the ‘free-riding’ strain of course underproduces. Thus the greatest good—the most efficient exploitation of the available resources—is achieved by the species as a whole.
Why can’t the altruistic strain evolve to produce the right level of public goods? This returns to my earlier point about rates of plasticity. The optimal amount of enzyme production is determined by available food, and this will change more quickly than the physical mechanism for enzyme production in a single strain could evolve. However, death and birth can be fast and cheap in single-cell organisms. A mixed population composed of multiple strategies, where the high and low producers will always over- and under-produce respectively, and where their proportions can be changed very rapidly, is thus an agile solution. Thus the greater good of the species is served by the ‘selfishness’ of many of its members, but would not be so served without the presence of altruists.
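The agility of the mixed-population solution can be sketched numerically. In this illustration (all per-cell output levels are assumed, not taken from the cited studies), neither strain changes its biology; selection simply re-weights the proportion of each strain so that the population's mean output tracks a shifting optimum.

```python
# Assumed per-cell enzyme output of the two strains (arbitrary units):
HIGH, LOW = 1.0, 0.0   # 'altruistic' overproducer vs 'free-riding' underproducer

def mix_for_target(target):
    """Fraction of high producers needed for mean output to equal target."""
    return (target - LOW) / (HIGH - LOW)

def mean_output(frac_high):
    """Population mean output for a given fraction of high producers."""
    return frac_high * HIGH + (1 - frac_high) * LOW

# When available food shifts the optimum from 0.7 to 0.3, rapid birth and
# death re-weight the mix; no individual strain's mechanism has to evolve.
for target in (0.7, 0.3):
    f = mix_for_target(target)
    print(f"target {target}: {f:.0%} high producers, mean output {mean_output(f):.1f}")
```

The point of the sketch is the division of labour: the strains supply fixed extremes, while the fast variable—their relative proportions—does all the tracking of the environment.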
Human society also appears to up- and down-regulate investment in public goods (Bryson et al. 2014). We may increase production of public goods by calling their creation ‘good’, and associating ‘good’ with a social status that is beneficial in the socio-economic contexts where more public goods are beneficial. Meanwhile, self-interest and individual learning from direct reinforcement can be relied on to motivate and maintain the countervailing population of underproducers. For human society too, the ‘correct’ amount of investment may vary quickly with shifts in socio-economic and political context. For example, national military investment may be worthwhile under threat of invasion, but investment in local businesses may be more advantageous at other times. This implies that the reduction of others’ ‘good’ behaviour can itself be of public utility in times when society benefits from more individual productivity or self-sufficiency (cf. Trivers 1971; Rosas 2012). If so, we would expect that in such contexts it may also be easier for human institutions to change their overall assessment of which public goods require investment than to change the exact rate of output for all individuals (Bryson et al. 2014).
Is does not imply ought. The roots of our ethics do not entirely determine where we should or will progress. But roots do affect our intuitions. Our intuitions towards inclusion of artefacts in our society are probably driven by the extent to which we identify with such artefacts (Bryson and Kime 2011). This goes back to the biological account of altruism given in the definitions section: we are by nature willing to pay a higher cost for those more related to us. For humans, this ‘relatedness’ seems to extend also to those whose ideas we share (Plotkin 1995; Gardner and West 2014). This would allow us to be a phenomenally agile species, rapidly generating new societies to exploit available opportunities, particularly if (as seems to be true, Coman et al. 2014) we can prompt each other to focus on particular identities in particular circumstances.
Others have proposed using our intuitions as a mechanism for determining our obligations with respect to robots and AI (Dennett 1987; Brooks 2002; Prescott 2017). Because of their origins in our evolutionary past, and the simple observation of how patiency can be attributed to plush toys (Bryson and Kime 2011), I do not trust this strategy to create coherent ethics. I do however trust those with vested interests—such as interests in selling weapons, robots, or even books—to exploit such intuitions (Bryson 2010; Bryson et al. 2017). Although established precedent is close to my second objective proposed earlier for the justification we seek for a normative recommendation, I consider picking a precedent (in-group identification) that divides as much as it unites to be unsatisfactory. Such divisions seem particularly dated given that we can expect communication technology to increase the potential size of our social group (Roughgarden et al. 2006; Bryson 2015). In the next section I turn to philosophy as an alternative established source of criteria for making a normative recommendation, which I exploit in the sections following to propose a more coherent, minimally disruptive path to situating AI in our society, and (therefore) our ethics.