One reason why the discussion about human enhancement is interesting is that it helps us to question the philosophical-anthropological basis of the capability approach and its assumption that the relation between capabilities and technology is merely instrumental. Let me explain this.
Human enhancement and the question concerning the relation between humans and technology
Human enhancement aims at using technology to create better humans. What this means can best be clarified by saying what it is not: its aim is not therapeutic. It does not restore humans to a ‘normal’ state but wants to create humans that are ‘better than normal’, ‘better than human’. Information technology is one of the ‘converging technologies’ that can be used for this purpose, next to interventions in the human genome (e.g. by means of germ line engineering). For instance, Ray Kurzweil has suggested that we might ‘upload’ ourselves into digital spheres (Kurzweil 2005). Perhaps we are already ‘enhanced’, given that we extend our mind by using the internet: cognitive functions such as memory are ‘extended’ with the help of information technology. Applied to social network sites, one could say that their related social-technological practices ‘extend’ or ‘enhance’ our capability of affiliation. Like letters, phones, mobile phones, and other communication technologies, they extend our capability from face-to-face interactions to distant interactions. Moreover, they expand the quantity and frequency of possible contacts, enlarging our social world and revealing it as a network.
As this example shows, the distinction between therapy (and current or ‘normal’ use) and enhancement is vague. The alphabet, letters, and letter-writing were already a form of ‘enhancement’, not only in terms of communicative abilities but also of memory and reflection. Writing has always created an ‘external’ capacity for memory and has influenced the way we think (writing as a technology for thinking). Consider also the dramatic increase in the life span of people in the course of the twentieth century, caused by merely ‘therapeutic’ measures in medicine and health policy. Put in the language of capabilities, these ‘hardware’ modifications and ‘extensions’ meant that people could do more: in this case, they could enjoy improved capabilities of affiliation, health, and other capabilities.
In spite of this blurred boundary between enhancement and what is considered to be ‘normal’ or ‘therapeutic’, proposals for future human enhancement (e.g. by means of genetic modification) made by ‘transhumanists’ such as Bostrom (2003, 2005) and Harris (2007) have invited opposition from philosophers who fear a ‘Brave New World’ or see human enhancement as a threat to human dignity (see for example Kass 2003; Habermas 2001; Fukuyama 2002). For instance, Habermas has argued that if parents can decide about the genome of their (future) children, this would restrict their children’s range of options.
Against the ‘Brave New World’ objection, Agar and Harris have proposed ‘liberal’ versions of human enhancement (Agar 2004; Harris 2007). And in response to Habermas’s objections to genetic enhancement, Bostrom has argued that human enhancement is a threat neither to freedom nor to dignity. First, he argues that the child would not have less freedom than in the case when its genome is left up to chance. Rather, an enhanced individual ‘would enjoy significantly more choice and autonomy in her life, if the modifications were such as to expand her basic capability set. Being healthy, smarter, having a wide range of talents, or possessing greater powers of self-control are blessings that tend to open more life paths than they block’ (Bostrom 2005, p. 212; my emphasis). I will return to this thought. Second, Bostrom sees no reason why ‘posthumans’ would have less dignity:

Transhumanists (…) insist that dignity, in its modern sense, consists in what we are and what we have the potential to become, not in our pedigree or our causal origin. What we are is not a function solely of our DNA but also of our technological and social context. Human nature in this broader sense is dynamic, partially human-made, and improvable. (Bostrom 2005, p. 213)
It is my purpose neither to defend human enhancement nor to provide a comprehensive overview of the arguments for and against. Here I am interested in how this discussion can contribute to a critique and alternative development of the capability approach. Let us consider the following significant assumption of the capability approach. Nussbaum’s version of the approach sees technology only as a means, an instrument, which has nothing to do with what the human is (and with human dignity and the central capabilities). However, the transhumanist view as summarised above suggests otherwise. Capabilities—and hence what humans are—are not fixed but change together with our technological and social context. This is not only true for specific new capabilities (imagine that technology gave us wings) but also for the core capabilities that are related to what we consider (the core of) the human. Technology is not a mere ‘condition’ for human being in the sense of a means that can be used to achieve human ends; rather, human existence is already a human-technological existence. As Plessner put it in the language of an entirely different tradition (twentieth-century philosophical anthropology), we are ‘naturally artificial’ (Plessner 1928): technology is part of what we are.
If this is true, then those who oppose human enhancement can no longer rely, in their arguments, on a static view of human nature and an instrumental view of technology. Moreover, a discussion of the relation between human enhancement and capabilities allows us to reframe the normative question of the capability approach and, at the same time, to reformulate the normative question concerning human enhancement.
Using the language of capabilities, we could start by asking: Should we aim at human development (reaching minimum levels of capabilities) and perhaps human excellence (maximising levels of capabilities), or should we aim at human enhancement (changing the capabilities by technological or other means)? Considered as a normative project and redefined in terms of capabilities, transhumanist visions of human enhancement such as Bostrom’s aim not at evaluating or measuring the distance between existing levels and fixed, ideal levels of capabilities, but at moving and lifting the very ‘ideal’ or ‘maximum’ levels of capabilities themselves. Perhaps they even want to add new capabilities and erase others.
(It remains unclear what the status of new capabilities would be. Would they be ‘central’? Arguably, the status of capabilities could change, and further reflection on the status of new capabilities might blur the strict distinction between ‘central’ and ‘specific’ capabilities: some capabilities which we first regarded as specific may come to be seen as ‘central’. Take up the wings example again: if humans had wings, then we might come to regard flying as a ‘central’ capability, and anyone taking away the wings would then be regarded as violating an entitlement to a human capability. Indeed, flying with wings could be regarded as a changed capability of bodily integrity/freedom of movement: a human without wings might be considered as violated in her bodily integrity. Thus, it is not always clear whether a capability is ‘new’, and the answer to that question is itself subject to societal-cultural change. The example of wings also suggests that proponents of human enhancement may want to enhance aspects such as beauty or intelligence, and it remains to be discussed if and how these relate to capabilities—central or otherwise. Many of us would not put them on the list of central capabilities. However, as Nussbaum argues, the list is open-ended and its very boundaries could be the object of political deliberation.)
Of course, as with the standard capability approach, various technologies (as well as educational and other practices) can be considered as means for reaching the aim (maximising capabilities and adding new ones). But the difference with Nussbaum’s view is that here the norms (defined as end-levels of the capabilities) or new norms (the precise form of the new capabilities) are not fixed. In this way, this view goes beyond the ends/means scheme: the meaning of the capabilities themselves (the standards, the norms, the criteria, and the aims) is no longer considered as unchangeable. There is a dynamic relation between capabilities and technologies which can be defined neither exhaustively nor adequately in terms of ends and means.
However, if capabilities are already changing and have always changed, then the initial ethical question with regard to human enhancement (development or enhancement?) is not the right question to ask. The normative question is no longer whether we should change human nature but how we should change it. Moreover, given the blurred line between ‘enhancement’ and what is ‘normal’ or ‘therapeutic’, the question is no longer about ‘enhancement’ but about change. But what is the object of change? Using the capabilities concept, we can now ask more precise questions at a more ‘workable’ level of analysis. First we must ask descriptive questions: which capabilities and related practices change in which contexts, how do they change, and as a result of which interventions do they change? We must ask about the possible effects of a particular technological-social intervention, a particular environment, and so on, before we can make decisions about which changes, and indeed which norms, are desirable. For example, we should first know what particular Internet-related practices like on-line social networking do (or would do) to our capabilities before we can decide the normative question about the desirability of these techno-human practices. If there are plans for a new technology, this means we have to try to imagine what it would do to human capabilities.
Using the language of capabilities allows us to ask precise normative and descriptive philosophical questions at a level that is situated between vague general notions such as ‘human nature’ (often used in traditional philosophy) and all too concrete and ontologically atomistic notions such as ‘genes’, ‘neurons’, or ‘codes’ (often used by scientists). This can throw new light on a long-standing methodological difficulty. Ethics and philosophical anthropology are usually challenged to choose between naturalist and non-naturalist approaches. For instance, we are asked to choose between a naturalist view of the human as embodied brain (neuroscience as a gateway to knowledge about the human, determinism) and the human as having a ‘special’ moral and metaphysical position (involving notions such as free will—for instance Kantian views of the human). But if we use capabilities as an in-between concept, this might turn out to be a false dilemma for at least the following reasons.
Scientific challenges to the instrumentalist assumption
First, naturalist or scientific views need not be reductionist and even from a ‘purely’ scientific point of view it is not very fruitful to deny the interrelations or even merging of ‘natural’ and ‘artificial’, of ‘nature’ and ‘culture’, of humans and technology. To forge firm conceptual connections between, on the one hand, the human and, on the other hand, technological practices and social contexts, is not only imperative for philosophers who want to understand the human but is also a matter of doing good science.
Consider controversies about (interventions in) the human genome. Those who argue against ‘genetic enhancement’, or those who embrace the idea without knowing much about it, may actually overestimate the impact of interventions in the ‘genes’ of people and hold a more deterministic view than the scientists who work on it. As Lewontin has argued, genes should not be considered in isolation; instead, biologists now accept that there is a complex interplay between genes, organism, and environment (Lewontin 2001). There is neither determinism nor reductionism here: biological traits are understood as being the result of genes, chance, and environment; organisms are open systems (Lewontin 2001, 113). Genes are neither a ‘blueprint’ nor a central controller. Moss has argued that ‘gene-centric’ views, which place genes in central control of the organism’s development, must be replaced with a de-centralised approach that includes intercellular, biochemical and sociological factors (Moss 2003). And Salvi has shown that the claim that it is possible to manipulate human germ cells in a pre-ordained way is unrealistic, since the long-term consequences of such interventions in individuals, in generations, and in populations are unpredictable: ‘we cannot predict whether the gene manipulation will produce a possible expression of the desired character or if it will cause a cascade of events determining a gene dysfunction […] or subsequent mutations’ (Salvi 2002, 74). And if we cannot predict the phenotypic expression of bioengineered genes, then we cannot know what it will do to the individual—including whether or not it will ‘enhance’ that individual. Salvi even calls germ line engineering to enhance humans ‘biologically nonsensical’ (Salvi 2002, 76). This casts doubt on the promises concerning germ line engineering made by transhumanists and suggests that both defenders and opponents of human enhancement risk assuming a simplistic, reductionist view of what science and technology can know and do.
Ethics of information technology is vulnerable to a similar risk if and in so far as its arguments are based on outdated and inadequate ontologies and anthropologies. People working in information science and information technology have moved on from considering symbolic systems in isolation (traditional AI) to non-Cartesian approaches in cognitive science and philosophy of mind focusing on embodied cognition, learning by interaction with the environment, the extended mind (Clark and Chalmers 1998; Clark 2003), and so on. AI has moved from the design of intelligent computers (say, a computer that can play chess) to the design of intelligent robots, that is, embodied and interactive AI systems that learn by interacting with their environment, display ‘emotions’, etc. This orientation may still be ‘naturalist’ or ‘informationalist’ but fits better with approaches in the social and human sciences—and indeed the life sciences—than previous methodologies. Moreover, views such as the ‘extended mind’ thesis (see again Clark and Chalmers) are compatible with non-instrumentalist views of the relations between humans and technology, since they do not consider information technologies as mere tools that stand apart from our minds but as part of our cognitive-embodied whole.
If ethics of information technology wants to take these new developments into account, it has various options for doing so. Here I propose that we explore ‘capabilities’ as a concept that allows us to make ‘translations’ from science to ethics and back. Capabilities depend on minds-as-embodied, but also on the technologies and social environments that are firmly linked with that cognitive-embodied whole. Technologies, then, are not a mere means that contribute to human ends, but part of a techno-anthropological whole that has technological, cognitive, biological and social dimensions and which constitutes individual capabilities. Moreover, these capabilities are not fixed but unstable and changing. For instance, the capability of ‘political participation’ emerges from a dynamic interplay of beliefs, values, emotions, and the technological-social environments in which these dimensions are shaped (and which this interplay in turn changes). If the concept of ‘capabilities’ is understood in this way, it allows us to talk at a sufficiently high level of abstraction and organisation, thus avoiding atomistic and reductionist views of humans, while at the same time distancing ourselves from philosophical approaches that use vague notions such as human dignity without making explicit what such notions mean for beings like us, who can only function, exist, think and live as cognitive-embodied beings in interaction with concrete technological and social environments.
Viewed from this perspective, the discussion about human enhancement in relation to information technology is not about ‘technology’—at least if that term is understood in terms of material devices such as computers and mobile phones considered in isolation from the human. It is about the human-as-already-shaped-by-technology. Furthermore, and this is important given my purpose in this paper, this perspective also allows us to revise some crucial assumptions that support current versions of the capability approach. Technology no longer appears as a mere means to human ends—i.e. as material or technological conditions for capabilities—but as part of continuously changing human-technological functionings and practices which resist categorisation in terms of means or ends alone.
An additional advantage of the capability approach is that it allows us to go even beyond ‘extensionalism’ and discussions about ‘where the mind is’, since it takes a functional approach. The stress is on what people (are able to) do rather than on the mind or cognitive architecture and its much-discussed relation to brains and bodies. For instance, instead of comparing brains (wetware) to computer hardware, or human minds to robot ‘minds’, a focus on capabilities allows us to focus on what it is like to live our lives with a particular information technology or information practice, like using a social network site. The entry point of analysis is the capability (e.g. social affiliation), that is, what the technology enables us to do and what we actually do, rather than the particular hardware or software (e.g. a PC or a mobile phone). As such, the concept of capability as a functional approach transcends mind–body, software–hardware, and other dualisms and is in this form different from other scientific approaches that try to solve mind–brain and mind–body puzzles. At the same time, it pays sufficient attention to the material and social conditions that make functioning possible.
Second, however, if this methodological intervention is to be really successful with regard to the aim of overcoming the previously mentioned naturalism/anti-naturalism problems and the problem regarding the human-technology relation, a further step is necessary. Scientific, naturalist approaches alone are not enough; we also need to attend to human subjectivity and turn to concepts that belong to a different, more phenomenological-hermeneutic tradition: engagement, interpretation, translation.