Published in: Ethics and Information Technology 4/2021

Open Access 17-05-2021 | Original Paper

Non-consensual personified sexbots: an intrinsic wrong

Author: Karen Lancaster


Abstract

Humanoid robots used for sexual purposes (sexbots) are beginning to look increasingly lifelike. It is possible for a user to have a bespoke sexbot created which matches their exact requirements in skin pigmentation, hair and eye colour, body shape, and genital design. This means that it is possible—and increasingly easy—for a sexbot to be created which bears a very high degree of resemblance to a particular person. There is a small but steadily increasing literature exploring some of the ethical issues surrounding sexbots; however, sexbots made to look like particular people have not yet been philosophically addressed in that literature. In this essay I argue that creating a lifelike sexbot to represent and resemble someone is an act of sexual objectification which morally requires consent, and that doing so without the person’s consent is intrinsically wrong. I consider two sexbot creators: Roy and Fred. Roy creates a sexbot of Katie with her consent, and Fred creates a sexbot of Jane without her consent. I draw on the work of Alan Goldman, Rae Langton, and Martha Nussbaum in particular to demonstrate that creating a sexbot of a particular person requires consent if it is to be intrinsically permissible.

Introduction

Humanoid robots used for sexual purposes (hereafter ‘sexbots’) are becoming increasingly lifelike due to advances in robotics and silicone-based materials. It is now possible for users to have a bespoke sexbot created which matches their exact requirements in skin pigmentation, hair and eye colour, body shape, and genital design (see Realdoll, 2019). This means that it is possible—and increasingly easy—for sexbots to be created which bear a very high degree of resemblance to a particular person.
In the UK, as well as in many other countries, there are currently no legal restrictions on the production or ownership of sexbots which are copies of real adults.1 In 2016, a high-profile case involved a man who created a sexbot to represent actress Scarlett Johansson (see Glaser, 2016) and although this seems distasteful and intrusive, Johansson was unable to bring any legal action against the sexbot creator, as he had not broken any law.2 Sexbot production company Realdoll produce sexbot replicas of pornographic actress Stormy Daniels, and these are on sale to the public; Realdoll have also made a single sexbot replica of comedian Whitney Cummings for her stand-up special Can I Touch It? (Grey Ellis, 2019). Evidently celebrity sexbots are already being created—and so are sexbots of non-celebrities. An article in The Mirror reports that some widowers are purchasing sexbots which have been made to look like their dead wives, as a way to deal with their bereavement, and they have not faced any legal restriction in doing so (Parsons, 2017). It seems only a matter of time before people will begin to create sexbots of their colleagues, friends, acquaintances, partners or ex-partners—with or without their consent. It is legally and philosophically essential that the potential wrongs of such acts are discussed and understood before they begin to occur with any frequency.
In this essay I demonstrate that creating a sexbot to represent someone is an act which morally requires consent; to do this without consent is therefore intrinsically wrong. I acknowledge some negative consequences which may occur due to sexbot production, but I set these aside as they turn out to ground solely instrumental reasons; I focus on intrinsic wrongs alone. I argue that regardless of any harms which may or may not ensue, creating a sexbot to represent someone without their consent is intrinsically wrong.
Although the cases above and others like them have captured the attention of the popular media (Cuthbertson, 2015; Glaser, 2016; Parsons, 2017), this area is still under-theorised in the philosophical literature. There is a steadily increasing literature on sexbots, and the ethical problems they raise. One topic which has gone largely unexplored, however, is the wrongness—either instrumental or intrinsic—of creating a sexbot to look like a particular person. I offer a novel argument on this issue herein.
My essay proceeds as follows: in the “Terminology” section, I outline some of the terminology I use, and in the “Scenarios” section I present the scenarios under consideration. In the “Objectification and sexbotification” section I examine some literature on sexual objectification which I draw on in the arguments which follow, and I show why creating a sexbot to represent someone is an act of objectification. Then in the “Sexbotification without consent” section I show that creating a sexbot which represents someone is a sexual act directed towards a particular person, and morally requires consent. I detail why consent is morally transformative in the scenarios I outline, by comparing an example of non-consensual sexbot creation with a consensual example. Ultimately, I establish that when consent is obtained, creating a sexbot to represent someone is intrinsically permissible, but when consent is not obtained, it is intrinsically wrong.
In the scenarios I outline, the sexbot creators are male and the sexbots (and people whom they represent) appear female. This fits with the general trend in the creation of sex dolls and sexbots; however, it should not be inferred that this is the only possible distribution of sexes in sexbot debates.

Terminology

All humans mentioned in this paper are adults of normal mental faculty; they are all persons (by whatever criteria we use to define personhood), but the sexbots are not. In future, it may be possible for roboticists to create artificially intelligent robots which could meet some philosophers’ criteria for personhood; love robots or ‘erobots’ are being developed (see Anctil & Dubé, 2019) and programmers are making huge leaps forwards in the creation of interactive and socially intelligent robots (Dautenhahn, 2007). Nevertheless, robotic persons do not exist at the time of writing; sexbots which are currently on sale commercially (4woods, 2018; Realdoll, 2019; TrueCompanion, 2019) have some rudimentary robotic functions, such as smiling, moving their heads, eyes and arms, and having simple conversations, but they fall short of philosophical definitions of a person. I write as if the same is true of the sexbots under discussion in this paper. When I say that a sexbot is lifelike, I mean it is lifelike in its looks, rather than in its cognitive capacities. Although one might be morally concerned about the ethical implications of creating a non-robotic sex doll which represents someone against their wishes, the intrinsic wrongness of making a sex robot is all the more pertinent because the creations have human-like robotic functions: robotics and artificial intelligence are improving rapidly, and so sexbots are becoming more lifelike in their behaviour and appearance, and less like mere inert sex dolls.3 The movements and conversational abilities of a sexbot contribute to its appeal, making it seem more lifelike and more similar to a human than a non-moving sex doll would be (Danaher, 2017a). Despite their humanlike features, the sexbots available for sale and under discussion in this paper are objects which cannot be morally wronged.
Because philosophical exploration of sexbots is still in its infancy, there is presently no terminology to differentiate between types of sexbot; it will be useful for me to separate sexbots into two broad types, which I call generic sexbots and personified sexbots. The two categories are exclusive but not exhaustive, and although there may be some sexbots which do not fit neatly into either category, differentiating between the two types of sexbot will be useful to my argument herein, and to future arguments regarding sexbots. I define the first type of sexbot (a ‘generic sexbot’) as one which has not been intentionally made to represent or resemble a specific person. The creator would simply choose a set of features such as blonde hair, an hourglass figure and so on, and synthesise these to form a sexbot. I define the other type of sexbot (a ‘personified sexbot’) as one which is intentionally a representation of a particular person (the ‘human subject’). In personified sexbot creation, the creator uses the human subject as a template, and ‘copies’ their features to form a sexbot which is intended to, and does in fact, look just like the human subject. I use the term ‘sexbotification’ to refer to the process of using someone’s looks as a template for creating a personified sexbot.
Consider a case where someone creates a sexbot of his imaginary ideal woman, but by coincidence the sexbot closely resembles someone he has never seen. It might be thought that this is an example of a personified sexbot, however it is not, as the sexbot is not a representation of that particular human subject. Hilary Putnam gives an example which helps to clarify why the creation of a sexbot which accidentally resembles someone does not count as a sexbot representing that person. Putnam’s example involves an ant which crawls on some sand and by chance, carves out something which resembles a caricature of Winston Churchill (Putnam, 2012, p. 1). Putnam rightly argues that the lines in the sand are not a representation of Churchill because there was not an appropriate accompanying disposition in the mind of the creator—in other words, the ant was not trying to draw a picture of Churchill (ibid 1–5). If an artist intentionally produces a caricature of Churchill, then even if the ant’s line in the sand is qualitatively indistinguishable from the artist’s work, we should still say that the artist’s picture is a representation of Churchill, but the ant’s is not (ibid). Analogously, suppose two men create sexbots, where Alan creates a generic sexbot of his imaginary ideal woman, and Bob creates a sexbot which represents a woman he knows—Cara. Even if by pure chance both sexbots are qualitatively indistinguishable from one another (and from Cara), we should still say Alan’s sexbot is a generic sexbot which does not represent anyone, while Bob’s is a personified sexbot which represents Cara. Although Alan’s sexbot serendipitously resembles Cara, this is not sufficient for us to say that it is a representation of her. This is because (analogous to the ant example) there was not an appropriate accompanying disposition in the mind of Alan: he was not trying to produce a sexbot which represents Cara. Thus, Cara has not been sexbotified by Alan, but she has by Bob.
Whilst the distinction between personified and generic sexbots is very useful, there is not always a neat dividing line between the two types of sexbot I distinguish. For example, someone might create a sexbot which is intended to represent a particular human subject but it turns out to bear little resemblance to her. Alternatively, someone may make a sexbot which is a hybrid of the ‘best’ features of two human subjects—for example, the face of one woman, but the body shape of another. Although there are cases such as these which demonstrate that the distinction between personified and generic sexbots is not always clear, the broad categories are still useful. Besides, such difficulties are not problematic for my argument herein, since the scenarios I outline below concern clear cases of personified or generic sexbots. I draw on the distinction between these different types of sexbots in the argument that follows. I do not address whether generic sexbots objectify everyone or even just one person who serendipitously resembles the sexbot. I merely argue below that consent is required from the human subject when making a personified sexbot.
I will now briefly distinguish between two ways in which an action can be considered to be wrong; it is important to delineate these ways because my argument focuses on only one way in which an act can be wrong, viz. intrinsic wrongness. According to moral philosophers, the wrongness of actions can be characterised in two ways: instrumental wrongness and intrinsic wrongness. Broadly speaking,4 consequentialist ethicists such as utilitarians claim that an act is instrumentally wrong if it brings about negative consequences (see Bentham, 2000, pp. 14–18). A second way of understanding what makes an act wrong is more deontological in nature; a prominent example of such a moral system is that outlined by Kant (1997). For Kantian ethicists, the rightness or wrongness of an act is determined by the nature of the act itself, meaning that an act can be intrinsically wrong even if it does not cause any harmful consequences. For example, stealing is said to be intrinsically wrong, even if no one ever finds out or is harmed by it. Instrumental wrongness and intrinsic wrongness are not mutually exclusive: someone might consider an act to be both instrumentally wrong and intrinsically wrong, such as murdering a child.
Several writers debate whether it is wrong to create or use sexbots (Danaher, 2017b, c, 2019; Di Nucci, 2017; Harvey, 2015; Levy, 2008; McArthur, 2017; Migotti & Wyatt, 2017; Richardson, 2016). These writers only discuss the potential consequences of creating what I am calling generic sexbots, but no writers that I am aware of address the creation of personified sexbots.5 This is what I attempt to do below.

Scenarios

I consider two scenarios to highlight the intrinsic wrong in creating a personified sexbot without consent, as compared to creating one with the consent of the human subject. The scenarios are:

Scenario A

Roy and Katie are acquaintances. Roy asks Katie whether she would agree to him creating a sexbot to represent her. Katie gives her consent,6 and Roy creates the sexbot. The sexbot is lifelike in its looks, and is almost indistinguishable from Katie herself. Roy then uses the sexbot to perform sex acts.

Scenario B

Fred and Jane are acquaintances. Fred asks Jane whether she would agree to him creating a sexbot to represent her. Jane does not consent: she asks him not to do it, but Fred creates the sexbot anyway. The sexbot is lifelike in its looks, and is almost indistinguishable from Jane herself. Fred then uses the sexbot to perform sex acts.
I will consider but ultimately dismiss the possibility that the actions of Roy are intrinsically wrong; instead, I will maintain that Roy does something intrinsically permissible, whereas Fred does something intrinsically wrong when he sexbotifies Jane without her consent.
There is a difference of opinion among philosophers and legal scholars regarding whether consent is binary (a person either does or does not consent) or whether there is a spectrum of consent (a person can consent a little bit, a medium amount, quite a lot, and so on). It is not necessary for me to explore this issue in detail here, as my argument should be appealing to proponents of either view of consent. Proponents of the ‘spectrum’ type of consent can assume that when I refer to someone as ‘giving consent’ or ‘not giving consent’, I mean to refer to someone whose consent-giving (or withholding) lies at one end of the spectrum or the other.
Let me now explain why I consider making a sexbot and having sex7 with a sexbot in tandem with one another. It would be a mistake to presume that in all situations, creating and using an artefact are moral equivalents. For example, using a murder weapon seems to be more morally condemnable than making the weapon, whereas the reverse seems to be the case with revenge porn.8 With personified sexbot creation, creating the sexbot and having sex with the sexbot are very much linked. It may be possible for someone to consent to the creation of a sexbot without consenting to anyone having sex with the sexbot (although this would be odd, and we might think that the resulting robot is not really a sexbot). However, the reverse is not the case; if one consents to someone using a sexbot, one must consent to the existence of the sexbot, for a sexbot cannot be used if it has not been created. Thus, consent to the creation of a personified sexbot is a prerequisite to consenting to a person having sex with it. In a case such as Jane’s (Scenario B) where she has not even consented to the creation of the sexbot, it is necessarily implied that she does not consent to anyone having sex with the sexbot. There would be a real tension if a human subject were to say they definitely do not want a sexbot representing them to be created, but if one is created against their wishes, they consent to someone having sex with it. It should now be clear that the creation of a personified sexbot and its use are inherently linked. Since sexbotification occurs prior to sexual use, hereafter, I shall only refer to the former. Thus ‘creating a sexbot’ (or ‘sexbotification’), should be understood as meaning ‘creating a sexbot and someone having sex with it’.9
I will now outline some of the arguments on sexbots and objectification; the literature will be essential when I come to make the moral assessments of the two scenarios in the “Sexbotification without consent” section. Philosophical literature regarding sexbots is still embryonic. That which does exist takes a balanced (albeit cautious) approach to their creation (Danaher, 2017b, c, 2019; Di Nucci, 2017; Harvey, 2015; Levy, 2008; McArthur, 2017) but philosophers generally fall short of calling for an outright ban (cf. Danaher (2017a), who suggests there may be reason to ban sexbots designed to simulate rape and child sex abuse). Some of the most vehement arguments against sexbots come from feminist pressure groups such as the Campaign Against Sex Robots, spearheaded by Kathleen Richardson. Through her online campaign and her academic writing Richardson maintains that it is wrong to create even what I call a generic sexbot (Richardson, 2016). She states that sexbots:
1. contribute to inequalities in society;
2. sexually objectify women;
3. show the immense horrors still present in the world of prostitution which is built on the perceived inferiority of women;
4. reduce human empathy that can only be developed by an experience of a mutual relationship;
5. reinforce power relations of inequality and violence;
6. create more demand for human bodies, rather than reducing prostitution.

(Summarised from Campaign Against Sex Robots, 2015)
This is an incomplete list of the plethora of potential harms which sexbots could cause or exacerbate. Points 1, 4, 5, and 6 all refer to instrumental rather than intrinsic wrongs, and if true, women will be the primary victims, given that the vast majority of sex dolls and sexbots are female in appearance (4woods, 2018; Realdoll, 2019; TrueCompanion, 2019). With regard to point 3, ‘showing immense horrors’ might not be called wrong at all—after all, the news shows the horrors of the world (child abductions, mass shootings and so on) but it is not generally considered wrong to broadcast or print the news. Nonetheless, even if one maintains that it is wrong for a roboticist to show the horrors of the world by creating a sexbot, then this too is instrumentally rather than intrinsically wrong, because it is wrong in virtue of its bringing about bad consequences such as our feeling upset. It is evident that points 1 and 3–6 are potentially negative consequences rather than intrinsic wrongs, by considering a thought experiment such as this:
Imagine that a man – Eric – ceases contact with all other people in society (perhaps he is on a deserted island), and he creates a generic sexbot which is female in appearance. No one else ever discovers what has taken place (perhaps the sexbot is destroyed in a fire which also kills Eric).
It is easy to see how points 1, 3, 5 and 6 are potential harms, but not necessary features of sexbot creation, because they do not occur when Eric creates his sexbot in isolation. The reduction of empathy (point 4) does not seem problematic for someone such as Eric who does not interact with other people; even if his empathy were reduced by the sexbot, this would not be intrinsically wrong. Since in this essay I am only concerned with intrinsic wrongs, points 1 and 3–6 can therefore be set aside; they are not necessary features of sexbot creation, nor do they refer to intrinsic wrongs. Unlike the other points, point 2 (that sexbots objectify women) does seem to describe something which could be an intrinsic wrong if it is true. However, below I argue that although creating a personified sexbot is an act of objectification, such objectification is only impermissible when non-consensual.
Although there is no discussion in the philosophical literature of sexbots resembling real people, there is plenty of literature regarding objectification and consent. Below I outline some conceptions of objectification which I later build on to highlight why non-consensual sexbotification is intrinsically wrong.

Objectification and sexbotification

In this section I outline why creating a personified sexbot is a form of sexual objectification; I will later establish that doing so without consent is intrinsically wrong. Objectification involves using a person as if they are an object rather than a person. Kant famously instructed us never to treat a person as “merely as a means” (Kant, 1997, p. 38). Kant’s claim is that we should respect the humanity of other people, and not use them solely for our own gain.10 More recently, this idea has been taken up by feminist philosophers who write about sexual objectification (Dworkin 1974; MacKinnon, 1987, 1989; Nussbaum, 1995, 2007). Drawing on the work of Kant, Nussbaum maintains that objectification is not always wrong, but it becomes wrong when it involves treating someone as a mere means to an end; this is the conception of objectification that I follow. Nussbaum outlines seven features which form a cluster concept of objectification. These involve treating someone as if:
1. they are a tool (‘instrumentality’);
2. they lack autonomy (‘denial of autonomy’);
3. they lack agency (‘inertness’);
4. they are interchangeable with other things (‘fungibility’);
5. they lack boundaries (‘violability’);
6. they can be owned/bought (‘ownership’);
7. their feelings need not be taken into account (‘denial of subjectivity’) (Nussbaum, 1995, p. 257).
Nussbaum acknowledges that it is of course morally permissible to treat an inanimate object as a tool, and this is not objectification (ibid). I commit no intrinsic wrongdoing when I show no concern for an object such as my mobile phone, or I treat it as a tool for my own use. The sexbots I consider herein cannot be morally wronged, so it may appear that showing no concern for a sexbot or using it as a tool is as permissible as treating a mobile phone in this way. Clearly, one cannot commit an intrinsic wrong against a sexbot when it is a mere object—however, one can commit an intrinsic wrong against the human subject whom the sexbot represents.
Langton largely supports Nussbaum’s very useful list of seven features, but adds another three which she suggests are also salient features of objectification:
8. identifying someone as their body parts (‘reduction to body’);
9. treating someone in terms of how they appear to the senses (‘reduction to appearance’);
10. treating someone as if they cannot speak (‘silencing’) (Langton, 2009, pp. 228–229).
Langton’s first two features (although very similar to one another) help to distinguish sexual objectification from, for example, a burglar, who meets most of Nussbaum’s features, but whom we would not think of as objectifying his victim (Langton, 2009, p. 230). Langton’s ‘silencing’ bears some similarities to Nussbaum’s ‘denial of subjectivity’, but they are distinct: for example, I may treat a toddler as if he cannot speak, whilst still acknowledging his feelings and taking these into account.
For now, let us leave aside the issue of whether objectification is wrong, and consider solely whether sexbotification is a form of objectification, using Nussbaum’s and Langton’s ten features.11 Firstly, sexbotification meets two of Langton’s features of objectification (Langton’s features no. 8 and 9) because it reduces the human subject to her body and her appearance. Indeed, because the types of sexbots that can currently be created are very lifelike in their looks but not in their cognitive capacities, the primary appeal of a sexbot lies in its human-like bodily appearance. A skilled sexbot creator can reproduce a person’s bodily appearance with astonishing similarity, but there will be little other similarity between the sexbot and the human subject. The human subject’s body and appearance are what is important when creating a personified sexbot; their personality, intelligence, interests, sense of humour, and other features are seemingly unimportant. A personified sexbot is a copy of a person’s bodily appearance, so it seems apt to say that sexbotification involves reduction to body and appearance to the senses. When a sexbot creator uses the human subject’s appearance as a ‘blueprint’ for the sexbot’s looks, this also seems to be a form of instrumentalisation (Nussbaum’s feature no. 1). Of course, we may say that a portrait artist also instrumentalises people in this way, and that such an act is prima facie morally permissible. But at this stage I am not commenting on the permissibility of sexbotification, merely establishing that it is a form of sexual objectification. We might also say that creating a personified sexbot involves treating the human subject as someone who lacks boundaries (Nussbaum’s feature no. 5) because the sexbot will be ever-consenting. The sexbot can also be bought or owned (Nussbaum’s feature no. 6).
However, we might think that a sexbot creator can believe that a sexbot can be bought and has no boundaries, whilst recognising that the same is not true of the human subject whom it represents. Nonetheless, since the sexbot is a physical representation of the human subject, creating it evidently involves treating the human subject in a sexual way, just as a pornographic photographer or film director treats the actors in a sexual way by creating the pornography.
Given that Nussbaum’s and Langton’s features form a cluster concept of objectification, it is not clear how many of the features need to be met in order for something to count as objectification. I have claimed above that sexbotification meets roughly five of the ten features identified by Nussbaum and Langton, and thus it is probably apt to be called a form of objectification; others may disagree. But I will show below that non-consensual sexbotification meets more of Nussbaum’s and Langton’s features of objectification, and so even if someone maintains that (consensual) sexbotification is not a type of objectification because it does not meet enough of the features, then they can still accept my main argument that non-consensual sexbotification is objectification, and an intrinsically wrong form of objectification at that.
Even if one rejects Nussbaum’s and Langton’s list of features, it is clear that sexbotification is a form of sexual objectification because of the similarities it bears to other types of pornographic creations. Pornography treats women as sex objects (Jütten, 2016, p. 41; Langton, 2009; MacKinnon, 1987), displaying their bodily orifices as ripe for male penetration. Taking intimate photos of someone is sexually objectifying them, even if no direct physical contact occurs between the human subject and the photographer. Indeed, deepfake pornography12 can be produced by an agent who has never even met the person in the videos they create. Non-consensual sexbotification would appear on McGlynn et al.’s (2017) ‘continuum of image-based sexual abuse’, along with upskirting,13 revenge porn, and deepfake porn. But sexbotification is even more clearly a sexual act than is making pornographic photos or videos of someone, because the result (i.e. a sexbot) is fully physically interactive and capable of sexual intercourse, whereas photos and videos are merely for looking at.
It may be the case that it is intrinsically wrong to non-consensually create any kind of sexual product (such as sex dolls, pornography, and realistic sexual artwork), though I do not establish that here. However, the highly lifelike appearance and interactive nature of sexbots means that their creation is all the more troubling: a sexbot could potentially look and feel just like the human subject. If a user wants to have a sexual experience with S, then a personified sexbot of S will provide a far more realistic approximation than artwork, photographs, videos, or even an inert sex doll would provide. The intrinsic wrongness of making a fully interactive sexbot of someone without their consent therefore seems more pressing than other sexual wrongs such as taking photos without consent and looking at them whilst masturbating. Sexbotification is a shortcut; a way of fulfilling one’s fantasies without making direct physical contact with the human subject whom the sexbot represents. But as pornographic photography shows us, one need not touch another person in order to sexually objectify them. Although sexbotification falls short of using and abusing the human subject herself, there are few acts closer to engaging in real sexual intercourse with her; in sexbotifying a woman, the creator produces an ever-consenting14 automaton with her face and appearance. Sexbotification of a person replicates and represents the desired human subject, but with just a few modifications: the most depraved and violent sex acts can be performed on a sexbot without recourse. We might even refer to sexbotification as a form of “remote sexual violence”, as it has been suggested that upskirting and revenge porn are (Wittes et al., 2016).
Creating a personified sexbot is different from creating a generic sexbot, probably because we see our images as an extension of ourselves; sexbotification utilises the looks of the human subject, and an individual’s looks are an essential feature of their being. If someone sees me, then intentionally makes photorealistic drawings of me and stabs the drawings, this is an act which is directed towards me, as compared to someone stabbing images of an imaginary character he has created which does not represent anyone in particular. A sexbot is a sexual artefact by definition, so sexbotification is a sexual act which is clearly directed towards a particular individual: the human subject. This seems as intuitive as the suggestion that taking pornographic photos or videos of someone is an act of sexual objectification directed towards that individual.
Someone can be objectified in a non-sexual way, and so we may think that creating any kind of robot—not just a sexbot—to represent another person meets many of Nussbaum’s and Langton’s features of objectification, and that if it is done without consent, it is impermissible objectification. Although this seems plausible, consent is generally considered to be more pertinent in sexual matters than it is in other matters. For example, suppose someone takes photographs of two women without their consent: in the photographs, Gillian is having sex, and Heidi having lunch. The former seems more clearly intrinsically wrong than the latter; if the photographs are then shared online without Gillian’s and Heidi’s consent, the wrongdoing against Gillian again seems markedly worse than the wrongdoing (if wrong at all) against Heidi. Analogously, it would seem that creating a sexbot of someone without their consent is more clearly intrinsically wrong than making a non-sexual robot of someone without their consent. This is because we consider consent to be absolutely pivotal in sexual matters, to the degree that a non-consensual sexual act is seen as a violation and is legislated against, even when there are no harmful consequences arising from the act—this is something I return to later.
Sexbotification shares much in common with the creation of deepfake porn. Both involve creating a visual product which depicts the victim in a sexualised way, and the creator can do this without ever coming into contact with the victim. Indeed, one need not even be in the same country as their victim, as the Scarlett Johansson case cited at the outset demonstrates. Deepfake pornography and so-called ‘sextortion’15 can also cross international borders (Wittes et al., 2016, p. 18); both are morally reprehensible16 forms of objectification which have been called “remote sexual violence” (McGlynn et al., 2017; Wittes et al., 2016). If we accept that deepfake porn and ‘sextortion’ are forms of objectification, it is reasonable to maintain that sexbotification is also a form of sexual objectification, and as such it should require consent of the human subject.
Some writers might argue that when a man creates any female-like sexbot—whether generic or personified—he intrinsically wrongs women qua women as a social class. This is not a position I defend here, but it is consistent with my argument. Consider: even if all pornography is wrongful objectification, it is still reasonable to maintain that revenge porn, ‘sextortion’ and upskirting carry an additional layer of wrongness over and above objectifying women qua women by being pornographic; we can recognise that there is a particular person who is victimised in a way that others are not. Analogously, even if one maintains that all sexbot creation wrongs women qua women, it can still be the case that sexbotification without consent is an additional intrinsic wrong towards the human subject. Hereafter, I write as if sexbot creation is generally permissible; even readers who deem generic sexbot creation and consensual sexbotification to be impermissible can still be sympathetic to my argument that these are more permissible than non-consensual sexbotification.

Sexbotification without consent

Recall that I am following the Nussbaumian thesis that objectification is not always wrong, but it becomes wrong when it involves disregarding the subject’s humanity by treating them as a mere means to an end (Nussbaum, 2007, p. 51). Nussbaum builds on the Kantian notion that humans are intrinsically valuable and should not be used merely for one’s own gain—to do the latter is intrinsically wrong, regardless of consequences. Although I have suggested in the previous section that sexbotification is a form of objectification, this would not necessarily mean it is intrinsically wrong; its intrinsic wrongness is established below.
Consent is pivotal to my argument regarding personified sexbot creation. Goldman presents a view of sexual consent which is intuitively plausible and can be utilised to support my argument. Goldman argues that sexual relations must be consensual if they are to be morally correct (Goldman, 1977); I take this as a baseline standpoint in general sexual practices, but I argue for it below in relation to sexbot creation. Sexual activity is the most intimate of human activities, and we are inclined to think that all sexual contact or activity requires consent from all parties involved. Goldman writes that sexual activity is immoral when “the transactions are not freely and rationally endorsed by all parties” (Goldman, 1977, p. 282). When A obtains consent from B, this transforms the sexual interaction from an intrinsically impermissible one (where A was using B merely as a means to A’s end) into an intrinsically permissible one (where A is valuing B as an end in themselves) (Goldman, 1977, p. 283). Like Nussbaum, Goldman does not characterise objectification as something which is always impermissible; rather, it becomes impermissible when particular conditions (such as mutual consent) fail to be met. So both Nussbaum and Goldman argue along Kantian lines that sexual objectification without consent is intrinsically wrong because it uses a person without regard for their humanity (Goldman, 1977, pp. 282–283; Nussbaum, 1995, 2007). This is the account of objectification which I adopt below in relation to sexbotification.
Recall the two scenarios mentioned earlier:
A.
Roy creates a personified sexbot with the consent of the human subject (Katie)
 
B.
Fred creates a personified sexbot without the consent of the human subject (Jane)
 
Nussbaum accepts that objectification can be permissible and sometimes even ‘wonderful’ (Nussbaum, 1995, p. 251); it is only when someone uses a person merely as a means to an end (without valuing their humanity) that the act is intrinsically wrong. We should note that in Nussbaum’s account, like Kant’s, all the work is being done by the word ‘merely’; the claim is that it is permissible to treat someone as a means to an end so long as you value their humanity as well, but it is impermissible to treat them as a mere means without valuing their humanity. Valuing someone’s humanity would often involve taking their wishes into consideration where possible. Failing to obtain consent from the person being sexbotified is not valuing their humanity.
In Scenario A, Roy values Katie’s humanity, because he asks for her consent to create the sexbot, and only when Katie gives her consent does Roy proceed. So although Roy uses Katie as a means to an end (of sexual gratification), he also values her as ‘an end in herself’ by asking her permission; this means he does not treat her merely as a means to an end. Thus, Roy’s sexbotification of Katie is intrinsically permissible according to the Nussbaumian characterisation of objectification (Nussbaum, 2007, p. 51). Although Roy objectifies Katie, because this is done consensually, he has not committed an intrinsic wrong towards her, because he has valued her humanity. Sexual relations are immoral when they are not endorsed by both parties, and even if sexual use is one-sided (as in Scenario A where Roy sexbotifies Katie but Katie does not sexbotify Roy) this is permissible when done with sensitivity to the desires and demands of the other (Goldman, 1977, p. 282). I stipulated earlier that Katie has freely and rationally endorsed the creation of the personified sexbot which represents her, and Roy was sensitive to her desires and demands. We can say that Roy’s treatment of Katie has respected her wishes, and the creation of the Katie-sexbot has been endorsed by both parties involved. This is what makes scenario A morally permissible, and differentiates it from Fred’s treatment of Jane in scenario B.
Someone might assert that it is intrinsically wrong to create any personified sexbot. However, making such an assertion would be flawed because it would capture too much; it would capture instances of consensual sexbotification which are not prima facie instances of intrinsic wrongs. Sharkey et al. discuss our sexual future with robots, and suggest that sexbots of partners in long-distance relationships could be created where the couple can speak directly to one another through the robots’ mouths, in order to create “a mutual sexual experience” (Sharkey et al., 2017, p. 6). Although this sort of setup involves sexbotification (and therefore objectification), it does not resonate as something which is intrinsically wrong because the experience is consensual. On Nussbaum’s and Goldman’s account of objectification, this sort of arrangement is intrinsically morally permissible, as Roy’s consensual creation of the Katie-sexbot is.
In Scenario B, it does not take much scrutiny to realise that Fred is using Jane merely as a means to an end. Jane’s looks are an essential part of herself—they are unique and fundamental features of Jane. In creating the Jane-sexbot Fred capitalises on Jane’s very existence, performing a sexual act directed at her, in the same way that ‘sextortion’ and the creation of deepfake porn are sexual acts relating to particular victims—namely, those who are depicted in the images (McGlynn et al., 2017).
In “Objectification and sexbotification” section I showed that all sexbotification meets some but not all of Nussbaum’s and Langton’s features of objectification. Let me briefly explain how non-consensual objectification in particular meets more of the features. Firstly, Fred’s treatment of Jane denies her autonomy, agency and subjectivity (Nussbaum’s features no. 2, 3 and 7) because when he ignores her wishes he acts as if her feelings do not exist or need not be taken into account, and as if she has no agency. Indeed, an agent respects a person’s autonomy, agency and subjectivity when he obtains her consent. Fred, of course, does not obtain consent, and so he meets these three features of objectification. We can also say that Fred acts as though Jane has no boundaries or that those boundaries can be violated (feature no. 5). Jane had essentially drawn a personal boundary inasmuch as she had decided against having a sexbot made in her image; this is a line she is not willing to cross—a boundary. Fred violates that boundary and acts as if it does not exist when he sexbotifies Jane. Furthermore, when Fred ignores Jane’s protests and creates the sexbot representing her against her wishes, this effectively silences her (feature no. 10). Even if someone is unconvinced that consensual sexbotification qualifies as sexual objectification (it met only half the features of objectification), we can be satisfied that non-consensual sexbotification is objectification (it meets 9 of the 10 features).17 There is insufficient space for me to argue for each of these in detail, and some opponents may feel that non-consensual sexbotification does not meet some of the features I have claimed above.
However, since objectification is a cluster concept, an act need only meet some of the features in order to count as objectification, and it seems evident that making a sexbot to represent someone against their wishes qualifies as an act of sexual objectification, just as creating pornography does. Simply being a form of objectification does not make it wrong, though—the (intrinsically) wrong-making feature of non-consensual sexbotification is its lack of consent.
Why should we think that there is a significant moral difference between consensual sexbotification and non-consensual sexbotification? We should think this because consent plays a pivotal role when making a moral evaluation of sexual activity between two people. Consider:
Lawrence and Michelle have sexual intercourse (both of them are over 18, conscious, of normal mental faculty, they are not under the influence of drugs or alcohol, and neither is in a position of power over the other).
Is their sexual intercourse morally permissible or impermissible? The answer, of course, is that we don’t know. Unless we know whether both parties consented to the intercourse, we are unable to make a moral determination regarding the permissibility of the act. If it were the case that both parties consented, then we would likely acknowledge that the sex was permissible. If, instead, it were the case that one party—Michelle, for example—did not consent to the intercourse, then this fact is morally transformative; consent is the ‘moral magic’ of sexual matters (Alexander, 1996; Hurd, 1996). The non-consent of Michelle transforms the moral status of the act from permissible to impermissible in the laws of many countries, and it is (justifiably) seen as intrinsically wrong by any reasonable person (Archard, 2007; Cowling, 2001; Goldman, 1977; Wertheimer, 1996). In fact, we place such a high value on consent that even in cases where no harm ensues from a particular case of non-consensual sexual activity, we still believe that there is a victim who has been intrinsically wronged. Consider an example:
Natalie was drugged and unconscious when she was raped. The rapist used a condom, and Natalie was not injured nor impregnated, and she has no recollection of the incident. (Adapted from Gardner and Shute, in Stewart, 2010, pp. 26–27).
In this example, Natalie has not suffered any negative consequences, and is unaware that anything untoward has taken place. Despite the fact that no negative consequences have occurred, we can still legitimately claim that the assailant’s actions were intrinsically wrong, because Natalie did not consent. Stewart remarks that a case such as Natalie’s is “a violation of her right to sexual autonomy” (Stewart, 2010, p. 27). In Nussbaum’s and Kant’s terms, Natalie has been used merely as a means to an end, rather than being valued as an end in herself (Nussbaum, 2007, p. 51). The wrong-making feature of Natalie’s treatment is her lack of consent, rather than anything else the rapist did. In the case of Natalie, we accept that what was done to her was wrong, even though no harm occurred; in other words, we see it as intrinsically wrong even though it is not instrumentally wrong. The salient fact is that sexual activity morally requires consent, and Natalie had a sexual act performed on her without her consent, which is intrinsically wrong. I maintain that the same is true in a case where someone is non-consensually sexbotified. We see this in other areas of sexual activity: consent is morally transformative in the taking and sharing of explicit images of a person. It is generally seen as permissible to take or share images of an adult who consents, but it is a different matter entirely when the person depicted does not consent.
Some philosophers may argue that women cannot freely or fully consent to sexual treatment or activity in the same way and to the same extent that men can, particularly in a society where there is some degree of sexual inequality (Langton, 1993, pp. 323–326), however this is not a position I defend here. I take the position that (generally speaking) women are capable of consenting (or withholding consent) to sexual activity. Even if it is the case that women’s consent is less free than men’s consent is, we still tend to think that there is a difference between woman X who engages in consensual sexual activity, and woman Y who is forced to engage in sexual acts against her wishes. If a woman can consent to sexual activity such as sexbotification, and consent is morally transformative in sexual matters—as was shown above—then we should conclude that sexbotifying someone with their consent is intrinsically permissible, whereas doing so without their consent is intrinsically wrong.

Conclusions

Ethical issues involving lifelike robots are going to be increasingly significant over the coming years; they are already making the leap from science fiction and philosophical theory into real-world fact. Sexbots are the pioneers of this robotic revolution, and although sexbots will probably become increasingly commonplace over the years, philosophical (and ideally, legal) clarity is required now regarding the wrongs of creating personified sexbots without consent.
Although sexbots may bring about a multitude of negative consequences for individuals and society, I have set these aside in order to focus on the intrinsically wrong act of creating a personified sexbot without the consent of the human subject. I have maintained that creating a personified sexbot is an act of sexual objectification directed towards that particular person which may or may not be permissible, depending on whether the human subject’s consent was obtained. Using Nussbaum’s Kantian-inspired argument, I have shown that non-consensually sexbotifying a human subject involves using them merely as a means, which is intrinsically wrong. Meanwhile, in a sexbotification case where the human subject’s prior consent is obtained, she has not been intrinsically wronged by the creation of the sexbot because she has not been used merely as a means to an end. With personified sexbots, consent of the human subject is a moral prerequisite, and is transformative when obtained. In other words, in cases of non-consensual sexbotification, the lack of consent is the wrong-making feature of the act. Even if it were the case that creating any sexbot is intrinsically wrong because it objectifies women qua women, it is still right to maintain that sexbotifying a woman without her consent is an additional intrinsic wrong.
Just as we have seen a rise in deepfake pornography as the technology has advanced, we are also likely to see an increase in sexbot creation over the coming years. Someone will fall victim to non-consensual sexbotification in the not-too-distant future, and my argument herein demonstrates why this is a pertinent issue which should be morally and legally explored, because creating such a sexbot without the consent of the human subject resonates as deeply objectionable and intrinsically wrong.

Acknowledgements

I would like to thank the anonymous reviewers who provided useful feedback on this essay, as well as Zachary Hoskins, Neil Sinclair, and Joseph Kisolo-Ssonko, who gave suggestions for improvements to earlier drafts. I presented this paper at the Society for Philosophy in the Contemporary World conference in Portland, Oregon, USA in 2018, and at the Feminist and Gender-Philosophical Perspectives Human Rights Symposium at the University of Vienna, Austria in 2018, and I would like to thank the delegates at these conferences for their thought-provoking questions and comments.

Declarations

Conflict of interest

All authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Footnotes
1
There are restrictions on childlike sex dolls and sexbots, but this is because they are childlike, not because they resemble a particular child (see Dearden (2017)).
 
2
It may be wrong for a sexbot creator to make money from selling sexbots representing celebrities, in the same way that it is wrong to create toys of copyrighted characters. I set aside such financial wrongs and consider the wrongs even if no money is made from the creations.
 
3
The difference between sex dolls and sex robots is a difference in degree rather than a difference in kind; some sex dolls are sufficiently realistic-looking that my arguments herein may also pertain to them. My arguments do not, however, pertain to non-sexual non-realistic creations.
 
4
There are large bodies of literature discussing consequentialism and Kantian ethics, and there are differences of opinion regarding how each ought to be understood. My outlines of them here are basic and open to dispute. An exact understanding of these theories is not necessary for the project at hand, because these are just being used as starting points to differentiate between instrumental and intrinsic wrongness.
 
5
The idea of creating sexbots to resemble real people is very briefly mentioned by John Danaher as a possible solution to the problem that most sexbots have ‘porn star’ physiques. Although he acknowledges that “it is very difficult to avoid the problematic symbolism involved in the creation of a robot that looks like a real woman and is to be used solely for sexual purposes” (Danaher, 2017b, p. 116) the issue is not explored further therein.
 
6
Let us suppose that Katie gives her consent freely and of her own volition; she is not being coerced by Roy nor anyone else, and she does not stand to gain or lose anything from giving or withholding consent.
 
7
Migotti and Wyatt (2017) provide an interesting discussion of whether sex with a robot qualifies as sex, or whether it is mere masturbation. For our purposes here, I shall call it sex.
 
8
Revenge porn involves the sharing or distribution of pornographic videos/photos, where typically, the victim is an ex-partner. Generally, the videos/photos and the sex acts they depict were consensual at the time, but their later distribution online is non-consensual, and motivated by revenge. The distribution of revenge porn was criminalised in the UK in 2015 (National Archives, 2015).
 
9
Generally, sexual consent needs to be established for each new act; we cannot infer that because S consented to sex at time T1 she also consents at time T2. But viewing pornography is different. The actors’ consent is obtained at the point of creation (and possibly confirmed at the point of distribution) and after that the actor need not be repeatedly contacted to obtain their consent to view the video. It would make sense for sexbots, being sexual artefacts, to be treated in the same way as pornography, and deem that it is not necessary for personified sexbot users to obtain consent each time they wish to use the sexbot.
 
10
Kant wrote extensively about moral law, and his works have been studied and interpreted in varying ways; I cannot hope to do justice to all these different interpretations in this paper, but I follow Nussbaum’s interpretation of Kant, which she builds on with her work on objectification.
 
11
Alternative theories exist which do not characterise all forms of sexual activity as objectifying. However, Nussbaum’s and Langton’s features are widely discussed and are selected here because they allow us to distinguish between consensual objectification and non-consensual objectification, which is the salient difference between scenarios A and B.
 
12
Deepfake pornography involves taking an existing pornographic photo or video and superimposing someone else’s face onto the body depicted. This can be done remarkably convincingly, and means that pornographic videos and images can be created of a person without their consent, and without them ever having removed their clothes.
 
13
Upskirting involves surreptitiously taking photos or videos from underneath a woman’s skirt, looking up towards her underwear. This is now a criminal offence in England and Wales (National Archives, 2019).
 
14
There are some exceptions to this: sexbot retailer TrueCompanion makes a sexbot with a ‘Frigid Farrah’ setting, where the sexbot will attempt to resist any sexual advances made towards it, but a man can forcibly have sex with it, thus ‘raping’ it (see Timmins, 2017).
 
15
‘Sextortion’ involves blackmailing someone into providing sexual photos or videos of themselves and/or blackmailing someone by threatening to share sexual photos or videos of them online or with their contacts.
 
16
Although it is possible for someone to consensually create deepfake porn, there would be little motivation to do so, because it is much easier to take original sexually explicit videos or photos of a person than to painstakingly superimpose their face onto someone else’s body. Virtually all deepfake porn has not been consented to by the person whose face is depicted.
 
17
The features of objectification that I suggested were met by any form of sexbotification were numbers 1, 6, 8, and 9 (and possibly 5) in Nussbaum’s and Langton’s list. Features 2, 3, 5, 7, and 10 are additionally met when sexbotification is non-consensual. This means that non-consensual sexbotification meets features 1–3 and 5–10 on the list. The only feature not met is fungibility, because a personified sexbot does not seem interchangeable with other sexbots; it is special precisely because of its unique resemblance to the human subject.
 
Literature
Alexander, L. (1996). The moral magic of consent (II). Legal Theory, 2(3), 165–174.
Archard, D. (2007). The wrong of rape. The Philosophical Quarterly, 57(228), 374–393.
Danaher, J. (2017a). Robotic rape and robotic child sexual abuse: Should they be criminalised? Criminal Law and Philosophy, 11(1), 71–95.
Danaher, J. (2017b). The symbolic-consequences argument in the sex robot debate. In J. Danaher & N. McArthur (Eds.), Robot sex: Social and ethical implications (pp. 103–132). MIT Press.
Danaher, J. (2017c). Should we be thinking about robot sex? In J. Danaher & N. McArthur (Eds.), Robot sex: Social and ethical implications (pp. 3–14). MIT Press.
Danaher, J. (2019). Building better sex robots: Lessons from feminist pornography. In Y. Zhou & M. Fischer (Eds.), AI love you: Developments on human-robot intimate relations. Springer.
Dautenhahn, K. (2007). Socially intelligent robots: Dimensions of human-robot interaction. Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1480), 679–704.
Di Nucci, E. (2017). Sex robots and the rights of the disabled. In J. Danaher & N. McArthur (Eds.), Robot sex: Social and ethical implications (pp. 73–88). MIT Press.
Goldman, A. H. (1977). Plain sex. Philosophy & Public Affairs, 6(3), 267–287.
Harvey, C. (2015). Sex robots and solipsism. Philosophy in the Contemporary World, 22(2), 80–93.
Hurd, H. M. (1996). The moral magic of consent. Legal Theory, 2(2), 121–146.
Kant, I. (1997) [1785]. Groundwork of the metaphysics of morals (M. Gregor, Ed.). Cambridge University Press.
Langton, R. (1993). Speech acts and unspeakable acts. Philosophy & Public Affairs, 22, 293–330.
Langton, R. (2009). Sexual solipsism: Philosophical essays on pornography and objectification. Oxford University Press.
Levy, D. (2008). Love and sex with robots. HarperCollins.
MacKinnon, C. A. (1987). Feminism unmodified: Discourses on life and law. Harvard University Press.
MacKinnon, C. A. (1989). Toward a feminist theory of the state. Harvard University Press.
McArthur, N. (2017). The case for sexbots. In J. Danaher & N. McArthur (Eds.), Robot sex: Social and ethical implications (pp. 3–14). MIT Press.
McGlynn, C., Rackley, E., & Houghton, R. (2017). Beyond “Revenge Porn”: The continuum of image-based sexual abuse. Feminist Legal Studies, 25(1), 25–46.
Migotti, M., & Wyatt, N. (2017). On the very idea of sex with robots. In N. McArthur & J. Danaher (Eds.), Robot sex: Social and ethical implications (pp. 15–27). MIT Press.
Nussbaum, M. C. (1995). Objectification. Philosophy & Public Affairs, 24(4), 249–291.
Nussbaum, M. C. (2007). Feminism, virtue and objectification. In R. Halwani (Ed.), Sex and ethics: Essays on sexuality, virtue, and the good life. Palgrave-Macmillan.
Richardson, K. (2016). The asymmetrical “Relationship”: Parallels between prostitution and the development of sex robots. SIGCAS Computers & Society, 45(3), 290–293.
Sharkey, N., van Wynsberghe, A., Robbins, S., & Hancock, E. (2017). Our sexual future with robots. Foundation for Responsible Robotics.
Stewart, H. (2010). The limits of the harm principle. Criminal Law and Philosophy, 4(1), 17–35.
Wertheimer, A. (1996). Consent and sexual relations. Legal Theory, 2(2), 89–112.
Metadata
Title
Non-consensual personified sexbots: an intrinsic wrong
Author
Karen Lancaster
Publication date
17-05-2021
Publisher
Springer Netherlands
Published in
Ethics and Information Technology / Issue 4/2021
Print ISSN: 1388-1957
Electronic ISSN: 1572-8439
DOI
https://doi.org/10.1007/s10676-021-09597-9
