From what has been suggested, it seems that considering artificial autonomy merely as the ability to operate ‘as an independent unit or element over an extended period of time’ is not enough. Although independence—understood here, from the viewpoint of robotics, as the ability to carry out predefined tasks without any external intervention or human help—can be one of the characteristics and consequences of artificial autonomy, it is not what establishes or explains that capacity. Even philosophical language, to which we have referred extensively in this paper, draws a substantial distinction between autonomy and independence: the latter postulates (almost in an individualistic or anarchic way) a rejection of bonds, affiliations, rules, and regulations, whereas autonomy does not (its etymology, after all, recalls the presence of a law, possibly moral and to be shared interpersonally). The difference lies in the relationship: while independence is the denial of any contact except those strictly indispensable, and refers to a way of thinking, deciding, and living detached from any subordination to external authority and judgment, autonomy is philosophically based on relationships of mutual listening and interaction, and therefore calls for a moral coexistence which is the source of a common good.
If so, the definitions of robot autonomy based on the concept of independence are not satisfactory. Artificial autonomy requires, as a precondition, the ability to learn expertise and skills (including social norms) from the environment and context. Such complex learning, a sine qua non of the capacity under consideration, seems possible only through interaction with human agents and with the environment in which the robot operates. This is why independence as such (understood as the denial of any dependence, law, or relationship) cannot ground robot autonomy.
Several doubts emerge: first, principles may be at odds with one another (Tzafestas
2016). Conflicting prescriptions could produce inconsistency and, with it, operational paralysis, if not erratic behavior. Furthermore, a purely deontological top-down approach could never identify all the ethical norms for robots, especially where these are context- and culture-specific. Some rules are only tacitly expressed when respected (or transgressed) by humans: morality can never be fully specified.
If, in addition to moral rules, one tried to anticipate all the general prototypes of possible situations, this incompleteness would only grow: the full range of potential scenarios and behaviors cannot be predicted ex ante (Muehlhauser and Helm
2012). The main issue, however, concerns artificial autonomy itself. Providing a robot with many stringent a priori rules, and possibly pre-set situational patterns, would hinder any attempt at autonomy. Indeed, in the top-down approach, the installed algorithms produce predictable results: programmers embed in the machine what they consider to be ethical behaviors, and the robot only has to determine when to apply them (Alaieri and Vellino
2016). As we have seen, high predictability is a characteristic of automatic systems: in adopting this approach with artificial agents, we would still face a schematic predetermination. If the goal is to build autonomous robots that are increasingly independent and adaptable, the path of high predictability has to be excluded. While it remains necessary to equip artificial agents with some fundamental ethical principles, the all-encompassing ex ante approach does not seem suitable: autonomy is not made up of formulas fixed in advance. At this point, our question must be reiterated: how do we solve operational and ethical problems while guaranteeing the possibility of artificial autonomy?
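To make the predictability point concrete, consider a minimal sketch (ours, not drawn from the cited works) of a purely top-down ethical layer: every permissible behavior is enumerated at design time, and the robot merely matches the current situation against pre-coded conditions. The rule names and situation fields are invented for illustration; the key point is that nothing is learned, and any unforeseen situation falls through to a fixed default.

```python
# Illustrative sketch of a purely top-down ethical layer: behavior is fully
# determined by rules fixed at design time; the robot's only "decision" is
# matching the perceived situation against pre-coded conditions.

RULES = [
    # (condition on the perceived situation, prescribed action)
    (lambda s: s.get("human_in_path"), "stop"),
    (lambda s: s.get("object_fragile"), "reduce_grip_force"),
    (lambda s: s.get("battery_low"), "return_to_dock"),
]

DEFAULT_ACTION = "continue_task"


def top_down_decide(situation: dict) -> str:
    """Return the first action whose pre-coded condition matches.

    Nothing is learned: for any given situation the output is fixed at
    design time, which is why such a system remains automatic rather than
    autonomous in the sense discussed above.
    """
    for condition, action in RULES:
        if condition(situation):
            return action
    return DEFAULT_ACTION


print(top_down_decide({"human_in_path": True}))     # -> "stop"
print(top_down_decide({"unforeseen_event": True}))  # -> "continue_task": no rule covers it
```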
5.1 Interdependence as a bottom-up approach to machine ethics
Although robots cannot (yet) give themselves the law, i.e. decide their own final goals by themselves, and therefore cannot be fully autonomous, they are always, in some way, in a relationship with humans. In particular, this is true for social robotics (which is the primary reference of our analysis), whose purpose is to build intelligent machines to be used in different social contexts and at various levels of human interaction. If autonomy is a relational notion (Castelfranchi and Falcone
2003), and if this relationship occurs in a social context, artificial autonomy can only result from social interdependence. At the outset, in the very definition of robot autonomy, there cannot already be independence; if there were, no adequate contextual learning would take place. When a robot is intended for a social environment, its autonomy must be social too: an artificial agent, as an active entity, relates to a context in which there are other active entities, primarily human. Far from quickly becoming independent of them, a robot has to interact with humans in order to improve its decisions and ‘moral’ expertise: its artificial autonomy is built interdependently. This means that, before being able to act socially without supervision (or, more likely, under reduced control), that is, before being independent, an artificial agent must learn how to act and what is normatively adequate or inadequate to do in certain social contexts, even at intercultural levels. Such acquisition relies on interdependence: this concept seems to guarantee the possibility of artificial autonomy while safeguarding the ethical aspect. We understand interdependence as a mutual, social, and contextual dependence, a reciprocal dependence, so that robots can learn how to operate in increasingly appropriate ways and, at the same time, humans can employ them for wider and more complex tasks, within increasingly less structured scenarios. Each depends on the other for learning and autonomy, although control obviously remains on one side (however mild it may be).
Artificial autonomy is based on social relationships: the concept of interdependence reminds us that “a [social] robot is never isolated and that the human is always involved in some way” (Tessier
2017: p. 182). Relationships, however, are dynamic by definition: artificial autonomy belongs to a relational continuum (Defense Science Board
2012), i.e. it depends on the context of action, the task assigned, and the operational capabilities of the artificial agent. Put differently, artificial autonomy is granted by humans to different degrees according to need: it is within the social relationship that robots are enabled to be more or less autonomous. It is the human, be it operator or user, who allows the artificial agent to exercise a certain level of initiative. In a similar, though not identical, vein, Murphy (
2019) speaks of an “adjustable autonomy” in which the human dynamically adjusts the autonomy levels of the robot.
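A toy rendering of this adjustable autonomy, with level names and gating logic of our own invention rather than Murphy's, might look as follows: the operator raises or lowers a level at run time, and that level gates which classes of action the robot may take without confirmation.

```python
from enum import IntEnum


class AutonomyLevel(IntEnum):
    TELEOPERATED = 0      # every action requires an explicit human command
    SUPERVISED = 1        # robot proposes, human confirms
    DELEGATED = 2         # robot acts alone on routine sub-tasks
    HIGH_INITIATIVE = 3   # robot also re-plans sub-goals on its own


class AdjustableAutonomy:
    """Toy model of human-adjusted autonomy: the operator raises or lowers
    the level at run time, and the gate below decides whether a given class
    of action may be executed without confirmation."""

    def __init__(self, level: AutonomyLevel = AutonomyLevel.SUPERVISED):
        self.level = level

    def set_level(self, level: AutonomyLevel) -> None:
        self.level = level  # the human, not the robot, adjusts this

    def may_act_unsupervised(self, required: AutonomyLevel) -> bool:
        return self.level >= required


controller = AdjustableAutonomy()
print(controller.may_act_unsupervised(AutonomyLevel.DELEGATED))  # False: still supervised
controller.set_level(AutonomyLevel.DELEGATED)
print(controller.may_act_unsupervised(AutonomyLevel.DELEGATED))  # True: now delegated
```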
Artificial autonomy represents a set of delegated capabilities (Defense Science Board
2012): depending on the situation, on how delicate and vulnerable it is, as well as on the robot's design, a larger or smaller set of skills and tasks can be delegated to a machine with progressively less supervision. This shows that the autonomy we are looking for “is not an intrinsic property” (Tessier
2017: p. 182) of an artificial agent: as the robot learns from experience and from a particular social context by interacting with humans, its artificial autonomy is granted within ever-changing (and potentially ever wider) ‘circles’. This is why artificial autonomy dwells in social collaborations based on interdependence. The same happens in the human dimension, which, after all, is the conceptual model for robotic autonomy: “genuine autonomy resides in the interaction between individuals and society. […] It is in this dialectical relation between the social and the individual that real human autonomy resides” (Dupré
2001: p. 18).
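One illustrative way to picture these widening ‘circles’ of delegation, under assumptions entirely of our own (the task names, trial counts, and competence threshold are hypothetical), is a registry that admits a task to the delegated set only after sufficient demonstrated competence under supervision.

```python
# Illustrative only: a delegation registry in which the set of tasks a robot
# may perform with reduced supervision widens as competence is demonstrated.

class DelegationRegistry:
    def __init__(self, competence_threshold: float = 0.9):
        self.threshold = competence_threshold
        self.delegated: set[str] = set()                    # the current 'circle' of delegated tasks
        self.observed_success: dict[str, list[bool]] = {}   # supervised trial history per task

    def record_outcome(self, task: str, success: bool) -> None:
        """Log a supervised trial; delegation is granted only after enough evidence."""
        history = self.observed_success.setdefault(task, [])
        history.append(success)
        rate = sum(history) / len(history)
        if len(history) >= 10 and rate >= self.threshold:
            self.delegated.add(task)                        # the circle widens

    def is_delegated(self, task: str) -> bool:
        return task in self.delegated


registry = DelegationRegistry()
for _ in range(10):
    registry.record_outcome("fetch_medication", True)
print(registry.is_delegated("fetch_medication"))      # True after consistent success
print(registry.is_delegated("administer_injection"))  # False: never delegated
```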
The biggest misconception surrounding the concept of artificial autonomy is that robots can be completely independent of human control. In reality, oversight is inescapable, albeit high-level (Defense Science Board
2012), as the programming of autonomous systems embodies the design limitations of the decisions and actions delegated to machines, however general and wide their goals may be. This means that the same artificial agent can be more autonomous in one situation than in another. This is why the issue of levels is both ambiguous and impractical: it is not a question of producing robots that are fully independent regardless of the context of operation; it is a question of building artificial agents capable of learning both from the specific environment in which they are located and from the entities they interact with. The point is to elaborate and develop the capacity for robotic interdependence: learning in collaboration, that is, maintaining interactivity during cooperation (Castelfranchi and Falcone
2003). This applies to delegation too, which does not mean totally uncontrolled reliance. Adopting the issue of autonomy levels as an operational reference implies that full autonomy, i.e. total independence, is always the most desirable end of robotic development (see Murphy
2019: p. 77): would it not be better to recognize that the degree of artificial autonomy has to depend dynamically on the task, the situation, the socio-cultural context, and the architecture of the robot?
The computational and material structure of the artificial agents that we want to be autonomous must be carefully considered: despite an undeniable sophistication, capable of surpassing human abilities in certain respects, the implemented algorithms and the synthetic body remain operationally limited. The decisions and actions of a robot, however general its purpose may be, cannot demonstrate absolute autonomy. Interdependence, as the inevitable relation existing between social actors, scales down robotic activities, establishing an independence of ‘thought’ and action that is always bounded (Defense Science Board
2012). This means that, in interdependence, artificial autonomy and the independence that follows from it are never absolute. Robots, far from being self-governing or autarchic, can only be “partially autonomous agents” (Tzafestas
2016: p. 2) since they have structural limits and cannot decide their own ends for themselves: their autonomy is defined only in relation to the decisions and actions taken to complete the required tasks.
Artificial autonomy, therefore, is an instrumental autonomy: a robot can decide how to behave in order to achieve pre-established human goals, be they specific or general. This agent, unlike an imaginary general AI, is autonomous but at the same time human-bound (Lo Presti
2020). Within social interdependence, a robot must be able to act autonomously yet adequately in achieving objectives that are entrusted to it externally: its autonomy is executive, concerning the choice of means and of intermediate, instrumental ends (sub-goals), not of ultimate ends (see Castelfranchi and Falcone
2003: p. 106).
Artificial agents can be autonomous precisely because they stand in a relationship with humans: the mutual dependence established through their social deployment shows that artificial autonomy cannot be absolute freedom. As with humans, machines need constraints in order to exercise autonomy. Beyond any apparent logical paradox, the concepts of autonomy and dependence are compatible (Coeckelbergh
2006), as they are united by that of relationality: this is the dimension that establishes not only the advantages, but also the necessary limitations—which are ethical, not just structural, in the case of social robotics. Artificial agents can be autonomous while still in a social dependence that ethically delimits their action: not all types of external control threaten autonomy. The point for a robot is to learn these ethical limitations, respect them, and put them into practice. Indeed, the more autonomy is granted to a robot, the more “ethical sensitivity” is required (Tzafestas
2016: p. 66) not only from the developer and operator but also and above all from the robot itself.
Let us return to a question raised earlier: is it enough to implement ex ante moral rules in a robot's code for it to be reliable and autonomous at the same time? As we have proposed, some ethical norms implemented a priori, i.e. during the design stage, are necessary (such as those that prohibit killing or harming sentient beings). An extensive top-down approach, however, would prove incoherent with the goal of building autonomous artificial agents, as we would fall back into automation. Simply put, ex ante basic ethical rules are necessary but insufficient. This being the case, these a priori norms need to be accompanied by a bottom-up approach in order to obtain ethical autonomous robots. It is, therefore, a question of considering a hybrid morality (Allen et al.
2005; Wallach et al.
2010), a sophisticated moral sensibility: the fundamental rules embedded a priori have to be supplemented by the learning of further norms, as well as by the ability to understand in which contexts to respect and exercise them. Thus, we would have robots endowed with an ethical autonomy that allows them to be dynamic, flexible, and ethically sound.
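As a rough sketch of such a hybrid morality (our own illustration, not an implementation from the cited literature), a small set of inviolable a priori constraints could filter candidate actions top-down, while context-specific preferences learned from human feedback rank the remainder bottom-up. All rule, context, and behavior names are hypothetical.

```python
# A minimal sketch of a hybrid moral architecture: inviolable a priori
# constraints (top-down) combined with context-specific norms acquired
# through interaction (bottom-up).

A_PRIORI_CONSTRAINTS = [
    # An action flagged as harming a sentient being is never admissible.
    lambda action, context: not action.get("harms_sentient_being", False),
]


class HybridMoralAgent:
    def __init__(self):
        # learned_norms maps a social context to soft preferences learned bottom-up
        self.learned_norms: dict[str, dict[str, float]] = {}

    def learn_norm(self, context: str, behavior: str, approval: float) -> None:
        """Update a context-specific norm from human feedback (bottom-up)."""
        self.learned_norms.setdefault(context, {})[behavior] = approval

    def choose(self, context: str, candidates: list[dict]):
        # 1. Top-down layer: discard anything violating the a priori constraints.
        admissible = [a for a in candidates
                      if all(rule(a, context) for rule in A_PRIORI_CONSTRAINTS)]
        if not admissible:
            return None
        # 2. Bottom-up layer: rank the rest by norms learned in this context.
        prefs = self.learned_norms.get(context, {})
        return max(admissible, key=lambda a: prefs.get(a["name"], 0.0))


agent = HybridMoralAgent()
agent.learn_norm("hospital_ward", "speak_quietly", 0.9)
agent.learn_norm("hospital_ward", "speak_loudly", 0.1)
choice = agent.choose("hospital_ward", [
    {"name": "speak_loudly"},
    {"name": "speak_quietly"},
    {"name": "push_patient", "harms_sentient_being": True},
])
print(choice["name"])  # -> "speak_quietly": constraint-respecting and norm-preferred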
The proposal that artificial autonomy should be based on social interdependence also drives the development of this sophisticated moral sensibility: put differently, hybrid morality is the actual concretization (or, if preferred, realization) of artificial autonomy theoretically conceived as deriving from social interdependence. While, on the one hand, the hybrid approach appears to be an inevitable and concrete consequence of the proposal advanced in this paper (we invoke it here mainly to support what we suggest, while also seeking to understand how robotic interdependence could be implemented), on the other hand, the idea of interdependence as the source of robotic autonomy can justify and validate the hybrid approach itself.
Just as robotic autonomy (both behavioral and motivational) is achieved by interacting with the human environment, so the acquisition and operational understanding of broad ethical norms can only arise from the relationship with other moral agents, primarily humans. In a context of interdependence, artificial agents—through a sophisticated interweaving of sensors, deep and reinforcement learning algorithms, natural language processing, actuators, and so forth—can learn specific moral rules in a gradual and cumulative way by interacting with moral biological agents. This perspective is also relevant at an intercultural level (Dignum
2018), where the programming of a robot takes place in one socio-cultural context and its application in another. The concept of interdependence, therefore, can solve ethical problems while ensuring the possibility of artificial autonomy. This notion implies the introduction of a bottom-up approach in which “the programmer builds an open-ended system that is able to collect information from its environment, to predict the outcomes of its actions, to select among alternatives and, most importantly, has the capacity to learn from its experience” (Alaieri and Vellino
2016: p. 161). Robots endowed with this hybrid morality are able to learn from trial and error, from experience, and from the surrounding environment (unsupervised learning) while continuing to refer to the pre-established fundamental principles (supervised learning).
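The open-ended loop described in the quotation could be caricatured as follows. This is a deliberately simplified sketch of our own: ‘feedback’ stands in for whatever signal the robot receives from its environment and from human reactions, and the action names are invented.

```python
# Illustrative skeleton of the bottom-up loop: predict outcomes, select among
# alternatives, act, and learn from the result actually observed.
import random


class BottomUpLearner:
    def __init__(self, actions: list[str], learning_rate: float = 0.1):
        self.value = {a: 0.0 for a in actions}   # learned estimate of each action's adequacy
        self.lr = learning_rate

    def predict(self, action: str) -> float:
        return self.value[action]                # predicted outcome of an action

    def select(self, epsilon: float = 0.1) -> str:
        # occasionally explore, otherwise pick the action predicted to be best
        if random.random() < epsilon:
            return random.choice(list(self.value))
        return max(self.value, key=self.value.get)

    def learn(self, action: str, feedback: float) -> None:
        # move the estimate toward the feedback actually received (e.g. human approval)
        self.value[action] += self.lr * (feedback - self.value[action])


learner = BottomUpLearner(["approach_slowly", "approach_quickly"])
for _ in range(50):
    a = learner.select()
    feedback = 1.0 if a == "approach_slowly" else -0.5   # simulated human reaction
    learner.learn(a, feedback)
print(learner.select(epsilon=0.0))   # converges toward "approach_slowly"
```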
Clearly, there is no lack of challenges: in this mixed moral perspective, robots should be able to adequately address the moral issues encountered in interactions with humans, that is, they should interpret the moral relevance of situations and actions, formulate moral judgments, and communicate about morality (Malle and Scheutz
2014). Moreover, how should different moral philosophies and dissimilar architectures be integrated? Probably, the ability to make moral decisions will require “some form of emotions, consciousness, a theory of mind, an understanding of the semantic content of symbols” (Allen et al.
2005: p. 154; Tessier
2017: p. 190), as well as the ability to grasp what is culturally meaningful and appropriate in each and every social context of operation. Resolving these limitations, however, is not the purpose of this article, although framing artificial autonomy as interdependence could help in finding some solutions: a robot could learn how to manage variability and moral differences precisely through interaction and mutual dependence. We leave a more detailed treatment to future research.
While interdependence may place ethical limitations on the autonomous operation of artificial agents, it nevertheless envisages collaboration and cooperation capable of leading to more incisive and effective results than actions undertaken in full independence. A social robot, never being completely autonomous and independent, can behave more successfully when it is part of a group: it helps the group and, by being helped in turn, learns better how to do so. Social actors, be they humans or artificial agents, make up a team: the more interdependent this group is, the more successful it will be. Indeed, the best teams are highly interdependent: together, robots and humans can achieve higher levels of innovation and better decisions, as well as reduce errors (Lawless et al.
2019; see also Lawless and Sofge
2017). If artificial autonomy results from interdependence, this dimension not only establishes moral constraints (Arkin et al.
2012), but also, and above all, social gains.
Mutual social dependence means that artificial autonomy can be seen as authority sharing.
Where human capabilities are limited for biological or cognitive reasons, an artificial agent can complement them, for instance by seeing more accurately or by operating in dangerous contexts. The benefits of artificial autonomy are evident (Redfield and Seto
2017). Nevertheless, as we have already mentioned, robotic abilities and decisions are limited too, as they are the result of algorithmic computations often modeled on a compromise between quality of the outcome and speed of calculation (Tessier
2017). These mutual limits can complement each other towards more fruitful actions: decision-making authority is shared, and so artificial autonomy compensates for human fallibility, just as human cleverness compensates for robotic constraints. With regard to the artificial agent, its autonomy has to be considered within a framework of authority sharing with the operator or user: the robot is allowed to take certain decisions and actions on the basis of its adequate learning, but never in a fully independent way. As for the human, they should not always be considered “the last resort” (Tessier
2017: p. 186): human beings are prone to making mistakes, to overestimating robotic decisions (over-confidence), and even to causing intentional harm. Although authority sharing involves issues still to be solved (for example: which decision prevails? Who can act, and when? In the event of a conflict between decisions, must the human always have the last word?), it nevertheless confirms the link between interdependence and artificial autonomy: where the human is limited, for whatever reason, granting an artificial agent a certain degree of decision-making and operational authority within social relations means making it autonomous, albeit never completely. Again, the fact remains that the robot has to learn how to decide and act ethically.
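A schematic arbitration rule, purely illustrative and not taken from the cited works, can make the idea of authority sharing more tangible: the robot decides alone only within its delegated scope and above a confidence threshold, explicit human overrides always prevail, and everything else is escalated. The threshold and scope notions are assumptions of ours.

```python
# Toy authority-sharing arbiter: who acts, and with which decision?

def arbitrate(decision: str,
              robot_confidence: float,
              within_delegated_scope: bool,
              human_override: str | None = None) -> str:
    """Return who exercises authority over the next action."""
    if human_override is not None:
        return f"human decides: {human_override}"        # explicit human authority prevails
    if within_delegated_scope and robot_confidence >= 0.8:
        return f"robot decides: {decision}"              # shared authority, exercised by the robot
    return f"escalate to human: {decision} proposed"     # doubt or out-of-scope: hand authority back


print(arbitrate("slow_down", robot_confidence=0.95, within_delegated_scope=True))
print(arbitrate("administer_drug", robot_confidence=0.95, within_delegated_scope=False))
print(arbitrate("slow_down", 0.95, True, human_override="stop"))
```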
Interdependence guarantees the possibility of artificial autonomy: through contextual relationships a social robot can learn better and better how to act and behave appropriately. If it were primarily independent, without interactional capacity, its autonomy would be empty: it would rather be a different form of automation. The peculiarities of artificial autonomy, which we introduced above in negative terms, now find an explanation in the dependence existing at the social level between human and artificial agents. Assuming that a social robot is always equipped with machine learning algorithms in dialogue with sensors, actuators, and effectors, its ability to perform reasoned and non-trivial actions derives from its being in relationship with human agents: its computational structure and its synthetic-sensory body, constituting a sophisticated whole, ensure that the robot is able to learn from the human with whom it interacts. It is in the relationship, and therefore in the mutual dependence, that learning is built and improved, both from a motivational and an ethical perspective. The adaptation to different and complex contexts, both environmental and cultural, can be explained in the same way: the interaction with the human makes the robot understand what is appropriate to do in a given situation, even in the face of unexpected changes (Redfield and Seto
2017; Murphy
2019). By doing so, the artificial agent makes sense of what happens in its context (Lawless et al.
2019): in social cooperation, the robot expands its representation of the situation so as to act consistently with the general goals entrusted to it. At first, this autonomous artificial agent can be quite unpredictable (though not necessarily harmful), but as it continues to learn, and therefore to interact socially, human supervision can be gradually reduced, making the robot more and more independent, albeit never completely. The human presence remains, if only for learning, but its control fades. In interdependence, the artificial agent gradually acquires certain characteristics of autonomy that allow it to better help the human operator or user.
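As a closing illustration of supervision that fades without ever disappearing, consider a toy function (thresholds and decay entirely of our own choosing) that lowers the probability of human review as the robot's demonstrated reliability grows, while keeping a non-zero floor.

```python
# Illustrative sketch of supervision that 'fades in control' as competence is
# demonstrated: the probability of reviewing the robot's next action decreases
# with its observed success rate but never reaches zero.

def supervision_probability(successes: int, trials: int,
                            floor: float = 0.05) -> float:
    """Probability that a human reviews the next action.

    Starts at 1.0 with no evidence and decays toward a non-zero floor,
    reflecting the claim that oversight is reduced but never removed.
    """
    if trials == 0:
        return 1.0
    success_rate = successes / trials
    return max(floor, 1.0 - success_rate * (trials / (trials + 10)))


print(supervision_probability(0, 0))      # 1.0  : full oversight at the start
print(supervision_probability(18, 20))    # 0.4  : oversight reduced
print(supervision_probability(200, 200))  # 0.05 : minimal but never absent
```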