These are just some of the motivations for humans to have avatar proxies. But what are the potential consequences of such arrangements?
3.2 The Proxy Epistemic Gap and the Personal Responsibility Risk
The most significant risk of direct harm is that proxies open the represented person up to being held personally responsible for actions that are not within their control. In Sect. 1 I noted two features of proxy representations. The first is that when a proxy, A, both stands for and stands in for a person, B, B is responsible for A's actions. The second is the epistemic gap that arises when a proxy has to decide how to act without explicit instructions. I gave an example in which a proxy does not have explicit information on how to proceed but is responsible for casting a vote on the represented person's behalf. The obvious risk in such a case is that the proxy can make a decision or undertake an action that the represented agent is responsible for but had no direct part in making. Furthermore, it is left unclear how the proxy might appropriately fill the epistemic gap.
The risk that comes from the proxy epistemic gap can arise even in quite tightly constrained cases. For example, you may have given your proxy strict instructions on whom to vote for at the local council election, apparently eliminating any risk of misrepresentation, but you may not have specified what the proxy should do if your preferred candidate pulls out of the race. Should your proxy abstain, or should they choose another candidate for you? Or, to take another example, if your preferred candidate is exposed as a fraud right before the vote is to be cast, is your proxy right to vote for them anyway, even if they don't know how you would respond to this updated information?
What are the duties of proxies in cases where they are faced with an epistemic gap? In particular, where is it appropriate for them to look for supporting or countervailing evidence, and how should they balance these kinds of evidence? A human proxy might first try to fall back on adjacent preferences. For example, if a medical proxy knows that the person they are representing was in favour of donating their organs on their death, and they are asked whether the represented person's body could be donated to medical science, they might take the willingness for organ donation as evidence of amenability to body donation. On the other hand, if they also know that the person had wanted a traditional funeral, this might count as evidence against donating their body to science. The proxy has a responsibility to balance these considerations. Beyond that, the proxy might seek to gather more information, perhaps by asking the represented person's friends or relatives for their opinion of what the preferences might be. But how is the testimony of different individuals to be weighted? Is testimonial evidence from recent interactions to be given more significance than that from historic interactions?
The important point here is that, on reflection, the limits, scope and even the basic mechanics of the proxy relation turn out to be remarkably ill-defined. And this is not just a feature of a lazy set-up; the epistemic gap arises because the proxy relation tries to replicate as closely as possible the mechanics of personal action and decision-making, something that is sensitive to numerous small changes in context.
The epistemic gap is a particularly significant concern in the case of avatar proxies. With human proxies the epistemic gap is left to be filled in an ad hoc, 'common sense' way. How could the epistemic gap be filled with an avatar proxy? The obvious approach would be for the avatar proxy to be equipped with some form of intelligent decision support system: the kinds of systems that are currently used to assist decision-making in areas such as finance, healthcare, marketing, commerce, and cybersecurity. These are systems that are designed to mimic human cognitive capabilities in some way. There are familiar concerns about the reliability of such systems. The high-profile cases are the ones that have made it to court, such as the use of COMPAS to inform court sentencing and the use of algorithmic tools to evaluate teacher performance in some US states. (Rubel et al., 2021) But there are likely to be many more hidden examples where, for example, an automated decision system has resulted in unfair treatment of an individual through the rejection of a loan application or a decision to remove their candidacy from a pool of potential job applicants. As Rubel et al. (2021) put it, 'There is widespread recognition that there are ethical issues surrounding complex algorithmic systems and that there is a great deal of work to be done to better understand them.' (Rubel et al., 2021, p. 10)
Note how the situation of the avatar proxy decision maker differs from those familiar cases of algorithmic harms. In those cases, it is difficult to know where to lay the blame when things go wrong: Is the algorithm designer to blame? The people who selected the data set? The organisation that used the system to make decisions? In the avatar proxy case, the proxy arrangement identifies the responsible agent for us. Under a proxy arrangement the represented individual will directly and personally bear the consequences of mishaps and the responsibility for any resulting harm to others. That is, we already know that actions informed by algorithmic decision systems can have significant negative consequences; the avatar proxy arrangement brings those decisions into the realm of personal action, with all of the personal responsibility that follows from it.
Even leaving catastrophic harms aside, there are less significant wrong decisions that could impact on one's reputation or, minimally, on one's sense of self. We have daily evidence of how it feels when algorithms get us wrong: music and book recommendations that wildly miss the mark, suggestions of things you might like to buy that hold no appeal. Being 'misunderstood' or miscategorised by algorithms can feel like a personal insult. Imagine how it will feel when these mistakes are performed publicly by an avatar proxy that is representing you. Even small and apparently innocuous poor choices could impact on the reputation you have with others, and perhaps ultimately on your own sense of self.
There is a further concern arising from the availability of avatar proxies. Consider again the example of using an avatar proxy in the workplace. The motivation in the example as it was described came from the employee themselves. But there may also be a sinister motivation for employers to encourage avatar-employee proxies. Danaher (2016) has written on the retribution gaps that can arise when an AI system causes some undesirable outcome. In the absence of a human to be held accountable for the outcome, impacted people can sometimes turn to the leaders of organisations or the owners of the relevant technology company to seek retribution. A leader who was eager to redirect responsibility for the actions of their organisation might be motivated to establish proxy relations between employees and avatar representations of them, to provide a plausible associated responsible human agent on whom to hang the blame if things go wrong. That is, proxy AI systems could provide a direct route to a 'responsible' human agent to act as scapegoat, despite the epistemic gap that will exist between the agent and their proxy.
We might think that an avatar proxy could be given explicit instructions on how to act and that this would remove the risk that arises from the epistemic gap. But real-world contexts are incredibly fine-grained, making it unlikely or even impossible that all eventualities can be considered in advance. As such, in order to be useful, avatar proxies will need to be able to respond to unforeseen circumstances. Perhaps the risk could be averted by stipulating that the avatar simply not make a decision when faced with an epistemic gap? This won't work for two reasons. First, as noted above, the fine-grained nature of decision contexts means that epistemic gaps between the avatar and the represented person are likely to be ubiquitous, so this restriction would severely reduce the usefulness of the proxy arrangement. Second, a decision not to act due to epistemic limitations could have its own significant consequences; that is, we can be liable for the negative consequences of acts of omission as well as those of performance. (Clarke, 2022)
As I have introduced it, a person's responsibility for the actions of their avatar proxy follows from the conditions of the proxy arrangement itself: when a full (non-degenerate) proxy relation obtains, the represented person is responsible for their proxy's actions. As such, whenever we judge that a full proxy relation exists between an agent and their avatar, the responsibility condition kicks in. One might question whether this would actually be the case. Would we actually hold individuals responsible for actions that their avatar proxies perform in either real or virtual environments? I see no reason why we would not. The actions of such proxies can have real-life consequences and, as Danaher (2016) noted, in such situations we tend to look for someone to be held accountable for them; the represented agent who voluntarily entered into the proxy arrangement is the obvious 'someone'. Existing traditional proxy arrangements set a precedent for such lines of responsibility. And even if we try to force a gap between the avatar proxy and the represented person, perhaps by arguing that the relation between them is more like an employment relation than a standing-in-for relation, under the legal doctrine of 'respondeat superior' an employer is responsible for the acts or omissions of its employees. (Thornton, 2010) Either way, when looking for someone to hold responsible for an avatar proxy's actions, it seems inevitable that the represented person will be liable.
Clearly there are significant personal risks that arise directly from the avatar proxy relation. If these risks are to be limited, an ad hoc or common-sense approach to proxy arrangements will not do. We need to understand the precise limits of the proxy relation, the responsibilities of the proxy and the responsibilities of the represented agent. But increasingly sophisticated avatar proxies are being introduced without any such clarity.
The significant question we should ask ourselves is this: can the loosely defined human-to-human proxy relation be transferred to avatar proxies in a way that does not expose us to significant personal risk? If not, we would be well-advised to be wary of the creeping use of avatar proxy representation.