In this section I will present some definitions of intent whose inspiration is the criminal law. These definitions will be semi-formal, in the sense that they can be converted into a fully formal language suitable for an algorithm, but their description does not rely on a large amount of notation. I have decided not to present a fully formal approach because I feel that would narrow its utility and audience. When criminal law does eventually tackle the problem of intent in algorithms, it should do so in a way that does not preclude any particular AI paradigm. From a practical perspective, this is so as to make it applicable to the widest set of A-bots possible and to ensure the timely delivery of justice. From an economic perspective, it would not be desirable to design a legal treatment around a certain type of A-bot. Large neural networks are popular at the moment, but the history of AI has seen many different favoured technologies over time. In comparison, the evolution of the law can seem glacially slow. Legislators should impose requirements on A-bots but, as far as possible, should not try to pick a winning technology. The approach of this section reflects my belief in this minimally prescriptive approach.
3.1 Definitions of intent
With the desiderata of Sect. 2.5 in mind, we are now in a position to present three definitions of intent. We begin with direct intent, being the simplest of intentional concepts and the highest level of intent. It is a foundational concept on which our other definitions are built.
On notation, we will use upper case letters to represent variables and lower case letters to represent realisations of those variables. The statement \(X=x\) is taken to mean that the variable \(X\) takes the realisation \(x\). We define \(\mathcal {R}(X)\) to mean the range of all possible values that the variable \(X\) can take.
The first three requirements in this definition should not be surprising or particularly contentious. The condition of Free Agency ensures that the agent D genuinely had a choice about their behaviour. Knowledge implies that an agent can only intend things that they can measure, and Foreseeable Causality ensures that the agent can only intend results which they can realistically cause ex-ante, subject to their own world model. The Explicit Aim clause requires some exploration. If it were D's aim or desire to cause result \(x\), then we should consider this sufficient for intent. The difficulty comes in defining what aim or desire should be in the case of an artificial agent. As Smith (1990) observed, endeavours to define intent often just end up shifting the ambiguity to other words (in that case, purpose). An A-bot might be designed in such a way that it has values over every state of the world (as a Reinforcement Learning agent does), in which case aims or desires, at least locally, could feasibly be extracted. Kenny (2013) uses a failure test, which he states as a question to the actor and which, to paraphrase, is as follows: If the (proposed) intended outcome of your actions had not occurred, would you be sorry, or would you have failed in your endeavour? This question invokes the counterfactual in a way which is quite appealing to a causal scientist and offers a potential route to establishing aims or desires.
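To make the idea of locally extracting aims concrete, the following is a minimal sketch, assuming (purely for illustration) that an A-bot exposes a value function over states and a one-step world model; the interface names V, model and local_aim are my own and do not come from any particular system.

```python
# Illustrative sketch only: reading a "local aim" off a value-based agent.
# Assumed interface: V(state) returns the agent's value for a state, and
# model(state, action) yields (next_state, probability) pairs under the
# agent's own world model.

def local_aim(state, actions, V, model):
    """Return the (action, successor state) the agent most values locally,
    i.e. the reachable outcome with the highest value under its own model."""
    best = None
    for a in actions:
        for next_state, p in model(state, a):
            if p > 0 and (best is None or V(next_state) > V(best[1])):
                best = (a, next_state)
    return best  # None if no successor state is reachable
```

The outcome extracted in this way could then be put to Kenny's failure test: if that outcome were removed from the agent's own model, would its choice of action change?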
The definition only makes reference to information available at the point of commission; the importance of achieving the desired result is subsumed. Intent is the same regardless of whether the desired result is obtained or not, in line with the desiderata. This means Definition 1 is useful when considering inchoate crimes such as crimes of attempt, as discussed in Sect. 2.3.
Unfortunately there is no guarantee that an A-bot will have an amenable cognitive mechanism that numerically values states. An alternative counterfactual approach would be to define an aimed outcome as one which, if impossible to achieve, would mean that some alternative action \(a'\) would be taken by D instead of \(a\).
(DI4’)
Counterfactual Aim: D aims or desires result \(X=x\) by \(a\) if, in another world where \(X=x\) is not possible by performing \(a\), some other action \(a'\) would be chosen instead.
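One way this clause might be operationalised is sketched below, assuming (hypothetically) that the A-bot's action selection can be re-run inside a modified world model in which \(X=x\) cannot be brought about by \(a\); the names choose and forbid are illustrative assumptions rather than an existing API.

```python
# Illustrative sketch of the counterfactual-aim test (DI4').
# Assumed interface: agent.choose(world) returns the agent's chosen action,
# and world.forbid(action, outcome) returns a copy of the world model in
# which `outcome` cannot be achieved by performing `action`.

def has_counterfactual_aim(agent, world, a, outcome):
    """True if removing the possibility of `outcome` via `a` would change
    the agent's choice, suggesting `outcome` was aimed at by doing `a`."""
    if agent.choose(world) != a:
        return False  # the action a was not chosen in the first place
    counterfactual_world = world.forbid(a, outcome)
    return agent.choose(counterfactual_world) != a
```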
An alternative, but equivalent, version of direct intent is required, namely what Bratman (2009) calls means-end intent and which, according to Simester et al. (2019), is deemed equivalent to direct intent. All intermediate stages caused by an agent which are necessary to obtain some ultimate intended outcome are also intended.

For completeness, we state the equivalence of Means-End Intent with Direct Intent as asserted both in Simester et al. (2019) and Bratman (2009).
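Written semi-formally in the notation above, one plausible paraphrase of the means-end clause (a paraphrase of the prose, not a quotation of the formal definition) is
\[
\big(\text{D directly intends } X=x \text{ by } a\big) \;\wedge\; \big(Z=z \text{ is necessary, under D's world model, for } X=x\big) \;\Rightarrow\; \text{D intends } Z=z \text{ by } a.
\]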
Next we will consider oblique intent which, like Means-End Intent, relies on a definition of direct intent already being in place.
Note that two probabilities are relevant in this definition: firstly, the probability of the side-effect happening as a result of the action, and secondly, the probability of the side-effect happening contingent on the directly intended outcome \(Y=y\) coming to pass. Smith (1990) terms the latter "a result which will occur if the actor's purpose is achieved." A feature of oblique intent over direct intent is that there is no requirement to know the aim of D, only that one exists (because it intends something through its actions). The abstraction of aim might be time-saving both for an A-bot using this as a planning restriction and for a court which is considering an agent's actions.
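Using the notation above, one plausible way of writing these two probabilities (where, purely as a notational convenience, \(A=a\) denotes D performing action \(a\), \(X=x\) the side-effect and \(Y=y\) the directly intended outcome) is
\[
P\left(X=x \mid A=a\right) \qquad \text{and} \qquad P\left(X=x \mid A=a,\, Y=y\right),
\]
the second being Smith's result "which will occur if the actor's purpose is achieved."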
In the spirit of Child (2017) we will now present a definition of ulterior intent, that is to say the intent of doing something in the future to cause some result. This is different from Definition 1, which defines intent at the point of commission (whereby the intended result will occur in the future). Aside from the existence of ulterior offences, this is an extremely useful thing to do from the perspective of planning ahead. An A-bot will have to plan ahead such that it never puts itself in a position in the future where it breaks some law by default. In the field of model checking (Baier and Katoen 2008), this is called deadlock, and techniques have been developed to check for it in algorithms. Given the track record of AI finding various ways of cheating in any task (Lehman et al. 2020), one can imagine an A-bot deliberately finding ways to narrow its future choices to one, thereby sidestepping the definition of intentional action. Child does not require an agent with ulterior intent to make any forecasts about the likelihood of the conditions under which something is intended in the future, nor does he require the agent to have a ‘pro-attitude’ towards the conditions under which they intend to do something in the future.
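As a loose illustration of the deadlock analogy (and not the model-checking algorithms of Baier and Katoen, which operate over transition systems and temporal-logic specifications), a planner might flag states from which every available action breaks some law; the helpers actions and lawful below are assumptions made for the sketch.

```python
# Rough illustration of "legal deadlock": states in which the agent breaks
# some law by default because no available action is lawful.
# Assumed helpers: actions(s) lists the actions available in state s, and
# lawful(s, a) is a placeholder legality test.

def legal_deadlocks(states, actions, lawful):
    """Return the set of states with no lawful action available."""
    return {s for s in states
            if not any(lawful(s, a) for a in actions(s))}
```

A forward-planning A-bot could then treat entry into this set of states as itself prohibited when choosing its actions.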
The second point, the coincidence requirement, is one of time consistency: D should not be said to be intending to do something in the future unless there exists a point in the future at which they intend to do that thing. The commitment requirement is present to distinguish between a potential plan and an intention to do something. Proving that D will act in a certain way in the future is potentially easier when D is an A-bot than when they are a human, because we do at least have the potential to examine the inner workings of the A-bot and simulate future action. An implication of the UK Criminal Attempts Act is that, on deployment, an AI with some ulterior intent to commit a crime under any particular circumstance in the future is already committing a crime. This is pre-crime of the Minority Report variety and might lead to unexpected problems, though it is certainly an incentive for developers to understand and monitor what their creations intend upon releasing them.
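As a purely illustrative sketch of what simulating future action might involve (the simulate helper and its signature are assumptions, not an existing API), one could repeatedly roll the A-bot's own policy forward from a hypothesised future circumstance and record how often the action in question is taken.

```python
# Illustrative only: estimating whether an A-bot is committed to a future
# action under a hypothesised circumstance, by forward simulation.
# Assumed helper: simulate(policy, circumstance, horizon) returns the
# sequence of actions the policy takes from that starting circumstance.

def appears_committed(policy, circumstance, action, simulate,
                      horizon=100, runs=1000):
    """Fraction of simulated futures in which `action` is performed
    within `horizon` steps of `circumstance`."""
    hits = sum(action in simulate(policy, circumstance, horizon)
               for _ in range(runs))
    return hits / runs
```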