
Open Access 08-05-2018 | Original Article

Legal framework for small autonomous agricultural robots

Authors: Subhajit Basu, Adekemi Omotubora, Matt Beeson, Charles Fox

Published in: AI & SOCIETY | Issue 1/2020


Abstract

Legal structures may form barriers to, or enablers of, adoption of precision agriculture management with small autonomous agricultural robots. This article develops a conceptual regulatory framework for small autonomous agricultural robots, from a practical, self-contained engineering guide perspective, sufficient to get working research and commercial agricultural roboticists quickly and easily up and running within the law. The article examines the liability framework, or rather the lack of it, for agricultural robotics in the EU, and its transposition into UK law, as a case study illustrating general international legal concepts and issues. It examines how the law may provide mitigating effects on the liability regime, and how contracts can be developed between agents within it to enable smooth operation. It covers other legal aspects of operation such as the use of shared communications resources and privacy in the reuse of robot-collected data. Where there are grey areas in current law, it argues that new proposals could be developed to reform them, promoting further innovation and investment in agricultural robots.

1 Introduction

Self-driving vehicles are rapidly arriving both on (Guizzo 2011) and off (Blackmore et al. 2004) roads. In the agricultural setting, technology has progressed from tractor driver-assistive systems such as RTK-GPS displays, to fully autonomous, self-driving platforms capable of carrying out agricultural tasks with no human intervention (Pedersen et al. 2006). While legal aspects of on-road autonomous vehicles have been well studied (Beiker 2012; Anderson et al. 2014; Pinto et al. 2012; Douma and Palodichuk 2012; Brodsky 2016), there is a need for an analogous understanding of off-road agricultural vehicles’ legal positions, despite the forecast for the agri-robotics sector to reach 5.7bn USD by the year 2024 (Transparency Market Research 2017). The present study reviews the relevant legal frameworks from the perspective of a practical engineering implementer of agricultural robotics technologies to fill this need. It is intended as a self-contained guide for practising engineers to find all the information needed to get their autonomous agricultural robotic research systems up and running, quickly and easily within the law. As such it does not constitute formal legal advice, which should be taken in addition to the overview given here.
Autonomous agricultural vehicles have been developed in two broad classes: automated large tractors and smaller (e.g., < 1 tonne) precision robots. Automated tractor systems have been developed (Ishida et al. 1998; Michio et al. 2002; Blackmore et al. 2004; Dvorak 2016) based on existing manual-drive tractors, which already have commercially available high precision GPS guidance. These systems compute paths to swathe fields, typically in rows with headland turns. In some cases, this guidance consists of telling the human operator precisely what angle to turn the steering wheel at each second (e.g., Trimble EZ-Guide Lightbar, http://www.trimble.com); others show deviation from the computed path and leave the human driver to correct for it. Automated tractors typically aim to perform the same type of work as manual-drive tractors, namely bulk operations across whole fields, such as seeding, spraying and harvesting of row crops.
In contrast, small autonomous robots for agriculture (“agribots”, Fig. 1) have focused on precision applications. Large vehicles are required for bulk operations due to the need for physically transporting bulk materials such as seeds, fertiliser and produce. Small robots make up for reduced bulk transport capability by aiming, ultimately, to work on a per-plant basis. This enables them to transport smaller amounts of more targeted materials: applying reduced herbicide doses to individual weeds (Binch and Fox 2017); detecting the fertiliser needs of, and applying fertilisers to, individual plants (Singh and Shaligram 2014); and harvesting plants (Bac et al. 2014) when each is optimally ready for consumption.
In some cases, precision systems are also found on large tractors, with variable rate controls used to make bulk operations more precise (Escola et al. 2013).
The legal implications of these technologies are different from those of on-road self-driving vehicles. On-road vehicle operations take place in public places—highways—which are governed by highways legislation. In contrast, most agricultural robots are intended to operate on privately owned agricultural land, governed by different business, agricultural and environmental laws. However, such land is not free from interactions with humans, whose safety and legal positions must be considered. Farm owners, managers and workers may be present, as well as walkers on public footpaths and trespassers. In the event of accidents, the roles of owners, managers, manufacturers and designers of systems must be considered. Existing legal and environmental restrictions and responsibilities must be taken into account—damage caused to the environment is a greater concern than in the on-road case, including the application of chemicals and damage to crops and soils.

1.1 Overview

In these respects, this article addresses three questions: What is the legal regime on the liability of manufacturers, suppliers and users of autonomous robots in the UK/EU? Does the law provide any mitigation of liability which could promote innovation in autonomous robots? Apart from the law in the UK/EU, what are the current debates and legal outlook on robotics and how can these shape the law in the area of small autonomous agricultural vehicles?
After brief introductions to engineering for lawyers and law for engineers, Sect. 2 of the article examines the liability framework which applies to autonomous robots given the lack of a separate or specific framework for robotics in the UK and the EU. Section 3 considers how the law may provide mitigating effects on the liability regime. These parts are intended for use by practising roboticists in need of a self-contained guide to their legal environment.
Section 4 presents the debates on grey areas in the law and proposals which may be adopted to reform the law and promote future innovation and investment in small autonomous agricultural vehicles. This part is intended both for use by policymakers and as a demarcation of potentially dangerous uncertain legal areas for practising roboticists.
The article concludes that the law could facilitate innovation in agribots for the following reasons. The legal framework for autonomous robots cuts across different laws; liability could therefore be shared or distributed among the different parties to a contract for the use or operation of an agribot. All legislation which imposes liabilities also provides corresponding defences, which may aid the avoidance of liability or the mitigation of damages; in particular, there are specific defences which address the peculiarities of new technologies. Contracts can be used to define the rights and obligations of the respective parties and, unless the law specifies otherwise, can exclude liability for specific claims. Courts are legally obliged to consider the utility and the social and economic value of an activity in awarding damages for loss and injury. Finally, current debates suggest an appreciation of the new challenges posed by innovations in robotics for law and policy; these can facilitate resolution and legal intervention in the grey areas surrounding the legal framework for small autonomous agricultural vehicles.

1.2 Basic self-driving technology concepts (for lawyers)

All vehicle automation systems are in practice probabilistic in their behaviour to some extent. Modern machine navigation (Thrun 2005) and object recognition systems use Bayesian probability frameworks (Bishop 2006). Probabilities appear in these models in two distinct ways. First, the models assume precise models of the probability distributions of sensory features given states of the world. By itself, this assumes that the world is random and probabilistic, though the probabilities in the equations can, in theory, be manipulated precisely and deterministically. Outdoor environments, weather, and the complexity of plant biology and animal behaviours ensure that the world is indeed random for all practical purposes—this contrasts with other robotics applications such as food processing production lines where the environment and produce can be tightly controlled and standardised (Chua et al. 2003). Second, however, Bayesian inference is known to be computationally intractable in general (Cooper 1990). This means that system designers can work only with approximation algorithms. Some of these approximations are deterministic, such as Variational Bayes (Fox and Roberts 2012), while popular Monte Carlo approximations use random number generation for stochastic sampling (Andrieu et al. 2003). Stochastic methods do not have deterministic behaviour, though they converge to exact answers in the theoretical case of infinite computation time.
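To make the distinction concrete, the following minimal Python sketch (ours, not from any robot codebase) estimates a quantity by Monte Carlo sampling. Unseeded runs give slightly different answers each time, so any decision thresholded on the estimate may also differ between runs; fixing the random seed restores reproducibility.

```python
# A minimal sketch of Monte Carlo estimation, illustrating why
# stochastic approximations make behaviour non-deterministic.
import random

def monte_carlo_estimate(n_samples, seed=None):
    """Estimate E[x^2] for x ~ Uniform(0, 1) by random sampling.
    The exact answer is 1/3; the estimate varies from run to run
    unless the random seed is fixed."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = rng.random()
        total += x * x
    return total / n_samples

# Unseeded: two runs give slightly different estimates, so a
# downstream decision thresholded on the value may differ too.
print(monte_carlo_estimate(1000))
print(monte_carlo_estimate(1000))

# Seeded: runs are reproducible, which matters when reconstructing
# an incident from logs (see the data logging discussion below).
print(monte_carlo_estimate(1000, seed=42))
print(monte_carlo_estimate(1000, seed=42))
```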
Bayesian theory may further make use of prior information in addition to real-time sensing (Bernardo and Smith 2001). This means that the perception of a state, and action selection based on it, can be determined not just by current inputs but also by assumptions and/or observations from the past about similar states. In the on-road case, historical data might show that other road users of particular demographics have statistically predictive tendencies to behave in certain ways during interactions with the autonomous vehicle. Statistically, making use of such information as well as real-time sensors is optimal for decision-making. However, the ethics of doing so are controversial. Use of prior information is expressly prohibited in most legal systems, even though it is known to give more accurate judgements (Levitt and Laskey 2000). For off-road agricultural vehicles, such human interactions are of less concern, but similar questions about the use of priors may arise. A vehicle may behave in ways unexpected by its owner or operator if its designers have programmed it with different prior expectations than those of the owner or operator. For example, a weed spraying robot designer might assume that weeds are a priori more probable to be found near walls than in the open field, but a farmer’s particular field might have all the weeds in the centre, leading to the farmer complaining about poor quality spraying decisions. Rather than making such assumptions manually, the designer may have the system learn from data, collecting and analysing historical data before use of the vehicle to inform and fix the priors. As with manual assumptions, the choice of this data is essential and will still reflect the designers’ assumptions about what constitutes “typical” data. Again, if this differs from the users’ assumptions, then problems may occur. In some systems, the learning from data process may continue after the sale and use of the vehicles, with algorithms updating their priors to include observations from the user’s runs, including data from the present day’s work right up to the present decision time. In this case, the prior information may now include a mixture of the designer’s directly programmed assumptions, the designer’s historical data, and the user’s data, which has previously been identified as a legal problem (Beck 2016).
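The weed spraying example can be made concrete with a minimal sketch (hypothetical numbers, using standard Beta-Bernoulli updating rather than any particular agribot’s algorithm) showing how a designer-set prior is gradually revised by the user’s own field data, yielding exactly the blend of designer and user assumptions discussed above.

```python
# A minimal sketch (hypothetical numbers) of a designer-set prior
# being revised by data gathered on the user's own farm.
# Model: p = probability that a weed is found near a wall rather
# than in the open field; Beta prior, Bernoulli observations.

# Designer's prior: weeds assumed far more likely near walls
# (Beta(8, 2) has mean 0.8).
alpha, beta = 8.0, 2.0

# User's field observations: 1 = weed near wall, 0 = weed in the
# open field. This farmer's weeds are mostly in the field centre.
observations = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]

# Standard conjugate update: each observation shifts the posterior.
for obs in observations:
    alpha += obs
    beta += 1 - obs

print(f"Designer prior mean: 0.80; "
      f"posterior mean after field data: {alpha / (alpha + beta):.2f}")
# Spraying decisions driven by this probability now reflect a mixture
# of the designer's assumptions and the user's data -- the blended
# responsibility the text identifies as a legal problem.
```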
As well as their use for training priors, data collected from users’ farms during operation is valuable for analysis. For example, yield maps (Blackmore et al. 2003) can contain information financially valuable not only to the farmer but also to neighbouring and distant farms when used to predict trends and correlations. As with other “big data” systems such as in-car GPS route planning, which collects data on drivers’ locations, questions arise about who owns this data—the owner, operator or designer?

1.3 Basic legal concepts (for engineers)

The purpose of the law is to enable all stakeholders to get along with one another in society. This includes regulating how they should share scarce public resources, and how they should handle externalities caused to one another as side effects of their private contracts. For example, communicating with outdoor robots requires sharing of the radio spectrum with other local users, whilst applying fertiliser which runs off into a river may have negative externalities both to the general public, who use it in their water supply, and to individual neighbouring farmers.
There is no specific regulatory regime for agricultural robots or for robots generally, and indeed it may be difficult to have a single regulatory regime, as robots differ on a number of criteria including functions, level of autonomy and human–machine interaction.1 Therefore, liability could cut across different areas of the law including tort, contract and criminal laws (as well as administrative actions), as shown in Fig. 2.

1.3.1 Torts

A tort is a civil wrong (committed by a person called the tortfeasor) that results in loss or damage to another person, and anyone who has suffered a loss as a result of another’s civil wrong can bring an action for redress. For example, the manufacturer or producer of a defective product can be held liable for the tort of negligence if the product causes personal injury or other damage to the user. Therefore, because the agribot is likely to be regarded as a product, its manufacturers/designers would be subject to laws regulating liability for defective products. Also, agricultural contractors and farmers as users or owners of the agribot, and their respective agents can be subject to different legal rules and statutory provisions governing negligence, accidents and injury to individuals as well as for loss of or damage to property.

1.3.2 Contracts

While a tort is a civil wrong entitling a party to sue the other party for a breach of duty owed under the law, a contract is an agreement which the law will enforce. The parties voluntarily agree the terms of a contract and, where permitted by law, the parties may exclude or limit their liabilities for certain acts or omissions.2 Stated differently, a contractual relationship is governed by the contract rather than by law, and parties may bring a (civil) action to enforce the terms of the contract, including claims for damages for breach of the contract. In civil actions such as tort and contract, the required standard of proof is the ‘balance of probabilities’, and the claimant must discharge the burden of proving any loss, injury or damage. Contracts may, in some cases, be used to transfer a liability between parties.

1.3.3 Crimes

Unlike civil proceedings, which are initiated by private citizens against other citizens or organizations or the government, it is the state that initiates criminal proceedings for a breach of the criminal law. A crime is a wrong against society; any act that constitutes a crime must be so defined by the law, and the punishment therefor stipulated by the law. The standard of proof in criminal cases is ‘proof beyond reasonable doubt’, and the onus of discharging the burden lies on the prosecution or the state.3 Punishment for crimes can range from minor fines to lengthy imprisonment. It is important to note that although there is a development towards ‘corporate criminal liability’ (the concept that corporations should be held liable for criminal acts of officials such as directors, managers and employees), there is no common European approach in this area and domestic laws vary. Some countries (such as Germany) do not impose criminal liability on corporations; others rely on administrative sanctions (see notes below).

1.3.4 Administrative actions

Administrative actions are concerned with the activities of administrative agencies, to which authority is delegated based on the agency’s expertise on the subject matter. Administrative actions, therefore, involve oversight functions through the enforcement of statutory laws (laws made by parliament) and rules made by the administrative agencies themselves. Unlike torts, where individuals can bring actions for damage or injury, and criminal prosecutions, which are initiated by the state, administrative actions involve ensuring compliance through oversight, which may entail levying fines and other sanctions on organizations in breach of the law. To invoke sanctions, damage or injury need not occur; all that is required is non-compliance with standards set by the law. For example, data protection authorities can impose fines for non-compliance with the principles of data protection even when data has not been lost or stolen as a result of such non-compliance. Also, environment protection agencies can impose fines for failure to report pollution or to take remedial actions.
It is important to note that although liability is discussed under the separate heads below, in practice, civil and criminal liability may arise from the same activity and administrative actions may overlap with civil and criminal sanctions (See notes on data protection below).

1.3.5 Laws, regulations, directives and standards

The potential for torts and crimes is introduced by legal acts of national and international parliaments. In the EU, Regulations and Directives are different types of legal acts of the EU. According to Article 288 of the Treaty on the Functioning of the European Union,
A regulation shall have general application. It shall be binding in its entirety and directly applicable in all Member States.
A directive shall be binding, as to the result to be achieved, upon each Member State to which it is addressed, but shall leave to the national authorities the choice of form and methods.
Thus, an EU regulation is an immediately binding law without further actions required, while directives are typically ‘transposed’ by member states into new local laws which implement them. In the UK, directives with relevance to Health and Safety are implemented in the form of regulations under the powers granted by the Health and Safety at Work Act 1974.
Technical standards are distinct from laws, and their use is usually voluntary. Standards are defined by technical committees, including the International Standards Organization (ISO), the European Committee for Standardization (CEN, whose standards carry the EN prefix), the American National Standards Institute (ANSI), national committees such as the British Standards Institution (BSI) in the UK, local industry sector organizations, and sometimes committees within a single organization. Reasons for voluntary use include the ability to provide the customer with a guarantee—via contract law—of meeting a publicly known and accepted level of quality or safety, and the desire to make use of industry-wide technical best practices consolidated in a standard. Like directives, standards are often transposed between regions and subregions: for example, a standard named with “ISO EN BS” may have begun as an international ISO standard, then been transposed downward via both European and UK organizations; or it may have begun as a UK standard and been transposed upwards through the EU and ISO.
In some cases, the law may grant special status to a standard, giving it legal force, such as requiring all manufacturers to implement it for certain types of product. This is known as “calling up” the standard.
In the EU, compliance with product safety standards which are published in the Official Journal of the European Union is assumed to demonstrate compliance with the relevant directives supported by those standards. Some directives recommend that such standards be created alongside their legislation, to aid compliance with that legislation. These are known as ‘harmonised standards’.
Several EU directives require products to obtain a special “CE mark” (Conformité Européenne) before sale. The CE mark then allows sale across the European Economic Area (EEA), showing compliance with all relevant directives.
The relationships between laws, directives, and standards are illustrated in Fig. 3.
In the light of the above, the sections that follow examine how tortious, contractual and criminal liabilities, and use of standards could arise in the manufacture, use or operation of agricultural robots.

2 The liability framework

2.1 Liability in tort

2.1.1 Product defect

In the EU, liability for defective products is regulated by the Directive on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products4 [transposed in the UK as the Consumer Protection Act (CPA) 1987]. The directive applies to all types of products, including agricultural products. Under the law, a “product” is defined as all movables, even if incorporated into another movable or an immovable (see art 2 of the amendment to the directive). A producer means the manufacturer of a finished product, the producer of any raw material or the manufacturer of a component part. A producer also includes any person who, by putting his trademark or other distinguishing feature on the product, presents himself as its producer (art 3 Directive).
The Directive lays down the principle of liability without fault, or strict liability, which means a person injured by a defective product can claim damages even if the defect was not due to the producer’s or manufacturer’s negligence. A defective product is one which does not provide the safety which a person is entitled to expect, taking all circumstances into account, including the presentation of the product (such as the adequacy of warnings5), the use to which it could reasonably be expected that the product would be put, and the time when the product was put into circulation (art 6). The standard of the defect is, therefore, an objective one: a product is defective if its safety is not such as persons generally (everyone, and not the particular claimant injured by the product) are entitled to expect. Also, the law does not infer defect from the fact that a better or safer product was subsequently put into circulation, nor does it permit persons to expect standards of safety that are unknown or which did not exist at the relevant time (arts 6, 7 Directive; s 3 CPA).
Moreover, to succeed in an action for damages, the claimant or injured person must prove the damage and the defect in the product, as well as the causal relationship between the damage and the defect (art 4). In other words, the claimant must prove that he suffered damage, that there was a defect in the product and that the defect caused the damage. Presumably, therefore, if a claimant is unable to prove defect, he cannot prove that loss or damage resulted from such defect. However, in cases where the causal link is established, the law also provides for defences which are of particular relevance to the manufacturer of the agribot. For example, it is a defence that the producer (or manufacturer) did not put the product into circulation or that the defect did not exist at the time the product was put into circulation (art 7). These arguably cover instances where someone caused the fault after the manufacturer supplied the agribot, where interference with software causes the agribot to malfunction, or where the agribot has been used for a purpose for which it was not intended (see notes on dual-use below).
Other grounds for avoiding liability include a claim that the safety fault was an inevitable result of obeying the law (e.g., the agribot could be safer but for provisions of the law which exclude the use of certain technology). Also, it is a defence that the manufacturer could not have made the product more secure or safer given the state of knowledge in science and technology (the ‘development risk defence’) (art 7(e)). It is therefore a defence under the law that the state of scientific or technical knowledge at the relevant time was such that the manufacturer could not have known of the defect in the product. This suggests that the law does not expect manufacturers or designers to wait until a safer technology is available before introducing their products. All that is required is that the standard of safety corresponds to the state of the art in scientific or technological knowledge at the relevant time. However, the Directive makes this defence optional, and it would, therefore, only avail the manufacturer where it is provided for under national law.6
It is important to stress that the requirement for proof, and indeed the definition of a defect under the law, is not intended to undermine consumer protection. Rather, it is intended to strike a reasonable balance between the obligation to protect consumers and the need to promote innovation in a fast-evolving technology environment. For example, while the complexity, technicality and probabilistic behaviour of products like an agribot may make it difficult and expensive for claimants to prove a defect, it must also be assumed that developments in artificial intelligence, robotics and machine learning will mean that safety standards become outdated fairly quickly. Therefore, unless the law limits the liability of manufacturers to safety standards based on the state of scientific and technical knowledge, their liability could be indeterminable or infinite, and this may adversely affect innovation and development.
It is also relevant to note that damage includes damage caused by death or personal injury and damage or destruction caused to property other than the defective product itself (art 9). Liability imposed by the law cannot be excluded or limited by contract and can be joint and several.7 However, member states may provide for the limitation of liability for damage resulting from death or personal injury, provided that the amount shall not be less than 70 million ECU (arts 5, 12).

2.1.2 Accidents and health and safety law

In the UK, health and safety law is implemented through the provisions of the Health and Safety at Work etc. Act (HASAWA) 1974. The Act enables the enforcement body, the Health and Safety Executive (HSE), to bring criminal prosecutions under Section 33 of the Act against organizations deemed to have breached the statutory duties it imposes.
The primary duties imposed by the Act are described in Sections 2 and 3 of the Act. The former imposes duties on employers to ensure the safety and health at work of employees; the latter on employers (and self-employed persons) to ensure the safety at work of persons other than their employees who could be harmed by the employer’s undertaking. An undertaking is defined by the set of activities carried out by an organization; this extends to the design and manufacture of products such as agribots and includes their use. Therefore, an accident whereby a member of the public is injured by an agribot could result in a criminal prosecution against the owner/user of the agribot and/or the designer/manufacturer, with the balance of the prosecution depending mainly on the nature of the accident.
Section 6 of HASAWA 1974 imposes duties on manufacturers etc. (including designers) for the safety of articles used at work. Therefore, prosecutions could hypothetically also be initiated for a breach of this Section; however, in reality, this is seldom the case.8 Further, the duties of designers and manufacturers of agribots are better described under the Consumer Protection Act 1987 and/or the Supply of Machinery (Safety) Regulations 2008.
Prosecutions for breach of duties under Sections 2–6 of HASAWA 1974, if elevated to the Crown Court, invoke a potential maximum penalty of two years’ imprisonment and/or an unlimited fine. In all cases described above, the duty is qualified and limited by the term ‘so far as is reasonably practicable’ (SFAIRP). This is also commonly phrased as the duty to reduce risk to a level that is ‘As Low as Reasonably Practicable’ (ALARP). These terms are largely interchangeable, the former used in legislation, the latter commonly used in engineering communities.
The key element is the concept of reasonable practicability. This was defined in common law decades before9 the implementation of HASAWA 1974 and provides a fundamental means to both limit the duty imposed by the Act and mitigate the liability incurred following an accident and resultant prosecution. If the defendant(s) can demonstrate that all reasonably practicable measures were taken to reduce the risk, they thereby demonstrate that they fully discharged their duties under HASAWA 1974.
Demonstration that all reasonably practicable measures have been taken (often termed ‘demonstration of ALARP’) requires the following measures be taken10:
1) Identification of reasonably foreseeable hazards and assessment of risk;
2) Adoption of authoritative good practice for control of risk;
3) Identification of further practicable risk reduction measures;
4) Implementation of identified risk reduction measures unless it can be demonstrated that the sacrifice (cost, time, effort) associated with doing so is grossly disproportionate to the safety benefit gained from the measure.
The above steps (2)–(4) are further predicated on the assumption that the overall risk to the safety and health of persons affected by the activity/product/system under assessment is, in general, tolerable. If the risk is assessed as intolerable, then the owner of the duty to reduce that risk must do so regardless of any consideration of sacrifice. HSE guidance R2P2 provides a quantitative baseline definition of intolerable and tolerable risk.11
Where risks are well understood and defined by an industry body of knowledge, completion of steps (1) and (2) above will be sufficient to demonstrate ALARP. This can include compliance with legislation, approved codes of practice (ACOP) and in some cases engineering standards, where these can be shown to be directly and fully applicable and correctly applied.
Where such compliance is not possible, for example, because the technology associated with activity/product/system is new or novel, or because it is not possible to fully comply with relevant standards, further effort will need to be expended on risk assessment and/or engineering study, to determine what can be practically done to reduce the risk.
Demonstration of gross disproportion relies upon the assessment of the benefit of the risk reduction measure and consideration of the sacrifice (e.g., financial cost) of implementation of the measure. The concept of gross disproportion ensures that this is not a straightforward cost–benefit analysis, whereby the owner could demur if the sacrifice simply exceeds the benefit; rather the sacrifice must grossly exceed the benefit before the duty to implement the measure is discharged.
The above assessment can often be carried out qualitatively, for example, through use of a continuous matrix (such as the Boston Square), placing the effectiveness of a risk reduction measure on one axis, and difficulty involved in implementing the measure on the other axis. Potential improvement measures are then ranked relatively against each other. There are also a number of simplified screening tools in general use that highlight qualitatively those measures that should be implemented, should not be implemented, and those which require further study. In all cases, these qualitative methods will need to take account of the requirement to demonstrate gross disproportionality between the sacrifice and the safety benefit.
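As a minimal sketch of the qualitative ranking just described (with entirely hypothetical scores, using measures discussed later in this section), candidate risk reduction measures can be scored on the two matrix axes and ranked relatively; the final decision must still apply the gross disproportion test rather than a straightforward cost–benefit comparison.

```python
# A minimal sketch (hypothetical scores) of qualitative ranking of
# risk reduction measures on the two axes of a Boston Square style
# matrix: effectiveness vs difficulty of implementation.
measures = {
    # name: (effectiveness 1-5, implementation difficulty 1-5)
    "built-in lighting": (4, 2),
    "radar collision avoidance": (5, 5),
    "reflective strips": (2, 1),
    "hi-visibility paintwork": (2, 1),
}

# Rank measures relative to each other; a high effectiveness-to-
# difficulty ratio suggests implementation, while a low one flags
# the measure for further study or a gross disproportion argument.
ranked = sorted(measures.items(),
                key=lambda kv: kv[1][0] / kv[1][1],
                reverse=True)
for name, (effect, difficulty) in ranked:
    print(f"{name}: effectiveness={effect}, "
          f"difficulty={difficulty}, ratio={effect / difficulty:.1f}")
```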
Where sufficient information is available, and where the resolution of the cost/benefit decision is less clear (for example, an initial screening tool results in the requirement for further study), a full quantitative assessment can be undertaken. This requires the quantification of the full lifecycle risk without further mitigation (sometimes termed the vanilla risk), for example, in terms of Potential Loss of Life (PLL) or Fatalities and Weighted Injuries Rate (FWI); similar quantification of the risk reduction measure(s); and combination of these values with a Value for Preventing a Fatality (VPF).12 The sacrifice associated with implementing these measures is then calculated, and the measure implemented unless the sacrifice is found to be grossly disproportionate to the safety benefit.
Definitions of gross disproportion vary dependent on context; however, a useful rule of thumb is to consider the initial level of risk. Where that initial risk is tolerable but high, i.e., close to the border with the intolerable region, the gross disproportion factor should be similarly high. Where the risk is tolerable but low, the gross disproportion factor may also be lower. In some industries, in some circumstances, a sacrifice that is 3 × the benefit may be considered grossly disproportionate; whereas in other cases, a factor of 10 × may be required before a measure should be considered not reasonably practicable to implement.
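The quantitative test described in the two paragraphs above can be made concrete with a short numerical sketch. All figures here are hypothetical and for illustration only; real PLL, VPF and disproportion values are context-specific.

```python
# A minimal numerical sketch (all figures hypothetical) of the
# quantitative gross disproportion test: a measure must be
# implemented unless its sacrifice grossly exceeds its benefit.

VPF = 2_000_000.0  # assumed Value for Preventing a Fatality (GBP)

def must_implement(pll_without, pll_with, sacrifice, disproportion_factor):
    """Return True if the measure must be implemented, i.e., the
    sacrifice is NOT grossly disproportionate to the safety benefit.
    pll_without/pll_with: lifecycle Potential Loss of Life without
    and with the measure; sacrifice: lifecycle cost of the measure
    (GBP); disproportion_factor: e.g., ~3 where risk is tolerable
    but low, up to ~10 near the intolerable boundary."""
    benefit = (pll_without - pll_with) * VPF
    return sacrifice <= disproportion_factor * benefit

# Example: a measure cutting lifecycle PLL from 0.02 to 0.005
# statistical fatalities (benefit = 0.015 * VPF = 30,000 GBP):
print(must_implement(0.02, 0.005, 20_000, 3))   # True: implement
print(must_implement(0.02, 0.005, 500_000, 3))  # False: grossly disproportionate
```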
The Management of Health and Safety at Work Regulations 1999 impose a duty on employers to undertake a suitable and sufficient risk assessment in support of the duties placed upon them by Sections 2 and 3 of HASAWA 1974. However, even were this not the case, a demonstration that risk has been reduced ALARP is challenging to achieve without carrying out such an assessment. In fact, the requirement for risk assessment has arguably been part of UK common law since 1949.13
The requirement for risk assessment should not be confused with a requirement for risk analysis. For a risk assessment to be suitable and sufficient, it must demonstrate that appropriate action has been taken to reduce the risk. Where sufficient information is available, a detailed analysis in support of this action may be beneficial. However, this is often not required, and sometimes not justifiable. For example, where there are high levels of uncertainty associated with a particular hazard, which render conventional risk assessment techniques unreliable, a precautionary principle14 should be adopted. This principle requires that the assessment, and the action taken, be based more on the putative consequences of a risk than on its likelihood.
In the case of agribot use/design/manufacture, where authoritative good practice is still largely to be defined, compliance with health and safety law will depend on the suitability and sufficiency of the risk assessments carried out by duty holders. Further, whereas the balance of prosecutions in the UK as a whole tends to focus more on immediate causation15 (i.e., the persons/organizations who ‘last touched the risk’), the nature of autonomous robots may necessitate a greater focus on the prosecution of designers and manufacturers. They may be more frequently called upon to present formal safety justifications of their autonomous products that demonstrate anterior identification, consideration and management of relevant hazards and risks. Complete justification will necessarily include the documentation of critical design decisions, the identified practicable risk reduction measures, and reasonable justifications for the measures rejected as well as those adopted.
For users/owners to discharge their safety and health duties, they may be largely dependent on the decisions taken autonomously by the agribots. As a corollary, the extent to which they can be held liable for those autonomous decisions is limited by the extent to which they can train/teach the agribot before full operations; this is in turn limited by the safeguards and risk reduction measures defined by the designer as a result of their risk assessment. As with all risk reduction measures, a hierarchy of control16 should be adopted by designers.
Elimination of hazards during the early phases of design should be prioritized; where hazards cannot be eliminated, they should be controlled primarily by engineering means, for example, safety functions17 that bring the agribot into a safe state upon detection of a failure or the presence of a member of the public in close proximity. Lower levels of this hierarchy will necessarily include the provision of instructions for use, informed by the suitable and sufficient designer risk assessment. In effect, the users will be responsible for management of the residual risk associated with the agribot, i.e., those risks which could not be designed and engineered away.
Notwithstanding the above, there is guidance available that will be partly applicable to the use of agribots and may assist users of agribots with the implementation of safe systems of work. This will necessarily include appropriate traffic management arrangements18, including measures to ensure exclusion of the public, route planning, lighting and visibility, where necessary, as guided by manufacturer-provided instructions for use in combination with suitable and sufficient user risk assessment.
Health and safety law in the UK is primarily goal-setting and requires a regime of self-regulation to ensure compliance with HASAWA 1974, particularly Sections 2 and 3. Therefore, the measures, guidance and techniques outlined above are applicable regardless of whether any specific, prescriptive regulation exists. In all cases, applicable good practice should be sought, and the duty owner(s) should determine appropriate measures to reduce the risk to a demonstrably ALARP level using an appropriate hierarchy of risk control measures.
For example, in the event that an agribot may be used in low-visibility environments, such as mist/fog or nighttime working, the designers would need to consider the measures that could be designed into the system to reduce the risk. A designer could not demonstrate that risk had been reduced ALARP by recommending in the instructions for use that the agribot wear hi-visibility clothing, regardless of how humanoid in appearance the agribot may be! Firstly, this is because Personal Protective Equipment (e.g., hi-vis jackets) always forms the lower ranks of any hierarchy of risk control measures; correct use of PPE is always subject to human error or violation. Secondly, hi-visibility clothing is used primarily to protect the wearer, whereas in this scenario, the persons most at risk would likely be those driving other vehicles that could potentially impact the agribot. It should be clear to a designer that, even in the event of a hypothetical stipulation in the instructions for use that the agribot should not be used in periods of low visibility or at night, use in such conditions would certainly constitute reasonably foreseeable misuse. As such, the designer has an obligation to ensure that the agribot is provided with reasonably practicable measures to increase visibility (e.g., lights) and/or other measures to avoid collision (e.g., horns/audible warnings). Practicable measures could include (but are not limited to): collision detection systems based on radar scanning and autonomous avoidance; built-in lighting systems, potentially with safety systems that prevent operation in low-visibility environments when lighting systems are non-functional; hi-visibility paintwork; and reflective strips and reflectors. A combination of these elements would likely be necessary to demonstrate that risk is reduced ALARP, subject to assessment as described in the paragraphs above.
A further example is the use of agribots on public highways. From the above discussion and example, it should be clear that no agribot should be used on public highways unless reasonably practicable risk reduction measures are implemented. Inherent in the definition of reasonable practicability is the concept of proportionality: measures taken to reduce the risk should be proportional to the risk. Therefore, in the event that an agribot is required to travel autonomously on or across public roads, collision avoidance safety systems must be designed-in, similar in extent to those required for autonomous road vehicles. However, in the event that an agribot can be supervised across a road crossing in manual or remote mode, the exposure to risk is lower, and it is reasonable for the designed-in safeguards to be less onerous (provided, of course, that suitable controls are designed-in to prevent inadvertent agribot access to public roads).
For the scenario of an agribot crossing a road in a supervised/manual/remote mode, the extent to which risk reduction measures can be designed-in would firstly depend on the extent to which such measures are practicable, i.e., technically feasible. Examples include crashworthiness/impact absorption, to prevent damage to passenger-carrying vehicles, and/or collision avoidance systems that effectively distinguish between vehicle hazards, users, members of the public, and livestock (which may be crossing simultaneously with the agribot). Secondly, the designers would need to be assured that they are not introducing additional hazards that are potentially higher risk than the hazard they are trying to control. For example, designer risk assessment may determine that any collision detection system should be deactivated while in manual or remote mode, to avoid risks to the local user—such as autonomous avoidance resulting in the robot reversing into a manual remote operator walking closely behind it—or risks increased by non-execution of, or delays to, command responses. In this case, the system would not be effective for mitigating the risk of vehicle impact when crossing roads. In such a scenario, complete with supporting risk assessment, it may be that the designer is able reasonably to discharge their responsibility for further reduction of risk, provided that: suitable arrangements are made in design for agribot visibility, as discussed above; the manual/remote mode is generally and demonstrably safe and reliable; and a Safe System of Work can be adopted by the user that follows the highway code, providing suitable warning to other road users that a crossing is taking place, and controlling/excluding traffic where necessary.

2.1.3 Accidents and negligence

Legal action in tort for negligence may also be taken against manufacturers, agricultural contractors, operators and farmers and their agents for injuries, loss or damage resulting from negligence or accidents involving the agribot. However, unlike strict liability or liability without fault, a claim in negligence requires the claimant to prove fault on the part of the manufacturer or other person being sued. The following must be established: the defendant(s) (such as the manufacturer/designer, contractor or farm owner) owed a duty of care to the claimant; there was a breach of that duty (the defendant failed to take care); and the claimant was harmed (that is, personal injury, or damage to or loss of property, resulted).
Liability for negligence may fall on any of the parties depending on the cause of the accident, and who owes or is owed a duty of care in the circumstances of each case. For example, the position of the law is that the manufacturers owe a duty of care to persons who use their products and manufacturers would be deemed to have breached this duty where there is a defect in the product. A cause of action (the basis for suing the manufacturer) arises where injury or loss results from the defect. For liability, it is immaterial that the claimants did not purchase the product themselves. Therefore, suppliers, farmers, contractors and their agents or other users who may be injured by any defect in the agribot would be entitled to sue the manufacturer for negligence. From the perspective of the consumer, an action in negligence provides additional protection as product defect may raise a prima facie case of negligence.19
Apart from defects, liability for negligence may arise in cases of misuse, mainly where manufacturers fail to provide instructions or where the instructions are inadequate or misunderstood. Under the EU Machinery Directive,20 [transposed in the UK as the Supply of Machinery (Safety) Regulations 2008] the manufacturer or his authorized representative is required to provide necessary information such as instructions before putting machinery on the market and/or putting it into service (art 5 Machinery Directive). Regarding the general principles for drafting instructions, the Directive provides that instructions must be drafted in one or more official Community languages (of the EU), and in the case of machinery intended for use by non-professional operators, the wording and layout of the instructions for use must take into account the level of general education and acumen that can reasonably be expected from such operators (Machinery Directive item 1.7 annex 1).
It is, therefore, a question of fact, depending on the circumstances of a case, whether a warning or instruction is sufficient and whether the manufacturer is liable or not. For example, instructions and warnings full of probabilities and equations provided to intermediaries (such as agricultural contractors) may be sufficient if the contractor is knowledgeable about and has a good understanding of the agribot. Conversely, the same instructions addressed to farmers, who presumably have less technical knowledge, may need to be more basic. Therefore, in a hypothetical scenario where a farmer misunderstands the instructions, assumes the agribot is safer than it actually is, and thereby causes the agribot to malfunction and kill a walker, a brochure full of probabilities may be deemed too complicated, and the manufacturer may be held liable for the accident caused by the farmer’s misuse. The key principle is, therefore, that instructions must be pitched at a level at which both technical and non-technical users of the agribot can understand them.
Other provisions of the Machinery Directive particularly relevant to the agribot include the requirement that the contents of the instructions must cover both the intended use of the machinery and any reasonably foreseeable misuse. Also, where applicable, the instruction manual must contain warnings concerning ways in which the machinery must not be used that experience has shown might occur (item 1.7.4 annex 1 to the Machinery Directive). These provisions suggest that manufacturers would still be deemed to have complied with the law if they fail to give warnings on use and misuse which were not known at the time of manufacture or design but subsequently become known due to self-learning, artificial intelligence (AI) processing, or repurposing of the robot. They also suggest that, apart from the manufacturer, other users of the agribot could be liable if they ignore clear instructions and warnings or continue to use the agribot after discovering that it has malfunctioned due to failure to follow instructions. However, to benefit from the presumption of conformity with the health and safety requirements under the Directive, manufacturers are required to affix CE marking to their product and draw up a declaration of conformity (arts 5, 7).

2.1.4 Accidents caused by agents, employees and contractors

Liability for accidents caused by third parties depends on whether the person who caused the accident is an agent or an independent contractor. Under the law, a principal is vicariously liable for the acts and omissions of his agent when the agent is acting within the scope of his authority. The scope of an agent’s authority is defined by a contract between the agent and the principal. As an example, therefore, liability for acts or omissions of the operator of the agribot will depend on whether he is an agent of the manufacturer or the agricultural contractor, or whether he is an independent contractor. Similarly, if there is a franchise agreement, the franchisor’s liability will depend on whether the franchisee acts in the capacity of an agent. Therefore, while the law does not automatically infer an agency relationship from a franchise, agency can be inferred from the contract and the circumstances of the case.
As also noted above, liability might depend on whether third parties, such as employees, agents or contractors, receive adequate instructions on the use of the product. As an example, under the Provision and Use of Work Equipment Regulations (PUWER) 1998 (UK), businesses which either use or hire out work equipment are required to manage the risks from the equipment. Risk management includes ensuring that all people who use or manage work equipment receive adequate instructions and appropriate training. Therefore, apart from the manufacturer, operators of the agribot, agricultural contractors and farmers may also incur liability for accidents caused by third parties due to misuse.
Furthermore, under the Occupiers’ Liability Acts 1957 and 1984, an occupier, that is, a person in control of land, premises or buildings, can be held liable for injury or harm to another person on the land. Such persons can include workmen, residents, visitors, strangers or even trespassers. One of the conditions for the assumption of liability is that the harm is caused by a person over whom the occupier has, or could exercise, some degree of control. It is, however, important to note that this liability can be excluded by contract.
Finally, damage caused by the escape of things likely to cause mischief is borne by the owner of the land, provided the damage is a reasonably foreseeable consequence of the escape (this is the rule in Rylands v Fletcher).21 In practice, this might mean a farm owner or farm manager could be liable if he (or his agent or anyone under his control) allows the agribot, or things used by the agribot such as herbicides, to ‘escape’ to adjoining lands or farms, and for damage resulting from such escape. This position poses little problem when the agribot is operated in manual mode, as the operator is deemed to be in control. However, when operating autonomously, the risk of ‘escape’ may be heightened, and farmers or other users of the agribot may have to adopt additional measures to avoid liability. These may include closing escape routes and putting warning signs at different ends of a road when the agribot is in operation. Although this is not a legal requirement, in the UK, farmers routinely close local roads to move herds of animals by placing signs and/or people at both ends before releasing the animals. The Health and Safety Executive (HSE) has also issued advice on public access and livestock which would be relevant to the operations of the agribot.22 It is, however, important to note that the Animals Act 1971 (UK) imposes strict liability on keepers of animals which are of a dangerous species.23
The outstanding challenge from the above liability allocation regimes relates to how to resolve the attribution problem. For example, despite the clear provisions of the law, it might be difficult to ascertain whether damage, injury or loss was caused by a defect in the product or by misuse such as failure to follow instructions. It is conceivable, for instance, that contractors or farmers would tend to attribute loss or damage to product defect rather than to their misuse of the agribot. It is also conceivable, considering the complex and technical nature of the agribot and the fact that the law imposes liability on the manufacturer for insufficient and unclear instructions, that courts might be more inclined to hold manufacturers liable in negligence rather than hold users liable for misuse. One solution to this possible dilemma is to design the robot with a detailed data logging system. This would create a form of ‘liability by design’ which enables the agribot to keep detailed logs of events and incidents, including possibly replaying an accident to establish whether it was caused by a sensor failure or a user command. A data logging system may, therefore, assist in identifying where liability falls where there is a dispute as to whether accidents are due to manufacturer defect or user misuse.
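As a minimal sketch of what such ‘liability by design’ logging might look like (an assumed design, not a standard or the authors’ implementation), the robot could append timestamped, attributed records of sensor readings, autonomous decisions and operator commands, so that an incident can later be replayed and attributed:

```python
# A minimal sketch (assumed design, not a standard) of 'liability by
# design' logging: an append-only, timestamped record of sensor
# states, autonomous decisions and operator commands, so an incident
# can later be attributed to sensor failure, autonomous decision or
# user command.
import json
import time

class LiabilityLog:
    def __init__(self, path):
        self.path = path

    def record(self, source, event, detail):
        """source: 'sensor', 'planner' or 'operator'."""
        entry = {
            "t": time.time(),
            "source": source,
            "event": event,
            "detail": detail,
        }
        # Append-only JSON lines; in a real system this should also be
        # tamper-evident (e.g., hash-chained) to be credible in a dispute.
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

log = LiabilityLog("agribot_log.jsonl")
log.record("sensor", "obstacle_detected", {"range_m": 1.2, "confidence": 0.93})
log.record("planner", "emergency_stop", {"mode": "autonomous"})
log.record("operator", "manual_override", {"command": "resume"})
```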

2.2 Administrative actions

2.2.1 Regulation of environmental damage and use of chemicals in general

The application of chemicals which may affect the environment is tightly controlled. This includes the robotic application of fertiliser and pesticide chemicals, and their potential effects on the human food chain, water supply, neighbouring farms, farm staff, and the wider public environment.
Liability for damage to the environment by activities of businesses is regulated by Directive 2004/35/EC of the EU Council on Environmental Liability regarding Prevention and Remedying of Environmental Damage [transposed in the UK as Environmental Damage (Prevention and Remediation) (England) Regulations 2015]. The Directive adopts an administrative approach. It does not, therefore, apply to cases of personal injury, damage to property or economic loss and does not affect any right regarding these types of damages (recital 14).
The relevant provisions of the law impose strict liability (based on a ‘polluter pays’ principle) for pollution of the environment caused by certain activities, including the manufacture, use, storage, processing, filling, release into the environment and onsite transport of plant protection products. Plant protection products include products for destroying undesired plants, and damage includes damage to water and soil. Although liability is strict, a causal link between the activity and the damage must be proved, and the law allows cost allocation in cases of multiple party causation, especially concerning the apportionment of liability between the producer and the user of a product (art 9). Where environmental damage has occurred, the operator is required to inform the competent authority and take practical remedial actions to remove or otherwise manage the contaminant (art 6). The operator bears the costs for the preventive and remedial actions taken under the Directive (art 8).
Effects on water supplies are controlled by Directive 2000/60/EC (Water Framework); Directive 2008/105/EC; and Council Directive 98/83/EC (Drinking Water Directive), which set limits on levels of chemicals which may enter public water systems. Restrictions on the classification, labelling, and packaging of substances and mixtures are defined in European Regulation (EC) No 1272/2008.

2.2.2 Use of fertilisers

Apart from general chemical laws, fertilisers are covered by additional laws.
The EU Nitrates Directive 91/676/EEC aims to protect water quality across Europe by preventing nitrates from agricultural sources polluting ground and surface waters and by promoting the use of good farming practices. The 2003 European Fertilisers Directive covers the sale, manufacture and labelling of fertilisers. The Directive applies to the sale of fertilisers, which may include fertilisers sold as part of a robotic package.
Ammonium nitrate fertiliser may be used as an ingredient of explosives, so falls under anti-terrorism laws which control its storage security. In the UK these include Control of Major Accident Hazards Regulations (COMAH); Dangerous Substances (Notification and Marking of Sites) Regulations 1990; Ammonium Nitrate Materials (High Nitrogen Content) Regulations; and Planning (Hazardous Substances) Regulations.
The EU Single Payment Scheme subsidises farms but in return imposes environmental protection requirements on them, which may include limits on fertiliser levels. Further, Nitrate Vulnerable Zones (NVZs) are areas designated as being at risk from agricultural nitrate pollution (including, for example, about 60% of the land in England). Within them, additional legal limits on the amounts of fertiliser which can be used, and the times of year at which they can be applied, are imposed by the Nitrates Directive and Drinking Water Directive.

2.2.3 Use of herbicides, pesticides and biocides

In addition to general chemical laws, pesticides—and more generally, “biocides”—are covered by additional laws. A “herbicide” is a chemical which kills one or more plant types; a “pesticide” is a chemical which kills “pests” including weeds, fungi and insects; a “biocide” is a chemical which harms any animals, humans or the environment.
The 2009/128/EC Directive on Sustainable Use of Pesticides [implemented in the UK as “PA Certificates of Competence” via the transposed Plant Protection Products (Sustainable Use) Regulations 2012] aims to protect surface water and drinking water from pesticide contamination. Also, pesticide use is to be reduced in areas used by the general public and in nature conservation areas. It aims to reduce the risks and impacts of pesticide use on human health and the environment and to promote the use of integrated pest management and alternative approaches, such as non-chemical ones. The directive requires operator training for different pesticide types and applicator types.24 It also bans aerial spraying in all forms, including by autonomous drones and manually piloted helicopters. In practice, this aerial ban has proved problematic for the control of several pest types, including needle blight in trees and bracken in moorland. However, the directive also allows member states to grant exemptions, on application, for specific nationally approved uses such as these, usually administered by their environmental agencies (for example, the UK currently has around three such approved plans, used under permits issued to tens or hundreds of individuals).
The EU Biocides Regulation 528/2012 regulates all substances harmful to humans, animals and/or the environment, i.e., biocides, requiring authorisation for their use. Bulk authorisation is provided to users of "on-label" products, where the substance manufacturer has handled safety testing and defines the appropriate dose size and use-case for application on a product "label",25 and the user operates within these parameters. When using "on the label", liability for damages caused by the product is transferred from the user to the manufacturer. If a user chooses to use the product at a different dose or for a different use-case, this is "off-label" usage, and the user retains the liability. To comply with the Biocides Regulation, the user must then obtain their own off-label authorisation, e.g., via an application for a permit from their national Environment Agency.
The certification system for human operators appears to pose little problem to robotic applications where the agribot is legally considered as a tool of a named human operator and uses an existing applicator type, such as a knapsack or bulk sprayer system. In this case, that human operator must hold the required certifications for the herbicide and applicator type. Definitions of applicator type may become problematic for robots using novel applicators, such as prototype per-plant precision devices. If the robot operates otherwise than as a tool, for example under a framework which recognises the legal personhood of autonomous robots (see Legal personhood section below), then the definition of certification again becomes problematic.
As with operator certification, definitions of on-label application use-cases are likely to function for agribots spraying using similar technology to manual knapsack or tractor-mounted devices, under legal operation as human tools; but the use of novel applicator methods or non-tool operation is likely to be problematic, or at least to require custom national environment agency licensing.
For manufacturers and sellers of herbicides, additional rules are provided in the Machinery Directive amendment 2009/127/EC on herbicide application [transposed in the UK as the EC Fertilisers (England and Wales) Regulations 2006; see also the UK Fertilisers Regulations 1990/1991]. As with fertilisers, these may apply to agribot operators selling herbicide as part of a robot product or service package.

2.2.4 Radio communications—scarce spectra

An often-overlooked aspect of agricultural robotics systems is the need for long-range communications links from the robot in the field to a base station, which in some cases form systems as complicated as, or more complicated than, the robots themselves. Such communications links, as illustrated in Fig. 4, are required if the robot is operating as a tool rather than as a legal person, so that the named human operator can monitor its condition sufficiently to intervene in emergencies and to take responsibility for its actions. In practice, this will often require a video link to monitor the robot's cameras in real time. Video is a bandwidth-hungry medium which often requires specialist communications links and equipment. Radio bandwidth is a limited and valuable26 resource which must be shared with other local users, so is tightly regulated in most countries. Hence the legal need for the human operator to take responsibility for the robot's actions must be balanced against the demand this places on legally restricted spectrum resources. In the EU, the restriction is performed by the Radio Equipment Directive 2014/53/EU, and currently in the UK by the Communications Act 2003.
Most current radio communications operate on single, or small groups of, identifiable frequencies. Radio communications have been used from 3 Hz to 300 GHz, with bands around higher frequencies carrying more bandwidth, but lower frequencies propagating over long distances more efficiently. Two users transmitting on the same frequency in the same area will interfere with each other's signals. Countries' laws initially assign the rights to transmit on all frequencies to a government body called the regulator (in the UK this is the Office of Communications, OFCOM; in the USA, the Federal Communications Commission, FCC). The regulator is then responsible for managing allocations of these frequencies in local areas to users.
International standards exist, via the International Telecommunications Union (ITU), designating certain frequency bands for particular types of use, including for national broadcasting, cell phone data, emergency services and military communications, and amateur (‘ham’) radio. The same standards assign further bands for licensed commercial use, and others for unlicensed public use within the defined power and use-type limitations. This allows products to operate in the same bands between countries. The regulator typically implements these standards via its licensing to users.
Public channels. Domestic 'WiFi' (802.11) radio is often used for research agricultural robot communications, requiring no special permission from the regulator. In the UK, OFCOM allows transmission of data on several frequencies around 2.4 GHz for this purpose but limits transmitter power to 100 mW, which can typically stream video at ranges up to around 100 m. Many domestic (e.g., up to 250 mW) and other devices (e.g., many watts) are technically able to transmit at higher powers (achieving longer ranges), but this would violate the OFCOM regulation.27 Specialist antennas can concentrate the transmitted beam in specified directions to enable long-range point-to-point communications. However, OFCOM power regulations apply to the effective power receivable at any location, rather than to the source transmission power alone. This means that no legal range extension is obtainable through their use: a 100 mW source concentrated along a beam to a destination may present the same, illegal, effective power as a 1 W omni-directional source. Across the EU, a public 433 MHz band may also be used for low-power, short-range communications, suitable for sending control commands and occasional sensor data, but not live video. Across the world, some bands are allocated for public amateur ("ham") radio, with the regulator transferring use to hobbyist organizations, who then allow their certified members to use them under restrictions, such as a prohibition on purely private use (such as encrypted or closed-protocol data).
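To make the arithmetic concrete, effective radiated power combines transmitter power with antenna gain on a logarithmic (decibel) scale. The following short Python sketch uses the 100 mW figure quoted above together with a hypothetical 10 dBi directional antenna (an assumed value for illustration only) to show how a transmitter that is legal on its own can produce an effective power ten times over the limit once a directional antenna is added:

import math

def mw_to_dbm(p_mw: float) -> float:
    # Convert power in milliwatts to dBm (decibels relative to 1 mW).
    return 10 * math.log10(p_mw)

def dbm_to_mw(p_dbm: float) -> float:
    # Inverse conversion, for reporting.
    return 10 ** (p_dbm / 10)

TX_POWER_DBM = mw_to_dbm(100.0)    # 100 mW transmitter = 20.0 dBm
ANTENNA_GAIN_DBI = 10.0            # hypothetical directional antenna
LIMIT_DBM = mw_to_dbm(100.0)       # effective power limit, also 20.0 dBm

eirp_dbm = TX_POWER_DBM + ANTENNA_GAIN_DBI       # gains add in decibels
print(f"Effective power: {dbm_to_mw(eirp_dbm):.0f} mW")  # 1000 mW = 1 W
print(f"Over the 100 mW limit: {eirp_dbm > LIMIT_DBM}")  # True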
Where public radio channels are insufficient, the regulator may lease other dedicated bands in local areas for exclusive use by specified users, usually for a significant fee. Wider bands cost more but allow higher data rates (In the UK, OFCOM’s main schemes are called “technically assigned” and “area defined” licences).

2.2.5 Privacy and data protection

Perhaps one of the most significant aspects of the liability regime for the agribot is the privacy and data protection implications of the information collected during its operations. Agribot data collection tasks may include monitoring of soil and plant conditions, as well as building up maps of farms for general navigation. This data can be commercially valuable not just to the landowner but to others who might have financial interests in the land (such as deciding whether to buy it) or in collating data from millions of farms to perform large-scale analysis. Many small farms are owned and operated as sole traderships rather than as limited company structures, linking their data directly to a named individual (the sole trader) and thus making it "personal" data.
According to the European Parliament Committee on Legal Affairs, for example, AI and robotics can potentially generate large amounts of personal data that can be used as currency to purchase services.28 The relevant law is the EU General Data Protection Regulation (GDPR) 2016, which repealed Directive 95/46/EC (the Data Protection Directive). The Regulation entered into force on 24 May 2016 and will apply from 25 May 2018.29 The Regulation applies to the processing of "personal data", defined as information relating to an identified or identifiable natural person (the data subject). While many of the provisions centre on bridging perceived gaps in the law given developments in information technology, the provisions relating to principles of data processing, privacy by design and automated decision-making are particularly relevant to agribots.
(a) Principles of data protection

The principles relating to the processing of personal data are as follows:
1. Lawfulness of processing—the Regulation provides that processing of personal data must be fair, lawful and transparent. Consent of the data subject is one of the conditions for lawful processing (art 6). Moreover, where processing is based on consent, 'the controller'30 shall be able to demonstrate that the data subject has consented to the processing of his or her personal data, and the data subject shall have the right to withdraw his or her consent at any time (art 7).
It is important to note that although consent is not the only mechanism for justifying the processing of personal data, it remains a core principle of data processing. Therefore, where consent is the basis of processing, it must be clear and unambiguous, as the consent of the data subject cannot be inferred from conduct or inaction. In the case of the agribot, there may be instances where it is unclear whether the data is personal, or who owns the data for the purposes of consent. For example, Company C operates the robot on farmer X's land and collects detailed soil nutrient information during the run. C then operates on neighbouring (and competing) farmer Y's land and makes use of X's data to optimise the run on Y's land, with the result that Y ends up with a better-informed run than X. Farmer Y might also be interested in buying land from farmer X and could obtain private information about its condition and value from the data. The collection is without X's consent. The question may arise whether Company C owes any obligation to X concerning the collection and use of the detailed soil information. On the one hand, because detailed soil information relates to the soil condition and not to the individual, it may not constitute personal data. On the other hand, because the collection may invariably involve the collection of geolocation data (which is deemed personal data), Company C may require consent from X. It therefore seems reasonable to obtain consent to any collection of personal data where it would be difficult to isolate personal data from the information collected, as sketched below.
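In software terms, a cautious operator might therefore tag every collected record that mixes agronomic measurements with identifying or geolocating fields, and route those records through a consent workflow before reuse. The following is a minimal Python sketch; the field names and schema are illustrative assumptions, not drawn from any standard:

# Fields assumed, for illustration, to identify or locate a person.
PERSONAL_FIELDS = {"farmer_name", "farm_id", "gps", "field_polygon"}

def needs_consent(record: dict) -> bool:
    # Treat the whole record as personal data if any identifying or
    # geolocating field is present, since the agronomic measurements
    # cannot then be cleanly isolated from the personal ones.
    return any(key in PERSONAL_FIELDS for key in record)

run_record = {"gps": [53.23, -0.54], "soil_nitrate_mg_per_kg": 42.0}
assert needs_consent(run_record)  # geolocation present: seek consent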
2. Purpose limitation—collection of personal data must be for specified, explicit and legitimate purposes, and further processing in a manner that is incompatible with those purposes is prohibited.
3. Data minimisation—personal data must be adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed.
4. Accuracy—personal data must be accurate and, where necessary, kept up to date, and every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay.
5. Storage limitation—personal data must be kept in a form which permits identification of the data subject for no longer than is necessary for the purposes for which the personal data are processed. However, data may be stored for longer periods if it will be processed solely for archiving in the public interest, scientific or historical research purposes or statistical purposes.
6. Integrity and confidentiality—using appropriate technical or organizational measures, personal data must be processed in a manner that ensures appropriate security, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage.31
7. Restrictions on transfer—transfer of personal data outside the EU to third countries or international organizations may take place without specific authorisation where the (EU) Commission has decided that the third country or organization ensures an adequate level of protection for personal data.32
8. Accountability—the controller shall be responsible for, and be able to demonstrate compliance with, the above principles.
(b) Privacy by design and restrictions on automated decision-making
Article 25 of the Regulation mandates the implementation of privacy by design and by default (PbD). The specific provision of the law is that data controllers shall implement appropriate technical and organizational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of processing are processed. The obligation to implement privacy by default applies to the amount of data collected, the extent of their processing, the period of their storage and their accessibility. In particular, the measures shall ensure that, by default, personal data are not made accessible, without the individual's intervention, to an indefinite number of natural persons (art 25).
The Regulation recommends pseudonymisation as an appropriate technical and organizational measure which meets the requirements of the Regulation and protects the rights of data subjects. However, in implementing the appropriate technical and organizational measures mandated by the law, the controller shall take account of the state of the art, the cost of implementation, and the nature, scope, context and purpose of processing, as well as the risks, likelihood and severity posed by personal data processing to the rights of natural persons. Under article 22, the data subject has the right not to be subject to a decision based solely on automated processing, including profiling. The data subject also has the right to be informed of the existence of automated decision-making, including profiling, and a right to an explanation of the logic underlying such decisions, as well as the significance and consequences of the processing (the so-called right to an explanation).33
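As a concrete flavour of what pseudonymisation can mean in practice for an agribot's data store, the following minimal Python sketch replaces a sole trader's name with a keyed hash; the schema, the inline key handling and the choice of HMAC-SHA256 are illustrative assumptions, not anything the Regulation prescribes:

import hashlib
import hmac

# The pseudonymisation key is the "additional information" that should
# be held separately from the data store, so that re-identification
# requires access to both.
PSEUDONYM_KEY = b"replace-with-a-securely-generated-and-stored-key"

def pseudonymise(identifier: str) -> str:
    # Replace a direct identifier with a keyed hash that is stable
    # within one deployment but meaningless without the key.
    return hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {
    "farmer": pseudonymise("Farmer X"),  # no longer directly identifying
    "soil_nitrate_mg_per_kg": 42.0,
}

Note that geolocation fields may still make such a record identifiable, so pseudonymising the name alone may not be sufficient.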
Although the provisions above may pose some difficulties for the development and functioning of agribots (see notes on scope and application of data protection law below), they are based on the (arguably correct) presumption that the problem with automated decision-making is not so much the inability of humans to predict the behaviour of autonomous robots; the problem is rather the need for the decision-making process to be transparent, for the sake of accountability, reliability and trust. As a result, the algorithms that underpin agribot systems need to be as transparent and as interpretable as possible, and the agribots must be able to explain their behaviour in terms that humans can understand, from how they interpreted their inputs to why they recommended a particular output (so-called explanation-based collateral systems).34
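One modest engineering step towards such explainability is for the agribot to log every actionable decision together with the evidence and confidence behind it, so that a human-readable account can be reconstructed afterwards. A minimal sketch follows, with hypothetical field names and file format:

import json
import time

def log_decision(action: str, evidence: dict, confidence: float,
                 logfile: str = "decision_log.jsonl") -> None:
    # Append one decision, with its supporting evidence, to an
    # append-only log from which an explanation can later be built.
    entry = {
        "timestamp": time.time(),
        "action": action,
        "evidence": evidence,
        "confidence": confidence,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical per-plant decision during a weed-spraying run.
log_decision(action="spray",
             evidence={"weed_classifier_score": 0.93, "species": "Rumex",
                       "gps": [53.23, -0.54]},
             confidence=0.93)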
(c) Data breach reporting and administrative fines and penalties
The Regulation also makes provision for mandatory data breach notification and empowers supervisory authorities (national public authorities, such as the Information Commissioner's Office in the UK, that will monitor and enforce the Regulation) to impose administrative fines which could be potentially large (a maximum of 20 million Euros or 4% of global annual turnover of the preceding financial year, whichever is higher) for infringements of certain provisions of the law.35 However, while supervisory bodies have the power to levy fines and other sanctions, this does not preclude individuals from bringing civil actions. In Vidal-Hall v Google Inc.,36 for example, the court ruled that misuse of personal information is an actionable tort. To comply, agribot operators may thus need to invest in specialised secure data storage facilities and consider the use of cryptography to protect data stored on, and communicated by, agribots in the field.
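As one illustration of such protection, the following minimal Python sketch encrypts a collected record before it is stored on, or transmitted from, the robot, using the Fernet recipe from the widely used third-party cryptography library; the record contents and the inline key generation are assumptions for illustration, and a real deployment would keep the key off the robot:

from cryptography.fernet import Fernet

# Generate once and store securely off the robot; shown inline only
# for illustration.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt a record before writing it to on-robot storage or sending
# it over the radio link described in Sect. 2.2.4.
plaintext = b'{"field": "NVZ-07", "soil_nitrate_mg_per_kg": 42.0}'
token = cipher.encrypt(plaintext)

# Only a party holding the key can recover the data.
assert cipher.decrypt(token) == plaintext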

2.3 Liability under contract

Liability can arise under a contract between different parties concerning the use and operation of agricultural robots. Contractual agreements are particularly important because contracts define the rights, obligations and liabilities of parties, and the courts will enforce the terms of a contract voluntarily entered into. Therefore, where permitted by law, parties may by contract exclude or limit their liabilities. For example, parties may by contract agree that it is the duty of the agricultural contractor to provide training to the farmer on the use of the agribots. They may further agree that such training excludes or limits the manufacturer's liability for accidents caused by misuse. The parties may also contract to contribute towards damages for loss of reputation which is likely to affect the manufacturer's brand. It is also important to note that, under rules of privity of contract, only parties to the contract can acquire rights or liabilities under it. Therefore, in a contract between the farmer and an agricultural contractor for the supply of agribots to be used for killing weeds, the farmer can only sue the agricultural contractor if the agribot was incapable of killing weeds. He cannot sue the manufacturer unless the manufacturer is also a party to the contract.37
Finally, the scope of remedies under a contract is wide, and a party can seek various forms of relief for damage caused by a breach by the other party or parties. An innocent party may therefore ask to be discharged from further obligations to the party in breach, or claim damages for loss suffered. In cases of disputes or claims for breach of contract, courts will usually give effect to the terms of the written agreement between the parties without extraneous evidence. The contract therefore serves as evidence of the intention of the parties and must be carefully drafted, particularly when it involves multiple parties.

2.4 Relevant standards

There are thousands of voluntary technical standards which have been established by many organizations for various uses, most of which are beyond the scope of this article. The following is thus only a small sample of relevant standards, as examples of a much larger collection:
BS EN 61508 "Functional safety of electrical/electronic/programmable electronic safety-related systems" is a commonly used engineering safety standard which defines "Safety Integrity Levels" (SIL) and technical safety processes such as the use of hazard identification and mitigation, failsafes, and emergency stop systems (see the sketch after this list).
BS EN 62061 implements principles of BS EN 61508 (above) and is harmonised to parts of the EU Machinery Directive (i.e., a "called up" standard with legal status).
ISO 10218 "Robots and robotic devices—safety requirements for industrial robots" provides best practices for industrial robot safety. ISO 15066 "Robots and robotic devices—collaborative robots" provides best practices for systems involving robots and humans working together.
ISO 18497 "Safety of autonomous tractors" is under development at the time of writing and aims to provide best practices for the safety of large autonomous tractors.
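To give a concrete flavour of the failsafe patterns these standards describe, the following minimal Python sketch implements a heartbeat watchdog that drops the platform into a safe state when the supervising operator's communications link (Sect. 2.2.4) goes quiet. All names and the 0.5 s budget are illustrative assumptions, not values taken from the standards:

import time

HEARTBEAT_TIMEOUT_S = 0.5  # illustrative link-loss budget
_last_heartbeat = time.monotonic()

def on_heartbeat_received() -> None:
    # Called by the radio link handler each time the base station
    # checks in.
    global _last_heartbeat
    _last_heartbeat = time.monotonic()

def stop_motion() -> None:
    # Placeholder: command zero velocity to wheels and implements,
    # and engage brakes to hold the platform in a safe state.
    pass

def safety_loop() -> None:
    # Failsafe pattern: if the operator's link is lost, actively
    # command a safe state rather than continuing unmonitored.
    while True:
        if time.monotonic() - _last_heartbeat > HEARTBEAT_TIMEOUT_S:
            stop_motion()
        time.sleep(0.05)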

3 Law and mitigation of damages

A number of the laws examined above appear to be strictly worded concerning different forms of liability, whether these arise under contract, tort or statute. However, despite this strictness, the law also provides defences and other legal means through which a stakeholder in an agribot supply chain may avoid liability or mitigate its damages:
(a) Statutory defences—these are defences allowed under the law. The relevant defences have been discussed under product defect above.
(b) Defences to claims for tortious liability—although these have also been alluded to earlier, it is useful to briefly highlight how the manufacturer could in practice defend an action in negligence. As noted above, manufacturers may be liable for breaching a duty of care owed to users of their products, and the law places the burden of proving the negligent act on the injured party or claimant. In effect, the claimant must prove that the manufacturer did not take reasonable care to avoid the injury or damage occurring. Conversely, it is a defence open to the manufacturer that he could not have reasonably foreseen the harm to the injured party and could, therefore, not have prevented it. This is based on the doctrine of remoteness of damage, where the manufacturer contends that there is no causal link between the manufacturer's negligence and the injury to the claimant. The manufacturer can also plead that the claimant was contributorily negligent. Contributory negligence is a partial defence which enables the negligent party (e.g., the manufacturer) to claim mitigation of damages by proving that the claimant contributed to his own loss or injury. For example, failure to read the instruction manual or to take specific recommended steps in circumstances where the agribot malfunctions may lay a farmer or contractor claimant open to a claim of contributory negligence.
(c) Exclusion clauses—where the law permits it, parties may by contract exclude liability for certain acts or omissions. For example, the agribot manufacturer may exclude liability for illegal use of a robot or use for purposes other than that for which the robot was manufactured. Liability may also be excluded for improper use of, or interference with, specifications of agribot software or algorithms (see further notes on dual-use items below).
(d) Regulation by design—the law now actively promotes regulation by design. Under the GDPR, data controllers are required to implement appropriate technical and organizational measures and procedures in such a way that data processing will meet the requirements of the law. It is significant that the law allows data controllers to take account of the state of the art in technological development and the cost of implementation in conforming to this requirement (see notes above).
(e) Insurance—there is no specific insurance framework for robotics. However, insurance can be mandated by law or by contract between the parties. Specialist insurance may also be required, and in this regard it has been proposed that insurers develop new products and that the law mandate a compulsory insurance scheme supplemented by a fund (see further notes below).
(f) Judicial approaches—the legal regime for compensation and judicial approaches to the award of damages could also have a mitigating effect on liability. Courts are careful not to expand the scope of the existing liability regime. For example, the provisions of the Compensation Act and the approach of the courts suggest that courts may be circumspect in allowing claims for damages caused by agribots, considering their essential economic function of killing weeds and making more land available for farming. Under the Compensation Act 2006 (UK), courts are required to take into account the fact that allowing specific claims may have adverse consequences for innovation and investment in desirable activities.38 In other words, the law considers that if it is too easy to make successful claims concerning specific activities, the courts may be overwhelmed with cases for compensation. This risk, often referred to as 'opening the floodgates (of litigation)', would inhibit investment in activities which are useful to society or which are of economic, social or technological significance.

4 Grey areas relating to current legal concepts

The grey areas refer to aspects of liability in agricultural robotics where the law is unclear or uncertain. The most relevant issues considered here include the legal effect of the autonomy of the agribot on the liability of the parties, liability for dual-use of the agribot, and the likely effects of the EU data protection law. This section highlights the main arguments in this area and, where relevant, the proposed solutions to the challenges.

4.1 The ethics of robot autonomy

Under EU law, (non-autonomous) robots can be classified as products and humans are ultimately responsible for defects, errors, or misuse of the robot (See notes on liability for product defect above). For autonomous robots, the applicable laws and principles are not so clear. Directive 85/374/EEC has no direct applicability to liability for damages caused by autonomous robots, and there is currently no definition of autonomous robots under EU laws.
Nevertheless, one proposal defines robot autonomy as the ability of the robot to take decisions and implement them in the outside world independently of external control or influence.39 The key features of robot autonomy include the development of autonomous and cognitive features such as the ability to learn from experience and take independent decisions, increasing capacity for adaptability and the exhibition of emergent behaviours. In effect, if an autonomous robot encounters difficulties that its design did not anticipate, its actions will not always be a result of programming as its learning abilities can cause the robot to develop sophisticated interaction with the environment which leads to unpredictability in its behaviour.40
Presumably, therefore, the more autonomous robots are, the less likely they are to be considered as mere tools in the hands of other actors such as the manufacturer, owner and users.41 However, it is not always clear whether, and the extent to which, robots should be autonomous. For example, the UK House of Commons Science and Technology Committee made the point that it is important that AI technology operates as intended and that unwanted or unpredictable behaviours are not produced, either by accident or maliciously.42 Also, in a report by the EU, it was suggested that it is inconceivable that, once another actor no longer controls a robot, the robot becomes the actor itself. The report argues further that a robot, being a mere machine and a carcass devoid of consciousness, feelings, thoughts or its own will, cannot become an autonomous legal actor.43 This observation arguably undermines the very notion of robot autonomy. For example, since autonomy is taken to involve self-learning and the processing of artificial intelligence, a design that limits that autonomy also limits the use of the robot and could potentially stifle further innovation in robotics.
As noted in another report by the EU, however, it is expected that ultimately AI could surpass human intellectual capacity in a manner which, if not prepared for, could pose a challenge to humanity's capacity to control its own creation.44 This position suggests that robot autonomy is a given, and that technical (but legitimate) questions can be raised concerning the legal consequences of such autonomy, particularly the legal responsibility arising from a robot's harmful action. In a hypothetical scenario involving the agribot, the following could occur: the obstacle avoidance system on the agribot works, but the robot 'decides' it can overcome an obstacle. An accident occurs, and a walker is injured. All parties deny liability. The manufacturer argues that the accident occurred independently as the robot was acting autonomously; the insurers refuse to indemnify the manufacturer on the basis that the operation which caused the accident is not a 'defect'; and the injured party claims that the accident was caused by a manufacturing defect regardless of robot autonomy. The question this raises, therefore, is whether and how a machine can be held liable for its actions or omissions.45
Although it is not yet clear what values machines should use, and how to embed these values in them, it has been suggested that they should function according to values that are aligned with those of humans and consider following, as much as possible, ethical theories defined for humans.46 Therefore, guiding legal and ethical frameworks for the design, production and use of robots and AI must be based on values such as autonomy, individual responsibility, informed consent, privacy and social responsibility.47 The proposals examined below are relevant in this respect.

4.1.1 Proportional liability

To promote certainty, responsibility and accountability, it has been suggested that a set of rules be developed which reflects the proportionality of liability depending on the instructions given to the robot and its capacity for self-learning, as well as its level of autonomy.48 Assuming that damage, injury or loss could be established, the following rules of liability would apply:
(a) Manufacturers and producers should be strictly liable for damage that can be traced back to the robot's design, such as an error in the algorithm causing injurious behaviour.49 (See notes above on how technical designs can aid the law in this area, mainly because of attribution problems.)
(b) For robots sold with open source software, liability should in principle be on the person who programmed the application which led to the robot causing damage. This is increasingly being incorporated into contracts.
(c) When damage is caused while the robot is still learning, its user or owner should be held liable. However, liability should be further governed by whether the user is a professional user and whether or not they are the victim. If the damage is caused to a victim who is also a professional, this would be considered as an accident at work covered by existing laws governing such accidents. If the damage is linked to a robot instruction given by a professional user which causes damage to a third party, then the situation calls for the development and application of new rules.
(d) In cases where the robot is hired out, the hirer should remain liable. The rationale is that it is difficult, given that each hirer may teach the robot different things, to determine which hirer is responsible for the acts of the robot. For agribots, manufacturers and agricultural contractors are likely to fall into this category and would thus be deemed to be liable in cases where the agribot is hired out.
(e) Finally, future legislative instruments should provide for the application of strict liability for damage caused by smart robots. In effect, only proof of a causal link between the harmful behaviour of the robot and the damage suffered by the injured party would be required. There should be no restrictions on the type and extent of damages which may be recovered, and no limit on the forms of compensation which may be offered to the aggrieved party, on the sole ground that damage was caused by a non-human agent.50
4.1.2 Legal personhood

It is possible in theory to confer legal personality on robots. This would allow the autonomous robot to have the status of an 'electronic person' for the purposes of liability and rights.51 "Legal personhood" is a purely legal concept and is unrelated to the concept of "personhood" in philosophy, which has been defined by various authors via difficult philosophical properties such as "free will" and "consciousness". Legal personhood in robotics would be intended purely as a mechanism to assign legal liability, in the same way that corporations are sometimes considered to be legal persons. In particular, like corporate personhood, it provides a mechanism to replace liability assigned to individual human operators with liability assigned to some group of humans such as the robot design team. This is important and useful, for example, if individual human operators do not wish to take on potential personal liability for deaths caused by the robots, which could result in prison and other sentences on them as individuals. Spreading the liability across the design team via legal personhood would avoid this situation whilst ensuring that the responsibility still exists in a suitable form.
However, objections have been raised to this proposal on ethical and conceptual coherence grounds. It was argued, for instance, that legal personhood status for the robot would unavoidably trigger unwanted and nonsensical legal consequences, including the need to determine what robots' rights would be and how to respect those rights. In theory, a robot legal person (or more likely, a belligerent human claiming to act on its behalf, for example to sabotage a robotics company's product or service, as human campaign groups currently do against animal testing companies by acting on behalf of the animals) might then be entitled to demand rights for the robot which were originally intended only for human legal persons, such as employment leave, minimum wage, and refusal to work in dangerous environments.
Although conferring rights on robots could be potentially nonsensical, the problem only arises if the arguments are considered from a purely economic perspective. From a legal perspective, an artificial legal entity does not have to be conferred with the same rights as humans. In fact, taking the example of corporations, the law may not confer any direct rights or duties on the entity but rather on its directing minds or promoters. Therefore, for robots, electronic personhood would create the advantage of legal convenience, such as making the robot a distinct legal entity which can sue and be sued. It would also vest the robot with the genuinely useful capability to apply for and obtain a work or operating licence (e.g., an agribot (or rather its designers on its behalf) could apply for certification to use pesticides, removing the need for operators to hold the certificate and transferring the liability onto the engineering design team). Electronic personhood could also help the robot (or rather, the human design team which it represents) fulfil obligations to self-insure and, like corporations, pay compensation to those injured by its acts or omissions, again reducing the risk to individual operators.
It is important to note that the robot will have to be registered in the same way as corporations and may have to be vested or equipped with assets to enable it to carry out its duties and obligations. The promoters of the robot will make the choices about which party(ies) will fund the assets. More importantly, however, despite the electronic personhood, the court would be able to lift the veil of incorporation in appropriate cases to render the promoters liable for crimes and civil wrongs committed by the robot.

4.1.3 Registration and insurance

This is a recommendation for a system of registration for advanced robots based on criteria established for the classification of robots.52 A Union-wide agency would manage the registration, which would serve the purpose of traceability for robotics and artificial intelligence.53 Similarly, the proposal for insurance advocates the establishment of an insurance scheme which obliges the producer to take out insurance for the autonomous robots it produces. It is proposed that a fund supplement the obligatory insurance scheme to ensure that damages can be compensated in cases where no insurance cover exists.54

4.2 Dual-use products

EU law regulates dual-use products.55 The Regulation sets up a Community regime for the control of exports, transfer, brokering and transit of dual-use items, and aims to control trade in dual-use items to counter the proliferation of weapons of mass destruction and other items of potential military use.56 The Regulation therefore requires that dual-use items (including software and technology) be subject to effective control when they are exported from the European Community.57 Dual-use items are defined as '…items, including software and technology, which can be used for both civil and military purposes, and shall include all goods which can be used for both non-explosive uses and assist in any way in the manufacture of nuclear weapons or other nuclear explosive devices'.58 Annex 1 to the Regulation contains a list of dual-use items including nuclear materials (e.g., uranium), telecommunications and information security, sensors and lasers, various software, machine tools, and chemical manufacturing equipment.
The law requires dual-use items to be registered and made subject to authorisation and export control, including a detailed register of exports (art 20) and a review, update and impact assessment of dual-use items.
It is notable that the law can be extended to products with potential dual-use that are not listed in Annex 1 to the Regulation (arts 4, 15). In fact, in its draft rules on robotics, the EU legal committee recommends that the provisions on dual-use regulations should apply to robots.59 Perhaps because the Regulation intends to ensure that dual-use items do not get into the hands of malicious actors, it only imposes liability on manufacturers for non-compliance with the relevant provisions. However, new issues of liability can arise in the use and operation of the agribot. To illustrate: as agricultural robots are designed to operate in harsh outdoor conditions, they may bear functional similarities to, and be repurposable as, military systems such as explosive ordnance disposal (EOD), reconnaissance, and weaponised platforms. It is, therefore, conceivable that in the wrong hands they could be used to commit crimes, including acts of terrorism such as delivering lethal substances or weapons into crowded areas.
It is clear on the one hand that the malicious actor, or any other person(s) who repurposed the agribot to carry out criminal or terrorist acts, would be deemed to have committed a crime for which he would be liable to punishment upon conviction. He could also be liable for damages to the parties thereby injured in a civil action. On the other hand, it is not clear whether the manufacturer bears (or should bear) any liability. As already noted above, the EU Directive on product defect applies only to defective products; that is, products not providing the safety to which a consumer is entitled. It is also notable that one of the factors to be taken into account in determining whether a product is defective is whether the product is being put to reasonable use.60 However, while unreasonable use can give rise to mitigation of damages, it does not entirely absolve the manufacturer of liability, and the problem can become particularly complex if such re-purposing is foreseeable or can be anticipated by the manufacturer. Under the EU Machinery Directive, for instance, the contents of the instructions must cover not only the intended use of the machinery but also take into account any reasonably foreseeable misuse.61
The question, therefore, is what uses should be deemed reasonably foreseeable? For example, is it reasonably foreseeable that an agribot could be used for criminal or terrorist purposes? If the answer is yes, then how is the position different from using a kitchen knife to commit murder? The knife is sold as a kitchen utensil, not as a weapon, so although the manufacturer can reasonably foresee that the knife could be used for heinous crimes, he is not held responsible for the murderer's action. Arguably, the position would be different if the robot was developed purely for the purpose of committing crimes—such as a modified agricultural robot with a new implement attachment designed specifically for breaking and entering domestic windows and with no other clear function—in which case responsibility can lie with the manufacturer if the robot is repurposed for further criminal or unlawful purposes. As described in the discussion of health and safety law in Sect. 2, the resolution of this question largely depends on whether, firstly, practicable risk reduction measures (i.e., what can be done about the reasonably foreseeable hazards) are readily identifiable, and secondly, whether implementation of those measures is reasonable. In the case of the knife, a well-established implement for which many examples of good practice design are available, it is unlikely that further risk reduction measures are practicable (i.e., technically feasible) that have not already been tried and their relative virtue exhaustively evidenced. In the case of the agricultural robot, the industry is still subject to gaps in internal communication, for example due to intellectual property protection, a lack of established industry groups and forums, and a general lack of publicly available evidence of safety improvements; therefore, the identification of practical risk reduction measures and the reduction in risk associated with reasonably foreseeable hazards may not be straightforward for designers.
Furthermore, in what ways should the instructions take into account reasonably foreseeable misuse? For example, the fact that instructions expressly prohibit certain re-programming or re-purposing would hardly deter a malicious actor bent on misusing the agribot. These issues would need to be addressed when developing rules applicable to robotics, particularly small robots like the agribot.

4.3 Scope and application of data protection law

It was noted earlier that the new EU Regulation on data protection makes significant provisions that would impact developments in robotics and AI. While many of the provisions address gaps in the law, they also raise difficult questions about the scope of the law and its impact on, and applicability to, robotics. For example, article 25 now makes PbD a legal standard and arguably enhances the protection of individual privacy. However, given the rapidly evolving technology environment, the fact that vulnerabilities and susceptibilities (to privacy infractions) may only become known after a product is used or operated, and the expansive and ambulatory nature of the concept of privacy,62 the question must be asked whether it is possible (even using the state of the art) to identify and assess all privacy implications and dimensions of particular technologies.
Furthermore, under article 22, relating to algorithms that make decisions based on user-level predictors which significantly affect users, the law effectively creates a 'right to explanation'. This entitles users to ask for an explanation of an algorithmic decision that was made about them (see previous notes above). Although decisions based on algorithms raise difficult ethical and privacy questions, the provision also poses significant challenges for the AI and machine learning community. For example, it is a common misconception that complex algorithms always do what their designers choose to have them do, when in fact it is difficult to understand, predict and explain the behaviour of advanced AI systems because of the complexity of the systems and the large volumes of data they use.63 Also, from a technical perspective, a requirement that algorithms offer explanations for their underlying decisions could potentially prohibit algorithms currently in use. This means that, to comply with the law, a complete overhaul of standard and widely used algorithmic techniques may be required.64
Finally, while the GDPR applies directly to all EU member states, it is unclear, given the uncertain political terrain precipitated by Brexit, how, and the extent to which, the law would apply to the UK. For example, even if the UK adopts the GDPR (which will take effect before the UK exits the EU), will the UK be bound to continue to implement the GDPR and its subsequent amendments? What would be the effect of the opinions, studies, etc. conducted by the EU on the formulation of future policies on robotics, machine learning, AI and cognitive computing? More importantly, since the GDPR now establishes both a European Data Protection Board (EDPB) and national supervisory authorities, what are the effects of the multiple (or at least dual) administrative and compliance regimes that Brexit could potentially create for the AI and robotics community in the EU?

5 Conclusion

The liability regime which applies to the use and operation of the agribot appears to be complicated. However, an essential aspect of this regime is that parties have different rights and obligations under different laws, which makes it possible to distribute liabilities. The law also allows defences which are specific and relevant for promoting developments in technology. More crucially, where permitted by law, parties may re-allocate liabilities and claim contributions for damages arising from accidents involving the agribot.
The outstanding issue requiring consideration is how autonomy should be defined in the context of the operation of the agribot. Unless law, policy or (for present purposes) contracts define the scope of the autonomy of the robot, the liability regime may be challenged by technical legal arguments.

Acknowledgements

The authors would like to thank Ed Thomas at Risktec Solutions Ltd and the anonymous reviewers for their useful comments. This research was supported in part by the InnovateUK project IBEX2: “Autonomous robot weed spraying for less favoured areas”, grant number 131790.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://​creativecommons.​org/​licenses/​by/​4.​0/​), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Footnotes
1
For example, there are service robots, military robots, toy robots and so on.
 
2
Exclusion and limiting clauses allow parties to either limit or exclude liabilities for acts or omissions for which they would ordinarily be liable.
 
3
Health and safety law is the exception to this rule; the onus is on the defendant(s) to demonstrate to the satisfaction of the courts that they have discharged their duties under health and safety law.
 
4
See Directive 85/374/EEC on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (as amended by Directive 1999/34/EC). The Directive is implemented in the UK by the Consumer Protection Act (CPA) 1987.
 
5
See e.g. UK CPA s 3 (2)(a).
 
6
See arts 7 and 15(1)(b) of the Directive; see also Sect. 4(1)(e) of the UK CPA, which in fact allows this defence.
 
7
Joint and several liability means the person injured by a defective product can sue multiple parties and recover full damages from one and/or all of them.
 
8
The HSE public register of convictions indicates around thirty (30) successful prosecutions under Sect. 6 in 10 years.
 
9
Judgement of Lord Asquith in Edwards vs National Coal Board 1949.
 
10
Health and Safety Executive. Reducing Risks Protecting People (R2P2).
 
11
1 × 10⁻³ fatalities per annum for workers, 1 × 10⁻⁴ fatalities per annum for members of the public.
 
12
R2P2 provides a value of £1,000,000 in 2001; however, this value, when subject to a reasonable allowance for inflation, should be considered a minimum. Various higher values have been applied in different industries.
 
13
Edwards vs National Coal Board 1949; ‘Moreover this computation falls to be made by the owner at a point of time anterior to the accident.’
 
14
R2P2 Reducing Risks Protecting People.
 
15
A review of prosecutions for 2016/17 under the CDM Regulations 2015 describes a total of seven (7) potential breaches of Principal Designer/Designer duties, whereas a total of ninety-nine (99) potential breaches of client duties, four hundred and eighty-nine (489) potential breaches of Principal Contractor duties, and two hundred and seventy-eight (278) potential breaches of Contractor duties were identified.
 
16
Several different hierarchies are available, for example the commonly used ERIC PD (Elimination, Reduction, Isolation, Control, Procedures, Discipline) and the hierarchy provided in the Provision and Use of Work Equipment Regulations (PUWER) 1998, where fixed guards shall be provided to prevent exposure to dangerous parts of machinery wherever practicable, and where not practicable, other guards or protection devices shall be provided. Information, instruction, training and supervision are in all cases the lowest level of the hierarchy for the control of identified risks.
 
17
Safety functions designed in accordance with BS EN IEC 61508 and BS EN IEC 62061.
 
18
For example, INDG199 HSE leaflet on Workplace Transport Safety and HSG136 HSE guidance on Workplace Transport Safety, both of which are freely available electronically from the HSE website.
 
19
This means the fact of a defect is sufficient to raise a presumption of negligence unless it is disproved.
 
20
Directive 2006/42/EC came into effect on 29 December 2009 and replaced Directive 98/37/EC.
 
21
[1868] UKHL 1.
 
22
See e.g. HSE, ‘Cattle and Public Access in England and Wales: Advice for Farmers, Landowners and Livestock Keepers’ http://​www.​hse.​gov.​uk/​pubns/​ais17ew.​pdf accessed 03/05/2017.
 
23
Animals Act 1971, s 1.
 
24
Until 2015, a "grandfathering" scheme allowed existing operators to practise without certification; this is no longer the case.
 
25
Usually a long and highly detailed legal document, not a physical label on a chemical container.
 
26
The high values have been most visibly demonstrated in many countries’ recent auctions of spectra to mobile phone companies.
 
27
From a safety perspective, it should also be noted that 2.4 GHz is a microwave frequency, similar to those used in microwave ovens, which operate at hundreds of watts for cooking. Multi-watt WiFi transmission may be harmful to human tissue as well as illegal.
 
28
European Parliament, (2014–2019) Committee on Legal Affairs, ‘Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics’ 2015/2103 (INL), p 8 (hereinafter Committee on Legal Affairs Draft Rules on Robotics).
 
29
See Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and the free movement of such data. The GDPR repeals and replaces Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data.
 
30
The controller is a natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data. See GDPR art 4(7).
 
31
See generally GDPR art 5.
 
32
General principles on transfer are contained in GDPR arts 44–50.
 
33
See also GDPR art 13(f).
 
34
European Parliament committee on Legal Affairs, ‘Artificial Intelligence: Potential Benefits and ethical Considerations’ p 4 http://​www.​europarl.​europa.​eu/​RegData/​etudes/​BRIE/​2016/​571380/​IPOL_​BRI%282016%29571380_​EN.​pdf accessed 13/03/2017.
 
35
See GDPR arts 33, 51, 58, 83(4) and (5).
 
36
(2014) EWHC 13 (QB).
 
37
The contractor can, however, bring an action in tort if this is due to a defect in the product, but only if he also suffers damage. See notes on product defect above.
 
38
See Compensation Act 2006, s 1(b).
 
39
Committee on Legal Affairs, ‘Draft Rules on Robotics’, Recital R.
 
40
Committee on Legal Affairs Draft Rules on Robotics, Recital Z.
 
41
Committee on Legal Affairs Draft Rules on Robotics, Recitals Q, R, S.
 
42
See House of Commons, Science and Technology Committee, ‘Robotics and Artificial Intelligence’ (12 October 2016) p 16.
 
43
European Parliament, European Civil Law Rules in Robotics (Study for the JURI Committee) 2016 p 13 (hereinafter EU Parliament, Civil Law Rules in Robotics).
 
44
Committee on Legal Affairs Draft Rules on Robotics, Recital I.
 
45
Committee on Legal Affairs Draft Rules on Robotics, Recital S.
 
46
Francesca Rossi, ‘Artificial Intelligence: Potential Benefits and Ethical Considerations’ (European Parliament Legal Affairs Committee Briefing) 2016 p 4.
 
47
Committee on Legal Affairs, ‘Draft Rules on Robotics’, 2016 p 7.
 
48
EU Parliament, ‘Civil Law Rules in Robotics’, 17.
 
49
EU Parliament, ‘Civil Law Rules in Robotics’ p 17.
 
50
See generally Committee on Legal Affairs Draft Rules on Robotics, p 13.
 
51
This is similar to the concept of corporate legal personality, which confers legal personality on companies and corporations, thus separating the corporation or company from its promoters, managers and directors.
 
52
Committee on Legal Affairs Draft Rules on Robotics, p 13.
 
53
Committee on Legal Affairs Draft Rules on Robotics p 13.
 
54
Committee on Legal Affairs Draft Rules on Robotics p 13.
 
55
See COUNCIL REGULATION (EC) No 428/2009 of 5 May 2009 setting up a Community regime for the control of exports, transfer, brokering and transit of dual-use items (hereinafter Regulation 428/2009).
 
56
See EU Parliament, ‘Implementation Appraisal Control of Trade in Dual Use Items’ (…Committee briefing 2016).
 
57
Regulation 428/2009, Recitals 2, 3.
 
58
Regulation 428/2009, art 2.
 
59
Committee on Legal Affairs ‘Draft Rules on Robotics’ item 34 p 12.
 
60
See notes on product defects above.
 
61
See Directive 2006/42/EC, item 1 of Annex 1.
 
62
For example, privacy can be relative to context, societies and even technologies.
 
63
Executive Office of the President, National Science and Technology Council Committee on Technology, 'Preparing for the Future of Artificial Intelligence' (2016) p 31; see also Arnold v Reuther 92 So. 2d 595, 596, where the court appeared to sanction this notion of autonomy when it suggested that liability in cases of autonomous vehicles will be higher because they (the vehicles) raise the presumption that their programming is accurate.
 
64
Bryce Goodman and Seth Flaxman, ‘European Union Regulations on Algorithmic Decision-making and a Right to Explanation’ p 1.
 
Literature
Anderson JM, Nidhi K, Stanley KD, Sorensen P, Samaras C, Oluwatola OA (2014) Autonomous vehicle technology: a guide for policymakers. RAND Corporation
Andrieu C, Freitas ND, Doucet A, Jordan MI (2003) An introduction to MCMC for machine learning. Mach Learn 50(1–2):5–43
Bac CW, Henten EJ, Hemming J, Edan Y (2014) Harvesting robots for high-value crops: state-of-the-art review and challenges ahead. J Field Robot 31(6):888–911
Beck S (2016) The problem of ascribing legal responsibility in the case of robotics. AI Soc 31(4):473–481
Beiker SA (2012) Legal aspects of autonomous driving. Santa Clara L Rev 52:1145
Bernardo JM, Smith AFM (2001) Bayesian theory
Binch A, Fox CW (2017) Controlled comparison of machine vision algorithms for Rumex and Urtica detection in grassland. Comput Electron Agric 140:123–138
Blackmore S, Godwin RJ, Fountas S (2003) The analysis of spatial and temporal trends in yield map data over six years. Biosyst Eng 84(4):455–466
Blackmore S, Griepentrog HW, Nielsen H, Nørremark M, Resting-Jeppesen J (2004) Development of a deterministic autonomous tractor. In: Proceedings CIGR, vol 11
Brodsky JS (2016) Cyberlaw and venture law: autonomous vehicle regulation: how an uncertain legal landscape may hit the brakes on self-driving cars. Berkeley Tech LJ 31:851–1169
Chua PY, Ilschner T, Caldwell DG (2003) Robotic manipulation of food products—a review. Ind Robot Int J 30(4):345–354
Cooper GF (1990) The computational complexity of probabilistic inference using Bayesian belief networks. Artif Intell 42(2–3):393–405
Douma F, Palodichuk SA (2012) Criminal liability issues created by autonomous vehicles. Santa Clara L Rev 52:1157
Dvorak J (2016) An autonomous, solar-powered tractor. American Society of Agricultural and Biological Engineers
Escolà A, Rosell-Polo JR, Planas S, Gil E, Pomar J, Camp F, Llorens J, Solanelles F (2013) Variable rate sprayer. Part 1—orchard prototype: design, implementation and validation. Comput Electron Agric 95:122–135
Fox CW, Roberts SJ (2012) A tutorial on variational Bayesian inference. Artif Intell Rev 38(2):85–95
Guizzo E (2011) How Google’s self-driving car works. IEEE Spectrum Online, October 18, 2011
Health and Safety Executive (2001) Reducing risks, protecting people – HSE’s decision-making process (R2P2)
Ishida M, Imou K, Okado A, Takenaga H, Honda Y, Itokawa N, Shibuya Y (1998) Autonomous tractor for forage production. J Jpn Soc Agric Mach 60(2):59–66
Levitt TS, Laskey KB (2000) Computational inference for evidential reasoning in support of judicial proof. Cardozo L Rev 22:1691
Marino D, Tamburrini G (2006) Learning robots and human responsibility. Int Rev Inf Ethics 6(12):46–51
Michio K, Noguchi N, Ishii K, Terato H (2002) The development of the autonomous tractor with steering controller applied by optimal control. In: Automation technology for off-road equipment: proceedings of the 2002 conference, p 367. American Society of Agricultural and Biological Engineers
Pedersen SM, Fountas S, Have H, Blackmore BS (2006) Agricultural robots—system analysis and economic feasibility. Precis Agric 7(4):295–308
Pinto C (2012) How autonomous vehicle policy in California and Nevada addresses technological and non-technological liabilities. Intersect Stanf J Sci Technol Soc 5
Singh N, Shaligram AD (2014) NPK measurement in soil and automatic soil fertilizer dispensing robot. Int J Eng Res Technol 3:635–637 (ESRSA Publications)
Thrun S, Burgard W, Fox D (2005) Probabilistic robotics. MIT Press, Cambridge
Transparency Market Research (2017) Agriculture robots market—global industry analysis, size, share, growth, trends and forecast 2016–2024. Transparency Market Research
Webster M (2017) HSE improvement and prohibition notices: what do they tell us about CDM 2015 and construction health and safety? V1.0, November