
Open Access 01-06-2016

On Robots and Insurance

Authors: Andrea Bertolini, Pericle Salvini, Teresa Pagliai, Annagiulia Morachioli, Giorgia Acerbi, Leopoldo Trieste, Filippo Cavallo, Giuseppe Turchetti, Paolo Dario

Published in: International Journal of Social Robotics | Issue 3/2016

Abstract

Does a robot producer, owner or user need an insurance policy? What kinds of risks derive from the use of robots? Are there any differences between insuring a robot and insuring any other product? And finally, who should bear the burden of paying the insurance premium? The objective of this article is to provide the reader with an overview of the issue of risk management of robotic applications through insurance contracts. Indeed, insurance products are essential for an effective technology transfer from research to market. As of today, the production, use and diffusion of robots determine risks that can hardly be identified and assessed, both with respect to the probability of their occurrence and to the consequences they might bring about. Hence, innovation renders existing insurance products inadequate, and often leaves insurance companies in the complex position of needing to elaborate new solutions in the absence of complete information. This article discusses the reasons for those hindrances and identifies the essential issues that lawyers, economists and engineers need to address in their future research in order to overcome current limitations and ultimately develop efficient and adequate risk management tools.
Notes
An erratum to this article can be found at http://dx.doi.org/10.1007/s12369-016-0360-0.

1 Introduction

In order for innovation to reach the market, a laboratory prototype needs to be transformed into a product. This entails a substantial technological effort to ensure the effectiveness of the device, its reliability, and its appeal to the public as a tool that responds to existing and perceived needs, as well as an attentive study of the market where it is intended to be distributed and the development of an adequate business strategy. At least as relevant, however, is the issue of risk management, in particular of the legal risks associated with the distribution of the device. When a product—of whatever nature, robotic or not—is sold and then used, the question of who is going to be held liable for the negative consequences arising from its functioning is of the greatest relevance. The legal risk of being held liable, and hence called on to compensate damages, needs to be managed too, and this is commonly done through insurance contracts that, in an economic perspective, transform ex post uncertainty into an ex ante—known, hence manageable—cost.
Existing literature has already extensively discussed the issue of who shall be held liable when damage arises from the use of a robot. Despite the problem not having been solved yet—it is not even plausible to conceive of a one-size-fits-all solution [1]—this is but one of the aspects that need to be addressed in order to elaborate a risk management strategy. Precisely identifying such risks, their possible economic and legal consequences, and the likelihood of their materializing is at least as essential. Insurance contracts are in fact elaborated pursuant to that information.
In this article, we therefore intend to cast some light on the issue of robots and insurance, firstly by identifying the reasons why existing insurance products may be inadequate for managing the specific risks posed by robotic technologies, and secondly by specifying the issues that lawyers, economists and engineers need to address in order to overcome those hindrances.
Delay in filling this gap in the insurance industry may overall result in a technology-chilling effect, delaying the emergence of new, desirable robotic applications onto the market, ultimately impairing the formation of a strong robotic industry.
The social desirability of a robotic application may also derive from the impact it has on fostering human beings’ relevant constitutional rights. Robotic prostheses, for instance, allow people with disabilities to recover a lost function, increasing their quality of life. As pointed out by [2] with respect to microprocessor-controlled lower-limb prosthetics: ‘the prosthetics industry suffers from a slow billing code application process, Medicare1-imposed fitness restrictions (on amputees that are likely to suffer from diabetes or obesity), insurance contracts that hurt prosthetist office profits, and private health insurance plans that restrict patients’ options.’ Notwithstanding the advantages brought about by robotic prostheses and the improvement in amputees’ quality of life, many patients are still excluded from enjoying the benefits of this technology for financial reasons (i.e. private health insurance not covering the price of robotic prostheses). The development of such devices therefore has not merely an economic and industrial relevance—which should not be underestimated—but also a characterizing social value. The issue of risk management is therefore to be understood as more broadly encompassing the ethical and social implications of advanced technologies [4].
The paper is organised as follows: Sect. 1.1 provides some preliminary notions on insurance contracts; Sect. 2 provides a broad overview of the current market for robots and of the insurance contracts in use, if any; Sect. 3 discusses some aspects of the management of the risks posed by robots, namely the types of negative consequences that may materialize and the assessment of the likelihood of their materializing.

1.1 Preliminary Remarks on Insurance Contracts

Insurance is a contract aimed at shielding the insured party from the adverse economic consequences of a future and possible risk, should that risk materialize. The risk can be of any type, deriving from the negligent behaviour of the insured or of third parties, and causing harm either to the party entering the contract (first party insurance) or to others (third party insurance). The losses due to its materialization may pertain both to the health and to the estate of the contracting party. Pursuant to art. 1882 of the Italian civil code (henceforth c.c.), insurance is defined as:
The contract whereby the insurer, against payment of a premium, undertakes to indemnify the insured, within the agreed limits, for the damage caused by an accident [...]2
According to the definition above, the prerequisites of this contract are: (i) an agreement between the parties, (ii) the existence of a risk to the insured party or potential third parties, (iii) the payment of a premium.
The latter is a function of the risk: it varies according to both the likelihood of the risk’s occurrence and the severity of the consequences that may arise once it materializes. Therefore, the very possibility of entering such a contract rests on the availability of data related to those parameters. In the absence of adequate statistical information, the insurance company will not be able to determine the premium and will most likely refuse to contract for the specific risk.
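As a purely illustrative aside, the relationship just described can be written in the standard actuarial form of the expected-value premium principle; the symbols below are ours and are not used in the article or in the civil code.

```latex
% Stylized premium formula (illustrative notation, not from the article):
%   \lambda : expected claim frequency (accidents per insured period)
%   S       : claim severity (loss per accident), with mean E[S]
%   \theta  : loading factor for administrative costs, capital and profit
\begin{equation*}
  P \;=\; (1+\theta)\,\lambda\,\mathbb{E}[S]
\end{equation*}
```

Read in this way, the article's point is that for novel robotic applications neither the frequency nor the expected severity can yet be estimated reliably, so an insurer can only inflate the loading as a buffer against uncertainty, or decline to underwrite altogether.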
Insurance is one of the possible legal tools that parties can resort to in order to manage the risks they face in their activities, both professional and not (e.g. driving). The contract will thus be entered into in order to avoid (shift) the negative economic consequences the party would otherwise face because of the application of the liability rules set forth by existing legislation, which—for the issue here addressed—may range from professional to enterprise liability as well as product liability.
Normally, the parties are left free to decide whether to enter an insurance contract or not, in order to manage a risk they are exposed to. Their personal preferences, their risk aversion and ultimately their utility functions will determine whether a contract will be signed, and at which conditions, as for any other asset-management-related choice.
However, in some cases the legislator, deeming that the risk associated with a certain activity is too high, or that moral hazard and adverse selection may negatively affect the pooling and spreading of the damages that arise, may impose a duty to purchase insurance (normally third party insurance) [5–7]. This is the case with compulsory traffic insurance [8] as well as with professional liability insurance (typically for medical doctors and lawyers), and more recently also with drones [8].
Having briefly sketched the notion of insurance and its purpose, we can attempt to address the questions of (i) whether robotic applications pose novel issues in this perspective, (ii) whether the risks they pose can be insured, and (iii) who is ultimately best suited to bear the associated burden, hence to purchase coverage.
Given the variety of applications (from surgery to domestic chores) and the differences in national legislation, in this paper we will refer mainly to Italian and European law, when applicable. The considerations drawn are however of a general nature and can be extended to other legal systems.

2 Overview of Some Existing Professional and Non-professional Robots and Their Implications with Respect to Insurance Contracts

Over the past few decades, many robots have become commercial products, and robot sales continue to increase. In 2013, 178,132 industrial robots and 4 million service robots were sold, respectively 12 % and 28 % more than in 2012 [9]. Japan is among the countries most seriously pursuing a robot revolution in the economy and society. Remarkably, the Ministry of Economy, Trade and Industry (METI) has recently announced Japan’s robot strategy for a “New industrial revolution driven by robots”, in which the use of robotic technologies is meant to: ‘improve Japan’s productivity, enhance companies’ earning power, and raise wages’ [10]. Among the challenges identified by METI is that ‘current technique is insufficient in identifying and evaluating the risk of unexpected potential accidents that results from the expansion of the area to utilize robots’ [10].
The aim of this section is to point out the implications of producing, owning or using robots with respect to risk management, by looking at some of the robots currently available on the market. In other words, we wish to find out whether the use, possession or production of robots is burdened by a legal duty to take out insurance policies, as well as whether insurance products—either already existing or specifically developed ones—are available and diffused as tools to manage the legal risks posed by applicable liability rules.
Before proceeding to the analysis of commercial robots, it is worth mentioning the case of the test of the autonomous robot DustCart, developed within the European funded project DustBot [11]. The robot, designed to perform a door-to-door separate waste collection service on demand, was tested in the downtown area of the village of Peccioli from 15th June to 7th August 2010, with real users as participants [12]. The objectives of the test, which involved 24 families and 10 businesses, were to evaluate the robot’s technical performance, the usability and acceptability of the system, and the economic affordability of the service provided. The test area was adapted to host the autonomous robot by dedicating a part of the road to robot transit and installing new road signs warning about the presence of the robot in the area.
In order for the testing to take place in a safe and fully controlled fashion, the BioRobotics Institute of Scuola Superiore Sant’Anna decided to enter into a specific insurance contract with its established provider, thereby supplementing the general insurance policy, which covers all research activities, including demonstrations carried out with prototypes by the institution’s personnel anywhere in the world.
On the one hand, this shows that testing the robot in a real-life environment—supervised but not restricted—was perceived as a new kind of risk. On the other hand, the sum charged to supplement the general contract and cover the testing of the robot did not correspond to an exact determination of the foreseeable risks in light of precise and existing data. Entering an insurance contract under such conditions is therefore potentially hazardous for both parties, and may only be applied to an exceptional testing situation, being wholly inadequate for the distribution of the same product to the market and to the general public.

2.1 Industrial Robots

It is possible to distinguish two main typologies of industrial robots: robots operating in isolation from human beings, usually constrained inside protective cages; and “collaborative” robots, which are designed to interact physically with workers, such as Baxter by Rethink Robotics [13] or UR5 by Universal Robots [14]. While conventional industrial robots are already market products, with CE marking, Baxter and other similar robots designed for physical collaboration with workers are not yet commercialised in Europe. As to insurance, it seems possible to argue that there are no specific requirements for producers, owners or users of conventional industrial robots. As a matter of fact, they are very well established in manufacturing, and their production and use are fully covered by regulations [15] and standards [16]. On the contrary, safety standards for “collaborative” robots are still under development [17]. It therefore remains uncertain whether insurance companies are already prepared to cover the risks associated with the deployment of collaborative robots such as Baxter, considering that workers’ insurance is compulsory in many countries, such as Italy.
Parenthetically, the conventional assumption that automation increases the overall safety of an activity has led some scholars to consider the consequences of a safer workplace. By reducing the risks to which workers are exposed in factories, it is likely that workers’ compensation coverage can be reduced too [18]. Whether collaborative robots will increase or instead reduce the safety of the workers interacting with them is not yet clear.

2.2 Service Robots

A service robot ‘is a robot that performs useful tasks for humans or equipment excluding industrial automation application’ [19]. An example of a service robot for non-professional use is Roomba by iRobot, an autonomous robot designed to vacuum-clean floors, which has been on the market since 2002 [20]. Many other robots similar to Roomba are now commercialised, such as pool, window or gutter cleaners. For these kinds of “chore” robots there are no specific insurance requirements for production, ownership or use. This is due to the fact that the risks are easily identifiable and do not present significant physical hazards for people or things, given the small size and limited capabilities of these robots. The same is true for some of the most popular entertainment and educational robots sold on the market, such as the robotic dog AIBO by Sony (model ERS-7) and the small humanoid robot Nao Evolution by Aldebaran.
According to the regulatory information provided by the manufacturers, these products comply with European directives as well as European and international standards.3 Nevertheless, saying that there are no insurance requirements, or that the robots are available on the market, does not imply that they cannot bring about risks, in particular to human safety and security. However, since these robots are relatively new, there are no long-term studies showing evidence of such risks. The problem has been succinctly described by the Collingridge dilemma: the impact of a new technology cannot be easily predicted until the technology is extensively developed and widely used; by that time, however, it has become difficult to remedy any problems that emerge [21].
Still in the field of service applications, a special case is that of civilian drones. A drone is defined by the International Civil Aviation Organisation (ICAO)4 as ‘an unmanned aerial vehicle, which is a pilotless aircraft, in the sense of Article 8 of the Convention on International Civil Aviation, which is flown without a pilot-in-command on-board and is either remotely and fully controlled from another place (ground, another aircraft, space) or programmed and fully autonomous’ [22].5 With respect to human control, therefore, drones can be divided into remotely operated and autonomous devices. Remotely operated drones are also known as ‘Remotely Piloted Aircraft Systems (RPAS)’, namely: ‘A set of configurable elements consisting of a remotely-piloted aircraft, its associated remote pilot station(s), the required command and control links and any other system elements as may be required, at any point during flight operation’ [22]. RPAS are currently the subject of new regulations for use in airspace. As pointed out by ICAO: ‘Fully autonomous aircraft operations are not being considered in this effort, nor are unmanned free balloons nor other types of aircraft which cannot be managed on a real-time basis during flight.’ [22]. Indeed, the Italian Civil Aviation Authority has issued the ‘Remotely Piloted Aerial Vehicles Regulation’, which entered into force on April 30th, 2014 [8]. As far as insurance is concerned, according to Art. 20 of the Italian regulation: ‘A RPAS cannot be operated without valid, adequate third party insurance, not less than the minimum insurance coverage of the table in Article 7 of Regulation (EC) No 785/2004’ [8]. However, as pointed out in the literature, insuring drones can be complicated given that there are not sufficient data on the hazards that can be caused by RPAS [23].
Going back to autonomous drones, that is, drones that can fly without any human intervention: although they are not yet authorised for use, either by ICAO or under EU regulations, it is worth pointing out that they are already available on the market. An example is Hexo Plus, a drone with a camera that can follow and film a person autonomously [24].
Self-driving cars are not yet on the market, although the technology is currently being tested on many public roads, including in Europe [25], thanks to special legal permissions [26]. As pointed out by [27]: ‘The problem of driverless cars has shifted from one of engineering and technology, to one of legislation and insurance’. Outside of the medical field, self-driving cars are one of the most discussed topics with respect to robotics and autonomy [28]. Concerning insurance, according to [29], new business models are needed for the management of risks deriving from autonomous vehicles. Like surgical robots, autonomous vehicles could increase safety and may therefore determine a decrease in insurance costs [30]. This conclusion, however, seems implausible and needs to be further qualified. No matter how safe driverless vehicles will be, accidents will still occur, due to causes and malfunctions that may be hard to foresee at present. Accidents due to human lack of attention and reckless behaviour on the street will clearly become negligible, but new vulnerabilities will certainly emerge. For instance, the fact that driverless vehicles will make use of numerous sensors and connections will give rise to the relevant issue of their cyber security and their vulnerability to attacks brought about from a distance. The role of policy makers in fostering the availability of insurance products for autonomous vehicles will be decisive.6
As far as insurance issues are concerned, the central point seems to be the human action (or inaction), in terms of either choosing to override the autonomous robot’s actions in the name of safety, or simply letting the robot continue its intended actions. If an accident occurs, the legal debate will be whether it is the manufacturer’s fault, or the individual’s fault for taking over [31]. With a fully autonomous vehicle, the responsibility for avoiding an accident shifts entirely to the vehicle and the components of its accident avoidance systems [32]. Some might argue that if cars really do become safer, there might not even be a need for motor insurance [33], with evident negative consequences for the insurance market. However, as stated in Lloyd’s report on autonomous vehicles [34], some element of risk would be retained by the owner of a car (e.g. damage or theft can still occur when a car is parked in a driveway).
Finally, another special kind of service that robots can perform is in the medical field. A definition of medical robots within standards organisations does not yet exist; however, the phrase is very popular in the literature.7 Most “medical robots” are used by professional users, such as the da Vinci surgical robot by Intuitive Surgical [36]. However, there are also medical robots designed for patients, such as the DEKA Arm System by DEKA [37], a bionic arm for amputees that has been approved for commercialisation by the U.S. Food and Drug Administration (FDA), or the robot suit HAL by Cyberdyne [38], an exoskeleton designed to assist people with reduced mobility, which is undergoing FDA approval. Given the variety of applications, we will consider here the da Vinci robot, which is already sold on the European market. Da Vinci is a tele-operated robot currently used for minimally invasive surgery [36]. Since it is difficult to identify a common attitude towards insurance coverage among European and non-EU countries, we report here the statement from the da Vinci manufacturer: ‘da Vinci Surgery is categorized as robot-assisted minimally invasive surgery, so any insurance that covers minimally invasive surgery generally covers da Vinci Surgery’ [36]. In the literature, the most recurrent issues concerning surgical robots and insurance regard the coverage of specific surgical interventions by national health insurance services [39, 40] and the reduced health insurance costs resulting from robot-assisted surgery [41]. These issues are relevant also for other medical applications of robots, such as prosthetic devices, as reported by [2]. Finally, a few scholars focus on the need for hospitals and doctors to have insurance that covers robot-assisted surgery [42, 43]. Indeed, it is questionable whether the robot fits within the existing contracts for medical doctors or whether, as a consequence of new risks, new contracts should be devised.

3 Discussion

With the exception of self-driving cars, all the applications described in the previous section are already market products. However, as we shall point out, their presence on the market does not imply that the management of risk has been solved or even addressed.
Most of the commercial robots currently on the market are not covered by any specific insurance product. That is, for instance, the case for surgical robots. Existing medical insurance policies for minimally invasive laparoscopy may apply, but it is not certain whether the specificity of the robotic device is taken into account when determining the premium. In particular, uncertainty with respect to the specific risks posed by the technology may only be overcome once the use of the device is diffused and data are available about the rate of accidents involving its use.
In other cases, such as that of drones, a legal duty to acquire third party insurance was introduced under Italian law [8]. However, it is not clear what kind of insurance contracts are currently being offered to users, given that the lack of data on their use makes it extremely difficult for companies to provide a narrowly tailored contract.
Finally, where the device is autonomous—like a driverless vehicle—a legal duty on the owner to acquire insurance—as is currently the case for car owners under the EU Motor Insurance Directive 2009/103/EC—may be plainly unjustified. Indeed, once the device is truly autonomous and the human being cannot interfere with its functioning (not even by supervising and eventually intervening to avoid a collision or take over control in some cases), the only party responsible for the functioning of the device will be the producer or designer. Hence, forcing the owner to bear the cost of insurance would appear completely unjustified under a common fault principle. In such a scenario, either the claim is made that it is better (more efficient) to leave the cost with the user rather than the producer—both a deep-pocket and an economies-of-scale argument appear fallacious in such a case—or a different legal solution ought to be enacted to take technological development into account. Nonetheless, such a problem will only exist once driverless vehicles become completely independent from human control and supervision, while for the moment existing liability schemes and insurance products may prove adequate.
Let us now address the question of the possible risks that can be generated by a robot, in particular by looking at whether these risks are different from those generated by other products. Indeed, the identification of risks is relevant since it allows a risk to be transformed into an economic value, which is a precondition for insurance agreements.

3.1 What Kind of Damages Can be Caused by Robots?

Although robots are intended to perform the tasks they are designed for in a safer way than the human beings they replace or assist, it is not possible to exclude the occurrence of a dangerous event. Each robot presents different kinds and levels of risk, which depend on what the robot is made of (its nature, including size, shape, weight, material, etc.), what the robot is doing (its function or task), where the robot is employed (its operative environment) and whom it interacts with (its relationship with human beings or objects). The fact that robots are nowadays deployed in several public environments, in physical contact with or proximity to users or bystanders, reduces the possibility of devising extrinsic protective measures, such as fences or helmets, as was done with early industrial robots in factories, and highlights the need to design intrinsic safety solutions.
Let us now consider the kinds of risks that a robot may cause. As with any other artefact, the risks generated by a robot can be divided into damage to persons and damage to things and property.
As far as personal damage is concerned, we propose to consider two kinds of damage which may happen using a robot:
  • hard damages
  • soft damages.
The distinction is meant to take into account the current evolution in the field of human robot interaction from merely physical to cognitive and affective.
Hard damages are physical, that is, related to the body of a person and caused by physical human–robot interaction [44]. Physical damages can be caused by defects in the correct working of a robot, in which case they are ascribed to the producer. However, they can also be caused by users to third persons or objects as a consequence of incorrect use, software or hardware alteration, or lack of maintenance.
By contrast, we propose to call “soft damages” those mental or psychological damages which are related to the mind and can affect the cognitive, social and even emotional capabilities of a person. This kind of damage may derive from prolonged interactions with special kinds of robots, such as personal care robots. Psychologist Sherry Turkle defines as “nurturing machines” robots that elicit in their users a compulsion to do something for them, e.g. feeding them or interacting with them [45]. This kind of danger may be compared to addiction to video-poker machines or the Internet, which are new entries in the Diagnostic and Statistical Manual of Mental Disorders edited by the American Psychiatric Association [46]. Another example of a soft hazard is the so-called “environmental generational amnesia” [47], that is, getting used to and preferring the artificial copy or surrogate over the natural, as a consequence of the increasing replacement of reality with technological surrogates.

3.2 Risk Assessment

A related and preliminary issue is the identification and evaluation of the risks themselves. The evaluation of a risk, which underlies the determination of the insurance premium, is based on the estimation of its frequency and its severity.
The difficulty in assessing risks with respect to robots is due to (i) the technical complexity of robotic devices, (ii) the lack of sufficient data with respect to the potential risks and the accidents they may cause, (iii) the uncertainties with respect to liabilities that producers and users may face.
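To make point (ii) concrete, the following minimal sketch (hypothetical figures and function names, not taken from the article) estimates frequency and severity from recorded incidents and shows how thin exposure data widens the gap between a best-estimate premium and the prudent premium an insurer would actually have to charge.

```python
import math

def indicative_premium(n_device_years, n_incidents, losses, loading=0.3, z=1.96):
    """Rough, illustrative premium estimate from incident records.

    n_device_years : total observed exposure (device-years in the field)
    n_incidents    : number of recorded accidents in that exposure
    losses         : list of loss amounts (EUR) for those accidents
    loading        : multiplicative loading for costs, capital and profit
    z              : normal quantile for an approximate confidence interval
    """
    frequency = n_incidents / n_device_years             # accidents per device-year
    severity = sum(losses) / len(losses) if losses else 0.0
    expected_loss = frequency * severity                  # per device-year

    # Poisson-style standard error on the frequency: with little exposure the
    # interval is wide, so the insurer must price the upper bound (or refuse).
    se = math.sqrt(max(n_incidents, 1)) / n_device_years
    freq_upper = frequency + z * se

    return {
        "expected_loss": expected_loss,
        "premium": (1 + loading) * expected_loss,
        "prudent_premium": (1 + loading) * freq_upper * severity,
    }

# Hypothetical data: 500 device-years of testing, 3 incidents observed.
print(indicative_premium(500, 3, [12_000, 4_500, 30_000]))
# Hypothetical data: same incident rate, but 50,000 device-years of real use.
print(indicative_premium(50_000, 300, [12_000, 4_500, 30_000] * 100))
```

With only 500 hypothetical device-years of testing, the prudent premium comes out at roughly double the best estimate; with 50,000 device-years of real use and the same accident rate, the two nearly coincide, which is why the diffusion and real use discussed below are what ultimately make these risks insurable.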
With respect to the difficulties in assessing technological risks due to robot complexity, close interaction between insurance companies and robot designers and producers may prove useful, though not sufficient. The lack of substantial data about potential risks and accidents may be partially addressed through extensive testing in real-life environments, which provides useful information when the interactions of the device with its environment are closer to reality. Indeed, the possibility of testing robots in their operating fields rather than in laboratories would allow situating robots in their environments, namely designing socially as well as legally ‘embedded robots that are structurally coupled with their surroundings’ [48]. In this regard, deregulation areas [49] or robotics innovation facilities (RIF) [50] may prove useful. However, in the case of devices to be used in very unstructured and extremely variable and diverse scenarios—like robotic prostheses and exoskeletons—only the diffusion and real use of the device, and the subsequent accidents caused, will provide more reliable data [33]. In this case, ethical considerations should be carefully evaluated, since the experiments would involve human subjects. Although valuable at the research level, for the argument in favour of “real use” as the best way to assess risk to be a viable solution, it should receive the approval of the national ethical committees and, together with informed consent, be in line with regulations on human clinical trials and with the Helsinki Declaration.

Finally, the uncertainty about liability may only be partially addressed. Indeed, all existing and forthcoming robotic applications are regulated by existing liability rules, which, unless provided otherwise, do apply. The problem of the “legal gap” is therefore a false problem, since anything that exists is also regulated by the legal system within which it exists. The real issue, instead, is whether the existing legal framework is the best and most desirable one for that kind of technology, given its social relevance, desirability, and ethical and economic importance. In this perspective, sound policy arguments can often be formulated both to ensure high levels of product safety and adequate compensation of victims, while allowing the growth of a market for these applications.
In the next section, we will provide a brief discussion of the problems associated with the application of liability rules to robotic devices.

3.3 The Liability Regime of Robots: Who Pays for Damages When Robots Run Amok?

Robots are products [51],8 hence product liability rules do apply. Product liability rules make the producer responsible for all damages caused to the user and to third parties by the functioning of the device. In particular, when damage occurs as a consequence of the use of a robot, its design may be deemed defective and not sufficiently safe. What kind of design may be deemed defective is, however, a matter of fact, decided ex post by a judge in a trial. In some cases, the novelty of robotic technologies may cause the outcome of a judicial decision to be highly uncertain. Moreover, the absence of sufficiently narrowly tailored technical standards [52] may increase this uncertainty. To better illustrate this concept, the case of robotic prostheses may be briefly considered. Prostheses, pursuant to the European directive on active implantable medical devices [52], need to abide by those technical regulations in order to receive the CE marking, which is required for their commercialization in Europe. However, that same directive applies to anything ranging from a pacemaker to an industrial robot, intuitively very distinct devices. At the same time, pursuant to existing legislation, even if the device complied with the safety standards set forth by the directive, liability would not be excluded if an accident occurred while the product was being used.
Other existing liability rules may be considered at a theoretical level in order to determine who may be called to pay for damages arising from the use of robots. Rules such as those establishing the liability of a person for the damage caused by a thing in custody or by an animal [53], as well as a general negligence standard (such as that set forth by art. 2043 c.c.), can be considered [51, 54, 55]. Addressing the details of those regulations falls beyond the purposes of the current analysis. However, it shall suffice to say that, given that such rules were not designed with robots in mind, anticipating the way they may be applied in a trial is not easy, and the lack of significant data about previous occurrences does not allow workable conclusions to be easily inferred.
Particularly complex is the disentangling of liability in cases of articulated human-machine interaction, for instance in the case of driverless vehicles before they become completely autonomous. In such cases, the responsibility of the human “driver”, still partially in control, interferes with that of the producer and programmer, who designed the vehicle. Precisely assessing how a court could distribute the negative consequences deriving from an accident in such cases is again quite difficult, and strongly rests on the very possibility of tracking down the material cause of the event.
Finally, if the possibility of machine learning is considered, further uncertainty is added to the determination of which party ought to be held liable for the harmful consequences arising from the use of the robot. On the one hand, producers may be called on to compensate pursuant to the claim that they failed to provide the device with adequate safety measures to prevent the negative outcome that arose as a consequence of its learning capacity. Such reasoning is ultimately not any different from that of all product liability claims grounded on the defectiveness of design.
On the other hand, instead, the role of the users who “taught” the robot may be considered too. Indeed, they may have actively “modified” the behaviour of the machine in a way deemed to be ex ante unpredictable by the producer.
As in the case of the driverless vehicle, disentangling the two different kinds of liability may prove complex and certainly rests on the ascertainment of the facts performed ex post. However, unlike the case of driverless vehicles, the learning capacity that could prove problematic in this respect is much more remote, representing for the moment a merely theoretical problem that may emerge only in the medium-to-long run [56].

4 Conclusion: Risk Management and Innovation Through Insurance Contracts

The analysis conducted in this article explains why, as of today, insurance companies face too complex a challenge in precisely assessing the risks associated with the production, use and diffusion of robots of various kinds.
In particular, the complexity and novelty of robots (especially autonomous ones) make the identification of the damages they may bring about in a real-life environment extremely complex: such damages are diversified and hard to foresee, and hence to manage. The same kind of technical malfunctioning may indeed produce very different outcomes once the device is used in different and ex ante unconstrained environments (as in the case of a robotic prosthesis).
The complexity and to some extent opacity—if not inadequacy—of the legal framework adds to such considerations, making the assessment of the risk pertaining to each party involved (be it the producer or the user) even more complex. Indeed, in some cases it is not even clear which party may be held liable and hence, ultimately, who should have an interest in acquiring insurance coverage.
Overall, this may result either in (i) the refusal to insure some kinds of robotic devices, (ii) the use of existing contracts, which may however prove inadequate, or (iii) the charging of higher premiums, ultimately delaying the diffusion of robots as well as impairing the development of the supply side of the economy (an industry for the production of robots).
All such problems cannot be successfully tackled by simply introducing new legal duties for users and producers to purchase first or third party insurance products (as in the case of drones). Indeed, this may exacerbate the problem and further delay the adoption and diffusion of robotic devices.
Insurance is a fundamental tool to enable technology transfer from research to the market and the creation of a new industry. The risk management function of insurance helps transform ex post uncertainty into an ex ante cost that may be internalized by the party and, in the case of the producer, even distributed through price mechanisms among all possible users of the device.
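A purely hypothetical back-of-the-envelope illustration of this internalization (our figures, not the article's): if a producer expects an accident frequency of 0.1 % per device-year, an average damage of 50,000 EUR and a five-year service life, the expected liability cost can be folded into the price as

```latex
% Hypothetical figures, for illustration only
\begin{equation*}
  \text{per-device markup} \;\approx\; \lambda\,\mathbb{E}[S]\,T
  \;=\; 0.001 \times 50{,}000\ \text{EUR} \times 5 \;=\; 250\ \text{EUR}
\end{equation*}
```

Spread in this way over every unit sold, an uncertain ex post liability becomes a known ex ante component of the price, which is precisely the risk-management function described above.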
To this end, however, an effort is also required in both directions: technological and legal assessment. As far as the former is concerned, a deeper understanding of the functioning of the device is required, which may only be achieved through extensive testing in a realistic environment. The use of RIFs [50] or deregulation areas [49] may prove essential to this end. Moreover, adequate risk assessment methods, which include both hard and soft risks, need to be developed for evaluating new robots [57].
As per the legal assessment, instead, two different perspectives need to be adopted and used. On the one hand, the effort needs to be made to precisely assess—despite existing uncertainties—what the solutions adopted by courts in light of existing regulation most probably would be, should an accident occur. Such a conclusion can only be based on a narrow-tailored technological assessment of the specific device studied.
On the other hand, preferable strategies need to be identified and suggested in a de iure condendo perspective, in light of a possible—and to some extent desirable—legislative reform in the field of liability rules for the damage caused by the use of robots.
All such efforts would help provide the necessary conditions for the development of specific insurance products for robotic devices, at once allowing the proliferation of a new market for risk-management products in technologically advanced industries, as well as the establishment of a sound and competitive robotic industry.

Acknowledgments

This article has partially received funding from the European Commission funded project ‘Robot-Era: Implementation and integration of advanced Robotic systems and intelligent Environments in real scenarios for the aging population’ (288899) FP7-ICT-Challenge 5: ICT for Health, Ageing Well, Inclusion and Governance.
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Footnotes
1
In the United States, Medicare is a federal system of health insurance for elderly people [3].
 
2
Authors’ own translation. Life insurance contracts are here not addressed.
 
3
Aibo complies with: Toy Directive 88/378/EEC; EMC Directive 89/336/EEC; LVD Directive 73/23/EEC; R&TTE Directive 1999/5/EC; and EN 55022. Nao: R&TTE Directive 1999/5/EC; Wireless Radio Equipment: ETSI EN 300 328, ETSI EN 301 893, FCC Part 15, ARIB-STD-T66 & ARIB-STD-T71; EMC: EN 55022, CISPR 22, EN 55024, CISPR 24, ETSI EN 301 489-1, ETSI EN 301 489-17; Health: IEC/EN 62479; Safety: IEC/EN 60950.
 
4
‘The International Civil Aviation Organization (ICAO) is a UN specialized agency, created in 1944 upon the signing of the Convention on International Civil Aviation (Chicago Convention). ICAO works with the Convention’s 191 Member States and global aviation organizations to develop international Standards and Recommended Practices (SARPs) which States reference when developing their legally-enforceable national civil aviation regulations’ [22].
 
5
According to Article 8 of the Convention on International Civil Aviation: ‘No aircraft capable of being flown without a pilot shall be flown without a pilot over the territory of a contracting State without special authorization by that State and in accordance with the terms of such authorization’ [22].
 
6
A patent exists: ‘Systems and methods for insurance based on monitored characteristics of an autonomous drive mode selection system’ (US 8595037 B1).
 
7
A joint committee between the International Standardization Organization (ISO) and the International Electrotechnical Commission (IEC) is working on standards for medical robots [35].
 
8
All attempts at identifying autonomy as the relevant paradigm that causes liability rules to be inadequate can easily be criticized. Only a strong model of autonomy, where the robot is capable of freely determining its own will and desires, would justify treating the robot as a subject. Short of that, the robot is to be deemed a mere product, and no ethical or legal grounds can be identified in order to conclude differently. However, policy arguments can be formulated in order to suggest that other subjects, different from the producer and researcher of the device, may be held liable for consequences arising from its functioning, based on purely functional—policy—considerations.
 
Literature
1.
Bertolini A (2013) Robots as products: the case for a realistic analysis of robotic technologies and the law. Law Innov Technol 5(2):147–171
4.
Bertolini A (2015) Robotic prostheses as products enhancing the rights of people with disabilities: reconsidering the structure of liability rules. Int Rev Law Comput Technol 29(2–3):116–136
5.
Posner R (2007) Economic analysis of law, 7th edn. Wolters Kluwer, New York
6.
Polinsky AM, Shavell S (2007) Handbook of law and economics, vol 2. Elsevier, Amsterdam
7.
Akerlof G (1970) The market for lemons: quality uncertainty and the market mechanism. Q J Econ 84(3):488–500
9.
IFR (2014) World robotics 2014: service robots. Case studies, IFR Statistical Department
11.
DustBot—Networked and Cooperating Robots for Urban Hygiene project, FP6-045299. Start date: 01-12-2006, end date: 30-11-2009. Coordinated by Scuola Superiore Sant’Anna. Project website: http://www.dustbot.org/. Accessed 31 Aug 2015
12.
Salvini P, Teti G, Spadoni E, Laschi C, Mazzolai B, Dario P (2011) Peccioli: the testing site for the robot DustCart—focus on social and legal challenges. IEEE Robot Autom Mag, Special Issue on Roboethics 18(1)
15.
Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC (recast)
16.
ISO 10218-1:2011 Robots and robotic devices—Safety requirements for industrial robots—Part 1: Robots
17.
ISO/TS 15066:2016 Robots and robotic devices—Collaborative robots
19.
ISO 8373:2012 Robots and robotic devices—Vocabulary
21.
Collingridge D (1980) The social control of technology. St. Martin’s Press, London
22.
ICAO (2011) Unmanned aircraft systems (UAS). Cir 328, ISBN 978-92-9231-751-5. http://www.icao.int. Accessed 31 Aug 2015
23.
Beyer DK, Dulo DA, Townsley GA, Wu SS (2014) Risk, product liability trends, triggers, and insurance in commercial aerial robots. In: WE ROBOT conference on legal & policy issues relating to robotics, University of Miami School of Law, 4–5 April 2014
27.
Winfield AF (2014) The next big thing(s) in robotics. In: Westlake S (ed) Our work here is done: visions of a robot economy. NESTA, London, pp 38–44
28.
Beer JM, Fisk AD, Rogers WA (2014) Toward a framework for levels of robot autonomy in human–robot interaction. J Hum Robot Interact 3(2):74–99
29.
Smith BW (2015) If a robotic car crashes, who takes the blame? In: AAAS 2015 annual meeting, San Jose Convention Center, California. Accessed 14 Feb 2015
30.
Thierer A, Hagemann R (2014) Removing roadblocks to intelligent vehicles and driverless cars. Mercatus working paper, Mercatus Center, George Mason University, Virginia
31.
32.
Marchant GE, Lindor RA (2012) The coming collision between autonomous vehicles and the liability system. Santa Clara L Rev 52:1321
33.
39.
Seo Y (2015) Urologic robotic surgery in Korea: past and present. Korean J Urol 56:546
40.
China T, Isotani S, Hisasue S, Horie S (2015) Robot-assisted surgical navigation in urology. Juntendo Med J 61(1):5–10
41.
Niklas C, Saar M, Berg B, Steiner K, Janssen M, Siemer S, Stöckle M, Ohlmann CH (2015) da Vinci and open radical prostatectomy: comparison of clinical outcomes and analysis of insurance costs. Urol Int. doi:10.1159/000431104
42.
Dickens BM, Cook RJ (2006) Legal and ethical issues in telemedicine and robotics. Int J Gynecol Obstet 94:73–78
44.
Bicchi A, Peshkin MA, Colgate JE (2008) Safety for physical human–robot interaction. In: Siciliano B, Khatib O (eds) Springer handbook of robotics. Springer, Berlin, pp 1335–1348
47.
Kahn PH (2009) The human relation with nature and technological nature. Curr Dir Psychol Sci 18(1):37–42
48.
Sabanovic S, Michalowski MP, Simmons R (2006) Robots in the wild: observing human–robot social interaction outside the lab. AMC’06, Istanbul
49.
Weng YH, Sugahara Y, Hashimoto K, Takanishi A (2015) Intersection of “Tokku” special zone, robots, and the law: a case study on legal impacts to humanoid robots. Int J Soc Robot 7(5):841–857. doi:10.1007/s12369-015-0287-x
51.
Bertolini A (2014) Liability for the acts of robots: justifying a change in perspective. The Robolaw Series. Pisa University Press, Pisa
53.
Schaerer E, Kelley R, Nicolescu M (2009) Robots as animals: a framework for liability and responsibility in human–robot interactions. In: The 18th IEEE international symposium on robot and human interactive communication (RO-MAN)
54.
Santosuosso A, Boscarato C, Caroleo F (2012) Robot e Diritto: una Prima Ricognizione. La Nuova Giurisprudenza Commentata 494
55.
Matthias A (2010) Automaten als Träger von Rechten. Logos Verlag, Berlin
56.
Bertolini A (2014) Robots and liability—justifying a change in perspective. In: Battaglia F, Nida-Rümelin J, Mukerji N (eds) Rethinking responsibility in science and technology. Pisa University Press, Pisa
57.
Salvini P, Laschi C, Dario P (2010) Design for acceptability: improving robots’ coexistence in human society. Int J Soc Robot 2(4):451–460