
Bringing Trustworthy Robo-Advisers into the Insurance Distribution Process: An Accountability Perspective

  • Open Access
  • 2025
  • Original Paper
  • Book Chapter

Abstract

Robo-advisers have been increasingly developed and deployed in the process of insurance distribution. While they play an important role in facilitating insurance distribution, concerns have also emerged regarding their lack of transparency and potential to cause harm to end users. The involvement of multiple actors throughout the lifecycle of a robo-adviser further complicates the question of accountability. Against this backdrop, examining robo-advisers through the lens of accountability becomes essential. This chapter explores this issue within the framework of both existing and newly introduced EU legislation, contributing to the assessment of the effectiveness and adequacy of the current legal mechanisms governing robo-advisers.

1 Introduction

Digitalization is transforming the landscape of insurance distribution. In recent years, emerging digital innovations, such as comparison websites and peer-to-peer insurance, have significantly empowered insurance distributors. 1 These advancements enable them to better tailor their services, meet customer demands more effectively, and achieve their strategic goals. Beyond these digital applications, robo-advisers have also emerged as a key development in insurance distribution. Traditionally, providing professional insurance advice has been the responsibility of human advisers, who undergo rigorous training and obtain the necessary certifications before practicing. These professionals are expected to exercise diligence in recommending suitable insurance products based on their clients’ profiles. 2 Failure to comply with these standards may result in penalties. Additionally, from an ex post perspective, if adverse consequences arise due to an adviser’s fault, issues of professional liability may also come into play.
In general, robo-advisers are developed to assist, and even replace, insurance professionals in handling a variety of tasks. For customers, robo-advisers rather than human advisers will often be the first point of contact. Robo-advisers, especially rule-based ones, are able to automate the underwriting process on the basis of information input by customers. More advanced robo-advisers leverage technologies such as machine learning and big data analytics to ensure that the insurance products offered to customers are always optimized to meet their needs. 3 Robo-advisers can respond quickly to customer demand, which enhances the customer experience and improves overall efficiency along the insurance value chain. Robo-advisers are also considered a promising application for facilitating precise insurance distribution, as they are expected to overcome certain shortcomings of human professionals, such as the limited capability to adapt insurance products to dynamic changes and the potential for bias and errors when advising customers. 4 On the supply side, according to a report from the European Insurance and Occupational Pensions Authority (EIOPA), around 40 percent of surveyed insurance companies have already adopted or are ready to deploy robo-advisers in support of their business. 5
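To make the rule-based variant concrete, the following minimal sketch shows how such a system might map customer inputs onto a product recommendation through predefined underwriting rules. It is an illustration only: the profile fields, thresholds and product codes are hypothetical and not drawn from any actual robo-adviser.

```python
# Minimal sketch of a rule-based robo-adviser: customer inputs are mapped
# onto a recommendation by fixed, predefined underwriting rules.
from dataclasses import dataclass

@dataclass
class CustomerProfile:
    # Hypothetical input fields a customer might provide.
    age: int
    annual_income: float
    has_dependents: bool

def recommend_product(profile: CustomerProfile) -> str:
    """Apply fixed, predefined underwriting rules to the input data."""
    if profile.has_dependents and profile.age < 60:
        return "term-life-family"     # hypothetical product code
    if profile.annual_income > 100_000:
        return "whole-life-premium"   # hypothetical product code
    return "basic-term-life"          # hypothetical default product

# Example: a 35-year-old customer with dependents.
print(recommend_product(CustomerProfile(age=35, annual_income=50_000, has_dependents=True)))
```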
Despite the benefits of developing and deploying robo-advisers for the purpose of insurance distribution, their potential challenges should not be underestimated. Robo-advisers may produce biased outcomes by processing sensitive information (e.g., race, gender, disability) and assessing risk on the basis of these factors. 6 In this regard, people with vulnerable attributes will be further disadvantaged when accessing insurance products. 7 In addition, the decision-making process of a robo-adviser may not be transparent, making it difficult for impacted persons to understand the advice given to them or to find a way to contest it. 8 Robo-advisers may also manipulate customers in the course of communication. As a result, customers may be steered toward decisions that they would not have made had they been well informed. 9
The increasing delegation of various tasks to robo-advisers underscores the importance of accountability. To enhance the level of trust, the law must make clear which party is accountable for what throughout the lifetime of a robo-adviser. This chapter aims to delve into the accountability issues relating to robo-advisers used for insurance distribution. The structure of this chapter is as follows. After this introduction, Section 2 provides a theoretical framework for accountability. Sections 3 and 4 analyze the mechanisms to implement accountability, respectively from the perspective of the recently published ex ante AI Act and of ex post liability regimes, such as the revised Product Liability Directive and the emerging debate on AI liability. Section 5 concludes.

2 Mapping Out the Concept of Accountability in the Digital Age

The concept of accountability is difficult to generalize, as it can be split into multiple dimensions. In general, it refers to ‘an obligation to inform about, and justify one’s conduct to an authority’. 10 In the era of digitalization, the concept of accountability becomes even more complex, considering the delegation relationship between digital technologies like AI and humans. The multiple dimensions of accountability in the context of AI can be perceived from the Ethical Guidelines drafted by the High-Level Expert Group (HLEG). According to the HLEG Ethical Guidelines, accountability is a term that can necessitate ‘mechanisms be put in place to ensure responsibility and accountability for AI systems and their outcomes, both before and after their development, deployment and use’. 11 Later, the HLEG’s Assessment List for Trustworthy AI (ALTAI) defined accountability as the idea that ‘one is responsible for their action – and as a corollary their consequences – and must be able to explain their aims, motivations, and reasons’. 12
Based on the discussion of accountability to date, we may further unpack its meaning along four facets: dimension, accountable person, stage and mechanism. The following parts of this section explain each of these components in turn.

2.1 Dimension

First of all, accountability can be roughly split into three dimensions: an ex ante, an ex post and a procedural dimension.
From the ex ante perspective, accountability means that the law must explicitly clarify the responsibilities and duties of the relevant parties. The ex ante dimension of accountability provides certainty to all relevant parties about the measures they must comply with. In other words, they are informed of the possible consequences if they fail to do so. These predefined rules serve a deterrence function, incentivizing all relevant parties to behave properly.
From the ex post perspective, accountability means that the law must clearly determine which party shall bear liability when harmful consequences occur. The ex post dimension of accountability identifies the specific parties that shall bear the accident costs of harmful consequences. In addition, the ex post dimension of accountability is crucial for compensation, making sure that victims can recover their losses fully and effectively.
In practice, whether or not a person is accountable is not self-evident. The claimant needs to take measures to identify the accountable persons and prove that they are accountable by meeting other essential requirements, such as fault, defect, causal link and damage. Therefore, there is a third perspective when conceptualizing accountability, namely the procedural one. The procedural dimension of accountability aims to provide the necessary degree of transparency in support of the first two dimensions. The meaning of transparency comprises multiple facets, including traceability, answerability and explainability. The procedural dimension of accountability is thus important for facilitating trustworthy AI, since it addresses crucial practical issues, such as who contributed to the damage and to what extent. 13

2.2 Accountable Person

The accountable person is an indispensable angle when unpacking the concept of accountability, as it is the target of the relevant mechanisms. Accountability must be assigned to specific persons beforehand. In the context of AI, in principle, all parties along the supply chain, including natural persons and legal persons, may be accountable under certain conditions.
Manufacturers determine the technical specifications of an AI system. Accountability shall cover manufacturers, so that they are properly incentivized when designing and manufacturing AI systems. 14 They are also obliged to carry out the necessary conformity assessment to ensure that the product has met all legal requirements before being placed on the market.
After an AI system is put into circulation, importers and various distributors take on the role of making the AI system available on the market and spreading it to a broader range of regions. Although their activity does not directly determine the function or performance of an AI system, they facilitate the process of distribution, which thereby indirectly influences the scale of accidents. In the digital age, online platforms play a crucial role in facilitating the distribution of AI-enabled applications in various manners, such as cloud services and software as a service (SaaS). Therefore, it is necessary to hold online platforms accountable as well. 15
After circulation, the next step for an AI system is deployment in a certain context. In reality, depending on the concrete application of an AI system, typical deployers may include educational institutes, healthcare organizations, public authorities and many others. It is noteworthy that although the overall operation and monitoring of an AI system is normally under the umbrella of deployers, they need to assign specific natural persons (i.e., users) to operate the AI system. In practice, users can be teachers, doctors or civil servants, who exercise concrete control over an AI system. To ensure that AI systems are deployed for their intended purposes and operated as expected, deployers, including the assigned users, shall also be considered accountable persons.
Users are not, however, the end point in the spectrum of accountable persons. The use of AI can influence a variety of parties (e.g., consumers, students and patients). In principle, these parties are often not considered accountable, since they are impacted persons. 16 However, in a bilateral accident, where the activity of impacted persons can also influence the likelihood and severity of damage, impacted persons shall also be motivated to behave properly. 17 From the accountability perspective, this normally means that impacted persons shall also take the necessary care, and their fault may reduce or even exclude the liability of others.

2.3 Stage

Accountability must be secured throughout the lifecycle of an AI system. As the HLEG Ethical Guidelines point out, accountability shall cover the stages ‘both before and after their development, deployment and use’. From this perspective, four stages can be distinguished when designing mechanisms for accountability.
The first stage is ‘before development’, which covers the phase from design to manufacturing. This is the stage that sets up the architecture and technical specifications of an AI system.
The next stage covers the activities between development and deployment. In this stage, an AI system is ready to be, or has already been, placed on the market. By the end of this stage, the AI system is physically under the control of a deployer, even if the manufacturer may still maintain some control (e.g., through software updates). 18 In this regard, accountability should cover both manufacturers and deployers.
The third stage spans from deployment to use. Normally, this refers to the assignment of human oversight to appropriate people within the deployer’s organization. Such a person shall in principle be technically competent and knowledgeable in operating the AI system in accordance with the instructions of use provided by the manufacturer.
The last stage starts from the moment when an AI system is put into use. It specifically refers to the interaction between impacted persons and the AI system. This process also includes the necessary updates provided by the upstream parties.
To summarize, the lifetime of an AI system can be split into four concrete stages. The discussion above shows that different activities feature in each stage. This also indicates the necessity of developing diverse mechanisms for different stages, which is the central topic of the next part.

2.4 Mechanism

The last, but not least, facet of understanding accountable AI lies in the mechanisms adopted for the purpose of implementing accountability. Traditionally, a variety of mechanisms were introduced prior to the appearance of AI systems. Risk regulation provides the requirements that a product must meet before it is allowed to be placed on the market. It also lays down the obligations that parties must comply with. Besides risk regulation, liability is a crucial mechanism for allocating losses and providing compensation.
In recent years, new mechanisms have been developed in order to keep up with the challenges posed by new technologies. For example, meta-regulation (e.g., self-regulation, impact assessment and standardization) provides the flexibility essential to balancing accountability against other social goals, such as innovation. 19 In addition, the law has started to require manufacturers to embed ethical values (e.g., accountability and transparency) into the design and manufacturing process of a product. This ‘by design’ insight is considered an important approach to the accountability issue. 20 Education, e.g., digital literacy, is another emerging mechanism that is important for properly materializing accountability. 21 In the era of AI, literacy means that accountable persons (e.g., users) shall be sufficiently educated to the extent that they understand how to properly comply with their obligations.

3 Accountability for Robo-Advisers: The Example of the EU AI Act

Having distinguished the different dimensions of accountability in Section 2, the following two sections analyze how accountability issues concerning robo-advisers are addressed in recent EU legislation and proposals. Specifically, Section 3 discusses accountability for robo-advisers under the framework of the recently adopted EU AI Act. Section 4 then analyzes accountability for robo-advisers in the sphere of liability.

3.1 The Background of Regulating Robo-Advisers: IDD and/or AI Act?

In the EU, sector-specific regulations are the main mechanisms to address the risks arising in particular activities. In the sphere of insurance distribution, the Insurance Distribution Directive (IDD) 22 provides harmonized requirements that Member States must transpose when regulating the activities of insurance distribution. As insurance technologies (e.g., robo-advisers) are constantly introduced into insurance distribution, two questions arise: (1) whether such innovative applications fall under the regulatory framework of the IDD, and (2) whether the IDD is the right mechanism to regulate the risks of these insurance technologies.
For the first question, as the literature explains, the application of robo-advisers falls under the IDD without any doubt, since the provision of insurance advice, regardless of its form (by humans or by machines), is clearly a form of insurance distribution. 23 Nevertheless, the regulatory framework provided by the IDD is largely principle-based, meaning that concrete regulatory rules specifically designed to address the challenges caused by robo-advisers are missing.
Then comes the second question, i.e., whether the IDD is the right mechanism through which to introduce new rules for robo-advisers. On this issue, authorities at the EU level (e.g., EIOPA) contend that the regulation of AI-related applications (including robo-advisers) in the insurance sector should remain sector-specific. 24 This indicates the confidence of insurance authorities in relying on existing insurance regulations to resolve the challenges raised by emerging technologies. Nevertheless, the EU ultimately decided to introduce a horizontal approach to regulate the risks of all AI applications.
Therefore, in the aftermath of the AI Act, robo-advisers should comply with the IDD as well as the horizontal AI Act. The two instruments complement each other, which has significant implications for insurance distributors: they need not only to respect the obligations laid down in the AI Act as deployers of AI systems, but also to follow the obligations within the IDD as insurance distributors.

3.2 An Overview of the EU AI Act and Its Implications for Accountability

In April 2021, the European Commission introduced its proposal for a comprehensive regulatory framework specifically for AI applications. 25 After three years of discussion, the final version of the EU AI Act entered into force on 1 August 2024. 26 The EU AI Act follows a so-called risk-based approach that has been widely adopted in product safety regulations. 27 Depending on the risk profiles of their applications, AI systems are divided into four groups that are subject to different regulatory requirements.
First of all, some AI applications that raise serious fundamental rights concerns are considered to pose unacceptable risks. They are banned under the AI Act. 28 Furthermore, a wide range of AI systems are categorized as high-risk applications, as they may pose substantial risks to safety and fundamental rights. High-risk AI systems are in principle allowed into circulation, but they must first meet specific mandatory safety requirements (laid down by the AI Act as well as other sector-specific regulations, if any) and have compliance verified through the necessary conformity assessment procedures. 29 In addition, the AI Act lays down obligations for the operators of high-risk AI systems. 30 Moreover, the AI Act sets out transparency obligations for the providers or deployers of certain AI applications. 31 The objective is to require their operators to take proper transparency measures to ensure that impacted persons can make informed decisions. Last but not least, for all other AI applications, the AI Act does not provide any additional requirements or obligations for them or their operators. They can be placed directly on the market once they meet the mandatory requirements of the relevant sector-specific regulations. Besides the regulatory measures toward various AI systems, the AI Act also extends to regulating general-purpose AI models by imposing certain obligations on their providers. 32
To summarize, the AI Act provides a comprehensive regulatory framework for all kinds of AI applications. Insurance technologies, such as robo-advisers, are thereby not isolated from this framework.

3.3 Locating Robo-Advisers in the Risk-Based Matrix of the EU AI Act

As analyzed above, robo-advisers are by nature developed to take on the role of professional insurance advisers. They may engage in a variety of tasks, including communication, risk differentiation and the provision of advice. Therefore, robo-advisers may fall under different risk profiles depending on their concrete practices. 33
First and foremost, robo-advisers may generate unacceptable risks if they are designed with subliminal or deceptive techniques and have the potential to result in harmful consequences. For robo-advisers used to support insurance distribution, it is highly likely that manipulative techniques are, to a greater or lesser extent, employed in their design for the purpose of persuading customers to make decisions that ultimately benefit insurance distributors. Manipulative techniques, as the AI Act clarifies, are only prohibited if they cause, or are likely to cause, significant harm. 34 The concept of significant harm under the AI Act is complementary to unfair commercial practices, meaning that any commercial practice, whether AI-driven or not, shall be prohibited as long as it generates economic or financial harm to consumers. The threshold of ‘significant harm’ is therefore rather low, being mostly linked to detriment to consumers’ economic interests. Robo-advisers have the potential to lead consumers to make wrong decisions and suffer economic losses by purchasing coverage that they do not need. If this falls under the scope of significant harm defined by the AI Act, then the application of robo-advisers would be substantially limited in terms of recommending insurance products to customers. In the aftermath of the AI Act, two clarifications are important: firstly, the distinction between manipulative techniques and other techniques should be explicitly interpreted; secondly, the meaning of significant harm should be clarified, especially in the context of the financial sector.
Secondly, robo-advisers with a function of using biometric information to infer certain protected attributes may be prohibited as well. For instance, if a robo-adviser embeds facial recognition technology (FRT), uses the data collected through FRT to infer the race of customers and then categorizes the risk level of customers based on race, such a robo-adviser will be prohibited. 35 According to the AI Act, the protected attributes whose inference is prohibited are limited to race, political opinions, trade union membership, religious or philosophical beliefs, sex life and sexual orientation. Therefore, if a robo-adviser uses biometric data to infer protected attributes other than the aforementioned ones, it will not be prohibited. Instead, it may fall under the scope of high-risk AI applications and be subject to a different framework of accountability.
Thirdly, robo-advisers for insurance distribution may be categorized as high-risk AI applications. To begin with, biometric-based robo-advisers may be classified as high-risk. In particular, where a robo-adviser is used to categorize the risk of insureds based on their biometric information, or is able to infer their emotional status, it will be considered a high-risk application. 36 In addition, robo-advisers used for assessing and pricing the risk of natural persons in the case of life and health insurance are also classified as high-risk. 37 Accordingly, from an accountability perspective, additional regulatory requirements and obligations must be fulfilled before a high-risk robo-adviser is allowed into circulation.
Fourthly, if robo-advisers are developed to facilitate the communication between insurance distributors and customers (e.g., chatbots), the AI Act will also apply to them by imposing a specific transparency obligation on their providers. In this regard, providers should make sure that robo-advisers are developed in such a manner that impacted natural persons are sufficiently informed that they are interacting with an AI system rather than a human. 38
To summarize, the discussion in this part is intended to clarify the multiple facets of robo-advisers. Depending on the concrete techniques they employ and the purposes for which they are deployed, robo-advisers may fall under different risk categories outlined by the AI Act. As a result, they and their operators are subject to particular requirements and obligations. In essence, the risk-based approach laid down by the AI Act is a pertinent reflection of the different components of accountability: different mechanisms (e.g., regulatory requirements and obligations) are developed for the relevant accountable persons (e.g., providers and deployers) throughout the lifetime of robo-advisers, depending on their risk profiles.
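The mapping described in this part can be summarized schematically. The sketch below is a didactic simplification rather than a legal test: the boolean traits and category labels are introduced for this example only, and the provisions referenced in the comments are those discussed above.

```python
# Simplified sketch of the risk-tier mapping discussed in this part.
from dataclasses import dataclass

@dataclass
class RoboAdviserTraits:
    manipulative_with_significant_harm: bool           # cf. Article 5 and Recital 29, AI Act
    infers_protected_attributes_from_biometrics: bool  # cf. Article 5(1)(g), AI Act
    biometric_categorisation_or_emotion_inference: bool  # cf. point 1 of Annex III, AI Act
    prices_life_or_health_risk: bool                   # cf. point 5(c) of Annex III, AI Act
    chatbot_interaction: bool                          # cf. Article 50(1), AI Act

def risk_category(t: RoboAdviserTraits) -> str:
    """Return the (simplified) AI Act risk category for a robo-adviser."""
    if (t.manipulative_with_significant_harm
            or t.infers_protected_attributes_from_biometrics):
        return "prohibited"
    if (t.biometric_categorisation_or_emotion_inference
            or t.prices_life_or_health_risk):
        return "high-risk"
    if t.chatbot_interaction:
        return "transparency obligations only"
    return "minimal risk (sector-specific rules still apply)"

# Example: a life/health risk-pricing robo-adviser with a chat interface is high-risk.
print(risk_category(RoboAdviserTraits(
    manipulative_with_significant_harm=False,
    infers_protected_attributes_from_biometrics=False,
    biometric_categorisation_or_emotion_inference=False,
    prices_life_or_health_risk=True,
    chatbot_interaction=True,
)))
```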

3.4 A Further Look into the Accountability for High-Risk Robo-Advisers

Having mapped out robo-advisers under the risk-based approach introduced by the AI Act, it is clear that high-risk applications are the main target of the new regulatory framework. As mentioned above, robo-advisers used for insurance distribution purposes are considered high-risk in several scenarios, e.g., if they employ biometric systems or are used for assessing and pricing the risk of natural persons in the case of life or health insurance. From the accountability perspective, this means that the operators of robo-advisers are subject to new obligations beyond the existing insurance regulations (e.g., the IDD). Therefore, it is of great importance to unveil the specific regulatory requirements and obligations for high-risk robo-advisers.
The AI Act establishes new essential legal requirements for high-risk AI systems. According to Section 2 of Chapter III, the requirements that a high-risk AI system must meet concern the risk management system, data and data governance, technical documentation, record-keeping, transparency and provision of information to deployers, human oversight, and accuracy, robustness and cybersecurity. In practice, the proof of meeting such legal requirements must be supported on the technical side. This requires providers of high-risk AI systems to use reliable technical specifications (e.g., standards) to explain how their products comply with the legal requirements. 39 Furthermore, they need to follow the necessary conformity assessment procedures to verify their compliance.
Demonstrating compliance in line with harmonized standards can significantly reduce the effort of verifying conformity, because it leads to a presumption of meeting the relevant legal requirements. Nevertheless, as harmonized standards are instruments that the Commission requests European standardization organizations to draft, in reality it may take a long time before they are established. This is also the challenge for standardizing AI products. 40 At the moment, the European standardization organizations are developing harmonized standards for all the legal requirements set out in the AI Act. 41 It is, however, unknown whether such standards will match the expectations of the insurance sector.
The AI Act also sets out obligations for providers and deployers of high-risk AI systems, respectively. Notably, the obligations for deployers will have a significant impact on insurance distributors from the accountability perspective. According to Article 26, insurance companies that deploy high-risk robo-advisers must respect a wide range of obligations. First of all, insurance companies shall assign human oversight to competent natural persons within the organization to operate and supervise robo-advisers. Such persons must also be provided with the necessary training in support of their supervision. What is more, to the extent that the input data of a high-risk robo-adviser is under the control of insurance companies, they must ensure that the data is representative in view of the intended purpose of the robo-adviser. Moreover, the insurance company is the ultimate party to monitor the proper operation of robo-advisers. Their operation shall follow the instructions of use, and insurance companies shall notify providers and competent authorities where robo-advisers present a risk to safety or fundamental rights, or where a serious incident has occurred. Last but not least, insurance companies must make sure that automatically generated logs are kept, insofar as such logs are under their control.
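By way of illustration, the Article 26 duties just described could be tracked internally as a simple compliance checklist. The following sketch is hypothetical: the field names paraphrase the obligations discussed above and are not terms defined by the AI Act.

```python
# Hypothetical internal checklist for the deployer duties discussed above.
from dataclasses import dataclass, fields

@dataclass
class Article26Checklist:
    # Each field paraphrases one deployer duty; names are illustrative only.
    human_oversight_assigned: bool = False   # competent, trained natural persons
    input_data_representative: bool = False  # insofar as input data is controlled
    operated_per_instructions: bool = False  # follow the provider's instructions of use
    incidents_notified: bool = False         # risks or serious incidents reported
    logs_retained: bool = False              # automatically generated logs kept

def open_items(checklist: Article26Checklist) -> list[str]:
    """List the duties not yet documented as fulfilled."""
    return [f.name for f in fields(checklist) if not getattr(checklist, f.name)]

# Example: only human oversight has been arranged so far.
print(open_items(Article26Checklist(human_oversight_assigned=True)))
```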
The obligations of insurance companies are de facto the duties that they must respect. From an accountability perspective, setting up obligations for deployers like insurance companies will incentivize them to take due care. Where they fail to respect these obligations, this can serve as an important signal for indicating and establishing their fault in a liability claim.
Besides the aforementioned obligations for deployers, Article 27 of the AI Act also obliges certain deployers of high-risk AI systems to conduct a so-called fundamental rights impact assessment (FRIA). 42 It is noteworthy that the FRIA does not apply to every deployer of a high-risk AI system. Only two types of deployers are covered: public or private bodies providing public services, and certain financial service providers. According to Article 27, deployers of AI systems used for assessing and pricing the risk of natural persons in the case of life or health insurance will be subject to the FRIA. Therefore, if a robo-adviser holds such a capability, its deployers must be prepared to conduct the FRIA.
To summarize, the AI Act provides a wide spectrum of mechanisms specifically for holding the operators of high-risk robo-advisers accountable. Such mechanisms include not only ex ante legal requirements but also duties and obligations throughout the operation process. In addition, necessary transparency and record-keeping measures must be ensured throughout the lifetime of a high-risk robo-adviser. What the AI Act does not clearly address, however, is how to allocate liability among the various accountable persons once a harmful consequence has occurred. For this issue, the liability regime serves as the proper mechanism.

3.5 Literacy

The discussion so far has unveiled how the AI Act utilizes different types of mechanisms to secure the accountability of AI systems. These mechanisms comprise not only traditional legal requirements and obligations but also emerging meta-regulation, such as standards and risk assessment. Notably, the AI Act also stresses the role of AI literacy in promoting trustworthy AI.
On this basis, Article 4 of the AI Act further clarifies that providers and deployers of AI systems shall take measures to ensure that their staff possess a sufficient level of literacy for operating the AI system. To achieve this objective, all relevant methods, such as education and professional training, shall be offered. AI literacy is crucial for accountability, because it equips staff with the knowledge and skills they need to better understand their responsibilities and obligations in the course of operating and monitoring an AI system.
Unlike other mechanisms, AI literacy in the AI Act is very much principle-based. This means that the Member States shall further develop their own AI literacy rules. Industries and companies can also introduce concrete guidelines to implement AI literacy. When implementing the AI Act, insurance authorities should work closely with the insurance industry to set out sufficient guidance on AI literacy in the insurance sector.

4 Accountability for Robo-Advisers: An Example of the Recent Liability Reform

The discussion so far has mapped out the accountability of robo-advisers through the lens of the new AI Act. What makes the AI Act remarkable is that various mechanisms have been developed and combined to cover different operators throughout the lifetime of robo-advisers. The AI Act, however, has no intention of addressing the liability issues resulting from damage caused by robo-advisers. 43 The objective of this section is to discuss the accountability of robo-advisers from the perspective of liability. In particular, the discussion extends to the recent liability reform at the EU level.
A proper allocation of liability is crucial, as it will not only influence the incentives of relevant parties but also determine the effectiveness of compensation. 44 The basic question in the context of robo-advisers is: who should be liable for the damage caused by robo-advisers? The answer can be divided into two strands depending on the role of the stakeholders. On the one hand, we need to clarify the liability of the providers of robo-advisers; on the other hand, we need to clarify the liability of the deployers of robo-advisers.

4.1 The Liability for the Providers of Robo-Advisers

According to the 1985 Product Liability Directive (the 1985 PLD 45), producers shall be liable for the damage caused by a defect in their products. 46 With regard to the liability of the providers of robo-advisers, it is not surprising that people may question whether providers are comparable to the producers of ordinary products. From a positive legal analysis, however, several barriers stand in the way of applying product liability to the providers of robo-advisers.
First of all, a robo-adviser may not qualify as a ‘product’ as defined in the 1985 PLD. ‘Product’ in the 1985 PLD refers to movables. 47 The CJEU further clarified that movable goods refer to tangible goods. 48 Robo-advisers per se, however, are standalone AI applications, meaning that they bear little resemblance to traditional products.
Secondly, the damage caused by robo-advisers may not be compensable under product liability. As the 1985 PLD clearly lists, the interests protected by product liability include a person’s life, physical integrity and property. 49 Other interests, such as mental health and other fundamental rights (e.g., fairness, non-discrimination, personal data), are thereby not protected by product liability. Robo-advisers, however, are unlikely to cause damage to the aforementioned protected interests. The main consequences of using robo-advisers are pure economic losses or non-material damage resulting from the violation of fundamental rights. Therefore, when we examine the damage they cause, robo-advisers are not well covered by the existing product liability framework.
Thirdly, as with other AI applications, there are procedural obstacles preventing victims from smoothly claiming against providers. When establishing the liability of providers, victims must prove the existence of a defect and a causal link between the defect and the damage. The proof of defectiveness and causation, however, is not an easy task for victims, because they may lack the necessary information as well as the capability to provide it. 50
The discussion in this part has so far disclosed the substantive and procedural obstacles to holding the providers of robo-advisers liable under product liability. On 12 March 2024, the European Parliament adopted a proposal to revise the 1985 PLD. 51 The revised PLD (rPLD) was finally published in the Official Journal on 18 November 2024. 52 The rPLD adapts product liability rules to the new challenges caused by digitalization. In the aftermath of the rPLD, the question remains: to what extent can the providers of robo-advisers be held liable under the rPLD?
First of all, the rPLD significantly extends the scope of ‘product’ to intangible items, such as software, AI systems and digital manufacturing files (e.g., for 3D printing). 53 By doing so, robo-advisers are clearly covered by the rPLD.
What is more, the rPLD provides some concrete rules to help victims prove defectiveness and the causal link. Victims can request the defendant to disclose relevant evidence where they have ‘presented facts and evidence sufficient to support the plausibility of the claim for compensation’. 54 When certain conditions are met, defectiveness and the causal link can also be presumed. 55
The challenge in holding the providers of robo-advisers liable under the rPLD lies in the scope of compensable damage. Although the rPLD extends the scope of personal injury to ‘medically recognized damage to psychological health’, the scope of protected interests is not significantly expanded, e.g., to other fundamental rights. 56 Therefore, whether the provider of a defective robo-adviser can be held liable under the rPLD is ultimately very largely determined by the type of damage caused. 57 If a robo-adviser causes one of the following types of damage, i.e., death, personal injury (including medically recognized damage to psychological health) or damage to property, its providers might be held liable, provided that the other elements are successfully demonstrated.
To summarize, there is a large chance that the damage caused by robo-advisers will not be recoverable under the product liability regime. In order for compensation to be available, the harm must fall within the scope of damage defined by the rPLD, and it must be caused by the defectiveness of the robo-adviser. If these conditions are not met, for example because the damage was caused by the fault of manufacturers rather than by the defectiveness of the robo-adviser, the rPLD may not apply. As a result, the product liability regime would be of limited use for holding providers accountable. In this scenario, fault-based rules are an important complementary regime for holding the providers of robo-advisers accountable. 58

4.2 The Liability for the Deployers of Robo-Advisers

Besides providers, the activity of deployers can also significantly contribute to harmful consequences due to carelessness in operation and supervision. Holding deployers liable will incentivize them to take proper precautionary measures and reduce accidents. Traditionally, deployers are subject to fault-based liability, meaning that they will be liable where they fail to take the expected level of due care. The failure to take due care can be assessed from both a subjective and an objective perspective: from the subjective perspective, they are negligent in triggering harmful consequences through their activities; from the objective perspective, they failed to comply with regulatory requirements.
Several elements need to be successfully demonstrated when relying on fault-based liability to hold the deployers of robo-advisers accountable. First and foremost, the claimant must prove that the deployer was at fault. As mentioned above, the proof of fault can be established along both subjective and objective dimensions. One visible benchmark for the fault of deployers is the failure to comply with the deployer obligations listed in Articles 26 and 27 of the AI Act. If an insurance distributor fails to comply with one or several of these obligations (e.g., the proper assignment of human oversight and data governance), this can be an important source for demonstrating its fault. Secondly, the claimant must prove the damage. The concept of damage can be defined very differently from one Member State to another. 59 The claimant needs to check the extent to which the damage suffered is compensable in a given legal regime. Thirdly, the claimant must prove the causal link between the fault of the deployer and the damage.
The elements of fault liability are not self-evident in a given case. Claimants bear the burden of proof for these elements. In reality, however, they are very likely to encounter difficulties in establishing them. Products and services empowered by AI and digital technologies will further worsen this situation, considering the opacity of their operation. In 2022, the European Commission put forward a proposal addressing liability for AI systems (the AI Liability Directive, AILD). 60 Considering the huge divergence among the Member States on substantive issues, such as the concepts of fault, causation and damage, the AILD proposal did not intend to provide a harmonized substantive liability regime at the EU level. Instead, the main purpose of the AILD was to require the Member States to ease their procedures so as to facilitate claims based on non-contractual fault liability. 61 Specifically, the AILD would have had national courts require defendants to disclose evidence in order to facilitate such claims. 62 In addition, the causal link between fault and damage would be presumed when certain conditions are met. 63 The AILD thus aimed to set up minimally harmonized rules for the Member States. 64 The Member States could of course introduce or maintain higher requirements in their national laws.
The proposal for the AILD, however, was criticized with regard to its clarity and added value. 65 Member States such as France also questioned the necessity of the directive. Against this background, the AILD was withdrawn by the Commission in February 2025. After the withdrawal, it is expected that national rules will play a crucial role in reshaping the liability rules for AI in the coming years.
To summarize, the liability of the deployers of robo-advisers is very largely shaped by national laws. The recent proposal for the AILD showed the determination of the EU authorities to establish harmonized rules. Nevertheless, although it aimed only for minimal harmonization and targeted procedural rather than substantive issues, the AILD was ultimately withdrawn due to strong objections from the Member States. There is still a long way to go in developing liability mechanisms for AI.

5 Conclusion

To conclude, this chapter has reflected on the theoretical and practical arguments concerning the accountability issues raised by robo-advisers used for insurance distribution. The analytical framework of accountability requires policymakers to map out several key elements (i.e., dimension, person, stage and mechanism) in legal rules. The discussion in this chapter further clarified how recent legislation, including the AI Act, the rPLD and the (withdrawn) AILD, has been framed to accommodate these different elements in concrete legal rules. It showed that accountability for robo-advisers depends largely on the practices and purposes for which a specific robo-adviser is deployed. Robo-advisers that adopt biometric systems or that are used for assessing risk in the case of life and health insurance will be subject to more comprehensive accountability rules: (1) there are more requirements and obligations; (2) the accountability concern extends over the lifetime of such applications; and (3) various mechanisms (e.g., obligations, risk assessment, by design and literacy) are introduced to ensure accountability. From an ex post perspective, important updates are also observed in the liability sphere. However, as discussed, the liability reform may provide little help in strengthening accountability for the providers and deployers of robo-advisers. Victims may face insurmountable barriers when identifying the liable persons and proving liability. This chapter concludes that there is still a gap in holding those behind robo-advisers accountable from the ex post perspective.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits any noncommercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if you modified the licensed material. You do not have permission under this license to share adapted material derived from this chapter or parts of it.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Title
Bringing Trustworthy Robo-Advisers into the Insurance Distribution Process: An Accountability Perspective
Author
S. Li
Copyright Year
2025
DOI
https://doi.org/10.1007/978-3-032-09716-3_2
1. See e.g. Marano (2019).
2. Coombs and Redman (2018), p. 1–2.
3. RapidValue Solutions (2017).
4. Infosys BPM (2018).
5. EIOPA (2024), p. 10.
6. Van Bekkum, Borgesius and Heskes (2025).
7. Marano and Li (2023), p. 9.
8. Baker and Dellaert (2017), p. 735.
9. Marano and Li (2023), p. 12.
10. Novelli, Taddeo and Floridi (2024).
11. HLEG (2019), p. 19.
12. HLEG (2020), p. 23.
13. Larsson and Heintz (2020), p. 3.
14. Li and Schütte (2023).
15. Ulfbeck and Verbruggen (2022).
16. Shavell (2004), p. 193.
17. Li and Faure (2024).
18. Wendehorst (2020), p. 175–176.
19. Dong and Chen (2024).
20. Almada (2023).
21. Long and Magerko (2020).
22. Directive (EU) 2016/97 of the European Parliament and of the Council of 20 January 2016 on insurance distribution (recast), OJ L 26, 2.2.2016, p. 19–59.
23. Marano (2019), p. 310.
24. EIOPA (2022).
25. European Commission (2021).
26. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), OJ L, 2024/1689, 12.7.2024.
27. Ebers (2024). See also Almada and Petit (2025).
28. Article 5, AI Act.
29. Section 2 of Chapter III, AI Act.
30. Section 3 of Chapter III, AI Act.
31. Article 50, AI Act.
32. Chapter V, AI Act.
33. Marano and Li (2023).
34. Recital (29), AI Act.
35. Article 5(1)(g), AI Act.
36. Point 1 of Annex III, AI Act.
37. Point 5(c) of Annex III, AI Act.
38. Article 50(1), AI Act.
39. Articles 40–44, AI Act.
40. Laux, Wachter and Mittelstadt (2024).
41. Commission implementing decision on a standardization request to the European Committee for Standardisation and the European Committee for Electrotechnical Standardisation in support of Union policy on artificial intelligence, C(2023)3215.
42. Mantelero (2024).
43. Li and Schütte (2024).
44. Li and Faure (2024).
45. Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, OJ L 210, 7.8.1985, p. 29–33.
46. Article 1, 1985 PLD.
47. Article 3, 1985 PLD.
48. C-65/20, VI v Krone Verlag, ECLI:EU:C:2021:298.
49. Article 9, 1985 PLD.
50. Jacquemin (2024).
51. European Parliament legislative resolution of 12 March 2024 on the proposal for a directive of the European Parliament and of the Council on liability for defective products, COM(2022)0495 – C9-0322/2022 – 2022/0302(COD).
52. Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products and repealing Council Directive 85/374/EEC, OJ L, 2024/2853.
53. Article 4(1), rPLD.
54. Article 9, rPLD.
55. Article 10, rPLD.
56. Article 6, rPLD.
57. Li and Schütte (2023).
58. Li and Schütte (2023).
59. Li and Faure (2024).
60. European Commission (2022).
61. Li and Schütte (2024).
62. Article 3, AILD.
63. Article 4, AILD.
64. Hacker (2023).
65. Li and Faure (2025).
Almada, M. (2023). Regulation by design and the governance of technological futures. European Journal of Risk Regulation 14(4): 697–709.
Almada, M., & Petit, N. (2025). The EU AI Act: Between the rock of product safety and the hard place of fundamental rights. Common Market Law Review 62(1): 85–120.
Baker, T., & Dellaert, B. (2017). Regulating robo advice across the financial services industry. Iowa Law Review 103: 713.
Coombs, C., & Redman, A. (2018). The impact of robo-advice on financial advisers: a qualitative case study. UK Academy for Information Systems Conference Proceedings.
Dong, H., & Chen, J. (2024). Meta-regulation: An ideal alternative to the primary responsibility as the regulatory model of generative AI in China. Computer Law & Security Review 54: 1–16.
Ebers, M. (2024). Truly Risk-Based Regulation of Artificial Intelligence – How to Implement the EU’s AI Act, First View: 1–20.
EIOPA (2022). EIOPA letter to co-legislators on the Artificial Intelligence Act.
EIOPA (2024). Report on the digitalisation of the European insurance sector.
European Commission (2021). Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain legislative acts. COM/2021/206 final.
European Commission (2022). Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive). COM(2022) 496 final.
Hacker, P. (2023). The European AI liability directives – Critique of a half-hearted approach and lessons for the future. Computer Law & Security Review 51: 1–42.
HLEG (2019). Ethical guidelines for trustworthy AI.
HLEG (2020). Assessment List for Trustworthy AI.
Infosys BPM (2018). The insurance advisor of the future: how robots are set to reshape the value framework in insurance. Available at: https://www.infosysbpm.com/mccamish/resources/Documents/insurance-advisor-of-the-future.pdf. Accessed on 31 May 2025.
Jacquemin, Z. (2024). Product Liability Directive: Disclosure of evidence, the burden of proof and presumptions. Journal of European Tort Law 15(2): 126–139.
Larsson, S., & Heintz, F. (2020). Transparency in artificial intelligence. Internet Policy Review 9(2): 1–16.
Laux, J., Wachter, S., & Mittelstadt, B. (2024). Three pathways for standardisation and ethical disclosure by default under the European Union Artificial Intelligence Act. Computer Law & Security Review 53: 1–13.
Li, S., & Faure, M. (2024). The Revised Product Liability Directive: A law and economics analysis. Journal of European Tort Law 15(2): 140–171.
Li, S., & Faure, M. (2025). Does the EU need an Artificial Intelligence Liability Directive? Insights from the economics of federalism. Revue économique 76(1): 115–139.
Li, S., & Schütte, B. (2023). The proposal for a revised Product Liability Directive: The emperor’s new clothes? Maastricht Journal of European and Comparative Law 30(5): 573–596.
Li, S., & Schütte, B. (2024). The proposed EU Artificial Intelligence Liability Directive: Does/will its content reflect its ambition? Technology and Regulation: 143–151.
Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems.
Mantelero, A. (2024). The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template. Computer Law & Security Review 54: 1–13.
Marano, P. (2019). Navigating InsurTech: The digital intermediaries of insurance products and customer protection in the EU. Maastricht Journal of European and Comparative Law 26(2): 294–315.
Marano, P., & Li, S. (2023). Regulating robo-advisors in insurance distribution: Lessons from the Insurance Distribution Directive and the AI Act. Risks 11(1): 1–13.
Novelli, C., Taddeo, M., & Floridi, L. (2024). Accountability in artificial intelligence: what it is and how it works. AI & Society 39(4): 1871–1882.
RapidValue Solutions (2017). The Rise of Robo-Advice in Insurance Companies. Available at: https://rapidvalue.medium.com/the-rise-of-robo-advice-in-insurance-companies-ef21d2ed8a0
Shavell, S. (2004). Foundations of Economic Analysis of Law. Harvard University Press.
Ulfbeck, V., & Verbruggen, P. (2022). Online marketplaces and product liability: Back to where we started? European Review of Private Law 30(6): 975–998.
Van Bekkum, M. S., Borgesius, F. J. Z., & Heskes, T. (2025). AI, insurance, discrimination and unfair differentiation. An overview and research agenda. Law, Technology and Innovation 17(1): 177–204.
Wendehorst, C. (2020). Strict liability for AI and other emerging technologies. Journal of European Tort Law 11(2): 150–180.