Trustworthy AI on the Road
A Legal Perspective
- Open Access
- 2026
1 Introduction
Transportation systems are becoming increasingly digitised and reliant on the use of Artificial Intelligence (AI). In road transport, AI can bring automated driving closer to reality. The use of AI, however, also poses risks. These risks stem from the very nature of AI systems: their complexity, opacity and unpredictability can present danger.
The European Commission has recognised these risks of AI systems and responded by proposing the so-called AI Act. At the time of writing, the AI Act is still being negotiated, so amendments are to be expected. For reasons of brevity, this contribution only refers to the European Commission's proposal of 2021.
This proposed AI Act lays down requirements for high-risk AI systems in Chapter 2 of Title III. This includes, for instance, requirements on: data governance (art. 10); record-keeping and technical documentation (art. 12, art. 11); transparency (art. 13); robustness and cybersecurity (art. 15); and human oversight (art. 14). These requirements reflect ongoing ethical discussions on trustworthy AI.
Automated vehicles are considered to be high-risk AI systems (art. 3, 6). However, art. 2 of the AI Act limits its scope, leaving (safety components of) products or systems falling within the scope of the EU Type-approval Regulation or the EU General Safety Regulation outside the scope of the AI Act. Only when adopting implementing acts relating to the General Safety Regulation do the requirements of Title III, Chapter 2 of the proposed AI Act have to be taken into account (art. 82). Consequently, a high-risk AI system (the automated vehicle) with which the public might be confronted on a daily basis would not have to be in conformity with the proposed AI Act and its requirements on transparency, robustness, and so on. This appears to be in conflict with the recommendations of the High-Level Expert Group on AI (AI HLEG), as well as the findings of the Horizon 2020 Expert Group to advise on specific ethical issues raised by driverless mobility (Horizon 2020 Expert Group 2020). This contribution will therefore explore two questions:
- What current legislation is in place that contributes to achieving the core elements of trustworthy AI in automated vehicles?
- How can a (future) legislative framework for automated vehicles realise these core elements for trustworthy AI?
In doing so, this contribution explores how these elements and requirements for trustworthy AI are, and can be, respected in the regulatory framework. This study adds to the existing literature by bridging the gap between the outcomes of ethics research and legal developments.
2 Ethics Expert Groups on Trustworthy AI
2.1 AI HLEG
The AI HLEG states that ‘Trustworthiness is a prerequisite for people and societies to develop, deploy and use AI systems’ (AI HLEG 2019, 4). When AI systems are not worthy of trust, unwanted consequences could ensue and their uptake might be hindered (AI HLEG 2019, 4). The experts list three requirements that should be met throughout the lifetime of an AI system: it should be lawful, ethical and robust (AI HLEG 2019, 5). The AI HLEG develops principles that contribute to the requirements of robust and ethical AI, as they ‘proceed on the assumption that all legal rights and obligations that apply to the processes and activities involved in developing, deploying and using AI systems remain mandatory and must be duly observed’ (AI HLEG 2019, 6). Four ethical principles are formulated: (1) respect for human autonomy, (2) prevention of harm, (3) fairness, and (4) explicability (AI HLEG 2019, 12). From these principles, the AI HLEG derives seven requirements (though this list is not exhaustive) (AI HLEG 2019, 14): (1) human agency and oversight; (2) technical robustness and safety; (3) privacy and data governance; (4) transparency; (5) diversity, non-discrimination and fairness; (6) societal and environmental wellbeing; and (7) accountability. Compliance with these principles and requirements should ensure the trustworthiness of AI, thereby facilitating the safe and responsible development of AI systems.
2.2 Horizon 2020 Expert Group
Whereas the principles of the AI HLEG are designed for trustworthy AI in general, the Horizon 2020 Expert Group focuses specifically on connected and automated vehicles (CAVs). The Horizon 2020 Expert Group has drawn up 20 recommendations that can be divided into three broader themes: (1) road safety, risk and dilemmas; (2) data and algorithm ethics: privacy, fairness and explainability; and (3) responsibility (Horizon 2020 Expert Group 2020, 4–5). The recommendations range from reducing physical harm (recommendation 1) to safeguarding informational privacy (recommendation 5) to creating fair and effective mechanisms for granting compensation to victims of crashes or other accidents involving CAVs (recommendation 20). It goes beyond the scope of this contribution to discuss the recommendations in more depth, but it is already clear that several recommendations of the Horizon 2020 Expert Group show similarities to the seven requirements formulated by the AI HLEG.
3 Ethics and the AI Act
The findings of the AI HLEG and the Horizon 2020 Expert Group show some similarities or overlap. Most notably, the following elements recur in both expert group reports as well as in the AI Act (in no particular order):
- Technical robustness and safety (AI HLEG requirement 2; Horizon 2020 Expert Group recommendations 1, 2, 3, 4; art. 15 AI Act)
- Privacy and data governance (AI HLEG requirement 3; Horizon 2020 Expert Group recommendations 7, 9, 10, 13, 15; art. 10 AI Act)
- Transparency (AI HLEG requirement 4; Horizon 2020 Expert Group recommendations 10, 12, 13, 14, 18; art. 13 AI Act)
- Human oversight (AI HLEG requirement 1; art. 14 AI Act)
As automated vehicles fall outside the scope of the AI Act, the question arises as to what current legislation is in place that contributes to achieving these four core elements of trustworthy AI in automated vehicles. The mere fact that the AI Act does not (directly) apply does not mean that the principles and requirements identified above are not supported by other legislation. Therefore, each of the four principles will be discussed in light of the current legal framework to identify how this framework already caters to these principles.
4 Ethics Principles in the Current Legal Framework
4.1 Technical Robustness and Safety
Ensuring the technical robustness and safety of automated vehicles can be achieved via different routes, including the EU type-approval framework as laid down in the EU Type-approval Regulation. The EU General Safety Regulation already sets requirements for (fully) automated vehicles in art. 11. In addition, requirements from the UN Regulations (can) contribute to the automated vehicle’s safety and technical robustness. UN R155 on cybersecurity, for instance, is an important instrument in ensuring vehicles’ digital security and thereby fostering the safety of vehicles.
4.2 Privacy and Data Governance
Privacy and data governance are not only important in the automotive field; they are of great importance in all sectors. Regardless of the sector, the General Data Protection Regulation (GDPR, art. 1, 4) applies when personal data are processed: ‘‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person’. In addition, the EU Data Governance Act sets requirements for the re-use, within the EU, of certain categories of data held by public sector bodies (art. 1), and the proposed Data Act lays down harmonised rules on making data generated by the use of a product (such as an automated vehicle) ‘available to the user of that product or service, on the making data available by data holders to data recipients, and on the making data available by data holders to public sector bodies or Union institutions, agencies or bodies, where there is an exceptional need, for the performance of a task carried out in the public interest’ (art. 1(1)). Legislation specifically for (automated) vehicles also lays down some rules on the recording of data: the General Safety Regulation contains provisions on data recording (e.g. art. 6(4)), while work is being done at UN level on a data storage system for automated driving (DSSAD).
4.3 Transparency
Unlike robustness and safety, and privacy and data governance, transparency seems to be lacking in the current legal framework for automated vehicles. There are elements of transparency in the UN Regulations and the type-approval process, but these requirements do not seem to achieve the level of transparency strived for in the proposed AI Act: ‘High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately’ (art. 13(1)). Especially when it comes to transparency towards users, legal lacunae seem to exist. Domestic legislation could fill this gap by setting requirements in this regard. In addition, the EU Product Liability Directive might provide an incentive for manufacturers of automated vehicles to inform the vehicles’ users of their possibilities and limitations: if users are not or incorrectly informed, the manufacturer might be found liable for resulting damage (art. 6(1)).
4.4 Human Oversight
The proposed AI Act requires that ‘High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use’ (art. 14(1)). Human oversight, according to the AI HLEG, can help to ensure human autonomy, for instance by enabling human intervention in every decision cycle of the system (AI HLEG 2019, 16). This gives rise to the question of whether human oversight contradicts the idea of automated vehicles, in which a human is taken ‘out of the loop’ and does not take part in the decision-making concerning the execution of the driving task. This could explain why the Horizon 2020 Expert Group does not refer to human oversight in this sense. Regarding human oversight, this expert group does however explain that algorithmic decisions require adequate explanation (which is closely tied to the requirement of transparency), because ‘Without adequate means of access, the role of human agency and oversight is severely weakened or hindered and risks undermining the principles of human dignity and autonomy, with the consequence of critically eroding public trust in these fast-developing technologies’ (Horizon 2020 Expert Group 2020, 50). The AI HLEG adds to this that ‘the less oversight a human can exercise over an AI system, the more extensive testing and stricter governance is required’ (AI HLEG 2019, 16). This could mean that establishing human oversight in the development phase might be sufficient. The UN Regulations and the EU Type-approval Regulation could ensure this.
5 Future Law-Making and Ethical Principles
From the analysis above, it is clear that mainly transparency and human oversight are not sufficiently covered by the current legal framework. This means that automated vehicles can be held to a different standard than other high-risk AI systems that do fall within the scope of the AI Act. Given the findings of the AI HLEG and the Horizon 2020 Expert Group, this would be undesirable. As the Horizon 2020 Expert Group points out, ‘The timely and systematic integration of broader ethical and societal considerations is also essential to achieve alignment between technology and societal values and for the public to gain trust and acceptance of CAVs [connected and automated vehicles; NEV]’ (Horizon 2020 Expert Group 2020, 16). It is therefore important that a legal baseline is also established for the principles of transparency and human oversight. A harmonised approach, as in other AI and vehicle matters, would be advisable. However, if this is deemed not feasible, the domestic legislator can step in. For instance, the German legislator requires a ‘Technische Aufsicht’ (technical supervision) when operating an automated vehicle, which seems in line with the human oversight principle (§1d, §1f Abs. 2 Straßenverkehrsgesetz). So even though the proposed AI Act will not be directly applicable to automated vehicles, the key principles and requirements from Title III, Chapter 2 can still be ensured.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.