Abstract
This chapter addresses the crucial aspect of ensuring compliance with AI and functional safety standards, focusing on adherence to the EU AI Act and related regulations. It highlights the importance of maintaining a risk-based approach to classify AI systems according to their risk levels. The stringent requirements for high risk AI systems are underscored, including conformity assessments and CE marking. We also discuss the need for continuous adaptation and flexibility in regulatory compliance to keep pace with evolving standards and technological advancements.
Objective
The objective of this part of the safety plan is to ensure that the EU AI Act, safety regulations, and other identified regulations are adhered to in a safe, reliable, and trustworthy manner throughout the development, deployment, and utilization of artificial intelligence (AI) systems. This includes adhering to a risk-based regulatory framework that categorizes AI systems into unacceptable, high, limited, and minimal risk levels in order to implement the necessary safeguards, ensure transparency, and mitigate the risks associated with AI technologies.
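The four risk tiers of the AI Act's risk-based framework can be sketched as a small enum with an indicative safeguard lookup. This is an illustrative sketch only: the tier names follow the AI Act, but the safeguard lists here are abbreviated examples, not a legal classification.

```python
from enum import Enum

class RiskLevel(Enum):
    """The AI Act's four risk tiers (descriptions abbreviated)."""
    UNACCEPTABLE = "prohibited practices"
    HIGH = "strict requirements, conformity assessment, CE marking"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

def required_safeguards(level: RiskLevel) -> list[str]:
    """Return an indicative (non-exhaustive) set of safeguards per tier."""
    safeguards = {
        RiskLevel.UNACCEPTABLE: ["do not place on the EU market"],
        RiskLevel.HIGH: ["risk management system", "conformity assessment",
                         "CE marking", "human oversight"],
        RiskLevel.LIMITED: ["transparency notices to users"],
        RiskLevel.MINIMAL: [],
    }
    return safeguards[level]

print(required_safeguards(RiskLevel.HIGH))
```

The point of the sketch is that the tier alone determines the applicable obligations; classification (Article 6) therefore precedes every other compliance activity.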
Information
The high risk AI systems are subject to strict regulatory requirements to ensure they do not pose undue risks to individuals or society. Compliance with these regulations involves rigorous conformity assessments, including adherence to specific standards and obtaining CE marking, which is mandatory for placing these systems on the EU market. The CE marking serves as a declaration by the manufacturer that the product meets all applicable EU safety, health, and environmental protection requirements. This includes meeting the essential requirements set out in the AI Act for AI systems classified as high risk systems. The conformity assessment process for CE marking typically involves third-party evaluations by notified bodies to verify compliance. Once certified, the CE marking must be visibly affixed to the product, signalling its conformity to EU standards and enabling its free circulation within the European Economic Area (EEA).
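The CE-marking path described above can be sketched as a simple checklist over the steps a high risk AI system must pass before free circulation in the EEA. The `ConformityFile` structure and the step names are hypothetical illustrations, not the AI Act's actual documentation requirements.

```python
from dataclasses import dataclass, field

@dataclass
class ConformityFile:
    """Hypothetical tracker for the CE-marking steps of one AI system."""
    system_name: str
    completed_steps: set = field(default_factory=set)

    # Illustrative step names paraphrasing the process described above.
    REQUIRED_STEPS = (
        "classified as high-risk",
        "conformity assessment by notified body",
        "essential requirements of the AI Act met",
        "CE marking affixed",
    )

    def complete(self, step: str) -> None:
        assert step in self.REQUIRED_STEPS, f"unknown step: {step}"
        self.completed_steps.add(step)

    def may_enter_eea_market(self) -> bool:
        """Free circulation in the EEA requires all steps to be done."""
        return set(self.REQUIRED_STEPS) <= self.completed_steps
```

A usage example: a file with only the classification step completed would return `False` from `may_enter_eea_market()`; only after all four steps are recorded does it return `True`, mirroring the "assess, certify, then affix" order of the process.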
Some of the regulations and directives that require a CE mark and are relevant for safe AI systems:
The AI Act (Regulation (EU) 2024/1689) contains several important articles and annexes specifically focused on high risk AI systems. These requirements establish the framework for classifying, regulating, and ensuring compliance for AI systems that are considered high risk due to their potential impact on fundamental rights, health, safety, or the environment. Table 8.1 summarizes the main articles and annexes related to high risk AI systems and examples of relevant standards and guidelines. Requirements related to providers and importers are not included.
Table 8.1
The main AI articles and relevant standards and guidelines
AI Act’s main articles related to SW development of high risk systems
Article 6 Classification rules for high risk AI systems
There are no standards that classify high risk AI systems. Safety standards classify, for example, different SILx levels. ISO/IEC 5469:2024 (STD33) classifies different AI technology classes, including different usage levels. Usage level Ax is AI technology used in a safety-related system, usage level Bx is AI technology used during development, and usage level C is AI technology used to assist safety functions. Usage level D is assigned if the AI technology is not part of a safety function in the E/E/PE system
Article 7 Amendments to Annex III
See comments to Annex III below
Article 8 Compliance with the requirements
These requirements are concretized in FuSa and AI standards, but those standards are not sufficient today, so other concretizations have to be included. Whether the necessary risk mitigation is achieved by existing standards is doubtful.
Only a few relevant standards combine FuSa and AI:
The report (National Transportation Safety Board [NTSB], 2019) presents regulatory gaps, and there generally seems to be a lack of standards specifically addressing this topic
According to the AI Act, each EU Member State should designate a national supervisory authority to supervise the application and implementation of the AI Act.
Article 26 Obligations of deployers of high risk AI systems
• Some safety standards, like EN 50716:2023 (STD12), include requirements for deployment
• Standards like ISO 24089:2023 (STD24) on SW updates
• UL 5500:2018 (STD51) on remote SW updates
• ITU-T X.1373:2017 (STD41) includes requirements for SW updates over the air
Article 40 Harmonized standards
See Sect. 1.3 and AI Act Article 43, “Conformity assessment”, which includes relevant information regarding harmonized standards, especially when they are unavailable or include limitations.
• EU link regarding harmonized standards (EU, 2024c)
• The EN 50126–2:2017 (STD9) standard was recognized as a harmonized standard in the EU Official Journal (European Commission, 2020). See “fun facts” in the box below
Article 60 Testing of high risk AI systems in real-world conditions outside AI regulatory sandboxes
Here is a fun fact regarding harmonized standards, with an example.
The railway domain has long experience using harmonized standards. EN 50126-2:2017 (STD9) for railway was issued in 2017, including Annex ZZ, “(informative) Relationship between this European Standard and the Essential Requirements of EU Directive 2008/57/EC”. The standard states that “2018-07-03 is the latest date by which this document has to be implemented at the national level” and that “2020-07-03 is the latest date by which the national standards conflicting with this document have to be withdrawn”.
Copy from the Official Journal (European Commission, 2020): “Article 1 The references of harmonized standards drafted in support of Directive 2008/57/EC, listed in Annex I to this Decision, are hereby published in the Official Journal of the European Union”. EN 50126–2:2017 is listed in Annex I.
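The ISO/IEC 5469:2024 usage levels listed in Table 8.1 can be sketched as a small enum. The level descriptions paraphrase the text above; the `is_safety_relevant` helper is an illustrative assumption about which levels touch the safety function, not a rule from the standard.

```python
from enum import Enum

class UsageLevel(Enum):
    """Usage levels for AI technology, paraphrasing ISO/IEC 5469:2024."""
    A = "AI technology used in a safety-related system"
    B = "AI technology used during development"
    C = "AI technology used to assist safety functions"
    D = "AI technology not part of a safety function in the E/E/PE system"

def is_safety_relevant(level: UsageLevel) -> bool:
    # Illustrative assumption: levels A and C involve the safety function
    # directly or indirectly; B (development-time) and D do not.
    return level in (UsageLevel.A, UsageLevel.C)
```

Such a mapping makes explicit that the same AI technique can fall under different obligations depending on where in the lifecycle it is used.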
Agile Adaptation
An agile mindset is crucial in the rapidly evolving landscape of AI and functional safety regulations and requirements. Agile adaptation involves maintaining flexibility in processes and planning to respond quickly to changes in regulations, harmonized standards, and technological advancements. All organizations should foster a culture that encourages iterative development and continuous learning to ensure compliance while staying ahead of new regulatory demands.
Safety Plan Issues
It is important to continuously monitor the regulatory environment and the development of new standards and update of existing standards. A flexible and dynamic planning approach is necessary to integrate updates and changes into compliance strategies. This ensures that the organization promptly addresses emerging risks and aligns with the latest regulatory requirements.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.