
8. Compliance with Regulations, AI, and Functional Safety Standards

  • Open Access
  • 2025
  • OriginalPaper
  • Book chapter

Abstract

This chapter addresses the crucial aspects of ensuring compliance with AI and functional safety standards, with a particular focus on the EU AI Act. It underscores the need for a risk-based approach that classifies AI systems according to their risk level, and it highlights the stringent requirements for high risk systems, including rigorous conformity assessments and CE marking. The chapter also examines related regulations and standards, such as the Electromagnetic Compatibility Directive and the Radio Equipment Directive, that help ensure the safety and reliability of AI systems, and it offers insights into risk management systems, data governance, and the importance of human oversight. It concludes by emphasizing the need for continuous, agile adaptation to evolving standards and technological advancements, and the importance of a flexible and dynamic planning approach to ensure ongoing compliance.
Objective
The objective of this part of the safety plan is to ensure that the EU AI Act, safety regulations, and other identified regulations are adhered to in a safe, reliable, and trustworthy manner throughout the development, deployment, and use of artificial intelligence (AI) systems. This includes adhering to a risk-based regulatory framework that categorizes AI systems into unacceptable, high, limited, and minimal risk levels in order to implement the necessary safeguards, ensure transparency, and mitigate the risks associated with AI technologies.
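The four-tier classification described above can be sketched as a simple lookup. The tier names come from the AI Act itself; the example obligations listed per tier are an illustrative assumption for demonstration, not a complete legal mapping.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict requirements, conformity assessment
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no mandatory obligations

# Illustrative (non-exhaustive) obligations per tier -- an assumption for
# demonstration, not a restatement of the Act.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the EU market"],
    RiskTier.HIGH: [
        "risk management system (Article 9)",
        "data governance (Article 10)",
        "technical documentation (Article 11)",
        "human oversight (Article 14)",
        "conformity assessment and CE marking",
    ],
    RiskTier.LIMITED: ["transparency obligations (e.g. disclose AI interaction)"],
    RiskTier.MINIMAL: ["no mandatory obligations; voluntary codes of conduct"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the example obligations attached to a risk tier."""
    return OBLIGATIONS[tier]
```

A classification helper like this makes the safeguards attached to each tier explicit and auditable in a compliance tool.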
Information
High risk AI systems are subject to strict regulatory requirements to ensure they do not pose undue risks to individuals or society. Compliance with these regulations involves rigorous conformity assessments, including adherence to specific standards and obtaining CE marking, which is mandatory for placing these systems on the EU market. The CE marking serves as a declaration by the manufacturer that the product meets all applicable EU safety, health, and environmental protection requirements, including the essential requirements set out in the AI Act for AI systems classified as high risk. The conformity assessment process for CE marking typically involves third-party evaluations by notified bodies to verify compliance. Once certified, the CE marking must be visibly affixed to the product, signalling its conformity to EU standards and enabling its free circulation within the European Economic Area (EEA).
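As a minimal illustration of the process described above, the steps a provider completes before the CE marking may be affixed could be tracked like this. The step names are a simplified assumption drawn from the paragraph, not the Act's full procedure.

```python
from dataclasses import dataclass, field

@dataclass
class ConformityAssessment:
    """Tracks simplified CE-marking steps (illustrative, not normative)."""
    steps: dict[str, bool] = field(default_factory=lambda: {
        "technical documentation compiled": False,
        "essential AI Act requirements met": False,
        "notified body evaluation passed": False,
        "CE marking affixed to product": False,
    })

    def complete(self, step: str) -> None:
        """Mark one assessment step as done."""
        if step not in self.steps:
            raise KeyError(f"unknown step: {step}")
        self.steps[step] = True

    def market_ready(self) -> bool:
        """Free circulation in the EEA only once every step is complete."""
        return all(self.steps.values())
```

Modelling the assessment as an all-or-nothing checklist mirrors the text: the marking may only be affixed once every applicable requirement is verified.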
Some of the regulations and directives that require a CE mark and are relevant for safe AI systems:
  • Electromagnetic Compatibility (EMC) Directive 2014/30/EU
  • Radio Equipment Directive (RED) 2014/53/EU
  • Construction Products Regulation (CPR) 305/2011
  • Equipment for potentially explosive atmospheres (ATEX) Directive 2014/34/EU
According to the Directive 2014/90/EU (Marine equipment), maritime products require the wheel mark (Fig. 8.1).
Fig. 8.1
The CE and wheel marks
Requirements
The AI Act (Regulation (EU) 2024/1689) contains several important articles and annexes specifically focused on high risk AI systems. These requirements establish the framework for classifying, regulating, and ensuring compliance for AI systems that are considered high risk due to their potential impact on fundamental rights, health, safety, or the environment. Table 8.1 summarizes the main articles and annexes related to high risk AI systems and examples of relevant standards and guidelines. Requirements related to providers and importers are not included.
Table 8.1
The main AI articles and relevant standards and guidelines
AI Act’s main articles related to SW development of high risk systems
Examples of relevant standards and guidelines
Some of the standards are commented on in Chap. 6
Article 6 Classification rules for high risk AI systems
There are no standards that classify high risk AI systems. Safety standards classify, for example, different SILx levels. ISO/IEC 5469:2024 (STD33) classifies different AI technology classes, including different usage levels: usage level Ax is AI technology used in a safety-related system; usage level Bx is AI technology used during development; usage level C is AI technology used to assist safety functions; and usage level D is assigned if the AI technology is not part of a safety function in the E/E/PE system
Article 7 Amendments to Annex III
See comments to Annex III below
Article 8 Compliance with the requirements
The requirements are concretized in FuSa and AI standards, but these standards are not sufficient today, so other concretizations have to be included. Whether adequate risk mitigation can be achieved by existing standards alone is doubtful.
The only relevant standards that combine FuSa and AI are:
• CEN/CLC/TR 17894 Artificial Intelligence Conformity Assessment (STD52)
• ISO/IEC 5469:2024 (STD33); see also Sect. 1.2
• DNV-RP-0671:2023 (STD8) Assurance of AI-enabled systems. See also Chap. 5
Article 9 Risk management systems
The basic or umbrella IEC 61508 standard and related safety standards
• ISO/IEC 42001:2023 Information technology—Artificial intelligence—Management system (STD31)
• ISO/IEC 23894:2023 Information technology—Artificial intelligence—Guidance on risk management (STD35)
• NIST AI 100–1:2023 Framework: Artificial Intelligence Risk Management (STD44)
• ISO 31000:2018 Risk management—Guidelines (STD28)
• IEC 31010:2019 Risk management—Risk assessment techniques (STD13)
Article 10 Data and data governance
• Data safety guidance 127 (STD6)
• ISO 8000 data quality series (STD39)
Article 11 Technical documentation
See Annex IV below
Article 12 Record-keeping
• ISO 9001 (STD23)
Article 14 Human oversight
The report (National Transportation Safety Board [NTSB], 2019) includes a presentation of regulatory gaps, and there generally seems to be a lack of standards specifically addressing this topic
See also Myklebust et al. (2025) and Chap. 16
Article 15 Accuracy, robustness, and cybersecurity
• BIPM on uncertainty (STD1)
• DIN on uncertainty (STD7)
• DNV-RP-0671:2023 (STD8) includes uncertainty, both aleatory and epistemic
• ISO/IEC TS 22440 (STD34) includes robustness requirements
• ISO/IEC 21448:2022 (STD27) includes robustness requirements
• IEC 62443 cybersecurity series (STD19)
• ISO/SAE 21434 on cybersecurity (STD40)
• Radio Equipment Directive (RED) 2014/53/EU (Reg. 2)
• EN 18031 series (STD58). Harmonized standards for RED
• SAE J3101 (STD48) on HW-protected security
• Kiureghian et al. (2009) on aleatory vs. epistemic uncertainty
Article 17 Quality management
• ISO 9001 (STD23)
• EN 50129 (STD11)
• The Agile Safety Case (Myklebust & Stålhane, 2018). QMR part
• Myklebust et al. (2025). QMR part
Article 18 Documentation keeping
• ISO 9001 (STD23)
Article 21 Cooperation with competent authorities
According to the AI Act, each EU Member State should designate a national supervisory authority to supervise the application and implementation of the AI Act.
• Blue guide (EA, 2022)
• EU AI office (EU, 2024a). The European AI Office is the centre of AI expertise across the EU
• Notified bodies (EU, 2024b)
• NB-Rail [NB-Rail]
• Railway Interoperability Directive (IOD) 2016/797 (REG. 16)
Article 26 Obligations of deployers of high risk AI systems
• Some safety standards like EN 50716:2023 (STD12) include requirements for deployment
• Standards like ISO 24089:2023 (STD24) on SW update
• UL 5500:2018 (STD51) on remote SW update
• ITU-T X.1373:2017 (STD41) includes requirements for SW updates over the air
Article 40 Harmonized standards
See Sect. 1.3 and AI Act Article 43, “Conformity assessment”, which includes relevant information regarding harmonized standards, especially when they are unavailable or include limitations.
• Blue guide (EA, 2022)
• EU link regarding harmonized standards (EU, 2024c)
• The EN 50126-2:2017 (STD9) standard was recognized as a harmonized standard in the EU Official Journal (European Commission, 2020). See the “fun fact” in the box below
Article 60 Testing of high risk AI systems in real-world conditions outside AI regulatory sandboxes
See Chap. 9
Annex III—High risk AI systems
In this book, we focus on high risk AI systems for the process industry, automotive, railway, and seaborne
Annex IV—Technical documentation
• EN 50129 (STD11)
• The Agile Safety Case book (Myklebust & Stålhane, 2018)
• Myklebust et al. (2014)
• Proof of Compliance book (Myklebust & Stålhane, 2021)
• Annex A of this book
• ISO/IEC 5469:2024 (STD33)
• UNECE Regulation 156 (REG. 30)
• ISO/SAE 21434 (STD40)
• IEC 62443 series (STD19)
• EU declaration of conformity (EU, 2024d)
Annexes VI-VII—Conformity assessment procedures
• EU conformity assessment (EU, 2024e)
• CEN/CLC/TR 17894 Artificial Intelligence Conformity (STD52)
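The ISO/IEC 5469:2024 usage levels summarized under Article 6 in Table 8.1 lend themselves to a small lookup. The one-line descriptions below paraphrase the table entry; the mapping itself is only an illustrative sketch, not part of the standard's normative text.

```python
# Usage levels from ISO/IEC 5469:2024 as summarized in Table 8.1.
# The descriptions paraphrase the table entry; this mapping is an
# illustrative sketch, not normative text from the standard.
USAGE_LEVELS = {
    "A": "AI technology used in a safety-related system",
    "B": "AI technology used during development",
    "C": "AI technology used to assist safety functions",
    "D": "AI technology not part of a safety function in the E/E/PE system",
}

def describe_usage_level(level: str) -> str:
    """Look up a usage level, accepting suffixed forms like 'A1' or 'B2'."""
    key = level[:1].upper()
    if key not in USAGE_LEVELS:
        raise ValueError(f"unknown usage level: {level}")
    return USAGE_LEVELS[key]
```

Encoding the classification this way makes it easy for a compliance tool to attach the right assurance activities to each use of AI technology in a project.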
Here is a fun fact regarding harmonized standards, with an example.
The railway domain has long experience using harmonized standards. The EN 50126-2:2017 (STD9) standard for railway was issued in 2017, including Annex ZZ “(informative) Relationship between this European Standard and the Essential Requirements of EU Directive 2008/57/EC”. The standard states that “2018-07-03 is the latest date by which this document has to be implemented at the national level” and that “2020-07-03 is the latest date by which the national standards conflicting with this document have to be withdrawn”.
Quoted from the Official Journal (European Commission, 2020): “Article 1 The references of harmonized standards drafted in support of Directive 2008/57/EC, listed in Annex I to this Decision, are hereby published in the Official Journal of the European Union”. EN 50126-2:2017 is listed in Annex I.
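Article 15's uncertainty references in Table 8.1 distinguish aleatory uncertainty (inherent, irreducible randomness) from epistemic uncertainty (reducible lack of knowledge). A minimal sketch of the distinction, using an ensemble-of-models setup that is our own illustrative assumption rather than anything prescribed by the cited standards:

```python
import random
import statistics

def predict(model_bias: float, x: float) -> float:
    """One model's noisy prediction: true signal + model bias + sensor noise."""
    aleatory_noise = random.gauss(0.0, 0.5)  # inherent randomness, irreducible
    return 2.0 * x + model_bias + aleatory_noise

random.seed(42)
# An ensemble of models with different biases stands in for epistemic
# uncertainty: disagreement between models reflects lack of knowledge.
ensemble = [random.gauss(0.0, 1.0) for _ in range(10)]

x = 3.0
# Mean prediction of each ensemble member (averaging out aleatory noise).
member_means = [
    statistics.mean(predict(bias, x) for _ in range(200)) for bias in ensemble
]
epistemic = statistics.stdev(member_means)  # spread ACROSS models

single_model = [predict(ensemble[0], x) for _ in range(200)]
aleatory = statistics.stdev(single_model)   # spread WITHIN one model

# Epistemic uncertainty shrinks with better models and more data;
# aleatory uncertainty does not.
```

The point of the sketch is the separation: spread across models can be reduced by improving knowledge, while the noise within a single model's repeated predictions reflects randomness that no amount of modelling removes.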
Agile Adaptation
An agile mindset is crucial in the rapidly evolving landscape of AI and functional safety regulations and requirements. Agile adaptation involves maintaining flexibility in processes and planning to respond quickly to changes in regulations, harmonized standards, and technological advancements. All organizations should foster a culture that encourages iterative development and continuous learning to ensure compliance while staying ahead of new regulatory demands.
Safety Plan Issues
It is important to continuously monitor the regulatory environment and the development of new standards and update of existing standards. A flexible and dynamic planning approach is necessary to integrate updates and changes into compliance strategies. This ensures that the organization promptly addresses emerging risks and aligns with the latest regulatory requirements.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Title
Compliance with Regulations, AI, and Functional Safety Standards
Authors
Thor Myklebust
Tor Stålhane
Dorthea Mathilde Kristin Vatn
Copyright year
2025
DOI
https://doi.org/10.1007/978-3-031-80504-2_8
References
EA. (2022, June 29). The European Commission published the revised ‘Blue Guide’ on the implementation of EU product rules 2022. Retrieved from https://european-accreditation.org/the-european-commission-published-the-revised-blue-guide-on-the-implementation-of-eu-product-rules-2022/
EU. (2024a). European AI office. Retrieved September 24, 2024, from https://digital-strategy.ec.europa.eu/en/policies/ai-office
EU. (2024d). Technical documentation and EU declaration of conformity. Retrieved September 24, 2024, from https://europa.eu/youreurope/business/product-requirements/compliance/technical-documentation-conformity/index_en.htm
Myklebust, T., & Stålhane, T. (2018). The agile safety case. Springer.
Myklebust, T., & Stålhane, T. (2021). Functional safety and proof of compliance. Springer.
Myklebust, T., Stålhane, T., Hanssen, G. K., Wien, T., & Haugset, B. (2014). Scrum, documentation and the IEC 61508-3:2010 software standard. In International Conference on Probabilistic Safety Assessment and Management (PSAM).
Myklebust, T., Stålhane, T., & Vatn, D. M. K. (2025, January). Which chapters and topics should be included in a safety case? Paper presented at the 71st Annual Reliability and Maintainability Symposium (RAMS 2025), Miramar Beach, Florida.