Open Access 2022 | OriginalPaper | Book Chapter

4. Technical, Legal, and Economic Risks


Abstract

In the following chapter the author traces the technical improvements in vehicle safety over recent decades, including new sensor technologies with image recognition and Artificial Intelligence, factoring in growing consumer expectations. Through Federal Court of Justice rulings on product liability and economic risks, he depicts the requirements that car manufacturers must meet. For the process from the first idea through development to sign-off, he recommends interdisciplinary, harmonized safety and testing procedures. He argues for the further development of current internationally agreed-upon standards, including tools, methodological descriptions, simulations, and guiding principles with checklists. These will represent and document the practiced state of science and technology, which has to be implemented in a technically suitable and economically reasonable manner. Dilemma situations have always served to clarify ethical and legal principles, as in the famous example of the so-called "trolley case". The answer of the law is clear: the killing of a human being with the intention of saving others from certain death may be excused in a concrete case, but it remains illegal in any case. The solution is to avoid accidents altogether through adaptive and forward-looking driving. Relevant maneuvers of driving robots have to be defined and assessed, for example using accident data and virtual methods. Further investigation of real driving situations in comparison with system specifications, through tests on proving grounds, car clinics, field tests, human driver training, or special vehicle studies, is recommended. Protective technical measures are necessary for the required exchange of information, the storage of vehicle data, and defense against possible criminal attacks.
The contents of this chapter were prepublished in the Springer book Autonomous Driving: Technical, Legal and Social Aspects (Winkle, Development and Approval of Automated Vehicles: Considerations of Technical, Legal and Economic Risks, 2016b).

4.1 Introduction

In the following chapter the author traces the technical improvements in vehicle safety over recent decades, including new sensor technologies with image recognition and Artificial Intelligence, factoring in growing consumer expectations. Through Federal Court of Justice rulings on product liability and economic risks, he depicts the requirements that car manufacturers must meet. For the process from the first idea through development to sign-off, he recommends interdisciplinary, harmonized safety and testing procedures. He argues for the further development of current internationally agreed-upon standards, including tools, methodological descriptions, simulations, and guiding principles with checklists. These will represent and document the practiced state of science and technology, which has to be implemented in a technically suitable and economically reasonable manner.

4.2 Motivation

In the course of innovation, technical systems, especially electrical/electronic systems with Artificial Intelligence and sophisticated software, are becoming far more complex. Safety will therefore be one of the key issues in future automobile development, resulting in a number of major new challenges, especially for car manufacturers and their developers. In particular, changing vehicle guidance from being completely human-driven, as it has always been, to being highly or fully automated raises fundamental questions regarding responsibility and liability. This calls for new approaches: first and foremost, new safety and testing concepts (Bengler, Dietmayer, Färber, Maurer, Stiller & Winner, 2014). From the legal point of view, automated vehicles require protective safety measures in the development process (Gasser et al., 2012). The remaining risk must be accepted by users. According to a judgment by the German Federal Court of Justice (Bundesgerichtshof, or BGH), such vehicle systems must be designed (within the limits of what is technically possible and economically reasonable) according to the respective current state of the art and science, and must enter the market in a suitably sufficient form to prevent damage (Bundesgerichtshof, 2009).
Worldwide, it can be seen that product liability claims against large companies continue to rise (see Sec. 4.7.1). Consumer expectations regarding safety are rising (see Sec. 4.5), while a general decline in self-responsibility is also becoming apparent in Europe and the eastern world. The acceptance of misfortune as fate is declining, as reflected in the consumer attitude: “Someone has to be responsible for that and pay me for my damage.”
In addition, increased willingness to sue is being driven by cuts in social welfare and the threat of further economic crises. Payments in compensation for severe injury cases continue to escalate due to increasingly expensive court decisions and a more litigious social environment. In particular, lacking or inadequate social security systems force victims to seek financial compensation for damages in court. This puts insurance companies under pressure and leads to an increase in compensation claims against companies. A “socialization of damages” by large companies occurs. Regional differences are increasingly disappearing. The author’s personal experience with product liability cases shows that consumer protection in countries such as China, India and Russia is now at least at a western level. Media diversity, in particular the various types of consumer information on the Internet, generates a high level of consumer awareness worldwide. Class actions are now also possible in Europe, for example by means of interest groups organized via the Internet. Contingency-fee arrangements for attorneys also reduce the financial risk of legal action for consumers.
The worldwide harmonization of compensation payments is settling at a high level (see Sec. 4.7.1). Due to the possibilities of US electronic discovery in the event of a claim, companies today are more transparent. Similar processes have now been introduced in Europe, Australia, Korea, Japan and China. Overall, this increases the potential risk of extended lawsuits.

4.3 Questions of Increased Automation’s Product Safety

Media reports on fully automated research vehicles from car manufacturers, suppliers and IT companies have been predicting the series production and market launch of self-driving vehicles for years. Several things still need to be in place, however, before these vehicles can be launched on the market. Increasing automation of vehicle guidance calls for cutting-edge, highly complex technology. Particularly with the use of electric/electronic hardware and software, unforeseeable reactions have to be expected, which in the worst case may even endanger life and limb. Due to the growing complexity, fully automating all driving tasks in driverless vehicles (see Gasser et al., 2012), without a human driver as a backup, currently involves risks that are difficult to assess. In addition, there are new liability questions and limited tolerance for technical failure.
Assumption: while about 3,000 road traffic deaths per year currently seem to be acceptable to society in Germany, there is likely to be zero tolerance for any fatal accident involving presumed technical failure. Although automation in driving, for example at lower speeds, promises considerable safety benefits, the comprehensive commercialization of driverless vehicles can only take place once the questions of who is liable and responsible for damage caused by technological systems have been clarified. Acceptance by society may only be achieved if, among other things, the benefits perceived by the individual clearly exceed the risks experienced.
To date, the following questions, amongst others, remain unsolved:
  • How safe is safe enough to bring the new system onto the market?
  • How is the duty of care assured during development?
  • Which requirements need to be taken into consideration when developing and marketing safe automated vehicles?
  • Under what conditions is an automated vehicle considered defective?
Further questions arise for level 3 systems and above with regard to improving product safety:
  • Which precautions can the developer take to avoid critical traffic situations while the driver is permitted to engage in secondary or tertiary driving tasks according to the function offered? Which precautions can be taken against possible malfunctions?
  • Which precautions can be taken to prevent the driver from activating the system if it is not appropriate? Under what conditions should a tertiary driving task or non-driving activity be prohibited? (for example: “Tesla judgement” decision of 27.03.2020 – Reference: 1 Rb 36 Ss 832/19)
  • Which possibilities are available to get the driver back into the driving task or to bring the vehicle into a safe state if the driver does not respond to the warning of the system within the specified time period?
  • Which measures must be taken if the automated function requires a takeover by the driver within a period shorter than the specified takeover time? (see Gold et al., 2013; Zeeb et al., 2015)
  • Can it be assumed that the system can handle a critical driving situation as collision-free as the driver would have done?
  • Is it foreseeable that the system will not react as correctly as a driver would have done, with the severity of a collision increasing as a result?
  • Were maneuvers of other road users considered that could indirectly cause a collision?
  • Is it possible that the vehicle breaks traffic rules while the driver is not responsible for monitoring the driving task?
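Several of the questions above concern the takeover sequence: warning the driver, waiting for a response within a specified period, and bringing the vehicle into a safe state otherwise. As a rough illustration only, this logic can be sketched as a minimal state machine; all state names and the 10-second time budget are invented for this sketch and are not values from the text:

```python
from enum import Enum, auto

class DriveState(Enum):
    AUTOMATED = auto()
    TAKEOVER_REQUESTED = auto()
    MANUAL = auto()
    MINIMAL_RISK_MANEUVER = auto()  # e.g. a controlled stop in a safe spot

def next_state(state: DriveState, driver_responded: bool,
               elapsed_s: float, takeover_budget_s: float = 10.0) -> DriveState:
    """One evaluation step of the takeover logic: after a takeover request,
    either the driver resumes control within the time budget, or the system
    falls back to a minimal-risk maneuver."""
    if state is DriveState.TAKEOVER_REQUESTED:
        if driver_responded:
            return DriveState.MANUAL
        if elapsed_s >= takeover_budget_s:
            return DriveState.MINIMAL_RISK_MANEUVER
    return state
```

A real system would of course derive the fallback behavior from validated sensor data, driver monitoring, and the specified takeover time of the concrete function.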

4.4 Continued Technical Development of Assistance Systems – New Opportunities and Risks

From a technical point of view, automated vehicles are already able to autonomously take over all driving tasks in some defined areas and traffic situations. In current series production vehicles, optimized sensor, computer, and chassis technologies enable assistance systems to increase their performance. Some of the driver-assistance systems on the market today warn when they recognize dangers in parallel or cross traffic (Lane Departure Warning; Collision, Lane Change, Night Vision, and Intersection Assistance). Others intervene in the longitudinal and lateral dynamics (e.g. anti-lock braking ABS, Electronic Stability Control ESC, Adaptive Cruise Control ACC). Active parking/steering assistance systems provide increased convenience by intervening in steering and braking at low speeds. These partially automated vehicle systems, with temporary longitudinal and lateral assistance, are currently offered in series-production vehicles, but exclusively on the basis of an attentive driver being able to control the vehicle. Supervision by a human driver is required. During normal operation, at and beyond the system limits, the limits or failures of these Advanced Driver Assistance Systems (ADAS) are thus compensated by proven controllability on the part of the driver (see Knapp, Neumann, Brockmann, Walz & Winkle, 2009; Donner, Winkle, Walz & Schwarz, 2007).
For fully automated driving systems, on the other hand, the driver is no longer available as a backup for technical limits and failures. This replacement of humans, who act on their own responsibility, with programmed machines goes along with technical and legal risks, as well as challenges for product safety. However, future expectations regarding driverless vehicles, even in a situation of possible radical change, can only be described using previous experience. Analogies based on past and present expectations concerning vehicle safety will therefore be examined in the following section.

4.5 Expectations Regarding Safety of Complex Vehicle Technology

4.5.1 Steadily Rising Consumer Expectations for Vehicle Safety

Fully automated vehicles must be measured against today’s globally high level of consumer awareness of technical failures in vehicles. Since 1965, critical awareness regarding the car industry has steadily grown, strengthened by the book Unsafe at Any Speed: The Designed-In Dangers of the American Automobile (Nader, 1965 & 1972). In this publication, the author Ralph Nader blamed car makers for cost savings and breaches of the duty of care at the expense of safe construction and production. With its presentation of safety and construction deficiencies at General Motors and other manufacturers, the book alarmed the public. Nader went on to found the Center for Study of Responsive Law, which launched campaigns against the “Big Three” auto makers, Volkswagen and other car companies. Technical concepts were subsequently reworked and optimized. At the center of Nader’s criticism was the Chevrolet Corvair. Amongst other things, Nader criticized the unsafe vehicle dynamics resulting from the rear-mounted engine and swing axle: under compression or extension, the axle changed the camber (the wheel’s inclination from the vertical axis). With a design modification to an elastokinematic twist-beam or a multilink rear suspension, the inclination remains largely unchanged, resulting in more stable driveability and handling. Later, the VW Beetle, likewise designed with a rear-mounted engine and a swing axle, came under fire for similar reasons due to its sensitivity to crosswinds. As a technical improvement, VW therefore replaced the Beetle with the Golf, with a front engine and more stable handling (market introduction 1974).
Besides the development of new vehicles that were better designed and drove more safely, a further consequence of this criticism was the establishment of the US National Highway Traffic Safety Administration (NHTSA), located within the Department of Transportation. Based on the Highway Safety Act of 1970, it works to improve road traffic safety. It sees its task as protecting human life, preventing injury, and reducing accidents. Furthermore, it provides consumers with vehicle-specific safety information that had previously been inaccessible to the public. Moreover, the NHTSA has accompanied numerous investigations of automobile safety systems to this day. Amongst other things, it has actively promoted the compulsory introduction of Electronic Stability Control (ESC). Parallel to NHTSA activities, statistics from the Federal Motor Transport Authority in Germany (Kraftfahrt-Bundesamt, or KBA) also show increasingly sensitive handling of safety-related defects, through the support and enforcement of product recalls (Kraftfahrtbundesamt Jahresberichte, 2014). Furthermore, there are now extremely high expectations for vehicle safety. This can also be seen in the extensive safety equipment expected today in almost every series production vehicle across the globe, including anti-lock braking (ABS), airbags, and Electronic Stability Control (ESC). The frequency of product recalls has increased, even though passenger vehicles’ general reliability and functional safety have noticeably risen at the same time. Endurance tests in trade magazines such as Auto Motor und Sport show that a distance of 100,000 km can increasingly be covered without any breakdowns, unscheduled garage visits, or defective parts.

4.5.2 Current Safety Expectations of Potential Users

Above all, the acceptance of automated vehicles depends upon whether consumers perceive the technologies as safe and reliable.
Consumers remain skeptical about data protection, protection against cyber-crime, and functional safety as automation increases. A study on automated driving by TÜV Rheinland from 2018 states that, in general, consumers in China, the USA and Germany have a positive attitude towards autonomous driving technology. However, the more driving functions are automated, the lower the feeling of safety. Chinese consumers are a little less skeptical.
One of the main findings of the study was that drivers in Germany, the USA and China are convinced that road safety decreases with increasing automation of cars (Schierge, 2017). According to the author, however, intelligent, controllable automation can increase safety.
In the study mentioned above, TÜV Rheinland surveyed 1,000 private individuals aged 18 and over with a driving license in each of the major markets of Germany, the USA and China using an online questionnaire. The study covered a period of three months (August to October 2017). The results confirmed the trend of a representative survey conducted by TÜV in spring 2017 on the acceptance of autonomous driving technology in Germany: three out of four respondents were positive about higher levels of autonomous driving, but there were still many reservations about the technical implementation. According to the international study, 78 percent of all respondents want to be able to take the steering wheel themselves at any time in an emergency. More than every second German interviewed (53 percent) would only buy an autonomous vehicle if they were always able to drive it themselves.
Furthermore, the fear of personal data falling into unauthorized hands is widespread: 30 percent of respondents in Germany “fully agreed” with this statement, 28 percent in the USA and 13 percent in China. The lack of customer confidence in cyber security runs so deep that the majority (Germany 66 percent, USA 61 percent, China 60 percent) would even change vehicle brand after a hacker attack.
In summary, the study showed that, in the perception of the surveyed persons, there is a need for improvement in the area of safety. To increase the acceptance of autonomous driving technology, consumers in Germany, China and the USA are calling on policymakers and industry to raise the level of knowledge, to ensure the possibility of personal intervention in the car, to make data protection and co-determination in data use more transparent, and to put in place effective measures to protect against cybercrime (see also Annex Fig. A.6).

4.5.3 Considerations of Risks and Benefits

Automated vehicles will arguably only gain acceptance within society when the perceived benefit (depending on the degree of efficiency: “driver” versus “robot”) outweighs the expected risks (depending on the degree of automation: “area of action” versus “area of effectiveness”). In order to minimize the risks, manufacturers carry out accident data analysis and corresponding risk management (see Fig. 4.1).
Fig. 4.1 Acceptance may occur contextually, as consumers weigh up the perceived beneficial options and feared risks in the relevant contexts (see Grunwald, 2013; Fraedrich, 2016). Risks depend on the level of automation, benefits on the degree of efficiency. Risk management and accident data analysis (see Chs. 2, 4) allow for objectification and optimization.
For car manufacturers and their suppliers, automated vehicles are an interesting product innovation with new marketing possibilities. Investment decisions and market launches however involve risks that are difficult to assess:
  • Which risks exist for product liability claims when autonomous vehicles do not meet the requirements of a safe product?
  • Which failures may lead to product recalls?
  • Will the brand image be sustainably damaged, if the automated vehicle does not comply with consumer expectations?
Society’s and individuals’ expectations of technical perfection in vehicles are rising. Higher demands on vehicle quality and functions also call for corresponding safety measures when rolling out autonomous vehicles. This can be seen, for example, in the increase in recall campaigns despite increasing technical vehicle reliability, in additional requirements and standards, in comprehensive safety campaigns such as the Motor Vehicle Safety Defects and Recall Campaigns, and in new documentation obligations imposed by public authorities. One example of the latter is the Transportation Recall Enhancement, Accountability and Documentation (TREAD) Act in the USA (United States of America, 2000), which introduced a series of new and extensive obligations for documentation and record-keeping for the National Highway Traffic Safety Administration (NHTSA). At the same time, human errors in road traffic are sanctioned individually, without the whole road transport system itself being brought into question.
Highly complex technologies and varying definitions slow down any launch of autonomous vehicles. In addition, the interdisciplinary context involves various technical guidelines. Developers used to be able to derive their specifications from standards and guidelines such as “generally accepted good engineering practice”, “generally recognized and legally binding codes of practice”, “industry standards”, or the “state of the art”. With its decision of 06/16/2009, the German Federal Court of Justice (BGH) raised the requirements for the automotive industry and surprisingly shaped the term “latest state of the art and science”. This creates additional challenges for developers. Functions that are currently feasible under laboratory conditions in research vehicles for scientific purposes are far from fulfilling the expectations for series production vehicles, e.g. protection from cold, heat, vibrations, water, or dirt.
From a developer’s point of view, the fulfillment of legal requirements for the careful development of new complex systems can only be proven by validation tests. These should ideally be internationally harmonized and standardized. The German BGH judgment from 2009 explained these development requirements (excluding economic and technical suitability for production) as “… all possible design precautions for safety …” based on the “state of the art and science” (Bundesgerichtshof, 2009), on the basis of an expert opinion for the preservation of evidence. This opinion, however, requires ultrasound sensors as redundancy for the recognition of critical objects before triggering airbags. It should be possible “… to attach ultrasound sensors around the vehicle which sense contact with an object and are in addition verified by existing sensors before airbag deployment …” (Bundesgerichtshof BGH, 2009).
From an engineering point of view, however, this expert opinion for the preservation of evidence is more than questionable, as current sensor designs only permit a range of a few meters in series production vehicles. Subject to the current state of the art, the application of ultrasonic sensor systems is limited to detecting static surroundings at slow speeds in the scope of parking assistance. The sensors’ high-frequency sound waves can be disturbed by other high-frequency acoustic sources such as jackhammers or the pneumatic brakes of trucks and buses, which can lead to false detections. Moreover, poorly reflecting surfaces do not return the sound waves at all; object recognition is then entirely excluded (Geiger et al., 2012; Noll & Rapps, 2012). Furthermore, the lawsuit ultimately established that the sensor system concerned worked error-free according to its technical specification.
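One contributing factor to the limited range discussed above is the physics of the echo measurement itself: the sensor must wait for the pulse to travel to the object and back within a short listening window. A minimal sketch of this time-of-flight relation; the 40 ms window is an assumed, illustrative value, not a figure from the text:

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 °C

def max_ultrasonic_range(echo_window_s: float) -> float:
    """Maximum detection range for a given echo-listening window.
    The pulse travels to the object and back, hence the factor 1/2."""
    return SPEED_OF_SOUND_M_S * echo_window_s / 2.0

# An assumed echo window of 40 ms yields a range of only a few meters:
print(round(max_ultrasonic_range(0.040), 2))  # prints 6.86
```

In practice, attenuation of the high-frequency sound in air and surface reflectivity limit the usable range further still.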
In addition, the previous fundamental BGH judgment requires that risks and benefits be assessed before market launch:
“Safety measures are required which are feasible to design according to the state of the art and science at the time of placing the product on the market … and in a suitable and sufficient form to prevent damage. If certain risks associated with the use of the product cannot be avoided according to state of the art and science, then it must be verified—under weighing up the risks, the probability of realization, along with the product benefits connected—whether the dangerous product can be placed on the market at all.” (Bundesgerichtshof 2009)
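The weighing required by the judgment (probability of realization, magnitude of possible damage, product benefit) can be caricatured as an expected-value comparison. This is purely an illustrative sketch with invented numbers; the court's weighing is qualitative and is not reducible to such a formula:

```python
def expected_harm(p_realization: float, damage_magnitude: float) -> float:
    """Expected harm: probability of realization times magnitude of damage."""
    return p_realization * damage_magnitude

def launch_defensible(p_realization: float, damage_magnitude: float,
                      product_benefit: float) -> bool:
    """Toy weighing: a launch is treated as defensible only if the product
    benefit outweighs the expected harm of the unavoidable residual risk."""
    return product_benefit > expected_harm(p_realization, damage_magnitude)

# Illustrative numbers only: a rare failure (1e-7 per use) with high damage,
# weighed against a modest per-use benefit.
print(launch_defensible(1e-7, 1_000_000.0, 5.0))  # prints True (5.0 > 0.1)
```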

4.6.1 Generally Accepted Rules of Technology

An interpretation of the term “generally accepted rules of technology” (allgemein anerkannte Regeln der Technik, or aaRdT) as a basic rule was shaped in a German Imperial Court of Justice (Reichsgericht) judgment from 1910 based on a decision from 1891 during criminal proceedings concerning Section 330 of the German Penal Code (§ 330 StGB) in the context of building law:
“Generally accepted rules of technology are those rules which, drawn from the sum of all experience in the technical field, have proven themselves in use, and of whose correctness the experts in the field are convinced.”
Their meaning differs across legal areas. In terms of product liability, generally accepted rules of technology represent minimum requirements; non-compliance with the rules indicates that the required safety has not been reached. They are described in DIN-VDE regulations, DIN standards, accident prevention regulations, and VDI guidelines, amongst others (Krey & Kapoor, 2012).

4.6.2 The Product Safety Law (ProdSG)

The German Product Safety Law (Produktsicherheitsgesetz, or ProdSG), in its revised version of November 8, 2011, establishes rules on safety requirements for consumer products. Its predecessor was the Equipment and Product Safety Law (Geräte- und Produktsicherheitsgesetz, or GPSG) of May 1, 2004, which in turn had replaced the Product Safety Law (Produktsicherheitsgesetz, or ProdSG) of April 22, 1997 and the Equipment Safety Law (Gerätesicherheitsgesetz, GSG) of June 24, 1968. Section 3 ProdSG describes the general requirements for providing products on the market:
“A product may … only be placed on the market if its intended or foreseeable use does not endanger the health and safety of persons.” (Burg & Moser, 2017)

4.6.3 The Product Liability Law (ProdHaftG)

Independent of its legal basis for a claim, the term “product liability” commonly refers to a manufacturer’s legal liability for damages arising from a defective product. A manufacturer is whoever has produced a final product, a component product, or a raw material, or has attached its name or brand name to a product. For product liability in Germany, there are two separate foundations for claims. The first basis is fault-based liability, as found in Section 823 of the German Civil Code (BGB) (Köhler, 2012); the second is strict liability, regardless of negligence or fault on the part of the tortfeasor, as contained in the Product Liability Law. Section 1 of the Product Liability Law (ProdHaftG, Law Concerning Liability for Defective Products) of December 15, 1989 describes the consequences of a defect as:
“If a person is killed or his or her body or health injured, or if property is damaged, due to a defect of a product, the manufacturer of the product is thus obliged to compensate the injured parties for any losses.” (European Commission, 1985)
Independently of whether the product defect is caused intentionally or through negligence, a defect is defined in Section 3 of ProdHaftG as follows:
“A product is defective when it is lacking safety which the public at large is entitled to expect, taking into account the presentation of the product, the reasonably expected use of the product and the time when the product was put into circulation.” (European Commission 85/374/EWG, 1985)
Should damage arise from a defective product, the Product Liability Law regulates the liability of the manufacturer. Firstly, this entails potential claims of civil liability for property damage, financial losses, personal injury, or compensation for pain and suffering. Liability rests primarily with the manufacturer. In justified cases, suppliers, importers, distributors, and vendors may also be held liable without limitation. Furthermore, in cases of legally founded criminal liability, there may also be particular consequences for top management or individual employees if it is proven that risks were not minimized to an acceptable level (see Fig. 4.2). In cases of serious fault, or of negligence depending on the offense, this may involve personal criminal proceedings against a developer.
Besides the potential legal consequences, manufacturers must also expect considerable negative economic effects. Negative headlines in the media can lead to substantial losses in profits or revenue, damage to image, loss of trust, and consequently loss of market share. Therefore, when developing new systems, both potential legal and economic risks must be considered. Figure 4.2 gives an overview of the potential effects of failures in automated vehicles.

4.6.4 Ethics, Court Judgments on Operational Risk, and Avoidability

Furthermore, the ongoing developments in automated driving require politics, society and the legal system to reflect on additional emerging changes.
One aspect is the decision whether the approval of automated driving systems is ethically justifiable or even necessary. At a fundamental level, it depends on how much dependence on technical complexes we want to accept, in the future increasingly on systems that may be capable of learning, based on Artificial Intelligence with trained Neural Networks for Deep Learning (see LeCun et al., 2015; Goodfellow et al., 2016; Schmidhuber, 2015), in order to achieve greater safety, mobility and comfort in return. The following questions arise here:
  • Are there any requirements for controllability, transparency and data autonomy?
  • Which technical requirements are necessary to legally protect the individual human being within society, their freedom of development, their physical and mental integrity, and their right to social respect?
In Germany, the Ethics Commission for Automated Driving presented the world’s first ethical rules for autonomous driving technology in June 2017. It states that automated control to prevent accidents cannot be programmed in a way that is ethically beyond doubt. In the case of unavoidable accidents, any qualification according to personal characteristics (age, gender, physical or mental constitution) is strictly prohibited (Di Fabio et al., 2017).
Legal ethics is an important link between jurisprudence and legal policy on the one hand and ethics on the other. From an ethical perspective, it addresses basic legal questions as well as questions of legal practice. It is therefore excellently suited to identifying and, under certain circumstances, correcting ossified subject-specific viewpoints (Hilgendorf et al., 2018).
The following questions relate to an ethically justifiable development of automated vehicles:
  • Will the automated vehicle avoid accidents as well as is practically possible?
  • Is the technology designed according to its respective state of the art in such a way that critical situations do not arise in the first place?
(including dilemma situations in which an automated vehicle is faced with the decision of having to implement one of two evils that cannot be weighed up)
  • Has the entire spectrum of technical possibilities been used and continuously been further developed?
(Limitation of the area of operation to controllable traffic environments, vehicle sensors and braking performance, signals for endangered persons up to hazard prevention by means of an “intelligent” road infrastructure)
  • Is the development objective focused on significantly increasing road safety?
  • Has defensive and safe driving already been considered in the design and programming of the vehicles, especially with regard to Vulnerable Road Users (VRU)?
The protection of Vulnerable Road Users, in particular pedestrians, is another aspect that was already mentioned in Chapters 2 and 3 as a challenge for developing automated functions.
The German legislator has strengthened the rights of non-motorized road users through the Second Act Amending the Law of Damages (2nd SchadÄndG) of 2002, including the substitution of the “unavoidable event” by “force majeure”. In concrete terms, the law provides for the following major innovations:
  • Strengthening the position of children in road traffic
  • Exclusion of liability of the vehicle keeper only in the case of force majeure
  • No consideration of the (partial) fault of children under 10 years of age
A change in German court decisions took place only a few years later. To this end, responsibility for pedestrian accidents has been investigated on the basis of jurisdiction since 2004. Investigations of court decisions demonstrate that there has been a significant change since the Federal Court of Justice (BGH) ruling of 2014.
The trend shows that in future the responsibility for damage in pedestrian accidents will remain with the owner and, in the case of fully automatic functions, probably with the manufacturer. The recommendation is that future case law should be observed (See Annex A: Change in jurisdiction on the responsibility for pedestrian accidents).

4.7 Product Safety Enhancement in Automated Vehicles Based on Expert Knowledge from Liability and Warranty Claims

4.7.1 Experience from Product Crises and Traffic Accidents

In the future, safe automated vehicles will continue to depend on integrated quality management systems (International Organization for Standardization ISO 9001, 2015 & ISO/TS, 2009) and safe interactions (Akamatsu, Green & Bengler, 2013). In the past, even advanced and successful vehicles were frequently affected by product crises.

4.7.1.1 Defective Supplier Parts and Systems

The following examples document how supplier parts and systems triggered extensive product crises.
The Ford Explorer was the world's best-selling sport utility vehicle. In May 2000, the NHTSA contacted both Ford and Firestone due to a conspicuously high rate of tire failures with tread separation in the USA. Ford Explorers, Mercury Mountaineers, and Mazda Navajos were affected; all were factory-fitted with Firestone tires. At high speeds, tire failures led to vehicles skidding out of control and rollover crashes with fatal consequences. Firestone tires on Ford Explorers were linked to over 200 fatalities in the USA and more than 60 in Venezuela. Ford and Firestone paid 7.85 million dollars in court settlements; compensation and penalties in total amounted to 369 million dollars. In addition to the expensive recall of several million tires, communication errors were made during the crisis: the managers responsible publicly blamed each other. This shattered friendly business relations between the two companies dating back over 100 years; Harvey Firestone had sold Henry Ford tires for the production of his first car as long ago as 1895. As the crisis progressed, it seriously damaged both companies' images, and sales collapsed for both parties (Hartley R F, 2011).
General Motors (GM) announced a further example of defective supplier parts in February 2014. As a consequence of the financial crisis, the car company had been on the brink of bankruptcy in 2009. After a government bailout, it returned to profit for the first time and won awards for its new models. But the ignition switches on some models had apparently been too weakly constructed since 2001, which meant the ignition key sometimes jumped back to the “Off” position while driving. When this happened, not only did the motor switch off, but the brake booster, power steering, and airbags were also deactivated. GM engineers were accused of having ignored the safety defect for more than ten years in spite of early warnings. The company was fined 35 million dollars for the delayed recall and, after mass product recalls, faced billions of dollars in damages claims from accident victims and vehicle owners (National Highway Traffic Safety Administration, 2014a).
Another huge airbag recall campaign overseen by NHTSA involved eleven different vehicle manufacturers and more than 30 million vehicles in the United States alone. Airbag inflators supplied by Takata ignited with explosive force. In some cases, the inflator housing could rupture under high-temperature conditions, spraying metal shards through the passenger cabin and thus injuring or killing car occupants. Several fatalities and more than 100 injuries were linked to this case. The airbags were installed in vehicles from model years 2002 to 2014. Despite this injury risk, the Department of Transportation estimated that frontal airbags saved 37,000 lives between 1987 and 2012 (National Highway Traffic Safety Administration, 2014, 2015).

4.7.1.2 So-Called Unintended Accelerating, Decelerating or Steering Vehicles

Vehicles that automatically intervene in longitudinal and lateral guidance hold considerable risks and provide a target for those who assert that vehicles steer, accelerate, and decelerate unintendedly, unexpectedly, or uncontrollably. The accusation of unintended acceleration due to alleged technical defects has already put some car manufacturers in the media's crossfire. Mainly in the USA, unintended accelerations of vehicles were reported as causing fatal accidents. Affected drivers have initiated waves of lawsuits lasting for decades.
Examples of extensive lawsuits were allegations against Toyota, a globally successful company known for excellent quality. Toyota came off very well in customer-satisfaction studies by the American market research firm J. D. Power and Associates in 2002, 2004, and 2005. In 2009, however, Toyota was confronted with allegations of unintended and sudden acceleration of its vehicles. These were initially triggered by single incidents of sliding floor mats, which had supposedly been responsible for gas pedals getting jammed. It was then argued that vehicles would have accelerated unintentionally while driving due to the mechanically jammed gas pedals. As Toyota had not responded to the allegations quickly enough in the eyes of the NHTSA, the company was accused of covering up safety problems linked with more than 50 deaths. As well as compensation payments, Toyota had to pay the authority an unusually high fine of 16.4 million dollars in 2010. This was followed by extensive product recalls and claims for damages (National Highway Traffic Safety Administration, 2014b).
A further instance of a proven technical defect that led to unwanted acceleration can be seen in an NHTSA recall action of June 2014. The software problem occurred in some Chrysler sport utility vehicles (SUV). When the optional adaptive cruise control was activated and the driver temporarily pressed the accelerator pedal to override the set speed, the vehicle continued to accelerate briefly after the accelerator pedal was released. According to the technical requirements, the vehicle should instead decelerate to the requested set speed in this case. No accident victims were reported, and the promptly initiated recall was restricted to a mere 6,042 vehicles (National Highway Traffic Safety Administration, 2014c).
Other great challenges have already arisen because autonomous braking systems decelerated in individual cases without a reason visible to the driver and put vehicles at risk of a rear-end collision. However, automatic braking and collision warning systems have great potential for reducing road accidents and saving lives. After recognizing a relevant crash object, they can apply the brakes faster than humans, slowing the vehicle to reduce damage and injuries. These systems are therefore recommended as standard equipment on all new cars and commercial trucks. Since November 2013, EU legislation has mandated Autonomous Emergency Braking Systems (AEBS) in several stages of type-approval requirement levels for certain categories of motor vehicles, eventually covering almost all new vehicles (Juncker J-C, 2015).
According to NHTSA the Japanese car manufacturer Honda Motor Company had to recall certain model year 2014–2015 Acura vehicles with Emergency Braking. The reason was that the Collision Mitigation Braking System (CMBS) may inappropriately interpret certain roadside infrastructure such as iron fences or metal guardrails as obstacles and unexpectedly apply the brakes (National Highway Traffic Safety Administration, 2015a). Furthermore, NHTSA investigated complaints alleging unexpected braking incidents of the autonomous braking system in Jeep Grand Cherokee vehicles with no visible objects on the road (National Highway Traffic Safety Administration, 2015b).
Another recall of Chrysler vehicles, dated July 24, 2015, was according to NHTSA the first ever initiated by a software hack. US researchers remotely took control of a moving Chrysler Jeep, which forced the company to recall vehicles and ensure the cyber-security of their onboard software. The affected vehicles were equipped with Uconnect radio entertainment systems from Harman International Industries. Software vulnerabilities could allow third-party access to certain networked vehicle control systems via the internet. Exploitation of the vulnerability could result in unauthorized manipulation and remote control of certain safety-related vehicle functions (such as engine, transmission, brakes, and steering), with the risk of a crash (National Highway Traffic Safety Administration, 2015c).
Moreover, Fiat Chrysler Automobiles acknowledged violations of the Motor Vehicle Safety Act in several safety-relevant cases. To remedy its failures, the company agreed to repair vehicles with safety defects or buy defective vehicles back from owners, and to pay a 105-million-dollar civil penalty. Up to 2015, this was the largest fine ever imposed by NHTSA.
In addition to the threat of civil penalties, the following fatal traffic accident in Germany represents an important leading case. It transparently demonstrates the potential criminal liability of manufacturers with regard to automated driving, and how this liability can be limited, in a manner controllable under the rule of law, by means of appropriate preventive measures (see Fig. 4.3).
On January 8, 2012, a fast passenger car with an activated lane keeping system entered a small town in the district of Aschaffenburg and subsequently crashed into a family taking a Sunday afternoon walk in the middle of the village. A woman and her child were killed immediately. The driver is presumed to have suffered a stroke at the entrance to the town and lost consciousness as a result. A vehicle steered conventionally and exclusively by the driver would have left the road at the entrance to the town and probably come to a standstill beside it. However, the Lane Keeping Assist (LKA) actively kept the vehicle on the road. The consequences of this traffic accident were a dead mother (35 years), a dead boy (7 years), a seriously injured father (44 years), and a fatally injured driver (51 years). According to a police officer's report at the Würzburg police headquarters, a cerebrovascular stroke was confirmed as the cause of this accident; consistent with this, no brake marks were visible. According to witnesses, the 51-year-old driver of the passenger car was accelerating in a 30 km/h speed limit zone before the collisions occurred and had run over the traffic island of a roundabout (see Annex Fig. A.9 and A.10). After a collision of the left vehicle front with a house wall, the vehicle was deflected and finally came to rest on the opposite sidewalk (see Fig. 4.3). According to witnesses, the car there collided directly with the family on their Sunday afternoon walk on the sidewalk (Krämer K, Winkle T, 2019). It was reported that the father, by jumping to the side, was only partially hit by the car and suffered a leg injury. The mother and her seven-year-old son, however, were fully struck and dragged along over several meters.
Subsequently, an extraordinary technical background in terms of liability law was considered responsible for the collision with the family. The car was equipped with a Lane Keeping Assist, which was allegedly activated before the first collision. Its corrective steering torque would have tried to keep the vehicle on the road while the car with the unconscious driver approached the roundabout. The assumption was that, without this corrective steering torque, the car might have left the road earlier and the deadly pedestrian collision would not have occurred.
The father, who had lost his wife and child, wanted justice: someone should be held criminally responsible for the deaths that destroyed his life. His question was to what extent someone could be held liable for negligent homicide. He therefore turned to the public prosecutor's office.
The lawyer and expert for robot law Prof. Dr. Dr. Eric Hilgendorf was legally appointed by the public prosecutor’s office to analyze the case:
This traffic accident is one of the first cases in which an autonomous assistance system was held responsible for significant personal injury and material damage. Under civil law, such a case is covered by the owner's liability in German road traffic law: the owner of the vehicle is liable for all damage caused by the vehicle (§ 7 StVG). Liability insurance (see § 1 PflVG) assumes the settlement of the claims of the injured party – in this case the surviving father.
From a criminal law perspective, it must be clarified who is a potential perpetrator. Obviously, the vehicle itself cannot be the perpetrator of a crime. The driver cannot be accused of any act causing damage or of disregarding a duty of care. This leaves only the vehicle manufacturer, or an employee responsible for negligence in the development, programming, or release process of the Lane Keeping Assist, as a potential offender.
Two possible approaches were considered for the allegation of negligence:
1.
The technical system for active steering support had been defective.
 
2.
By functional definition, the system worked correctly, but additional safety measures should have been provided.
 
While the first point could be excluded, the criticism remained that the system had not been designed or programmed to be sufficiently safe. The statements of the public prosecutor's office in this regard are therefore trend-setting:
“Bereits aus dem Grundsatz der Sozialadäquanz muss ein Sicherungssystem nicht in der Lage sein, jede technische Möglichkeit auszuschöpfen. Denn dies würde bedeuten, dass zwangsläufig jedes Fahrzeug alle nur denkbaren Sicherungsmöglichkeiten enthalten müsste. Zwar wäre es durchaus wünschenswert, wenn eine Lenkungsunterstützung neben den Daten des Fahrzeugs auch die Gesundheit des Fahrzeugführers überwachen könnte. Es ist technisch möglich, über Sensoren auch die Herzfrequenz oder (was hier zur Vermeidung des Unfalls erforderlich gewesen wäre) die Gehirnströme des Fahrzeuglenkers zu messen und auszuwerten. Allein das Unterlassen solcher Maßnahmen führt jedoch nicht zu Pflichtwidrigkeit, da es hier an einem Schutzzweckbezug fehlt. Denn durch die Lenkungsunterstützung wird das Risiko eines Unfalls nicht erhöht. Sie verlagert allenfalls schicksalshaft den Unfallort.” (Hilgendorf E, 2018; Generalstaatsanwaltschaft Bamberg, 2012, AZ 5 ZS 1016/12)
“Under the principle of social adequacy alone, a safety system need not be able to exploit every technical possibility. This would imply that every vehicle would inevitably have to contain every conceivable safety measure. It would certainly be desirable if a steering assistance system could monitor not only the vehicle's data but also the driver's state of health. It is technically possible to use sensors to measure and evaluate the driver's heart rate or (what would have been necessary here to avoid the accident) brain waves. However, the mere failure to take such measures does not constitute a breach of duty, as the reference to the protective purpose is lacking here: the steering assistance does not increase the risk of an accident. At most, it fatefully relocates the site of the accident.”
These considerations show that technology is never absolutely safe and that the users of a given technology have to accept risks. The manufacturer should not be required by law to implement every conceivable safeguard.
Regarding the criminal law assessment of this Aschaffenburg case, the lawyer Prof. Dr. Dr. Eric Hilgendorf tries to further specify the relevant criteria for non-compliance with the duty of care in the manufacture and market introduction of technical products. He refers here to “Fahrlässigkeitshaftung und erlaubtes Risiko” (negligence liability and permitted risk):
The limitations required in criminal liability for defective technology should not be placed in the context of protective-purpose considerations or of an additional category of “objective attribution”, but in the context of examining violations of the duty of care.
Following this argumentation, the examination for the existence of a breach of the duty of care according to Prof. Dr. Dr. Hilgendorf can be structured as follows:
1.
A duty of care arises from the foreseeability of damage and its avoidability
 
2.
The degree of required care is determined by the proximity of the imminent danger (i.e., the probability of the damage occurring) and the magnitude of the imminent damage
 
3.
The duty of care is limited by the principle of trust and the principle of permissible risk.
 
For Prof. Hilgendorf, the legal concept of “permitted risk” is decisive in the assessment of this case. According to Prof. Hilgendorf (with regard to the permitted risk) the production of risky products is not to be assessed as negligent (and thus “permitted”) if, according to the current opinion of the legal community, the benefits associated with the technical products are so great that individual harm can be accepted. This principle thus reaches so far that even fatalities by passenger cars are tolerated – the manufacture of vehicles is therefore not qualified as negligent. However, this is only the case if manufacturers do everything reasonable to reduce the risks caused by their products as far as possible (and reasonable). The generation of risks that could reasonably be avoided is therefore not covered by the aspect of permitted risk (Hilgendorf E, 2018).
The criticism against the manufacturer was that introducing the system might have been negligent or careless. However, the manufacturer was able to prove, with tests on competitors' vehicles, that the Lane Keeping Assist demonstrably corresponded to the usual state of the art.
This case shows that it is difficult for a developer to foresee all eventualities. According to the lawyers' assessment, manufacturers can only be required to make their products as safe as possible within reasonable limits. Occasional damage must then be accepted because of the benefits associated with the products. Fundamentally, no technology is absolutely safe; society therefore has to decide in each individual case which risk it will tolerate or accept.
From today’s perspective, a driver monitoring system could have detected the unconsciousness of the driver with corresponding technical measures in order to initiate risk-reducing measures. After this case became known, Prof. Hilgendorf argued that a technical solution for such cases should be considered in further new developments (Hilgendorf E, 2015, 2015b, 2019).
This tragic accident indicates that many new technological risks of automated functions may not be visible during development and testing. Such issues arise in real-life traffic situations, and developers have to make the necessary changes to the technology to ensure real-world traffic safety (see Ch. 4).
Another example is the first recorded fatal pedestrian accident involving a self-driving test vehicle, in Tempe, Arizona. The complaint in this case states that the collision avoidance system did not react: an Uber test vehicle in autonomous mode collided with a pedestrian and her bicycle. The 49-year-old woman was pushing her bicycle across a road with two through lanes and two further left-turn lanes. The collision occurred late in the evening on March 18, 2018. Neither the automatically driving vehicle nor the responsible safety driver took any measures to prevent or mitigate the accident. The incident thus raises ethical and legal questions about the purpose of vehicle automation and the responsibility for it.
On the basis of published photos of the damaged Volvo XC90, the accident site with the end positions, and a video from an exterior and an interior camera, the author was able to create an accident reconstruction with PC-Crash. Despite the limited perception capability of camera sensors in darkness, the pedestrian is clearly visible in the published video more than a second before the collision.
The present accident reconstruction enables further analyses, under different assumptions, of the accident-avoidance potential of a human driver compared with the machine, against the background of the installed camera, lidar, and radar sensors (see also Annex Fig. A.16).
Detailed information on the accident is provided by the National Transportation Safety Board NTSB in two reports under number HWY18MH010. A preliminary report was published immediately after the crash in 2018 (National Transportation Safety Board 2018). A detailed “vehicle automation report” was published on November 5, 2019 (National Transportation Safety Board 2019).
According to the preliminary report, the Uber test vehicle collided at a speed of 39 mph. Roughly 6 seconds before the impact, the vehicle was driving at 43 mph. As early as 1.3 seconds before the impact, the system had determined that an emergency braking maneuver was necessary to prevent a collision. According to Uber, however, the test vehicle's emergency braking system had been deactivated to prevent unintentional behavior.
According to the data recorder, the modified autonomously driving Volvo XC90 was traveling at 44 mph (70.8 km/h) when an object was first detected by the radar sensor 5.6 seconds before the crash. However, it was not recognized as a woman crossing the road, but only as a “vehicle” that was not identified as moving in any direction. Within the next few seconds, this classification changed continuously. With each new classification, the previously registered location information was reset: the robotic car believed it was constantly detecting a new stationary “vehicle”, “unknown object”, or “bicycle”. For several seconds, the object's movement toward the driving lane of the Volvo was therefore not predicted (see Annex Fig. A.12).
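The effect described above — losing an object's motion history whenever its classification changes — can be illustrated with a minimal tracking sketch. This is a hypothetical simplification for illustration, not Uber's actual software: a velocity estimate is only possible while successive detections remain associated with the same track.

```python
# Simplified illustration (hypothetical, not Uber's actual software):
# why resetting a track's history on every re-classification prevents
# motion prediction.

def estimate_velocity(track):
    """Velocity from the last two positions of a track, or None."""
    if len(track) < 2:
        return None  # a fresh track has no motion history
    (t0, x0), (t1, x1) = track[-2], track[-1]
    return (x1 - x0) / (t1 - t0)

# Detections: (time [s], lateral position [m], classification)
detections = [
    (0.0, 0.0, "vehicle"),
    (0.5, 0.6, "vehicle"),
    (1.0, 1.2, "unknown object"),   # re-classified -> history reset
    (1.5, 1.8, "bicycle"),          # re-classified again -> reset again
]

track_with_reset, track_persistent = [], []
last_label = None
for t, x, label in detections:
    if label != last_label:
        track_with_reset = []       # reset: previous locations discarded
    track_with_reset.append((t, x))
    track_persistent.append((t, x))
    last_label = label

print(estimate_velocity(track_with_reset))   # None -> object looks stationary
print(estimate_velocity(track_persistent))   # 1.2 m/s toward the lane
```

With a persistent track, the constant movement toward the lane would have been obvious; with the resets, every detection looks like a new stationary object.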
Only 1.5 seconds before the crash, at 44 mph (70.8 km/h), the lidar sensor detected an unknown object that had partially moved into the lane of the Volvo. The algorithms therefore calculated an evasive maneuver. Exactly 1.2 seconds before the crash, at 43 mph (69.2 km/h), the lidar system then detected a bicycle moving into the lane, so an evasive maneuver was no longer possible (see Annex Fig. A.12).
Another problem of the software at that time can be seen here: if the system detected such a hazardous situation, it suppressed any reaction for one second to give the safety driver time to intervene. An automatic reaction by the Volvo was deliberately not designed into the software, in order to prevent the unintended consequences of a wrong intervention.
At the end of the one-second suppression, 0.2 seconds before the collision at 40 mph (64.4 km/h), the safety driver had still not reacted; she was looking down and had no view of the road. The software was programmed to decelerate at maximum only if a collision could still be prevented; otherwise, an acoustic warning with only slight braking was issued. In this specific case, the safety driver took hold of the steering wheel at that moment and thereby deactivated the slight autonomous braking. The fatal crash occurred, and only 0.7 seconds later, at a speed of still 37 mph (59.5 km/h), did the safety driver begin to apply the brakes (see Annex Fig. A.12).
This traffic accident had fatal consequences not only because the sensor system was not prepared for people crossing roads unexpectedly or against traffic rules (jaywalking), but also because of the system design decisions described above, which had been implemented by the software developers. For further scientific findings, this pedestrian accident was subsequently investigated in detail by the author with an accident reconstruction and then visually simulated using the PC-Crash software from DSD-Datentechnik, which is used worldwide.
The following figure (Fig. 4.4) shows the accident site in the final simulation. The moment directly before the collision, the course of the accident, and the final positions of the pedestrian, the bicycle, and the Volvo XC90 are visualized.
The pedestrian speed of 4.8 km/h (1.3 m/s) was determined from the video with the pedestrian pushing her bicycle across the road (Fig. 4.5 illustration top right) and compared with usual pedestrian speeds from expert literature (Bartels B, Liers H, 2014).
A multi-body model supports the visualization of the pedestrian’s first contact with the pushed bicycle on the front of the Volvo XC90 (Fig. 4.5 images top left and bottom left). The damaged front of the Volvo after the collision with the bicycle and pedestrian is documented in Fig. 4.5 below right.
Assuming a speed of 43 mph (69.2 km/h, 19.2 m/s) and immediately effective emergency braking 1.2 seconds before the collision with a deceleration of 8 m/s2, the accident would have been avoided:
$$1 mph = 1.609344\, \frac{km}{h} = 0.44704\, \frac{m}{s}$$
(4.1)
$$s = v*t = 19.2\, \frac{m}{s}*1.2\, s = 23.1\, m$$
(4.2)
$$a = \frac{{v^{2} }}{2s} = \frac{{\left( {19.2 \frac{m}{s}} \right)^{2} }}{2*23.1 m} = 8 \frac{m}{{s^{2} }}$$
(4.3)
The best braking decelerations of current vehicle types from 100 km/h range between 13.7 m/s2 for a sports car and 11.5 m/s2 for the Volvo XC90.
$$100 \frac{km}{h} = 62.1\, mph = 27.8\, \frac{m}{s}$$
(4.4)
A Porsche 911 GT3 RS (991 II, production since 2017) came to a standstill after 28.2 meters from 100 km/h with two occupants and warm brakes in the test (Auto Motor und Sport, 9/2018). This corresponds to a deceleration of 13.7 m/s2:
$$a = \frac{{v^{2} }}{2s} = \frac{{\left( {27.8\, \frac{m}{s}} \right)^{2} }}{2*28.2\, m} = 13.7\, \frac{m}{{s^{2} }}$$
(4.5)
In June 2015, the general German automobile club (ADAC) tested the brakes of a comparable Volvo XC90 D5 with a braking distance of only 33.6 meters. The measured braking distances are average values from ten individual braking operations each (ADAC Technik Zentrum, 6/2015). The corresponding deceleration is thus 11.5 m/s2:
$$a = \frac{{v^{2} }}{2s} = \frac{{\left( {27.8\, \frac{m}{s}} \right)^{2} }}{2*33.6\, m} = 11.5\, \frac{m}{{s^{2} }}$$
(4.6)
With this average deceleration of 11.5 m/s2 for the Volvo XC90, braking would theoretically have been sufficient in the present pedestrian accident, at an initial speed of 43 mph (69.2 km/h, 19.2 m/s), if it had started 16.1 meters before the pedestrian, or slightly more than 0.8 seconds before the collision:
$$s = \frac{{v^{2} }}{2a} = \frac{{\left( {19.2\,\frac{m}{s} } \right)^{2} }}{{2*11.5\, \frac{m}{{s^{2} }}}} = 16.1\, m$$
(4.7)
$$t = \frac{{\text{s}}}{v} = \frac{{16.1{\text{ m}}}}{{19.2\, \frac{m}{s}}} = 0.8\, s \left( {0.837 s} \right)$$
(4.8)
This present traffic accident reconstruction and simulation allows the investigation of further assumptions with the corresponding effects on the relationships between distances, times and speeds (see Annex Fig. A.11).
The National Transportation Safety Board (NTSB) cited the following factors as contributing to the fatal crash (National Transportation Safety Board 2019):
  • The failure of the safety driver to monitor the road, because she was visually distracted throughout the trip by her personal cell phone
  • Inadequate safety risk assessment procedures at Uber's Advanced Technologies Group
  • Uber's ineffective monitoring of its vehicle operators
  • Uber's inability to address the automation complacency of its safety drivers monitoring the automated driving systems
  • The victim's impairment (methamphetamines were found in her system), which may have led her to cross the street outside the crosswalk
  • Arizona's insufficient policies for regulating automated vehicles on its public roads
The author's own experience of previous product liability cases has shown that interdisciplinary, structured, and experience-based development is a minimum requirement. In the event of damage, the following questions are key to avoiding civil and criminal claims:
  • Has the new system already been checked for possible failures prior to development, considering the risks, probability of occurrence and benefits?
  • Can the vehicle be type-approved in the intended technological specification in order to be licensed for safe road traffic use?
  • What measures beyond purely legal framework were taken to minimize risk, damage, and hazards?
Essentially, besides general type approval requirements, no globally agreed and harmonized methods for fully automated vehicles exist today. These can be generated using internationally legally binding development guidelines including checklists, similar to the RESPONSE 3 ADAS Code of Practice for the Design and Evaluation of Advanced Driver Assistance Systems (“ADAS with active support for lateral and/or longitudinal control”) (Knapp A, Neumann M, Brockmann M, Walz R, Winkle T, 2009), linked to ISO 26262 (International Organization for Standardization, ISO 26262, 2018), Section 3, Concept phase, Table B.6: Examples of possibly controllable hazardous events by the driver or by the persons potentially at risk, page 26/27, Controllability.
Future guidelines will either be oriented towards today's requirements or adopt them to a large extent. The methods for evaluating risk during development (see Sec. 4.7.4) ensure that no unacceptable personal dangers are to be expected when using the vehicle. The development process must therefore, at the very least, take the generally valid legal requirements, guidelines, standards, and procedures into consideration:
  • Are generally accepted rules, standards, and technical regulations comprehensively checked?
Only complying with current guidelines is usually insufficient. Furthermore, it raises the following questions:
  • Was the system developed, produced, and sold with the necessary care?
  • Could the damage that occurred have been avoided or reduced in its effect with a different design?
  • How do competitors’ vehicles behave, or how would they have behaved?
  • Would warnings have been able to prevent the damage?
  • Were warnings in the user manuals sufficient or additional measures required?
Whether an automated vehicle has achieved the required level of safety can be seen at the end of the development process:
  • Was a reasonable level of safety achieved with appropriate and sufficient measures in line with state of the art and science at the time it was placed on the market?
Even after a successful market introduction, monitoring of operation is absolutely necessary. This holds even when all legal requirements, guidelines, and quality processes regarding potential malfunctions and safe use of the developed automated vehicle functions have been complied with. The duty to monitor results from the legal duty to maintain safety found in Section 823 Paragraph 1 of the German Civil Code (BGB) (Köhler H, 2012), where breach of duty triggers liability for any defect that should have been recognized as such. This raises the concluding question for product liability cases:
  • Was or is the automated vehicle being monitored during customer use?

4.7.2 Potential Hazard Situations at the Beginning of Development

The day-to-day experience of our technologically advanced society shows that risks and risky behavior are an unavoidable part of life. Uncertainty and imponderables are no longer seen as fateful, acceptable events but rather as more or less calculable uncertainties (Grunwald, 2013, 2016). The result is higher demands on risk management for the producers of new technologies.
A structured analysis of the hazards, taking all possible circumstances into consideration, can give an initial overview of potential dangers. In the early development stages it therefore makes sense to provide a complete specification of the automated vehicle to ensure a logical hazard analysis and subsequent risk classification (see Sec. 4.7.4).
On this basis, it is possible for an interdisciplinary expert team (see Fig. 4.11) to draw up a first overview of well-known potentially dangerous situations at the start of a project. This usually leads to a large number of relevant situations. For practical reasons, scenarios for expert assessment and testing should later be restricted to the most relevant ones (e.g. worldwide relevant test scenarios based on comprehensively linked, geographically defined accident, traffic-flow, and weather data collections, see Ch. 3).
According to the system definition, it is recommended to initially gather situations on a list or table. This should take the following into consideration:
  • When should the automated function be reliably assured (normal function)?
  • In what situations could automation be used in ways for which it is not designed (misinterpretation and potential misuse)?
  • When are the performance limits for the required redundancy reached?
  • Are dangerous situations caused by malfunctioning automation (failure, breakdown)?
Jointly drawing up a maximum number of dangerous situations relevant to the system makes it likely that no relevant hazard is omitted or forgotten. As a next step, it is recommended to summarize those risks that directly impact safety. After cutting the situations down to those that are actually safety-relevant, technical solutions are developed.
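The gathering step described above can be sketched as a simple structured table; the entries below are hypothetical illustrations of the four categories, not items from the chapter:

```python
# Minimal sketch of a hazard-collection table (entries are hypothetical),
# categorized along the four questions above: normal function, misuse,
# performance limit, and malfunction.
situations = [
    {"id": 1, "desc": "lane keeping on a marked motorway",
     "category": "normal function", "safety_relevant": False},
    {"id": 2, "desc": "driver activates motorway pilot on a rural road",
     "category": "misuse", "safety_relevant": True},
    {"id": 3, "desc": "heavy rain degrades camera-based lane detection",
     "category": "performance limit", "safety_relevant": True},
    {"id": 4, "desc": "radar sensor failure during automated driving",
     "category": "malfunction", "safety_relevant": True},
]

# Cut the list down to the safety-relevant situations for which
# technical solutions are subsequently developed.
relevant = [s for s in situations if s["safety_relevant"]]
for s in relevant:
    print(f'{s["id"]}: {s["category"]} - {s["desc"]}')
```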

4.7.3 Methods for Assessing Risks during Development

In discussing phasing out nuclear energy, a German Federal Government publication states that German society, as a “community with a common destiny” and as part of the “global community of risk,” wishes for progress and prosperity, but only accompanied by controllable risks (Merkel et al., 2011). This is surely only partially transferable to road traffic, where risks of automated vehicles are limited – in contrast to nuclear energy – to a manageable group of people. However, the specific requirements for the methods used in analyzing and assessing risks are similar. Five common methods are outlined below.

4.7.3.1 Hazard Analysis and Risk Assessment

The hazard analysis and risk assessment (HARA) procedure is described and annotated in ISO 26262 Part 3 for the functional safety of complex electrical/electronic vehicle systems, as well as in the corresponding ADAS Code of Practice for the development of active longitudinal and lateral control functions (Knapp, Neumann, Brockmann, Walz & Winkle, 2009; Donner, Winkle, Walz & Schwarz, 2007). Parts of the methods given as examples in the following sections (HAZOP, FMEA, FTA, HIL) also point to the HARA. The aim of HARA is to identify the potential hazards of a considered unit, to classify them, and to set targets. This enables dangers to be avoided, thus achieving a generally acceptable level of risk. In addition, an “item” is judged on its impact on safety and assigned an Automotive Safety Integrity Level (ASIL). An “item” is defined in ISO 26262 as a complex electrical/electronic system, or a function that may contain mechanical components, of various technologies. The ASIL is ascertained through a systematic analysis of possible hazardous situations and operating conditions. It also involves an assessment of accident severity levels via the Abbreviated Injury Scale (AIS) (Association for the Advancement of Automotive Medicine, 2005) in connection with the probability of occurrence.
For the assessment of a technical system, risk is the central term. It is defined as follows:
$${\text{Risk}} = {\text{Expected frequency of hazard}} * {\text{Potential severity of harm}}$$
(4.9)
For an analytical approach, the risk \({\text{R}}\) can be expressed as a function \({\text{F}}\) of the expected frequency \({\text{f}}\) with which a hazardous event occurs, and the potential severity of harm \({\text{S}}\) of the resulting damage:
$${\text{R}} = {\text{F}}\left( {{\text{f}},{\text{S}}} \right)$$
(4.10)
The frequency f with which a hazardous event occurs is in turn influenced by various parameters. Whether a hazardous event leads to damage also depends on whether attentive drivers and/or other road users involved can react in time, preventing potentially damaging effects (C = controllability):
$${\text{R}} = {\text{F}}\left( {{\text{f}},{\text{C}},{\text{S}}} \right)$$
(4.11)
A final proof of controllability should be obtained with “naive test persons” in relevant scenarios. “Naive test persons” means that they test the automated system to be assessed without more experience and prior knowledge about the system than a later user would have. A test scenario is “passed” if the test persons react as expected or otherwise respond in an adequate way to control the traffic situation. Controllability is categorized in the ADAS Code of Practice and ISO 26262 into classes C0 to C3 (Fig. 4.6).
The controllability consideration is always relevant when an average driver or any human road user can intervene in order to avoid an imminent collision. This applies to both mixed traffic and highly automated driving. For professional drivers who are particularly familiar with the vehicle this approach is only suitable to a limited extent.
Practical testing experience shows that 20 valid records per scenario can provide a basic indication of validity. ISO 26262:2018 Part 3 (Concept Phase) refers to the classes of controllability indicated in the ADAS Code of Practice:
“NOTE 1: For C2, a feasible test scenario in accordance with RESPONSE 3 is accepted as adequate: “Practical testing experience revealed that a number of 20 valid data sets per scenario can supply a basic indication of validity”. If each of the 20 data sets complies with the pass-criteria for the test, a level of controllability of 85% (with a level of confidence of 95% which is generally accepted for human factors tests) can be proven. This is appropriate evidence of the rationale for a C2-estimate. …” (see Fig. 4.7)
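The statistic quoted in NOTE 1 can be checked with a short binomial argument (a sketch): if the true controllability were only 85%, the probability that all 20 independent test persons pass is 0.85 to the power of 20, which falls below the 5% significance threshold.

```python
# Binomial check of the NOTE 1 statistic: observing 20/20 passes is
# unlikely (< 5%) if the true controllability were only 85%, so 20/20
# passes supports a controllability of at least 85% at a 95%
# confidence level.
p_true = 0.85       # hypothesized (lower-bound) controllability
n_subjects = 20     # valid data sets per scenario
p_all_pass = p_true ** n_subjects
print(f"P(20/20 passes | controllability = 85%) = {p_all_pass:.4f}")
```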
Controllability via the driver, however, is not present in terms of driverless and fully automated vehicles participating in an accident.
One essential factor to consider is how often or how long a person is in a situation where a hazard can occur (E = exposure). The product E × C is a measure of the probability that a defect in a certain situation has the potential to cause the damage described.
A further factor (\({\uplambda }\) = failure rate) can be traced back to undetected random hardware failures of system components and to dangerous systematic errors remaining in the system. It gives the frequency, relative to the exposure E, with which the automated vehicle can itself trigger a hazardous event.
The product \({\text{f}}\) thus describes the number of events to be expected during period E, e.g. kilometers driven or the number of times a vehicle is started:
$${\text{f}} = {\text{E }} \times {\uplambda }$$
(4.12)
In the ISO 26262 standard, the following simplification is assumed:
$${\text{f}} = {\text{E }}$$
(4.13)
As a result, the risk \({\text{R}}\) is expressed as a function \({\text{F}}\) of the “probability of exposure \({\text{E}}\)”, the “controllability \(C\)” and the potential “severity of harm \({\text{S}}\)” of the resulting damage:
$${\text{R}} = {\text{F}}\left( {{\text{E}},{\text{C}},{\text{S}}} \right)$$
(4.14)
The increasing use of complex electronic components in automated vehicles makes it necessary to consider them with regard to functional safety. ISO 26262 therefore stipulates that the Failure in Time (FIT) rate of technical and electronic components must also be considered. The unit FIT gives the number of components that fail within 10⁹ hours (see Sec. 4.7.6 “proven in use”).
$$1{\text{ FIT}} = \frac{{1{\text{ failure}}}}{{10^{9} {\text{hours of device operation}}}}$$
(4.15)
One FIT thus corresponds to:
$$1{\text{ FIT}} = 1 * 10^{ - 9} { }\frac{{1{ }}}{{\text{h}}}$$
(4.16)
The failure rate \({\uplambda }\) of a hardware element is variable over time \({\uplambda }\left( {\text{t}} \right)\). This relation is usually represented by a Weibull distribution, often visualized as the “bathtub curve”. It first describes the “early phase”, in which the failure rate is high due to early failures. Through revisions and improvements, the failure rate \({\uplambda }\left( {\text{t}} \right)\) reaches its minimum in the “use phase”, where only random failures occur. Towards the end of the operational lifetime of the components, the failure rate increases in the “wearing phase” due to, for example, aging effects, up to the point of uselessness. With reference to the typical course of the bathtub curve, the failure rate \({\uplambda }\) is assumed to be constant over time \({\text{t}}\) during the use phase:
$${\uplambda }\left( {\text{t}} \right) \approx {\text{const}}.$$
(4.17)
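The three phases of the bathtub curve can be illustrated with the Weibull hazard rate, a minimal sketch in which the shape parameter β controls the phase (β < 1: falling early-failure rate, β = 1: constant rate as in Eq. (4.17), β > 1: rising wear-out rate); the parameter values are illustrative assumptions:

```python
# Weibull hazard rate lambda(t) = (beta/eta) * (t/eta)**(beta - 1).
# beta < 1: decreasing hazard (early phase),
# beta = 1: constant hazard   (use phase, exponential case),
# beta > 1: increasing hazard (wearing phase).
def weibull_hazard(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1)

early_a = weibull_hazard(10.0, 0.5, 1e4)    # early phase: hazard falls ...
early_b = weibull_hazard(100.0, 0.5, 1e4)   # ... as t grows
use_a = weibull_hazard(10.0, 1.0, 1e4)      # use phase: hazard is ...
use_b = weibull_hazard(1000.0, 1.0, 1e4)    # ... constant over t
wear_a = weibull_hazard(1e4, 3.0, 1e4)      # wearing phase: hazard ...
wear_b = weibull_hazard(2e4, 3.0, 1e4)      # ... rises with age
print(early_a > early_b, use_a == use_b, wear_b > wear_a)
```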
Instead of the failure rate as a parameter, a Mean Time to Failure (MTTF) can be assumed. In the case of a constant failure rate, the MTTF represents the reciprocal value of the failure rate:
$${\text{MTTF}} = { }\frac{{1{ }}}{{\uplambda }}$$
(4.18)
For repairable systems, a Mean Time to Repair (MTTR) can now be specified. With this MTTR, the Mean Time between Failures (MTBF) can be specified as the time between two failures:
$${\text{MTBF}} = {\text{MTTF}} + {\text{MTTR }}^{ }$$
(4.19)
If no repairable element is present, or if MTTF ≫ MTTR holds, this can be simplified for constant failure rates to:
$${\text{MTBF}} = {\text{MTTF}} = { }\frac{1}{\lambda }$$
(4.20)
Under the assumption of constant failure rates during the use phase, an exponential distribution can be derived. The exponential distribution is often used in electrical engineering, since it is characteristic of electronic components. Within the framework of ISO 26262, an exponential distribution is likewise proposed in the context of a constant failure rate (ISO 26262-5, Annex C.1.2):
$$f\left( t \right) = \frac{dF(t)}{{dt}} = \lambda * e^{ - \lambda * t}$$
(4.21)
The reliability R(t), as the complement of the failure probability F(t), can be described by:
$$R\left( t \right) = 1 - F\left( t \right) = e^{ - \lambda * t}$$
(4.22)
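Under the constant-failure-rate assumption, Eqs. (4.15), (4.18) and (4.22) combine into a short numerical sketch; the 100 FIT component is an illustrative assumption:

```python
import math

FIT = 1e-9  # Eq. (4.16): 1 FIT = one failure per 10**9 device-operating hours

def mttf(lam):
    """Eq. (4.18): mean time to failure for constant failure rate lambda."""
    return 1.0 / lam

def reliability(t, lam):
    """Eq. (4.22): survival probability R(t) = exp(-lambda * t)."""
    return math.exp(-lam * t)

lam = 100 * FIT                  # a hypothetical component with 100 FIT
print(mttf(lam))                 # mean time to failure in hours
print(reliability(10_000, lam))  # survival probability over 10,000 h
```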
Severity S, probability of occurrence f and – where applicable – controllability C yield the Automotive Safety Integrity Level (ASIL). Four ASIL levels are defined: ASIL A, ASIL B, ASIL C, and ASIL D. Among them, ASIL A places the lowest and ASIL D the highest requirements. In addition to these four ASIL levels, the QM class (quality management) does not require compliance with ISO 26262.
An ASIL is determined for each hazardous event using the “severity”, “probability of exposure” and “controllability” parameters in accordance with the following table (Fig. 4.8).
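The determination table follows ISO 26262-3 (Table 4); its published matrix happens to be reproducible by a compact sum rule over the numeric class indices, sketched below:

```python
# Compact encoding of the ASIL determination table (ISO 26262-3, Table 4):
# summing the numeric S, E and C classes reproduces the published matrix.
def asil(s, e, c):
    """s in 1..3 (severity), e in 1..4 (exposure), c in 0..3 (controllability)."""
    if c == 0:
        return "QM"  # C0: controllable in general, no ASIL assigned
    total = s + e + c
    return {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}.get(total, "QM")

print(asil(3, 4, 3))  # the most demanding combination
print(asil(2, 3, 2))  # a mid-range combination
```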
A classification as ASIL A corresponds to a recommended probability of occurrence of less than 10⁻⁶ per hour, equivalent to a rate of 1000 FIT.
$${\text{ASIL A}} < 1 * 10^{ - 6} { }\frac{{1{ }}}{{\text{h}}} = 1000{\text{ FIT }}$$
(4.23)
A rating as ASIL B (recommended) or ASIL C (required) corresponds to a probability of occurrence lower than 10⁻⁷ per hour, i.e. a rate of 100 FIT:
$${\text{ASIL B}},{\text{ ASIL C}} < 1 * 10^{ - 7} { }\frac{{1{ }}}{{\text{h}}} = 100{\text{ FIT }}$$
(4.24)
As already mentioned, the highest requirements exist for ASIL D (required probability of occurrence smaller than 10⁻⁸ per hour, corresponding to a rate of 10 FIT):
$${\text{ASIL D}} < 1 * 10^{ - 8} { }\frac{{1{ }}}{{\text{h}}} = 10{\text{ FIT }}$$
(4.25)
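The failure-rate targets of Eqs. (4.23) to (4.25) can be collected in a small lookup, a sketch that checks which ASIL rate target a given component failure rate satisfies:

```python
# Random-hardware-failure rate targets per ASIL from Eqs. (4.23)-(4.25),
# expressed in FIT (failures per 1e9 operating hours).
ASIL_FIT_BUDGET = {"ASIL A": 1000, "ASIL B": 100, "ASIL C": 100, "ASIL D": 10}

def highest_asil_met(component_fit):
    """Return the most demanding ASIL whose rate target the component meets."""
    for level in ("ASIL D", "ASIL C", "ASIL B", "ASIL A"):
        if component_fit < ASIL_FIT_BUDGET[level]:
            return level
    return None  # exceeds even the ASIL A budget

print(highest_asil_met(5))    # meets the 10 FIT target of ASIL D
print(highest_asil_met(250))  # only meets the 1000 FIT target of ASIL A
```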
Beyond normal vehicle operation, ISO 26262 also considers service requirements, including decommissioning of the vehicle. In this respect, developers have to consider the consequences of aging when selecting components. Control units and sensors have to be sufficiently protected by robust design. No single failure may disable any safety-related function (International Organization for Standardization, ISO 26262, 2018). The main target is to meet a societally and individually accepted risk by applying measures for enhancing safety (see Fig. 4.9).
For each hazardous event with an ASIL evaluated in the hazard analysis, a safety goal shall be determined. The ASIL, as an attribute of the safety goal, is passed on to each subsequent safety requirement. Similar safety goals may be combined into one. A safety goal can describe features or physical characteristics, such as a maximum steering wheel torque or a maximum level of unintended acceleration. To comply with safety goals, the functional safety concept includes safety measures for fault detection and failure mitigation, transitioning to a safe state, fault tolerance mechanisms, and fault detection and warning to reduce the risk exposure time to an acceptable interval. The method of ASIL tailoring during the development process is called “ASIL decomposition”. A suggested measure is an arbitration logic in which, for example, two working systems override and take over control from a system that has failed or has generated a contradictory command.
ISO 26262 specifies recommended techniques ranging from “suggested” to “required”. If a causal fault is detected, the system should be transferred, by means of recovery, into a state without any detected errors or faults. Graceful degradation is one way of reducing functionality to maintain a minimum performance instead of failing completely; it can be activated as a reaction to a detected fault. Since ASIL decomposition is a central topic of ISO 26262, it has a dedicated part (ISO 26262-9). The definition of decomposition is given in Part 1:
“Apportioning of safety requirements redundantly to sufficiently independent elements (1.32), with the objective of reducing the ASIL (1.6) of the redundant safety requirements that are allocated to the corresponding elements”
The correct decomposition can be represented by a simple mathematical formula, in which the following assignments apply:
$${\text{QM}}_{\left( {\text{X}} \right)} \Rightarrow 0$$
(4.26)
$${\text{ASIL A}}_{\left( {\text{X}} \right)} \Rightarrow 1$$
(4.27)
$${\text{ASIL B}}_{\left( {\text{X}} \right)} \Rightarrow 2$$
(4.28)
$${\text{ASIL C}}_{\left( {\text{X}} \right)} \Rightarrow 3$$
(4.29)
$${\text{ASIL D}}_{\left( {\text{X}} \right)} \Rightarrow 4$$
(4.30)
The sum of the decomposed elements must be equal to the value of the original classification. Thus, the following “calculations” are correct:
$${\text{ASIL}}_{{{\text{new}}1}} + {\text{ASIL}}_{{{\text{new}}2}} = {\text{ASIL}}_{{{\text{old}}}}$$
(4.31)
$${\text{ASIL}}\;{\text{C}}_{{\left( {\text{D}} \right)}} + {\text{ASIL}}\;{\text{A}}_{{\left( {\text{D}} \right)}} = {\text{ASIL}}\;{\text{D}}$$
(4.32)
$$3{\kern 1pt}_{{\left( {{\text{ASIL}}\;{\text{C}} _{{\left( {\text{D}} \right)}} } \right)}} + 1{\kern 1pt}_{{\left( {{\text{ASIL}}\;{\text{A}} _{{\left( {\text{D}} \right)}} } \right)}} = 4{\kern 1pt}_{{\left( {{\text{ASIL}}\;{\text{D}}} \right)}}$$
(4.33)
$${\text{ASIL}}\;{\text{D}} = {\text{ASIL}}\;{\text{C}} _{{\left( {\text{D}} \right)}} + {\text{ASIL}}\;{\text{A}} _{{\left( {\text{D}} \right)}}$$
(4.34)
$$4_{\left( {{\text{ASIL}}\;{\text{D}}} \right)} = 3_{\left( {{\text{ASIL}}\;{\text{C}}_{\left( {\text{D}} \right)} } \right)} + 1_{\left( {{\text{ASIL}}\;{\text{A}}_{\left( {\text{D}} \right)} } \right)}$$
(4.35)
$${\text{ASIL}}\,{\text{C}} = {\text{ASIL}}\,{\text{A}} _{{\left( {\text{C}} \right)}} + {\text{ASIL}}\,{\text{A}} _{{\left( {\text{C}} \right)}} + {\text{ASIL}}\,{\text{A}} _{{\left( {\text{C}} \right)}}$$
(4.36)
$$3{\kern 1pt} _{{\left( {\text{ASIL C}} \right)}} = 1{\kern 1pt}_{{\left( {{\text{ASIL A}} _{{\left( {\text{C}} \right)}} } \right)}} + 1{\kern 1pt}_{{\left( {{\text{ASIL A}} _{{\left( {\text{C}} \right)}} } \right)}} + 1{\kern 1pt}_{{\left( {{\text{ASIL A}} _{{\left( {\text{C}} \right)}} } \right)}}$$
(4.37)
It must always be considered that, for example, an ASIL A(D) does not correspond to ASIL A:
$${\text{ASIL}}\,{\text{A}} _{{\left( {\text{D}} \right)}} \ne {\text{ASIL}}\,{\text{A}}$$
(4.38)
This means that if the decomposed elements are identical parts, or if the same software is used in both, then dependent failures must be analyzed in order to detect systematic errors.
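The decomposition arithmetic of Eqs. (4.26) to (4.37) can be captured in a small checker, a sketch; note that it validates only the numeric sum, while the required independence of the decomposed elements must be shown separately:

```python
# Numeric encoding of the ASIL levels per Eqs. (4.26)-(4.30); a decomposition
# is arithmetically valid if the parts sum to the original classification.
ASIL_VALUE = {"QM": 0, "ASIL A": 1, "ASIL B": 2, "ASIL C": 3, "ASIL D": 4}

def valid_decomposition(original, parts):
    """Check the sum rule of Eq. (4.31); independence is NOT checked here."""
    return sum(ASIL_VALUE[p] for p in parts) == ASIL_VALUE[original]

print(valid_decomposition("ASIL D", ["ASIL C", "ASIL A"]))            # Eq. (4.32)
print(valid_decomposition("ASIL C", ["ASIL A", "ASIL A", "ASIL A"]))  # Eq. (4.36)
print(valid_decomposition("ASIL D", ["ASIL B", "ASIL A"]))            # 2 + 1 != 4
```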
The hardware metrics for the architecture, as well as the random hardware failures that could lead to a violation of the safety goal, remain the same for the overall function. Sufficient independence must be shown for the decomposed elements. This applies to the following areas: criteria for co-existence, freedom from interference, cascading failures, dependent failures, and common-cause failures. The following requirements must also be applied to all decomposed elements together with the original requirements of the safety goal:
  • Confirmation measures in accordance with ISO 26262-2, 6.4.7 and ISO 26262-9, Section 5.4.11 a
  • Integration activities and subsequent activities in accordance with ISO 26262-9, Section 5.4.14 and ISO 26262-5 Section 10.4.2
  • Hardware metric analysis in accordance with ISO 26262-9, Section 5.4.13
If an ASIL D is to be decomposed, then all decomposed elements must meet the requirements for ASIL C. What is important is the distinction between decomposition and monitoring. During the decomposition, both elements must be redundant in relation to the safety target. Thus, for example, both the main computer and the safety computer must be able to switch into the safe state independently of one another when voltage, current or torque are too high.
On the other hand, in the case of monitoring, the diagnostic element only tells the main computer that something is wrong – but only the main computer can transfer the system into the safe state. Overall, developers are required to specify and document methodologies, best practices, or guidelines for each phase of the development.
It is currently being discussed whether and how the current standard ISO 26262:2018 can support the increasing use of Artificial Intelligence (AI) and trained data. The safety of AI is still considered an independent field of research. The author therefore recommends further developing the current competences for the validation of controllability with regard to the influence of other human road users. In the future, the importance of a systematic risk assessment and a systemic approach will increase.
In contrast to the two previous basic risk management dimensions, more expert competence levels will be necessary in the future, on the basis of area-wide information, modified systematic and systemic methods, and advanced controllability evaluations.
The influence parameter I stands for area-wide information. It implies that all data already available area-wide are used (see Ch. 3): accident, traffic, and vehicle operating data. As a result, conclusions can also be drawn about near-accidents. The variable M stands for modified methods: this would include an update of the ADAS Code of Practice as well as its further development for additional automation levels, corresponding to a Code of Practice for automated driving up to level 2. A controllability competence C of experts adds the third dimension. Such competence includes in-depth driving simulator studies or road tests with eye-tracking data to observe scanning behavior and cognitive processes, including interviews for subjective and additional data. As a result, the variables of the risk assessment formula expand as follows:
$${\text{R}} = {\text{F}}\left( {{\text{E}},{\text{S}},{\mathbf{I}}, {\mathbf{M}},{\mathbf{C}} \ldots } \right)$$
(4.39)
In addition, on this comprehensive basis, further systematic and systemic modified methods M (see Fig. 4.10) will be required in the future. The methods of the following subsections (4.7.3.2 to 4.7.3.10), which are already known today, will be further developed to understand the systemic interactions and mechanisms of automated driving levels.

4.7.3.2 Hazard and Operability Study – HAZOP

A Hazard and Operability Study (HAZOP) is an early risk assessment method developed in the process industry. A HAZOP looks for every imaginable deviation from normal process operation and then analyzes the possible causes and consequences. Typically, a HAZOP is carried out systematically by a specialist team from the involved development units in order to reduce the likelihood of overlooking important factors (Knapp A, Neumann M, Brockmann M, Walz R & Winkle T, 2009).
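The systematic deviation search of a HAZOP is commonly driven by combining process parameters with guidewords; the sketch below illustrates this enumeration step with hypothetical parameters and a typical guideword set:

```python
# Sketch of a HAZOP-style deviation search: each process parameter is
# combined with each guideword to enumerate candidate deviations, which
# the specialist team then reviews for causes and consequences.
# Parameters are hypothetical; the guidewords are a typical HAZOP set.
from itertools import product

parameters = ["braking force", "steering angle", "sensor data rate"]
guidewords = ["no", "more", "less", "reverse", "early", "late"]

deviations = [f"{g} {p}" for p, g in product(parameters, guidewords)]
print(len(deviations))  # 3 parameters x 6 guidewords = 18 candidate deviations
```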

4.7.3.3 Systems-Theoretic Methods – STAMP, STPA and FRAM

With the STAMP and STPA methods (Systems-Theoretic Accident Model and Processes, STAMP, and Systems-Theoretic Process Analysis, STPA), the US-American safety researcher Nancy Leveson developed a model-based hazard analysis approach, which analyzes a safety-relevant system in a structured way using a semi-formal model (the so-called safety control structures).
Objectives of STAMP are the definition of control limits for safe behavior of the safety-relevant system, a socio-technical understanding of safety in complex systems, the development of strategies for managing dangerous system states, support of optimization and adaptation processes for environmental influences, allowance for fault tolerance, and ensuring the detection and reversibility of faults. STAMP uses the safety control structures of a system to analyze control loops, to recognize the safety-critical operating processes of a system, and to identify insufficient control structures (Ross H-L, 2019). The Functional Resonance Analysis Method (FRAM) is used to explain specific events which, due to coupling and variability of everyday performance, can lead to unexpected successes as well as failures (Hollnagel E, 2012). With the support of FRAM for modelling complex socio-technical systems, mechanisms of road traffic can be differentiated. Additionally, the dependencies between the individual system elements can be identified and presented separately for the human driver or the automation (see also Annex Fig. A.16). Subsequently, recommendations for the design of automated driving systems can be derived (Grabbe N, et al. 2020).

4.7.3.4 Failure Mode and Effects Analysis – FMEA

Failure Mode and Effects Analysis (FMEA) and the integrated Failure Mode, Effects and Criticality Analysis (FMECA) are methods of analyzing reliability that identify failures with significant consequences for system performance in the application in question. FMEA is based on a defined system, module or component for which fundamental failure criteria (primary failure modes) are available. It is a technique for validating safety and estimating possible failure states in the specified design-review stage. It can be used from the first stage of an automation system design up to the completed vehicle. FMEA can be utilized in the design of all system levels (Werdich, 2012; Verband Deutscher Automobilhersteller, 2006).
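A common quantitative element of a classic FMEA is the risk priority number (RPN), the product of severity, occurrence, and detection ratings; the sketch below uses hypothetical failure modes and ratings to illustrate the prioritization step:

```python
# Sketch of a classic FMEA risk priority number:
# RPN = severity x occurrence x detection, each rated 1..10.
# The failure modes and ratings below are hypothetical illustrations.
failure_modes = [
    {"mode": "camera blinded by low sun", "S": 8, "O": 5, "D": 4},
    {"mode": "loose connector on radar ECU", "S": 7, "O": 2, "D": 3},
]
for fm in failure_modes:
    fm["RPN"] = fm["S"] * fm["O"] * fm["D"]

# Rank worst first to prioritize corrective actions.
ranked = sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True)
print([(fm["mode"], fm["RPN"]) for fm in ranked])
```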

4.7.3.5 Fault Tree Analysis – FTA

A Fault Tree Analysis (FTA) involves identifying and analyzing conditions and factors that promote the occurrence of a defined state of failure that noticeably impacts system performance, economic efficiency, safety, or other required properties. Fault trees are especially suitable for analyzing complex systems encompassing several functionally interdependent or independent subsystems with varying performance targets. This particularly applies to system designs needing cooperation between several specialized technical design groups. Examples of systems where Fault Tree Analysis is extensively used include nuclear power stations, aircraft and communication systems, chemical or other industrial processes.
The fault tree itself is an organized graphic representation of the conditions or other factors causing or contributing to a defined undesired incident, also known as the top event (Knapp, Neumann, Brockmann, Walz & Winkle 2009). As a result, it is a logical diagram which can be either qualitative or quantitative, depending on whether probabilities are supplemented.
Günter Reichart demonstrated the probability of road accidents by means of a fault tree which presumes both inappropriate behavior and the existence of a conflicting object (Reichart, 2000).
Figure 4.11 shows an example of a quantitative FTA, which results in an estimation of the probability of the top event (traffic accident with personal or fatal injury) depending on the probabilities of the root causes. This Fault Tree Analysis demonstrates that traffic accidents result from the coincidence of several causes. A single failure does not necessarily have a dangerous impact, but a series of unfortunate circumstances and inappropriate behavior of traffic participants can render the risk situation uncontrollable. Human traffic participants are the crucial link in the chain to prevent a car crash (see Ch. 2). Automated vehicles in particular will require appropriate safety measures.
Figure 4.11 also demonstrates an excerpt of safety measures for a safe steering in case of a fully automated vehicle.
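The gate logic of such a quantitative fault tree can be sketched in a few lines; all probabilities below are hypothetical illustration values, and the structure (an AND of two OR branches) only mirrors the general idea of Reichart's tree, not its actual contents:

```python
# Sketch of a quantitative fault tree: the top event (accident) requires
# BOTH inappropriate behavior AND a conflicting object (AND gate); each
# branch aggregates independent root causes (OR gate).
from math import prod

def or_gate(probs):
    """P(at least one cause occurs), assuming independent causes."""
    return 1.0 - prod(1.0 - p for p in probs)

def and_gate(probs):
    """P(all conditions coincide), assuming independent conditions."""
    return prod(probs)

p_inappropriate = or_gate([1e-3, 5e-4])  # e.g. distraction, excessive speed
p_conflict_object = or_gate([2e-3])      # e.g. crossing pedestrian
p_top = and_gate([p_inappropriate, p_conflict_object])
print(f"P(top event) = {p_top:.2e}")
```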

4.7.3.6 Hardware-in-the-Loop (HIL) Tests

Increasing vehicle interconnection places particular demands on validating the safety of the entire Electronic Control Unit (ECU) network, e.g. onboard wiring systems safety, bus communication, vehicle state management, diagnosis, and flash application’s behavior. Hardware-in-the-Loop (HIL) tests can be used as soon as a hardware prototype of the system or part of it, e.g. an electronic control unit in a vehicle, is available. As the Device under Test (DUT), the prototype is placed in a “loop,” a software-simulated virtual environment. This is designed to resemble the real environment as closely as possible. The DUT is operated under real-time conditions (Heising, Ersoy & Gies, 2013).

4.7.3.7 Software-in-the-Loop (SIL) Tests

The Software-in-the-Loop (SIL) method, in contrast to HIL, does not use special hardware. The software model is merely converted into code understandable for the target hardware. This code is executed on the development computer against the simulated model, instead of running on the target hardware as in Hardware-in-the-Loop. SIL tests should be applied before HIL.

4.7.3.8 Virtual Assessment

Virtual assessment verifies prospective, quantitative traffic safety benefits and risks (see Sec. 2.1.2). They can be quantified using virtual, simulation-based experimental techniques. For this purpose, traffic scenarios can be modeled considering safety-relevant key processes and stochastic simulation using large representative virtual samples. Virtual representations of traffic scenarios are based on detailed, stochastic models of drivers, vehicles, traffic flow, and road environment, along with their interactions. The models include information from global accident data (see Ch. 2), Field Operation Tests (FOT), Naturalistic Driving Studies (NDS), laboratory tests, driving simulator tests, and other sources. Wide-ranging, extensive simulations help to identify and evaluate safety-relevant situations of automated vehicles.
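The stochastic-simulation idea can be illustrated with a deliberately minimal Monte Carlo sketch; the scenario model (uniform distributions over gap, speed, reaction time, and deceleration) is a hypothetical stand-in for the detailed driver, vehicle, and environment models described above:

```python
# Minimal Monte Carlo sketch of a virtual assessment: sample braking
# scenarios from a (hypothetical) stochastic model and estimate the
# collision probability from a large virtual sample.
import random

random.seed(42)  # reproducible virtual sample

def scenario_collides():
    gap = random.uniform(5.0, 60.0)      # initial gap to object [m]
    speed = random.uniform(10.0, 30.0)   # vehicle speed [m/s]
    reaction = random.uniform(0.1, 0.5)  # system reaction time [s]
    decel = random.uniform(6.0, 9.0)     # braking deceleration [m/s^2]
    stopping = speed * reaction + speed ** 2 / (2 * decel)
    return stopping > gap                # cannot stop within the gap

n = 100_000
collisions = sum(scenario_collides() for _ in range(n))
print(f"Estimated collision probability: {collisions / n:.3f}")
```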

4.7.3.9 Driving Simulator Tests

Driving simulator tests use models of vehicle dynamics and virtual driving scenarios. They allow artificial driving situations and repeatable tests with various subjects. Potentially hazardous traffic scenarios can also be tested because, in contrast to real driving, the virtual scenario is harmless. Different types of simulators exist, such as mock-ups, fixed-base simulators, or moving-base simulators. Subjective and objective methods can be exploited to measure the performance of test subjects in the driving task. Depending on the kind of potentially hazardous situation, controllability can be tested by some of these methods. Typical situations for driving simulator tests are high-risk situations, driver take-over reactions, or the interaction between the automated driving system's environment monitoring and the manual human driver mode.

4.7.3.10 Driving Tests and Car Clinics

Driving tests with different drivers provide useful feedback based on empirical data. Dynamic car clinics allow testing of driver behavior and performance while driving the automated vehicle in defined situations within a realistic environment. The first step is to identify relevant scenarios and environments (see Ch. 3). This makes it possible to specify and implement virtual tests, followed by confirmation via driving tests and car clinics on proving grounds. Finally, before sign-off and start of production (SOP), field tests confirm the identified scenarios and environments where necessary.

4.7.4 Approval Criteria from Expert Knowledge

During the approval process, test procedures must be provided. Approval criteria in terms of “passed” and “not passed” are thus recommended for the final safety verification of automated vehicles. Regardless of which methods are chosen for final sign-off confirmation, the experts should agree on which test criteria suffice for the vehicle to cope successfully with specified situations during a system failure or malfunction. Generally accepted values for achieving the desired vehicle reactions should be used for such criteria. An evaluation can then be carried out using established methods.
Taking the list of potential hazard situations as a basis (see Ch. 3), test criteria for safe vehicle behavior, and if possible also globally relevant test scenarios, are developed by internal and external experts. A team of system engineers and accident researchers is particularly required: the former offer knowledge of the precise system functions, time factors, and experience of potential failures, while accident researchers bring practical knowledge of high-risk traffic situations (see Ch. 2). Every known risky situation that a vehicle can get into must be considered. At least one corrective action with regard to safety requirements should be specified by the developers for the risks identified. In terms of final sign-off confirmation, a test scenario has thus been “passed” when the automated vehicle reacts as expected or otherwise deals with the situation in a satisfactory, accepted manner.

4.7.5 Steps to Increase Product Safety of Automated Vehicles in the General Development Process

To guarantee the product safety of automated vehicles, a thorough development concept is needed that is at least in line with the state of the art and science. To this end, a general development process is proposed below, as is principally in use among car manufacturers for the development of series production vehicles, partially with small adjustments. For highly automated vehicles, the development refers to measures regarding the safety process, activities to ensure controllability, and appropriate human-machine interaction (see Fig. 4.12).
The generic development process for fully automated vehicle functions focuses on expert knowledge and the safety process and is represented graphically as a V-Model (see Fig. 4.12). Together with the development stages for high automation, it shows the logical sequence of product development phases and selected milestones, but not necessarily how long each stage lasts or the time between phases (Knapp, Neumann, Brockmann, Walz, Winkle, 2009).
This methodical process thus forms a simplified representation in the form of a V-Model, which allows for iteration loops within the individual development phases involving all parties. Within this V-shaped process structure (see Fig. 4.13), elements of the safety process are taken into consideration. In addition, early and regular involvement of interdisciplinary expert groups is recommended: from the definition phase until validation, sign-off, and start of production, experts from research, (pre-)development, functional safety, product analysis, legal services, traffic safety, technology ethics, ergonomics, production, and sales should participate in the development process.
In the development steps for advanced automated vehicles, product and functional safety stands out as a key requirement. It relates to the whole interaction between the vehicle and its environment. Safe driver interaction and take-over procedures (Bengler, Flemisch, 2011; Bengler, Zimmermann, Bortot, Kienle & Damböck, 2012) should thus be considered wherever an interface to the driver is necessary for the use case and functionality. Concerning product safety, fully automated vehicles essentially involve five usage situations.
Functional safety of fully automated vehicles should be ensured:
1. within performance limits
2. at performance limits
3. beyond performance limits

Functional safety should also be examined:
4. during system failures
5. after system failures
Careful development with regard to safe usage of driverless vehicles must ensure they are able to recognize the criticality of a situation, decide on suitable measures for averting danger (e.g. degradation, a driving maneuver) that lead back to a safe state, and then carry out these measures. The requirements to be fulfilled from the above V-Model, which correspond to the overall product life cycle, are extensive and necessary for a completely new development. However, most systems are not developed from scratch but on the basis of existing components that have been in use for a long time without any problems or errors. A developer does not want to carry out a new development for a component that has already proven itself in operation. In this case, a component can be qualified for use in a new automated driving system by a “proven in use” argument. When demonstrating “proven in use”, it must be shown that the development was carried out carefully and meets the relevant requirements. In addition, it must be confirmed that systematically collected data show that errors (see 4.7.3.1 “failure in time”) have occurred sufficiently rarely (see ISO 26262 Part 8, Clause 14). This proof is based on consistent configuration management during development and the evaluation of errors during operation.
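The criterion that errors have “occurred sufficiently rarely” can be made concrete with a zero-failure estimate. The sketch below is not the normative ISO 26262 procedure, and the fleet exposure figure is purely illustrative; it computes a one-sided upper confidence bound on the failure rate of a component that has run failure-free over a given number of operating hours:

```python
import math

def fit_upper_bound(operating_hours: float, confidence: float = 0.7) -> float:
    """Upper bound on the failure rate, in FIT (failures per 1e9 h),
    for a component observed failure-free over `operating_hours`.
    Zero-failure Poisson bound: lambda <= -ln(1 - C) / T."""
    return -math.log(1.0 - confidence) / operating_hours * 1e9

# Illustrative fleet exposure: 1.2e8 failure-free operating hours
print(round(fit_upper_bound(1.2e8, confidence=0.7), 1))  # 10.0 (FIT)
```

More accumulated failure-free field experience tightens the bound, which is why proven-in-use arguments rest on systematically collected fleet data rather than single-vehicle tests.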
Fig. 4.14 gives an overview of a possible workflow regarding final sign-off, up to decommissioning of a vehicle. In the final stages of developing an automated vehicle, the development team decides whether a final safety test is required for validation, to confirm that a sufficient level of safety for production has been reached. For this, the development team verifies that the vehicle reacts as previously predicted or in other ways appropriate to the situation. The data used here may come from risk assessment methods used during development, such as hazard and risk analysis. There are three equally valid paths for signing off vehicles: a direct sign-off based on an experience-based (e.g. proven in use) recommendation of the development team; reconfirmation of the final evidence of safety by an interdisciplinary forum of internal and external experts; or an objective proof. Evidence of functional safety can be provided by means of a confirmation test with relevant traffic scenarios based on accident, traffic-flow, weather, and vehicle operation data (see Ch. 3), or other verifiable relevant samples (see Fig. 4.14).
The development team chooses an appropriate path for each individual scenario. A mixed approach is also possible. When the safety team has conclusively confirmed the safety of the system design functionality, the final sign-off can be given (see Knapp, Neumann, Brockmann, Walz, Winkle, 2009).

4.7.6 Product Monitoring After Market Launch

Following careful development, a manufacturer is obliged to monitor automated vehicles after placing them on the market, in order to recognize previously unknown hazards and take necessary additional safety measures. If necessary, car manufacturers are urged to analyze potential dangers (which can also arise from unintended use or misuse) and react with appropriate measures, such as product recalls, redesign, or user information (see Fig. 4.14).
A judgment of the German Federal Court of Justice (BGH) is often quoted amongst product safety experts as a particular example of the product-monitoring duty for combination risks with third-party accessories. Model-specific motorbike handlebar cladding, an accessory that had first been passed by officially recognized experts from a testing organization in June 1977, was alleged to have been responsible for three spectacular accidents, including one fatality. On the day before the fatal accident, the motorcycle manufacturer in question wrote personal letters to warn all the riders of the affected model it had on record. The victim, however, never received the letter. Although the motorbike manufacturer expressly warned against using the cladding, the company was ordered to pay damages. The BGH established a fundamental judgment concerning this matter:
„Eine Pflicht zur Produktbeobachtung kann den Hersteller (und dessen Vertriebsgesellschaft) auch treffen, um rechtzeitig Gefahren aufzudecken, die aus der Kombination seines Produkts mit Produkten anderer Hersteller entstehen können, und ihnen entgegenzuwirken.“ (Bundesgerichtshof BGH, 1987) – in English: “A duty of product monitoring may also fall upon the manufacturer (and its distribution company) in order to uncover in good time, and to counteract, dangers that can arise from the combination of its product with the products of other manufacturers.”
In future, companies will not only be required to monitor the reliability of their products in practice but, above all, to alert their customers to any hazards in daily operation – including those that arise from the use or installation of accessories from other manufacturers.

4.7.7 Steps for Internationally Agreed Best Practices

Due to their interconnectedness and complexity, it will be difficult to get a clear overview of all the risks of automated vehicles in series operation. The objective is therefore to establish worldwide agreed best practices for legislation, liability, standards, risk assessment, ethics, and tests.
The ADAS Code of Practice, a result of the Response 3 project, was a fundamental step towards commonly agreed and legally binding European guidelines for advanced driver assistance systems. ADAS were characterized by all of the following properties: they support the driver in the primary driving task, provide active support for lateral and/or longitudinal control with or without warning, detect and evaluate the vehicle environment, use complex signal processing, and interact directly between the driver and the system (Knapp, Neumann, Brockmann, Walz, Winkle, 2009).
ADAS primarily operate rule-based at the maneuvering level (time spans between about one and ten seconds) and partly within the skill-based stabilization level (time spans of less than one second). Highly and fully automated vehicles, on the other hand, intervene in a knowledge-, skill-, and rule-based manner for more than one second at all driving levels (see Fig. 4.15).
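The time spans named above can be expressed as a small classifier. The third, knowledge-based navigation level is the one implied by the three-level driver behavior model the figure refers to; the cut-off values are the approximate ones from the text:

```python
def driving_level(duration_s: float) -> str:
    """Classify a driving action by its typical time span
    (three-level driver behavior model, see Fig. 4.15)."""
    if duration_s < 1.0:
        return "stabilization"  # skill-based, < 1 s
    if duration_s <= 10.0:
        return "maneuvering"    # rule-based, ~1-10 s
    return "navigation"         # knowledge-based, > 10 s

print(driving_level(0.3), driving_level(4.0), driving_level(60.0))
# stabilization maneuvering navigation
```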
Increasing sensitivity to defects is visible in a significant growth of product recalls worldwide. If unknown failures appear after vehicles have gone into production, appropriate measures have to be taken where necessary, according to a risk assessment.
For analyzing and evaluating risks stemming from product defects after market launch (in view of the necessity and urgency of product recalls), the EU and the German Federal Motor Transport Authority (Kraftfahrt-Bundesamt) use tables from the rapid alert system RAPEX (Rapid Exchange of Information System) (European Union, 2010). To classify risks, first the accident severity (extent of damage S, according to AIS for example) and the probability of harm are assessed – similarly to the ALARP principle (As Low As Reasonably Practicable) (Becker et al., 2004), the ISO 26262 standard (International Organization for Standardization, ISO 26262, 2018), and the ADAS Code of Practice for active longitudinal and lateral support. The degree of risk is derived from this. The final assessment of the urgency of required measures considers the risk of injury both for those at particular risk of being injured (as influenced by age, state of health, etc.) and for a healthy adult, as well as the use of protective measures such as appropriate warnings (see Fig. 4.16).
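The combination of severity and probability into a risk degree can be sketched as a lookup matrix. The table below is a simplified illustration in the spirit of the RAPEX guidelines; its categories and values are assumptions for this sketch, not the official EU table:

```python
# Rows: probability of harm (ascending); columns: severity of injury.
PROBABILITIES = ["very_low", "low", "medium", "high"]
SEVERITIES = ["S1_minor", "S2_serious", "S3_severe", "S4_fatal"]

RISK_MATRIX = [
    # S1        S2         S3         S4
    ["low",     "low",     "medium",  "medium"],   # very_low
    ["low",     "medium",  "medium",  "high"],     # low
    ["medium",  "medium",  "high",    "serious"],  # medium
    ["medium",  "high",    "serious", "serious"],  # high
]

def risk_degree(probability: str, severity: str) -> str:
    """Look up the risk degree for a probability/severity pair."""
    row = PROBABILITIES.index(probability)
    col = SEVERITIES.index(severity)
    return RISK_MATRIX[row][col]

print(risk_degree("high", "S4_fatal"))     # serious
print(risk_degree("very_low", "S1_minor")) # low
```

The derived degree then drives the urgency of the measure (from user information up to an immediate recall), adjusted by the vulnerability of the affected group and any protective measures in place.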
With regard to the classification of injury risk between “vulnerable humans” and “healthy adults” (Fig. 4.17), Kalache and Kickbusch – members of the Ageing and Health Program of the World Health Organization – published a well-accepted concept in 1997. They showed that functional abilities, such as muscle strength and cardiovascular performance, peak in early adulthood and decrease linearly with age. Furthermore, the physical capacity of the population varies with age.
Figure 4.17 suggests that every human being in early adulthood has a similar functional capacity, which depends on lifestyle, disposition, and environmental factors. The author’s many years of experience in road accident research confirm that age-dependent functional capacity has an influence on injury risk.
The following questions relate to the activities for functional safety management:
  • Are people responsible for the specified safety cycle named?
  • Are the developers and quality managers informed about the scope and phases?
  • How are the proofs for quality and project management provided?
  • Were the ASILs derived and assigned correctly based on the risk of a dangerous event?
  • Which criteria are used to decide whether it is a new development or just a product takeover?
  • How are the results of the risk analysis documented and communicated?
  • Which processes are used to support hardware development?
  • Were adequate measures taken to avoid systematic errors in highly complex hardware?
  • Which activities were defined for all V-Model phases?
  • What ensures that only the desired functions, and no unwanted ones, are included?
  • Which measures ensure that the integrated software is compatible with the software architecture?
  • Have the required methods been applied for the ASIL to be achieved in accordance with the design, the software and hardware components used?
  • Are relevant methods specified for the test cases to be executed?
  • Are necessary maintenance schedules and repair instructions created?
  • Which requirements must be fulfilled for a project safety plan?
  • How are changes to safety-relevant components analyzed and controlled?
  • Is a sufficiently independent auditor or assessor integrated into the development process?
  • Are the necessary processes documented for all project participants?
  • How is the final system and application safety documented?
(see Annex Fig. A.3, Example documentation sheet of the ADAS Code of Practice)

4.8 Conclusion and Outlook

Automated driving is currently a focus of legal interest. In 2017, the “Automated and Networked Driving” ethics commission appointed by the German Federal Minister of Transport presented its report. At the same time, the new German Road Traffic Law came into force. In its current version, § 1b StVG contains the passage that the vehicle driver may “turn away from traffic events and vehicle control”; however, he “must remain so attentive” that he can take over control “at any time”. In addition, ECE R 157 (level 3) and a further German law create the legal framework for autonomous vehicles (level 4) in defined operating areas on public roads.
In both cases, the main aim was not to hinder any development that could be expected to have a clear potential for damage avoidance and minimization. It follows that remaining risks do not stand in the way of the new technology if they contribute to a fundamentally positive risk balance (BGH decision). Dilemma situations have always served to clarify ethical and legal principles, as in the famous example of the so-called “trolley case”. The answer of the law here is clear: the killing of a human being with the intention of saving others from certain death may be excused in a concrete case, but it remains illegal in any case. The solution is therefore to avoid accidents altogether through adaptive, forward-looking driving.
Shifting responsibility from the driver or holder to the party responsible for the technical systems, in the sense of product liability, is under discussion. When the driving task is shared between a human driver and a technical system, responsibility must be redefined. The German liability system ultimately passes the risk of an accident on to the owner of the vehicle; furthermore, the manufacturers are liable within the framework of mandatory product liability. With this shift in liability, it must also be discussed how much safer, statistically, a technical system must be in order to be accepted by society, and which methods lead to reliable confidence in that assessment.
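A first-order answer to the statistical question can be derived from a zero-event argument: how much event-free exposure would a fleet need before one could claim, at a given confidence, that its event rate is below some baseline? The baseline rate below is a purely hypothetical illustration, not an empirical figure:

```python
import math

def exposure_to_demonstrate(target_rate_per_h: float,
                            confidence: float = 0.95) -> float:
    """Event-free operating hours needed to claim, at the given
    confidence, that the true event rate lies below the target
    (one-sided zero-event Poisson test: T >= -ln(1 - C) / rate)."""
    return -math.log(1.0 - confidence) / target_rate_per_h

# Hypothetical baseline: one severe event per 1e8 driving hours.
hours = exposure_to_demonstrate(1e-8, confidence=0.95)
print(f"{hours:.2e}")  # 3.00e+08 event-free hours
```

The result, roughly three times the reciprocal of the target rate, illustrates why purely statistical field proof is impractical for very rare events and why complementary methods (simulation, scenario-based testing) are needed.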
On the one hand, society’s expectations are understandable, as it increasingly demands the highest, state-of-the-art levels of safety for new technologies. On the other hand, unrealistic demands for technical perfection and the striving for 100% fault-free operation may keep automated vehicles from being launched on the market, and thus forfeit the chance of revolutionary potential benefits.
The market launch of highly and fully automated vehicles faces barriers. The first vendors on the market (the pioneers) therefore take on increased risks at the outset, so the potential total benefit of these new technologies to society can only be achieved together with all parties. Homann describes these decision conflicts during market launch using a decision-theoretic concept. To overcome this dilemma as it pertains to highly and fully automated vehicles, the incalculable risks for manufacturers must be made assessable and determinable through new institutional arrangements (Homann, 2005). Unconditional information and a transparent policy encourage and accelerate public discourse across all disciplines.
Due to previous licensing requirements for series production vehicles, drivers almost always have to keep their hands on the steering wheel and permanently stay in control of the vehicle. For the near future, automated vehicles and vehicle developments by IT companies, car manufacturers, and component suppliers will also be required to have a human driver as a responsible backup in complex traffic situations.
Driverless vehicles, on the other hand, signify the beginning of an utterly new dimension, for which new approaches and activities are essential (Matthaei et al., 2015). We must orient ourselves to the future potential of automated driving functions, learn from previous patterns, and, within the bounds of what is technically and economically reasonable, adjust old methods to the valid state of the art or state of science (Scharmer, Kaufer, 2013).
Besides generally clarifying who is responsible for accident and product risks, new accompanying measures, depending on the different automation and development levels, are also of use for a successful market launch and safe operation. This includes identifying relevant scenarios, environments, system configurations, and driver characteristics. Relevant maneuvers of driving robots have to be defined and assessed, for example using accident data (see Ch. 2) and virtual methods. Further investigation of real driving situations against system specifications is recommended, with tests on proving grounds, car clinics, field tests, human driver training, or special vehicle studies. Protective technical measures are necessary for the required exchange of information and the storage of vehicle data (e.g. in an Event Data Recorder), and against possible criminal attacks (see Ch. 4). Besides challenging and agreed data protection guidelines (Hilgendorf, 2015), experts in technology ethics will ensure compliance with ethical values. Within this, safety requirements have to be answered in terms of “How safe is safe enough?” Expert experience can also decisively contribute to increasing safety and meeting customer expectations for acceptable risks. In the light of increasing consumer demands, such experience – particularly of previous product liability procedures – makes a valuable contribution to improving product safety during the development and approval stages.
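One possible protective measure for stored vehicle data is an integrity tag, so that later tampering with recorded events can be detected. The sketch below uses a keyed hash (HMAC-SHA256); the record fields and key handling are hypothetical simplifications of what a production Event Data Recorder would require:

```python
import hashlib
import hmac
import json

def seal_record(record: dict, key: bytes) -> dict:
    """Attach an HMAC-SHA256 tag so later tampering with a stored
    event record can be detected."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"record": record, "tag": tag}

def verify_record(sealed: dict, key: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    payload = json.dumps(sealed["record"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sealed["tag"])

key = b"vehicle-specific-secret"  # assumed to live in secure hardware
sealed = seal_record({"t": 1712.4, "event": "AEB_trigger", "v_kmh": 47}, key)
print(verify_record(sealed, key))   # True
sealed["record"]["v_kmh"] = 30      # tampering with the stored data...
print(verify_record(sealed, key))   # False: manipulation is detected
```

An integrity tag alone does not provide confidentiality; in practice it would be combined with encryption and access control to satisfy data protection requirements as well.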
Before highly complex automated vehicle technologies – which will additionally be applied in a multi-layered overall system – can go into mass commercialization, interdisciplinary concerted development and sign-off processes are required. A reliable evaluation of sustainable, production-ready solutions demands new harmonized methods for comparable safety verification, e.g. by simulating relevant scenarios (Kompass et al., 2015; Helmer, 2015), including the planning of field tests (Wisselmann, 2015) from worldwide available and combined accident, traffic-flow, weather, and vehicle operation data (see Ch. 3). This also applies to fulfilling legal and licensing regulations, identifying new options for risk distribution (see Matthaei et al., 2015), and creating new compensation schemes.
To verify the duty of care in existing quality management systems, it is recommended to further develop experience-based, internationally valid guidelines with checklists built on the ADAS Code of Practice (Knapp et al., 2009; Becker, Schollinski, Schwarz, Winkle, 2003). These standards will further embody and document the state of the art and science within the bounds of technical suitability and economic feasibility. The ADAS Code of Practice was developed to bring safe Advanced Driver Assistance Systems – with active support of the main driving task (lateral and/or longitudinal control, including automated emergency brake interventions, AEB) – onto the market; it was published in 2009 by the European Automobile Manufacturers Association (ACEA). It corresponds with ISO 26262 for the requirements of electrical, electronic, and software components. As a development guideline, it contains recommendations for the analysis and assessment of ADAS human-machine interactions occurring during normal use and in case of failure (Knapp et al., 2009; Donner, Winkle, Walz & Schwarz, 2007). With increasing levels of automation, upgrades of functional safety, controllability (ISO 26262, ADAS Code of Practice), and other standardized methods such as virtual simulation will be necessary (Helmer, 2015). Today the standards do not cover functional insufficiencies, for instance the misinterpretation of objects or traffic situations and the resulting false positive system interventions. An integral, scenario-based approach is recommended because automated systems will have to control such scenarios. In the event of serious malfunctions that threaten severe damage, product experts from the development process should be involved in the study of the causes and be listened to. Motor vehicle experts who are not directly involved in the development should acquire the expertise to be able to provide a specialist appraisal of new technologies in court.
In the development of automated driving, networked thinking covering all disciplines is required, with a flexible yet structured area for action. So far, the development has opened up an unknown world with many uncertainties that may cause reservation and resistance. For a successful launch of production-ready automated vehicles, insights collected in vivo from both the past and the present are essential prerequisites. Despite the technical, legal, and economic risks, production readiness will in this way be of benefit to society.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://​creativecommons.​org/​licenses/​by/​4.​0/​), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Metadata
Title: Technical, Legal, and Economic Risks
Author: Thomas Winkle
Copyright year: 2022
DOI: https://doi.org/10.1007/978-3-658-34293-7_4
