
2019 | Book

Fahrerassistenzsysteme 2018

Von der Assistenz zum automatisierten Fahren 4. Internationale ATZ-Fachtagung Automatisiertes Fahren


About this book

The proceedings of the ATZlive conference "Fahrerassistenzsysteme 2018" address in their presentations, among other topics, which non-driving activities the driver may perform in automated mode and in what manner, and how SAE Levels 3 and 4 can be distinguished from one another. Further aspects are the driver (human) interacting with the vehicle (machine) and the interdependencies involved.

The conference is an indispensable platform for the exchange of knowledge and ideas among researchers and developers from all companies and institutions, providing important impulses for their daily work.

Table of Contents

Radar for Autonomous Driving – Paradigm Shift from Mere Detection to Semantic Environment Understanding
The challenges for sensors and their associated perception algorithms in driverless vehicles are tremendous. They have to provide, more comprehensively than ever before, a model of the complete static and dynamic surroundings of the ego-vehicle, so that the correlation of both with the ego-vehicle's movement can be understood. For dynamic objects, this means that radar has to provide the dimensions, the complete motion state, and the class information in highway, rural, and inner-city scenarios. For the static world, new algorithm schemes have to be developed that enhance the shape representation of an object with image-like semantics. To generate the necessary information, radar networking for 360° coverage has to be reinvented, and radar data processing toolchains have to be revolutionized by applying artificial intelligence and advanced signal processing in a synergetic manner.
Jürgen Dickmann, Jakob Lombacher, Ole Schumann, Nicolas Scheiner, Saeid K. Dehkordi, Tilmann Giese, Bharanidhar Duraisamy
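The image-like representation of the static world mentioned above can be illustrated with a minimal sketch: static radar detections accumulated over several scans into a 2-D grid, which semantic (CNN-style) processing could then treat like an image. Grid resolution and extent are assumed values, not taken from the paper.

```python
# Sketch: accumulate static radar detections into a 2-D grid so the static
# world gets an image-like representation. Cell size and extent are assumptions.
import numpy as np

def accumulate_grid(detections, cell_m=0.5, extent_m=20.0):
    """Count radar detections per cell in a square grid around the ego vehicle."""
    n = int(2 * extent_m / cell_m)
    grid = np.zeros((n, n), dtype=np.int32)
    for x, y in detections:
        i = int((x + extent_m) / cell_m)
        j = int((y + extent_m) / cell_m)
        if 0 <= i < n and 0 <= j < n:
            grid[i, j] += 1
    return grid

# Three scans of a guardrail-like structure build up a dense line in the grid:
hits = [(5.0, y * 0.5) for y in range(-10, 10)] * 3
grid = accumulate_grid(hits)
```

Repeated hits in the same cells separate stable static structure from noise, which is the precondition for applying image-semantic methods to radar data.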
Improving the Environment Model for Highly Automated Driving by Extending the Sensor Range
This paper describes a novel approach to coping with driving scenarios in highly automated driving that are currently handled only under the driver's control. The approach presented in this paper is currently being implemented as a prototype for use in our test fleet. It combines techniques well established in robotics, such as Simultaneous Localization And Mapping (SLAM), as well as end-to-end protection and image compression algorithms, with big data technology used in a connected-car context. This enhances the positioning of individual vehicles in their Local Environment Model (LEM) and is the next step towards overcoming current dependencies on in-vehicle sensors by using additional cloud-based sensor processing to gain information.
Nicole Beringer
Efficient Sensor Development Using Raw Signal Interfaces
A sensor’s ability to accurately recognize the environment in situations of any kind is an essential requirement for the effectiveness of advanced driver assistance systems. Changing environmental conditions must never cause malfunctions, nor must complex traffic situations in a variety of surroundings. In addition to the development of the actual driving function, the recognition of the environment by individual sensors and the fusion of data from several sensors in a central environment model need to be considered. The vehicle should capture and process the environment as error-free as possible as a prerequisite for ensuring the perfect functionality of advanced driver assistance systems and automated driving functions (Fig. 1).
Martin Herrmann, Helmut Schön
360° Surround View Radar for Driver Assistance and Automated Driving
Fully autonomous vehicles (AVs) are unlikely to be commercially available before 2020. Meanwhile, advanced driver assistance systems (ADAS) will play a crucial role in preparing regulators and consumers for the predictable reality of vehicles taking over the control from drivers.
Dietmar Stapel, Carsten Roch, Helgo Dyckmanns, Martin Mühlenberg
Overall Approach to Standardize AD Sensor Interfaces: Simulation and Real Vehicle
The standardization of subsections of complex hardware and software setups with the focus on automated driving functionality allows cost reduction in development, test and validation, as well as a simplification of component test infrastructures involving many different partners. The current VDA initiative for an ISO standard (ISO 23150) towards standardized hardware sensor interfaces [1] was mainly motivated by the struggle of dealing with large sensor setups in cars for functions enabling automation level 3 and above. The integration of Radar, LIDAR, Ultrasonic and Camera systems amongst others from various suppliers, including the component tests of up to 50 sensors for fully automated driving, demand common test procedures. These are complicated by contradictory interpretation and description of the commonly perceived environment. This paper highlights the potential benefit of this undertaking by enlarging the scope towards other test instances involving the complete chain of effects focusing on Software in the Loop (SIL) using sensor models for development, verification and validation. Therefore, the Open Simulation Interface (OSI), endorsed by the Pegasus Project [2], is introduced as the first reference implementation of the upcoming ISO standard for rapid prototyping and the common development of sensor models.
Carlo van Driesten, Thomas Schaller
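The benefit of a supplier-independent object-level interface, as pursued by ISO 23150 and OSI, can be sketched with a small data model. All field names below are illustrative assumptions, not taken from the actual standard.

```python
# Sketch of an object-level sensor detection message in the spirit of a
# standardized interface (ISO 23150 / OSI). Field names are illustrative
# assumptions only, not the standard's actual schema.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class SensorType(Enum):
    RADAR = "radar"
    LIDAR = "lidar"
    CAMERA = "camera"
    ULTRASONIC = "ultrasonic"

@dataclass
class DetectedObject:
    object_id: int
    x_m: float             # longitudinal position, vehicle frame
    y_m: float             # lateral position, vehicle frame
    vx_mps: float          # longitudinal velocity
    existence_prob: float  # 0..1, supplier-independent confidence

@dataclass
class SensorDataMessage:
    sensor_id: str
    sensor_type: SensorType
    timestamp_us: int
    objects: List[DetectedObject] = field(default_factory=list)

# A fusion module can then consume messages from any supplier identically:
def high_confidence_objects(msg: SensorDataMessage, threshold: float = 0.8):
    return [o for o in msg.objects if o.existence_prob >= threshold]

msg = SensorDataMessage(
    sensor_id="front_radar_1",
    sensor_type=SensorType.RADAR,
    timestamp_us=1_000_000,
    objects=[DetectedObject(1, 42.0, 0.5, -3.2, 0.95),
             DetectedObject(2, 80.0, 3.4, 0.0, 0.40)],
)
```

Because the downstream fusion code depends only on the common message shape, the same test procedures apply to radar, LIDAR, camera, and ultrasonic components from different suppliers.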
Virtualization for Verifying Functional Safety of Highly Automated Driving Using the Example of a Real ECU Project
Validating and verifying highly automated driving (HAD) systems is a huge challenge due to their complexity, short development cycles, and legal safety requirements. The latter are defined by ISO 26262, a standard for functional safety of automotive electric/electronic systems. ECU virtualization is key to mastering this challenge because it allows testing tasks to be transferred from the road to simulations on a large scale. If a data-driven approach is chosen for this purpose, comprehensive, loss-free, and time-synchronized recordings of measurement data are required, which must be gathered during extensive test drives. The result is huge data volumes, which need to be managed, categorized, and processed in a way that lets them serve as input for simulated test drives later. In the case of a model-driven approach, the accuracy of the models of driver, vehicle, and environment is crucial to obtain meaningful test results. During ECU virtualization, it is vital that the virtual ECU reproduces the behavior of the real ECU as closely as possible. ETAS ISOLAR-EVE can provide substantial benefit due to its way of virtualization. In the end, the value of virtualization depends on sufficient equivalence of the simulated system behavior with the real system behavior. If this can be proven, simulation and virtualization can minimize the need for expensive prototypes, test vehicles, and test drives, while at the same time satisfying legal requirements. In addition, virtualization keeps development cycles short and costs limited. The feasibility of this approach is shown using the example of a real ECU project, for which ETAS has provided tools and consulting.
Johannes Wagner, Joachim Löchner, Oliver Kust
Derivation and Application of an Observer Structure to Detect Inconsistencies Within a Static Environmental Model
Smart vehicles like autonomously driving cars have the advantage of making a ride safer and more comfortable, because intelligent vehicles respond better and faster than human beings to critical situations [1]. Accurate knowledge of its environment is essential for a car that drives autonomously; based on this, a maneuver adapted to the situation can be planned and carried out. The perception takes place via a variety of sensors such as cameras, radars, and LIDARs [2, 3]. However, the disadvantage of an intelligent vehicle is the complex infrastructure of sensors, ECUs, communication equipment, etc. needed to perceive the environment correctly. As the complexity increases, the susceptibility of such a system to errors increases as well. Errors include the failure of individual units and the incorrect processing of information, but also manipulative interference by unauthorized persons.
Moritz Lütkemöller, Malte Oeljeklaus, Torsten Bertram, Klaus Rink, Ulrich Stählin, Ralph Grewe
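The basic idea of detecting inconsistencies within the environment model can be sketched as a residual check between redundant estimates of the same static feature. The residual definition and plausibility gate below are illustrative assumptions, not the observer structure derived in the paper.

```python
# Minimal sketch: flag an inconsistency when two redundant estimates of the
# same static landmark (e.g., from camera and radar) disagree beyond a gate.
# Residual and threshold are illustrative assumptions.
import math

def residual(est_a, est_b):
    """Euclidean distance between two (x, y) position estimates."""
    return math.hypot(est_a[0] - est_b[0], est_a[1] - est_b[1])

def is_inconsistent(est_a, est_b, gate_m=0.5):
    """Flag the pair when the residual exceeds a plausibility gate."""
    return residual(est_a, est_b) > gate_m

# A failed unit, a processing error, or manipulation shows up as a large residual:
ok = is_inconsistent((10.0, 2.0), (10.1, 2.1))    # small deviation, plausible
bad = is_inconsistent((10.0, 2.0), (14.0, 2.0))   # implausible deviation
```

The same pattern covers all three error sources named in the abstract: unit failure, faulty processing, and manipulation all surface as residuals that exceed the expected measurement noise.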
Security in Hybrid Vehicular Communication Based on ITS G5, LTE-V, and Mobile Edge Computing
The introduction of a Cooperative Intelligent Transportation System (C-ITS) is an enabler for increased safety on roads and future autonomous vehicle deployments. Further, traffic efficiency is intended to be increased by having smoother and more efficient traffic flow. New applications are planned in the area of driver convenience, public transportation and commercial carriage of goods. In this context, the term vehicle-to-everything (V2X) communication is used to describe real-time communication in transportation.
Jan-Felix van Dam, Norbert Bißmeyer, Christian Zimmermann, Kurt Eckert
Automated Driving – Misunderstandings About Level 3 and Euro NCAP’s Activities in this Field
This section aims to resolve common misunderstandings about Level 3 automated driving. It commences with a brief overview of the fundamentals of automated driving, focusing on the distinction between Level 2 and Level 3. Next, common confusions about Level 3 automation are addressed and clarified.
Andre Seeck, Elisabeth Shi, André Wiggerich
Putting People Center Stage – To Drive and to be Driven
Technological advancements allow the implementation of systems that can assist humans and even perform tasks that were previously carried out manually. To classify the multiple ways in which such systems can provide services to humans, generic frameworks were introduced (e.g., Jipp and Ackerman 2016; Kaber and Endsley 1997; Parasuraman et al. 2000). For example, Parasuraman et al. (2000) based their framework on a generic perception-cognition-action cycle and used four functions in which systems can assist humans.
Klas Ihme, Katharina Preuk, Uwe Drewitz, Meike Jipp
Towards Positive User Experience (UX) for Automated Driving
It is only a matter of time until vehicles are capable of chauffeuring the driver whenever he or she prefers. After all, vehicles will gradually take over more and more driving tasks over the coming years, so that the driver turns into a user who is free to engage in a variety of other pursuits. Before that, however, the driver needs to learn to trust the new technology and to develop an awareness of the current driving situation, which in turn means that the elements of human-machine interaction need to exert a positive influence on the user experience (UX). The prerequisite for a positive UX is a holistic approach that takes the driver's preferences and requirements, the current situation, and the driver's environment into account.
Guido Meier-Arendt
Systematically Generated and Complete Tests for Complex Driving Scenarios
A key success factor when realizing autonomous vehicles is the validation of their functionality. Due to their system architecture involving multiple environmental sensors, such as video cameras and LIDAR sensors, the input vector into the Advanced Driver Assistance System (ADAS) is high dimensional. The signal processing has to reliably execute the perception and cognition of the current driving situation. The environment consists of an arbitrary number of elements, including traffic participants, material properties, weather conditions, road signs or buildings. Based on the availability of a semantic, machine-readable representation of scenarios, such driving situations can be described. This allows the realization of a continuously growing test case database for the validation of autonomous driving functions.
Marc Habiger, Marius Feilhauer, Jürgen Häring
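The systematic generation of test cases from a semantic, machine-readable scenario representation can be sketched as parameter-space expansion. The scenario schema, parameter names, and values below are illustrative assumptions, not the paper's actual format.

```python
# Sketch: a machine-readable scenario description and complete test-case
# expansion over its parameter space. Schema and values are assumptions.
from itertools import product

scenario = {
    "maneuver": "cut_in",
    "parameters": {
        "ego_speed_kmh": [50, 80, 130],
        "cut_in_gap_m": [10, 20, 40],
        "weather": ["dry", "rain"],
    },
}

def expand_test_cases(scenario):
    """Generate one concrete test case per combination of parameter values."""
    names = list(scenario["parameters"])
    cases = []
    for values in product(*(scenario["parameters"][n] for n in names)):
        case = {"maneuver": scenario["maneuver"]}
        case.update(dict(zip(names, values)))
        cases.append(case)
    return cases

cases = expand_test_cases(scenario)  # 3 * 3 * 2 = 18 concrete test cases
```

Because each combination is enumerated rather than hand-picked, the resulting test case database is complete with respect to the declared parameter ranges and grows continuously as new scenarios are added.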
Connected Development in Driver Assistance – Paving the Way to Automated Driving Through Remote Validation and Big Data Analysis
The increasing complexity of driver assistance systems as well as the growing volume of data pose particular challenges to the development and validation of new systems. Connected Development is a new cloud-based approach which simplifies the validation process and shortens development iterations. Connected Development is an ongoing cycle, consisting of collecting data from the vehicle fleet and transferring it into cloud storage, data management and analysis, writing new optimized software on request, and software updates via remote flashing.
Even before the project starts, the engineers define which functions and driving situations should be monitored by Remote Validation. An on-board unit is required for transferring data to the cloud backend. While driving, surround sensors constantly capture the environment and process a multitude of data. If a defined situation occurs, such as Automatic Emergency Braking, only data from surround sensors categorized as relevant is captured by the on-board unit. The data is then uploaded to the cloud backend via a secure wireless connection practically in real time. At the same time and in the same manner, vehicle networking helps save data from locations around the world with different climatic conditions and from a range of vehicle configurations. The data center collects, organizes, checks and saves the recorded data from all networked vehicles and makes them available to the engineers. In this way, the engineers can effectively analyze the data from relevant driving scenarios. New software versions with optimized function performance and the corresponding validation jobs can instantly be developed based on the latest findings. After successful laboratory tests, the new software can be distributed securely and efficiently in the development vehicles via remote flashing.
Connected Development enables shorter learning cycles, increased efficiency and guaranteed quality in the development of new driver assistance systems. The field data exploration continues this approach even after serial production begins. In this case, it serves as a key enabling technology for the validation of future highly automated driving functions as well as for new data-based services such as Predictive Diagnosis and Maintenance.
Tobias Radke, Alberto Fernandez
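The trigger-based capture described above, where only data around a defined situation such as an Automatic Emergency Braking event leaves the vehicle, can be sketched with a rolling buffer. Class names, the buffer size, and the trigger flag are illustrative assumptions.

```python
# Sketch: continuously buffer sensor samples in a ring buffer and persist the
# buffer only when a defined event (here an AEB activation) occurs.
# Names and buffer length are assumptions for illustration.
from collections import deque

class TriggeredRecorder:
    def __init__(self, buffer_len=100):
        self.buffer = deque(maxlen=buffer_len)  # rolling pre-trigger history
        self.uploads = []                       # stands in for the cloud backend

    def on_sample(self, sample, aeb_active=False):
        self.buffer.append(sample)
        if aeb_active:
            # Only data around the defined situation is sent to the backend.
            self.uploads.append(list(self.buffer))

rec = TriggeredRecorder(buffer_len=3)
for t in range(10):
    rec.on_sample({"t": t}, aeb_active=(t == 5))
```

Uploading only the pre-trigger window rather than the continuous stream is what keeps the wireless transfer near real time despite the multitude of surround-sensor data being processed.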
Truck Platooning – A Pragmatical Approach
This paper was created for the 4th ATZ live conference Driver Assistance Systems 2018 on April 18th and 19th, 2018 in Wiesbaden, Germany. WABCO, as a global supplier of technologies and services that improve the safety, efficiency, and connectivity of commercial vehicles, is also working in the field of platooning. While almost all platooning efforts in the industry aim for very close headways and a high automation level combined with electronically coupled vehicles, WABCO is additionally investigating a slightly different platooning approach, which might be easier to realize and might find higher acceptance among people.
Stephan Kallenbach
aFAS – How to Get a Driverless Prototype on the Road?
Mobile road works on highway hard shoulder lanes are usually protected against oncoming traffic by protective vehicles equipped with a mobile warning trailer. The service staff operating the protective vehicle for road maintenance works run a high risk of being involved in severe accidents with surrounding traffic. Motivated by reducing this risk, the "Automatic unmanned protective vehicle for mobile construction sites on motor highways" (aFAS) project, funded by the Federal Ministry for Economic Affairs and Energy (BMWi), started in August 2014. The aim of this project is to develop an unmanned protective vehicle that is capable of performing this task driverlessly by automatically following mobile road works or, respectively, a preceding work vehicle equipped with V2V technology.
Patrick Jiskra, Peter Strauß, Walter Schwertberger
CAN over Automotive Ethernet for Trailer Interface
Automated driving with commercial vehicles (CV) faces additional challenges due to a possibly attached trailer. In Europe approximately 65% of the CVs are towing trailers.
Andreas Goers, Sebastian Kühne
An Overview of Deep Learning and Its Applications
Deep learning is the machine learning method that changed the field of artificial intelligence in the last five years. In the view of industrial research, this technology is disruptive: It considerably pushes the border of tasks that can be automated, changes the way applications are developed, and is available to virtually everyone.
Michael Vogt
Potential of Virtual Test Environments for the Development of Highly Automated Driving Functions Using Neural Networks
This paper outlines the implications and challenges that modern algorithms such as neural networks may have for the process of function development for highly automated driving. In this context, an approach is presented for how synthetically generated data from a simulation environment can help accelerate and automate the complex process of data acquisition and labeling for these neural networks. A concept of an exemplary implementation is shown, and first results of training a convolutional neural network using these synthetic data are presented.
Raphael Pfeffer, Patrick Ukas, Eric Sax
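The core advantage of synthetic data generation is that the simulation knows every object's pose, so labels come for free instead of requiring manual annotation. A minimal sketch, assuming a pinhole camera model with illustrative focal length and image size (not the paper's actual setup):

```python
# Sketch: derive a 2-D image bounding-box label from simulation ground truth
# by projecting an object's known pose and size through an assumed pinhole
# camera. Focal length and principal point are illustrative values.
def project_box(x_m, y_m, width_m, height_m, focal_px=1000.0, cx=640.0, cy=360.0):
    """Project an upright object at (x, y) in camera coordinates to an
    image-space bounding box (u_min, v_min, u_max, v_max)."""
    u = cx + focal_px * y_m / x_m             # horizontal image position
    v = cy                                    # simplification: object at axis height
    half_w = focal_px * (width_m / 2) / x_m
    half_h = focal_px * (height_m / 2) / x_m
    return (u - half_w, v - half_h, u + half_w, v + half_h)

# A 1.8 m wide, 1.5 m tall vehicle 20 m ahead, 2 m to the side:
label = project_box(20.0, 2.0, 1.8, 1.5)
```

Running this for every object in every rendered frame yields perfectly consistent annotations at simulation speed, which is what makes labeling automatable.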
Incorporating Human Driving Data into Simulations and Trajectory Predictions
The development of algorithms for automated driving is a very challenging task. Recent progress in machine learning suggests that many algorithms will have a hybrid structure composed of deterministic or optimization-based and learning-based elements. To train and validate such algorithms, realistic simulations are required. They need to be interaction based, incorporate intelligent surrounding traffic, and the behavior of other traffic participants has to be probabilistic. Current simulation environments for automotive systems often focus on vehicle dynamics; microscopic traffic simulations, on the other hand, do not take vehicle dynamics into account. The few simulation software products that combine both elements still have at least one major shortcoming: their lane-change trajectories disregard human driving dynamics during such maneuvers. Consequently, machine learning algorithms developed and trained in simulations hardly generalize to non-synthetic data and therefore to real-world applications.
Manuel Schmidt, Carlo Manna, Till Nattermann, Karl-Heinz Glander, Torsten Bertram
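A common way to give simulated lane changes human-like dynamics is a minimum-jerk lateral profile, i.e., a quintic polynomial with zero lateral velocity and acceleration at both ends. This is a generic sketch of that idea, not the paper's trained model; duration and lane width are assumed values.

```python
# Sketch: smooth lateral lane-change profile via the minimum-jerk quintic
# 10*s^3 - 15*s^4 + 6*s^5, which has zero velocity and acceleration at both
# ends. Duration and lane width are illustrative assumptions.
def lateral_offset(t, duration_s=4.0, lane_width_m=3.5):
    """Lateral offset (m) at time t for a smooth lane change."""
    s = min(max(t / duration_s, 0.0), 1.0)   # normalized time in [0, 1]
    return lane_width_m * (10 * s**3 - 15 * s**4 + 6 * s**5)

y0 = lateral_offset(0.0)    # start in the ego lane
ymid = lateral_offset(2.0)  # half the lane width at mid-maneuver
y1 = lateral_offset(4.0)    # arrived in the target lane
```

Trajectories generated this way avoid the unrealistic step-like lane changes of purely microscopic traffic models, which is exactly the gap the abstract identifies between such simulations and human driving data.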
Deep Learning-Based Multi-scale Multi-object Detection and Classification for Autonomous Driving
Autonomous driving vehicles need to perceive their immediate environment in order to detect other traffic participants such as vehicles or pedestrians. Vision-based functionality using camera images has been widely investigated because of the low sensor price and the detailed information cameras provide. Conventional computer vision techniques are based on hand-engineered features; due to the very complex environmental conditions, these limited feature representations fail to uniquely identify a specific object. Thanks to the rapid development of processing power (especially GPUs), advanced software frameworks, and the availability of large image datasets, Convolutional Neural Networks (CNN) have distinguished themselves by scoring best on popular object detection benchmarks in the research community. Using deep architectures of CNN with many layers, they are able to extract both low-level and high-level features from images, skipping the feature design procedures of conventional computer vision approaches. In this work, an end-to-end learning pipeline for multi-object detection based on one existing CNN architecture, namely the Single Shot MultiBox Detector (SSD) [1], with real-time capability, is first reviewed. The SSD detector predicts the object's position based on feature maps of different resolution together with a default set of bounding boxes. Using the SSD architecture as a starting point, this work focuses on training a single CNN to achieve high detection accuracy for vehicles and pedestrians computed in real time. Since vehicles and pedestrians have different sizes, shapes, and poses, independent NNs are normally trained to perform the two detection tasks. It is thus very challenging to train one NN to learn the multi-scale detection ability. The contribution of this work can be summarized as follows:
  • A detailed investigation on different public datasets (e.g., KITTI [2], Caltech [3] and Udacity [4] datasets). The datasets provide annotated images from real world traffic scenarios containing objects of vehicles and pedestrians.
  • A data augmentation and weighting scheme is proposed to tackle the problem of class imbalance in the datasets to enable the training for both classes in a balanced manner.
  • Specific default bounding box design for small objects and further data augmentation techniques to balance the number of objects in different scales.
  • Extended SSD+ and SSD2 architectures are proposed in order to improve the detection performance while keeping the computational requirements low.
Maximilian Fink, Ying Liu, Armin Engstle, Stefan-Alexander Schneider
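The class-imbalance weighting idea from the second bullet above can be sketched with inverse-frequency loss weights. The counts and the normalization are illustrative assumptions; the paper's actual augmentation and weighting scheme may differ.

```python
# Sketch: weight each class's loss contribution inversely to its frequency so
# that rare pedestrians and frequent vehicles are trained in a balanced
# manner. Counts and normalization are illustrative assumptions.
def class_weights(counts):
    """Inverse-frequency class weights, normalized to mean 1."""
    total = sum(counts.values())
    raw = {c: total / n for c, n in counts.items()}
    mean = sum(raw.values()) / len(raw)
    return {c: w / mean for c, w in raw.items()}

weights = class_weights({"vehicle": 9000, "pedestrian": 1000})
# The rare class receives the larger loss weight.
```

Multiplying each training sample's loss by its class weight prevents the dominant vehicle class from drowning out the pedestrian gradient signal during joint training of the single detector.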
4. ATZ-Fachtagung Fahrerassistenzsysteme – Von der Assistenz zum automatisierten Fahren
220 participants, considerably more than in the previous year, gathered on 18 and 19 April 2018 at the newly opened Rhein-Main Congress Centrum in Wiesbaden to learn about and discuss news on assistance systems, automated driving, and autonomous vehicles. 22 exhibitors accompanied the conference with their stands and exhibits.
Mathias Heerwagen
Fahrerassistenzsysteme 2018
Univ.-Prof. Dr. Torsten Bertram
