2017 | Book

Computer Vision for Driver Assistance

Simultaneous Traffic and Driver Monitoring

About this Book

This book summarises the state of the art in computer vision-based driver and road monitoring, focussing in particular on monocular vision technology, with the aim of addressing the challenges of driver-assistance and autonomous driving systems.

While systems designed to assist drivers of on-road vehicles are currently converging towards the design of autonomous vehicles, the research presented here focuses on scenarios where a driver is still assumed to pay attention to the traffic while operating a partially automated vehicle. In addition to proposing various computer vision algorithms, techniques and methodologies, the authors provide a general review of computer vision technologies relevant to driver assistance and fully autonomous vehicles.

Computer Vision for Driver Assistance is the first book of its kind and will appeal to undergraduate and graduate students, researchers, engineers and those generally interested in computer vision-related topics in modern vehicle design.

Table of Contents

Frontmatter
Chapter 1. Vision-Based Driver-Assistance Systems
Abstract
This chapter outlines the general context of the book. Autonomous driving is still at a stage where drivers are expected to be in control of the vehicle at all times, but the automated control features already provided by the vehicle (based on input data generated by different sensors) enhance safety and driver comfort. We especially consider automated control features made possible by the use of camera data.
Mahdi Rezaei, Reinhard Klette
Chapter 2. Driver-Environment Understanding
Abstract
This book focuses in particular on driver-environment understanding as briefly outlined at the end of the previous chapter. This chapter provides a more detailed introduction, motivations, and a review of the state-of-the-art in this area of vision-based driver-assistance systems. The chapter also discusses existing challenges and outlines the structure of the book.
Mahdi Rezaei, Reinhard Klette
Chapter 3. Computer Vision Basics
Abstract
In this chapter we present and discuss the basic computer vision concepts, techniques, and mathematical background that we use in this book. The chapter introduces image notations, the concept of integral images, colour space conversions, the Hough transform for line detection, camera coordinate systems, and stereo computer vision.
Mahdi Rezaei, Reinhard Klette
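The integral-image concept mentioned in this chapter can be illustrated with a short sketch. The following Python/NumPy code is a minimal illustration of the idea only (the function names and the toy example are ours, not taken from the book): any rectangular pixel sum is obtained from just four lookups into the integral image.

```python
import numpy as np

def integral_image(img):
    """Cumulative sums over rows and columns: ii[y, x] holds the sum of all
    pixels in the rectangle spanning (0, 0) to (y, x) inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum of pixels in the rectangle [top..bottom] x [left..right]
    using four lookups into the integral image."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

if __name__ == "__main__":
    img = np.arange(16, dtype=np.int64).reshape(4, 4)
    ii = integral_image(img)
    # Sum of the 2x2 block at rows 1..2, cols 1..2 -> 5 + 6 + 9 + 10 = 30
    print(rect_sum(ii, 1, 1, 2, 2))
```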
Chapter 4. Object Detection, Classification, and Tracking
Abstract
In this chapter we outline object detection and object recognition techniques that are of relevance for the remainder of the book, focusing on supervised and unsupervised learning approaches. For each method, the chapter provides technical details, discusses strengths and weaknesses, and gives examples and applications. This material supports the selection of an appropriate object detection technique for computer vision applications, including driver-assistance systems.
Mahdi Rezaei, Reinhard Klette
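As an illustration of the kind of supervised detector discussed in this chapter, the sketch below runs OpenCV's pretrained frontal-face Haar cascade on a single frame. This is a generic example under our own assumptions (OpenCV's stock cascade file and a hypothetical input image "driver.jpg"), not the detectors trained in the book.

```python
import cv2

# Pretrained frontal-face cascade shipped with OpenCV; the file name and
# parameters here are OpenCV defaults, not the detectors trained in the book.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)  # mitigate lighting variation
    # scaleFactor/minNeighbors are typical values; tune per application
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                    minSize=(30, 30))

if __name__ == "__main__":
    frame = cv2.imread("driver.jpg")  # hypothetical input frame
    for (x, y, w, h) in detect_faces(frame):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detections.jpg", frame)
```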
Chapter 5. Driver Drowsiness Detection
Abstract
In this chapter we propose a method to assess driver drowsiness based on face and eye-status analysis. The chapter starts with a detailed discussion of effective ways to create a strong classifier (the “training phase”), and continues with a novel optimization method for the “application phase” of the classifier. Together, these significantly improve the performance of our Haar-like feature-based detectors in terms of speed, detection rate, and detection accuracy under non-ideal lighting conditions and for noisy images. The proposed framework includes a preprocessing denoising method, the introduction of Global Haar-like features, a fast adaptation method to cope with rapid lighting variations, as well as a Kalman filter tracker that reduces the search region and indirectly supports our eye-state monitoring system. Experimental results on the MIT-CMU dataset, the Yale dataset, and our own recorded videos, together with comparisons against standard Haar-like detectors, show noticeable improvements over previous methods.
Mahdi Rezaei, Reinhard Klette
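The role of the Kalman filter tracker mentioned above, namely shrinking the search region for the eye detector, can be sketched as follows. The constant-velocity model, the noise covariances, and the search margin are illustrative assumptions, not the parameters used in the book.

```python
import cv2
import numpy as np

# Constant-velocity Kalman filter over the eye-region centre (x, y, vx, vy).
# Covariances and the search margin below are illustrative values only.
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def predicted_search_window(margin=40):
    """Predict the next eye-region centre and return a reduced search box."""
    cx, cy = kf.predict()[:2].flatten()
    return (int(cx - margin), int(cy - margin),
            int(cx + margin), int(cy + margin))

def correct_with_detection(cx, cy):
    """Feed the centre of a confirmed eye detection back into the filter."""
    kf.correct(np.array([[np.float32(cx)], [np.float32(cy)]]))
```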
Chapter 6. Driver Inattention Detection
Abstract
This chapter proposes a comprehensive method for detecting driver distraction and inattention. We introduce an asymmetric appearance-modelling method and an accurate 2D-to-3D registration technique to obtain the driver’s head pose and to support yawning detection and head-nodding detection. Together with Chapter 5, this chapter addresses the first major objective of the book, “driver behaviour” (i.e. driver drowsiness and distraction detection). The final objective of the book is to develop an ADAS that correlates the driver’s direction of attention with road hazards by analysing both simultaneously; this is presented in Chap. 8.
Mahdi Rezaei, Reinhard Klette
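A generic form of the 2D-to-3D registration step used for head-pose estimation can be sketched with OpenCV's solvePnP. The 3D face-model points and the pinhole camera approximation below are standard illustrative values, not the asymmetric appearance model proposed in the book.

```python
import cv2
import numpy as np

# Generic 3D reference points of a face model in millimetres (nose tip, chin,
# eye corners, mouth corners); illustrative values, not the book's model.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),          # nose tip
    (0.0, -330.0, -65.0),     # chin
    (-225.0, 170.0, -135.0),  # left eye outer corner
    (225.0, 170.0, -135.0),   # right eye outer corner
    (-150.0, -150.0, -125.0), # left mouth corner
    (150.0, -150.0, -125.0),  # right mouth corner
], dtype=np.float64)

def head_pose(image_points, frame_size):
    """Estimate head rotation and translation from six detected landmarks.

    image_points: (6, 2) float64 array of pixel coordinates, in the same
    order as MODEL_POINTS. Yaw/pitch/roll can be derived from the returned
    rotation vector via cv2.Rodrigues."""
    h, w = frame_size
    focal = w  # rough pinhole approximation: focal length ~ image width
    camera_matrix = np.array([[focal, 0, w / 2],
                              [0, focal, h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points,
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    return rvec, tvec
```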
Chapter 7. Vehicle Detection and Distance Estimation
Abstract
“Collision warning systems” are actively researched in both computer vision and the automotive industry. Using monocular vision only, this chapter discusses the part of our study that aims at detecting and tracking the vehicles ahead, identifying safety distances, and providing timely information to assist a distracted driver under various weather and lighting conditions. As part of the work presented in this chapter, we also adopt the previously discussed dynamic global Haar (DGHaar) features for vehicle detection. We introduce “taillight segmentation” and a “virtual symmetry detection” technique for pairing the rear-light contours of vehicles on the road. Applying a heuristic geometric solution, we also develop a method for inter-vehicle “distance estimation” using only a monocular vision sensor. Inspired by Dempster–Shafer theory, we finally fuse all available clues and information to reach a higher degree of certainty. The proposed algorithm is able to detect vehicles ahead both during the day and at night, and over a wide range of distances. Experimental results under various conditions, including sunny, rainy, foggy, and snowy weather, show that the proposed algorithm outperforms the other published algorithms selected for comparison.
Mahdi Rezaei, Reinhard Klette
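Monocular distance estimation of the kind described here typically rests on the pinhole relation Z = f·W / w. The book develops its own heuristic geometric solution; the snippet below only shows this basic relation, with an assumed average vehicle width as a placeholder value.

```python
def monocular_distance(focal_length_px, real_width_m, pixel_width):
    """Standard pinhole relation Z = f * W / w: distance to a detected
    vehicle from its known physical width W and its width w in the image.
    The average car width used below is an illustrative assumption."""
    return focal_length_px * real_width_m / pixel_width

# Example: 1000 px focal length, ~1.8 m car width, 90 px wide in the image
print(monocular_distance(1000.0, 1.8, 90.0))  # -> 20.0 metres
```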
Chapter 8. Fuzzy Fusion for Collision Avoidance
Abstract
In this chapter we discuss how to assess the risk level in a given driving scenario based on eight possible inputs: the driver’s direction of attention (yaw, roll, pitch), signs of fatigue or drowsiness (yawning, head nodding, eye closure), and the road situation (the distance and the angle of the detected vehicles relative to the ego-vehicle). Using a fuzzy-logic inference system, we develop an integrated solution to fuse, interpret, and process all of the above information. The ultimate goal is to prevent a traffic accident by fusing all the existing “in-out” data from inside the car cockpit and outside on the road. We aim to warn the driver in case of high-risk driving conditions and to prevent an imminent crash.
Mahdi Rezaei, Reinhard Klette
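To illustrate the flavour of such a fuzzy-logic inference step, the sketch below fuses just two of the eight inputs (head yaw and distance to the vehicle ahead) with a hand-rolled, zero-order Sugeno-style rule base. The membership functions, rules, and thresholds are illustrative assumptions, not those used in the book.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def risk_level(yaw_deg, distance_m):
    """Fuse head-yaw deviation and lead-vehicle distance into a risk score in [0, 1]."""
    # Fuzzification of the two inputs (degrees off road centre, metres of gap)
    looking_away = tri(abs(yaw_deg), 10, 40, 90)
    attentive    = tri(abs(yaw_deg), -1, 0, 15)
    too_close    = tri(distance_m, 0, 5, 20)
    safe_gap     = tri(distance_m, 15, 40, 200)

    # Rule base: min acts as fuzzy AND, max as fuzzy OR
    high_risk = min(looking_away, too_close)
    med_risk  = max(min(looking_away, safe_gap), min(attentive, too_close))
    low_risk  = min(attentive, safe_gap)

    # Sugeno-style defuzzification: weighted average of singleton risk scores
    weight = high_risk + med_risk + low_risk + 1e-9
    return (high_risk * 1.0 + med_risk * 0.5 + low_risk * 0.1) / weight

print(risk_level(yaw_deg=45, distance_m=8))   # distracted and close -> high risk
print(risk_level(yaw_deg=2, distance_m=60))   # attentive and far    -> low risk
```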
Erratum to: Computer Vision for Driver Assistance: Simultaneous Traffic and Driver Monitoring
Mahdi Rezaei, Reinhard Klette
Backmatter
Metadata
Title
Computer Vision for Driver Assistance
Authors
Prof. Mahdi Rezaei
Prof. Reinhard Klette
Copyright Year
2017
Electronic ISBN
978-3-319-50551-0
Print ISBN
978-3-319-50549-7
DOI
https://doi.org/10.1007/978-3-319-50551-0
