
2014 | Book

Distributed Embedded Smart Cameras

Architectures, Design and Applications

Edited by: Christophe Bobda, Senem Velipasalar

Publisher: Springer New York


About this Book

This book addresses distributed embedded smart cameras: cameras that perform on-board analysis and collaborate with other cameras. It provides the material required to better understand the architectural design challenges of embedded smart camera systems, the hardware/software ecosystem, and the design approaches for and applications of distributed smart cameras, together with state-of-the-art algorithms. The authors concentrate on the architecture, hardware/software design, and realization of smart camera networks, from applications to architectures, in particular in the embedded and mobile domains.

Table of Contents

Frontmatter

Architectures and Technologies for Embedded Smart Cameras

Frontmatter
Chapter 1. Platforms and Architectures for Distributed Smart Cameras
Abstract
Embedded computer vision places huge computational demands on smart cameras; in addition, these systems must often be designed to consume very little power and be inexpensive to manufacture. In this chapter, we consider computational platforms for both smart cameras and networks of smart cameras. A platform is a combination of hardware and software that provides a set of features and services for an application space. We first compare a broad range of computing fabrics suitable for embedded computer vision: FPGAs, GPUs, video signal processors, and heterogeneous multiprocessor systems-on-chip. We then look at approaches to the design of a platform for distributed services in a smart camera network.
Marilyn Wolf
Chapter 2. A Survey of Systems-on-Chip Solutions for Smart Cameras
Abstract
With advances in electronic manufacturing technologies, integrating disparate technologies such as sensors, analog components, mixed-signal units, and digital processing cores into a single chip has become a reality, and it is an increasing trend across application domains, especially for distributed smart camera products. This chapter presents a survey of existing system-on-chip solutions for distributed smart cameras that can capture and intelligently process video in real time and communicate remotely with other cameras and sensors.
Ali Ahmadinia, David Watson
Chapter 3. Reconfigurable Architectures for Distributed Smart Cameras
Abstract
Embedded smart cameras must provide enough computational power to handle complex image-understanding algorithms on huge amounts of data in situ. In a distributed set-up, smart cameras must provide efficient communication and flexibility in addition to performance. Programmability and physical constraints such as size, weight and power (SWAP) complicate design and architectural choices. In this chapter, we explore the use of FPGAs as the computational engine in distributed smart cameras and present a smart camera system designed to be used as a node in a camera sensor network. Besides performance and flexibility, size and power requirements are addressed through a modular and scalable design. The programmability of the system is addressed by a seamless integration of the Intel OpenCV computer vision library into the platform (a minimal application-level sketch follows this entry).
Christophe Bobda, Michael Mefenza, Franck Yonga, Ali Akbar Zarezadeh
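
To give a concrete feel for the OpenCV-level programmability mentioned in the abstract, the following is a minimal, hypothetical per-frame pipeline of the kind such a camera node might run. The capture source index, blur kernel, and motion threshold are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical per-frame analysis loop for a smart-camera node (illustrative only).
import cv2

cap = cv2.VideoCapture(0)            # on a real node this would be the camera driver
ok, prev = cap.read()
if not ok:
    raise RuntimeError("no camera available")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    # simple frame differencing as a stand-in for the on-board analysis stage
    diff = cv2.absdiff(gray, prev_gray)
    _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(motion) > 500:
        pass  # e.g. notify neighbouring nodes over the camera network
    prev_gray = gray
```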
Chapter 4. Design and Verification Environment for High-Performance Video-Based Embedded Systems
Abstract
In this chapter, we propose a design and verification environment for computationally demanding and secure embedded vision-based systems. Starting with an executable specification in OpenCV, we provide successive refinements and verification down to a system-on-chip prototype on an FPGA-based smart camera. At each level of abstraction, properties of image processing applications are used along with structural composition to provide a generic architecture that can be automatically verified and mapped to a lower abstraction level, the last of which is the FPGA. The result of this design flow is a framework that encapsulates the computer vision library OpenCV at the highest level and integrates Accellera's SystemC/TLM with the Universal Verification Methodology (UVM) and QEMU-OS for virtual prototyping, verification, and low-level mapping (a sketch of an executable specification follows this entry).
Michael Mefenza, Franck Yonga, Christophe Bobda
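
An "executable specification" in such a flow is typically a pure reference function over frames that later refinement steps map toward hardware and use as a golden model during verification. Below is a minimal hypothetical example (Sobel edge magnitude); the choice of operator is an assumption for illustration, not the chapter's case study.

```python
# Hypothetical executable specification: a pure per-frame reference function
# that a refinement flow could map to an FPGA and verify against.
import cv2
import numpy as np

def edge_spec(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    # the same function serves as the golden reference when checking the
    # hardware prototype's output frame by frame
    return np.uint8(np.clip(mag, 0, 255))
```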

Smart Cameras in Mobile Environments

Frontmatter
Chapter 5. Distributed Mobile Computer Vision: Advances, Challenges and Applications
Abstract
The role of mobile devices has shifted from passively transmitting text messages and voice calls to proactively providing any kind of information that is also accessible from a PC. Recent advances in micro technology have also made it possible to include a camera sensor in any mobile device. This innovation is now attracting both the research community and industry, which aim to develop mobile applications that exploit recent computer vision algorithms. In this chapter we provide an analysis of the recent advances in mobile computer vision and then discuss the challenges the community is currently dealing with. Next, we analyze two recent case studies in which mobile vision is used for augmented reality and surveillance applications. Finally, we introduce the next challenges in mobile vision, where mobile devices are part of a visual sensor network.
Niki Martinel, Andrea Prati, Christian Micheloni
Chapter 6. Autonomous Tracking of Vehicle Taillights and Alert Signal Detection by Embedded Smart Cameras
Abstract
An important aspect of collision avoidance and driver assistance systems, as well as autonomous vehicles, is the tracking of vehicle taillights and the detection of alert signals (turns and brakes). In this chapter, we present the design and implementation of a robust and computationally lightweight algorithm for a real-time vision system capable of detecting and tracking vehicle taillights, recognizing common alert signals using a vehicle-mounted embedded smart camera, and counting the cars passing on both sides of the vehicle. The system is low-power and processes scenes entirely on the microprocessor of an embedded smart camera. In contrast to most existing work, which addresses either daytime or nighttime detection, the presented system can track vehicle taillights and detect alert signals regardless of lighting conditions (a simple baseline for taillight candidate detection is sketched after this entry). The mobile vision system has been tested in actual traffic scenes, and the obtained results demonstrate the performance and lightweight nature of the algorithm.
Akhan Almagambetov, Senem Velipasalar
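
As a point of reference for the kind of low-cost processing involved, the sketch below locates bright red regions that could correspond to taillights. It is a common color-thresholding baseline, not the chapter's algorithm; the HSV ranges and minimum area are assumptions.

```python
# Illustrative baseline: red-region candidates for taillights (OpenCV >= 4).
import cv2

def find_taillight_candidates(frame_bgr, min_area=50):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # red wraps around the hue axis, so combine two hue ranges
    lower = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))
    upper = cv2.inRange(hsv, (170, 120, 120), (180, 255, 255))
    mask = cv2.bitwise_or(lower, upper)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # bounding boxes of sufficiently large red blobs; tracking and turn/brake
    # classification would operate on these candidates over time
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```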
Chapter 7. Automatic Fall Detection and Activity Classification by a Wearable Camera
Abstract
Automated monitoring of the everyday physical activities of the elderly has come a long way in the past two decades. These activities range from critical events, such as falls requiring rapid and robust detection, to daily activities such as walking, sitting, and lying down, whose classification supports long-term prognosis. Researchers have constantly strived to come up with innovative methods based on different sensor systems in order to build robust automated systems. These sensor systems can be broadly classified into wearable and ambient sensors, and both vision-based and non-vision-based sensors have been employed. The most popular wearable systems rely on non-vision sensors such as accelerometers and gyroscopes and have the advantage of not being confined to restricted environments, but their resource limitations leave them vulnerable to false positives and make activity classification very challenging. Popular ambient vision-based sensors, such as wall-mounted cameras, have the resources for better activity classification but are confined to a specific monitoring environment and inherently raise privacy concerns. Recently, integrated wearable sensor systems combining accelerometers and a camera on a single device have been introduced, in which the camera provides contextual information to validate the accelerometer readings. In this chapter, the new idea of using a smart camera as a waist-worn fall detection and activity classification system is presented, and a methodology to classify sitting and lying-down activities with such a system is introduced to further substantiate the concept of event detection and activity classification with wearable smart cameras.
Koray Ozcan, Anvith Mahabalagiri, Senem Velipasalar

Applications

Frontmatter
Chapter 8. Tracking by Detection Algorithms Using Multiple Cameras
Abstract
Detecting and tracking people using cameras is a basic task in many applications, such as video surveillance and smart environments. In this chapter, we first review approaches that detect and track targets using a single camera. We then explore approaches that fuse multiple sources of information to enable tracking in a camera network. Finally, we show an application that estimates occupancy in a smart room (a minimal detection front end is sketched after this entry).
Zixuan Wang, Hamid Aghajan
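
Tracking-by-detection approaches first run a per-frame person detector and then associate the detections over time. The sketch below uses OpenCV's built-in HOG + linear-SVM pedestrian detector as the detection front end; the window stride and scale are illustrative parameters, and the association step is left out.

```python
# Per-frame pedestrian detection front end for tracking by detection.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame_bgr):
    # returns (bounding box, confidence) pairs; a tracker would link these
    # detections across frames and, later, across cameras
    rects, weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8), scale=1.05)
    return list(zip(rects, weights))
```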
Chapter 9. Consistent Human Tracking Over Self-organized and Scalable Multiple-camera Networks
Abstract
In this chapter, a self-organized and scalable multiple-camera tracking system that tracks humans across cameras with nonoverlapping views is introduced. Given the GPS locations of uncalibrated cameras, the system automatically detects the existence of camera link relationships within the camera network based on the routing information provided by Google Maps. The connected zones in any pair of directly connected cameras are identified based on feature matching between the camera's view and Google Street View. To overcome the adverse issues of nonoverlapping fields of view among cameras, we propose an unsupervised learning scheme to build the camera link model, including the transition time distribution, brightness transfer function, region mapping matrix, region matching weights, and feature fusion weights (a brightness-transfer-function sketch follows this entry). Our unsupervised learning scheme tolerates the presence of outliers in the training data well, and the learned camera link model can be continuously updated even after tracking has started. The systematic integration of multiple features enables effective re-identification across cameras, and the pairwise learning and tracking manner also enhances the scalability of the system. Thanks to the unsupervised pairwise learning and tracking, the camera network is self-organized and the proposed system is able to scale up efficiently when more cameras are added to the network.
Kuan-Hui Lee, Chun-Te Chu, Younggun Lee, Zhijun Fang, Jenq-Neng Hwang
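
One component of such a camera link model, the brightness transfer function (BTF), can be estimated by matching the cumulative intensity histograms of corresponding appearances in the two cameras. The sketch below shows this standard histogram-matching construction as an illustration; it is not the chapter's exact learning scheme.

```python
# Illustrative BTF estimate between two cameras via cumulative-histogram matching.
import numpy as np

def estimate_btf(gray_a, gray_b):
    """Return a 256-entry lookup table mapping camera A intensities to camera B."""
    hist_a = np.bincount(gray_a.ravel(), minlength=256).astype(np.float64)
    hist_b = np.bincount(gray_b.ravel(), minlength=256).astype(np.float64)
    cdf_a = np.cumsum(hist_a) / hist_a.sum()
    cdf_b = np.cumsum(hist_b) / hist_b.sum()
    # for each intensity in A, pick the B intensity with the closest CDF value
    return np.searchsorted(cdf_b, cdf_a).clip(0, 255).astype(np.uint8)
```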
Chapter 10. Soft-Biometrics and Reference Set Integrated Model for Tracking Across Cameras
Abstract
Multi-target tracking across non-overlapping cameras is challenging because the appearance of targets changes considerably across camera views due to variations in illumination, pose, and camera imaging characteristics. Direct track association based on color information alone is therefore difficult and error-prone. In most previous methods, appearance similarity is computed either directly from color histograms or from a pre-trained Brightness Transfer Function (BTF) that maps colors between cameras. In this chapter, besides color histograms, other soft-biometric features that are invariant to illumination and view changes are also integrated into the feature representation of a target. A novel reference-set-based appearance model is proposed to improve multi-target tracking in a network of non-overlapping video cameras. Unlike previous work, a reference set is constructed for each pair of cameras, containing targets appearing in both camera views. For track association, instead of comparing the appearance of two targets in different camera views directly, both are compared to the reference set, which acts as a basis for representing a target by measuring the similarity between the target and each individual in the reference set (see the sketch after this entry). Experiments on challenging real-world multi-camera video data validate the effectiveness of the proposed method over the baseline models.
Xiaojing Chen, Le An, Bir Bhanu
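
The reference-set idea can be illustrated as follows: each target is described not by its raw features but by its vector of similarities to a shared set of reference individuals, and targets from two cameras are associated by comparing these descriptors. The feature extractor and the cosine similarity below are assumptions standing in for the chapter's learned features and similarity measure.

```python
# Sketch of a reference-set-based appearance comparison across two cameras.
import numpy as np

def similarity(feat_a, feat_b):
    # cosine similarity as a stand-in for the chapter's similarity measure
    return float(np.dot(feat_a, feat_b) /
                 (np.linalg.norm(feat_a) * np.linalg.norm(feat_b) + 1e-9))

def reference_descriptor(target_feat, reference_feats):
    # similarities to each reference individual form the target's descriptor
    return np.array([similarity(target_feat, r) for r in reference_feats])

def association_score(feat_cam1, feat_cam2, refs_cam1, refs_cam2):
    d1 = reference_descriptor(feat_cam1, refs_cam1)
    d2 = reference_descriptor(feat_cam2, refs_cam2)
    return similarity(d1, d2)   # high score suggests the same person
```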
Chapter 11. A Parallel Approach for Statistical Texture Parameter Calculation
Abstract
This chapter focuses on the development of a new image processing technique for large and complex images, especially SAR images. We propose a new and effective approach that outperforms existing methods for the calculation of high-order textural parameters. With a single processor, this approach is about \(256^{n-1}\) times faster than the classical co-occurrence matrix approach, where \(n\) is the order of the textural parameter for a 256-gray-level image. In a parallel environment with N processors, this performance can be multiplied by almost a factor of N. Our approach is based on a new modeling of textural parameters of generic order \(n>1\) that is equivalent to the classical formulation but no longer relies on the co-occurrence matrix of order \(n>1\). By avoiding the computation of the order-\(n\) co-occurrence matrix, the resulting model saves about \(256^{n}\) bytes of memory (the arithmetic is illustrated after this entry).
Narcisse Talla Tankam, Albert Dipanda, Christophe Bobda, Janvier Fotsing, Emmanuel Tonyé
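
The memory saving quoted above follows directly from the size of an order-n co-occurrence matrix for 256 gray levels: it has 256^n entries. The short calculation below makes that concrete, assuming one byte per counter as in the abstract's estimate.

```python
# Memory footprint of an order-n co-occurrence matrix for a 256-gray-level image,
# which the chapter's reformulation avoids building.
G = 256
for n in (2, 3, 4):
    entries = G ** n                       # one counter per n-tuple of gray levels
    print(f"order {n}: {entries:,} entries (~{entries / 2**20:.1f} MiB at 1 byte each)")
# order 2:        65,536 entries (~0.1 MiB)
# order 3:    16,777,216 entries (~16.0 MiB)
# order 4: 4,294,967,296 entries (~4096.0 MiB)
```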
Chapter 12. Multi-modal Sensing for Distracted Driving Mitigation Using Cameras and Crowdsourcing
Abstract
Driving-related accidents and human casualties are on the rise in the US and around the globe (Brace et al., Analysis of the literature: the use of mobile phones while driving, 2007). The US government spends more than 10 billion dollars a year to address the aftermath of accidents caused by distracted driving and driving in dangerous conditions.
Amol Deshpande, Mahbubur Rahaman, Nilanjan Banerjee, Christophe Bobda, Ryan Robucci
Backmatter
Metadata
Title
Distributed Embedded Smart Cameras
Edited by
Christophe Bobda
Senem Velipasalar
Copyright Year
2014
Publisher
Springer New York
Electronic ISBN
978-1-4614-7705-1
Print ISBN
978-1-4614-7704-4
DOI
https://doi.org/10.1007/978-1-4614-7705-1