
2015 | Book

Computer Vision in Control Systems-2

Innovations in Practice


About this book

This research book focuses on recent advances in computer vision methodologies and on innovations in practice. The contributions include:

· Human Action Recognition: Contour-Based and Silhouette-Based Approaches.

· The Application of Machine Learning Techniques to Real Time Audience Analysis System.

· Panorama Construction from Multi-view Cameras in Outdoor Scenes.

· A New Real-Time Method of Contextual Image Description and Its Application in Robot Navigation and Intelligent Control.

· Perception of Audio Visual Information for Mobile Robot Motion Control Systems.

· Adaptive Surveillance Algorithms Based on the Situation Analysis.

· Enhanced, Synthetic and Combined Vision Technologies for Civil Aviation.

· Navigation of Autonomous Underwater Vehicles Using Acoustic and Visual Data Processing.

· Efficient Denoising Algorithms for Intelligent Recognition Systems.

· Image Segmentation Based on Two-Dimensional Markov Chains.

The book is directed to PhD students, professors, researchers, and software developers working in the areas of digital video processing and computer vision technologies.

Table of Contents

Frontmatter
Chapter 1. Practical Matters in Computer Vision
Abstract
This chapter briefly describes research that is close to implementation in technical systems. Human action recognition, audience analysis systems, and a smart software tool for panorama construction serve human well-being. The application of novel methods in robot navigation systems and the perception of audio-visual information for mobile robots are the subjects of further innovative investigations. The adaptive comprehensive surveillance algorithms for situation analysis, the enhanced, synthetic, and combined vision technologies for civil aviation, and the navigation techniques reflect recent achievements in machine vision for robotics and autonomous vehicles. Efficient denoising algorithms and image segmentation based on 2D Markov chains are also useful in intelligent recognition systems.
Lakhmi C. Jain, Margarita N. Favorskaya
Chapter 2. Human Action Recognition: Contour-Based and Silhouette-Based Approaches
Abstract
Human action recognition in videos is an active field of computer vision, with applications in human-computer interaction, surveillance monitoring, robot vision, and other areas. Two feature-extraction approaches are investigated in this chapter. The first is contour-based and covers four feature types: Cartesian Coordinate Features (CCF), Fourier Descriptors Features (FDF), Centroid-Distance Features (CDF), and Chord-Length Features (CLF). The second is silhouette-based and covers three feature types: Histogram of Oriented Gradients (HOG), Histogram of Oriented Optical Flow (HOOF), and Structural Similarity Index Measure (SSIM) features. All of these features are simple and fast to compute and efficient to classify, and therefore form a promising basis for human action recognition. Classification is performed with two classifiers: K-Nearest Neighbor (KNN) and Support Vector Machine (SVM). The experimental results demonstrate that these features are useful and promising for human action recognition in videos.
Salim Al-Ali, Mariofanna Milanova, Hussain Al-Rizzo, Victoria Lynn Fox
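Two of the contour-based descriptors named in this abstract, Centroid-Distance Features and Fourier Descriptors, can be sketched generically as follows. This is a minimal NumPy illustration assuming a contour given as an N×2 array of (x, y) points; the fixed feature lengths and the normalization are assumptions, not the chapter's implementation.

```python
import numpy as np

def centroid_distance_features(contour, n_samples=64):
    """Distances from the shape centroid, resampled to a fixed length along the contour."""
    contour = np.asarray(contour, dtype=float)      # shape (N, 2): (x, y) points
    centroid = contour.mean(axis=0)
    dist = np.linalg.norm(contour - centroid, axis=1)
    # Resample so shapes with different numbers of contour points are comparable
    idx = np.linspace(0, len(dist) - 1, n_samples)
    return np.interp(idx, np.arange(len(dist)), dist)

def fourier_descriptor_features(contour, n_coeffs=16):
    """Magnitudes of low-frequency Fourier coefficients of the complex contour signal."""
    contour = np.asarray(contour, dtype=float)
    z = contour[:, 0] + 1j * contour[:, 1]          # complex representation x + iy
    coeffs = np.fft.fft(z)
    mags = np.abs(coeffs[1:n_coeffs + 1])
    return mags / (np.abs(coeffs[1]) + 1e-12)       # scale-normalize by the first harmonic
```

Feature vectors such as these can then be passed to a KNN or SVM classifier (for example, scikit-learn's KNeighborsClassifier or SVC), matching the two classifiers evaluated in the chapter.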
Chapter 3. The Application of Machine Learning Techniques to Real Time Audience Analysis System
Abstract
An application for video data analysis based on computer vision methods is presented in this chapter. The proposed system consists of five consecutive stages: face detection, face tracking, gender recognition, age classification, and statistics analysis. The AdaBoost classifier is utilized for face detection. A modification of the Lucas-Kanade algorithm is introduced at the face tracking stage. Novel gender and age classifiers based on adaptive features and support vector machines are proposed. More than 90 % accuracy in viewer gender recognition is achieved. All stages are united into a single audience analysis system. The system extracts all available information about the depicted people from the input video stream, then aggregates and analyzes it in order to measure various statistical parameters. The proposed software solution can find applications in different areas, from digital signage and video surveillance to automatic accident prevention systems and intelligent human-computer interfaces.
Vladimir Khryashchev, Lev Shmaglit, Andrey Shemyakov
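For the first two stages, AdaBoost-based face detection and Lucas-Kanade tracking, a rough generic sketch with OpenCV's stock Haar cascade and pyramidal Lucas-Kanade optical flow looks like this. The cascade file and parameter values are assumptions; the chapter's modified tracker and its gender/age classifiers are not reproduced here.

```python
import cv2
import numpy as np

# Stock Viola-Jones (AdaBoost) frontal-face cascade shipped with OpenCV
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(gray):
    # Returns (x, y, w, h) rectangles for detected faces in a grayscale frame
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def track_points(prev_gray, gray, points):
    """Track feature points between frames with pyramidal Lucas-Kanade optical flow."""
    points = np.float32(points).reshape(-1, 1, 2)
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, points, None, winSize=(21, 21), maxLevel=3)
    return new_points[status.ravel() == 1].reshape(-1, 2)
```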
Chapter 4. Panorama Construction from Multi-view Cameras in Outdoor Scenes
Abstract
Panoramic images are widely used in computer vision, including navigation systems, object tracking, and virtual environment creation, among other applications. In this chapter, the problems of multi-view shooting and the models of geometric distortions are investigated in the context of panorama construction in outdoor scenes. Our contributions are a procedure for selecting “good” frames from video sequences provided by several cameras, more accurate estimation of projective parameters in the top, middle, and bottom regions of the overlapping area during frame stitching, and improved lighting of the resulting panoramic image by point-based blending in the stitching area. Most of the proposed algorithms have a high computational cost because of the megapixel size of the initial frames. Reducing the frame sizes, using CUDA, or a hardware implementation would improve these results. The experiments show visually good results with high stitching accuracy when the initial frames are selected well.
Lakhmi C. Jain, Margarita N. Favorskaya, Dmitry Novikov
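The core stitching step, estimating a projective (homography) transform between overlapping frames and warping one onto the other, can be sketched generically with OpenCV's ORB features and RANSAC-based homography fitting. This does not reproduce the chapter's frame-selection procedure, region-wise parameter refinement, or point-based blending, and the canvas size is an assumption.

```python
import cv2
import numpy as np

def stitch_pair(img_left, img_right):
    """Warp img_right onto img_left's image plane using an ORB + RANSAC homography."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:500]

    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = img_left.shape[:2]
    canvas = cv2.warpPerspective(img_right, H, (w * 2, h))  # assumed canvas width
    canvas[0:h, 0:w] = img_left                             # naive overwrite, no blending
    return canvas
```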
Chapter 5. A New Real-Time Method of Contextual Image Description and Its Application in Robot Navigation and Intelligent Control
Abstract
Computer vision and image understanding are of crucial importance in robotic systems of the future. In this chapter, a new real-time method of contextual image description is presented, and its application to image understanding problems, arising in robotics, is given and discussed. A new vector form of contextual description and segmentation of color images is proposed. This form, called STructural Graph (STG) of color bunches, conserves the geometric constraints in the image and projects real objects onto certain formal objects in the contextual vector description, which can be axiomatically (mathematically) described and found by the real-time segmentation algorithms. The relation between formal objects in the STG and real objects in the image is established. The concept of local and global contrast objects in the STG is put forward. Real-time algorithms for segmentation and detection of global contrast (salient) objects in the STG are described, which provide the stable segmentation components for solving scene recognition problems. These algorithms are based on the geometrized histogram method developed by the author. Selection of stable segmentation components in images is applied to the analysis of video sequences in order to find and recognize visual landmarks. Applications of the developed technique to autonomous robot navigation are presented and discussed.
Konstantin I. Kiy
Chapter 6. Perception of Audio Visual Information for Mobile Robot Motion Control Systems
Abstract
Motion is the main characteristic of intelligent mobile robots. Many methods and algorithms exist for mobile robot motion control. These methods are based on different principles, but they all pursue one final goal: precise mobile robot motion control with clear orientation in the robot's area of perception and observation. The chapter first outlines the mobile robot audio and visual systems with the corresponding audio (microphone array) and video (mono, stereo, or thermal camera) sensors, accompanied by a laser rangefinder. The audio and video information captured by the sensors is used in the proposed audio-visual perception model to perform joint processing and to determine the current mobile robot position (current spatial coordinates) in the area of perception and observation. The captured information is evaluated with algorithms developed for speech and image quality estimation, so that preprocessing methods can be applied to increase its quality and to minimize the errors of the robot position calculations. The spatial coordinates determined by the laser rangefinder serve as supplementary information on the robot position, for error calculation and for comparison with the results of audio-visual motion control. The development of the audio-visual perception model relies on several methods: RANdom SAmple Consensus (RANSAC) for estimating the parameters of a mathematical model from the observed audio-visual coordinate data, Direction Of Arrival (DOA) estimation for localizing, with the microphone array, the speaker who sends voice commands to the mobile robot, and speech recognition of the voice commands sent from the speaker to the robot. The current robot position calculated from the jointly perceived audio-visual information is used in algorithms for mobile robot navigation, motion control, and object tracking: map-based or map-less methods, path planning and obstacle avoidance, Simultaneous Localization And Mapping (SLAM), data fusion, etc. The error, accuracy, and precision of the proposed motion control with audio-visual perception are analyzed and estimated from the numerous experimental tests presented at the end of the chapter. The experiments are carried out mainly as simulations of the algorithms listed above, but parallel computing methods are also tried in the implementation of the developed algorithms to reach real-time robot navigation and motion control using the perceived audio-visual information.
Snejana Pleshkova, Alexander Bekiarski, Shima Sehati Dehkharghani, Kalina Peeva
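One of the building blocks mentioned above, Direction Of Arrival estimation with a microphone array, can be illustrated for the simplest two-microphone, far-field case: the inter-channel time delay is taken from the cross-correlation peak and converted to an angle. The sample rate and microphone spacing below are assumptions, and the chapter's full audio-visual model, RANSAC fitting, and speech recognition are not shown.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

def estimate_doa(sig_left, sig_right, fs=16000, mic_distance=0.1):
    """Estimate the direction of arrival (degrees) for a far-field source and two microphones."""
    # Time delay between channels from the cross-correlation peak
    corr = np.correlate(sig_left, sig_right, mode="full")
    lag = np.argmax(corr) - (len(sig_right) - 1)   # delay in samples
    tau = lag / fs                                 # delay in seconds

    # Far-field geometry: tau = d * sin(theta) / c
    sin_theta = np.clip(SPEED_OF_SOUND * tau / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))
```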
Chapter 7. Adaptive Surveillance Algorithms Based on the Situation Analysis
Abstract
One of the major trends in the development of robotic systems is the design of methods that ensure autonomous functioning and action planning in complex, variable environments. This chapter examines how surveillance computations are organized for automatic robotic systems performing tasks such as visual navigation or search in variable and uncertain conditions. The application of methods based on pixel-by-pixel or attribute-by-attribute comparison is hampered by their high computational load. Moreover, it becomes impossible to design reference images for uncertain surveillance conditions. A novel approach based on a complex adaptive algorithm is discussed in this chapter; it combines a range of particular surveillance algorithms with situation description and situation analysis. The technique for generating descriptions of the observed scene uses predicates and is based on statistical methods of object recognition. Examples demonstrate the implementation of our approach in visual navigation and in the search for on-land mobile objects.
Nikolay Kim, Nikolay Bodunkov
Chapter 8. Enhanced, Synthetic and Combined Vision Technologies for Civil Aviation
Abstract
The prospect of improving aviation safety is closely tied to the development of novel avionics solutions aimed at enhancing flight visibility and the situational awareness of the flight crew. Such solutions include the Enhanced Vision System (EVS), Synthetic Vision System (SVS), and Combined Vision System (CVS). These systems provide the flight crew with a supplemental view of the space outside the cabin using technical vision, computer graphics, and augmented reality. The chapter addresses the general principles of EVS/SVS/CVS development and proposes a number of original methods and algorithms for image enhancement, TV and infrared (IR) image fusion, vision-based runway and obstacle detection, SVS image creation, and EVS/SVS image fusion.
Oleg Vygolov, Sergey Zheltov
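A very simple baseline for the TV/IR fusion step mentioned in the abstract is a pixel-wise weighted combination of co-registered frames. The sketch below assumes both images are already aligned, single-channel, 8-bit, and the same size, and that the weight is fixed; the chapter's actual fusion methods are more elaborate.

```python
import cv2

def fuse_tv_ir(tv_gray, ir_gray, alpha=0.6):
    """Pixel-wise weighted fusion of co-registered 8-bit TV and IR frames of equal size."""
    # Equalize dynamic ranges before mixing so neither sensor dominates
    tv_eq = cv2.equalizeHist(tv_gray)
    ir_eq = cv2.equalizeHist(ir_gray)
    return cv2.addWeighted(tv_eq, alpha, ir_eq, 1.0 - alpha, 0)
```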
Chapter 9. Navigation of Autonomous Underwater Vehicles Using Acoustic and Visual Data Processing
Abstract
A navigation model for an Autonomous Underwater Vehicle (AUV) combines acoustic and vision-based navigation principles. The acoustic guidance is based on Time-Of-Flight (TOF) measurements carried out in a one-way asynchronous mode. Vision-based positioning employs a digital image processing approach using log-polar transformations of a temporal series of on-board camera images. A proportional-integral-derivative controller is used to adjust the vehicle's position and course. The corresponding control and error functions are provided. The model is implemented and tested numerically. The experiments confirmed the high reliability of the developed algorithms, which can be further applied in autonomous vehicle navigation and docking systems.
Igor Burdinsky, Anton Myagotin
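The proportional-integral-derivative controller used to correct the vehicle's position and course admits a compact generic form. The gains, time step, and heading example below are placeholders, not values from the chapter.

```python
class PIDController:
    """Discrete PID controller: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: correcting a heading error (gains are illustrative only)
heading_pid = PIDController(kp=1.2, ki=0.05, kd=0.4, dt=0.1)
rudder_command = heading_pid.update(setpoint=90.0, measurement=84.5)
```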
Chapter 10. Efficient Denoising Algorithms for Intelligent Recognition Systems
Abstract
Nowadays, a great variety of electronic devices provide images of different quality, resolution, and color depth. Despite these differences, all intelligent recognition systems built from basic image-capturing components inevitably include image preprocessing blocks, which among other tasks perform filtration of the “raw” image. This chapter presents modern filtration techniques based on Principal Component Analysis (PCA) and on non-local processing, illustrated with multiple examples of denoising algorithms. Some attention is devoted to modeling noise in raw images. Finally, the chapter ends with a discussion of the applicability of specific filtration algorithms to modern tasks such as pattern recognition and object tracking.
Andrey Priorov, Kirill Tumanov, Vladimir Volokhov
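Both filter families discussed in the chapter, non-local processing and PCA-based denoising, have simple generic counterparts: OpenCV's non-local means filter and a rudimentary patch-PCA hard-threshold denoiser. The parameter values below are assumptions, and the chapter's specific algorithms are not reproduced.

```python
import cv2
import numpy as np

def nlm_denoise(gray, h=10):
    """Non-local means denoising of an 8-bit grayscale image."""
    return cv2.fastNlMeansDenoising(gray, None, h, 7, 21)

def pca_patch_denoise(gray, patch=8, keep=8):
    """Crude PCA denoising: project non-overlapping patches onto their top principal components."""
    img = gray.astype(np.float64)
    h, w = img.shape
    h, w = h - h % patch, w - w % patch
    patches = (img[:h, :w]
               .reshape(h // patch, patch, w // patch, patch)
               .swapaxes(1, 2)
               .reshape(-1, patch * patch))
    mean = patches.mean(axis=0)
    centered = patches - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:keep]                              # top principal directions
    denoised = centered @ basis.T @ basis + mean
    out = (denoised.reshape(h // patch, w // patch, patch, patch)
                   .swapaxes(1, 2)
                   .reshape(h, w))
    return np.clip(out, 0, 255).astype(np.uint8)
```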
Chapter 11. Image Segmentation Based on Two-Dimensional Markov Chains
Abstract
Image segmentation is a crucial problem in many computer vision tasks. This chapter applies the mathematical theory of conditional Markov processes, the representation of halftone g-digit images as a collection of g binary images, and an entropy approach to segment objects of interest and texture areas in images, particularly in satellite images. The proposed approach yields novel, efficient methods for contour and texture segmentation in real-world noisy images, providing higher accuracy with fewer computational resources than the conventional methods (Canny, Laplacian of Gaussian, Roberts, Prewitt, and Sobel). Mathematical modeling and experimental studies confirm that the developed segmentation algorithms are effective in terms of both quality and processing speed.
Elena Medvedeva, Ekaterina Kurbatova
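The conventional edge detectors the chapter compares against (Canny, Sobel, and others) are available directly in OpenCV, which makes a comparison baseline easy to set up. The threshold values below are assumptions; the chapter's Markov-chain segmentation itself is not sketched here.

```python
import cv2
import numpy as np

def baseline_edges(gray):
    """Conventional edge maps often used as comparison baselines."""
    canny = cv2.Canny(gray, 50, 150)                     # hysteresis thresholds are illustrative
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    sobel = np.uint8(np.clip(np.hypot(gx, gy), 0, 255))  # gradient magnitude
    return canny, sobel
```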
Metadata
Title
Computer Vision in Control Systems-2
Edited by
Margarita N. Favorskaya
Lakhmi C. Jain
Copyright Year
2015
Electronic ISBN
978-3-319-11430-9
Print ISBN
978-3-319-11429-3
DOI
https://doi.org/10.1007/978-3-319-11430-9
