
About this Book

This book presents an introduction to new and important research in the area of image processing and analysis. It is hoped that it will be useful for scientists and students involved in many aspects of image analysis. The book does not attempt to cover all aspects of computer vision, but the chapters do present some state-of-the-art examples.

Table of Contents

Frontmatter

Advances in Intelligent Image Analysis

This chapter presents some recent advances in the area of computer vision. The various stages involved in image processing and interpretation are described. The first step is image registration, that is, overlaying two or more images of the same scene taken from different viewpoints, at different times, or possibly by different sensors. The next phase is image preprocessing, which mainly involves, for example, image enhancement and noise cleaning. Another problem is that of image analysis: the extraction of important features of the image. Having obtained a description of the image, the process of object (pattern) recognition can be performed. Although all of these tasks are very important and useful, they still do not give a semantic interpretation of images. Image interpretation, like similar-image searching, remains a major challenge facing researchers. The second part of this chapter summarises the remaining chapters of the book.

Halina Kwaśnicka, Lakhmi C. Jain

Multi-class Classification in Image Analysis via Error-Correcting Output Codes

A common way to model multi-class classification problems is by means of Error-Correcting Output Codes (ECOC). Given a multi-class problem, the ECOC technique designs a codeword for each class, where each position of the code identifies the membership of the class in a given binary problem. A classification decision is obtained by assigning the label of the class with the closest codeword. In this chapter, we overview the state of the art in ECOC designs and test them in real applications. Results on different multi-class data sets show the benefits of using the ensemble of classifiers when categorizing objects in images.
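The codeword-decoding step described above can be sketched in a few lines. This is a minimal illustration only; the three classes and the 5-bit codebook are made-up examples, not a design from the chapter.

```python
# Minimal ECOC decoding sketch: each class has a binary codeword, each
# bit position corresponds to one binary classifier. A sample receives
# the label of the class whose codeword is closest (Hamming distance)
# to the vector of binary-classifier outputs.

CODEBOOK = {            # hypothetical 3-class, 5-bit design
    "cat":  (0, 0, 1, 1, 0),
    "dog":  (1, 0, 0, 1, 1),
    "bird": (1, 1, 0, 0, 0),
}

def hamming(a, b):
    """Number of positions where two codewords disagree."""
    return sum(x != y for x, y in zip(a, b))

def decode(bits):
    """Return the class whose codeword is closest to the observed bits."""
    return min(CODEBOOK, key=lambda c: hamming(CODEBOOK[c], bits))

# One binary classifier flipped a bit, yet decoding still recovers "dog",
# which is the error-correcting property the codes are designed for.
print(decode((1, 0, 0, 0, 1)))  # -> dog
```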

Sergio Escalera, David M. J. Tax, Oriol Pujol, Petia Radeva, Robert P. W. Duin

Morphological Operator Design from Training Data

A State of the Art Overview

Mathematical morphology offers a set of powerful tools for image processing and analysis. From a practical perspective, the expected results of many morphological operators can be intuitively explained in terms of geometrical and topological characteristics of the images. From a formal perspective, mathematical morphology is based on complete lattices, which provide a solid theoretical framework for the study of algebraic properties of the operators. Despite these nice characteristics, designing morphological operators is not a trivial task; it requires knowledge and experience. In this chapter, a self-contained exposition on the design of translation-invariant morphological operators from training data is presented. The described training procedure relies on the canonical sup-decomposition theorem of morphological operators, which in the context of binary images states that any translation-invariant operator can be expressed uniquely in terms of two elementary operators, erosions and dilations, plus set operations. An important issue considered in this exposition is how the bias-variance tradeoff manifests within the training context and how its understanding can lead to approaches that generate better results. Several application examples that illustrate the usefulness of the described design procedure are also presented.
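The two elementary operators named in the sup-decomposition theorem can be sketched for binary images as follows. This is a generic illustration with a fixed 3x3 cross structuring element, not the chapter's training procedure.

```python
# Minimal binary erosion and dilation with a 3x3 cross structuring
# element; pixels outside the image are treated as 0.

CROSS = [(-1, 0), (0, -1), (0, 0), (0, 1), (1, 0)]

def _get(img, r, c):
    """Pixel value with zero padding outside the image."""
    return img[r][c] if 0 <= r < len(img) and 0 <= c < len(img[0]) else 0

def erode(img, se=CROSS):
    """A pixel survives only if every structuring-element neighbour is set."""
    return [[int(all(_get(img, r + dr, c + dc) for dr, dc in se))
             for c in range(len(img[0]))] for r in range(len(img))]

def dilate(img, se=CROSS):
    """A pixel is set if any structuring-element neighbour is set."""
    return [[int(any(_get(img, r + dr, c + dc) for dr, dc in se))
             for c in range(len(img[0]))] for r in range(len(img))]

square = [[0, 0, 0, 0, 0],
          [0, 1, 1, 1, 0],
          [0, 1, 1, 1, 0],
          [0, 1, 1, 1, 0],
          [0, 0, 0, 0, 0]]

# Erosion shrinks the 3x3 square to its centre pixel; dilation grows it.
print(sum(map(sum, erode(square))), sum(map(sum, dilate(square))))  # -> 1 21
```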

Nina S. T. Hirata

Task-Specific Salience for Object Recognition

Object recognition is a complex and challenging problem. It involves examining many different hypotheses in terms of the object class, position, scale, pose, etc., but the main trend in computer vision systems is to rely lazily on the brute-force capacity of computers, that is, to explore every possibility indifferently. Unfortunately, in many cases this scheme is far too slow for real-time or even practical applications. By incorporating salience into the recognition process, several approaches have shown that it is possible to obtain several orders of magnitude of speed-up. In this chapter, we demonstrate the link between salience and cascaded processes and show why and how such cascades should be constructed. We illustrate the benefits this provides, in terms of detection speed, accuracy and robustness, and how it eases the combination of heterogeneous feature types (i.e. dense and sparse features), drawing on some innovative strategies from the state of the art and a practical application.
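The speed-up argument for cascades can be illustrated with a toy sketch: a cheap stage rejects most candidate windows before an expensive stage runs. Both "classifiers" below are hypothetical stand-ins, not the chapter's detectors.

```python
# Toy detection cascade: candidates that fail a fast, cheap test never
# reach the expensive test, which is where cascades save time.

def cheap_test(window):
    """Fast rejection stage: keep only windows with enough total intensity."""
    return sum(window) >= 10

def expensive_test(window):
    """Slower, more selective stage (stand-in for a full classifier)."""
    return max(window) >= 8 and sum(window) >= 12

def cascade_detect(windows):
    expensive_evals = 0
    hits = []
    for w in windows:
        if not cheap_test(w):      # most candidates stop here
            continue
        expensive_evals += 1
        if expensive_test(w):
            hits.append(w)
    return hits, expensive_evals

windows = [[0, 1, 2], [3, 3, 3], [9, 4, 1], [2, 2, 2], [8, 8, 8]]
hits, n = cascade_detect(windows)
# Of 5 candidate windows, only 2 reach the expensive stage.
print(len(hits), n)  # -> 2 2
```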

Jerome Revaud, Guillaume Lavoue, Yasuo Ariki, Atilla Baskurt

Fast and Efficient Local Features Detection for Building Recognition

The vast growth of image databases creates many challenges for computer vision applications, for instance image retrieval and object recognition. Large variation in imaging conditions such as illumination and geometrical properties (including scale, rotation, and viewpoint) gives rise to the need for invariant features; i.e. image features should have minimal differences under these conditions. Local image features in the form of key points are widely used because of their invariant properties. In this chapter, we analyze different issues relating to existing local feature detectors. Based on this analysis, we present a new approach for detecting and filtering local features. The proposed approach is tested in a real-life application which supports navigation in urban environments based on visual information. The study shows that our approach performs as well as existing methods but with a significantly lower number of features.

G. P. Nguyen, H. J. Andersen

Visual Perception in Image Analysis

Digital Image Content via Tolerance Near Sets

This chapter considers how visual perception can be used to advantage in image analysis. The key to the solution of this problem was first pointed out by J.H. Poincaré in 1893 in his representation of the results of G.T. Fechner’s 1860 psychophysics experiments with sensation sensitivity in lifting small weights. The focus of Fechner’s experiments was on sensation sensitivity. By contrast, the focus of Poincaré’s rendition of Fechner’s experiments was on determining sets of similar sensations that serve as a model for a physical continuum. In what he later called a representative space (aka tolerance space), Poincaré informally discerned tolerance relations in determining tolerance classes containing perceptually indistinguishable sensations. A formal view of tolerance spaces was first introduced by E.C. Zeeman in 1962 (nearly 70 years after Poincaré’s work on representative spaces). Unlike Poincaré, Zeeman focused on visual acuity in formulating the idea of a tolerance space. By defining a tolerance relation, one provides a basis for a rigorous study of resemblance between perceptual objects such as digital images or observed behaviour patterns of collections of social robots. Eventually, the study of the resemblance of disjoint sets by Z. Pawlak and J.F. Peters, starting in 2002, led to the discovery of a formal basis for measuring the degree of nearness between distinct tolerance spaces. The main contribution of this chapter is the introduction of a form of perceptual image analysis in terms of a methodology for determining the resemblance between pairs of visual tolerance spaces defined within the context of digital images.
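A minimal sketch of a tolerance relation in the above sense: two objects are within tolerance when their feature values differ by at most some threshold eps. The grey-level values below are made-up examples; the point is that, unlike an equivalence relation, tolerance is not transitive, so tolerance classes can overlap rather than partition the universe.

```python
# Tolerance relation sketch: x ~ y iff |f(x) - f(y)| <= eps.

def tolerant(x, y, eps):
    """True when the two feature values are perceptually indistinguishable."""
    return abs(x - y) <= eps

def tolerance_class(x, universe, eps):
    """All objects in the universe within tolerance of x."""
    return {y for y in universe if tolerant(x, y, eps)}

# Hypothetical grey-level feature values of five image patches.
U = [10, 11, 12, 20, 21]
print(sorted(tolerance_class(11, U, 1)))  # -> [10, 11, 12]
print(sorted(tolerance_class(10, U, 1)))  # -> [10, 11]
```

Note that 10 ~ 11 and 11 ~ 12 but 10 is not within tolerance of 12: the classes around 10 and 11 overlap without coinciding, which is exactly why tolerance spaces model perceptual resemblance better than strict equivalence.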

James F. Peters

An Introduction to Magnetic Resonance Imaging: From Image Acquisition to Clinical Diagnosis

Magnetic resonance imaging (MRI) provides a comprehensive and non-invasive view of the structural features of living tissue at very high resolution (typically on the 1-2 mm scale). A variety of pulse sequences have been developed that provide quantitative information regarding the structural features of a variety of tissue classes, providing details that are extremely beneficial in a clinical setting. Unlike positron emission tomography (PET), MRI does not deploy radioactive isotopes, and hence can be performed repeatedly. Modern MRI scanners can provide extremely high resolution images in a relatively short period of time (approximately 20 minutes on average for a typical diagnostic scan). A variety of measurements can be made in a single scanning session through the application of serial pulse sequences. These pulse sequences are computer programmes that control the scanner parameters, which in turn control factors such as tissue contrast. By deploying the appropriate pulse sequence, one can obtain detailed information about the vasculature of a region of the body (magnetic resonance angiogram) or deep tissue injury, and more recently one can obtain information regarding the microstructural features of the brain. Indeed, MRI is routinely used to identify and/or confirm the diagnosis of a variety of brain parenchyma or vasculature diseases, such as multiple sclerosis and stroke respectively. With further improvements in electronics and pulse sequences, more detailed and accurate imaging techniques may provide medical science with the opportunity to automate the diagnosis of a variety of diseases which present ultrastructural changes.

Kenneth Revett

Image Analysis in Poincaré-Peters Perceptual Representative Spaces

A Near Set Approach

The problem considered in this chapter is how to detect similarities in the content of digital images, useful in image retrieval and in the solution of the image correspondence problem, i.e., to what extent does the content of one digital image correspond to the content of other digital images. The solution to this problem stems from a recent extension of J.H. Poincaré’s representative spaces from 1895, introduced by J.F. Peters in 2010, and near sets, introduced by J.F. Peters in 2007. Elements of a perceptual representative space are sets of perceptions arising from n-dimensional image patch feature vector comparisons. An image patch is a set of subimages. In comparing digital images, partitions of images determined by a particular form of indiscernibility relation ~ℬ are used. The L1 (taxicab distance) norm is used in measuring the distance between feature vectors for objects in either a perceptual indiscernibility relation or a perceptual tolerance relation. These relations, combined with finite, non-empty sets of perceptual objects, constitute various representative spaces that provide frameworks for image analysis and image retrieval. An application of representative spaces and near sets is given in this chapter in terms of a new form of content-based image retrieval (CBIR). This chapter investigates the efficacy of perceptual CBIR using Hausdorff, Mahalanobis, as well as tolerance relation-based distance measures to determine the degree of correspondence between pairs of digital images. The contribution of this chapter is the introduction of a form of image analysis defined within the context of Poincaré-Peters perceptual representative spaces and near sets.
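The L1 (taxicab) distance between two feature vectors is simply the sum of absolute coordinate differences. The feature vectors below are made-up examples, not values from the chapter.

```python
# Taxicab (L1) distance between two n-dimensional feature vectors.

def taxicab(u, v):
    """Sum of absolute coordinate-wise differences of two vectors."""
    return sum(abs(a - b) for a, b in zip(u, v))

# Hypothetical 3-dimensional image-patch feature vectors.
p = (2, 5, 1)
q = (4, 1, 1)
print(taxicab(p, q))  # -> 6
```

A perceptual indiscernibility (or tolerance) relation can then be defined by requiring this distance to be zero (or below some small eps) for two patches to count as perceptually similar.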

Sheela Ramanna

Local Keypoints and Global Affine Geometry: Triangles and Ellipses for Image Fragment Matching

Image matching and retrieval is one of the most important areas of computer vision. The key objective of image matching is the detection of near-duplicate images. This chapter discusses an extension of this concept, namely, the retrieval of near-duplicate image fragments. We assume no a priori information about the visual contents of those fragments. The number of such fragments in an image is also unknown. Therefore, we address the problem and propose a solution based purely on the visual characteristics of image fragments. The method combines two techniques: local image analysis and global geometry synthesis. In the former stage, we analyze low-level image characteristics, such as local intensity gradients or local shape approximations. In the latter stage, we formulate global geometrical hypotheses about the image contents and verify them using a probabilistic framework.

Mariusz Paradowski, Andrzej Śluzek

Feature Analysis for Object and Scene Categorization

Feature extraction and selection has always been an interesting issue for pattern recognition tasks. Numerous feature schemes have been proposed and empirically validated for image scene and object categorization problems, whether for general-purpose applications such as image retrieval or for specific domains such as medical image analysis. On the other hand, there have been few attempts to assess the effectiveness of these features using machine learning methods of feature analysis. We review some recent advances in feature selection and investigate the use of feature analysis and selection in two case studies. Our aim is to demonstrate that feature selection is indispensable in providing clues for finding good feature combination schemes and building compact and effective classifiers that produce much improved performance.

Jeremiah D. Deng

Introduction to Curve and Edge Parametrization by Moments

Curve parametrization is the task of determining the parameters of a general curve equation describing a structure in an image (or a surface in a higher-dimensional dataset). A common example is the widespread use of the Hough transform to determine parameters of straight lines in an image. Moment-based methods offer an attractive alternative to Hough-type methods for this task, especially as the number of parameters or the dimension of the space increases. Moment-based methods require no large accumulator array, are computationally efficient, and are robust with respect to pixelization and high-frequency noise. This chapter presents an overview of the state of the art in moment-based curve parametrization techniques. We discuss both abstract mathematical results, guaranteeing the existence of a unique curve corresponding to a given set of moment values and allowing determination of parameter values for specialized quadrature-domain boundary curves, and broad practical reconstructive results for a wide class of curves and hypersurfaces with arbitrarily many parameters and in arbitrarily many dimensions. Examples show the methods applied to analytically-defined image functions, generated images, and real-world images.
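The simplest instance of the idea is line parametrization from moments: first-order moments give the centroid (a point on the line) and second-order central moments give its orientation. This is an elementary illustration only, far simpler than the chapter's methods for general curves, and the sample points are made up.

```python
# Moment-based straight-line parametrization: centroid from first-order
# moments, orientation from second-order central moments.
import math

def fit_line(points):
    """Return (cx, cy, angle): centroid and principal-axis orientation."""
    n = len(points)
    cx = sum(x for x, _ in points) / n                 # m10 / m00
    cy = sum(y for _, y in points) / n                 # m01 / m00
    mu20 = sum((x - cx) ** 2 for x, _ in points)       # central moments
    mu02 = sum((y - cy) ** 2 for _, y in points)
    mu11 = sum((x - cx) * (y - cy) for x, y in points)
    angle = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    return cx, cy, angle

# Points sampled exactly on the line y = x; the recovered orientation
# is 45 degrees and the centroid lies on the line.
pts = [(t, t) for t in range(5)]
cx, cy, angle = fit_line(pts)
print(cx, cy, round(math.degrees(angle)))  # -> 2.0 2.0 45
```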

Irina Popovici, Wm. Douglas Withers

Intelligent Approaches to Colour Palette Design

Colour palettes are used for representing image data using a limited number of colours. As the image quality directly depends on the chosen colours in the palette, deriving algorithms for colour palette design is a crucial task. In this chapter we show how computational intelligence approaches can be employed for this task. In particular, we discuss the use of generic optimisation techniques such as simulated annealing, and of soft computing based clustering algorithms founded on fuzzy and rough set ideas in the context of colour quantisation. We show that these methods are capable of deriving good colour palettes and that they outperform standard colour quantisation techniques in terms of image quality.
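One of the standard baselines that such methods are compared against is clustering-based quantisation in the style of k-means. The sketch below illustrates that baseline only (not the simulated-annealing or rough-set approaches), on a tiny made-up set of RGB pixels.

```python
# K-means-style colour palette design: repeatedly assign each pixel to
# its nearest palette colour, then recompute each colour as the mean of
# its assigned pixels (Lloyd iterations).

def quantise(pixels, palette, iters=10):
    """Refine an initial palette against the given pixels."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    for _ in range(iters):
        clusters = {i: [] for i in range(len(palette))}
        for p in pixels:
            nearest = min(clusters, key=lambda i: dist2(p, palette[i]))
            clusters[nearest].append(p)
        # Mean of each non-empty cluster becomes the new palette colour.
        palette = [tuple(sum(ch) / len(cl) for ch in zip(*cl)) if cl else col
                   for cl, col in ((clusters[i], palette[i])
                                   for i in range(len(palette)))]
    return palette

pixels = [(0, 0, 0), (10, 0, 0), (250, 250, 250), (240, 250, 250)]
palette = quantise(pixels, [(0, 0, 0), (255, 255, 255)])
print(palette)  # -> [(5.0, 0.0, 0.0), (245.0, 250.0, 250.0)]
```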

Gerald Schaefer

Mean Shift and Its Application in Image Segmentation

Mean shift techniques have been demonstrated to be capable of estimating the local density gradients of similar image pixels. These gradient estimates are performed iteratively so that, for each pixel, similar pixels in the image can be identified. In this chapter, we show how the application of a mean shift process can lead to improved image segmentation performance. We present several mean shift-based segmentation algorithms and demonstrate their superior performance compared with classical approaches. Conclusions are drawn with respect to the effectiveness, efficiency and robustness of image segmentation using these approaches.
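The core mean shift iteration can be sketched in one dimension: a point repeatedly moves to the mean of its neighbours within a window, converging on a local density mode. The grey-level values below are made-up; real segmentation applies the same idea in a joint spatial-range feature space.

```python
# 1-D mean shift mode seeking with a flat (uniform) kernel: shift x to
# the mean of the data points within the bandwidth window until stable.

def mean_shift(x, data, bandwidth, iters=50):
    """Return the local mode that the starting point x converges to."""
    for _ in range(iters):
        window = [d for d in data if abs(d - x) <= bandwidth]
        new_x = sum(window) / len(window)
        if abs(new_x - x) < 1e-9:   # converged on a mode
            break
        x = new_x
    return x

# Two clusters of hypothetical grey levels; every pixel converges to
# the mode of its own cluster, which induces a two-segment labelling.
data = [1, 2, 2, 3, 10, 11, 11, 12]
modes = sorted({round(mean_shift(d, data, bandwidth=3), 6) for d in data})
print(modes)  # -> [2.0, 11.0]
```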

Huiyu Zhou, Xun Wang, Gerald Schaefer

Backmatter
