
About this Book

This book contains the proceedings of the 4th International Conference on Image Analysis and Processing, held in Cefalù (Palermo, Italy) on September 23-25, 1987. The aim of this Conference, now at its fourth edition, was to give a general view of current research in the area of methods and systems for achieving artificial vision, as well as to provide up-to-date information on current activity in Europe. A number of invited speakers presented overviews of statistical classification problems and methods, non-conventional architectures, mathematical morphology, robotic vision, analysis of range images in vision systems, pattern matching algorithms, and astronomical data processing. Finally, a survey of the discussion on the contribution of AI to Image Analysis is given. The papers presented at the Conference have been subdivided into four sections: knowledge-based approaches, basic pattern recognition tools, multifeature system-based solutions, and image analysis applications. We must thank IBM Italia and the Digital Equipment Corporation for sponsoring this Conference. We feel that the days spent at Cefalù were an important step toward the mutual exchange of scientific information within the image processing community.

V. Cantoni, Pavia University
V. Di Gesù, Palermo University
S. Levialdi, Rome University

Table of Contents

Frontmatter

Invited Lectures


Morphological Optics

Image analysis methods are first classified into four groups of theories related to optics, after which the associated hypotheses, structures, laboratory equipment and mathematical framework for one of them, better known as morphological optics, are studied in detail. Conclusions are drawn as to the role played by hypotheses in image analysis.

J. Serra

Image Processing Architectures

It has seldom been disputed that the conventional von Neumann computer architecture is inappropriate for image analysis. As increasingly elaborate image processing tasks were attempted and as the size and resolution of the images to be processed increased, so also did it become more and more apparent that adequate processing speeds could never be achieved by systems employing only a single processor. This paper reviews past and current solutions to the problem of combining many processors into a unified system and also discusses the general considerations affecting the choice of multiprocessor architectures for image analysis computing.

M. J. B. Duff

Robotic Vision

Some results of research and development concerning 2-D and 3-D robot vision systems carried out at the V.M. Glushkov Institute of Cybernetics are presented. The problems of 3-D object model construction and algorithms for extracting its structural elements from greyscale and range-finder data are discussed. An example of a 2-D industrial vision system, with its software and hardware, is offered to illustrate practical implementation of the results achieved. Research and development are carried out by means of simulation and a subsequent test of methods and algorithms on real data. This approach is implemented in a specialized hardware and software complex.

Alexander I. Boldyrev, Vitaly I. Rybak

Understanding Unconventional Images

A computational strategy is suggested for images obtained from a laser range finding scanner. The scanned scenes may contain many objects of arbitrary sizes and in arbitrary positions and orientations. No a priori information is available on scene contents but the scenes are assumed to be “man-made”. The computations are mathematically very simple but many steps have to be carried out to discover scene contents. The procedures are practical only if appropriate hardware is designed. The paper also discusses philosophical aspects of image understanding as applied to range images.

T. Kasvand

Statistical Pattern Recognition: The State of the Art

Objects or events in the universe are perceived by biological systems as patterns. Pattern recognition is a process which assigns these sensory stimuli to perceptually meaningful categories. For the last four decades a considerable effort has been made to simulate human pattern recognition capabilities by machine. This quest for automation of pattern recognition processes is primarily driven by applications in computer vision for flexible manufacturing, speech recognition, text recognition, remote sensing, medicine and others. In computer vision for robots, for instance, the pattern recognition task may involve identification of object shape. In speech recognition the object categories may be words, phonemes or diphones, and the sensory data on which classification is based could be a vector-quantised speech signal. In text recognition the objects of interest are characters and the groups of them forming words. Object categories in remote sensing relate to land cover, and the sensory data are reflected energies in several spectral channels of the electromagnetic spectrum.

Josef Kittler

Pattern Matching in Strings

The problem of pattern recognition in strings of symbols has received considerable attention. In fact, most formal systems handling strings can be considered as defining patterns in strings. This is the case for formal grammars, and especially for regular expressions, which provide a technique to specify simple patterns. Other kinds of patterns on words may also be defined (see for instance [23], [4]) but lead to less efficient algorithms.

Maxime Crochemore, Dominique Perrin
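As a concrete illustration of efficient pattern matching in strings (a standard textbook algorithm, not code from the paper itself), the Knuth-Morris-Pratt method finds all occurrences of a pattern in linear time by precomputing a failure function over the pattern:

```python
def kmp_search(text, pattern):
    """Find all occurrences of pattern in text via Knuth-Morris-Pratt."""
    if not pattern:
        return []
    # Failure function: length of the longest proper border of pattern[:i+1].
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text without ever re-reading a text character.
    hits, k = [], 0
    for i, c in enumerate(text):
        while k and c != pattern[k]:
            k = fail[k - 1]
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = fail[k - 1]
    return hits

print(kmp_search("abracadabra", "abra"))  # → [0, 7]
```

Overlapping matches are reported as well, since the failure function restarts the scan at the longest reusable prefix.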

Image Analysis Problems in Astronomy

The topics dealt with in this paper are as follows. Firstly, to set the scene, we briefly overview the major astronomical image processing systems, and make reference to current software engineering problems in this area. Secondly, we survey a range of pattern recognition problems, which all have classification as their central objective. These pattern recognition problems are: object searching and classification in photometry; the classifying of galaxies on the basis of their morphological shapes; and the classification of stellar spectra. Finally, we review some of the specific problems expected for Hubble Space Telescope image data.

F. Murtagh

A Panel on: Pattern Recognition and Image Processing with or without Intelligence?

It is difficult to summarize the different views as expressed by the panelists on a tricky subject, i.e. what help (if any) can Artificial Intelligence provide to pattern recognition and image processing as scientific disciplines.

V. Cantoni, V. Di Gesù, S. Levialdi

Knowledge Based Approaches


A Knowledge Based Approach to Industrial Scenes Analysis: Shadows and Reflexes Detection

This paper describes a framework for detecting regions of special interest, such as reflexes and shadows, in industrial scenes. A new approach involving Artificial Intelligence techniques is tested; in fact, procedural approaches exhibit several drawbacks, such as a lack of flexibility and of transparency in knowledge representation, because knowledge is buried in code. A knowledge-based system is proposed that is tightly interfaced with several low-level image processing tools. A brief description of the image processing modules is presented, together with details concerning knowledge representation and the control strategy adopted. For a better understanding of the system performance and man-machine interface, a detailed analysis session is reported.

M. Adimari, S. Masciangelo, L. Borghesi, G. Vernazza

An Approach to Random Images Analysis

The study of random images requires new approaches: their meaning is strongly dependent on the context of the research field, and the classical techniques for shape analysis are not sufficient. The aim of the paper is the definition of an “open” knowledge-based system for the analysis of problems dealing with such kinds of image data.

V. Di Gesu, M. C. Maccarone

Description-Based Image Interpretation as a Tool for Heuristic Inference: A Geological Application

The interpretation of gravimetric surveys combines with other methodologies such as the interpretation of seismic and magnetic surveys, on-the-spot exploration, etc., to create a geological model of the region in question. This will provide a great deal of information on below-ground tectonics.

Anna Della Ventura, Piero Mussio, Raimondo Schettini

The Fur Project: Understanding Functional Reasoning

The FUR (FUnctional Reasoning) project aims to develop a computational model for the representation and use of functional knowledge. Reasoning in terms of function seems to be very common: in everyday language, for instance, objects are often referred to in terms of the function they provide (e.g. a “washing machine”). In many cases a relation exists between function and shape, i.e. function can be inferred from structure and vice versa. Tools like hammers, screwdrivers or spanners are well-known examples.

M. DiManzo, E. Trucco, F. Giunchiglia, F. Ricci

A Knowledge Based Approach for Image Understanding

Many works in the field of image processing stress the utility of using Artificial Intelligence tools to obtain enhanced performance in the scene understanding task. Feigenbaum (1977) states that “..the power of an expert system derives from the knowledge it possesses, not from the particular formalism and inference schemas it employs…”. In our opinion this assertion can be extended to all complex cognitive problems (such as natural language and automatic image understanding) in which human reasoning capabilities are required. “From this point of view a theory for the vision problem resolution must necessarily contain elements of a more general theory of thinking (Minsky, 1974)”. In order to translate these philosophical statements into action, we have to understand what kind of knowledge is useful for solving the vision problem and how to represent it. Looking at the image understanding problem as a perception problem, it is possible to identify two different knowledge sources. The first descends from the perceptual grouping laws of visible entities; the other derives directly from an explicit description of the objects to be recognized in the image describing a real scene. Normally these two knowledge sources are used in a hierarchical fashion, with special emphasis on one source at the expense of the other. Moreover, in many of these applications a low-level processing stage attached to the image extracts features, and a high-level step tries to match the previously selected features with the model descriptions. A drawback of this approach is that an intelligent matching can only be applied to an essentially non-intelligent image partition.

S. Losito, G. Pasquariello, G. Sylos-Labini, A. Tavoletti

A Dynamic Programming Approach to Knowledge Based Contour Segmentation

The contour segmentation problem is investigated in the context of an object recognition system dealing with overlapping parts. A knowledge-based approach is adopted in order to obtain stable results. Contours are described by their curvature function. Angles are detected in a preliminary phase. Dynamic programming allows a fast approximation with straight lines and circular arcs. A simple rule-based system is developed in order to choose the best approximation. Some results are discussed.

F. Mangili, G. Viano
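The dynamic programming step can be sketched for the straight-line case (circular arcs, angle detection and the rule base described in the abstract are omitted; the point list, tolerance and function names below are illustrative, not from the paper):

```python
import math

def seg_error(pts, i, j):
    # Maximum perpendicular distance from pts[i..j] to the chord pts[i]-pts[j].
    (x1, y1), (x2, y2) = pts[i], pts[j]
    length = math.hypot(x2 - x1, y2 - y1) or 1.0
    return max(abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1)) / length
               for (x, y) in pts[i:j + 1])

def dp_polyline(pts, tol):
    """Fewest-vertex polyline through pts with chord error <= tol,
    found by dynamic programming over break points."""
    n = len(pts)
    cost = [math.inf] * n   # cost[j]: fewest segments covering pts[0..j]
    prev = [0] * n
    cost[0] = 0
    for j in range(1, n):
        for i in range(j):
            if cost[i] + 1 < cost[j] and seg_error(pts, i, j) <= tol:
                cost[j] = cost[i] + 1
                prev[j] = i
    # Backtrack the chosen break points.
    out, j = [n - 1], n - 1
    while j:
        j = prev[j]
        out.append(j)
    return out[::-1]

print(dp_polyline([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)], 0.1))  # → [0, 2, 4]
```

The O(n²) search over break points is what makes the approximation globally optimal rather than greedy.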

Learning from Examples in Computer Vision: Preliminary Statements

Learning from examples is perhaps the field where inductive learning is most helpful and where the results of the studies seem clearest.

V. Cantoni, L. Lombardi

Automatic Training in Statistical Pattern Recognition

The traditional way of constructing a statistical classification procedure involves estimation of class densities from a set of feature vectors from each class. In order to be effective these training sets often must be rather large, for example of the order of 100 from each class. This training stage is sometimes very costly and can involve many hours of tedious labelling and editing work. The present paper proposes a way of greatly reducing or almost avoiding this bottleneck stage, by automatically updating class descriptions via exploitation of the unclassified feature vectors. We treat the multivariate normal case in particular, but mention generalisations suitable for non-normal cases, including completely nonparametric updating methods. Application of the normal-based updating methods to two symbol recognition tasks is discussed.

Nils Lid Hjort, Torfinn Taxt
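The updating scheme described can be illustrated, in a much-simplified one-dimensional form, by a single EM-style iteration over unlabeled data (the paper treats the multivariate normal case; the function name and test data here are illustrative assumptions):

```python
import math

def em_update(classes, unlabeled):
    """One EM-style refinement of 1-D Gaussian class descriptions
    (mean, sd, prior) using unlabeled feature values."""
    def pdf(x, m, s):
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    # E-step: posterior class probabilities for each unlabeled sample.
    resp = []
    for x in unlabeled:
        w = [p * pdf(x, m, s) for (m, s, p) in classes]
        z = sum(w)
        resp.append([wi / z for wi in w])
    # M-step: re-estimate mean, sd and prior from the soft assignments.
    out = []
    for k, (m, s, p) in enumerate(classes):
        rk = [r[k] for r in resp]
        n = sum(rk)
        mean = sum(r * x for r, x in zip(rk, unlabeled)) / n
        var = sum(r * (x - mean) ** 2 for r, x in zip(rk, unlabeled)) / n
        out.append((mean, math.sqrt(max(var, 1e-9)), n / len(unlabeled)))
    return out
```

Starting from rough class descriptions, each pass pulls the estimates toward the structure of the unlabeled sample, which is the bottleneck-reducing idea of the abstract.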

A New Knowledge Driven, Omnifont, Multiline OCR Process

This paper describes a new knowledge-driven, multiline, omnifont, feature extraction OCR process. A new representation of the alphabet knowledge allows usage of a limited set of rules when building a character description, disregarding particularities of font style and size, and directs the recognition process to the creation of a specific limited set of hypotheses, and their consequent test and verification. The process, although tested mostly with printed characters, was designed with the possibility of extension to handprinted and handwritten characters.

Jose Paster, Evelina Zemelman

Basic Pattern Recognition Tools


Optimal Convex Set Included in a Binary Figure

The interpretation of a binary figure requires geometric methods. Access to an interpretation can be obtained either by an approximation process or by a decomposition process. For example, when using the concept of convexity, we use the convex hull notion to obtain an approximate representation of the initial figure. In this paper we present a new method oriented to the search for a convex set included in an initial figure.

Jean-Marc Chassery
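For reference, the convex hull mentioned as the classical approximation tool can be computed with Andrew's monotone-chain algorithm (a standard method, shown only for context; it is not the inclusion technique proposed in the paper):

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull; returns hull vertices counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return pts
    def half(seq):
        h = []
        for p in seq:
            # Pop while the last two kept points and p do not make a left turn.
            while len(h) > 1 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                                  - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
```

The hull circumscribes the figure; the paper's problem is the dual one of finding a large convex set inscribed in it.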

A New Concept for Binary Images: The Kernel

In image processing, skeletonization of binary patterns consists in thinning the pattern until a line drawing is obtained. The thinned pattern, called the skeleton, must preserve the connectedness and shape of the original one. Many skeletonization algorithms exist, such as those of Hilditch (5), Stefanelli and Rosenfeld (6), Chassery (2), …

M. Lamure, J. J. Milan

Graph Environment from Medial Axis for Shape Manipulation

This paper deals with shape description using a representation by shape covering with primitives. In particular, I will present methods developed from the medial axis transform in discrete space, where the primitives are squares. After a presentation of the methods which define a graph environment to structure these primitives, a method based on the medial line transform will be presented. The resulting medial line graph can be used for shape manipulation processes such as filtering or decomposition. Each graph node is associated with a contribution to the original shape.

Annick Montanvert

Weighted Distance Transforms: A Characterization

In many instances, it is convenient to label the space enclosed within the contour of a single-valued digital figure F. Labeling F by means of its distance transform DT has been one of the first approaches to give structure to an otherwise amorphous space, and has been useful to reveal some of its features, especially those dependent on shape. In this framework, the set of local maxima present in the DT plays a crucial role. In fact, the local maxima are necessary to identify the medial axis of F /1/. Moreover, figure decomposition techniques can be derived by suitably grouping the discs associated with the local maxima /2/.

Carlo Arcelli, Gabriella Sanniti di Baja

Distance Transformations in Hexagonal Grids

A distance transformation converts a binary image, consisting of feature and non-feature pixels, into a distance image. In this distance image each non-feature pixel has a value that approximates (or is equal to) the distance to the nearest feature pixel. Distance transformation will be denoted DT henceforth. In this paper DTs for the hexagonal pixel grid are derived and presented. A very small example of such a DT is shown in Fig. 1.

Gunilla Borgefors
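A square-grid counterpart of such a transform is the classical two-pass 3-4 chamfer DT (the paper derives the hexagonal-grid versions; this sketch uses the familiar square grid for simplicity):

```python
def chamfer_dt(img):
    """Two-pass 3-4 chamfer distance transform of a binary image
    (1 = feature pixel); returns integer distances to the nearest feature."""
    INF = 10 ** 9
    h, w = len(img), len(img[0])
    d = [[0 if img[y][x] else INF for x in range(w)] for y in range(h)]
    # Forward pass: propagate from the upper-left neighbours.
    for y in range(h):
        for x in range(w):
            for dy, dx, c in ((0, -1, 3), (-1, -1, 4), (-1, 0, 3), (-1, 1, 4)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y][x] = min(d[y][x], d[ny][nx] + c)
    # Backward pass: propagate from the lower-right neighbours.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            for dy, dx, c in ((0, 1, 3), (1, 1, 4), (1, 0, 3), (1, -1, 4)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y][x] = min(d[y][x], d[ny][nx] + c)
    return d
```

The weights 3 (edge neighbour) and 4 (diagonal neighbour) approximate Euclidean distance scaled by 3; on a hexagonal grid the neighbourhood and weights change, but the same two-pass scheme applies.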

Optimization of the Generalized Hough Transform

Many image processing problems require curve detection. These include vision-directed automation, remote control of vehicles, biomedical applications and so on. The Hough transform [1][2] is a technique for detecting straight lines within a noisy image; it was later adapted for the detection of circles, ellipses and other analytically defined shapes. This method has been modified by D. H. Ballard [3] for detecting arbitrary shapes, and is called the generalized Hough transform.

Makoto Sato, Hidemitsu Ogawa
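The basic straight-line Hough transform the abstract builds on can be sketched as a voting accumulator (the paper's optimization and the generalized form are not reproduced here; the function name and resolution are illustrative):

```python
import math

def hough_lines(points, n_theta=180):
    """Vote in (theta, rho) space: each edge point (x, y) lies on every line
    rho = x*cos(theta) + y*sin(theta); collinear points pile up in one cell."""
    acc = {}
    for (x, y) in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    best = max(acc, key=acc.get)  # strongest line as (theta index, rho)
    return best, acc
```

For ten points on the horizontal line y = 5, the cell at theta = 90 degrees, rho = 5 collects all ten votes; the generalized transform replaces the analytic parameterization with a table of edge-to-reference displacements.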

Synaptic Patterns for Straight Line Segment Detection

In this paper the mechanism of straight line segment detection in the brain of primates is investigated. Specifically, a synaptic pattern model of small neuronal assembly units for segment detection is proposed and some results of its simulation are shown.

S. Impedovo, M. Castellano, A. Giannelli

The Measurement of Binocular Disparity

Current stereopsis algorithms rely on the detection of sophisticated landmarks from bandpass versions of the monocular images. The process of extracting these landmarks and determining their inter-ocular correspondence is considered to be one of the hard computational tasks in stereopsis. In this paper we propose that symbolic features should not be extracted in the first stages of processing; rather, we propose a technique for measuring the local phase difference between the two images. The local phase difference can be used to measure the relative local disparity between the monocular images. A later level of processing must be used to reduce the “false targets” that may be detected.

Michael R. M. Jenkin, Allan D. Jepson

Point Pattern Matching and Corner Finding For Line Drawings

This paper proposes a robust method for matching labeled point patterns. A point pattern is partitioned into a set of triangles using the Delaunay triangulation. For the corresponding triangle pair, a consistency graph is constructed based on the pairwise compatibility between the points in the triangles. The matching is accomplished by locating the largest maximal clique of the consistency graph. A new method for detecting corners is also proposed, based on the local symmetry of a discrete curve. The corners are used as the feature points in point pattern representation and matching of the line drawings.

Hideo Ogawa

Photometric Approach to Tracking of Moving Objects

A new approach to object tracking in industrial environments is presented. It is based on three dimensional information on objects gathered by means of the stereophotometric technique. Knowledge about the set of objects we deal with and constraints on their motion allow a simplified reasoning to solve the correspondence problem between objects in two consecutive images. Finally the performances of such a tracking system will be discussed.

V. Cantoni, L. Carrioli, M. Diani, M. Savini, G. Vecchio

A Fast Algorithm For Moment Invariants Generation

Moment invariants have been used as feature descriptors in a variety of object recognition applications. When assuming a continuous image function, moments calculated using a double-integral formulation are invariant to variations in translation, rotation, and size of the object. However, due to the recursive nature of the calculations and the limited speed of microprocessors, the moments have not been computable in real time. In this paper we present real-time invariant moment computations using the ‘Delta Method’, as a means of scene representation.

Marwan F. Zakaria, Louis J. Vroomen, Paul J. A. Zsombor-Murray, Jan M. H. M. Van Kessel
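The moment machinery involved can be sketched with discrete central moments and the first Hu invariant (the paper's real-time ‘Delta Method’ itself is not reproduced; the dict-based image format is an illustrative assumption):

```python
def central_moment(img, p, q):
    """Central moment mu_pq of a grey-level image given as a dict {(x, y): value}."""
    m00 = sum(img.values())
    xc = sum(x * v for (x, y), v in img.items()) / m00
    yc = sum(y * v for (x, y), v in img.items()) / m00
    return sum(((x - xc) ** p) * ((y - yc) ** q) * v for (x, y), v in img.items())

def hu1(img):
    """First Hu invariant phi1 = eta20 + eta02, invariant to translation,
    rotation and scale (eta = central moment normalized by a power of m00)."""
    m00 = sum(img.values())
    eta = lambda p, q: central_moment(img, p, q) / m00 ** (1 + (p + q) / 2)
    return eta(2, 0) + eta(0, 2)
```

Centring removes translation dependence, the m00 normalization removes scale dependence, and the symmetric combination eta20 + eta02 removes rotation dependence.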

Model Generation from Images

With the recent increased interest in model-based matching [1-4] has appeared a concomitant interest in automated model creation [5-7]. Ideally, models should be constructed automatically from a scene, so that the process of matching (for example) would require no human intervention. In any case, when computer models of an object are needed but are not available, it would be convenient to have an automatic method for constructing the models from image data.

J. R. Stenstrom, C. I. Connolly

A Triangle Based Data Structure For Multiresolution Surface Representation

A hierarchical model for approximating 2-1/2 dimensional surfaces is described. This model, called Delaunay pyramid, is a method for compression of spatial data and representation of a surface at successively finer levels of detail. The Delaunay pyramid is based on a sequence of Delaunay triangulations of suitably defined subsets of the set of data points.

Leila De Floriani

Algorithmic Information of Images

Men and animals instantly detect “regularity” and “constancy” in visual patterns (and, in general, their structural aspects) with high efficiency. Our approach to evaluating the structural content of a pattern starts from the definition of algorithmic information or complexity, given by Kolmogorov and Chaitin, in which we distinguish two parts containing the metric and structural aspects of the pattern. SIT theory, developed by Leeuwenberg et al. in the area of visual perception, allows one to evaluate efficiently the structural complexity of a linguistic pattern code. We analyse the formal properties of SIT in the context of the theory of reduction calculi.

O. Martinoli, F. Masulli, M. Riani

Effects of Heterogeneity of Variance on the Probability of Correctly Identifying the Best Normal Population

This study examines the effects of heterogeneity of variance on the probability of making the correct selection when using the means procedure for selecting the population with the largest mean from a set of independent normal populations. The study is conducted by using Monte Carlo simulation techniques for 3, 4, and 5 normal populations as an application of pattern recognition and classification. The population means and standard deviations are assumed to be equally spaced. Two types of heterogeneity of variance are considered: (1) associating larger variances with larger means, and (2) associating smaller variances with larger means.

Adel M. Zaher, Zaki A. Azmi
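The simulation setup described can be sketched as follows (the study's exact configurations, spacings and trial counts are not reproduced; the seed, sample size and function name are illustrative):

```python
import random
import statistics

def prob_correct_selection(means, sds, n, trials=2000, seed=1):
    """Monte Carlo estimate of the probability that the means procedure
    (pick the population with the largest sample mean) selects the
    population with the largest true mean."""
    rng = random.Random(seed)
    best = max(range(len(means)), key=means.__getitem__)
    hits = 0
    for _ in range(trials):
        xbar = [statistics.fmean(rng.gauss(m, s) for _ in range(n))
                for m, s in zip(means, sds)]
        if max(range(len(xbar)), key=xbar.__getitem__) == best:
            hits += 1
    return hits / trials
```

Inflating the variance of the best population (the paper's first heterogeneity type) visibly lowers the estimated probability of correct selection relative to the homogeneous case.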

Multifeatures and System Based Solutions


Integrating Disparity Measurements Over Space and Spatial-Frequency

Rather than build a stereopsis model based upon determining correspondences between sophisticated monocular features such as zero-crossings or peaks in band-pass versions of the monocular input, we propose that disparity detectors should be constructed that act directly upon band-pass versions of the monocular inputs. Building upon the results of these disparity detectors, we show that a simple surface model based on object cohesiveness and local planarity across a range of spatial-frequency tuned channels can be used to reduce false matches. The resulting local planar surface support could be used to segment the image into planar regions in depth. Due to the independent nature of both the disparity detection and local planar support mechanism, this method is capable of dealing with both opaque and transparent stimuli.

Michael R. M. Jenkin, Allan D. Jepson, John K. Tsotsos

Improving Boundary Contour Matching Using Viewing Transforms

Boundary contour matching typically involves classifying a sequence of curves and deciding what class of object that sequence represents. The geometry of the shapes is ordinarily a factor in the curve classification. Once the curves are classified the shape information is usually ignored. The variation of curve shapes in the object contour boundary should arise from a single consistent viewing transform. In this paper, techniques are developed to insure that boundary curve sequences reflect a consistent viewing transformation.

J. Ross Stenstrom

3-D Range Estimation From the Focus Sharpness of Edges

In the proposed method the focus sharpness of edge points in a recorded image is used to estimate their actual position in 3-D space. The amount of blurring in a picture depends on the distance of the object details from the plane which is “in focus” in the observed scene. Hence, from such measures it is possible to provide rough depth estimates on the strong edges of a recorded image. Moreover, sharpness estimates can be effectively used as additional features of the detected contours in other image registration problems (stereo and motion). Some examples are enclosed to show the performance of this approach on a set of real scenes.

G. Garibotto, P. Storace

Object Recognition and Location by a Bottom-Up Approach

The advantages deriving from a hierarchical implementation of the “Labeled Hough Transform” are pointed out. It offers flexibility, confidence and a reduced computation time. Moreover, it furnishes more information, useful when we face the partial occlusion problem. After a description of the technique and its main characteristics, we present some encouraging experimental results obtained with artificial images.

V. Cantoni, L. Carrioli, M. Diani, M. Ferretti, L. Lombardi, M. Savini

The Dynamic Pyramid: A Model for Motion Analysis with Controlled Continuity

The Dynamic Pyramid is a model to solve the correspondence problem of image sequences. A robust estimation of local displacements is combined with controlled continuity constraints. At the heart of the model is the functional of an elastic membrane whose elastic constants are subject to variation. The continuity control function is derived from the tension in the displacement vector field at grayvalue edges. The displacement term of the functional is based on robust local binary correlations derived from the signs of the bandpass filtered images. The basic representation of the model is the pyramid: The original images are converted into Laplacian pyramids, the signs of which are the features to determine the local displacements as well as the continuity control function. The vector field is built up as a pyramid from coarse to fine, giving the final displacement vector field at the finest level.

J. Dengler, M. Schmidt

Combining Laplacian-Pyramid Zero-Crossings: From Theory to Applications to Image Segmentation

Image segmentation is a transformation of the original pixel array into a much more compact description, whose primitive elements should both represent complete information and capture significant properties of the physical world. A basic problem with image segmentation is that on one side we want to capture global properties of structures, but on the other side we do not want to lose information about local details.

G. Gerig

A Parallel Pyramidal Algorithm to Determine Curve Orientation

We present in this paper a parallel algorithm to determine curve orientation, which executes efficiently on a pyramidal machine. Independently of its direct interest for image processing, it gives an example of the methodology needed to conceive new algorithms for massively parallel architectures. Some variations are presented, which extend the method to a class of algorithms.

Ph. Clermont, A. Belaid, A. Merigot

Parallel Image Processing in a CSP-Environment: Performance Evaluation

The field of Computer Vision is characterized by the need to process very large amounts of data in a time that, for many applications, is extremely short. Moreover, many of the algorithms proposed for solving the so-called low-level vision tasks exhibit a high degree of parallelism, while, as one ascends to higher perception levels, the need to resort to complex reasoning schemes becomes evident: information derived from a variety of sophisticated computations, involving sequential processing, has to be combined with a knowledge base. In any case, all of this has to be performed fast enough to interact with real-world changes. Hence the demand for computer arrays whose structure reflects the problem’s structure, and for powerful tools that allow an optimal mapping of logical to physical architectures.

R. Chianese, L. P. Cordella, M. De Santo, R. Marcelli, M. Vento

Image Restoration by Fast Local Convolution

A local spatial convolution filter for the restoration of one- and two-dimensional signals is suggested whose design is based on approximating any global linear restoration filter, which might be the Wiener, pseudoinverse, constrained least squares, or projection filter. The local filter provides a restoration that is as close as possible to the global restoration. It is shown by an example using a blurred standard image that the restorations are satisfactory even when the filter size is quite small. Quantitative properties of the suggested localization filter are discussed.

Erkki Oja, Jouko Lampinen

A Cost-Effective Architecture for Vision

Parallel processing appears to be the only solution to the performance bottleneck of current industrial vision systems, mostly based on von Neumann processors. In this paper two of the most common approaches to date are analyzed from the point of view of cost-effectiveness: SIMD arrays are more flexible, simpler to design, and have lower latency than pipeline processors. FAMA, a new SIMD-array-based architecture, is introduced, whose main features are cost-effectiveness at the processor element level and applicability to different levels of processing.

A. Alcolea, A. Roy, A. Martínez, P. Laguna, J. Navarro, T. Pollán, S. J. Vicente

Hierarchical Specification of Image Data Types and Level Language

Several reasons have led to the structuring of image processing and the formulation of this structuring by programming languages. The most important of these reasons is the need for providing clearer specification, and better control of associated image data structures, and for greater clarity in program structures, in order to increase the readability and make debugging easier. Another important reason is the desire to exploit new possibilities offered by recent advances in hardware technology. Many dedicated processors can now cooperate in parallel execution to provide efficient parallel implementation for some of the most commonly used image processing algorithms, and this for a manageable level of investment.

A. Belaid, Z. Boufriche

Geometric Transformations of Raster Images on SIMD Processors

The geometric rectification of images is an important preprocessing task. Dewarping is necessary especially in multispectral and multitemporal image analysis to achieve registration with subpixel accuracy. Lines, edges, and textures are essential features for segmentation and object recognition and should not be destroyed by coarse signal interpolation techniques. This paper shows that the Single Instruction Stream Multiple Data Stream (SIMD) processing scheme is well suited to the geometric transformation task.

Wolfgang Wilhelmi
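The per-pixel uniformity that makes geometric transformation SIMD-friendly can be seen in a scalar sketch of inverse mapping with bilinear interpolation (function names and the mapping example are illustrative, not from the paper):

```python
def bilinear(img, x, y):
    """Bilinear interpolation of image img (list of rows) at real coordinates (x, y)."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    return ((1 - dx) * (1 - dy) * img[y0][x0] + dx * (1 - dy) * img[y0][x1]
            + (1 - dx) * dy * img[y1][x0] + dx * dy * img[y1][x1])

def warp(img, inverse_map, out_w, out_h):
    """Dewarp by inverse mapping: every output pixel fetches its source
    coordinate and interpolates. The same code runs for every pixel,
    which is what makes the scheme a natural fit for SIMD execution."""
    out = []
    for v in range(out_h):
        row = []
        for u in range(out_w):
            x, y = inverse_map(u, v)
            inside = 0 <= x < len(img[0]) and 0 <= y < len(img)
            row.append(bilinear(img, x, y) if inside else 0)
        out.append(row)
    return out
```

Unlike nearest-neighbour resampling, the weighted average preserves the line, edge and texture detail the abstract warns about losing.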

A Pictorial and Textual IR Environment based on Image Description

The availability of more and more on-line mass storage has made feasible the realisation of new applications in the management of image archives by means of Information Retrieval (IR) techniques. These applications can be realised by using documents that contain both formatted and unformatted information describing the image itself. In addition, the documents contain suitable pointers that allow the images to be retrieved from external memory. The recording medium does not matter: microfilm, magnetic or optical disks, and so on. Formatted information and descriptions are processed with IR techniques in order to retrieve the documents and obtain a reproduction of the desired images.

Isabella Gagliardi, Dora Merelli, Fulvio Naldi, Piero Mussio, Marco Padula, Marco Protti

Image Analysis - Applications


A Low Cost 3-D Vision System for Robotic Assembly

A 3-D vision system designed to enable automatic acquisition of mechanical parts in a bin by means of an assembly robot is presented. Hardware cost is kept low by using standardised parts: a microcomputer, a frame grabber, a translating table and one or two video cameras. Yet versatility is preserved and real time industrial rate is achieved. A very fast recognition method has been designed which is based on local features of the objects to be identified in the bin and metric information. Early processing steps avoid lengthy treatments such as systematic edge detection. Rather a few local features are firstly identified in an image in order to achieve a rough estimate of the correspondence between an object in the bin and a stored model of the same object. This correspondence is then confirmed and refined iteratively. Depth information is obtained by stereoscopy. An experiment involving the acquisition of snubber valves is presented.

Maurice Poulenard, Georges Stamon

Determination of Egomotion and Environmental Layout from Noisy Time-Varying Image Velocity in Monocular Image Sequences

In this paper, we present an algorithm for computing the motion and structure parameters that describe egomotion and environmental layout from image velocity fields generated by a moving monocular observer viewing a stationary environment. Egomotion is defined as the motion of the observer relative to his environment and can be described by 6 parameters: 3 depth-scaled translational parameters, $$\vec u$$, and 3 rotation parameters, $$\vec \omega $$. Environmental layout refers to the 3-D shape and location of objects in the environment. For monocular image sequences, environmental layout is described by the normalized surface gradient, $$\vec \alpha $$, at each image point. To determine these motion and structure parameters we derive nonlinear equations relating the image velocity at some image point $$\vec Y(\vec P'\,,t')$$ to the underlying motion and structure parameters at $$\vec Y(\vec P,t)$$. The computation of egomotion and environmental layout from image velocity is sometimes called the reconstruction problem: we reconstruct the observer’s motion, and the layout of his environment, from (time-varying) image velocity. Much research has been devoted to devising reconstruction algorithms. A little-addressed issue, however, concerns their performance for noisy input: how accurate does the input image velocity have to be to yield useful output?

John L. Barron, Allan D. Jepson, John K. Tsotsos

Heuristic Description of Spatial and Temporal Behaviour of Rain Patterns Using Simple Physical Model

The goal of this paper is to present a case of image interpretation that illustrates a design aspect of a tool developed for radar images.

Alona Pawlina Bonati, Piero Mussio

A Framework for Region Characterization in Remote Sensing Images by a Fractal-Based Approach

The fractal-based technique is one of the most recent approaches in texture analysis; the potential of this approach is under investigation in various fields, for the analysis of images, curves and natural 3-D structures. In this work, we have verified the hypothesis of fractal behaviour of intensity surfaces of remote sensing images by appropriate tests. Then we have applied a specific method for the estimation of fractal dimension (the so-called “blanket” method) for texture discrimination in remote sensing images (in particular, in satellite SAR images, Meteosat images and aerial photographs). By analyzing the fractal dimension histograms and splitting them appropriately, a segmentation of the analyzed images was also implemented. Interesting results in texture discrimination and image analysis have been obtained for almost all the considered images.
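The "blanket" method grows an upper and a lower envelope around the intensity surface, one greyscale step at a time, and reads the fractal dimension off the scaling of the enclosed surface area A(ε) ∝ ε^(2−D). A minimal sketch in the spirit of the method (not the authors' code; the structuring element and scale range are assumptions):

```python
import numpy as np

def blanket_fractal_dimension(img, scales=range(1, 6)):
    """Estimate the fractal dimension of an intensity surface.

    At each scale eps the upper blanket u is raised by 1 and dilated
    over the 4-neighbourhood; the lower blanket b is lowered and eroded.
    The surface area A(eps) = sum(u - b) / (2 eps) then scales as
    eps**(2 - D), so D = 2 - slope of log A versus log eps.
    """
    u = img.astype(float).copy()
    b = img.astype(float).copy()
    areas = []
    for eps in scales:
        up = np.maximum.reduce([np.roll(u, s, axis=a)
                                for a in (0, 1) for s in (-1, 1)])
        u = np.maximum(u + 1, up)
        low = np.minimum.reduce([np.roll(b, s, axis=a)
                                 for a in (0, 1) for s in (-1, 1)])
        b = np.minimum(b - 1, low)
        areas.append((u - b).sum() / (2 * eps))
    slope = np.polyfit(np.log(list(scales)), np.log(areas), 1)[0]
    return 2 - slope
```

A perfectly smooth surface yields a constant A(ε) and hence D = 2, while rougher textures push D toward 3; thresholding a histogram of per-window estimates gives the segmentation described in the abstract.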

L. Giberti, L. Piccollo, S. Dellepiane, S. B. Serpico, G. Vernazza

Issues in The Integration of Spatially-Distributed Data Ancillary to Remotely Sensed Images

In remote sensing, integration is the task of bringing together information for the systematic analysis of spatially-distributed data by digital image processing. This paper discusses some practical problems that are too often disregarded or underestimated when attempting to evaluate the cost and complexity of a data-integration task and its impact on a research project. Topics such as an application-independent approach to integration, the knowledge expected of a user of remotely-sensed data, data formats, media, metrics, and uncertainty, the cost of constructing an integrated data set, vector and raster data conversion, and some analytical requirements of integrated data, are discussed with an eye to the practical challenges of the present and of the future in remote sensing. This paper intentionally avoids emphasizing the power of special-purpose computer systems and comprehensive organizational schemes in order to focus on practical problems that still make data integration one of the most difficult and costly tasks in remote sensing.

Andrea G. Fabbri, Ko B. Fung, Tonis Kasvand

Reconstruction and Representation of 3-D Surfaces from Geophysical Data

In this paper a method for reconstructing and representing 3-D surfaces from a set of parallel planar slices or cross sections is proposed. The method is quite general, but we focus on an application in geophysics, namely the 3-D reconstruction of geological layers, with a sequence of seismic sections as input. To generate a surface description, three steps are involved. First, the surface contours at each slice are detected. A surface contour is the image of a curve representing the cross section of the surface with the plane of the slice. Second, a procedure for relating each surface contour to the corresponding one in the successive slice is performed, and a triangulation process is carried out over pairs of corresponding contours to generate the local bounding surface structure. Finally, indications about the surface structure are obtained by analyzing the orientation of adjacent triangular patches. In this paper we present the first step in detail; the input seismic sections are converted into digital images and an algorithm for detecting the layer surface contours is described. Some initial results of 3-D surface reconstruction are also included.
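The triangulation step between two corresponding contours is commonly done with a greedy shortest-edge rule. A sketch of that classic rule (illustrative only, not the authors' procedure; the alignment at index 0 is an assumption):

```python
import math

def triangulate_between(c1, c2):
    """Triangulate the band between two corresponding cross-section
    contours: at each step, advance along whichever contour yields the
    shorter spanning edge.

    c1, c2 are lists of (x, y, z) points, assumed already aligned at
    index 0. Returns the triangles as triples of points; two open
    contours of m and n points always produce m + n - 2 triangles.
    """
    tris, i, j = [], 0, 0
    while i < len(c1) - 1 or j < len(c2) - 1:
        can1 = i < len(c1) - 1
        can2 = j < len(c2) - 1
        if can1 and (not can2 or
                     math.dist(c1[i + 1], c2[j]) <= math.dist(c1[i], c2[j + 1])):
            tris.append((c1[i], c1[i + 1], c2[j]))
            i += 1
        else:
            tris.append((c1[i], c2[j], c2[j + 1]))
            j += 1
    return tris
```

The orientations of adjacent triangles produced this way are what the final analysis step of the abstract inspects.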

Maria F. Costabile

Hipparcos Project: Imaging Approach to Multiple Star Recognition

The data collected by the HIPPARCOS satellite can be used in an imaging approach, as in the VLBI field, to reconstruct the star system image and to extract the astrometric parameters. The method is able to treat multiple star systems to determine the stars' positions and their intensities. It is not able to process variable stars, very faint stars and systems with high orbital motion. This paper gives a short description of the approach and presents a complete outline of the implemented algorithm. Results related to CPU time usage and to reduction results are given, with particular emphasis on detectable separation limits and parameter precision.

L. Borriello

Segmenting Positron Emission Tomography (PET) Imagery

PET provides images of the concentration of radioactivity throughout the organs of patients. This information is relevant because it gives a measure of the functional activity of the various organ regions. The analysis of these images involves the delineation of anatomical regions of interest within the PET scan. An algorithm for automatically delineating regions of interest in PET images is presented in this paper.

G. G. Pieroni, I. Rousseau, N. Volkow

Design and Testing of a Classification System which Recognizes Coronary Stenosis by Site and Relative Severity, Using Myocardial Tl-201 Scintigrams

The main theme of this paper is an implementation of fuzzy clustering in order to determine whether perfusion patterns exist which are diagnostically specific for the location of stenosis in a particular major coronary artery. During the past decade, Tl-201 myocardial scintigraphy has proven its value for the detection of ischemic heart disease, and, more recently, for establishing prognosis [1]. Other clinically important data, however, such as the site of greatest coronary artery obstruction and its relative severity compared to stenoses in other arteries, has not been as readily derived from planar Tl-201 scintigrams.
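Fuzzy clustering of the kind applied above assigns each scintigraphic perfusion pattern a graded membership in every cluster rather than a hard label. A minimal fuzzy c-means sketch (the standard generic algorithm, not the paper's implementation; fuzzifier m, iteration count and initialization are assumptions):

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy c-means.

    X: (n_samples, n_features) data matrix.
    Returns the membership matrix U (n_samples, n_clusters), whose rows
    sum to 1, and the cluster centers.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m                       # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)         # guard against zero distances
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers
```

The graded memberships matter clinically: a pattern lying between two stenosis-site clusters is reported as ambiguous instead of being forced into one class.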

Krzysztof J. Cios, Lucy S. Goodenday

Automatic Interpretation of Digital Autoradiograph of DNA Sequencing Gels

An image processing system has been developed for sequencing DNA gels by digital autoradiography. A multi-wire proportional counter (MWPC) images DNA band patterns which form tracks on an electrophoresis gel. Algorithms have been developed to interpret the MWPC image to obtain a DNA sequence. The algorithms include: (1) a dynamic programming procedure for track detection; (2) a maximum entropy deconvolution algorithm for smoothing and sharpening the image; and (3) a procedure for assigning the correct band sequence. The sequence produced by this method can be confirmed by human operators working from conventional film autoradiographs. The algorithm is being evaluated on various gels and methods for incorporating the knowledge base are currently being investigated. With these improvements we expect the system will approach the performance of expert sequencers.
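Dynamic-programming track detection of the kind listed as step (1) is typically posed as finding a connected, maximal-intensity path through the image, one pixel per row. An illustrative sketch of that formulation (not the authors' code; the one-column step constraint is an assumption):

```python
def best_track(intensity):
    """Find the column path of maximal summed intensity through an
    image, visiting one pixel per row and moving at most one column
    left or right between consecutive rows.

    intensity: list of rows of numbers. Returns the column index chosen
    in each row. Runs in O(rows * cols) time.
    """
    n_rows, n_cols = len(intensity), len(intensity[0])
    score = [list(intensity[0])]
    back = []
    for r in range(1, n_rows):
        prev = score[-1]
        row_score, row_back = [], []
        for c in range(n_cols):
            # best predecessor among the three reachable columns
            best, k = max((prev[k], k) for k in (c - 1, c, c + 1)
                          if 0 <= k < n_cols)
            row_score.append(best + intensity[r][c])
            row_back.append(k)
        score.append(row_score)
        back.append(row_back)
    # backtrack from the best final column
    c = max(range(n_cols), key=lambda j: score[-1][j])
    path = [c]
    for r in range(n_rows - 2, -1, -1):
        c = back[r][path[-1]]
        path.append(c)
    return path[::-1]
```

The global optimum over all connected paths is what lets the procedure bridge faint or noisy stretches of a gel track.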

D. Q. Xu, W. J. Martin, M. K-S. Tso

Supporting Diagnosis and Surgical Planning by Analysis and 3D Display of Volume Images

Presently several 3D imaging methods such as computed tomography (CT) and magnetic resonance (MR) imaging have become standard clinical tools in medical diagnosis as well as in the planning and monitoring of therapies. In normal clinical practice the data are evaluated by visually inspecting suitable 2D slices of the volume. It requires, however, extensive training to visualize 3D phenomena from pure slice representations, so the vast amount of information in volume images is not fully exploited. The need for more powerful analysis methods has become even more pronounced after the development of fast data acquisition methods for MR imaging. An emerging standard of the order of 256³ volume elements (voxels) can no longer be evaluated by traditional visual methods in a reasonable amount of time.

J. Ylä-Jääski, O. Kübler

Backmatter

Weitere Informationen