
About this Book

Machine vision technology is becoming an indispensable part of the manufacturing industry. Biomedical and scientific applications of machine vision and imaging are growing increasingly sophisticated, and new applications continue to emerge. This book gives an overview of ongoing research in machine vision and presents the key issues of scientific and practical interest. A selected board of experts from the US, Japan, and Europe provides insight into some of the latest work on machine vision systems and applications.



1. Active Optical Range Imaging Sensors

Active optical range-imaging systems collect three-dimensional coordinate data from object surfaces. These systems can be useful in a wide variety of automation applications, including shape acquisition, bin picking, assembly, inspection, gauging, robot navigation, medical diagnosis, cartography, and military tasks. The range-imaging sensors in such systems are unique imaging devices in that the image data points explicitly represent scene surface geometry in a sampled form. At least six different optical principles have been used to actively obtain range images: (1) radar, (2) triangulation, (3) moiré, (4) holographic interferometry, (5) lens focusing, and (6) diffraction. The relative capabilities of different sensors and sensing methods are evaluated using a figure of merit based on range accuracy, depth of field, and image acquisition time.
Paul J. Besl
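As an illustrative sketch (not taken from the chapter), the geometry behind single-spot active triangulation can be written in a few lines. Assume a pinhole camera at the origin looking along +z, a laser source offset by a baseline b along the x-axis, and a beam at angle θ to the baseline; intersecting the camera ray with the laser ray gives z = b·f / (x + f·cot θ). All variable names are illustrative:

```python
import math

def triangulation_range(x_pix, focal, baseline, theta):
    """Range z of a laser spot imaged at coordinate x_pix.

    Model: pinhole camera at the origin looking along +z, laser source
    at (baseline, 0), beam at angle theta (radians) to the baseline.
    Intersecting the camera ray with the laser ray yields
    z = b * f / (x + f * cot(theta)).
    """
    return baseline * focal / (x_pix + focal / math.tan(theta))
```

With the beam perpendicular to the baseline (θ = π/2) this reduces to the familiar z = b·f / x, so a spot imaged farther from the image center is closer to the sensor.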

2. 3-D Structures from 2-D Images

In recent years, an important area of research in computer vision has been the recovery or inference of 3-D information about a scene from its 2-D image. In light of the remarkable human ability to infer the 3-D structure of objects from visual images, a great deal of effort has been directed toward understanding each module of the human visual system, and these efforts have yielded mathematical models for most of these modules. In this chapter, we survey a variety of research conducted in this area using different depth cues, including depth from stereopsis; structure from motion parallax and optical flow; shape from shading, texture, and surface contours; and shape from occluding contours. Also included are two nonanthropomorphic approaches, structure from volume intersection and shape from spatial encoding, which are not closely tied to cues used by the human visual system for inferring 3-D structure from 2-D images. We present the strengths and shortcomings of each approach, briefly discuss the possibility of combining different approaches to obtain more robust and reliable results, and suggest directions for future research.
J. K. Aggarwal, C. H. Chien
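As a toy illustration of the stereopsis cue surveyed above (not from the chapter itself), disparity on a rectified scanline pair can be found by sum-of-squared-differences block matching, after which depth follows from z = f·B / d. The function names, window size, and search range are all illustrative choices:

```python
import numpy as np

def match_disparity(left, right, x, win=3, max_d=20):
    """SSD block matching on one rectified scanline pair.

    Returns the disparity d that best aligns the patch at column x of
    the left scanline with a patch at column x - d of the right one.
    """
    patch = left[x:x + win]
    best_d, best_cost = 0, float("inf")
    for d in range(0, min(max_d, x) + 1):
        cand = right[x - d:x - d + win]
        cost = float(np.sum((patch - cand) ** 2))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

def depth(focal, baseline, disparity):
    """Depth of a matched point in a rectified stereo pair: z = f*B/d."""
    return focal * baseline / disparity
```

Real stereo matchers must additionally handle occlusion, ambiguity, and sub-pixel refinement; the survey discusses why such cues are more robust in combination.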

3. 3-D Sensing for Industrial Computer Vision

Computer vision is becoming an important issue in many industrial applications such as automatic inspection of manufactured parts, robotic manipulation, autonomous vehicle guidance, and automatic assembly. Since these applications are performed in a three-dimensional world, it is imperative to gather reliable information on the 3-D structure of the scene. Range-finder cameras are usually used to collect 3-D data. This chapter presents a review of various range-finding techniques. Early designs and more recent developments are discussed along with a critical assessment of their performance. Purely optical techniques are not covered.
Denis Poussart, Denis Laurendeau

4. Methodology for Automatic Image-Based Inspection of Industrial Objects

Many articles have been written dealing with applications of automated inspection for industry. These articles typically describe the application as well as selected algorithms and a specific proposed or implemented inspection system, but do not describe the methodology of how the system was developed. This chapter attempts to describe basic concepts and general requirements for automation of an inspection application and suggests a methodology that might simplify algorithm development efforts for new applications. A systematic, general approach, which was originally developed for automatic real-time X-ray inspection of industrial objects with intricate internal structures, is described. Software-based functions for the automatic inspection are addressed, from geometric modeling of image content, through development of high-level process filters, creation of inspection process plans, and final simulation of system performance. An interactive development and simulation software environment that was designed and implemented to provide the necessary tools for automating an image-based inspection process is also described.
Kristina Hedengren

5. A Design Data-Based Visual Inspection System for Printed Wiring

Until very recently, printed wiring board (PWB) fabrication relied on electrical testing and visual inspection by humans to provide feedback for process control. The low efficiency of visual inspection is often a severe problem for printed wiring board manufacturers. During the past few years, considerable work has been done both in industry and research institutions to solve inspection problems with image-analysis techniques. However, many of these efforts have focused on satisfying the needs of a limited set of users. This makes them vulnerable to changes in fabrication technology and in the geometries of wiring patterns. In the factories of the future, computer-aided design (CAD) data will be the source of all the control information for the fabrication processes. Accordingly, the primary goal of the work presented here has been to devise an approach that can be integrated into CAD data-driven production environments. The main result is the CAD data-based verification of wiring patterns. Other important objectives have been high throughput, low cost, and compact implementation. From the developmental point of view, the goal has been to build a functionally complete experimental system that can be upgraded to run at the speed of image acquisition by adding standard hardware.
Olli Silvén, Ilkka Virtanen, Tapani Westman, Timo Piironen, Matti Pietikäinen

6. Extracting Masks from Optical Images of VLSI Circuits

This paper explores line labeling algorithms for extracting the mask layers from a clean line drawing representing the optical image of a VLSI chip. We start by developing a suitable world model for VLSI images, treating a chip as a multilayer sandwich of translucent layers, each of which is composed of planar and rectilinear strips. The arrangement of these strips is interpreted according to a hierarchical description of the chip in which strips combine to form electrical elements, which in turn form gates, which in turn form even higher-level building blocks. We model the optical image of the VLSI chip as a simple 2-D projection of these layers, in which all strip boundaries are preserved but all depth and layer identification is lost. We assume that this image is a perfect line drawing of the chip. Our vision problem is to reverse the image-formation process in order to reconstruct the original scene and extract the original masks. To recover this information, we show that features and design constraints on the layers translate into a natural labeling scheme for the lines, junctions, and regions defined by the line drawing. We present two different algorithms for extracting masks. The first uses a constraint propagation algorithm, exploiting the natural constraints on the junctions to reduce the set of possible interpretations of the lines. The second algorithm attaches a series of labels to the image, building up path fragments from lines, then linking them into paths, assigning paths to layers, labeling the layers, and assigning insides and outsides within each layer. The key issue is to use as much knowledge as possible about VLSI, together with hints from the operator, to reduce the ambiguity of the line drawing, and thereby reduce the number of sets of masks that could possibly form the image. Performance of the system is shown on a typical CMOS gate.
We conclude by showing how our approach can be used to generalize previous line drawing interpretation methods for projected images of 3-D trihedral blocks worlds with both opaque and transparent surfaces.
Hong Jeong, Bruce R. Musicus
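The constraint propagation idea mentioned in the abstract can be sketched generically (this is Waltz-style filtering, not the chapter's VLSI-specific label catalog). Each junction keeps a set of candidate labelings; whenever two junctions share a line, any candidate whose line label is supported by no candidate at the neighboring junction is discarded, and the process repeats to a fixed point:

```python
def waltz_filter(junction_labels, shared_lines):
    """Waltz-style constraint propagation over junction labelings.

    junction_labels: {junction: list of candidate labelings}, where a
        candidate labeling is a dict mapping line name -> label.
    shared_lines: list of (j1, j2, line) triples for lines shared by
        two junctions; their labels must agree.
    Repeatedly discards any candidate whose label on a shared line has
    no support at the neighboring junction, until nothing changes.
    """
    changed = True
    while changed:
        changed = False
        for j1, j2, line in shared_lines:
            for a, b in ((j1, j2), (j2, j1)):
                support = {c[line] for c in junction_labels[b]}
                keep = [c for c in junction_labels[a] if c[line] in support]
                if len(keep) != len(junction_labels[a]):
                    junction_labels[a] = keep
                    changed = True
    return junction_labels
```

The chapter's contribution is the label catalog itself: deriving, from VLSI design rules, which junction interpretations are physically possible, so that this pruning leaves few mask hypotheses.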

7. Control-Free Low-Level Image Segmentation: Theory, Architecture, and Experimentation

In this chapter, the computer vision problem of segmenting images is addressed. Our approach is based upon the fact that low-level image segmentation is a model-driven operation, applied in such a way that all relevant knowledge gathered in a supervised learning phase is used in parallel during segmentation. Such control-free image segmentation can be achieved by using a pattern-recognition approach. This method uses a relatively large number of local image features and combines them optimally according to the scene knowledge acquired in a training phase through a supervised classification procedure. In this methodology, training is performed by the user, who outlines the image regions belonging to each class. There are two major advantages to this approach. First, the need for expert image-analysis knowledge is minimized, since the user selects what is to be segmented and is not required to determine how this segmentation is to be accomplished. Second, the approach is amenable to parallel pipeline hardware implementation. Extensive experimentation with many different industrial problems demonstrates that this approach is an effective and useful building block for low-level computer vision applications.
W. E. Blanz, J. L. C. Sanz, D. Petkovic
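A toy analogue of this trained pixel-classification scheme (the chapter's actual feature set and classifier are not reproduced here; a minimum-distance classifier stands in for the trained one) computes a few local features per pixel, learns class means from user-outlined pixels, and then labels every pixel by its nearest class mean:

```python
import numpy as np

def local_features(img):
    """Per-pixel feature vector: intensity, 3x3 mean, gradient magnitude."""
    p = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    mean3 = sum(p[i:i + h, j:j + w]
                for i in range(3) for j in range(3)) / 9.0
    gy, gx = np.gradient(img.astype(float))
    return np.stack([img.astype(float), mean3, np.hypot(gx, gy)], axis=-1)

def train_means(feats, labels):
    """Class mean feature vectors from user-outlined pixels (-1 = unlabeled)."""
    return {c: feats[labels == c].mean(axis=0)
            for c in np.unique(labels) if c >= 0}

def classify(feats, means):
    """Assign every pixel to the nearest class mean."""
    classes = sorted(means)
    dists = np.stack([np.linalg.norm(feats - means[c], axis=-1)
                      for c in classes])
    return np.array(classes)[np.argmin(dists, axis=0)]
```

The essential "control-free" property is visible even in this sketch: the user only marks example regions; no segmentation-specific thresholds or rules are hand-tuned.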

8. Computer Vision: Algorithms and Architectures

Ever since computers were used for pattern recognition, image processing, and more generally for vision, a number of special-purpose algorithms and architectures have been developed. As new architectures reached the construction stage, different classes of algorithms emerged in order to produce more effective and efficient solutions to the heavy computational burdens of color and moving images, stereo vision, and real-time performance. The multiprocessor machines (with their variety of interconnection patterns, memory organizations, control structures, and input-output management) stimulated algorithm designers to develop suitable data structures, appropriate primitive operations, adequate sequencing of input image data, concurrent local computations, and so on. This chapter presents a review of some basic image-processing algorithms implemented on different multiprocessor machines. Algorithms for connected component labeling, line detection, and stereo matching are considered on systolic, mesh, tree, and pyramid machines. The analysis and evaluation of these algorithms may, we hope, lead to their cost-effective use in real applications.
Concettina Guerra, Stefano Levialdi
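For reference, the serial baseline against which the chapter's parallel formulations are usually compared is the classic two-pass connected component labeling with union-find (this sketch is illustrative and sequential, not one of the parallel algorithms the chapter analyzes):

```python
import numpy as np

def label_components(binary):
    """Two-pass 4-connected component labeling with union-find."""
    parent = [0]  # parent[i] == i marks a root; label 0 is background
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    labels = np.zeros(binary.shape, dtype=int)
    # first pass: assign provisional labels, record equivalences
    for r in range(binary.shape[0]):
        for c in range(binary.shape[1]):
            if not binary[r, c]:
                continue
            up = labels[r - 1, c] if r > 0 else 0
            left = labels[r, c - 1] if c > 0 else 0
            if up == 0 and left == 0:
                parent.append(len(parent))       # fresh provisional label
                labels[r, c] = len(parent) - 1
            else:
                labels[r, c] = max(up, left)
                if up and left and find(up) != find(left):
                    parent[find(up)] = find(left)  # merge equivalent labels
    # second pass: replace provisional labels by their roots
    for r in range(binary.shape[0]):
        for c in range(binary.shape[1]):
            if labels[r, c]:
                labels[r, c] = find(labels[r, c])
    return labels
```

On mesh or pyramid machines the same equivalence-merging problem is solved by local label exchanges between neighboring processors, which is what makes its parallel complexity interesting.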

9. Image Understanding Architecture and Applications

We describe in this chapter how various image-understanding problems can be mapped onto an architecture and associated implementation specifically designed for such problems. This parallel architecture, which has been funded as part of the DARPA Image Understanding Program, provides a hierarchical, heterogeneous structure to support the wide granularity of processing encountered in the image-understanding domain. In addition, it has an associative capability that allows rapid feedback of global and local summary information to facilitate knowledge-directed processing. We present several applications of this architecture, which span a considerable space of potential use.
David B. Shu, Greg Nash, Charles Weems

10. IDATEN: A Reconfigurable Video-Rate Image Processor

This chapter describes a reconfigurable real-time image-processing system, IDATEN, that can process time-varying images at video rate. The development goal was to devise a system that could process and analyze dynamically moving objects in a scene while also processing images at high speed. The basic design concept of this system is to improve overall processing efficiency, from input to output of image data. We present a reconfigurable pipeline architecture for the image-processing system, in which multiple processing modules are interconnected via a network. Each processing module can execute basic image-processing functions at video rate. The network is based on a Benes multistage switching network, extended so that multiple branching is supported for image processing. Based on this architecture, we have developed a prototype system named IDATEN, a video-rate image processor. The system is made up of a 16 x 16 network unit and a processor unit consisting of 15 high-speed processing modules and video input/output modules. To process a time-varying image, system programmers need only determine the pipeline connections and set parameters for the processing modules, thereby specifying the pertinent connection information in the network unit and selecting the function of each processing module.
Shigeru Sasaki, Toshiyuki Gotoh, Masumi Yoshida
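The programming model described above, in which a configuration merely names which modules are chained together, can be caricatured in software (this is only a loose analogy; it does not model the Benes switching network or the video-rate hardware, and all module names are made up):

```python
def build_pipeline(modules, connections):
    """Compose named processing modules according to a connection list.

    modules: {name: callable} -- the fixed library of processing modules.
    connections: ordered list of module names -- the 'switch settings'.
    Reconfiguring the pipeline means supplying a different list; the
    modules themselves never change.
    """
    def run(image):
        for name in connections:
            image = modules[name](image)
        return image
    return run
```

For example, swapping `["invert", "thresh"]` for `["thresh"]` changes the computation without touching any module, which mirrors how IDATEN is reprogrammed by changing connection information rather than module internals.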

11. Applying Iconic Processing in Machine Vision

Shape-based (iconic) approaches play a vital role in the early stages of a computer vision system. Many computer vision applications require only 2-D information about objects. These applications allow the use of techniques that emphasize pictorial or iconic features. In this chapter we present an iconic approach using morphological image processing as a tool for analyzing images to recover 2-D information. We also briefly discuss a special architecture that allows very fast implementation of morphological operators to recover useful information in diverse applications. We demonstrate the efficacy of this approach by presenting details of an application. We show that the iconic approach offers features that could simplify many tasks in machine vision systems.
Robert M. Lougheed, David McCubbrey, Ramesh Jain
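The morphological operators at the heart of this iconic approach are easy to state concretely. The following NumPy sketch (illustrative only; the chapter's fast implementation runs on special hardware, and this version assumes a 3x3 structuring element) implements binary erosion, dilation, and their composition, opening, which removes features smaller than the structuring element:

```python
import numpy as np

def erode(binary, se=np.ones((3, 3), dtype=bool)):
    """Binary erosion: a pixel survives only if the 3x3 element fits."""
    h, w = binary.shape
    pad = np.pad(binary.astype(bool), 1, constant_values=False)
    out = np.ones_like(binary, dtype=bool)
    for i in range(3):
        for j in range(3):
            if se[i, j]:
                out &= pad[i:i + h, j:j + w]
    return out

def dilate(binary, se=np.ones((3, 3), dtype=bool)):
    """Binary dilation: OR of shifted copies under the 3x3 element."""
    h, w = binary.shape
    pad = np.pad(binary.astype(bool), 1, constant_values=False)
    out = np.zeros_like(binary, dtype=bool)
    for i in range(3):
        for j in range(3):
            if se[i, j]:
                out |= pad[i:i + h, j:j + w]
    return out

def opening(binary, se=np.ones((3, 3), dtype=bool)):
    """Opening = erosion then dilation: suppresses sub-element specks."""
    return dilate(erode(binary, se), se)
```

Opening a binary part image, for instance, deletes isolated noise pixels while restoring the shape of any region large enough to contain the structuring element, which is exactly the kind of 2-D shape filtering the chapter applies in its inspection application.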

