
About this Book

This book summarizes the results of our modeling-from-reality (MFR) project, which took place over the last decade or so. The goal of this project is to develop techniques for turning real objects and environments into geometric and photometric models through computer vision techniques. With such techniques, the time-consuming modeling process currently undertaken by human programmers can be performed (semi-)automatically; as a result, we can drastically shorten the development time of virtual reality systems, reduce their development cost, and widen their application areas. Originally, we began to develop geometric modeling techniques that acquire shape information of objects and environments for object recognition. This effort soon evolved into an independent modeling project, virtual-reality modeling, with the inclusion of photometric modeling aspects that acquire appearance information such as color, texture, and smoothness. Over the course of this development, it became apparent that environmental modeling techniques were necessary when applying our techniques to mixed realities that seamlessly combine generated virtual models with other real or virtual images. The material in this book covers these aspects of development.

Table of Contents

Frontmatter

Geometric Modeling

Frontmatter

Chapter 1. Principal Component Analysis with Missing Data and Its Application to Polyhedral Object Modeling

Abstract
Observation-based object modeling often requires integration of shape descriptions from different views. Current conventional methods merge multiple views sequentially: an accurate description of each surface patch must be known precisely in each view, and the transformation between adjacent views must be recovered accurately. When noisy data and mismatches are present, the recovered transformations become erroneous; moreover, the transformation errors accumulate and propagate along the sequence, resulting in an inaccurate object model. To overcome these problems, we have developed a weighted least-squares (WLS) approach which simultaneously recovers object shape and the transformations among different views without recovering interframe motion as an intermediate step.
We show that object modeling from a sequence of range images is a problem of principal component analysis with missing data (PCAMD), which can be generalized as a WLS minimization problem. An efficient algorithm is devised to solve the problem of PCAMD. After segmenting planar surface regions in each view and tracking them over the image sequence, we construct a measurement matrix of surface normals and a measurement matrix of the normal distances to the origin for all regions visible over the whole sequence of views. These two measurement matrices, which have many missing elements due to noise, occlusion, and mismatching, enable us to formulate multiple-view merging as a combination of two WLS problems. A two-step algorithm is presented to compute planar surface descriptions and the transformations among different views simultaneously. After the surface equations are extracted, spatial connectivity among the surfaces is established to enable the polyhedral object model to be constructed.
Experiments using synthetic data and real range images show that our approach is robust against noise and mismatching and generates accurate polyhedral object models by averaging over all visible surfaces. Two examples are presented to illustrate the reconstruction of polyhedral object models from sequences of real range images.
Harry Shum, Katsushi Ikeuchi, Raj Reddy
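The PCAMD formulation above treats each measurement matrix as a low-rank factorization with per-entry weights that zero out missing observations. A minimal sketch of this idea, using alternating weighted least squares on a synthetic matrix (the alternation scheme and all names here are illustrative, not the chapter's exact two-step algorithm):

```python
import numpy as np

def pca_missing_data(M, W, rank, iters=200):
    """Alternating weighted least squares for the rank-constrained
    factorization M ~ U @ V with binary weights W (0 = missing entry)."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((rank, n))
    for _ in range(iters):
        # Solve each column of V from the rows of M where it is observed.
        for j in range(n):
            w = W[:, j]
            A = U * w[:, None]            # weighted design matrix
            V[:, j] = np.linalg.lstsq(A, w * M[:, j], rcond=None)[0]
        # Symmetrically, solve each row of U.
        for i in range(m):
            w = W[i, :]
            A = V.T * w[:, None]
            U[i, :] = np.linalg.lstsq(A, w * M[i, :], rcond=None)[0]
    return U, V
```

On an exactly rank-3 synthetic matrix with roughly 30% of its entries missing, the alternation typically drives the fit on the observed entries to near zero, which is the behavior that makes view merging robust to gaps in the measurement matrices.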

Chapter 2. Building 3-D Models from Unregistered Range Images

Abstract
In this paper, we describe a new approach for building a three-dimensional model from a set of range images. The approach is able to build models of free-form surfaces obtained from arbitrary viewing directions, with no initial estimate of the relative viewing directions. It is based on building discrete meshes representing the surfaces observed in each range image, mapping each mesh to a spherical image, and computing the transformations between the views by matching the spherical images. The meshes are built using a previously developed iterative fitting algorithm; the spherical images are built by mapping the nodes of the surface meshes to the nodes of a reference mesh on the unit sphere and storing a measure of curvature at every node. We describe the algorithms used for building such models from range images and for matching them. We show results obtained using range images of complex objects.
Kazunori Higuchi, Martial Hebert, Katsushi Ikeuchi
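The spherical-image matching step can be illustrated by a brute-force search over candidate rotations that compares curvature values stored at sampled directions on the unit sphere. This is a simplified stand-in for the chapter's spherical-image matching; the synthetic curvature function and the candidate rotation set are assumptions:

```python
import numpy as np

def rot_z(a):
    """Rotation about the z axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def match_spherical_images(dirs, curv_a, curv_b, rotations):
    """Find the candidate rotation that best aligns two spherical
    curvature images sampled at unit directions `dirs` (N x 3)."""
    best_err, best_R = np.inf, None
    for R in rotations:
        rotated = dirs @ R.T                       # rotate view a's directions
        idx = np.argmax(rotated @ dirs.T, axis=1)  # nearest sphere sample
        err = np.mean((curv_a - curv_b[idx]) ** 2)
        if err < best_err:
            best_err, best_R = err, R
    return best_R, best_err
```

In practice one would search rotations far more cleverly than this exhaustive scan, but the sketch shows why storing a viewpoint-invariant quantity such as curvature on the sphere makes registration possible without an initial estimate.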

Chapter 3. Consensus Surfaces for Modeling 3D Objects from Multiple Range Images

Abstract
In this paper, we present a robust method for creating a triangulated surface mesh from multiple range images. Our method merges a set of range images into a volumetric implicit-surface representation, which is converted to a surface mesh using a variant of the marching-cubes algorithm. Unlike previous techniques based on implicit-surface representations, our method estimates the signed distance to the object surface by finding a consensus of locally coherent observations of the surface. We call this method the consensus-surface algorithm. It effectively eliminates many of the troublesome effects of noise and extraneous surface observations without sacrificing the accuracy of the resulting surface. We utilize octrees to represent the volumetric implicit surface, reducing the computation and memory requirements of the volumetric representation. We present results which demonstrate that our consensus-surface algorithm can construct accurate geometric models from rather noisy input range data.
Mark D. Wheeler, Yoichi Sato, Katsushi Ikeuchi
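The consensus idea can be sketched at a single voxel: keep only the largest cluster of mutually consistent signed-distance estimates from the different views and average it, rejecting isolated outliers. This one-voxel sketch is illustrative; the `tol` and `quorum` parameters are hypothetical, not the chapter's:

```python
import numpy as np

def consensus_signed_distance(estimates, tol=0.01, quorum=3):
    """Combine per-view signed-distance estimates at one voxel.
    Keeps the largest cluster of mutually consistent estimates and
    averages it; returns None when no quorum of views agrees."""
    estimates = np.asarray(estimates, dtype=float)
    best = None
    for e in estimates:
        support = estimates[np.abs(estimates - e) <= tol]
        if len(support) >= quorum and (best is None or len(support) > len(best)):
            best = support
    return None if best is None else float(best.mean())
```

Averaging only over a consensus set is what lets noisy or extraneous observations be discarded without smoothing away the accurate ones.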

Photometric Modeling

Frontmatter

Chapter 4. Object Shape and Reflectance Modeling from Observation

Abstract
An object model for computer graphics applications should contain two aspects of information: shape and reflectance properties of the object. A number of techniques have been developed for modeling object shapes by observing real objects. In contrast, attempts to model reflectance properties of real objects have been rather limited. In most cases, modeled reflectance properties are too simple or too complicated to be used for synthesizing realistic images of the object.
In this paper, we propose a new method for modeling object reflectance properties, as well as object shapes, by observing real objects. First, the object surface shape is reconstructed by merging multiple range images of the object. Using the reconstructed shape and a sequence of color images of the object, parameters of a reflection model are estimated in a robust manner. The key point of the proposed method is that the diffuse and specular reflection components are first separated from the color image sequence, and the reflectance parameters of each component are then estimated separately. This approach enables estimation of the reflectance properties of real objects whose surfaces exhibit specularity as well as diffuse reflection. The recovered object shape and reflectance properties are then used for synthesizing object images with realistic shading effects under arbitrary illumination conditions.
Yoichi Sato, Mark D. Wheeler, Katsushi Ikeuchi
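The separate-then-fit strategy can be illustrated for a single pixel with a Lambertian diffuse term plus a simplified Gaussian specular lobe (a stand-in for the reflection model used in such work); the off-peak threshold and the parameter grid below are assumptions for the sketch:

```python
import numpy as np

def fit_reflectance(cos_i, alpha, intensity, sigmas=np.linspace(0.05, 0.5, 46)):
    """Fit I = kd*cos_i + ks*exp(-alpha^2 / (2*sigma^2)) to one pixel's
    intensity sequence, where cos_i is the cosine of the incidence angle
    and alpha the angle from the specular direction (radians).
    Diffuse first: use samples far from the specular peak."""
    off_peak = alpha > np.deg2rad(30)   # assume no specularity out here
    kd = np.sum(intensity[off_peak] * cos_i[off_peak]) / np.sum(cos_i[off_peak] ** 2)
    residual = intensity - kd * cos_i   # what remains is the specular lobe
    best = (np.inf, None, None)
    for s in sigmas:                    # grid over lobe width, linear in ks
        basis = np.exp(-alpha ** 2 / (2 * s ** 2))
        ks = max(0.0, np.sum(residual * basis) / np.sum(basis ** 2))
        err = np.sum((residual - ks * basis) ** 2)
        if err < best[0]:
            best = (err, ks, s)
    return kd, best[1], best[2]
```

Estimating the diffuse albedo away from the highlight and then fitting the specular lobe to the residual mirrors the separation step that makes the full estimation robust.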

Chapter 5. Eigen-Texture Method: Appearance Compression Based on 3D Model

Abstract
Image-based and model-based methods are two representative rendering methods for generating virtual images of objects from their real images. Extensive research on both has been carried out in the computer vision and computer graphics communities. However, both methods still have several drawbacks when applied to mixed reality, where such virtual images are integrated with real background images. To overcome these difficulties, we propose a new method, which we refer to as the Eigen-Texture method. The proposed method samples appearances of a real object under various illumination and viewing conditions, and compresses them in the 2D coordinate system defined on the 3D model surface. The 3D model is generated from a sequence of range images. The Eigen-Texture method is practical because it does not require any detailed reflectance analysis of the object surface, and it benefits greatly from the accurate 3D geometric models. This paper describes the method and reports on its implementation.
Ko Nishino, Yoichi Sato, Katsushi Ikeuchi
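The compression at the heart of the Eigen-Texture method is, in essence, principal component analysis over appearance samples aligned on the model surface. A minimal sketch with flattened textures and an SVD (the storage layout and variable names are illustrative):

```python
import numpy as np

def eigen_texture_compress(textures, k):
    """Compress a stack of appearance samples (one flattened texture per
    illumination/viewing condition) by projecting onto the top-k
    eigen-textures obtained from an SVD."""
    T = np.asarray(textures, dtype=float)   # (num_conditions, num_texels)
    mean = T.mean(axis=0)
    U, S, Vt = np.linalg.svd(T - mean, full_matrices=False)
    basis = Vt[:k]                          # k eigen-textures
    coeffs = (T - mean) @ basis.T           # per-condition coefficients
    return mean, basis, coeffs

def eigen_texture_decompress(mean, basis, coeffs):
    """Reconstruct every condition's texture from the compact form."""
    return mean + coeffs @ basis
```

Because the textures are sampled on the 3D model's surface coordinates, they are geometrically aligned across conditions, which is why a small number of eigen-textures suffices.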

Environmental Modeling

Frontmatter

Chapter 6. Acquiring a Radiance Distribution to Superimpose Virtual Objects onto a Real Scene

Abstract
This paper describes a new method for superimposing virtual objects with correct shading onto an image of a real scene. Unlike previously proposed methods, our method can measure the radiance distribution of a real scene automatically and use it to superimpose virtual objects appropriately onto that scene. First, a geometric model of the scene is constructed from a pair of omni-directional images by using an omni-directional stereo algorithm. Then the radiance of the scene is computed from a sequence of omni-directional images taken with different shutter speeds and mapped onto the constructed geometric model. The radiance distribution mapped onto the geometric model is used for rendering virtual objects superimposed onto the scene image. As a result, even for a complex radiance distribution, our method can superimpose virtual objects with convincing shading and shadows cast onto the real scene. We successfully tested the proposed method by using real images to show its effectiveness.
Imari Sato, Yoichi Sato, Katsushi Ikeuchi

Chapter 7. Illumination Distribution from Shadows

Abstract
The image irradiance of a three-dimensional object is known to be a function of three components: the distribution of light sources, and the shape and reflectance of the object surface. In the past, recovering the shape and reflectance of an object surface from recorded image brightness has been intensively investigated. On the other hand, there has been little progress in recovering illumination from knowledge of the shape and reflectance of a real object. In this paper, we propose a new method for estimating the illumination distribution of a real scene from the image brightness observed on a real object surface in that scene. More specifically, we recover the illumination distribution of the scene from the radiance distribution inside shadows cast by an object of known shape onto another object surface of known shape and reflectance. By using the occlusion information of the incoming light, we are able to reliably estimate the illumination distribution of a real scene, even in a complex illumination environment.
Imari Sato, Yoichi Sato, Katsushi Ikeuchi
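Estimating illumination from shadows can be illustrated as a linear system: discretize the illumination into a set of directions, and let each surface point's brightness constrain the radiances of the directions it can actually see. The toy geometry below (Lambertian surface, two light directions, hand-built visibility) is purely illustrative:

```python
import numpy as np

def estimate_illumination(normals, visibility, brightness, light_dirs):
    """Recover radiances of discretized light directions from image
    brightness observed on a Lambertian surface, some of it in shadow.
    Row p of the system: b_p = sum_i x_i * vis[p, i] * max(0, n_p . l_i)."""
    cos_terms = np.clip(normals @ light_dirs.T, 0.0, None)   # (P, I)
    A = visibility * cos_terms       # occlusion zeroes out blocked lights
    x, *_ = np.linalg.lstsq(A, brightness, rcond=None)
    return x
```

Without the shadows, every point would see every light and the system's columns would be nearly proportional; the per-point occlusion pattern is exactly what makes the individual source radiances recoverable.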

Epilogue: MFR to Digitized Great Buddha

Frontmatter

Chapter 8. The Great Buddha Project: Modeling Cultural Heritage Through Observation

Abstract
This chapter presents an overview of our efforts in modeling cultural heritage through observation. These efforts span three aspects: how to create geometric models of cultural heritage; how to create photometric models of cultural heritage; and how to integrate such virtual heritages with real scenes. For geometric model creation, we have developed a two-step method: simultaneous alignment and volumetric view merging. For photometric model creation, we have developed the eigen-texture rendering methods, which automatically create photorealistic models by observing the real objects. For the integration of virtual objects with real scenes, we have developed a method that renders virtual objects based on real illumination distribution. We have applied these component techniques to constructing a multimedia model of the Great Buddha of Kamakura, and demonstrated their effectiveness.
Daisuke Miyazaki, Takeshi Oishi, Taku Nishikawa, Ryusuke Sagawa, Ko Nishino, Takashi Tomomatsu, Yutaka Takase, Katsushi Ikeuchi

Backmatter
