
About this book

27 contributions treat the state of the art in Monte Carlo and finite element methods for radiosity and radiance. Further special topics are the use of image maps to capture light throughout space, complexity, volumetric stochastic descriptions, innovative approaches to sampling and approximation, and system architecture. The Rendering Workshop proceedings are obligatory literature for all scientists working in the rendering field, but they are also very valuable for practitioners implementing state-of-the-art rendering systems, and they will certainly influence scientific progress in this field.

Table of contents

Frontmatter

The Light Volume: an aid to rendering complex environments

Abstract
The appearance of an object depends on both its shape and how it interacts with light. Alter either of these and its appearance will change. Neglect either of these and realism will be compromised. Computer graphics has generated images ranging from nightmarish worlds of plastic, steel, and glass to gently-lit, perfect interiors that have obviously never been inhabited. The reassuring realism lacking in these extremes requires the simulation of both complex geometry and complex light transport.
Ken Chiu, Kurt Zimmerman, Peter Shirley

Light-Driven Global Illumination with a Wavelet Representation of Light Transport

Abstract
We describe the basis of the work we currently have under way to implement a new rendering algorithm called light-driven global illumination. This algorithm is a departure from conventional ray-tracing and radiosity renderers and addresses a number of deficiencies intrinsic to those approaches.
Robert R. Lewis, Alain Fournier

Global Illumination using Photon Maps

Abstract
This paper presents a two pass global illumination method based on the concept of photon maps. It represents a significant improvement of a previously described approach both with respect to speed, accuracy and versatility. In the first pass two photon maps are created by emitting packets of energy (photons) from the light sources and storing these as they hit surfaces within the scene. We use one high resolution caustics photon map to render caustics that are visualized directly and one low resolution photon map that is used during the rendering step. The scene is rendered using a distribution ray tracing algorithm optimized by using the information in the photon maps. Shadow photons are used to render shadows more efficiently and the directional information in the photon map is used to generate optimized sampling directions and to limit the recursion in the distribution ray tracer by providing an estimate of the radiance on all surfaces with the exception of specular and highly glossy surfaces.
The results presented demonstrate global illumination in scenes containing procedural objects and surfaces with diffuse and glossy reflection models. The implementation is also compared with the Radiance program.
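The core of the second pass described above is a density estimate over the stored photons. The following is a minimal sketch of such a radiance estimate, assuming a diffuse surface and a simple list-based nearest-neighbor search; the function name and the flat photon list are illustrative stand-ins, not the paper's actual kd-tree-based implementation.

```python
import math

def radiance_estimate(photons, x, k=3):
    """Estimate reflected radiance at point x from the k nearest photons.

    Each photon is (position, power). The estimate divides the summed
    photon power by the area of the disc enclosing the k nearest photons,
    then folds in a diffuse BRDF of 1/pi (an assumption of this sketch).
    """
    # Sort photons by squared distance to the query point (a real
    # implementation would use a kd-tree instead of a full sort).
    by_dist = sorted(photons,
                     key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    nearest = by_dist[:k]
    # Squared radius of the disc: distance to the k-th nearest photon.
    r2 = sum((a - b) ** 2 for a, b in zip(nearest[-1][0], x))
    if r2 == 0.0:
        return 0.0
    total_power = sum(power for _, power in nearest)
    return total_power / (math.pi * r2) / math.pi
```

The estimate gets more accurate as the photon map density (and k) grows, which is why the paper uses a high-resolution map only for caustics, where the density estimate is visualized directly.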
Henrik Wann Jensen

Geometry Caching for Ray-Tracing Displacement Maps

Abstract
We present a technique for rendering displacement-mapped geometry in a ray-tracing renderer. Displacement mapping is an important technique for adding detail to surface geometry in rendering systems. It allows complex geometric variation to be added to simpler geometry, without the cost in geometric complexity of completely describing the nuances of the geometry at modeling time and with the advantage that the detail can be added adaptively at rendering time.
The cost of displacement mapping is geometric complexity. Renderers that provide it must be able to efficiently render scenes that have effectively millions of geometric primitives. Scan-line renderers process primitives one at a time, so this complexity doesn’t tax them, but traditional ray-tracing algorithms require random access to the entire scene database, so any part of the scene geometry may need to be available at any point during rendering. If the displaced geometry is fully instantiated in memory, it is straightforward to intersect rays with it, but displacement mapping has not yet been practical in ray-tracers due to the memory cost of holding this much geometry.
We introduce the use of a geometry cache in order to handle the large amounts of geometry created by displacement mapping. By caching a subset of the geometry created and rendering the image in a coherent manner, we are able to take advantage of the fact that the rays spawned by traditional ray-tracing algorithms are spatially coherent. Using our algorithm, we have efficiently rendered highly complex scenes while using a limited amount of memory.
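The caching idea above can be sketched as a least-recently-used cache keyed by patch: because coherent rays tend to re-request the same displaced patches, most lookups hit and only a bounded working set is ever resident. This is a minimal illustration under those assumptions; the class and the `tessellate` callback are hypothetical names, not the authors' interface.

```python
from collections import OrderedDict

class GeometryCache:
    """An LRU cache for tessellated geometry patches.

    `tessellate` stands in for the expensive step of evaluating the
    displacement map and triangulating a patch; it is only invoked
    on a cache miss.
    """
    def __init__(self, capacity, tessellate):
        self.capacity = capacity
        self.tessellate = tessellate
        self.cache = OrderedDict()
        self.hits = 0
        self.misses = 0

    def get(self, patch_id):
        if patch_id in self.cache:
            self.cache.move_to_end(patch_id)   # mark as recently used
            self.hits += 1
            return self.cache[patch_id]
        self.misses += 1
        geom = self.tessellate(patch_id)       # expensive: displace + triangulate
        self.cache[patch_id] = geom
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return geom
```

The memory bound is the point: the full displaced scene never needs to be instantiated, only the `capacity` most recently touched patches.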
Matt Pharr, Pat Hanrahan

Cost Prediction in Ray Tracing

Abstract
Although it is generally known that ray tracing is ‘time consuming’ yet rewarding with respect to image quality, there have been few attempts to predict the rendering time for a given model in advance. This paper focuses on the development of such a technique.
The cost of ray tracing using adaptive spatial subdivisions has been studied by analysing the probability that a ray intersects an object. Per spatial subdivision cell the surface area relative to the cell size provides a measure for this probability. This cost function is refined by taking into account possible overlap when multiple objects inhabit the same cell. A further refinement is applied by computing the average tree depth of the spatial subdivision and by assuming that each ray will on average traverse the spatial subdivision at this depth. To evaluate and validate our method we applied it to some complex models and compared the results with the actual rendering cost.
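The surface-area argument above can be illustrated with a small sketch: per cell, the ratio of contained object surface area to cell surface area serves as the probability that a traversing ray hits an object, and expected intersection tests are summed over the traversed cells. The function names and the exact cost model are assumptions of this illustration, not the paper's refined estimator (which also accounts for object overlap and average tree depth).

```python
def intersection_probability(object_area, cell_area):
    """Estimate the probability that a ray passing through a cell hits an
    object inside it, as the ratio of surface areas, clamped to 1."""
    return min(object_area / cell_area, 1.0)

def predicted_cost(cells, cost_per_test=1.0):
    """Sum expected intersection tests over the cells a ray traverses.

    `cells` is a list of (object_surface_area, cell_surface_area, n_objects)
    tuples; each object in a cell contributes one potential test, weighted
    by the area-ratio hit probability.
    """
    total = 0.0
    for obj_area, cell_area, n_objects in cells:
        p = intersection_probability(obj_area, cell_area)
        total += cost_per_test * n_objects * p
    return total
```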
Erik Reinhard, Arjan J. F. Kok, Frederik W. Jansen

Towards an Open Rendering Kernel for Image Synthesis

Abstract
In order to use realistic image synthesis successfully in research and development as well as in commercial products, two important prerequisites have to be fulfilled. First of all, good, accurate, robust, and fast algorithms are required. Impressive progress has been made in this respect during the last years, which has also been documented in this workshop. The second step is the creation of a suitable and general software architecture, that offers an environment into which these rendering algorithms can be integrated.
In this paper, we develop an architecture that consists of a small but flexible rendering kernel. This kernel provides a general framework for rendering algorithms and defines suitable interfaces for specific aspects of rendering, like reflection (BRDF) or emission. Algorithms for a certain aspect of the rendering process can then be plugged into the kernel in order to implement a particular rendering strategy. The benefits of this approach are demonstrated with several applications.
Philipp Slusallek, Hans-Peter Seidel

Fast Rendering of Subdivision Surfaces

Abstract
Subdivision surfaces provide a curved surface representation that is useful in a number of applications, including modeling surfaces of arbitrary topological type, fitting scattered data, and geometric compression and automatic level-of-detail generation using wavelets. Subdivision surfaces also provide an attractive representation for fast rendering, since they can directly represent complex surfaces of arbitrary topology.
We present a method for subdivision surface triangulation that is fast, uses minimum memory, and is simpler in structure than a naive rendering method based on direct subdivision. These features make the algorithm amenable to implementation on both general purpose CPUs and dedicated geometry engine processors, allowing high rendering performance on appropriately equipped graphics hardware.
Kari Pulli, Mark Segal

High-Fidelity Radiosity Rendering at Interactive Rates

Abstract
Existing radiosity rendering algorithms achieve interactivity or high fidelity, but not both. Most radiosity renderers optimize interactivity by converting to a polygonal representation and Gouraud interpolating shading samples, thus sacrificing visual fidelity. A few renderers achieve improved fidelity by performing a per-pixel irradiance “gather” operation, much as in ray-tracing. This approach does not achieve interactive frame rates on existing hardware.
This paper bridges the gap, by describing a data structure and algorithm which enable interactive, high-fidelity rendering of radiosity solutions. Our algorithm “factors” the radiosity rendering computation into two components: an offline phase, in which a per-surface representation of irradiance is constructed; and an online phase, in which this representation is rapidly queried, in parallel, to produce a radiosity value at each pixel. The key components of the offline phase are a heuristic discontinuity ranking algorithm, which identifies the strongest discontinuities, and a hybrid quadtree-mesh data structure which prevents combinatorial interactions between most discontinuities. The online phase involves a novel use of perspective-correct texture-mapping hardware to produce nonlinear, analytic shading effects.
Stephen Hardt, Seth Teller

Non-symmetric Scattering in Light Transport Algorithms

Abstract
Non-symmetric scattering is far more common in computer graphics than is generally recognized, and can occur even when the underlying scattering model is physically correct. For example, we show that non-symmetry occurs whenever light is refracted, and also whenever shading normals are used (e.g. due to interpolation of normals in a triangle mesh, or bump mapping [5]).
We examine the implications of non-symmetric scattering for light transport theory. We extend the work of Arvo et al. [4] into a complete framework for light, importance, and particle transport with non-symmetric kernels. We show that physically valid scattering models are not always symmetric, and derive the condition for an arbitrary model to obey Helmholtz reciprocity. By rewriting the transport operators in terms of optical invariants, we obtain a new framework where symmetry and reciprocity are the same.
We also consider the practical consequences for global illumination algorithms. The problem is that many implementations indirectly assume symmetry, by using the same scattering rules for light and importance, or particles and viewing rays. This can lead to incorrect results for physically valid models. It can also cause different rendering algorithms to converge to different solutions (whether the model is physically valid or not), and it can cause shading artifacts. If the non-symmetry is recognized and handled correctly, these problems can easily be avoided.
Eric Veach

Rendering Participating Media with Bidirectional Path Tracing

Abstract
In this paper we show how bidirectional path tracing can be extended to handle global illumination effects due to participating media. The resulting image-based algorithm is computationally expensive but more versatile than previous solutions. It correctly handles multiple scattering in non-homogeneous, anisotropic media in complex illumination situations. We illustrate its specific advantages by means of examples.
Eric P. Lafortune, Yves D. Willems

Quasi-Monte Carlo Radiosity

Abstract
The problem of global illumination in computer graphics is described by a second kind Fredholm integral equation. Due to the complexity of this equation, Monte Carlo methods provide an interesting tool for approximating solutions to this transport equation. For the case of the radiosity equation, we present the deterministic method of quasi-random walks. This method very efficiently uses low discrepancy sequences for integrating the Neumann series and consistently outperforms stochastic techniques. The method of quasi-random walks is also applicable to transport problems in settings other than computer graphics.
Alexander Keller

Importance-driven Stochastic Ray Radiosity

Abstract
The stochastic ray radiosity method [10] is a radiosity method in which no form-factors are computed explicitly. Because of this, the method is very well-suited to compute the radiance distribution in very complex diffuse environments. In this paper we present an extension of this method which will provide a significant reduction of computational cost in cases where accurate knowledge of the illumination is needed in only a small part of the scene. This is accomplished by computing a second quantity, called importance, during the radiance computation. Importance is then used to modulate the patch sampling probabilities in order to obtain lower variance in relevant regions of the scene.
Attila Neumann, László Neumann, Philippe Bekaert, Yves D. Willems, Werner Purgathofer

Efficiently Representing the Radiosity Kernel through Learning

Abstract
A novel method for approximating the radiosity kernel by a discrete set of basis functions is presented. The algorithm selects samples from the geometry definition and iteratively creates a functional model instantiated by a set of Gaussian basis functions. These are supported over the whole environment; thus, surfaces are not considered separately. Together with the implicit clustering algorithm provided by the applied learning scheme, the algorithm accounts ideally for coherence in the global kernel function.
On one hand, this leads to a very sparse representation of the kernel. On the other hand, by avoiding the creation of initial basis functions for separate pairs of surfaces, the method is capable of calculating even huge geometries to a desired accuracy with a proportional amount of computing resources.
Recent results from the field of artificial neural networks (the Growing Cell Structures) are extended for the presented learning algorithm. This work is done in Flatland, but there are no methodical constraints which bound the application to two dimensions.
Christian-A. Bohn

Accurate Error Bounds for Multi-Resolution Visibility

Abstract
We propose a general error-driven algorithm to compute form factors in complex scenes equipped with a suitable cluster hierarchy. This opens the way for the efficient approximation of form factors in a controlled manner, with guaranteed error bounds at every stage of the calculation. In particular we discuss the issues of bounding the error in the form factor approximation using average cluster transmittance, combining subcluster calculations with proper treatment of visibility correlation, and the calculation and storage of the necessary information in the hierarchy. We present results from a 2D implementation that demonstrate the validity of the approach; the form factor approximations are effectively bounded by the user-supplied threshold.
Cyril Soler, François Sillion

Proximity Radiosity: Exploiting Coherence to Accelerate Form Factor Computations

Abstract
This paper introduces a new acceleration principle for the zonal method. The core concept resides in exploiting the coherence that exists between form factors of two close voxels (or patches). First, we dissociate the radiometric part of the form factors from the geometrical part; the remaining geometrical expressions, including volume integrals, can then be developed with the Green-Ostrogradski theorem in terms of double surface integrals. These new expressions are less complex and allow us to divide computational time by a factor of about 4. Second, we show how all voxels in the neighborhood of a given “reference” voxel have form factors (with another patch or voxel) that are weighted sums of the reference voxel form factor and a series of associated integrals of generalized orthogonal polynomials. Consequently, calculation time decreases while control of the generated error is maintained.
D. Arquès, S. Michelin

Error Control for Radiosity

Abstract
In this paper, we address the problem of computing the radiance in a diffuse environment up to a beforehand specified accuracy. We consider this problem in the context of wavelet radiosity, a hierarchical radiosity method with higher order radiance approximations. We first present an analysis of the discretisation error, which is the error introduced by projecting the problem onto a finite set of basis functions. This analysis leads to an algorithm for a-posteriori error estimation and a criterion and strategy for hierarchical refinement, generalising previous work for constant and bilinear approximations. We propose a new hierarchical radiosity algorithm in which a user controls the accuracy of the solution by specifying directly the maximum allowable absolute radiance error rather than an interaction error threshold.
Philippe Bekaert, Yves D. Willems

Hierarchical Rendering of Trees from Precompiled Multi-Layer Z-Buffers

Abstract
Chen and Williams [2] show how precomputed z-buffer images from different fixed viewing positions can be reprojected to produce an image for a new viewpoint. Here, images are precomputed for twigs and branches at various levels in the hierarchical structure of a tree, and adaptively combined depending on the position of the new viewpoint. The precomputed images contain multiple z levels to avoid missing pixels in the reconstruction, subpixel masks for antialiasing, and colors and normals for shading after reprojection.
Nelson Max

A Temporal Image-Based Approach to Motion Reconstruction for Globally Illuminated Animated Environments

Abstract
This paper presents an approach to motion sampling and reconstruction for globally illuminated animated environments (under fixed viewing conditions) based on sparse spatio-temporal scene sampling, a resolution-independent temporal file format, and a Delaunay triangulation pixel reconstruction method. Motion usually achieved by rendering complete images of a scene at a high frame rate (i.e. flipbook style frame-based animation) can be adequately reconstructed using many fewer samples (often on the order of that required to generate a single, complete, high quality frame) from the sparse image data stored in bounded slices of our temporal file. The scene is rendered using a ray tracing algorithm modified to randomly sample over space — the image plane (x, y), and time (t), yielding (x, y, t) samples that are stored in our spatio-temporal images. Reconstruction of object motion, reconstructing a picture of the scene at a desired time, is performed by projecting the (x, y, t) samples onto the desired temporal plane with the appropriate weighting, constructing the 2D Delaunay triangulation of the sample points, and Gouraud (or Phong) shading the resulting triangles. Both first and higher order visual effects, illumination and visibility, are handled as the information is included in the individual samples. Silhouette edges and other discontinuities are more difficult to track but can be addressed with a combination of triangle filtering and image postprocessing.
Jeffry Nimeroff

The Multi-Frame Lighting Method: A Monte Carlo Based Solution for Radiosity in Dynamic Environments

Abstract
In this paper we present a method for radiosity computation in dynamic scenes. The algorithm is intended for animations in which the motion of the objects is known in advance. Radiosity is computed using a Monte Carlo approach. Instead of computing each frame separately, we propose to compute the lighting simulation of a sequence of frames in a unique process. This is achieved by the merging of the whole sequence of frames into a single scene, so each moving object is replicated as many times as frames.
We present results which show the performance of the proposed method. This is especially interesting for sequences with a significant number of frames. We also present an analysis of the algorithm complexity. An important feature of the algorithm is that the accuracy of the image in each frame is the same as the one we would obtain by computing each frame separately.
Gonzalo Besuievsky, Mateu Sbert

Wavelet Based Texture Resampling

Abstract
The integral equation arising from space variant 2-D texture resampling is reformulated through wavelet analysis. We transform the standard convolution integral in texture space into an inner product over sparse representations for both the texture and the warped filter function. This yields an algorithm that operates in constant time in the area of the domain of convolution, and that is sensitive to the frequency content of both the filter and the texture. The reformulation admits further acceleration for space-invariant resampling.
Silviu Borac, Eugene Fiume

Modeling Textiles as Three Dimensional Textures

Abstract
The modeling and rendering of textile materials has already been investigated in detail in the computer graphics literature. Modeling the 3D microstructure of textiles as volume data sets allows a more realistic image generation. Textiles, e.g., knitwear, are typically characterized by highly repetitive structures. These repetitive features enable time- and memory-efficient rendering through object instancing even when using the ray-tracing technique. This paper concentrates on the rendering of more general textiles whose macrostructures are defined by free-form surfaces. We show how object instancing and tracing curved rays through object space efficiently produce realistic-looking images. Concerning realism, our approach of approximating the 3D microstructure of textiles with volume data sets compares favorably with previous techniques which used the mapping of 2D textures onto textile surfaces. This is especially true when a close-up inspection of synthetic textiles is required.
Eduard Gröller, René T. Rau, Wolfgang Straßer

Synthesizing Verdant Landscapes using Volumetric Textures

Abstract
Volumetric textures are able to represent complex repetitive data such as foliage, fur and forests by storing one sample of geometry in a volumetric texel to be mapped onto a surface. This volume consists of samples of densities and reflectances stored in voxels. The texel can be prefiltered similarly to the mip-mapping algorithm, giving efficient rendering in ray-tracing with low aliasing, using a single ray per pixel.
Our general purpose is to extend the volumetric texture method in order to provide a convenient and efficient tool for modeling, animating and rendering highly complex scenes in ray-tracing. In this paper, we show how to convert usual 3D models into texels, and how to render texels mapped onto any mesh type. We illustrate our method with verdant landscapes such as forests and lawns.
Fabrice Neyret

Ray Tracing in Non-Constant Media

Abstract
In this paper, we explore the theory of optical deformations due to continuous variations of the refractive index of the air, and present several efficient implementations. We introduce the basic equations from geometrical optics, outlining a general method of solution. Further, we model the fluctuations of the index of refraction both as a superposition of blobs and as a stochastic function. Using a well known perturbation technique from geometrical optics, we compute linear approximations to the deformed rays. We employ this approximation and the blob representation to efficiently ray trace non linear rays through multiple environments. In addition we present a stochastic model for the ray deviations derived from an empirical model of air turbulence. We use this stochastic model to precompute deformation maps.
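The underlying ray equation from geometrical optics, d/ds(n dx/ds) = ∇n, can be integrated numerically to bend rays through a varying index field. The sketch below uses simple forward Euler steps with user-supplied index and gradient callables; this is an assumption-laden illustration of the governing equation, not the paper's perturbation-based linear approximation.

```python
def trace_curved_ray(origin, direction, grad_n, n, step, n_steps):
    """Integrate the ray equation d/ds(n * dx/ds) = grad(n) with forward
    Euler steps.

    `n(p)` and `grad_n(p)` are callables returning the refractive index
    and its gradient at a 3D point (hypothetical interfaces for this
    sketch). Returns the final position and unit direction.
    """
    x = list(origin)
    d = list(direction)
    for _ in range(n_steps):
        g = grad_n(x)
        ni = n(x)
        # Deflect the direction toward the index gradient, then renormalize
        # so the ray keeps unit speed along its arc length.
        d = [di + step * gi / ni for di, gi in zip(d, g)]
        norm = sum(di * di for di in d) ** 0.5
        d = [di / norm for di in d]
        x = [xi + step * di for xi, di in zip(x, d)]
    return x, d
```

With a constant index the gradient vanishes and the integration reduces to a straight ray, which is a useful sanity check for any implementation.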
Jos Stam, Eric Languénou

Hierarchical Back-Face Computation

Abstract
We present a sub-linear algorithm to compute the set of back-facing polygons in a polyhedral model. The algorithm partitions the model into hierarchical clusters based on the orientations and positions of the polygons. As a pre-processing step, the algorithm constructs spatial decompositions with respect to each cluster. For a sequence of back-face computations, the algorithm exploits the coherence in view-point movement to efficiently determine if it is in front of or behind a cluster. Due to coherence, the algorithm’s performance is linear in the number of clusters on average. We have applied this algorithm to speed up the rendering of polyhedral models. On average, we are able to cull almost half the polygons. The algorithm accounts for 5 – 10% of the total CPU time per frame on an SGI Indigo2 Extreme. The overall frame rate is improved by 40 – 75% as compared to the standard back-face culling implemented in hardware.
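A standard way to decide back-facing status for a whole cluster at once is a cone-of-normals test: if the angle between the view direction and the cluster's normal-cone axis, plus the cone half-angle, stays below 90 degrees, every polygon in the cluster faces away. The sketch below illustrates that conservative test for an orthographic view direction; it is an assumed formulation, not necessarily the exact test the paper's hierarchical algorithm uses.

```python
import math

def angle_between(u, v):
    """Angle in radians between two 3D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def cluster_backfacing(cone_axis, half_angle, view_dir):
    """Conservatively decide whether every polygon in a cluster is
    back-facing for an orthographic viewer looking along `view_dir`.

    A normal n is back-facing when dot(n, view_dir) > 0. The normal that
    deviates most from the cone axis is off by `half_angle`, so the whole
    cluster is back-facing when angle(axis, view_dir) + half_angle < pi/2.
    """
    return angle_between(cone_axis, view_dir) + half_angle < math.pi / 2
```

When the test is inconclusive, a hierarchical algorithm recurses into subclusters, which is where the coherence between successive viewpoints pays off.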
Subodh Kumar, Dinesh Manocha, William Garrett, Ming Lin

The 3D Visibility Complex: A New Approach to the Problems of Accurate Visibility

Abstract
Visibility computations are central in any computer graphics application, and they are also among the most expensive. The most common way to reduce this expense is the use of approximate approaches using spatial subdivision. More recently, analytic approaches efficiently encoding visibility have appeared for 2D (the visibility complex) and for certain limited cases in 3D (aspect graph, discontinuity meshes). In this paper we propose a new way of describing and studying the visibility of 3D space by a dual space of the 3D lines, such that all the visibility events are described. A new data structure is defined, called the 3D visibility complex, which encapsulates all visibility events. This structure is global and complete since it encodes all visibility relations in 3D, and it is spatially coherent, allowing efficient visibility queries such as view extraction, aspect graph, discontinuity mesh, or form factor computation. A construction algorithm and suitable data structures are sketched.
Frédo Durand, George Drettakis, Claude Puech

Conservative Radiance Interpolants for Ray Tracing

Abstract
Classical ray-tracing algorithms compute radiance returning to the eye along one or more sample rays through each pixel of an image. The output of a ray-tracing algorithm, although potentially photorealistic, is a two-dimensional quantity — an image array of radiance values — and is not directly useful from any viewpoint other than the one for which it was computed.
This paper makes several contributions. First, it directly incorporates the notion of radiometric error into classical ray-tracing, by lazy construction of conservative radiance interpolants in ray space. For any relative error tolerance ε, we show how to construct interpolants which return radiance values within ε of those that would be computed by classical (e.g., Whitted) ray-tracing. The second contribution of the paper is an explication of the four sources of aliasing inherent in classical ray tracing — termed gaps, blockers, funnels, and peaks — and an adaptive subdivision algorithm for identifying ray space regions guaranteed to be free of these phenomena. Finally, we describe a novel data structure that exploits object-space coherence in the radiance function to accelerate not only the generation of single images, but of image sequences arising from a smoothly varying sequence of eyepoints. We describe a preliminary implementation incorporating each of these ideas.
Seth Teller, Kavita Bala, Julie Dorsey

Accurate Visibility and Meshing Calculations for Hierarchical Radiosity

Abstract
Precise quality control for hierarchical lighting simulations is still a hard problem, due in part to the difficulty of analysing the source of error and to the close interactions between different components of the algorithm. In this paper we attempt to address this issue by examining two of the most central components of these algorithms: visibility computation and the mesh. We first present an investigation tool in the form of a new hierarchical algorithm: this algorithmic extension encapsulates exact visibility information with respect to the light source in the form of the backprojection data structure, and allows the use of discontinuity meshes in the solution hierarchy. This tool permits us to study separately the effects of visibility and meshing error on image quality, computational expense as well as solution convergence. Initial experimental results are presented by comparing standard quadtree-based hierarchical radiosity with point-sampling visibility to the approaches incorporating backprojections, discontinuity meshes or both.
George Drettakis, François Sillion

Backmatter
