
About this book

This book contains the final versions of the proceedings of the fifth EUROGRAPHICS Workshop on Rendering, held in Darmstadt, Germany, from 13 to 15 June 1994. With around 80 participants and 30 papers, the event continued the successful tradition of the previous ones, establishing it as the most important meeting worldwide for people working in this area. After more than 20 years of research, rendering remains a partially unsolved, interesting, and challenging topic.

This year 71 (!) papers were submitted from Europe, North America, and Asia. The average quality in terms of technical merit was impressive, showing that substantial work on this topic is being carried out by several groups around the world. In general we all gained the impression that the technical quality of the contributions is by now comparable to that of a specialised high-end, full-scale conference. All papers were reviewed by at least three members of the program committee. In addition, several colleagues helped us manage the reviewing process in time, either by providing additional reviews or by assisting the members of the committee.

We were very happy to welcome eminent invited speakers. Holly Rushmeier is internationally well known for her excellent work in all areas of rendering and gave us a review of modelling and rendering participating media with emphasis on scientific visualization. In addition, Peter Shirley presented a survey of future trends in rendering techniques.

Table of contents

Frontmatter

Viewing Solutions

Frontmatter

Results of the 1994 Survey on Image Synthesis

Abstract
At the 1992 Rendering Workshop in Bristol, Michael Cohen presented the results of what he called a very unscientific survey of image synthesis researchers. This survey stimulated a great deal of discussion, so we ran a second survey (again distributed by email), and collected twenty-two responses from researchers with an average of ten years experience in image synthesis. The results of this survey, along with the results of Cohen’s survey are given here.
Peter Shirley, Georgios Sakas

Quantization Techniques for Visualization of High Dynamic Range Pictures

Abstract
This paper proposes several techniques that make it possible to display high dynamic range pictures (created by a global illumination rendering program, for instance) on a low dynamic range device. The methods described here are based on some basic knowledge about human vision and are intended to provide “realistic looking” images on the visualization device, even with critical lighting conditions in the rendered scene. The main features of the new techniques are speed (only a handful of floating point operations per pixel are needed) and simplicity (only one single parameter, which can be evaluated empirically, has to be provided by the user). The goal of this paper is not to propose a psychovisual or neurological model for subjective perception, but only to describe some experimental results and propose some possible research directions.
Christophe Schlick
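As a concrete illustration of the "handful of floating point operations per pixel", the sketch below applies the kind of one-parameter rational mapping commonly associated with this paper; the exact functional form, the parameter name p, and the NumPy formulation are illustrative assumptions rather than the paper's reference method.

```python
import numpy as np

def rational_quantize(luminance, p, l_max):
    """Map HDR luminance to [0, 1] with a single user-chosen parameter p.

    A minimal sketch of a one-parameter rational mapping; the exact form
    used in the paper may differ.
    """
    l = np.asarray(luminance, dtype=float)
    return (p * l) / (p * l - l + l_max)

# Hypothetical usage: 'hdr' is a floating point image, p is tuned empirically.
hdr = np.random.rand(256, 256) * 1.0e4
ldr = rational_quantize(hdr, p=50.0, l_max=hdr.max())
display = (255.0 * np.clip(ldr, 0.0, 1.0)).astype(np.uint8)
```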

Rendering, Complexity, and Perception

Abstract
Computer graphics researchers have spent great effort in the last ten years on strengthening the physical foundations of computer graphics. We now need to step back and examine the nature of scenes our end-users wish to render, and what qualities these rendered images must possess. Humans view these images, not machines, and this crucial distinction must guide the research process, lest we become an increasingly irrelevant enclave, divorced from the users we profess to serve.
Kenneth Chiu, Peter Shirley

Participating Media

Frontmatter

Rendering Participating Media: Problems and Solutions from Application Areas

Abstract
Physically accurate rendering of radiatively participating media is an extremely demanding computational task. In this paper, current and potential applications requiring such renderings are reviewed. Some ideas for a practical rendering system, based on insights from application areas, are presented.
Holly Rushmeier

A Model for Fluorescence and Phosphorescence

Abstract
If you are indoors and reading this document on paper, then the page may be lit by a fluorescent light bulb. The gases inside the bulb absorb high-energy electrons, and then fluoresce, or re-radiate that absorbed energy at a different frequency. The particular gases in common fluorescent bulbs are chosen to be efficient at re-radiating this energy in the visible wavelengths. If you are reading this document on-line, then you’re probably reading it on a cathode-ray tube (CRT). The face of the CRT is lined with phosphors, which absorb the high-energy electrons directed at them, and gradually release that energy over time in the visible band. The two phenomena of fluorescence and phosphorescence are not as common as simple reflection and transmission, but do have an important part to play in the complete description of macroscopic physical behavior that should be modeled by image synthesis programs. This paper presents a mathematical model for global energy balancing which includes these phenomena.
Andrew S. Glassner
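To make the idea of cross-wavelength energy transfer concrete, here is a generic sketch in which fluorescence is represented as a re-radiation matrix mapping incident energy in one wavelength band to outgoing energy in another. The band count and matrix values are invented for illustration, and the paper's actual model (which also covers the time dependence of phosphorescence) is not claimed to take exactly this form.

```python
import numpy as np

# Generic sketch of fluorescence as a wavelength re-radiation matrix: incident
# energy in band i can be re-emitted into band j, so reflection becomes a
# matrix-vector product instead of a per-band scalar multiply.
n_bands = 4
incident = np.array([0.1, 0.2, 0.9, 0.3])          # energy per wavelength band

reradiation = np.array([                            # entry [j, i]: band i -> band j
    [0.5, 0.0, 0.0, 0.0],
    [0.1, 0.4, 0.0, 0.0],                           # some short-wavelength energy
    [0.0, 0.2, 0.6, 0.0],                           #   re-emitted at longer wavelengths
    [0.0, 0.0, 0.1, 0.7],
])
outgoing = reradiation @ incident
print(outgoing)
```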

Global Illumination in Presence of Participating Media with General Properties

Abstract
In recent years a number of techniques have been devised by computer graphics researchers for rendering participating media like haze, fog, clouds, dust… Due to the complexity of the underlying physics, strong assumptions (isotropy, single scattering, no spontaneous emission…) are often made to make these techniques tractable. In this paper, we propose a method which lifts these assumptions. It relies on the discrete ordinates method and allows light transfer simulation in an environment made up of diffuse objects and participating media.
Eric Languénou, Kadi Bouatouch, Michael Chelle

Efficient Light Propagation for Multiple Anisotropic Volume Scattering

Abstract
Realistic rendering of participating media like clouds requires multiple anisotropic light scattering. This paper presents a propagation approximation for light scattered into M direction bins, which reduces the “ray effect” problem in the traditional “discrete ordinates” method. For a regular grid volume of n³ elements, it takes O(Mn³ log n + M²n³) time and O(Mn³ + M²) space.
Nelson Max

Clustering and Volume Scattering for Hierarchical Radiosity Calculations

Abstract
This paper introduces a new approach to hierarchical radiosity computation, making it practical for the simulation of energy exchanges in very complex environments. Results indicate that the new formulation allows the effective simulation of environments of significant complexity, containing several thousands of surfaces or volumes.
In this new technique a hierarchy is constructed in a bottom-up fashion, in effect grouping together nearby surfaces for the purpose of evaluating their energy exchanges with distant objects. This clustering approach eliminates the need for an O(n²) initial linking stage, by establishing connections between abstract entities that behave like volumes.
A general hierarchical transfer algorithm for volumes is first derived and its adaptation to clustered environments is then discussed. In particular the mechanisms required to efficiently simulate the radiant interactions between surfaces and clusters are reviewed.
François Sillion

Ray Tracing and Monte Carlo

Frontmatter

Adaptive Splatting for Specular to Diffuse Light Transport

Abstract
We present an extension to existing techniques to provide for more accurate resolution of specular to diffuse transfer within a global illumination framework. In particular this new model is adaptive with a view to capturing high frequency phenomena such as caustic curves in sharp detail and yet allowing for low frequency detail without compromising noise levels and aliasing artefacts. A 2-pass ray-tracing algorithm is used, with an adaptive light-pass followed by a standard eye-pass. During the light-pass, rays are traced from the light sources (essentially sampling the wavefront radiating from the sources), each carrying a fraction of the total power per wavelength of the source. The interactions of these rays with diffuse surfaces are recorded in illumination maps, as first proposed by Arvo [Ar86]. The key to reconstructing the intensity gradients due to this light-pass lies in the construction of the illumination maps. We record the power carried by the ray as a splat of energy flux, deposited on the surface using a Gaussian distribution kernel. The kernel of the splat is adaptively scaled according to an estimation of the wavefront divergence or convergence, thus resolving sharp intensity gradients in regions of high wavefront convergence and smooth gradients in areas of divergence. The second-pass eye-trace modulates the surface radiance according to the power stored in the illumination map in order to include the specular to diffuse light modelled during the first pass.
Steven Collins
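The following sketch shows the core bookkeeping step described in the abstract: depositing a ray's power into a 2D illumination map as a Gaussian splat whose width would be driven by the estimated wavefront convergence. The map resolution, parameterisation, and the name deposit_splat are assumptions for illustration; the paper's data structures may differ.

```python
import numpy as np

def deposit_splat(illum_map, hit_uv, power, sigma):
    """Deposit a ray's power into a 2D illumination map as a Gaussian splat.

    'sigma' would be derived from an estimate of wavefront convergence or
    divergence at the hit point (small sigma for converging wavefronts such
    as caustics, large sigma for diverging ones).
    """
    h, w = illum_map.shape
    u, v = hit_uv
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (xs - u * (w - 1)) ** 2 + (ys - v * (h - 1)) ** 2
    kernel = np.exp(-d2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()            # conserve the deposited power
    illum_map += power * kernel

illum = np.zeros((128, 128))
deposit_splat(illum, hit_uv=(0.3, 0.7), power=1.0, sigma=2.5)   # a sharp splat
```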

Rayvolution: An Evolutionary Ray Tracing Algorithm

Abstract
In computer graphics the accurate simulation of radiant light transfer is essential to realistic rendering. In general, for every elementary surface area within a scene the total irradiance incident from the entire half-space has to be accounted for. In a mathematical formulation this leads to a complex system of integral equations, referred to as the Rendering Equation [Kaj86]. Since usually it is not possible to find a closed form analytical solution, the Rendering Equation is solved approximately by defining a probabilistic model of the radiation exchange process and applying Monte Carlo methods.
Brigitta Lange, Markus Beyer

Bidirectional Estimators for Light Transport

Abstract
Most of the research on the global illumination problem in computer graphics has been concentrated on finite-element (radiosity) techniques. Monte Carlo methods are an intriguing alternative which are attractive for their ability to handle very general scene descriptions without the need for meshing. In this paper we study techniques for reducing the sampling noise inherent in pure Monte Carlo approaches to global illumination. Every light energy transport path from a light source to the eye can be generated in a number of different ways, according to how we partition the path into an initial portion traced from a light source, and a final portion traced from the eye. Each partitioning gives us a different unbiased estimator, but some partitionings give estimators with much lower variance than others. We give examples of this phenomenon and describe its significance. We also present work in progress on the problem of combining these multiple estimators to achieve near-optimal variance, with the goal of producing images with less noise for a given number of samples.
Eric Veach, Leonidas Guibas
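The combination problem mentioned at the end of the abstract can be illustrated with a generic statistical fact: given two independent unbiased estimators of the same quantity, weighting them in inverse proportion to their variances gives a combination with variance no worse than the better of the two. The toy distributions below merely stand in for two different light/eye partitionings, and the variances are estimated from the samples; this is not the estimator combination developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two unbiased estimators of the same quantity (true value 1.0), with very
# different variances -- stand-ins for two different path partitionings.
samples_a = rng.exponential(scale=1.0, size=10_000)        # variance ~ 1
samples_b = 1.0 + rng.normal(scale=0.1, size=10_000)       # variance ~ 0.01

# Variance-weighted combination: weights inversely proportional to the
# (estimated) variances favour the low-variance estimator.
var_a, var_b = samples_a.var(ddof=1), samples_b.var(ddof=1)
w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
w_b = 1.0 - w_a
combined = w_a * samples_a.mean() + w_b * samples_b.mean()
print(combined)   # close to 1.0, dominated by the low-variance estimator
```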

The Ambient Term as a Variance Reducing Technique for Monte Carlo Ray Tracing

Abstract
Ray tracing algorithms often approximate indirect diffuse lighting by means of an ambient lighting term. In this paper we show how a similar term can be used as a variance reducing technique for stochastic ray tracing. In a theoretical derivation we prove that the technique is mathematically correct. Test results demonstrate its usefulness and effectiveness in practice.
Eric P. Lafortune, Yves D. Willems
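The variance reduction can be pictured with a generic control-variate/Russian-roulette sketch: if a recursive radiance sample X is only evaluated with survival probability q, returning an ambient-like guess a instead of zero on absorption, and a + (X − a)/q on survival, keeps the estimator unbiased while cutting variance whenever a approximates the expected value well. This is an illustrative construction, not necessarily the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(2)

def plain_rr(x, q):
    """Standard Russian roulette: X/q with probability q, else 0."""
    return x / q if rng.random() < q else 0.0

def ambient_rr(x, q, a):
    """Russian roulette with an ambient-like guess 'a' as control variate."""
    return a + (x - a) / q if rng.random() < q else a

q, a = 0.3, 1.0
samples = 1.0 + 0.2 * rng.standard_normal(50_000)   # stand-in indirect radiance

plain   = np.array([plain_rr(x, q)      for x in samples])
ambient = np.array([ambient_rr(x, q, a) for x in samples])
print(plain.mean(), plain.var())      # unbiased, high variance
print(ambient.mean(), ambient.var())  # unbiased, much lower variance
```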

An Importance Driven Monte-Carlo Solution to the Global Illumination Problem

Abstract
We propose a method for solving the global illumination problem with no restrictive assumptions concerning the behaviour of light either on surface or volume objects in the scene. Surface objects are defined either by facets or parametric patches and volume objects are defined by voxel grids which define arbitrary density distributions in a discrete three-dimensional space. The rendering technique is a Monte-Carlo ray-tracing based radiosity which unifies the processing of objects in a scene, whether they are surfaces or volumes. The main characteristics of our technique are the use of separated Markov chains to prevent the explosion of the number of rays and an optimal importance sampling to speed up the convergence.
Philippe Blasi, Bertrand Le Saëc, Christophe Schlick

Importance-driven Monte Carlo Light Tracing

Abstract
One possible method for solving the global illumination problem is to use a particle model, where particles perform a random walk through the scene to be rendered. The proposed algorithm uses this particle model, but computes the illumination of the pixels in a direct manner. In order to optimise the sampling process, adaptive probability density functions are used. The result is that particles are shot to those regions with a high potential capability. This algorithm has some advantages, such as the absence of a mesh and the possibility to handle all types of light-surface interactions with the same method.
Philip Dutré, Yves D. Willems
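The benefit of adapting the sampling density can be seen in a generic one-dimensional importance-sampling sketch, where a pdf that concentrates samples where the integrand is large plays the role of shooting particles towards regions of high potential. The integrand and pdf below are arbitrary choices for illustration, not the ones used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):
    return x ** 4                        # most of the mass lies near x = 1

n = 20_000

# Uniform sampling of the integral of f over [0, 1] (true value 0.2).
uniform = f(rng.random(n)).mean()

# Importance sampling with pdf p(x) = 3x^2 (inverse-CDF sampling: x = u^(1/3)).
x = rng.random(n) ** (1.0 / 3.0)
importance = (f(x) / (3.0 * x ** 2)).mean()

print(uniform, importance)               # both ~ 0.2, the latter with lower variance
```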

Radiosity

Frontmatter

A New Stochastic Radiosity Method for Highly Complex Scenes

Abstract
This paper presents a linear-time radiosity algorithm for very complex environments. The new algorithm is based on a progressive refinement iteration process with stochastic instead of deterministic convergence. Each iteration step simulates one interreflection step for all patches similar to Jacobi iteration, but with an approximate interreflection matrix rather than with the exact one. The stochastic shooting method is described, which computes such approximate interreflection matrices at very low computational cost. The efficiency of the algorithm can be further increased by several variance reduction methods.
László Neumann, Martin Feda, Manfred Kopp, Werner Purgathofer
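For readers unfamiliar with the deterministic baseline, the toy code below performs the Jacobi-style simultaneous interreflection step that the abstract refers to, with the exact form-factor matrix given explicitly. The paper's contribution is to replace that matrix by a cheap stochastic approximation at every iteration, which is not reproduced here; the three-patch scene is invented for illustration.

```python
import numpy as np

def jacobi_radiosity(F, rho, E, iterations=50):
    """Toy Jacobi-style iteration for the system B = E + diag(rho) @ F @ B."""
    B = E.copy()
    for _ in range(iterations):
        B = E + rho * (F @ B)            # one simultaneous interreflection step
    return B

# Hypothetical 3-patch closed environment (equal areas, rows of F sum to 1).
F   = np.array([[0.0, 0.5, 0.5],
                [0.5, 0.0, 0.5],
                [0.5, 0.5, 0.0]])
rho = np.array([0.7, 0.5, 0.3])
E   = np.array([1.0, 0.0, 0.0])
print(jacobi_radiosity(F, rho, E))
```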

Constructing Solvers for Radiosity Equation Systems

Abstract
In computer graphics, many approaches to determining the illumination in an environment of light emitters and Lambertian reflectors employ a piecewise constant approximation to the radiosity function, leading to a linear system with n² coefficients (the form factors) given n surface elements over which the function is assumed constant. Approaches of this type have collectively been called classical radiosity [Go1]. In the past ten years, several iterative techniques for solving such systems have been developed. While initially Gauss-Seidel iteration was used to solve such systems [Gol], more recent work employs relaxation techniques which can improve the efficiency of each step in the iteration, so that only O(n) computations are needed per iteration rather than the O(n²) required by Gauss-Seidel and similar techniques [Co1, Fe1, Go2, Sh1, Xu1]. This efficiency has made classical radiosity practical for moderately complex environments. In this paper, we provide a common mathematical treatment of these more efficient iterative solvers, which we henceforth call linear iteration (LI) solvers. Note that linear in this context refers to the complexity of a single iteration in the solution of a classical radiosity system with O(n²) coefficients and not to any reduction in the overall number of interactions required to achieve a given error bound, as is done in hierarchical [Ha1] and wavelet [Go1] methods.
Wei Xu, Donald S. Fussell
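As one concrete example of a linear-iteration solver of the kind surveyed here, the sketch below implements a progressive-refinement ("shooting") step: each iteration distributes the largest unshot power and touches only a single row of form factors, i.e. O(n) work per step. The three-patch scene, the variable names, and the fixed step count are illustrative assumptions.

```python
import numpy as np

def progressive_refinement(F, rho, A, E, steps=200):
    """Shooting-style linear iteration: O(n) work per step.

    F[i, j] is the form factor from patch i to patch j, rho the reflectances,
    A the patch areas, E the emitted radiosities.
    """
    B = E.copy()                       # current radiosity estimate
    dB = E.copy()                      # unshot radiosity
    for _ in range(steps):
        i = int(np.argmax(dB * A))     # patch with the most unshot power
        gain = rho * F[i, :] * A[i] / A * dB[i]   # rho_j * F_ji * dB_i, via reciprocity
        B += gain
        dB += gain
        dB[i] = 0.0
    return B

# Hypothetical 3-patch environment consistent with reciprocity A_i F_ij = A_j F_ji.
A   = np.array([1.0, 2.0, 1.0])
F   = np.array([[0.0, 0.6, 0.4],
                [0.3, 0.0, 0.3],
                [0.4, 0.6, 0.0]])
rho = np.array([0.6, 0.4, 0.5])
E   = np.array([1.0, 0.0, 0.0])
print(progressive_refinement(F, rho, A, E))
```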

New Efficient Algorithms with Positive Definite Radiosity Matrix

Abstract
New efficient algorithms are presented for solving diffuse radiosity problems, incorporating the advantages of progressive radiosity. The derivation of the algorithms and of their convergence relies on a new form of the radiosity equations with a positive definite matrix. The methods have been tested with a new error formula, the (area-weighted) average relative error. The form with a symmetric, positive definite matrix penetrates deeper into the core of the radiosity problem than the earlier radiosity or power-variable formulations. At the same time it makes it possible to apply several algorithms well known from numerical analysis. In general, the positive definite form leads to algorithms that are mathematically tractable and of proven convergence and effectiveness.
László Neumann, Robert F. Tobler
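For context, one standard route to a symmetric positive definite form of the radiosity equations (assumed here for illustration, not necessarily the paper's exact formulation) starts from B_i − ρ_i Σ_j F_ij B_j = E_i and scales each equation by A_i/ρ_i:

(A_i/ρ_i) B_i − Σ_j A_i F_ij B_j = (A_i/ρ_i) E_i.

By the reciprocity relation A_i F_ij = A_j F_ji the off-diagonal coefficients are symmetric, and with F_ii = 0, Σ_j F_ij ≤ 1 and ρ_i < 1 the matrix is strictly diagonally dominant with a positive diagonal, hence positive definite. This is what opens the door to solvers from numerical analysis, such as conjugate gradients, that require a symmetric positive definite system.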

Adaptive Mesh Refinement with Discontinuities for the Radiosity Method

Abstract
The radiosity method simulates the interaction of light between diffuse reflecting surfaces, thereby accurately predicting global illumination effects. One of the main problems of the original algorithm is the inability to represent correctly the shadows cast onto surfaces. Adaptive subdivision techniques were tried but the results are not good enough for general purposes. The conceptually different discontinuity meshing algorithm produces exact pictures of shadow boundaries but is computationally expensive. The newly presented adaptive discontinuity meshing method combines the speed of adaptive subdivision with the quality of the discontinuity meshing method.
W. Stürzlinger

Optimizing Discontinuity Meshing Radiosity

Abstract
Discontinuity meshing radiosity is no longer new to the Computer Graphics community. When trying to closely model, with patches, the true radiance function over some surface, it is now well established that errors are inevitable if patch boundaries take no account of discontinuities in the true radiance function [16, 13, 5].
Neil Gatenby, W. T. Hewitt

Simplifying the Representation of Radiance from Multiple Emitters

Abstract
In recent work radiance function properties and discontinuity meshing have been used to construct high quality interpolants representing radiance. Such approaches do not consider the combined effect of multiple sources and thus perform unnecessary discontinuity meshing calculations and often construct interpolants with too fine a subdivision. In this research we present an extended structured sampling algorithm that treats scenes with shadows and multiple sources. We then introduce an algorithm which simplifies the mesh based on the interaction of multiple sources. For unoccluded regions an a posteriori simplification technique is used. For regions in shadow, we first compute the maximal umbral/penumbral and penumbral/light boundaries. This construction facilitates the determination of whether full discontinuity meshing is required or whether it can be avoided due to the illumination from another source. An estimate of the error caused by potential simplification is used for this decision. Thus full discontinuity mesh calculation is only incurred in regions where it is necessary, resulting in a more compact representation of radiance.
George Drettakis

Wavelets

Frontmatter

Haar Wavelet: A Solution to Global Illumination With General Surface Properties

Abstract
This paper presents a method for solving the problem of global illumination for general environments, using projection of the radiance function on a set of orthonormal basis functions. Wavelet scaling functions form this basis set. The highlights of the paper are: it (i) points out the difficulty associated with the straightforward projection of the integral operator associated with the radiance equation and proposes a method for overcoming this difficulty, (ii) gives the data structure and algorithm for illumination solution in environments containing diffuse and non-diffuse reflecting surfaces, and (iii) proposes the use of bi-orthogonal wavelets for the radiance function reconstruction at the time of rendering. Actual implementation has been carried out using the Haar wavelet basis. The main reason for using the Haar basis is that it makes the projection of the integral operator, as well as the computation of the inner product of the integral kernel with its basis functions, much simpler. However, the algorithm and data structures presented are not restricted to the Haar basis alone.
Sumanta N. Pattanaik, Kadi Bouatouch
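As background for the projections discussed above, the sketch below performs a plain 1D Haar analysis into averages (scaling coefficients) and differences (detail coefficients). The papers in this session project the radiance function and the transport operator onto such bases; that step is not reproduced here.

```python
import numpy as np

def haar_decompose(signal):
    """One-dimensional Haar analysis: repeated pairwise averages and differences."""
    c = np.asarray(signal, dtype=float)
    details = []
    while len(c) > 1:
        avg  = (c[0::2] + c[1::2]) / 2.0
        diff = (c[0::2] - c[1::2]) / 2.0
        details.append(diff)
        c = avg
    return c[0], details[::-1]            # coarsest average + detail levels, coarse to fine

mean, details = haar_decompose([9.0, 7.0, 3.0, 5.0])
print(mean, details)                      # 6.0, [array([2.]), array([1., -1.])]
```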

Wavelet Radiance

Abstract
In this paper, we show how wavelet analysis can be used to provide an efficient solution method for global illumination with glossy and diffuse reflections. Wavelets are used to sparsely represent radiance distribution functions and the transport operator. In contrast to previous wavelet methods (for radiosity), our algorithm transports light directly among wavelets, and eliminates the pushing and pulling procedures.
The framework we describe supports curved surfaces and spatially-varying anisotropic BRDFs. We use importance to make the global illumination problem tractable for complex scenes, and a final gathering step to improve the visual quality of the solution.
Per Christensen, Eric Stollnitz, David Salesin, Tony DeRose

Wavelet Methods for Radiance Computations

Abstract
This paper describes a new algorithm to compute radiance in a synthetic environment. Motivated by the success of wavelet methods for radiosity computations we have applied multiwavelet bases to the computation of radiance in the presence of glossy reflectors. We have implemented this algorithm and report on some experiments performed with it. In particular we show that the convergence properties of basis functions with 1–4 vanishing moments are in accordance with theoretical predictions. As in the case of wavelet radiosity we find higher order bases to have advantages. However, the cost scaling due to the higher dimensionality of the problem is such that the higher order bases only become competitive for very high precision requirements. In practice we rarely go beyond piecewise linear functions.
Peter Schröder, Pat Hanrahan

Dynamic Solutions and Walkthroughs

Frontmatter

Efficient Radiosity in Dynamic Environments

Abstract
A method of determining radiosity in an environment containing moving objects is described. This method uses the hierarchical techniques of Hanrahan et al. to obtain a static solution. Hanrahan’s techniques efficiently create a hierarchical meshing of the environment’s geometry, and create links from element to element based on the magnitude of the form-factor between the elements. These ideas extend naturally to a dynamic environment, as only three atomic editing operations are required to update a hierarchy when an object moves: a link can be moved up the hierarchy, moved down the hierarchy, or occluded. Our algorithm exploits these simple editing processes to maintain the hierarchy, and then uses an iterative technique to solve the resulting linear system. The approach is extremely efficient, requiring little work between frames.
David Forsyth, Chien Yang, Kim Teo

Fast Radiosity Repropagation For Interactive Virtual Environments Using A Shadow-Form-Factor-List

Abstract
The radiosity method has become a very important tool for enabling photorealistic rendering in virtual reality systems. Based on the geometric description of a scene, the view-independent illumination is computed in a preprocess and colors are assigned to each patch vertex. These virtual environments look very impressive, but any interaction with the scene geometry or its materials results in a time-consuming recalculation of the radiosity simulation. This leads to the common phrase:
Radiosity scenes are like museums, you may look around, but do not touch anything!
In this paper, a new algorithm is presented to overcome this problem. The algorithm is based on the fact that most of the information needed for the radiosity repropagation after any scene modification was already computed during the radiosity preprocess. Therefore, the radiosity method is extended by storing shadow and form-factor information in an efficient data structure, the so-called shadow-form-factor-list (SFFL). We describe how the SFFL can be used to minimize the recomputation time after any scene modification. Moreover, very important information about scene coherence is included within the SFFL. Thus, an efficient traversal of the SFFL helps to repropagate radiosity only in those parts of the scene that are affected by the model change.
Stefan Müller, Frank Schöffel

An Efficient Progressive Refinement Strategy for Hierarchical Radiosity

Abstract
A detailed study of the performance of hierarchical radiosity is presented, which confirms that visibility computation is the most expensive operation. Based on the analysis of the algorithm’s behavior, two improvements are suggested. Lazy evaluation of the top-level links suppresses most of the initial linking cost, and is consistent with a progressive refinement strategy. In addition, the reduction of the number of links for mutually visible areas is made possible by the use of an improved subdivision criterion. Results show that initial linking can be avoided and the number of links significantly reduced without noticeable image degradation, making useful images available more quickly.
Nicolas Holzschuch, François Sillion, George Drettakis

Efficient Re-rendering of Naturally Illuminated Environments

Abstract
We present a method for the efficient re-rendering of a scene under a directional illuminant at an arbitrary orientation. We take advantage of the linearity of the rendering operator with respect to illumination for a fixed scene and camera geometry. Re-rendering is accomplished via linear combination of a set of pre-rendered “basis” images. The theory of steerable functions provides the machinery to derive an appropriate set of basis images. We demonstrate the technique on both simple and complex scenes illuminated by an approximation to natural skylight. We show re-rendering simulations under conditions of varying sun position and cloudiness.
Jeffry S. Nimeroff, Eero Simoncelli, Julie Dorsey
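Since the abstract states that re-rendering is a linear combination of pre-rendered basis images, a minimal sketch of that combination step is given below. The basis images are random stand-ins and the weights are simply given; deriving the weights from steerable functions for a new sun position is the subject of the paper and is not shown.

```python
import numpy as np

def rerender(basis_images, weights):
    """Combine pre-rendered basis images linearly to synthesize a new image."""
    basis = np.stack(basis_images, axis=0)            # shape (k, h, w, 3)
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1, 1)
    return np.clip((w * basis).sum(axis=0), 0.0, None)

# Hypothetical pre-rendered basis images for three basis light directions.
basis_images = [np.random.rand(64, 64, 3) for _ in range(3)]
new_image = rerender(basis_images, weights=[0.2, 0.7, 0.1])
```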

Texture Mapping as an Alternative for Meshing During Walkthrough Animation

Abstract
Mesh-based radiosity calculation requires many mesh elements to reconstruct subtle details of shading. On the other hand, the excessive number of polygons slows down rendering, impairing the sensation of interactivity when a user-navigated walkthrough in a complex environment is performed. When the distribution of illumination over a scene is to be quickly rendered, the Gouraud-shaded polygon becomes an inefficient drawing primitive, which can be successfully replaced by texture mapping.
This paper proposes an application of texture mapping to reconstruct the shading of surfaces in the scene regions where distribution of illumination is extremely complex. Mesh-based Gouraud shading is used to visualize the remaining surfaces, exhibiting simple illumination, usually constituting the majority of the scene. As a result, many mesh elements can be eliminated, compared to traditional approaches, and image display can be done significantly faster. Also, the improvement of shading quality is possible by recalculating illumination and storing the results as textures in scene regions where a mesh-based approach produces shading artifacts. Experiments performed have shown that application of this idea pays off on high-end workstations, when hardware supported texture mapping is available.
Karol Myszkowski, Tosiyasu L. Kunii

BRUSH as a Walkthrough System for Architectural Models

Abstract
Brush provides an interactive environment for the real-time visualization and inspection of very large mechanical and architectural CAD databases. It supports immersive and non-immersive virtual reality walkthrough applications (for example, when validating or demonstrating to a customer an architectural concept) and detailed design reviews of complex mechanical assemblies such as engines, plants, airplanes, or ships.
Brush achieves interactive response times by selecting from multiple-resolution representations for each object, computed automatically by simplifying the original data. Simplified models reduce the cost of displaying small details that do not significantly affect the image, allowing navigation through models comprising hundreds of thousands of triangles.
A natural gesture-driven interface allows mouse or space-ball control of the camera for intuitive walkthrough in architectural scenes. Simple facilities for editing and sequencing camera positions along with automatic animation of camera trajectories between key-frames enable the construction, demonstration, and archive of pre-programmed walkthrough sequences.
Bengt-Olaf Schneider, Paul Borrel, Jai Menon, Josh Mittleman, Jarek Rossignac

Environment Mapping for Efficient Sampling of the Diffuse Interreflection

Abstract
Environment mapping is a technique to compute specular reflections for a glossy object. Originally proposed as a cheap alternative to ray tracing, the method is well suited to be incorporated in a hybrid rendering algorithm. In this paper environment mapping is introduced to reduce the amount of computation involved in tracing secondary rays. During rendering, instead of tracing the secondary rays all through the scene, values are taken from the maps for the rays that would otherwise hit distant objects. This way the quality of the image is retained while providing a cheap alternative to stochastic brute force sampling methods. An additional advantage is that due to the local representation of the entire 3D scene in a map, parallelising this algorithm should result in a good speed-up and high efficiency.
Erik Reinhard, Lucas U. Tijssen, Frederik W. Jansen
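The map consulted for distant secondary rays can be as simple as a latitude-longitude table indexed by ray direction. The sketch below shows such a lookup under that assumption; the mapping convention, nearest-neighbour sampling, and the helper name env_lookup are illustrative choices, not the paper's.

```python
import numpy as np

def env_lookup(env_map, direction):
    """Return the radiance stored for a unit direction in a lat-long map."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    h, w, _ = env_map.shape
    u = (np.arctan2(d[1], d[0]) / (2.0 * np.pi) + 0.5) * (w - 1)   # azimuth -> column
    v = (np.arccos(np.clip(d[2], -1.0, 1.0)) / np.pi) * (h - 1)     # polar angle -> row
    return env_map[int(round(v)), int(round(u))]

env = np.random.rand(64, 128, 3)                 # hypothetical prefiltered environment map
radiance = env_lookup(env, direction=(0.3, -0.5, 0.8))
```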

Backmatter
