
About this Book

This book contains the proceedings of the International Workshop on Volume Graphics 2001 (VG'01), which took place on June 21 and 22 at Stony Brook, New York. This year's event was the second in the series, following a successful premiere in Swansea, Wales, in March 1999, and was co-sponsored by the IEEE Technical Committee on Visualization and Graphics (TC-VG) as well as EUROGRAPHICS. The Volume Graphics Workshop is held biennially and was created to provide a forum for the exploration and advancement of volume-based techniques beyond the scope of volume visualization alone. It brings together researchers and practitioners from academia and industry, from many parts of the world. Volume graphics is in the process of evolving into a general graphics technology, and the papers included in these proceedings are a testament to the wide spectrum of unique applications and solutions that volumetric representations are able to offer.



Volume Rendering


Refraction in Discrete Ray Tracing

Refraction is an important graphics feature for synthesizing photorealistic images. This paper presents a study on refraction rendering in volume graphics using discrete ray tracing. We describe four basic approaches for determining the relative refractive index at each sampling position, and examine their relative merits. We discuss two types of anomalies associated with some approaches and three different mechanisms for controlling sampling intervals. We apply the refraction rendering to objects with uniform as well as non-uniform optical density, and objects built upon mathematical scalar fields as well as volumetric datasets. In particular, the study shows that the normal estimation plays a critical role in synthesizing aesthetically pleasing images. The paper also includes the results of various tests, and our quantitative and qualitative analysis.
David Rodgman, Min Chen

Data Level Comparison of Surface Classification and Gradient Filters

Surface classification and shading of three-dimensional scalar data sets are important enhancements for direct volume rendering (DVR). However, unlike conventional surface rendering, DVR algorithms do not have explicit geometry to shade, making it difficult to perform comparisons. Furthermore, DVR, in general, involves a complex set of parameters whose effects on a rendered image are hard to compare. Previous work uses analytical estimations of the quality of interpolation, gradient filters, and classification. Typical comparisons are done using side-by-side examination of rendered images. However, non-linear processes are involved in the rendering pipeline and thus the comparison becomes particularly difficult. In this paper, we present a data level methodology for analyzing volume surface classification and gradient filters. Users can more effectively estimate algorithmic differences by using intermediate information. Based on this methodology, we also present new data level metrics and examples of analyzing differences in surface classification and gradient calculation. Please refer to for a full color version of this paper.
Kwansik Kim, Craig M. Wittenbrink, Alex Pang

Splatting With Shadows

In this paper we describe an efficient approach to adding shadows to volumetric scenes. The light emitted by the light source is properly attenuated by the intervening volumetric structures before it is reflected towards the eye. Both parallel and perspective light sources can be efficiently and accurately modeled. We use a two-stage splatting approach. In the first stage, a light volume is constructed in O(N³) time, which is about the same time it takes to render a regular image. This light volume stores the volumetrically attenuated light arriving at each grid voxel and only needs to be recomputed if the light source is moved. If only diffuse shading is required, then the contribution of any number of light sources can be stored in the same space. The second stage is formed by the usual rendering pipeline. The only difference is that the light contributions are interpolated from the light volume, instead of using the constant light source intensity. Once the light volume is computed, the actual rendering is only marginally more expensive than in the unshadowed case. The rendered images, however, convey three-dimensional relationships much better and look considerably more realistic, which is clearly needed if volume graphics is to become a mainstream technology.
Manjushree Nulkar, Klaus Mueller
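The first-stage light volume described in this abstract can be illustrated with a minimal sketch, under the simplifying assumption of a parallel light source shining along one grid axis. This is our own slice-by-slice accumulation with a simple extinction model, not the paper's splatting-based construction, and the function name is hypothetical:

```python
import numpy as np

def build_light_volume(density, step=1.0):
    """Propagate light through the volume slice by slice, assuming a
    parallel light source along the +z axis. The entry at (x, y, z) is
    the fraction of the source intensity reaching that voxel."""
    opacity = 1.0 - np.exp(-density * step)        # simple extinction model
    light = np.empty_like(density, dtype=float)
    transmittance = np.ones(density.shape[:2])     # full light at the front
    for z in range(density.shape[2]):
        light[:, :, z] = transmittance             # light arriving at slice z
        transmittance = transmittance * (1.0 - opacity[:, :, z])
    return light
```

In the second stage, the shading term at each sample would interpolate from this volume instead of using a constant source intensity; the table only needs rebuilding when the light moves, which is what makes the shadowed rendering only marginally more expensive.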

A Study of Transfer Function Generation for Time-Varying Volume Data

The proper usage and creation of transfer functions for time-varying data sets is an often ignored problem in volume visualization. Although methods and guidelines exist for time-invariant data, little formal study for the time-varying case has been performed. This paper examines this problem, and reports the study that we have conducted to determine how the dynamic behavior of time-varying data may be captured by a single or small set of transfer functions. The criteria which dictate when more than one transfer function is needed were also investigated. Four data sets with different temporal characteristics were used for our study. Results obtained using two different classes of methods are discussed, along with lessons learned. These methods, including a new multi-resolution opacity map approach, can be used for semi-automatic generation of transfer functions to explore large-scale time-varying data sets.
T. J. Jankun-Kelly, Kwan-Liu Ma

Volume-Based Modeling


Volume Graphics Modeling of Ice Thawing

Image synthesis of natural phenomena is one of the fundamental research areas in volume graphics. Since the pursuit of photoreality requires physically-based computation at the cost of interactivity, much more attention has been paid to phenomenological models, which are intended to produce the same visual effect as physically-based models, without highly-complex computations. This paper focuses on ice thawing as a quite common phenomenon in our daily life, and presents volume modeling of the phenomenon using mathematical morphology and cellular automaton.
Issei Fujishiro, Etsuko Aoki

A Survey of Methods for Volumetric Scene Reconstruction from Photographs

Scene reconstruction, the task of generating a 3D model of a scene given multiple 2D photographs taken of the scene, is an old and difficult problem in computer vision. Since its introduction, scene reconstruction has found application in many fields, including robotics, virtual reality, and entertainment. Volumetric models are a natural choice for scene reconstruction. Three broad classes of volumetric reconstruction techniques have been developed based on geometric intersections, color consistency, and pair-wise matching. Some of these techniques have spawned a number of variations and undergone considerable refinement. This paper is a survey of techniques for volumetric scene reconstruction.
Greg Slabaugh, Ron Schafer, Tom Malzbender, Bruce Culbertson

A Volume Modeling Component of CAD

This paper presents a framework we have developed for modeling and manipulation of volumetric data sets. It is a set of new functions integrated into a CAD system. We shall address freeform modeling through voxelization of NURBS, voxel-based sculpting and the interface to CAD and Rapid Prototyping systems.
Zhou Jianwen, Lin Feng, Seah Hock Soon

A Technique for Volumetric CSG based on Morphology

In this paper, a new technique for volumetric CSG is presented. The technique requires the input volumes to correspond to solids which fulfill a voxelization suitability criterion. Assume the CSG operation is union. The volumetric union of two such volumes is defined in terms of the voxelization of the union of the two original solids.
The theory behind the new technique is discussed, and the algorithm and implementation are presented. Finally, we present images and timings.
Andreas Bærentzen, Niels Jørgen Christensen

Hardware, Architectures, and APIs for Volume Rendering


vlib: A Volume Graphics API

This paper describes vlib, a generic application programming interface for volume graphics which supports many of the significant developments in the field to date. We present an overview of the interface and describe how its novel object modeling framework is able to facilitate a variety of modeling and rendering features, including scene graphs allowing constructive object representations, normal perturbation, spatial deformations, hypertexturing and more. We also discuss a volumetric ray-tracing algorithm for producing high quality images with a minimal memory overhead. The paper closes with some comments on our Open Source implementation of the interface and its underlying graphics system.
Andrew S. Winter, Min Chen

Efficient Space Leaping for Ray Casting Architectures

One of the most severe problems for ray casting architectures is the waste of computation cycles and I/O bandwidth due to redundant sampling of empty space. While several techniques exist for software implementations to skip these empty regions, few are suitable for hardware implementation. The few which have been presented either require a tremendous amount of logic or are not feasible for high-frequency designs (i.e. running at 100 MHz), where latency is one of the biggest issues.
In this paper, we present an efficient space leaping mechanism which requires only a small amount of SRAM (4 Kbit for a 256³ volume) and can be easily integrated into ray casting architectures. For each sub-cube of the volume, a bit is stored in an occupancy map, which can be generated in real time using the VIZARD II architecture. Hence, space leaping can be classification-dependent, achieving yet another significant speed-up over skipping only the empty space (voxel = 0). Using a set of real-world datasets, we show that frame rates well above 15 frames per second can be accomplished for the VIZARD II architecture.
M. Meißner, M. Doggett, J. Hirche, U. Kanus
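The occupancy map this abstract describes can be sketched in software as one bit per sub-cube (the paper targets hardware, where the map lives in SRAM; the function name and threshold-based classification below are our own simplifications). With 16³-voxel sub-cubes, a 256³ volume yields 16³ = 4096 bits, matching the 4 Kbit figure:

```python
import numpy as np

def build_occupancy_map(volume, block=16, threshold=0.0):
    """One bit per block^3 sub-cube: set if any voxel in the sub-cube is
    non-empty after classification (here simply: value > threshold).
    Assumes the volume dimensions are multiples of `block`."""
    nx, ny, nz = (s // block for s in volume.shape)
    occ = np.zeros((nx, ny, nz), dtype=bool)
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                sub = volume[i*block:(i+1)*block,
                             j*block:(j+1)*block,
                             k*block:(k+1)*block]
                occ[i, j, k] = bool((sub > threshold).any())
    return occ
```

A ray caster would consult the map at each sample and leap to the next sub-cube boundary whenever the bit is clear; rebuilding the map after a transfer-function change is what makes the skipping classification-dependent rather than limited to voxel = 0.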

An Architecture For Interactive Tetrahedral Volume Rendering

We present a new architecture for interactive unstructured volume rendering. Our system moves all the computations necessary for order-independent transparency and volume scan conversion from the CPU to the graphics hardware, and it makes a software sorting pass unnecessary. It therefore provides the same advantages for volume data that triangle-processing hardware provides for surfaces. To address a remaining bottleneck, the bandwidth between main memory and the graphics processor, we introduce two new primitives, tetrahedral strips and tetrahedral fans. These primitives allow performance improvements in rendering tetrahedral meshes similar to the improvements triangle strips and fans allow in rendering triangle meshes. We provide new techniques for generating tetrahedral strips that achieve, on average, strip lengths of 17 on representative datasets. The combined effect of our architecture and new primitives is a 72 to 85 times increase in performance over triangle graphics hardware approaches. These improvements make it possible to use volumetric tetrahedral meshes in interactive applications.
Davis King, Craig M. Wittenbrink, Hans J. Wolters

Parallelizing the ZSWEEP Algorithm for Distributed-Shared Memory Architectures

In this paper we describe a simple parallelization of the ZSWEEP algorithm for rendering unstructured volumetric grids on distributed-shared memory machines, and study its performance on three generations of SGI multiprocessors, including the new Origin 3000 series.
The main idea of the ZSWEEP algorithm is very simple: it is based on sweeping the data with a plane parallel to the viewing plane, in order of increasing z, projecting the faces of cells that are incident to vertices as they are encountered by the sweep plane. Our parallel extension of the basic algorithm makes use of an image-based task partitioning scheme. Essentially, the screen is divided into more tiles than the number of processors; each processor then performs the sweep independently on the next available tile, until no more tiles are available to render. Here, we detail the modifications necessary to efficiently extend the sequential algorithm to work on shared-memory machines. We report on the performance of our implementation, and show that the tile-based ZSWEEP is naturally cache friendly, achieves fast rendering times, and attains substantial speedups on all the machines we used for testing. On one processor of our Origin 3000, we measure the L2 data cache hit rate of the tile-based ZSWEEP to be over 99%; we obtain a parallel efficiency of 83% on 16 processors, and rendering rates of about 300 thousand tetrahedra per second for a 1024 × 1024 image.
Ricardo Farias, Cláudio T. Silva
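The image-based task partitioning described above amounts to a dynamic work queue of screen tiles. A minimal sketch, with threads standing in for processors and a hypothetical `render_tile` callback in place of the per-tile ZSWEEP:

```python
from concurrent.futures import ThreadPoolExecutor
from queue import Queue, Empty

def render_in_tiles(width, height, tile_size, n_workers, render_tile):
    """Divide the screen into more tiles than workers; each worker pulls
    the next available tile until none remain (dynamic load balancing)."""
    tiles = Queue()
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            tiles.put((x, y, min(tile_size, width - x),
                             min(tile_size, height - y)))
    results = {}

    def worker():
        while True:
            try:
                x, y, w, h = tiles.get_nowait()
            except Empty:
                return                      # no tiles left; worker is done
            results[(x, y)] = render_tile(x, y, w, h)

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for _ in range(n_workers):
            pool.submit(worker)
    return results
```

A real implementation would composite the per-tile buffers into a framebuffer; the point of the scheme is that each per-tile sweep touches only the cells projecting into that tile, which is what keeps the working set small and the cache hit rate high.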

Data Acquisition


Hybrid Distance Field Computation

Distance fields are a widely investigated topic within Volume Graphics. Research is divided between applications (such as skeletonisation, hypertexture, voxelisation, acceleration of rendering techniques, correlation and collision detection) and the fundamental algorithmic calculation of the distance fields themselves. This paper concentrates on the latter by presenting a new method for calculating distance fields and comparing it with the current best approximate method and the true Euclidean distance field. Details are given of the algorithm and the acceleration methods that are used for calculating the true distance field. Brief descriptions of applications for these accurate distance fields are given at the end of the paper.
Richard Satherley, Mark W. Jones

Visualization of Labeled Segments Cross-Contour Surfaces

Cross contour surfaces are composed of sets of planar contours. They are the natural output of surface extraction algorithms based on contouring features in parallel image slices of volume models. They are also suitable for the representation of CAD objects with tubular elongated shapes such as pipes and tools. Rendering these surfaces consists of tiling between successive contours, which is mainly a problem of establishing correspondences: between successive contours (branching) and also between vertices of consecutive contours (triangle definition). Most of the existing algorithms solve these problems by minimizing a distance function between vertices. However, contours are generally composed of segments belonging to different semantic regions that should not be mixed during tiling, as for instance functional regions of the brain or types of terrain in elevation maps. A drawback of the existing distance-based approaches is that they may establish correspondences between points of different segments. This paper proposes a representation model for surfaces from cross contours composed of labeled segments. In addition, a rendering algorithm for this model is described that removes undesirable tiles between segments of different labels. The proposed method allows the tiling to be done on the fly, thus avoiding a double representation of the surface (contours plus triangle mesh). It also allows adaptive levels of resolution in the rendering.
Dani Tost, Anna Puig

Topology-Guided Downsampling

We present a new downsampling method for structured volume grids, which preserves much more of the topology of a scalar field than existing downsampling methods by preferably selecting scalar values of critical points. In particular, many critical points can be preserved which are lost by traditional downsampling methods. Our method is named “topology-guided downsampling” as topology-preserving downsampling is impossible in general. However, we show that even an approximate preservation of topology is highly desirable if isosurfaces are extracted from the downsampled volume grid, e.g. for interactive previewing, because many topological features of the isosurfaces, e.g. the number of components, tunnels, and holes, are preserved. We illustrate the benefits of our method with examples from medical and technical applications of volume visualization.
Martin Kraus, Thomas Ertl
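The idea of preferring critical-point values over averages can be caricatured in a few lines. The sketch below is a crude stand-in of our own devising, not Kraus and Ertl's method: per 2×2×2 block it keeps the sample deviating most from the block mean, so isolated extrema (candidate critical points) survive instead of being averaged away; the actual technique examines the scalar field's critical points:

```python
import numpy as np

def downsample_keep_extrema(vol):
    """Halve each dimension; within every 2x2x2 block keep the sample
    that deviates most from the block mean. Assumes even dimensions."""
    nx, ny, nz = (s // 2 for s in vol.shape)
    # Group the volume into (nx, ny, nz) blocks of 8 samples each.
    blocks = vol[:2*nx, :2*ny, :2*nz].reshape(nx, 2, ny, 2, nz, 2)
    blocks = blocks.transpose(0, 2, 4, 1, 3, 5).reshape(nx, ny, nz, 8)
    mean = blocks.mean(axis=-1, keepdims=True)
    pick = np.abs(blocks - mean).argmax(axis=-1)   # most extreme sample
    return np.take_along_axis(blocks, pick[..., None], axis=-1)[..., 0]
```

Compared with box filtering, an isolated maximum of value 10 in a zero field survives at full height in the downsampled grid instead of being diluted to 10/8, which is the property that keeps isosurface components from vanishing in a preview.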

Extracting Boundary Surface of Arbitrary Topology from Volumetric Datasets

This paper presents a novel, powerful reconstruction algorithm that can recover correct shape geometry as well as its unknown topology from arbitrarily complicated volumetric datasets. The algorithm starts from a simple seed model (of genus zero) that can be initialized automatically without user intervention. The deformable behavior of the model is then governed by a locally defined objective function associated with each vertex of the model. Through the numerical computation of function optimization, the algorithm can adaptively subdivide the model geometry, automatically detect self-collision of the model, properly modify its topology (because of the occurrence of self-collision), continuously evolve the model towards the object boundary, and reduce fitting error and improve fitting quality via global subdivision.
Ye Duan, Hong Qin

Segmentation of Biological Volume Datasets Using a Level-Set Framework

This paper presents a framework for extracting surface models from a broad variety of volume datasets. These datasets are produced from standard 3D imaging devices, and are all noisy samplings of complex biological structures with boundaries that have low and often varying contrasts. The level set segmentation method, which is well documented in the literature, creates a new volume from the input data by solving an initial-value partial differential equation (PDE) with user-defined feature-extracting terms. However, level set deformations alone are not sufficient; they must be combined with powerful initialization techniques in order to produce successful segmentations. Our level set segmentation approach consists of defining a set of suitable pre-processing techniques for initialization and selecting/tuning different feature-extracting terms in the level set algorithm. This collection of techniques forms a toolkit that can be applied, under the guidance of a user, to segment a variety of volumetric data.
Ross Whitaker, David Breen, Ken Museth, Neha Soni

Correction of Voxelization Artifacts by Revoxelization

Earlier proposed antialiasing techniques for voxelization of geometric objects in some cases do not result in completely alias-free data and image renditions. This is often the case for some implicit solids and CSG trees. In this paper we propose a set of operations, which can correct such corrupted data sets and subsequently lead to alias-free image renditions.
Miloš Šrámek, Leonid I. Dimitrov, J. Andreas Bærentzen

Acceleration Methods for Volume Rendering


Image-Based Rendering of Surfaces from Volume Data

We present an image-based rendering technique to accelerate rendering of surfaces from volume data. We cache the fully volume rendered image (called the keyview) and use it to generate novel views without ray-casting every pixel. This is achieved by first constructing an underlying surface model of the volume and then texture mapping the keyview onto the geometry. When the novel view moves slightly away from the keyview, most of the originally visible regions in the keyview are still visible in the novel view. Therefore, we only need to cast rays for pixels in the newly visible regions, which usually occupy only a small portion of the whole image, resulting in a substantial speedup. We have applied our technique to a virtual colonoscopy system and have obtained an interactive navigation speed through a 512³ patient colon. Our experiments demonstrate an average of an order of magnitude speedup over traditional volume rendering, while compromising very little on image quality.
Baoquan Chen, Arie Kaufman, Qingyu Tang

Accelerating Voxel-Based Terrain Rendering with Keyframe-Free Image-Based Rendering

We propose a voxel-based terrain rendering method which incorporates a novel keyframe-free image-based rendering algorithm and a new heuristic ray coherence raycasting algorithm. The current image is generated by warping the previous image with a revised 3D warping algorithm and filling holes by raycasting, accelerated by ray coherence and multiresolution ray traversal. This method not only achieves good performance, but also allows arbitrary viewing directions. We further accelerate the rendering with multiprocessor parallelism and have achieved a real-time rendering rate of 30 Hz on a 16-processor SGI Power Challenge.
Jiafa Qin, Ming Wan, Huamin Qu, Arie Kaufman

Hierarchical Perspective Volume Rendering Using Triangle Fans

We present a method of accelerated perspective volume rendering using cell projection, triangle fans, and a data hierarchy. The hierarchy allows mixed resolution rendering, greatly increasing speed. We utilize triangle fans for additional speed and texture mapped opacity for accuracy.
Greg Schussman, Nelson Max

Two-Pass Image and Volume Rotation

We present a novel two-pass approach for both 2D image and 3D volume rotation. Each pass is a pseudo shear. However, it has a similar regularity as a pure shear in that a beam remains rigid while being sheared. Furthermore, the 3D pseudo shear guarantees that beams within one major axis slice remain in the same directional plane after the shearing. These properties make it feasible to implement the pseudo shears on a multi-pipelined hardware or a massively parallel machine. Compared with the existing decompositions, ours offer a minimum number of shears to realize an arbitrary 3D rotation. Our decomposition also preserves the image/volume quality by guaranteeing no minification for the first pass shear.
Baoquan Chen, Arie Kaufman

Applications and Case Studies


Volume Visualization of Payoff Regions for Derivatives Risk Management

Volume visualization of derivatives helps us discover risks which have hitherto been elusive with traditional surface plots. In this paper, we address the volatility visualization issue, one of the critical components in option pricing, by incorporating volume visualization for better risk management. By enabling the visualization of volatility changes in risk profiling, combined with two other determinants of an option's value (the underlying asset spot price and days to maturity), a much better understanding of the risk involved in a portfolio can be achieved, particularly when the fluctuation of the asset is highly uncertain.
Tan Toh Fei, Edmond Cyril Prakash

EXOMIO: A 3D Simulator for External Beam Radiotherapy

Simulators are medical devices used in oncology clinics to perform the simulation procedure for external beam radiotherapy treatment. For a clinic, obtaining a real Simulator is a high investment in terms of money, space and personnel. The alternative here can be a Virtual Simulator (VS). VSs are software systems that can perform the simulation process using the Computed Tomography (CT) data set of the patient, including the patient's external skin landmarks, instead of the physical patient. In this paper we present EXOMIO, a 3D VS which supports high-end visualization techniques. As a result we can simulate every function of the real Simulator, including component movement, light field projection and fluoroscopy. Furthermore, we can provide physicians with ergonomic volume definition and navigation tools.
Grigorios Karangelis, Nikolaos Zamboglou, Dimos Baltas, Georgios Sakas

Real-Time Volume Rendering for Virtual Colonoscopy

We present a volume rendering system that is capable of generating high-quality images of large volumetric data (e.g., 512³) in real time (30 frames or more per second). The system is particularly suitable for applications that generate densely occluded scenes of large data sets, such as virtual colonoscopy. The central idea is to divide the volume into sets of axis-aligned slabs. The union of the slabs approximates the shape of a colon. We render sub-volumes enclosed by the slabs and blend the slab images. We use the slab structure to accelerate volume rendering in various aspects. First, empty voxels outside the slabs are skipped. Second, fast view-volume clipping and occlusion culling are applied based on the slabs. Third, slab images are reused for nearby viewpoints. In addition, the slabs can be created very efficiently and they can be used to approximate perspective rendering with parallel projection, so that our system can benefit from fast parallel projection hardware and algorithms. We use image-warping to reduce the artifacts due to the approximation.
Wei Li, Arie Kaufman, Kevin Kreeger

Translucent and Opaque Direct Volume Rendering for Virtual Endoscopy Applications

Virtual endoscopy applications frequently require the visual representation of several material interfaces to show the relevant data features to the user. This requires the specification of complex transfer functions which classify the various materials and color them appropriately.
In this paper, we explore the use of direct volume rendering for virtual endoscopy. We specifically look into the visual representation of different anatomical features of various volume datasets which are located below the inner surface of the organ of interest. Furthermore, we present how interactivity can be accomplished with the VIZARD II ray casting accelerator board.
Michael Meißner, Dirk Bartz

A Framework to Visualize and Interact with Multimodal Medical Images

The simultaneous use of images obtained from different sources is common in medical diagnosis. However, even though the quality of these images has been improving, the integration of multimodality data into a unique 3D representation is still non-trivial. To overcome this problem, multimodal visualization techniques provide better insight by finding suitable strategies to integrate different characteristics of multiple data sets into a single visual representation. This paper describes a framework for interactive multimodal visualization of 3D medical images, focusing on the multimodal visualization model and requirements for developing such systems. A short overview of multimodal visualization systems and techniques is also presented.
Isabel Manssour, Sérgio Furuie, Luciana Nedel, Carla Freitas

