
About this Book

Data visualization is currently a very active and vital area of research, teaching and development. The term unites the established field of scientific visualization and the more recent field of information visualization. The success of data visualization is due to the soundness of the basic idea behind it: the use of computer-generated images to gain insight and knowledge from data and its inherent patterns and relationships. A second premise is the utilization of the broad bandwidth of the human sensory system in steering and interpreting complex processes and simulations involving data sets from diverse scientific disciplines and large collections of abstract data from many sources.

These concepts are extremely important and have a profound and widespread impact on the methodology of computational science and engineering, as well as on management and administration. The interplay between various application areas and their specific problem-solving visualization techniques is emphasized in this book. Reflecting the heterogeneous structure of data visualization, emphasis was placed on the following topics:

- Visualization Algorithms and Techniques;
- Volume Visualization;
- Information Visualization;
- Multiresolution Techniques;
- Interactive Data Exploration.

Data Visualization: The State of the Art presents the state of the art in scientific and information visualization techniques by experts in this field. It can serve as an overview for the inquiring scientist, and as a basic foundation for developers. This edited volume contains chapters dedicated to surveys of specific topics, as well as a great deal of previously unpublished original work, illustrated by examples from a wealth of applications. The book will also provide basic material for teaching state-of-the-art techniques in data visualization.

Data Visualization: The State of the Art is designed to meet the needs of practitioners and researchers in scientific and information visualization. This book is also suitable as a secondary text for graduate level students in computer science and engineering.

Table of Contents

Frontmatter

Visualization Algorithms and Techniques

Frontmatter

Efficient Occlusion Culling for Large Model Visualization

Abstract
Occlusion and visibility culling is one of the major techniques for reducing the geometric complexity of large polygonal models. Since the introduction of hardware-assisted occlusion culling in OpenGL (as an extension), purely software-based approaches are rapidly losing relevance for applications which cannot exploit specific knowledge of the scene geometry. However, several issues on the software side remain open that allow for significant performance improvements. In this paper, we discuss several of these techniques.
Dirk Bartz, Michael Meißner, Gordon Müller

Localizing Vector Field Topology

Abstract
The topology of vector fields offers a well-known way to show a “condensed” view of the streamline behavior of a vector field. The global structure of a field can be shown without time-consuming user interaction. With regard to large data visualization, one encounters a major drawback: the necessity to analyze a whole data set, even when interested in only a small region. We show that one can localize the topology concept by including the boundary in the topology analysis. The idea is demonstrated for a turbulent swirling jet simulation example. Our concept works for all planar, piecewise analytic vector fields on bounded domains.
Gerik Scheuermann, Bernd Hamann, Kenneth I. Joy, Wolfgang Kollmann
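
The core operation of vector field topology is locating the critical points (zeros of the field) and classifying them by the eigenvalues of the Jacobian. The sketch below is a generic illustration of that step, not code from the chapter: it searches for critical points of a simple analytic planar field by Newton iteration from grid seeds and classifies what it finds; the example field and all function names are made up for illustration.

```python
import numpy as np

def v(p):
    """Example planar vector field (illustrative only)."""
    x, y = p
    return np.array([y, x ** 3 - x])

def jacobian(p, h=1e-6):
    """Central-difference Jacobian of the field at point p."""
    J = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = h
        J[:, j] = (v(p + e) - v(p - e)) / (2.0 * h)
    return J

def find_critical_point(seed, steps=50, tol=1e-10):
    """Newton iteration towards a zero of the field; returns None on failure."""
    p = np.array(seed, dtype=float)
    for _ in range(steps):
        f = v(p)
        if np.dot(f, f) < tol:
            return p
        try:
            p = p - np.linalg.solve(jacobian(p), f)
        except np.linalg.LinAlgError:
            return None
    return None

def classify(p):
    """Classify a critical point by the eigenvalues of the Jacobian."""
    ev = np.linalg.eigvals(jacobian(p))
    if np.all(np.abs(ev.imag) > 1e-9):
        return "center/focus"
    if ev.real[0] * ev.real[1] < 0:
        return "saddle"
    return "repelling node" if ev.real[0] > 0 else "attracting node"

# Seed Newton searches from a coarse grid over the bounded domain.
found = []
for sx in np.linspace(-2.0, 2.0, 9):
    for sy in np.linspace(-2.0, 2.0, 9):
        p = find_critical_point((sx, sy))
        if p is not None and not any(np.allclose(p, q, atol=1e-6) for q in found):
            found.append(p)
            print(p.round(4), classify(p))
```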

Feature Tracking with Skeleton Graphs

Abstract
One way to analyse large time-dependent data sets is to visualize the evolution of features in these data. The process consists of four steps: feature extraction, feature tracking, event detection, and visualization.
In earlier work, we described the execution of the tracking process by means of basic attributes like position and size, gathered in ellipsoid feature descriptions. Although these basic attributes are accurate and provide good tracking results, they convey little shape information about the features. In other work, we presented a better way to describe the shape of the features by skeleton attributes.
In this paper, we investigate the role that the skeleton graphs can play in feature tracking and event detection. The extra shape information allows detection of certain events much more accurately, and also allows detection of new types of events: changes in the topology of the feature.
Benjamin Vrolijk, Freek Reinders, Frits H. Post

Correspondence Analysis

Visualizing property-profiles of time-dependent 3D datasets
Abstract
The article presents a method to perform an analysis of correspondence between sets of points in three-dimensional Euclidean space E³. Application-specific spatial data structures like the minimum (Euclidean) spanning tree and several kinds of histograms assessing different transformations, combined with quantities characterizing geometrical and topological qualities of point clusters, are used to compute scores for point-to-point identification. These ratings are accumulated in a so-called match matrix, which is finally employed to extract a 1:1 match. The method is used to track individual fluorescent spots (synapses) in a volume of tissue which undergoes uneven spatial distortion (swelling and shrinkage). This enables the creation and analysis of cell property-profiles.
Karsten Fries, Jörg Meyer, Hans Hagen, Bernd Lindemann
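
The match-matrix idea can be shown compactly. The sketch below is a deliberately simplified illustration: it scores candidate point pairings with nothing but a Gaussian of the Euclidean distance and extracts a 1:1 match greedily, whereas the chapter accumulates several geometric and topological ratings (spanning trees, histograms) into the match matrix. All names are illustrative.

```python
import numpy as np

def match_matrix(points_a, points_b):
    """Score every pairing of points from set A and set B.

    Here the score is simply a Gaussian of the Euclidean distance;
    the chapter accumulates several geometric/topological ratings instead.
    """
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    return np.exp(-d ** 2)

def extract_one_to_one(scores):
    """Greedily extract a 1:1 match from the match matrix."""
    scores = scores.copy()
    pairs = []
    for _ in range(min(scores.shape)):
        i, j = np.unravel_index(np.argmax(scores), scores.shape)
        if scores[i, j] <= 0.0:
            break
        pairs.append((i, j))
        scores[i, :] = -np.inf      # row i and column j may not be reused
        scores[:, j] = -np.inf
    return pairs

# Toy example: set B is set A under a small distortion.
rng = np.random.default_rng(0)
a = rng.uniform(0, 10, size=(6, 3))
b = a + rng.normal(scale=0.05, size=a.shape)
print(extract_one_to_one(match_matrix(a, b)))
```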

Specializing Visualization Algorithms

Abstract
In this paper we look at the potential of program specialization techniques in the context of visualization. In particular, we examine partial evaluation and pass separation, how these have been used to automatically produce more efficient implementations, and how they can be used to design new algorithms. We conclude by discussing the applications in visualization where we think program specialization is most promising.
Stephan Diehl
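
Partial evaluation specializes a general program with respect to inputs that are known ahead of time, leaving a residual program that depends only on the remaining inputs. The toy sketch below, which is not taken from the paper, specializes the textbook power function for a fixed exponent by generating and compiling the residual code at run time. In a visualization setting, values that are fixed per frame or per data set would play the role of the statically known input.

```python
def power(x, n):
    """General routine: x raised to a non-negative integer power n."""
    result = 1.0
    for _ in range(n):
        result *= x
    return result

def specialize_power(n):
    """Partially evaluate power() for a statically known exponent n.

    The loop over n is unrolled at specialization time, leaving only
    the multiplications that depend on the dynamic input x.
    """
    body = " * ".join(["x"] * n) if n > 0 else "1.0"
    src = f"def power_{n}(x):\n    return {body}\n"
    namespace = {}
    exec(src, namespace)                  # compile the residual program
    return namespace[f"power_{n}"]

power_4 = specialize_power(4)             # residual program: return x * x * x * x
print(power(3.0, 4), power_4(3.0))        # both print 81.0
```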

Isosurface Extraction for Large-Scale Data Sets

Abstract
Isosurface extraction is an important technique for visualizing large-scale three-dimensional scalar fields. Recent years have seen many acceleration methods for isosurface extraction, including the span space representation and view-dependent methods. In this paper, we provide an overview of isosurface extraction and describe two methods for view-dependent isosurface extraction, building upon our previous research to develop a superior method. We describe the differences between the two techniques and their relative advantages and disadvantages. These methods are particularly useful for remote visualization of very large datasets.
Yarden Livnat, Charles Hansen, Christopher R. Johnson
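
The acceleration theme is easy to illustrate for the span space idea: only cells whose value range brackets the isovalue can contribute triangles. The numpy sketch below is an illustration of that filtering step, not the authors' view-dependent algorithms; it computes per-cell min/max for a regular grid and selects the active cells before any extraction work is done.

```python
import numpy as np

def active_cells(field, isovalue):
    """Return indices of grid cells whose [min, max] range spans the isovalue.

    'field' is a 3D array of vertex samples; a cell is the cube spanned by
    eight neighbouring vertices. Only these active cells need to be passed
    to a marching-cubes style triangle extraction step.
    """
    # Gather the eight corner values of every cell via shifted views.
    corners = np.stack([field[dx:field.shape[0] - 1 + dx,
                              dy:field.shape[1] - 1 + dy,
                              dz:field.shape[2] - 1 + dz]
                        for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)])
    cell_min = corners.min(axis=0)
    cell_max = corners.max(axis=0)
    mask = (cell_min <= isovalue) & (isovalue <= cell_max)
    return np.argwhere(mask)

# Toy field: distance from the centre of a 64^3 grid; the isosurface is a sphere.
x, y, z = np.mgrid[0:64, 0:64, 0:64]
field = np.sqrt((x - 32.0) ** 2 + (y - 32.0) ** 2 + (z - 32.0) ** 2)
cells = active_cells(field, isovalue=20.0)
print(f"{len(cells)} of {63 ** 3} cells intersect the isosurface")
```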

Volume Visualization

Frontmatter

Topologically-Accentuated Volume Rendering

Abstract
In spite of many attempts at making volume visualization popular, it is still a hard task for novice users to adjust rendering-related parameter values for generating informative images. In our previous study, we took advantage of a 3D field topology analysis for semi-automatic design of transfer functions. In this paper, we address four specific issues to adapt the method for dealing with real world datasets. A medical CT-scanned dataset is used to prove the feasibility of the extended method.
Issei Fujishiro, Yuriko Takeshima, Shigeo Takahashi, Yumi Yamaguchi

Reconstruction Issues in Volume Visualization

Abstract
Although volume visualization has already grown out of its infancy, the most commonly used reconstruction techniques are still trilinear interpolation for function reconstruction and central differences (most often in conjunction with trilinear interpolation) for gradient reconstruction. Nevertheless, a considerable amount of research in the last few years has been devoted to improving this situation. This paper surveys the more important methods, emphasizing selected work in function and gradient reconstruction, and gives an overview of the rather new development of exploiting second-order derivatives for volume visualization purposes.
Thomas Theußl, Torsten Möller, Jiří Hladůvka, M. Eduard Gröller
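
The two default reconstruction schemes named in the abstract are simple to state concretely. The following sketch, a generic illustration rather than code from the survey, evaluates a scalar volume at an arbitrary position by trilinear interpolation and reconstructs the gradient there with central differences of the reconstructed function.

```python
import numpy as np

def trilinear(volume, p):
    """Reconstruct the scalar value at continuous position p = (x, y, z)."""
    i, j, k = (int(np.floor(c)) for c in p)
    fx, fy, fz = p[0] - i, p[1] - j, p[2] - k
    c = volume[i:i + 2, j:j + 2, k:k + 2]          # the 8 surrounding samples
    c = c[0] * (1 - fx) + c[1] * fx                # interpolate along x
    c = c[0] * (1 - fy) + c[1] * fy                # then along y
    return c[0] * (1 - fz) + c[1] * fz             # then along z

def gradient_central(volume, p, h=1.0):
    """Gradient at p via central differences of the reconstructed function."""
    g = np.zeros(3)
    for axis in range(3):
        e = np.zeros(3); e[axis] = h
        g[axis] = (trilinear(volume, p + e) - trilinear(volume, p - e)) / (2 * h)
    return g

# Toy volume: f(x, y, z) = x + 2y + 3z, so the gradient should be (1, 2, 3).
vol = np.mgrid[0:16, 0:16, 0:16].astype(float)
vol = vol[0] + 2 * vol[1] + 3 * vol[2]
p = np.array([5.3, 7.8, 2.1])
print(trilinear(vol, p), gradient_central(vol, p))
```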

High Quality Splatting and Volume Synthesis

Abstract
Our recent improvements to volume rendering using splatting allow for accurate functional reconstruction and volume integration, occlusion-based acceleration, post-rendering classification and shading, 3D texture mapping, bump mapping, anti-aliasing, and gaseous animation. A pure software implementation with low storage requirements and simple operations easily supports these features. This paper presents a unified framework and discussion of these enhancements, as well as extensions to support hypertextures efficiently. A discussion of appropriate volume models for effective use of these techniques is also presented.
Roger Crawfis, Jian Huang

CellFast: Interactive Unstructured Volume Rendering and Classification

Abstract
CellFast is an interactive system for unstructured volume visualization. Our CellFast system uses optimizations of OpenGL triangle fans, a customized quicksort, memory organization for cache efficiency, display lists, tetrahedral culling, and multithreading. The optimizations improve the performance of an approach similar to Shirley and Tuchman’s projected tetrahedra rendering to provide 1 frame/second for 240,122 tetrahedral cells, 3 frames/second for 70,125 tetrahedral cells, and 15 frames/second for 12,936 tetrahedral cells. CellFast also performs fully automated classification to assist rendering. Demonstrations on fluid flow, medical simulation, and medical imaging datasets are provided. CellFast supports very high resolutions (up to 3840x1024) at frame rates that are orders of magnitude higher than those reported in the literature. The CellFast system combines an interactive renderer and automatic classification to make unstructured volume rendering more useful.
Craig M. Wittenbrink, Hans J. Wolters, Mike Goss
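
Projected-tetrahedra style renderers such as this one need the cells in visibility order before compositing. The sketch below illustrates only that sorting step, and in a strongly simplified form: tetrahedra are ordered back to front by the view-space depth of their centroids, which is an approximation; the chapter's customized quicksort, culling and OpenGL fan generation are not reproduced here.

```python
import numpy as np

def back_to_front(vertices, tets, view_dir):
    """Approximate visibility sort of tetrahedral cells.

    vertices : (n, 3) array of vertex positions
    tets     : (m, 4) array of vertex indices per tetrahedron
    view_dir : unit vector pointing from the eye into the scene

    Cells are ordered by decreasing centroid depth along the view
    direction, i.e. the farthest cell is composited first.
    """
    centroids = vertices[tets].mean(axis=1)          # (m, 3)
    depth = centroids @ np.asarray(view_dir, dtype=float)
    return tets[np.argsort(-depth)]

# Toy mesh: two tetrahedra stacked along z, viewed along +z.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [0, 0, 2]], dtype=float)
tets = np.array([[0, 1, 2, 3], [1, 2, 3, 4]])
print(back_to_front(verts, tets, view_dir=[0, 0, 1]))
```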

Cell Projection of Meshes with Non-Planar Faces

Abstract
We review the cell projection method of volume rendering, discussing back-to-front cell sorting and the approximations involved in hardware color computation and interpolation. We describe how the method accommodates cells with non-planar faces using view-dependent subdivision into tetrahedra.
Nelson Max, Peter Williams, Claudio Silva

Segmentation and Texture-Based Hierarchical Rendering Techniques for Large-Scale Real-Color Biomedical Image Data

Abstract
Hierarchical, texture-based rendering is a key technology for exploring large-scale datasets. We describe a framework for an interactive rendering system based on a client/server model. The system supports various output media, from immersive 3-D environments to desktop-based rendering systems. It uses web-based transport mechanisms to transfer the data between the server and the client application. This allows us to access and explore large-scale data sets from remote locations over the Internet. Hierarchical space subdivision, wavelet compression, and progressive data transmission are used to visualize the data on the client side.
Joerg Meyer, Ragnar Borg, Ikuko Takanashi, Eric B. Lum, Bernd Hamann

Information Visualization

Frontmatter

Ebusiness Click Stream Analysis

Abstract
Click stream data represents a rich source of information for understanding web site activity, from browsing patterns to purchasing decisions. Standard tools produce hundreds of reports that are not particularly useful. The problem with fixed web reports, and report-based analysis in general, is that reports only answer specific, predefined questions that are insufficient for today’s highly competitive and rapidly changing businesses. To overcome this problem, we have developed a click stream analysis tool called eBizInsights. eBizInsights consists of a web log parser, a click stream warehouse, a reporting engine, and a rich visual interactive workspace. Using the workspace, analysts perform ad hoc analysis, discover patterns, and identify correlations that are impossible to find using fixed reports.
Stephen G. Eick

Hierarchical Exploration of Large Multivariate Data Sets

Abstract
Multivariate data visualization techniques are often limited in terms of the number of data records that can be simultaneously displayed in a manner that allows ready interpretation. Due to the size of the screen and number of pixels available, visualizing more than a few thousand data points generally leads to clutter and occlusion. This in turn restricts our ability to detect, classify, and measure phenomena of interest, such as clusters, anomalies, trends, and patterns. In this paper we describe our experiences in the development of multi-resolution visualization techniques for large multivariate data sets. By hierarchically clustering the data and displaying aggregation information for each cluster, we can examine the data set at multiple levels of abstraction. In addition, by providing powerful navigation and filtering operations, we can create an environment suitable for interactive exploration without overloading the user with dense information displays. In this paper, we illustrate that our hierarchical displays are general by successfully applying them to four popular yet non-scalable visualizations, namely parallel coordinates, glyphs, scatterplot matrices and dimensional stacking.
Jing Yang, Matthew O. Ward, Elke A. Rundensteiner
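
The aggregation step described above can be illustrated compactly: once records are clustered, each cluster is drawn as its mean polyline plus a min-max extent band in, say, parallel coordinates, instead of thousands of individual polylines. The sketch below assumes the (hierarchical) cluster labels have already been computed and is purely illustrative.

```python
import numpy as np

def cluster_aggregates(data, labels):
    """Summarize each cluster of a multivariate data set for display.

    data   : (n, d) array of n records with d variables
    labels : (n,) array of cluster ids
    Returns a dict: cluster id -> (mean, min, max) over each variable,
    i.e. the centre polyline and extent band drawn in parallel coordinates.
    """
    summary = {}
    for c in np.unique(labels):
        members = data[labels == c]
        summary[c] = (members.mean(axis=0), members.min(axis=0), members.max(axis=0))
    return summary

# Toy data: 10,000 records in 5 dimensions, pre-assigned to 4 clusters.
rng = np.random.default_rng(1)
labels = rng.integers(0, 4, size=10_000)
data = rng.normal(size=(10_000, 5)) + labels[:, None] * 2.0
for c, (mean, lo, hi) in cluster_aggregates(data, labels).items():
    print(c, mean.round(2))
```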

Visualization of Multi Dimensional Data Using Structure Preserving Projection Methods

Abstract
Multi-dimensional scaling (MDS) is a structure-preserving projection method that allows for the visualization of multidimensional data. In this paper we discuss our practical experience in using MDS as a projection method in three different application scenarios. Various reasons are given why structure-preserving projection methods are useful for the analysis of multidimensional data. We discuss two visual forms (glyphs, heightfields) which can be used to represent the output of the projection methods.
Wim de Leeuw, Robert van Liere
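
Classical multi-dimensional scaling fits in a few lines: double-centre the squared distance matrix and take the leading eigenvectors. The routine below is a generic classical (Torgerson) MDS sketch for illustration; the specific MDS variants and application scenarios discussed in the chapter are not reproduced.

```python
import numpy as np

def classical_mds(dist, dim=2):
    """Project points into 'dim' dimensions so that pairwise distances
    are preserved as well as possible (classical/Torgerson MDS).

    dist : (n, n) symmetric matrix of pairwise dissimilarities
    """
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    B = -0.5 * J @ (dist ** 2) @ J                # double-centred Gram matrix
    eigval, eigvec = np.linalg.eigh(B)
    order = np.argsort(eigval)[::-1][:dim]        # largest eigenvalues first
    scale = np.sqrt(np.maximum(eigval[order], 0.0))
    return eigvec[:, order] * scale               # (n, dim) embedding

# Toy check: points on a 3D helix, embedded into the plane.
t = np.linspace(0, 4 * np.pi, 50)
pts = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(classical_mds(dist, dim=2)[:3])
```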

Visualizing Process Information and the Health Status of Wastewater Treatment Plants

A Case Study of the ESPRIT-Project WaterCIME
Abstract
In this case study we present an approach to support the operators of wastewater treatment plants by supervising the health status of the plant with online diagnosis software and by visualizing the process data and the diagnosis results. In addition to measuring the available online data, we simulate the behavior of the biological processes in order to retrieve data online that normally can only be determined by analyzing water samples in the laboratory. Based on the automated analysis of these data, we provide the operator with a view of the states of the biological processes within the plant. This information allows an early reaction in the case of deviations from optimal process behavior. Thus the plant can always be operated in an optimal state, ensuring that the treated water always meets the required environmental standards.
Peter Dannenmann, Hans Hagen

Multiresolution Methods

Frontmatter

Data Structures for 3D Multi-Tessellations: An Overview

Abstract
Multiresolution models support the interactive visualization of large volumetric data through selective refinement, an operation which permits focusing resolution only on the most relevant portions of the domain, or in the proximity of interesting field values. A 3D Multi-Tessellation (MT) is a multiresolution model consisting of a coarse tetrahedral mesh at low resolution and a set of updates refining such a mesh, arranged as a partial order. In this paper, we describe and compare different data structures which can encode a 3D MT and support selective refinement.
Emanuele Danovaro, Leila De Floriani, Paola Magillo, Enrico Puppo

A Data Model for Adaptive Multi-Resolution Scientific Data

Abstract
Representing data using multiresolution is a valuable tool for the interactive exploration of very large datasets. Current multiresolution tools are written specifically for a single kind of multiresolution data. As a step toward developing general purpose multiresolution tools, we present here a model that represents a wide range of multiresolution data within a single paradigm. In addition, our model provides support for working with multiresolution data in a distributed environment.
Philip J. Rhodes, R. Daniel Bergeron, Ted M. Sparr

Multiresolution Representation of Datasets with Material Interfaces

Abstract
We present a new method for constructing multiresolution representations of data sets that contain material interfaces. Material interfaces embedded in the meshes of computational data sets are often a source of error for simplification algorithms because they represent discontinuities in the scalar or vector field over mesh elements. By representing material interfaces explicitly, we are able to provide separate field representations for each material over a single cell. Multiresolution representations utilizing separate field representations can accurately approximate datasets that contain discontinuities without placing a large percentage of cells around the discontinuous regions. Our algorithm uses a multiresolution tetrahedral mesh supporting fast coarsening and refinement capabilities; error bounds for feature preservation; explicit representation of discontinuities within cells; and separate field representations for each material within a cell.
Benjamin Gregorski, Kenneth I. Joy, David E. Sigeti, John Ambrosiano, Gerald Graham, Murray Wolinski, Mark Duchaineau

Generalizing Lifted Tensor-Product Wavelets to Irregular Polygonal Domains

Abstract
We present a new construction approach for symmetric lifted B-spline wavelets on irregular polygonal control meshes defining two-manifold topologies. Polygonal control meshes are recursively refined by stationary subdivision rules and converge to piecewise polynomial limit surfaces. At every subdivision level, our wavelet transforms provide an efficient way to add geometric details that are expanded from wavelet coefficients. Both wavelet decomposition and reconstruction operations are based on local lifting steps and have linear-time complexity.
Martin Bertram, Mark A. Duchaineau, Bernd Hamann, Kenneth I. Joy
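
The local lifting steps mentioned in the abstract are easiest to see in one dimension. The sketch below applies one level of a lifted linear wavelet transform (predict, then update) to a 1D signal and inverts it exactly by undoing the steps in reverse order; the chapter generalizes this predict/update structure to subdivision surfaces over irregular polygonal meshes, which is not reproduced here.

```python
import numpy as np

def lift_forward(signal):
    """One level of a lifted linear wavelet transform (1D, periodic).

    Split into even/odd samples, predict each odd sample from its even
    neighbours, then update the evens to preserve the signal mean.
    """
    even, odd = signal[0::2].copy(), signal[1::2].copy()
    odd -= 0.5 * (even + np.roll(even, -1))    # predict step -> detail coeffs
    even += 0.25 * (odd + np.roll(odd, 1))     # update step  -> coarse coeffs
    return even, odd

def lift_inverse(even, odd):
    """Exact inverse: undo the lifting steps in reverse order."""
    even = even - 0.25 * (odd + np.roll(odd, 1))
    odd = odd + 0.5 * (even + np.roll(even, -1))
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

signal = np.sin(np.linspace(0, 2 * np.pi, 16))
coarse, detail = lift_forward(signal)
print(np.allclose(lift_inverse(coarse, detail), signal))   # True
```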

Ranked Representation of Vector Fields

Abstract
Browsing and visualizing large datasets is often a tedious chore. Locating features, especially in a wavelet transform domain, is usually offered as a possible solution. Wavelet transforms decorrelate data and facilitate progressive access through streaming. The work reported here describes a scheme that allows the user to first visualize regions containing significant features. Various region and coefficient ranking strategies can be incorporated into this approach so that a progressively encoded bitstream can be constructed. We examine four wavelet ranking schemes and demonstrate the usefulness of the feature-based schemes for a 2D oceanographic dataset.
Balakrishna Nakshatrala, David Thompson, Raghu Machiraju
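
A minimal version of the ranking idea: wavelet-transform the field, then rank regions by the energy of their detail coefficients so that the most feature-rich regions can be encoded and visualized first. The sketch below uses a single-level 2D Haar transform of one scalar component and a plain energy ranking; it is an illustration, not one of the four ranking schemes examined in the chapter.

```python
import numpy as np

def haar2d(a):
    """One level of the 2D Haar transform: returns (approximation, details)."""
    lo = (a[0::2] + a[1::2]) / 2.0          # pair rows ...
    hi = (a[0::2] - a[1::2]) / 2.0
    ll = (lo[:, 0::2] + lo[:, 1::2]) / 2.0  # ... then columns
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2.0
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2.0
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def rank_regions(field, block=8):
    """Rank square regions of the field by the energy of their Haar details."""
    _, (lh, hl, hh) = haar2d(field)
    energy = lh ** 2 + hl ** 2 + hh ** 2
    b = block // 2                           # detail bands are half resolution
    h, w = energy.shape
    scores = energy[:h - h % b, :w - w % b].reshape(h // b, b, w // b, b).sum(axis=(1, 3))
    order = np.argsort(-scores, axis=None)   # most significant block first
    return np.column_stack(np.unravel_index(order, scores.shape))

# Toy scalar field: smooth background plus one sharp localized feature.
y, x = np.mgrid[0:128, 0:128]
field = np.sin(x / 20.0) + 5.0 * np.exp(-((x - 90) ** 2 + (y - 40) ** 2) / 20.0)
print(rank_regions(field)[:3])               # the feature's block ranks first
```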

Modelling Techniques

Frontmatter

Procedural Volume Modeling, Rendering, and Visualization

Abstract
Volume visualization techniques have advanced dramatically over the past fifteen years. However, the scale of visualization tasks has been increasing at an even faster rate. Today, many problems require the visualization of gigabytes to terabytes of data. Additionally, the number of variables and the dimensionality of many scientific simulations and observations have increased, while the resolution of computer graphics displays has not changed substantially (still a few million pixels). These factors present a significant challenge to current visualization techniques and approaches. We propose a new approach to visualization to solve these problems and provide flexibility and extensibility for visualization challenges over the next decade: procedural visualization. In this approach, we encode and abstract datasets to a more manageable level, while also developing more effective visualization and rendering techniques.
David Ebert, Penny Rheingans
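
The core of procedural modeling is to replace a stored voxel grid with a function that is evaluated on demand, so the effective resolution is bounded by the query rather than by storage. A minimal sketch of such a procedural density field is given below; it is illustrative only and not the authors' model.

```python
import numpy as np

def procedural_density(p):
    """Procedural volume model evaluated on demand: a spherical cloud whose
    density is modulated by a cheap pseudo-noise term (a stand-in for
    proper procedural noise).

    p : (..., 3) array of query positions in [0, 1]^3
    """
    p = np.asarray(p, dtype=float)
    r = np.linalg.norm(p - 0.5, axis=-1)             # distance to the centre
    falloff = np.clip(1.0 - r / 0.4, 0.0, 1.0)       # spherical envelope
    noise = 0.5 + 0.5 * (np.sin(40.0 * p[..., 0]) *
                         np.sin(37.0 * p[..., 1]) *
                         np.sin(43.0 * p[..., 2]))
    return falloff * noise

# The volume never exists in memory: a renderer samples the function where
# it is needed, here along a single viewing ray at arbitrary resolution.
t = np.linspace(0.0, 1.0, 256)
ray = np.column_stack([t, np.full_like(t, 0.5), np.full_like(t, 0.5)])
print(procedural_density(ray).max())
```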

Surface Approximation to Point Cloud Data Using Volume Modeling

Abstract
Given a collection of unorganised points in space, we present a new method of constructing a surface which approximates this point cloud. The surface is defined implicitly as the isosurface of a trivariate volume model. The volume model is piecewise linear and obtained as a least squares fit to data derived from the point cloud. The original point cloud input is assigned a zero value. Additional points are derived for the interior and exterior and assigned positive and negative values respectively.
Adam Huang, Gregory M. Nielson

Enriching Volume Modelling with Scalar Fields

Abstract
A scalar field is a generalisation of a surface function to arbitrary dimension. Visualisation traditionally focuses on discrete specifications of scalar fields (e.g., volume datasets). This paper discusses the role of continuous and procedural field specifications in volume visualisation and volume graphics, and the inter-operations between continuous and discrete specifications. It demonstrates the different uses of scalar fields through several modelling aspects, including constructive volume geometry and non-photorealistic textures, and presents our approaches to the creation of more photorealistic effects in direct volume rendering.
Min Chen, Andrew S. Winter, David Rodgman, Steve Treavett

Fast Methods for Computing Isosurface Topology with Betti Numbers

Abstract
Betti numbers can be used as a means for feature detection to aid in the exploration of complex large-scale data sets. We present a fast algorithm for the calculation of Betti numbers for triangulated isosurfaces, along with examples of their use. Once an isosurface is extracted from a data set, calculating Betti numbers requires only time and space proportional to the size of the isosurface, not the data set. Because the overhead of obtaining Betti numbers is small, our algorithm can be used with large data sets.
Shirley F. Konkle, Patrick J. Moran, Bernd Hamann, Kenneth I. Joy
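
For a triangulated, orientable surface the Betti numbers follow from a simple counting argument: b0 is the number of connected components, b2 is the number of closed (boundary-free) components, and b1 = b0 + b2 - (V - E + F). The sketch below computes them for a triangle mesh with union-find; it illustrates this counting argument and is not the chapter's algorithm.

```python
import numpy as np

def betti_numbers(triangles):
    """Betti numbers (b0, b1, b2) of a triangulated, orientable surface.

    triangles : (m, 3) integer array of vertex indices.
    b0 = connected components, b2 = components without boundary edges,
    b1 = b0 + b2 - Euler characteristic (V - E + F).
    """
    tris = np.asarray(triangles)
    verts = np.unique(tris)
    parent = {int(v): int(v) for v in verts}

    def find(v):                      # union-find over mesh vertices
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    edges = {}
    for a, b, c in tris:
        for u, w in ((a, b), (b, c), (c, a)):
            key = (min(int(u), int(w)), max(int(u), int(w)))
            edges[key] = edges.get(key, 0) + 1
            parent[find(int(u))] = find(int(w))

    V, E, F = len(verts), len(edges), len(tris)
    b0 = len({find(int(v)) for v in verts})
    # A component is closed iff none of its edges lies on a boundary
    # (a boundary edge belongs to exactly one triangle).
    boundary_comps = {find(u) for (u, w), count in edges.items() if count == 1}
    b2 = b0 - len(boundary_comps)
    b1 = b0 + b2 - (V - E + F)
    return b0, b1, b2

# Boundary of a tetrahedron: a topological sphere -> (1, 0, 1).
sphere = np.array([(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)])
print(betti_numbers(sphere))
```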

Surface Interpolation by Spatial Environment Graphs

Abstract
A new algorithm for the reconstruction of surfaces from three-dimensional point clouds is presented. Its particular features are reconstruction of open surfaces with boundaries from data sets with variable density, and treatment of sharp edges, that is, locations of infinite curvature. While these properties can be demonstrated only empirically, we outline formal arguments which explain why the algorithm works well for compact surfaces of limited curvature without boundary. They are based on a formal definition of ‘reconstruction’, and on demonstration of existence of sampling sets for which the algorithm is successful.
Robert Mencl, Heinrich Müller

Interaction Techniques and Architectures

Frontmatter

Preset Based Interaction with High Dimensional Parameter Spaces

Abstract
Many systems require the setting of a large number of parameters. This is often a difficult and time consuming task, especially for novice users. A framework is presented to simplify this task. Settings are defined as a weighted sum of a number of presets, thereby bridging the gap between the expert mode of setting all individual parameters and the novice user mode of selecting presets. Several methods are presented to set the weights of the presets. With an interactive graphical widget, the preset controller, the user can change many parameters simultaneously in an easy and natural way. Also, methods for automated scanning of the parameter space are described. Two applications are presented: morphing of drawings and the control of a sound synthesizer.
Jarke J. van Wijk, Cornelius W. A. M. van Overveld
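
The central mechanism, a parameter setting defined as a weighted sum of presets, is simple to state in code. The sketch below is a minimal illustration with made-up preset names; the preset controller widget and the weight-setting methods described in the chapter are not reproduced.

```python
import numpy as np

def blend_presets(presets, weights):
    """Derive a full parameter setting as a weighted sum of presets.

    presets : dict mapping preset name -> (d,) parameter vector
    weights : dict mapping preset name -> non-negative weight
    Weights are normalized so the blended setting stays within the
    convex hull of the presets.
    """
    names = list(presets)
    w = np.array([weights.get(n, 0.0) for n in names], dtype=float)
    w = w / w.sum()
    return sum(wi * presets[n] for wi, n in zip(w, names))

# Hypothetical sound-synthesizer presets over 4 parameters.
presets = {
    "organ":  np.array([0.9, 0.1, 0.3, 0.8]),
    "string": np.array([0.4, 0.7, 0.6, 0.2]),
    "bell":   np.array([0.1, 0.9, 0.9, 0.5]),
}
print(blend_presets(presets, {"organ": 0.25, "string": 0.75}))
```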

Visual Interaction

For Solving Complex Optimization Problems
Abstract
Many real-world problems can be described as complex optimization problems. Some of them can be easily formalized and are amenable to an automated solution using some (usually heuristic) optimization algorithm. Other complex problems cannot be solved satisfactorily by automated algorithms. The reason is that the problems and the corresponding optimization goals either cannot be fully formalized or vary depending on the user and the task at hand. In both cases, there is no chance of obtaining a fully automatic solution to the problem. The only possibility is to make the user an integral part of the process. In this article, we therefore propose an interactive optimization system based on visualization techniques to guide the optimization process of heuristic optimization algorithms. To show the usefulness of our ideas, we provide two example applications. First, we apply the idea in the framework of similarity search in multimedia databases. Since it is difficult to specify the search task, we use visualization techniques to allow an interactive specification. As the basis for the automated optimization we use a genetic algorithm. Instead of having an a priori fully specified fitness function, however, we let the user interactively determine the fitness of intermediate results based on visualizations of the data. In this way, an optimization with user-dependent and changing optimization goals is possible. The second example is a typical complex optimization problem, the timetabling problem. In most instantiations of the problem, it is not possible to completely specify all constraints, especially the potentially very large number of dependencies and soft constraints. In this application example, we also use visualization techniques in combination with automated optimization to improve the obtained solutions.
Alexander Hinneburg, Daniel A. Keim
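
The first example in the abstract, a genetic algorithm whose fitness is determined interactively rather than fixed a priori, can be sketched as an optimization loop that defers all scoring to a callback. Everything below is an illustrative sketch; in the real system the callback would correspond to the user rating visualized intermediate results.

```python
import numpy as np

def interactive_ga(score_callback, dim=8, pop_size=20, generations=30, seed=0):
    """Genetic algorithm whose fitness comes from an external callback.

    score_callback(population) -> array of fitness values, one per individual.
    In an interactive system this callback would present the candidates
    visually and return the user's ratings; here it is just a function.
    """
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.0, 1.0, size=(pop_size, dim))
    for _ in range(generations):
        fitness = np.asarray(score_callback(pop))
        order = np.argsort(-fitness)
        parents = pop[order[: pop_size // 2]]           # selection
        moms = parents[rng.integers(0, len(parents), pop_size)]
        dads = parents[rng.integers(0, len(parents), pop_size)]
        mask = rng.random((pop_size, dim)) < 0.5         # uniform crossover
        pop = np.where(mask, moms, dads)
        pop += rng.normal(scale=0.05, size=pop.shape)    # mutation
        pop = np.clip(pop, 0.0, 1.0)
    return pop[np.argmax(score_callback(pop))]

# Stand-in for the user: prefer candidates close to a hidden target setting.
target = np.linspace(0.1, 0.9, 8)
best = interactive_ga(lambda pop: -np.linalg.norm(pop - target, axis=1))
print(best.round(2))
```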

Visualizing Cosmological Time

Abstract
Time is a critical aspect of visualization systems that invoke dynamic simulations and animation. Dealing with time at the scales required to conceptualize astrophysical and cosmological data, however, introduces specialized problems that require unique approaches. In this paper, we extend our previous investigations on interactive visualization across extremely large scale ranges of space to incorporate dynamical processes with very large scale ranges of time. We focus on several issues: time scales that are too short or too long to animate in real time, those needing complex adjustment relative to the scale of space, time simulations that involve the constant finite velocity of light (special relativity) in an essential way, and those that depend upon the dynamics of the coordinate system of the universe itself (general relativity). We conclude that a basic strategy for time scaling should ordinarily track the scaling of space chosen for a particular problem; e.g., if we are adjusting the interactive space scale, we scale time in a similar way. At cosmological scales, this has the interesting consequence that the time scale adjusts to the size of each era of the universe. If we make a single tick of the viewer’s clock correspond to an enormous time when viewing an enormous space, we see motions in viewer’s time of increasingly larger, and usually appropriate, scales. Adding interactive time-scale controls then permits the user to switch the focus of attention among animations with distinctly different time features within a single spatial scale. Objects may have an entire time hierarchy of alternate icons, with different representations for different time-step scales, exactly analogous to the choice of spatial level-of-detail models.
Andrew J. Hanson, Chi-Wing Fu, Eric A. Wernert

Component-based Intelligent Visualization

Abstract
In this paper we propose a visualization system architecture combining component technology with multi-agent technology. Component technology is used as a development platform on which visualization modules as well as agents are implemented as reusable software components. The agents in our approach are used to automatically satisfy individual user demands and to dynamically adapt to changing system loads and different hardware configurations.
H. Hagen, H. Barthel, A. Ebert, M. Bender

Backmatter
