
About this book

This collection of peer-reviewed conference papers provides comprehensive coverage of cutting-edge research in topological approaches to data analysis and visualization. It encompasses the full range of new algorithms and insights, including fast homology computation, comparative analysis of simplification techniques, and key applications in materials and medical science. The volume also features material on core research challenges such as the representation of large and complex datasets and integrating numerical methods with robust combinatorial algorithms.

Reflecting the focus of the TopoInVis 2013 conference, the contributions evince the progress currently being made on finding experimental solutions to open problems in the area. They provide an inclusive snapshot of state-of-the-art research that enables researchers to keep abreast of the latest developments and provide a foundation for future progress. With papers by some of the world’s leading experts in topological techniques, this volume is a major contribution to the literature in a field of growing importance with applications in disciplines that range from engineering to medicine.

Table of Contents


Robust Topological Analysis


Robust Detection of Singularities in Vector Fields

Recent advances in computational science enable the creation of massive datasets of ever increasing resolution and complexity. Dealing effectively with such data requires new analysis techniques that are provably robust and that generate reproducible results on any machine. In this context, combinatorial methods become particularly attractive, as they are not sensitive to numerical instabilities or the details of a particular implementation. We introduce a robust method for detecting singularities in vector fields. We establish, in combinatorial terms, necessary and sufficient conditions for the existence of a critical point in a cell of a simplicial mesh for a large class of interpolation functions. These conditions are entirely local and lead to a provably consistent and practical algorithm to identify cells containing singularities.
Harsh Bhatia, Attila Gyulassy, Hao Wang, Peer-Timo Bremer, Valerio Pascucci
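
The combinatorial condition has a simple special case that can be sketched in code: under linear interpolation, a simplex contains a zero of the field exactly when the zero vector lies in the convex hull of the vertex vectors. The snippet below is an illustration of that piecewise-linear case only, not the authors' general algorithm; it phrases convex-hull membership as a linear-programming feasibility test via SciPy.

```python
import numpy as np
from scipy.optimize import linprog

def simplex_contains_zero(vertex_vectors):
    """Piecewise-linear case: the interpolated field vanishes inside the
    simplex iff 0 is a convex combination of the vertex vectors, i.e.
    there exist lambda >= 0 with sum(lambda) = 1 and V^T lambda = 0."""
    V = np.asarray(vertex_vectors, dtype=float)   # shape: (n_vertices, dim)
    n, d = V.shape
    A_eq = np.vstack([V.T, np.ones(n)])           # d zero rows + convexity row
    b_eq = np.append(np.zeros(d), 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return bool(res.success)

# Vertex vectors that surround the origin -> the triangle contains a zero.
print(simplex_contains_zero([[1, 0], [-1, 1], [-1, -1]]))   # True
# All vertex vectors point rightwards -> no zero inside.
print(simplex_contains_zero([[1, 0], [1, 1], [1, -1]]))     # False
```

Note that this sketch treats a zero on the simplex boundary as "inside"; a robust implementation must handle such degeneracies consistently across neighboring cells.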

Interpreting Feature Tracking Through the Lens of Robustness

A key challenge in the study of time-varying vector fields is to resolve the correspondences between features in successive time steps and to analyze the dynamic behaviors of such features, so-called feature tracking. Commonly tracked features, such as volumes, areas, contours, boundaries, vortices, shock waves and critical points, represent interesting properties or structures of the data. Recently, the topological notion of robustness, a relative of persistent homology, has been introduced to quantify the stability of critical points. Intuitively, the robustness of a critical point is the minimum amount of perturbation necessary to cancel it. In this chapter, we offer a fresh interpretation of the notion of feature tracking, in particular critical point tracking, through the lens of robustness. We infer correspondences between critical points based on their closeness in stability, measured by robustness, instead of just their spatial proximity within the domain. We prove formally that robustness helps us understand the sampling conditions under which we can resolve the correspondence problem based on region overlap techniques, and the uniqueness and uncertainty associated with such techniques. These conditions also give a theoretical basis for visualizing the piecewise linear realizations of critical point trajectories over time.
Primoz Skraba, Bei Wang
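
As a toy illustration of the idea (not the chapter's actual inference procedure), one can build a correspondence cost that mixes spatial distance with disagreement in robustness values and solve it as an assignment problem. The weighting parameter `alpha` is a hypothetical knob introduced here for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_critical_points(pts_a, rob_a, pts_b, rob_b, alpha=1.0):
    """Match critical points across two time steps by minimizing a cost
    that combines spatial distance with the difference in robustness."""
    pts_a = np.asarray(pts_a, dtype=float)
    pts_b = np.asarray(pts_b, dtype=float)
    # Pairwise Euclidean distances between points of the two time steps.
    dist = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    # Pairwise disagreement in robustness values.
    rob = np.abs(np.subtract.outer(np.asarray(rob_a, dtype=float),
                                   np.asarray(rob_b, dtype=float)))
    rows, cols = linear_sum_assignment(dist + alpha * rob)
    return list(zip(rows.tolist(), cols.tolist()))
```

For instance, two critical points with robustness 2.0 and 0.1 at time t are matched to the nearby points with robustness 1.9 and 0.2 at time t+1, because both the spatial and the stability terms agree on the pairing.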

Simplification of Morse Decompositions Using Morse Set Mergers

A common problem of vector field topology algorithms is the large number of the resulting topological features. This chapter describes a method to simplify Morse decompositions by iteratively merging pairs of Morse sets that are adjacent in the Morse Connection Graph (MCG). When Morse sets A and B are merged, they are replaced by a single Morse set that can be thought of as the union of A, B and all trajectories connecting A and B. Pairs of Morse sets to be merged can be picked based on a variety of criteria. For example, one can restrict selection to pairs whose merger results in a topologically simple Morse set, and give preference to mergers leading to small Morse sets.
Levente Sipeki, Andrzej Szymczak
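
A minimal sketch of such a merger on an MCG stored as an adjacency map might look as follows. This is an illustration only: it assumes the only connection between the two Morse sets is the direct arc between them, and it does not implement the chapter's selection criteria.

```python
def merge_morse_sets(mcg, a, b):
    """Merge two adjacent Morse sets in a Morse Connection Graph.
    `mcg` maps each node to the set of its successors (directed arcs).
    The merged node inherits the remaining in- and out-arcs of a and b;
    arcs between a and b become internal and disappear."""
    merged = a + "+" + b                      # label for the new Morse set
    succ = (mcg[a] | mcg[b]) - {a, b}         # outgoing arcs of the union
    new = {}
    for node, outs in mcg.items():
        if node in (a, b):
            continue
        outs = set(outs)
        if outs & {a, b}:                     # redirect incoming arcs
            outs = (outs - {a, b}) | {merged}
        new[node] = outs
    new[merged] = succ
    return new

g = {"A": {"B"}, "B": {"C"}, "C": set(), "D": {"A"}}
print(merge_morse_sets(g, "A", "B"))
# {'C': set(), 'D': {'A+B'}, 'A+B': {'C'}}
```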

Toward the Extraction of Saddle Periodic Orbits

Saddle periodic orbits are an essential and stable part of the topological skeleton of a 3D vector field. Nevertheless, there is currently no efficient algorithm to robustly extract these features. In this chapter, we present a novel technique to extract saddle periodic orbits. Exploiting the analytic properties of such an orbit, we propose a scalar measure based on the finite-time Lyapunov exponent (FTLE) that indicates its presence. Using persistent homology, we can then extract the robust cycles of this field. These cycles thereby represent the saddle periodic orbits of the given vector field. We discuss the different existing FTLE approximation schemes regarding their applicability to this specific problem and propose an adapted version of FTLE called Normalized Velocity Separation. Finally, we evaluate our method using simple analytic vector field data.
Jens Kasten, Jan Reininghaus, Wieland Reich, Gerik Scheuermann

Efficient Computation of Topology


Computational Topology via Functional Programming: A Baseline Analysis

Computational topology is of interest in visualization because it summarizes useful global properties of a dataset. The greatest need for such abstractions is in massive data, and to date most implementations have opted for low-level languages to obtain space- and time-efficient implementations. Such code is complex, and is becoming even more so with the need to operate efficiently on a range of parallel hardware. Motivated by rapid advances in functional programming and compiler technology, this chapter investigates whether a shift in programming paradigm could reduce the complexity of the task. Focusing on contour tree generation as a case study, the chapter makes three contributions. First, it sets out the development of a concise functional implementation of the algorithm. Second, it shows that the sequential functional code can be tuned to match the performance of an imperative implementation, albeit at some cost in code clarity. Third, it outlines new possibilities for parallelisation using functional tools, and notes similarities between functional abstractions and emerging ideas in extreme-scale visualization.
David Duke, Hamish Carr

Distributed Contour Trees

Topological techniques provide robust tools for data analysis. They are used, for example, for feature extraction, for data de-noising, and for comparison of data sets. This chapter concerns contour trees, a topological descriptor that records the connectivity of the isosurfaces of scalar functions. These trees are fundamental to analysis and visualization of physical phenomena modeled by real-valued measurements.
We study the parallel analysis of contour trees. After describing a particular representation of a contour tree, called local–global representation, we illustrate how different problems that rely on contour trees can be solved in parallel with minimal communication.
Dmitriy Morozov, Gunther H. Weber
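
The kind of merge-tree construction that underlies contour trees can be sketched with a standard union-find sweep. The snippet below is a generic sequential illustration, not the chapter's local–global distributed representation: it sweeps vertices in increasing function value and records a merge event whenever a vertex joins two previously separate sublevel-set components.

```python
import numpy as np

def merge_tree_edges(values, edges):
    """Merge (join) tree of a scalar function on a graph: sweep vertices
    in increasing value order; when a vertex connects two components,
    record (minimum of branch 1, minimum of branch 2, merge vertex)."""
    parent, rep_min = {}, {}            # union-find forest, component minima
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    adj = {i: set() for i in range(len(values))}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    tree, processed = [], set()
    for v in np.argsort(values):
        v = int(v)
        parent[v] = v
        roots = {find(u) for u in adj[v] & processed}
        for r in roots:
            parent[r] = v               # attach each component to v
        if len(roots) >= 2:             # v merges two branches (a saddle)
            lo, hi = sorted((rep_min[r] for r in roots),
                            key=lambda i: values[i])[:2]
            tree.append((lo, hi, v))
        rep_min[v] = min([v] + [rep_min[r] for r in roots],
                         key=lambda i: values[i])
        processed.add(v)
    return tree
```

On the path graph 0–1–2–3 with values [0, 2, 1, 3], the two minima (vertices 0 and 2) merge at the saddle vertex 1, so the result is `[(0, 2, 1)]`.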

Clear and Compress: Computing Persistent Homology in Chunks

We present a parallel algorithm for computing the persistent homology of a filtered chain complex. Our approach differs from the commonly used reduction algorithm by first computing persistence pairs within local chunks, then simplifying the unpaired columns, and finally applying standard reduction on the simplified matrix. The approach generalizes a technique by Günther et al., which uses discrete Morse Theory to compute persistence; we derive the same worst-case complexity bound in a more general context. The algorithm employs several practical optimization techniques, which are of independent interest. Our sequential implementation of the algorithm is competitive with state-of-the-art methods, and we further improve the performance through parallel computation.
Ulrich Bauer, Michael Kerber, Jan Reininghaus
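
For reference, the commonly used reduction algorithm that the chunk approach builds on can be stated in a few lines over Z/2 coefficients. This is a plain sequential sketch without the chunking, clearing, or compression optimizations the chapter contributes.

```python
def reduce_boundary_matrix(columns):
    """Standard persistence reduction over Z/2. Each column is given as
    the set of row indices of its non-zero entries; repeatedly add the
    column that shares the same lowest index until lows are unique."""
    low_to_col = {}                      # lowest row index -> column owning it
    pairs = []                           # (birth index, death index)
    cols = [set(c) for c in columns]
    for j, col in enumerate(cols):
        while col and max(col) in low_to_col:
            col ^= cols[low_to_col[max(col)]]   # Z/2 column addition
        if col:                          # column is non-zero: it kills max(col)
            low_to_col[max(col)] = j
            pairs.append((max(col), j))
    return pairs

# Filtration of a triangle: vertices 0,1,2; edges 3=(0,1), 4=(1,2),
# 5=(0,2); triangle 6. Boundary columns list the faces of each cell.
boundary = [[], [], [], [0, 1], [1, 2], [0, 2], [3, 4, 5]]
print(reduce_boundary_matrix(boundary))   # [(1, 3), (2, 4), (5, 6)]
```

The pairs read as: vertex 1 dies when edge 3 connects it to vertex 0, vertex 2 dies at edge 4, and the loop created by edge 5 dies when triangle 6 fills it.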

Parallel Computation of Nearly Recurrent Components of Piecewise Constant Vector Fields

We describe a simple way to parallelize the algorithm for computing Morse decompositions and nearly recurrent sets for piecewise constant vector fields. In this chapter, we focus on the most computationally expensive component of the algorithm, namely refinement of the transition graph, which often takes up over 90 % of the running time.
The original implementation of the refinement algorithm refines nodes of the graph one at a time, in an arbitrary order. As a node is refined, arcs out of and into the resulting node are updated to preserve the key property of the graph, i.e. its ability to represent every trajectory of the vector field as a path in the graph. A significant portion of time is spent on updating and maintaining the integrity of the graph for each node that undergoes refinement. This approach is attractive and elegant since it allows one to refine the nodes in arbitrary order and to an arbitrary depth while guaranteeing the correctness of the graph. However, its performance is suboptimal for most transition graph refinement scenarios, not only because of the cost of maintaining the integrity of the graph for each node refinement operation, but also because of uncontrollable memory access patterns.
The method described in this chapter is tailored for the special case of nodes of the graph refined to the same depth, which is sufficient for most applications described in prior work. The idea is to stream through the set of arcs of the coarse graph to generate a compact bitmap based representation of the set of arcs of the refined graph. In the subsequent pass over the bitmap, the refined transition graph’s arcs are constructed from scratch, using only arc additions. The simplicity of the two passes makes them easy to parallelize and results in better cache performance. The bitmap building pass can be offloaded to the GPU to further increase the performance.
Nicholas Brunhart-Lupo, Andrzej Szymczak
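
The two-pass structure can be illustrated generically. This is a structural sketch only: the real geometric transition test between refined cells is replaced by a caller-supplied `connects` predicate, and the bitmap is kept as a plain boolean array rather than packed bits.

```python
import numpy as np

def refine_graph(coarse_arcs, n_coarse, k, connects):
    """Two-pass, bitmap-based refinement. Each coarse node splits into k
    sub-nodes; a refined arc can only connect sub-nodes of coarse nodes
    joined by a coarse arc. Pass 1 streams over coarse arcs and sets
    bits; pass 2 reads the bitmap and emits arcs using additions only."""
    n = n_coarse * k
    bitmap = np.zeros(n * n, dtype=bool)       # dense here for clarity
    for a, b in coarse_arcs:                   # pass 1: mark refined arcs
        for i in range(a * k, a * k + k):
            for j in range(b * k, b * k + k):
                if connects(i, j):
                    bitmap[i * n + j] = True
    arcs = [[] for _ in range(n)]              # pass 2: build adjacency
    for idx in np.flatnonzero(bitmap):
        arcs[idx // n].append(int(idx % n))
    return arcs
```

Because each pass is a simple streaming scan, both parallelize naturally, and pass 1 is the candidate for GPU offloading described above.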

Simplification, Approximation, and Distance Measures


Notes on the Simplification of the Morse-Smale Complex

The Morse-Smale complex can be either explicitly or implicitly represented. Depending on the type of representation, the simplification of the Morse-Smale complex works differently. In the explicit representation, the Morse-Smale complex is directly simplified by explicitly reconnecting the critical points during the simplification. In the implicit representation, on the other hand, the Morse-Smale complex is given by a combinatorial gradient field. In this setting, the simplification changes the combinatorial flow, which yields an indirect simplification of the Morse-Smale complex. The topological complexity of the Morse-Smale complex is reduced in both representations. However, the simplifications generally yield different results. In this chapter, we emphasize properties of the two representations that cause these differences. We also provide a complexity analysis of the two schemes with respect to running time and memory consumption.
David Günther, Jan Reininghaus, Hans-Peter Seidel, Tino Weinkauf

Measuring the Distance Between Merge Trees

Merge trees represent the topology of scalar functions. To assess the topological similarity of functions, one can compare their merge trees. To do so, one needs a notion of a distance between merge trees, which we define. We provide examples of using our merge tree distance and compare this new measure to other ways used to characterize topological similarity (bottleneck distance for persistence diagrams) and numerical difference (L∞-norm of the difference between functions).
Kenes Beketayev, Damir Yeliussizov, Dmitriy Morozov, Gunther H. Weber, Bernd Hamann
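
For small diagrams, the bottleneck distance used as a point of comparison can be computed by brute force over matchings, with unmatched points sent to the diagonal. This is an illustrative implementation, exponential in the number of points, not a practical algorithm.

```python
from itertools import permutations

def bottleneck(diag_a, diag_b):
    """Brute-force bottleneck distance between two small persistence
    diagrams, given as lists of (birth, death) pairs. A point may be
    matched to the diagonal at cost persistence / 2."""
    def cost(p, q):                       # L-infinity distance of two points
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))
    def diag_cost(p):                     # distance of a point to the diagonal
        return (p[1] - p[0]) / 2.0
    n = max(len(diag_a), len(diag_b))
    a = list(diag_a) + [None] * (n - len(diag_a))   # None = diagonal slot
    b = list(diag_b) + [None] * (n - len(diag_b))
    best = float("inf")
    for perm in permutations(range(n)):
        m = 0.0
        for i, j in enumerate(perm):
            p, q = a[i], b[j]
            if p is None and q is None:
                c = 0.0
            elif p is None:
                c = diag_cost(q)
            elif q is None:
                c = diag_cost(p)
            else:                         # match directly, or both to diagonal
                c = min(cost(p, q), max(diag_cost(p), diag_cost(q)))
            m = max(m, c)
        best = min(best, m)
    return best
```

For example, the diagrams {(0, 10)} and {(1, 9)} are at bottleneck distance 1, and a lone feature (0, 2) compared against the empty diagram costs its half-persistence, 1.0.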

Topological Integrity for Dynamic Spline Models During Visualization of Big Data

In computer graphics and scientific visualization, B-splines are common geometric representations. A typical display method is to render a piecewise linear (PL) approximation that lies within a prescribed tolerance of the curve. In dynamic applications it is necessary to perturb specified points on the displayed curve. The distance between the perturbed PL structure and the perturbed curve it represents can change significantly, possibly changing the underlying topology and introducing unwanted artifacts to the display. We give a strategy to perturb the curve smoothly and keep track of the error introduced by perturbations. This allows us to refine the PL curve when appropriate and avoid spurious topological changes. This work is motivated by applications to visualization of Big Data from simulations on high performance computing architectures.
Hugh P. Cassidy, Thomas J. Peters, Horea Ilies, Kirk E. Jordan
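
A simple adaptive scheme conveys the flavor of tolerance-driven PL display. This sketch is a stand-in only: it uses de Casteljau evaluation of a single Bézier segment rather than a full B-spline, and it estimates error by sampling segment midpoints, which is a heuristic rather than the certified bound needed for topological guarantees.

```python
import numpy as np

def bezier_point(ctrl, t):
    """Evaluate a Bezier curve at parameter t by de Casteljau's algorithm."""
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def pl_approximation(ctrl, tol):
    """Adaptively refine a piecewise-linear approximation of the curve
    until the sampled midpoint deviation of every segment is below tol."""
    params = [0.0, 1.0]
    while True:
        refined, split = [params[0]], False
        for a, b in zip(params, params[1:]):
            mid = 0.5 * (a + b)
            chord_mid = 0.5 * (bezier_point(ctrl, a) + bezier_point(ctrl, b))
            if np.linalg.norm(bezier_point(ctrl, mid) - chord_mid) > tol:
                refined.append(mid)       # deviation too large: subdivide
                split = True
            refined.append(b)
        params = refined
        if not split:
            return [bezier_point(ctrl, t) for t in params]
```

After perturbing control points, re-running such a refinement on the affected segments is what keeps the displayed PL curve within tolerance of the perturbed curve.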

Time-Dependent Analysis


A Comparison of Finite-Time and Finite-Size Lyapunov Exponents

Finite-time and finite-size Lyapunov exponents are related concepts that have been used for the purpose of identifying transport structures in time-dependent flow. The preference for one or the other concept seems to be based more on tradition within a scientific community than on proven advantages. In this study, we demonstrate that the two concepts can produce highly similar visualizations, obtained by maximizing a simple similarity measure. Furthermore, we show that results depend crucially on the numerical implementation of the two concepts.
Ronald Peikert, Armin Pobitzer, Filip Sadlo, Benjamin Schindler
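
A minimal FTLE computation on the standard double-gyre benchmark illustrates the finite-time concept. All numerical choices here, fixed-step RK4 integration and central differences for the flow-map gradient, are assumptions of this sketch, and, as the chapter argues, such implementation choices matter.

```python
import numpy as np

def double_gyre(t, p, A=0.1, eps=0.25, om=2 * np.pi / 10):
    """Time-dependent double-gyre velocity field on [0,2] x [0,1]."""
    x, y = p
    a = eps * np.sin(om * t)
    b = 1 - 2 * a
    f = a * x**2 + b * x
    dfdx = 2 * a * x + b
    return np.array([-np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y),
                      np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx])

def flow_map(p, t0, T, steps=50):
    """Advect one tracer with fixed-step RK4: phi_{t0}^{t0+T}(p)."""
    p = np.asarray(p, dtype=float)
    dt, t = T / steps, t0
    for _ in range(steps):
        k1 = double_gyre(t, p)
        k2 = double_gyre(t + dt / 2, p + dt / 2 * k1)
        k3 = double_gyre(t + dt / 2, p + dt / 2 * k2)
        k4 = double_gyre(t + dt, p + dt * k3)
        p = p + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return p

def ftle(p, t0, T, h=1e-4):
    """FTLE = log(max eigenvalue of the Cauchy-Green tensor) / (2|T|),
    with the flow-map gradient estimated by central differences."""
    p = np.asarray(p, dtype=float)
    cols = [(flow_map(p + d, t0, T) - flow_map(p - d, t0, T)) / (2 * h)
            for d in (np.array([h, 0.0]), np.array([0.0, h]))]
    J = np.column_stack(cols)
    cg = J.T @ J                          # right Cauchy-Green tensor
    return float(np.log(np.linalg.eigvalsh(cg).max()) / (2 * abs(T)))
```

A finite-size variant would instead grow the separation of tracer pairs until a prescribed size is reached; the scaffolding above (tracer advection plus a separation measure) is shared by both.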

Development of an Efficient and Flexible Pipeline for Lagrangian Coherent Structure Computation

The computation of Lagrangian coherent structures (LCS) has become a standard tool for the analysis of advective transport in unsteady flow applications. LCS identification is primarily accomplished by evaluating measures based on the finite-time Cauchy Green (CG) strain tensor over the fluid domain. Sampling the CG tensor requires the advection of large numbers of fluid tracers, which can be computationally intensive but presents a large degree of data parallelism. Processing can be specialized to parallel computing architectures, but there is also a compelling need for robust and flexible implementations for end users. Specifically, code is desirable that can accommodate analysis of wide-ranging fluid mechanics applications, uses a modular structure that is easily extended or modified, and facilitates visualization. We discuss the use of Visualization Toolkit (VTK) libraries as a foundation for object-oriented LCS computation, and how this framework can facilitate integration of LCS computation into flow visualization software such as ParaView. We also discuss the development of CUDA GPU kernels for efficient parallel spatial sampling of the flow map, including optimizing these kernels for better utilization.
Siavash Ameli, Yogin Desai, Shawn C. Shadden

Topological Features in Time-Dependent Advection-Diffusion Flow

Concepts from vector field topology have been successfully applied to a wide range of phenomena so far—typically to problems involving the transport of a quantity, such as in flow fields, or to problems concerning the instantaneous structure, such as in the case of electric fields. However, transport of quantities in time-dependent flows has so far been topologically analyzed in terms of advection only, restricting the approach to quantities that are solely governed by advection. Nevertheless, the majority of quantities transported in flows undergo simultaneous diffusion, leading to advection-diffusion problems. By extending topology-based concepts with diffusion, we provide an approach for visualizing the mechanisms in advection-diffusion flow. This helps answer many typical questions in science and engineering that have so far not been amenable to adequate visualization. We exemplify the utility of our technique by applying it to simulation data of advection-diffusion problems from different fields.
Filip Sadlo, Grzegorz K. Karch, Thomas Ertl

Applications

Definition, Extraction, and Validation of Pore Structures in Porous Materials

An intuitive and sparse representation of the void space of porous materials supports the efficient analysis and visualization of interesting qualitative and quantitative parameters of such materials. We introduce definitions of the elements of this void space, here called pore space, based on its distance function, and present methods to extract these elements using the extremal structures of the distance function. The presented methods are implemented by an image-processing pipeline that determines pore centers, pore paths and pore constrictions. These pore space elements build a graph that represents the topology of the pore space in a compact way. The representations we derive from μCT image data of realistic soil specimens enable the computation of many statistical parameters and, thus, provide a basis for further visual analysis and application-specific developments. We introduced parts of our pipeline in previous work. In this chapter, we present additional details and compare our results with the analytic computation of the pore space elements for a sphere packing in order to show the correctness of our graph computation.
Ulrike Homberg, Daniel Baum, Alexander Wiebel, Steffen Prohaska, Hans-Christian Hege
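
A much-simplified stand-in for the distance-function-based extraction can be written with standard image-processing tools: take the Euclidean distance transform of the void space and treat its local maxima as candidate pore centers. The chapter's pipeline extracts the full extremal structure, including pore paths and constrictions, which this sketch omits.

```python
import numpy as np
from scipy import ndimage

def pore_centers(void_mask):
    """Candidate pore centers of a binary void-space mask: local maxima
    of its Euclidean distance transform (distance to the material)."""
    dist = ndimage.distance_transform_edt(void_mask)
    # A voxel is a candidate center if no 3x3(x3) neighbor is farther
    # from the material than it is, and it lies in the void space.
    local_max = (dist == ndimage.maximum_filter(dist, size=3)) & (dist > 0)
    return np.argwhere(local_max), dist

# Two separate square voids in a 2D "material" slice.
mask = np.zeros((20, 20), dtype=bool)
mask[2:8, 2:8] = True
mask[12:18, 12:18] = True
centers, dist = pore_centers(mask)
```

Connecting such centers along ridge lines of the distance function, with minima along those lines as constrictions, yields the kind of pore graph described above.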

Visualization of Two-Dimensional Symmetric Positive Definite Tensor Fields Using the Heat Kernel Signature

We propose a method for visualizing two-dimensional symmetric positive definite tensor fields using the Heat Kernel Signature (HKS). The HKS is derived from the heat kernel and was originally introduced as an isometry invariant shape signature. Each positive definite tensor field defines a Riemannian manifold by considering the tensor field as a Riemannian metric. On this Riemannian manifold we can apply the definition of the HKS. The resulting scalar quantity is used for the visualization of tensor fields. The HKS is closely related to the Gaussian curvature of the Riemannian manifold and the time parameter of the heat kernel allows a multiscale analysis in a natural way. In this way, the HKS represents field related scale space properties, enabling a level of detail analysis of tensor fields. This makes the HKS an interesting new scalar quantity for tensor fields, which differs significantly from usual tensor invariants like the trace or the determinant. A method for visualization and a numerical realization of the HKS for tensor fields are proposed in this chapter. To validate the approach we apply it to some illustrating simple examples such as isolated critical points and to a medical diffusion tensor data set.
Valentin Zobel, Jan Reininghaus, Ingrid Hotz
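
The HKS has a compact spectral definition, HKS(x, t) = Σᵢ exp(−λᵢ t) φᵢ(x)², where λᵢ and φᵢ are the eigenvalues and eigenfunctions of the Laplacian. The sketch below evaluates it for a small graph Laplacian rather than the Riemannian Laplacian induced by a tensor field, purely to show the mechanics of the formula.

```python
import numpy as np

def heat_kernel_signature(L, times):
    """HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)^2, computed from
    the eigen-decomposition of a symmetric (graph) Laplacian L.
    Returns an array of shape (n_points, n_times)."""
    lam, phi = np.linalg.eigh(L)           # eigenvalues, eigenvectors
    return np.array([[np.sum(np.exp(-lam * t) * phi[x] ** 2)
                      for t in times] for x in range(L.shape[0])])

# Laplacian of the path graph 0 - 1 - 2.
L = np.array([[1., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 1.]])
hks = heat_kernel_signature(L, [0.0, 1.0])
```

At t = 0 the signature is 1 everywhere (the eigenvectors are orthonormal), and increasing t averages heat over ever larger neighborhoods, which is the multiscale behavior exploited above.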

Topological Features in Glyph-Based Corotation Visualization

This chapter introduces a novel method for vortex detection in flow fields based on the corotation of line segments and glyph rendering. The corotation measure is defined as a point-symmetric scalar function on a sphere, suitable for direct representation in the form of a three-dimensional glyph. Appropriate placement of these glyphs in the domain of a flow field makes it possible to depict vortical features present in the flow. We demonstrate how topological analysis of this novel glyph-based representation of vortex features can reveal vortex characteristics that lie beyond the capabilities of visualization techniques that consider vortex direction and magnitude information only.
Sohail Shafii, Harald Obermaier, Bernd Hamann, Kenneth I. Joy

