About this book

For some time, medicine has been an important driver for the development of data processing and visualization techniques. Improved technology offers the capacity to generate larger and more complex data sets related to imaging and simulation. This, in turn, creates the need for more effective visualization tools for medical practitioners to interpret and utilize data in meaningful ways. The first edition of Visualization in Medicine and Life Sciences (VMLS) emerged from a workshop convened to explore the significant data visualization challenges created by emerging technologies in the life sciences. The workshop and the book addressed questions of whether medical data visualization approaches can be devised or improved to meet these challenges, with the promise of ultimately being adopted by medical experts. Visualization in Medicine and Life Sciences II follows the second international VMLS workshop, held in Bremerhaven, Germany, in July 2009. Internationally renowned experts from the visualization and driving application areas came together for this second workshop. The book presents peer-reviewed research and survey papers which document and discuss the progress made, explore new approaches to data visualization, and assess new challenges and research directions.

Table of Contents


Feature Extraction


Discrete Distortion for 3D Data Analysis

We investigate a morphological approach to the analysis and understanding of three-dimensional scalar fields, and we consider applications to 3D medical and molecular images as examples. We consider a discrete model of the scalar field obtained by discretizing its 3D domain into a tetrahedral mesh. In particular, our meshes correspond to approximations at uniform or variable resolution extracted from a multi-resolution model of the 3D scalar field, which we call a hierarchy of diamonds. We analyze the images based on the concept of discrete distortion, which we introduced in [26], and on segmentations based on Morse theory. Discrete distortion is defined by considering the graph of the discrete 3D field, which is a tetrahedral hypersurface in R^4, and measuring the distortion of the transformation which maps the tetrahedral mesh discretizing the scalar field domain into the mesh representing its graph in R^4. We describe a segmentation algorithm that produces Morse decompositions of a 3D scalar field using a watershed approach, and we apply it to 3D images by using both intensity and discrete distortion as the scalar field. We present experimental results examining the influence of resolution on distortion computation. In particular, we show that the salient features of the distortion field appear prominently in lower-resolution approximations of the dataset.
Leila De Floriani, Federico Iuricich, Paola Magillo, Mohammed Mostefa Mesmoudi, Kenneth Weiss
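The watershed-style Morse decomposition mentioned in the abstract can be illustrated with a minimal sketch (this is not the authors' code, and it operates on a generic vertex graph rather than a tetrahedral mesh): each vertex is assigned to the local minimum reached by steepest descent of the scalar field, so the resulting regions approximate the basins of a descending Morse decomposition.

```python
# Illustrative sketch: watershed-style segmentation of a discrete scalar
# field by steepest descent.  Vertices that descend to the same local
# minimum form one region of the decomposition.

def steepest_descent_labels(values, neighbors):
    """values: dict vertex -> scalar; neighbors: dict vertex -> list of
    adjacent vertices.  Returns dict vertex -> local minimum reached."""
    labels = {}

    def descend(v):
        if v in labels:
            return labels[v]
        # neighbor with the lowest value, i.e. the steepest-descent step
        best = min(neighbors[v], key=lambda u: values[u], default=v)
        if best != v and values[best] < values[v]:
            labels[v] = descend(best)
        else:
            labels[v] = v          # v is a local minimum
        return labels[v]

    for v in values:
        descend(v)
    return labels

# Tiny 1D example: two basins separated by a ridge at vertex 3
vals = {0: 2.0, 1: 1.0, 2: 0.0, 3: 5.0, 4: 1.0, 5: 0.5}
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(steepest_descent_labels(vals, nbrs))
```

In the chapter's setting the scalar value per vertex would be either the image intensity or the discrete distortion, and adjacency comes from the tetrahedral mesh.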

Interactive Visualization – A Key Prerequisite for Reconstruction and Analysis of Anatomically Realistic Neural Networks

Recent progress in large-volume microscopy, tissue staining, image processing methods, and 3D anatomy reconstruction allows neuroscientists to extract previously inaccessible anatomical data with high precision. For instance, determination of neuron numbers, 3D distributions, and 3D axonal and dendritic branching patterns supports recently started efforts to reconstruct anatomically realistic network models of many thousands of neurons. Such models aid in understanding neural network structure and, by numerically simulating electro-physiological signaling, also in revealing their function. We illustrate the impact of visual computing on neurobiology using the example of important steps that are required for the reconstruction of large neural networks. In our case, the network to be reconstructed represents a single cortical column in the rat brain, which processes sensory information from its associated facial whisker hair. We demonstrate how analysis and reconstruction tasks, such as counting neuron somata and tracing neuronal branches, have been incrementally accelerated, finally leading to efficiency gains of orders of magnitude. We also show how steps that are difficult to automate can now be solved interactively with visual support. Additionally, we illustrate how visualization techniques have aided computer scientists during algorithm development. Finally, we present visual analysis techniques allowing neuroscientists to explore the morphology and function of 3D neural networks. Altogether, we demonstrate that visual computing techniques make an essential difference in terms of scientific output, both qualitatively, i.e., whether particular
Vincent J. Dercksen, Marcel Oberlaender, Bert Sakmann, Hans-Christian Hege

MRI-Based Visualisation and Quantification of Rheumatoid and Psoriatic Arthritis of the Knee

The overall goal of this project is to develop an application to quantify the level of synovial inflammation in patients suffering from Rheumatoid or Psoriatic Arthritis, based on automated synovium segmentation and 3D visualisation of MRI scan data. This paper discusses the direction we have taken during the development of the visualization components of the application and gives an overview of the project in general. Current software methods of quantifying the enhancement of inflamed synovial tissue by gadolinium contrast have several limitations. The clinician is required to classify individual regions of interest on a slice-by-slice basis, which is a time-consuming process and suffers from user subjectivity. We propose a method of automating this process by reconstructing the slice information and performing quantification in three dimensions.
Ben Donlon, Douglas Veale, Patrick Brennan, Robert Gibney, Hamish Carr, Louise Rainford, ChinTeck Ng, Eliza Pontifex, Jonathan McNulty, Oliver FitzGerald, John Ryan

An Application for the Visualization and Quantification of HIV-Associated Lipodystrophy from Magnetic Resonance Imaging Datasets

HIV-associated lipodystrophy is a syndrome characterized by the abnormal distribution or degeneration of the body’s adipose tissue. Since it was first described in 1999, the complications associated with the condition have become a very real problem for many HIV patients using antiretroviral drug treatments. Accurate visualization and quantification of the body’s adipose tissue can aid in both the treatment and monitoring of the condition. Manual segmentation of adipose tissue from MRI data is a challenging and time-consuming problem. It is for this reason that we have developed an application which allows users to automatically segment a sequence of MRI images and render the result in 3D.
Tadhg O’Sullivan, Patrick Brennan, Peter Doran, Paddy Mallon, Stephen J. Eustace, Eoin Kavannagh, Allison Mcgee, Louise Rainford, John Ryan



Semi-Automatic Rough Classification of Multichannel Medical Imaging Data

Rough set theory is an approach to handling vagueness or uncertainty. We propose methods that apply rough set theory in the context of segmentation (or partitioning) of multichannel medical imaging data. We put this approach into a semi-automatic framework, where the user specifies the classes in the data by selecting respective regions in 2D slices. Rough set theory provides the means to compute lower and upper approximations of the classes. The boundary region between the lower and the upper approximations represents the uncertainty of the classification. We present an approach to automatically compute segmentation rules from the rough set classification using a k-means approach. The rule generation removes redundancies, which allows us to enhance the original feature space attributes with a number of further feature and object space attributes. The rules can be transferred from one 2D slice to the entire 3D data set to produce a 3D segmentation result. The result can be refined by the user by interactively adding more samples (from the same or other 2D slices) to the respective classes. Our system allows for a visualization of both the segmentation result and the uncertainty of the individual class representations. The methods can be applied to single- as well as multichannel (or multimodal) imaging data. As a proof of concept, we applied them to medical imaging data with RGB color channels.
Ahmed Elmoasry, Mohamed Sadek Maswadah, Lars Linsen
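The lower and upper approximations central to the abstract above can be sketched in a few lines (an illustrative stdlib-only sketch, not the chapter's implementation): samples with identical attribute tuples form an equivalence class; the lower approximation keeps only equivalence classes lying entirely inside the user-labeled class, the upper approximation keeps every equivalence class that touches it, and their difference is the boundary region expressing classification uncertainty.

```python
# Illustrative sketch of rough-set lower/upper approximations of a class.

from collections import defaultdict

def rough_approximations(samples, labels, target):
    """samples: list of hashable attribute tuples; labels: parallel list of
    class labels; target: the class to approximate."""
    classes = defaultdict(set)            # attribute tuple -> sample indices
    for i, s in enumerate(samples):
        classes[s].add(i)
    target_idx = {i for i, l in enumerate(labels) if l == target}
    lower, upper = set(), set()
    for members in classes.values():
        if members <= target_idx:
            lower |= members              # certainly in the class
        if members & target_idx:
            upper |= members              # possibly in the class
    return lower, upper

# Two samples share the attributes (1, 0) but disagree on the label,
# so both land in the boundary region of class "A".
samples = [(1, 0), (1, 0), (0, 1), (2, 2)]
labels  = ["A",    "B",    "A",    "B"]
lo, up = rough_approximations(samples, labels, "A")
print(lo, up - lo)    # lower approximation and boundary region
```

In the chapter's framework the attribute tuples would be (discretized) multichannel voxel features from the user-selected 2D regions.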

An Evaluation of Peak Finding for DVR Classification of Biological Data

In medicine and the life sciences, volume data are frequently entropic, containing numerous features at different scales as well as significant noise from the scan source. Conventional transfer function approaches for direct volume rendering have difficulty handling such data, resulting in poor classification or undersampled rendering. Peak finding addresses issues in classifying noisy data by explicitly solving for isosurfaces at desired peaks in a transfer function. As a result, one can achieve better classification and visualization with fewer samples and correspondingly higher performance. This paper applies peak finding to several medical and biological data sets, particularly examining its potential in directly rendering unfiltered and unsegmented data.
Aaron Knoll, Rolf Westerteiger, Hans Hagen
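The core idea of peak finding described above can be reduced to a small sketch (simplified and illustrative, not the paper's renderer): rather than compositing every sample through a noisy transfer function, one solves explicitly for the positions where the ray's scalar values cross the isovalues at which the transfer function peaks.

```python
# Illustrative sketch: locate crossings of transfer-function peak isovalues
# along a ray of scalar samples, refining each crossing by linear
# interpolation between adjacent samples.

def peak_crossings(ray_samples, peak_isovalues):
    """ray_samples: scalar values at successive positions t = 0, 1, 2, ...
    Returns a sorted list of (t, isovalue) where the ray crosses a peak."""
    hits = []
    for i in range(len(ray_samples) - 1):
        a, b = ray_samples[i], ray_samples[i + 1]
        for iso in peak_isovalues:
            if (a - iso) * (b - iso) < 0:       # sign change -> crossing
                t = i + (iso - a) / (b - a)     # linear root between samples
                hits.append((t, iso))
    return sorted(hits)

# A noisy ray that rises through the peak isovalue 0.5 exactly once
ray = [0.1, 0.3, 0.2, 0.6, 0.9]
print(peak_crossings(ray, [0.5]))
```

Each hit would then be shaded as an isosurface intersection, which is how fewer samples can still yield a crisp classification.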

Volumes and Shapes


Vessel Visualization with Volume Rendering

Volume rendering allows the direct visualization of scanned volume data and can reveal vessel abnormalities more faithfully. In this overview, we present a pipeline model for direct volume rendering systems that focus on vascular structures. We cover data pre-processing, classification of the volume via transfer functions, and finally rendering the volume in 2D and 3D. For each stage in the pipeline, different techniques are discussed to support the diagnosis of vascular diseases. In addition to various general methods, we present two case studies in which the systems are optimized for two different medical issues. Finally, we discuss current trends in volume rendering and their implications for vessel visualization.
Christoph Kubisch, Sylvia Glaßer, Mathias Neugebauer, Bernhard Preim

Efficient Selection of Representative Views and Navigation Paths for Volume Data Exploration

The visualization of volumetric datasets, quite common in medical image processing, has started to receive attention from other communities, such as science and engineering. The main reason is that it allows scientists to gain important insights into the data. While the datasets are becoming larger and larger, the available computational power does not always keep pace, because the requirement to support low-end PCs or mobile phones increases. As a consequence, selecting an optimal viewpoint that improves user comprehension of the datasets involves time-consuming trial-and-error tasks. In order to facilitate the exploration process, informative viewpoints together with camera paths showing representative information on the model can be determined. In this paper we present a method for representative view selection and path construction, together with some accelerations that make this process extremely fast on a modern GPU.
Eva Monclús, Pere-Pau Vázquez, Isabel Navazo

Feature Preserving Smoothing of Shapes Using Saliency Skeletons

We present a novel method that uses shape skeletons, and associated quantities, for feature-preserving smoothing of digital (black-and-white) binary shapes. We preserve, or smooth out, features based on a saliency measure that relates feature size to local object size, both computed using the shape’s skeleton. Low-saliency convex features (cusps) are smoothed out, and low-saliency concave features (dents) are filled in, respectively, by inflating simplified versions of the shape’s foreground and background skeletons. The method is simple to implement, works in real time, and robustly removes large-scale contour and binary speckle noise while preserving salient features. We demonstrate the method with several examples on datasets from the shape analysis application domain.
Alexandru Telea

Tensor Visualization


Enhanced DTI Tracking with Adaptive Tensor Interpolation

A novel tensor interpolation method is introduced that allows Diffusion Tensor Imaging (DTI) streamlining to overcome low-anisotropy regions and permits branching of trajectories, using information gathered from the neighbourhood of low-anisotropy voxels met during the tracking. The interpolation is performed in Log-Euclidean space and collects directional information in a spherical neighbourhood of the voxel in order to reconstruct a tensor with a higher linear diffusion coefficient than the original. The weight of the contribution of a neighbouring voxel is proportional to its linear diffusion coefficient and inversely proportional to a power of the spatial Euclidean distance between the two voxels. This inverse power law provides our method with robustness against noise. In order to resolve multiple fiber orientations, we divide the neighbourhood of a low-anisotropy voxel into sectors and compute an interpolated tensor in each sector. The tracking then continues along the main eigenvector of the reconstructed tensors.
We test our method on artificial, phantom, and brain data, and compare it with (a) standard streamline tracking, (b) the Tensorlines method, (c) streamline tracking after an interpolation method based on bilateral filtering, and (d) streamline tracking using moving least squares regularisation. The new method is shown to compare favourably with these methods on artificial datasets. The proposed approach makes it possible to explore a DTI dataset to locate singularities, as well as to enhance deterministic tractography techniques. In this way it allows one to immediately obtain results more similar to those provided by more powerful but computationally much more demanding methods that are intrinsically able to resolve crossing fibers, such as probabilistic tracking or high angular resolution diffusion imaging.
Alessandro Crippa, Andrei C. Jalba, Jos B. T. M. Roerdink
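The inverse-power-law weighting described in the abstract can be sketched as follows. This is an illustrative simplification, not the authors' code: it assumes diagonal tensors (for which the Log-Euclidean mean reduces to averaging the logarithms of the eigenvalues) and uses one common definition of the Westin linear diffusion coefficient.

```python
# Illustrative sketch: each neighbour contributes with weight
# w = c_l / d**p, i.e. proportional to its linear diffusion coefficient
# c_l and inversely proportional to a power p of its Euclidean distance d;
# the weighted mean is taken in log space.

import math

def linear_coefficient(eigvals):
    """One common Westin linear coefficient: c_l = (l1 - l2)/(l1 + l2 + l3),
    eigenvalues taken in decreasing order."""
    l1, l2, l3 = sorted(eigvals, reverse=True)
    return (l1 - l2) / (l1 + l2 + l3)

def interpolate_diag_tensor(neighbours, p=2.0):
    """neighbours: list of (distance, (l1, l2, l3)) with diagonal tensors.
    Returns the Log-Euclidean weighted mean of the eigenvalue triples."""
    total, log_mean = 0.0, [0.0, 0.0, 0.0]
    for d, eig in neighbours:
        w = linear_coefficient(eig) / d ** p     # inverse power law weight
        total += w
        for k in range(3):
            log_mean[k] += w * math.log(eig[k])  # average in log space
    return tuple(math.exp(v / total) for v in log_mean)

# A close, strongly linear neighbour dominates a distant, nearly
# isotropic one, so the interpolated tensor stays strongly linear.
nbs = [(1.0, (1.0, 0.2, 0.2)), (3.0, (0.5, 0.45, 0.45))]
print(interpolate_diag_tensor(nbs))
```

For general (non-diagonal) tensors the same scheme applies to the matrix logarithms, as in the Log-Euclidean framework the paper builds on.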

Image-Space Tensor Field Visualization Using a LIC-like Method

Tensors are of great interest to many applications in engineering and in medical imaging, but their proper analysis and visualization remains challenging. Physics-based visualization of tensor fields has proven able to show the main features of symmetric second-order tensor fields while still displaying the most important information in the data, namely the main directions in medical diffusion tensor data using texture and additional attributes using color coding, in a continuous representation. Nevertheless, its application and usability remain limited due to its computationally expensive and sensitive nature.
We introduce a novel approach to computing a fabric-like texture pattern from tensor fields, motivated by image-space line integral convolution (LIC). Although our approach can be applied to arbitrary, non-self-intersecting surfaces, we focus on special surfaces following neural fibers in the brain. We employ a multipass rendering approach whose main focus lies on regaining the three-dimensionality of the data under user interaction, as well as on achieving a seamless transition between local and global structures, including a proper visualization of degenerate points.
Sebastian Eichelbaum, Mario Hlawitschka, Bernd Hamann, Gerik Scheuermann
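The LIC idea underlying the fabric-like texture can be shown in miniature (an illustrative plain-CPU sketch with a constant direction field, not the paper's image-space tensor method, which uses per-pixel eigenvector directions): a noise texture is averaged along the local direction field, smearing the noise into streaks that follow the field.

```python
# Illustrative sketch: minimal line integral convolution (LIC) of a noise
# texture along a constant direction field.

import random

def lic(noise, direction, length=5):
    """noise: 2D list of values in [0, 1]; direction: (dx, dy) step vector.
    Averages samples along the line through each pixel."""
    h, w = len(noise), len(noise[0])
    dx, dy = direction
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for s in range(-length, length + 1):
                xi = int(round(x + s * dx))
                yi = int(round(y + s * dy))
                if 0 <= xi < w and 0 <= yi < h:
                    acc += noise[yi][xi]
                    n += 1
            out[y][x] = acc / n       # mean of samples along the line
    return out

random.seed(0)
noise = [[random.random() for _ in range(16)] for _ in range(16)]
smooth = lic(noise, (1.0, 0.0))       # smear noise horizontally
```

After convolution, neighbouring pixels along the field direction are strongly correlated while pixels across it stay uncorrelated, which is what produces the fabric-like appearance.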

Towards a High-quality Visualization of Higher-order Reynold’s Glyphs for Diffusion Tensor Imaging

Recent developments in magnetic resonance imaging (MRI) have shown that displaying second-order tensor information reconstructed from diffusion-weighted MRI does not convey the full structural information acquired by the scanner. Therefore, higher-order methods have been developed. Besides the visualization of derived structures such as fiber tracts or tractography (directly related to streamlines in fluid flow data sets), an extension of the Reynold’s glyph for second-order tensor fields is widely used to display local information. At the same time, fourth-order data is becoming increasingly important in engineering, as novel models focus on the change in materials under repeated application of stresses. Due to the complex structure of the glyph, a proper discrete geometrical approximation, e.g., a tessellation using triangles or quadrilaterals, requires the generation of many such primitives and is therefore not suitable for interactive exploration. It has previously been shown that glyphs defined in spherical harmonic coordinates can be rendered using hardware acceleration. We show how tensor data can be rendered efficiently using a similar algorithm, and we demonstrate and discuss the use of alternative high-accuracy rendering algorithms.
Mario Hlawitschka, Younis Hijazi, Aaron Knoll, Bernd Hamann

Visualizing Genes, Proteins, and Molecules


VENLO: Interactive Visual Exploration of Aligned Biological Networks and Their Evolution

To understand life, it is fundamental to elucidate the evolution and function of biological networks in multiple species. Recently it has become possible to reconstruct the evolution of specific biological networks for several species. The data resulting from these reconstructions consists of ancestral networks and gene trees. To analyze such data, interactive visual methods are needed. We present a system that is able to visualize the evolution of biological networks in many species. We start by providing a comprehensible overview of the entire data set, and we provide details of the data upon demand via interaction mechanisms for selecting interesting subsets of the data. The selected subsets can be visualized using two main visualization types: (a) as network alignments in 2.5D (or other known) layouts or (b) as an animation of evolving networks. We developed a graph layout algorithm supporting the comparison of networks across both species and time steps without changing the graph layout while switching between the overview, the animated view, and the alignment view. We evaluate our system by applying it to real-world data.
Steffen Brasch, Georg Fuellen, Lars Linsen

Embedding Biomolecular Information in a Scene Graph System

We present the Bio Scene Graph (BioSG) for the visualization of biomolecular structures, based on the scene graph system OpenSG. The hierarchical model of primary, secondary, and tertiary structures of molecules used in organic chemistry is mapped to a graph of nodes when molecular files are loaded.
We show that using BioSG, displaying molecules can be integrated in other applications, for example in medical applications. Additionally, existing algorithms and programs can be easily adapted to display the results with BioSG.
Andreas Halm, Eva Eggeling, Dieter W. Fellner

Linking Advanced Visualization and MATLAB for the Analysis of 3D Gene Expression Data

Three-dimensional gene expression PointCloud data generated by the Berkeley Drosophila Transcription Network Project (BDTNP) provides quantitative information about the spatial and temporal expression of genes in early Drosophila embryos at cellular resolution. The BDTNP team visualizes and analyzes PointCloud data using the software application PointCloudXplore (PCX). To maximize the impact of novel, complex data sets, such as PointClouds, the data needs to be accessible to biologists and comprehensible to developers of analysis functions. We address this challenge by linking PCX and Matlab® via a dedicated interface, thereby providing biologists seamless access to advanced data analysis functions and giving bioinformatics researchers the opportunity to integrate their analysis directly into the visualization application. To demonstrate the usefulness of this approach, we computationally model parts of the expression pattern of the gene even-skipped using a genetic algorithm implemented in Matlab and integrated into PCX via our Matlab interface.
Oliver Rübel, Soile V. E. Keränen, Mark Biggin, David W. Knowles, Gunther H. Weber, Hans Hagen, Bernd Hamann, E. Wes Bethel

