
2009 | Book

Medical Image Computing and Computer-Assisted Intervention – MICCAI 2009

12th International Conference, London, UK, September 20-24, 2009, Proceedings, Part II

Edited by: Guang-Zhong Yang, David Hawkes, Daniel Rueckert, Alison Noble, Chris Taylor

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

The two-volume set LNCS 5761 and LNCS 5762 constitutes the refereed proceedings of the 12th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2009, held in London, UK, in September 2009. Based on rigorous peer reviews, the program committee carefully selected 259 revised papers from 804 submissions for presentation in two volumes. The second volume includes 134 papers divided into topical sections on shape modelling and analysis; motion analysis, physically based modelling and image reconstruction; neuro, cell and multiscale image analysis; image analysis and computer aided diagnosis; and image segmentation and analysis.

Table of Contents

Frontmatter

Shape Modelling and Analysis

Building Shape Models from Lousy Data

Statistical shape models have gained widespread use in medical image analysis. In order for such models to be statistically meaningful, a large number of data sets have to be included. The number of available data sets is usually limited and often the data is corrupted by imaging artifacts or missing information. We propose a method for building a statistical shape model from such “lousy” data sets. The method works by identifying the corrupted parts of a shape as statistical outliers and excluding these parts from the model. Only the parts of a shape that were identified as outliers are discarded, while all the intact parts are included in the model. The model building is then performed using the EM algorithm for probabilistic principal component analysis, which allows for a principled way to handle missing data. Our experiments on 2D synthetic and real 3D medical data sets confirm the feasibility of the approach. We show that it yields superior models compared to approaches using robust statistics, which only downweight the influence of outliers.

Marcel Lüthi, Thomas Albrecht, Thomas Vetter
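The EM treatment of missing data mentioned in the abstract can be illustrated with a simplified alternating scheme (a hedged sketch, not the authors' exact algorithm): impute the entries flagged as outliers or missing from the current low-rank reconstruction, refit the principal subspace, and repeat.

```python
import numpy as np

def ppca_missing(X, n_comp=2, n_iter=50):
    """EM-style PPCA on data with missing entries (NaN): alternate
    re-imputation of missing values (E-step) with an SVD-based subspace
    refit (M-step). A simplified sketch of PPCA with missing data."""
    X = X.copy()
    miss = np.isnan(X)
    X[miss] = np.nanmean(X, axis=0)[np.where(miss)[1]]  # initial column-mean fill
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        recon = mu + (U[:, :n_comp] * s[:n_comp]) @ Vt[:n_comp]  # rank-k reconstruction
        X[miss] = recon[miss]  # re-impute only the corrupted/missing entries
    return X
```

On low-rank shape data, the imputed entries converge toward values consistent with the intact parts, which is exactly why discarding only the outlier parts (rather than whole shapes) preserves information.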
Statistical Location Model for Abdominal Organ Localization

Initial placement of the models is an essential pre-processing step for model-based organ segmentation. Based on the observation that organs move along with the spine and their relative locations remain relatively stable, we built a statistical location model (SLM) and applied it to abdominal organ localization. The model is a point distribution model which learns the pattern of variability of organ locations relative to the spinal column from a training set of normal individuals. The localization is achieved in three stages: spine alignment, model optimization and location refinement. The SLM is optimized through maximum a posteriori estimation of a probabilistic density model constructed for each organ. Our model includes five organs: liver, left kidney, right kidney, spleen and pancreas. We validated our method on 12 abdominal CTs using leave-one-out experiments. The SLM enabled reduction in the overall localization error from 62.0±28.5 mm to 5.8±1.5 mm. Experiments showed that the SLM was robust to the reference model selection.

Jianhua Yao, Ronald M. Summers
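The core of such a location model can be sketched in a few lines (a hedged illustration with made-up variable names; the paper's full pipeline also includes spine alignment and a refinement stage): learn the mean and covariance of organ offsets relative to a spine landmark, then pick the candidate location with the highest prior density.

```python
import numpy as np

def fit_location_model(offsets):
    """offsets: (n_subjects, 3) organ centroids relative to a spine landmark."""
    mu = offsets.mean(axis=0)
    prec = np.linalg.inv(np.cov(offsets.T) + 1e-6 * np.eye(3))
    return mu, prec

def localize(candidates, mu, prec):
    """MAP localization under a Gaussian prior: pick the candidate with the
    smallest Mahalanobis distance to the learned mean offset."""
    d2 = [(c - mu) @ prec @ (c - mu) for c in candidates]
    return candidates[int(np.argmin(d2))]
```

Because everything is expressed relative to the spine, the model stays valid under the patient-to-patient translation it is designed to factor out.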
Volumetric Shape Model for Oriented Tubular Structure from DTI Data

In this paper, we describe methods for constructing shape priors using orientation information to model white matter tracts from magnetic resonance diffusion tensor images (DTI). Shape normalization is needed for the construction of a shape prior using statistical methods. Moving beyond shape normalization using boundary-only or orientation-only information, our method combines the ideas of sweeping and inverse skeletonization to parameterize 3D volumetric shape, which provides point correspondence and orientations over the whole volume in a continuous fashion. Tangents from this continuous model can be treated as a de-noised reconstruction of the original structural orientation inside a shape. We demonstrate the accuracy of this technique by reconstructing synthetic data and the 3D cingulum tract from brain DTI data and manually drawn 2D contours for each tract. Our output can also serve as the input for subsequent boundary finding or shape analysis.

Hon Pong Ho, Xenophon Papademetris, Fei Wang, Hilary P. Blumberg, Lawrence H. Staib
A Generic Probabilistic Active Shape Model for Organ Segmentation

Probabilistic models are extensively used in medical image segmentation. Most of them employ parametric representations of densities and make idealizing assumptions, e.g. normal distribution of data. Often, such assumptions are inadequate and limit broader application. We propose here a novel probabilistic active shape model for organ segmentation, which is entirely built upon non-parametric density estimates. In particular, a nearest neighbor boundary appearance model is complemented by a cascade of boosted classifiers for region information and combined with a shape model based on Parzen density estimation. Image and shape terms are integrated into a single level set equation. Our approach has been evaluated for 3-D liver segmentation using a public database originating from a competition (http://sliver07.org). With an average surface distance of 1.0 mm and an average volume overlap error of 6.5%, it outperforms other automatic methods and provides accuracy close to interactive ones. Since no adaptations specific to liver segmentation have been made, our probabilistic active shape model can easily be applied to other segmentation tasks.

Andreas Wimmer, Grzegorz Soza, Joachim Hornegger
Organ Segmentation with Level Sets Using Local Shape and Appearance Priors

Organ segmentation is a challenging problem on which recent progress has been made by incorporation of local image statistics that model the heterogeneity of structures outside of an organ of interest. However, most of these methods rely on landmark based segmentation, which has certain drawbacks. We propose to perform organ segmentation with a novel level set algorithm that incorporates local statistics via a highly efficient point tracking mechanism. Specifically, we compile statistics on these tracked points to allow for a local intensity profile outside of the contour and to allow for a local surface area penalty, which allows us to capture fine detail where it is expected. The local intensity and curvature models are learned through landmarks automatically embedded on the surface of the training shapes. We use Parzen windows to model the internal organ intensities as one distribution since this is sufficient for most organs. In addition, since the method is based on level sets, we are able to naturally take advantage of recent work on global shape regularization. We show state-of-the-art results on the challenging problems of liver and kidney segmentation.

Timo Kohlberger, M. Gökhan Uzunbaş, Christopher Alvino, Timor Kadir, Daniel O. Slosman, Gareth Funka-Lea
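Parzen-window modelling of interior intensities, as used for the internal region term above, can be sketched as follows (illustrative intensity values only; `gaussian_kde` stands in for whichever kernel estimator the authors used):

```python
import numpy as np
from scipy.stats import gaussian_kde

np.random.seed(0)
# training voxels sampled from inside the organ (bimodal, hence non-parametric)
inside = np.concatenate([np.random.normal(100, 8, 500),
                         np.random.normal(140, 5, 300)])
pdf_inside = gaussian_kde(inside)  # Parzen-window density estimate

def region_term(intensity):
    """Log-likelihood that a voxel of this intensity lies inside the organ;
    such a term is combined with boundary terms in a level-set speed."""
    return float(np.log(pdf_inside(intensity)[0] + 1e-12))
```

A single non-parametric density captures multi-modal organ intensities (e.g. parenchyma plus vessels) that a Gaussian model would blur together.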
Liver Segmentation Using Automatically Defined Patient Specific B-Spline Surface Models

This paper presents a novel liver segmentation algorithm. This is a model-driven approach; however, unlike previous techniques which use a statistical model obtained from a training set, we initialize patient-specific models directly from the patient's own pre-segmentation. As a result, non-trivial problems such as landmark correspondence and model registration can be avoided. Moreover, by dividing the liver region into three sub-regions, we convert the problem of building one complex shape model into constructing three much simpler models, which can be fitted independently, greatly improving computational efficiency. A robust graph-based narrow-band optimal surface fitting scheme is also presented. The proposed approach is evaluated on 35 CT images. Compared to contemporary approaches, our approach has no training requirement and requires significantly less processing time, with an RMS error of 2.44±0.53 mm against manual segmentation.

Yi Song, Andy J. Bulpitt, Ken W. Brodlie
Airway Tree Extraction with Locally Optimal Paths

This paper proposes a method to extract the airway tree from CT images by continually extending the tree with locally optimal paths. This is in contrast to commonly used region growing based approaches that only search the space of the immediate neighbors. The result is a much more robust method for tree extraction that can overcome local occlusions. The cost function for obtaining the optimal paths takes into account an airway probability map as well as measures of airway shape and orientation derived from multi-scale Hessian eigenanalysis of the airway probability. Significant improvements were achieved compared to a region growing based method, with up to 36% longer trees at a slight increase in the false positive rate.

Pechin Lo, Jon Sporring, Jesper Johannes Holst Pedersen, Marleen de Bruijne
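The optimal-path idea can be illustrated with a 2-D Dijkstra search over an airway-probability map (a simplified sketch: the paper's cost also folds in the Hessian-derived shape and orientation measures, and operates in 3-D):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def optimal_path(prob_map, start, goal):
    """Shortest path through a 2-D airway-probability map: edges are cheap
    where airway probability is high, so the search prefers probable airway
    pixels (4-connected grid; a small epsilon keeps edge weights nonzero,
    since csgraph treats zero entries as missing edges)."""
    h, w = prob_map.shape
    idx = lambda r, c: r * w + c
    rows, cols, costs = [], [], []
    for r in range(h):
        for c in range(w):
            for r2, c2 in ((r, c + 1), (r + 1, c)):
                if r2 < h and c2 < w:
                    cost = 2.001 - prob_map[r, c] - prob_map[r2, c2]
                    rows += [idx(r, c), idx(r2, c2)]
                    cols += [idx(r2, c2), idx(r, c)]
                    costs += [cost, cost]
    G = csr_matrix((costs, (rows, cols)), shape=(h * w, h * w))
    dist, pred = dijkstra(G, indices=idx(*start), return_predecessors=True)
    path, node = [], idx(*goal)
    while node >= 0:  # the start node's predecessor is the sentinel -9999
        path.append((node // w, node % w))
        node = pred[node]
    return path[::-1], dist[idx(*goal)]
```

Unlike greedy region growing, such a global search can bridge a locally occluded stretch as long as the total path cost stays low.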
A Deformable Surface Model for Vascular Segmentation

Inspired by the motion of a solid surface under liquid pressure, this paper proposes a novel deformable surface model to segment blood vessels in medical images. In the proposed model, the segmented region and the background region are respectively considered as liquid and an elastic solid. The surface of the elastic solid experiences various forces derived from the second order intensity statistics and the surface geometry. These forces cause the solid surface to deform in order to segment vascular structures in an image. The proposed model has been studied in experiments on synthetic data and clinical data acquired by different imaging modalities. It is experimentally shown that the new model is robust to intensity contrast changes inside blood vessels and thus well suited to vascular segmentation.

Max W. K. Law, Albert C. S. Chung
A Deformation Tracking Approach to 4D Coronary Artery Tree Reconstruction

This paper addresses reconstruction of a temporally deforming 3D coronary vessel tree, i.e., 4D reconstruction, from a sequence of angiographic X-ray images acquired by a rotating C-arm. Our algorithm starts from a 3D coronary tree that was reconstructed from images of one cardiac phase. Driven by gradient vector flow (GVF) fields, the method then estimates deformation such that projections of the deformed models align with X-ray images of the corresponding cardiac phases. To allow robust tracking of the coronary tree, the deformation estimation is regularized by smoothness and cyclic deformation constraints. Extensive qualitative and quantitative tests on clinical data sets suggest that our algorithm reconstructs accurate 4D coronary trees and that regularized estimation significantly improves robustness. Our experiments also suggest that a hierarchy of deformation models with increasing complexity is desirable when input data are noisy or when the quality of the 3D model is low.

Yanghai Tsin, Klaus J. Kirchberg, Guenter Lauritsch, Chenyang Xu
Automatic Extraction of Mandibular Nerve and Bone from Cone-Beam CT Data

The exact localization of the mandibular nerve with respect to the bone is important for applications in dental implantology and maxillofacial surgery. Cone beam computed tomography (CBCT), often also called digital volume tomography (DVT), is increasingly utilized in maxillofacial or dental imaging. Compared to conventional CT, however, soft tissue discrimination is worse due to a reduced dose. Thus, small structures like the alveolar nerves are even harder to recognize within the image data. We show that it is nonetheless possible to accurately reconstruct the 3D bone surface and the course of the nerve in a fully automatic fashion, with a method that is based on a combined statistical shape model of the nerve and the bone and a Dijkstra-based optimization procedure. Our method has been validated on 106 clinical datasets: the average reconstruction error for the bone is 0.5±0.1 mm, and the nerve can be detected with an average error of 1.0±0.6 mm.

Dagmar Kainmueller, Hans Lamecker, Heiko Seim, Max Zinser, Stefan Zachow
Conditional Variability of Statistical Shape Models Based on Surrogate Variables

We propose to augment a statistical shape model with surrogate variables such as anatomical measurements and patient-related information, allowing the shape distribution to be conditioned on prescribed anatomical constraints. The method is applied to a shape model of the human femur, modeling the joint density of shape and anatomical parameters as a kernel density. Results show that the approach allows fast, intuitive and anatomically meaningful control of the shape deformations and an effective conditioning of the shape distribution, enabling analysis of the remaining shape variability and of the relations between shape and anatomy. The approach can further be employed to initialize elastic registration methods such as Active Shape Models, improving their regularization term and reducing the search space for the optimization.

Rémi Blanc, Mauricio Reyes, Christof Seiler, Gábor Székely
Surface/Volume-Based Articulated 3D Spine Inference through Markov Random Fields

This paper presents a method for inferring personalized 3D spine models from intraoperative CT data acquired for corrective spinal surgery. An accurate 3D reconstruction from standard X-rays is obtained before surgery to provide the geometry of the vertebrae. The outcome of this procedure is used as the basis to derive an articulated spine model that is represented by consecutive sets of intervertebral articulations described by rotation and translation parameters (6 degrees of freedom). Inference with respect to the model parameters is then performed using an integrated and interconnected Markov Random Field graph that involves singleton and pairwise costs. Singleton potentials measure the support from the data (surface or image-based) with respect to the model parameters, while pairwise constraints encode geometrical dependencies between vertebrae. Optimization of the model parameters in a multi-modal context is achieved using efficient linear programming and duality. We show successful image registration results on simulated and real data in experiments aimed at image-guidance fusion.

Samuel Kadoury, Nikos Paragios
A Shape Relationship Descriptor for Radiation Therapy Planning

In this paper we address the challenge of matching patient geometry to facilitate the design of patient treatment plans in radiotherapy. To this end we propose a novel shape descriptor, the Overlap Volume Histogram, which provides a rotation and translation invariant representation of a patient’s organs at risk relative to the tumor volume. Using our descriptor, it is possible to accurately identify database patients with similar constellations of organ and tumor geometries, enabling the transfer of treatment plans between patients with similar geometries. We demonstrate the utility of our method for such tasks by outperforming state of the art shape descriptors in the retrieval of patients with similar treatment plans. We also preliminarily show its potential as a quality control tool by demonstrating how it is used to identify an organ at risk whose dose can be significantly reduced.

Michael Kazhdan, Patricio Simari, Todd McNutt, Binbin Wu, Robert Jacques, Ming Chuang, Russell Taylor
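The Overlap Volume Histogram can be sketched directly from its description above (a hedged reading of the descriptor: for each distance threshold, the fraction of the organ-at-risk volume lying within that distance of the tumor):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def overlap_volume_histogram(tumor, oar, thresholds):
    """tumor, oar: boolean 3-D masks. Returns, for each threshold t, the
    fraction of OAR voxels within distance t of the tumor. Distances depend
    only on relative geometry, making the descriptor rotation- and
    translation-invariant, which is the point of the representation."""
    dist = distance_transform_edt(~tumor)  # distance of every voxel to the tumor
    d_oar = dist[oar]
    return np.array([(d_oar <= t).mean() for t in thresholds])
```

Two patients whose OAR/tumor constellations are geometrically similar produce similar curves, so nearest-neighbour retrieval on these histograms finds candidate plans to transfer.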
Feature-Based Morphometry

This paper presents feature-based morphometry (FBM), a new, fully data-driven technique for identifying group-related differences in volumetric imagery. In contrast to most morphometry methods which assume one-to-one correspondence between all subjects, FBM models images as a collage of distinct, localized image features which may not be present in all subjects. FBM thus explicitly accounts for the case where the same anatomical tissue cannot be reliably identified in all subjects due to disease or anatomical variability. A probabilistic model describes features in terms of their appearance, geometry, and relationship to sub-groups of a population, and is automatically learned from a set of subject images and group labels. Features identified indicate group-related anatomical structure that can potentially be used as disease biomarkers or as a basis for computer-aided diagnosis. Scale-invariant image features are used, which reflect generic, salient patterns in the image. Experiments validate FBM clinically in the analysis of normal (NC) and Alzheimer’s (AD) brain images using the freely available OASIS database. FBM automatically identifies known structural differences between NC and AD subjects in a fully data-driven fashion, and obtains an equal error classification rate of 0.78 on new subjects.

Matthew Toews, William M. Wells III, D. Louis Collins, Tal Arbel
Constructing a Dictionary of Human Brain Folding Patterns

Brain imaging provides a wealth of information that computers can explore at a massive scale. Categorizing the patterns of the human cortex has been a challenging issue for neuroscience. In this paper, we propose a data mining approach leading to the construction of the first computerized dictionary of cortical folding patterns, built from a database of 62 brains. The cortical folds are extracted using the BrainVisa open software. The standard sulci are manually identified among the folds, and 32 sets of sulci covering the cortex are selected. Clustering techniques are then applied to identify, in each set, the different patterns observed in the population. After affine global normalization, the geometric distance between the sulci of two subjects is calculated using the Iterative Closest Point (ICP) algorithm. The dimension of the resulting distance matrix is reduced using the Isomap algorithm. Finally, a dedicated hierarchical clustering algorithm is used to extract the main patterns. This algorithm provides a score which evaluates the strength of the patterns found. The score is used to rank the patterns for setting up a dictionary characterizing the variability of cortical anatomy.

Zhong Yi Sun, Matthieu Perrot, Alan Tucholka, Denis Rivière, Jean-François Mangin
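The distance-then-cluster pipeline can be sketched as follows (a hedged stand-in: a symmetric nearest-neighbour distance replaces the full rigid-ICP residual, and plain hierarchical clustering replaces Isomap plus the authors' dedicated algorithm):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.cluster.hierarchy import linkage, fcluster

def sulcus_distance(a, b):
    """Symmetric mean nearest-neighbour distance between two point clouds,
    a simplified proxy for the ICP residual used in the paper."""
    da = cKDTree(b).query(a)[0].mean()
    db = cKDTree(a).query(b)[0].mean()
    return 0.5 * (da + db)

def cluster_patterns(clouds, n_clusters=2):
    """Pairwise distances between sulcus point clouds, then hierarchical
    clustering to expose the main folding patterns in the population."""
    n = len(clouds)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = sulcus_distance(clouds[i], clouds[j])
    Z = linkage(D[np.triu_indices(n, 1)], method="average")  # condensed form
    return fcluster(Z, n_clusters, criterion="maxclust")
```

Each resulting cluster corresponds to one candidate "dictionary entry" for that sulcus set.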
Topological Correction of Brain Surface Meshes Using Spherical Harmonics

A brain surface reconstruction allows advanced analysis of structural and functional brain data that is not possible using volumetric data alone. However, the generation of a brain surface mesh from MRI data often introduces topological defects and artifacts that must be corrected. We show that it is possible to accurately correct these errors using spherical harmonics. Our results clearly demonstrate that brain surface meshes reconstructed using spherical harmonics are free from topological defects and large artifacts that were present in the uncorrected brain surface. Visual inspection reveals that the corrected surfaces are of very high quality. The spherical harmonic surfaces are also quantitatively validated by comparing the surfaces to an “ideal” brain based on a manually corrected average of twelve scans of the same subject. In conclusion, the spherical harmonics approach is a direct, computationally fast method to correct topological errors.

Rachel Aine Yotter, Robert Dahnke, Christian Gaser
Teichmüller Shape Space Theory and Its Application to Brain Morphometry

Here we propose a novel method to compute a shape index based on Teichmüller shape space theory to study brain morphometry. Such a shape index is intrinsic, and invariant under conformal transformations, rigid motions and scaling. We conformally map a genus-zero open boundary surface to the Poincaré disk with the Yamabe flow method. The shape indices that we compute are the lengths of a special set of geodesics under the hyperbolic metric. Tests on longitudinal brain imaging data were used to demonstrate the stability of the derived feature vectors. In leave-one-out validation tests, we achieved 100% accurate classification (versus only 68% accuracy for volume measures) in distinguishing 11 HIV/AIDS individuals from 8 healthy control subjects, based on Teichmüller coordinates for lateral ventricular surfaces extracted from their 3D MRI scans.

Yalin Wang, Wei Dai, Xianfeng Gu, Tony F. Chan, Shing-Tung Yau, Arthur W. Toga, Paul M. Thompson
A Tract-Specific Framework for White Matter Morphometry Combining Macroscopic and Microscopic Tract Features

Diffusion tensor imaging plays a key role in our understanding of white matter (WM) both in normal populations and in populations with brain disorders. Existing techniques focus primarily on using diffusivity-based quantities derived from diffusion tensor as surrogate measures of microstructural tissue properties of WM. In this paper, we describe a novel tract-specific framework that enables the examination of WM morphometry at both the macroscopic and microscopic scales. The framework leverages the skeleton-based modeling of sheet-like WM fasciculi using the continuous medial representation, which gives a natural definition of thickness and supports its comparison across subjects. The thickness measure provides a macroscopic characterization of WM fasciculi that complements existing analysis of microstructural features. The utility of the framework is demonstrated in quantifying WM atrophy in Amyotrophic Lateral Sclerosis, a severe neurodegenerative disease of motor neurons. We show that, compared to using microscopic features alone, combining the macroscopic and microscopic features gives a more holistic characterization of the disease.

Hui Zhang, Suyash P. Awate, Sandhitsu R. Das, John H. Woo, Elias R. Melhem, James C. Gee, Paul A. Yushkevich
Shape Modelling for Tract Selection

Probabilistic tractography provides estimates of the probability of a structural connection between points or regions in a brain volume, based on information from diffusion MRI. The ability to estimate the uncertainty associated with reconstructed pathways is valuable, but noise in the image data leads to premature termination or erroneous trajectories in sampled streamlines. In this work we describe automated methods, based on a probabilistic model of tract shape variability between individuals, which can be applied to select seed points in order to maximise consistency in tract segmentation; and to discard streamlines which are unlikely to belong to the tract of interest. Our method is shown to ameliorate false positives and remove the widely observed falloff in connection probability with distance from the seed region due to noise, two important problems in the tractography literature. Moreover, the need to apply an arbitrary threshold to connection probability maps is entirely obviated by our approach, thus removing a significant user-specified parameter from the tractography pipeline.

Jonathan D. Clayden, Martin D. King, Chris A. Clark
Topological Characterization of Signal in Brain Images Using Min-Max Diagrams

We present a novel computational framework for characterizing signal in brain images via nonlinear pairing of critical values of the signal. Among the astronomically large number of different pairings possible, we show that representations derived from specific pairing schemes provide concise representations of the image. This procedure yields a “min-max diagram” of the image data. The representation turns out to be especially powerful in discriminating image scans obtained from different clinical populations, and directly opens the door to applications in a variety of learning and inference problems in biomedical imaging. This strategy departs significantly from the standard image analysis paradigm, where the ‘mean’ signal is used to characterize an ensemble of images. Averaging offers robustness to noise in subsequent statistical analyses, for example, but the attenuation of the signal content makes it rather difficult to identify subtle variations. The proposed topologically oriented method seeks to address these limitations by characterizing and encoding topological features or attributes of the image. As an application, we have used this method to characterize cortical thickness measures along brain surfaces in classifying autistic subjects. Our promising experimental results provide evidence of the power of this representation.

Moo K. Chung, Vikas Singh, Peter T. Kim, Kim M. Dalton, Richard J. Davidson
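For a 1-D signal, one classical nonlinear pairing of critical values is the 0-dimensional sublevel-set persistence computation (a sketch of one plausible pairing scheme; the paper defines its own min-max pairing): process samples in increasing order, track connected components with union-find, and pair each component's birth value (a local minimum) with the value at which it merges into an older component.

```python
import numpy as np

def min_max_pairs(signal):
    """Pair critical values of a 1-D signal by 0-dimensional sublevel-set
    persistence: a local minimum (birth) is paired with the critical value
    at which its component merges into an older one (elder rule). The
    global minimum's component never dies and stays unpaired."""
    order = np.argsort(signal)
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in order:
        i = int(i)
        parent[i], birth[i] = i, signal[i]
        for j in (i - 1, i + 1):        # merge with already-processed neighbours
            if j in parent:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                young, old = (ri, rj) if birth[ri] > birth[rj] else (rj, ri)
                if birth[young] < signal[i]:   # skip zero-persistence pairs
                    pairs.append((birth[young], signal[i]))
                parent[young] = old
    return pairs
```

The resulting (birth, death) pairs form the diagram; long-lived pairs encode prominent features, short-lived ones encode noise.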
Particle Based Shape Regression of Open Surfaces with Applications to Developmental Neuroimaging

Shape regression promises to be an important tool for studying the relationship between anatomy and underlying clinical or biological parameters, such as age. In this paper we propose a new method for building shape models that incorporates regression analysis in the process of optimizing correspondences on a set of open surfaces. The statistical significance of the dependence is evaluated using permutation tests designed to estimate the likelihood of achieving the observed statistics under numerous rearrangements of the shape parameters with respect to the explanatory variable. We demonstrate the method on synthetic data and present new results on clinical MRI data related to early development of the human head.

Manasi Datar, Joshua Cates, P. Thomas Fletcher, Sylvain Gouttard, Guido Gerig, Ross Whitaker
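The permutation scheme for assessing the shape–age dependence can be sketched as follows (hypothetical variable names; the paper's test statistic operates on the full shape parameterization, not a single correlation coefficient):

```python
import numpy as np

def permutation_pvalue(shape_params, age, n_perm=2000, seed=0):
    """p-value for dependence between a shape parameter and an explanatory
    variable: the observed correlation is compared against its distribution
    under random rearrangements of the explanatory variable."""
    rng = np.random.default_rng(seed)
    obs = abs(np.corrcoef(shape_params, age)[0, 1])
    null = np.empty(n_perm)
    for k in range(n_perm):
        null[k] = abs(np.corrcoef(shape_params, rng.permutation(age))[0, 1])
    # add-one smoothing keeps the estimate valid for finite permutations
    return (1 + np.sum(null >= obs)) / (1 + n_perm)
```

A small p-value indicates that the observed statistic is rarely matched when the pairing of shapes and ages is destroyed.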
Setting Priors and Enforcing Constraints on Matches for Nonlinear Registration of Meshes

We show that a simple probabilistic modelling of the registration problem for surfaces allows it to be solved using standard clustering techniques. In this framework, point-to-point correspondences are hypothesized between the two free-form surfaces, and we show how to specify priors and enforce global constraints on these matches with only minor changes to the optimisation algorithm. The purpose of these two modifications is to increase the capture range and to obtain more realistic geometrical transformations between the surfaces. We conclude with validation experiments and results on synthetic and real data.

Benoît Combès, Sylvain Prima
Parametric Representation of Cortical Surface Folding Based on Polynomials

The development of folding descriptors as an effective approach for describing the geometrical complexity and variation of the human cerebral cortex has been of great interest. This paper presents a parametric representation of cortical surface patches using polynomials, that is, a primitive cortical patch is compactly and effectively described by four parametric coefficients. With this parametric representation, the patterns of cortical patches can be classified by either a model-driven approach or a data-driven clustering approach. In the model-driven approach, any patch of the cortical surface is classified into one of eight primitive shape patterns, including peak, pit, ridge, valley, saddle ridge, saddle valley, flat and inflection, corresponding to eight sub-spaces of the four parameters. The major advantage of this polynomial representation of cortical folding patterns is its compactness and effectiveness, while being rich in shape information. We have applied this parametric representation to the segmentation of the cortical surface, and promising results are obtained.

Tuo Zhang, Lei Guo, Gang Li, Jingxin Nie, Tianming Liu
Intrinsic Regression Models for Manifold-Valued Data

In medical imaging analysis and computer vision, there is a growing interest in analyzing various manifold-valued data including 3D rotations, planar shapes, oriented or directed directions, the Grassmann manifold, deformation field, symmetric positive definite (SPD) matrices and medial shape representations (m-rep) of subcortical structures. Particularly, the scientific interests of most population studies focus on establishing the associations between a set of covariates (e.g., diagnostic status, age, and gender) and manifold-valued data for characterizing brain structure and shape differences, thus requiring a regression modeling framework for manifold-valued data. The aim of this paper is to develop an intrinsic regression model for the analysis of manifold-valued data as responses in a Riemannian manifold and their association with a set of covariates, such as age and gender, in Euclidean space. Because manifold-valued data do not form a vector space, directly applying classical multivariate regression may be inadequate in establishing the relationship between manifold-valued data and covariates of interest, such as age and gender, in real applications. Our intrinsic regression model, which is a semiparametric model, uses a link function to map from the Euclidean space of covariates to the Riemannian manifold of manifold data. We develop an estimation procedure to calculate an intrinsic least square estimator and establish its limiting distribution. We develop score statistics to test linear hypotheses on unknown parameters. We apply our methods to the detection of the difference in the morphological changes of the left and right hippocampi between schizophrenia patients and healthy controls using medial shape description.

Xiaoyan Shi, Martin Styner, Jeffrey Lieberman, Joseph G. Ibrahim, Weili Lin, Hongtu Zhu
Gender Differences in Cerebral Cortical Folding: Multivariate Complexity-Shape Analysis with Insights into Handling Brain-Volume Differences

This paper presents a study of gender differences in adult human cerebral cortical folding patterns. The study employs a new multivariate statistical descriptor for analyzing folding patterns in a region of interest (ROI) and a rigorous nonparametric permutation-based scheme for hypothesis testing. Unlike typical ROI-based methods that summarize folding complexity or shape by single/few numbers, the proposed descriptor systematically constructs a unified description of complexity and shape in a high-dimensional space (thousands of numbers/dimensions). Furthermore, this paper presents new mathematical insights into the relationship of intra-cranial volume (ICV) with cortical complexity and shows that conventional complexity descriptors implicitly handle ICV differences in different ways, thereby lending different meanings to “complexity”. This paper describes two systematic methods for handling ICV changes in folding studies using the proposed descriptor. The clinical study in this paper exploits these theoretical insights to demonstrate that (i) the answer to which gender has higher/lower “complexity” depends on how a folding measure handles ICV differences and (ii) cortical folds in males and females differ significantly in shape as well.

Suyash P. Awate, Paul Yushkevich, Daniel Licht, James C. Gee
Cortical Shape Analysis in the Laplace-Beltrami Feature Space

For the automated analysis of cortical morphometry, it is critical to develop robust descriptions of the position of anatomical structures on the convoluted cortex. Using the eigenfunctions of the Laplace-Beltrami operator, we propose in this paper a novel feature space to characterize the cortical geometry. Derived from intrinsic geometry, this feature space is invariant to scale and pose variations, anatomically meaningful, and robust across populations. A learning-based sulci detection algorithm is developed in this feature space to demonstrate its application in cortical shape analysis. Automated sulci detection results with 10 training and 15 testing surfaces are presented.

Yonggang Shi, Ivo Dinov, Arthur W. Toga
Shape Analysis of Human Brain Interhemispheric Fissure Bending in MRI

This paper introduces a novel approach to analyze Yakovlevian torque by quantifying the bending of the human brain interhemispheric fissure in three-dimensional magnetic resonance imaging. It extracts the longitudinal medial surface between the cerebral hemispheres, which are segmented with an accurate and completely automatic technique, as the shape representation of the interhemispheric fissure. The extracted medial surface is modeled with a polynomial surface through least-squares fitting. Finally, curvature features, e.g. principal, Gaussian and mean curvatures, are computed at each point of the fitted medial surface to describe the local bending of the interhemispheric fissure. This method was applied to clinical images of healthy controls (12 males, 7 females) and never-medicated schizophrenic subjects (11 males, 7 females). The hypothesis of normal interhemispheric fissure bending (rightward in the occipital region) was quantitatively confirmed. Moreover, we found significant differences (p < 0.05) between the male schizophrenics and healthy controls with respect to the interhemispheric fissure bending in the frontal and occipital regions. These results show that our method is applicable for studying abnormal Yakovlevian torque related to mental diseases.

Lu Zhao, Jarmo Hietala, Jussi Tohka
Subject-Matched Templates for Spatial Normalization

Spatial normalization of images from multiple subjects is a common problem in group comparison studies, such as voxel-based and deformation-based morphometric analyses. Use of a study-specific template for normalization may improve normalization accuracy over a study-independent standard template (Good et al., NeuroImage, 14(1):21-36, 2001). Here, we develop this approach further by introducing the concept of subject-matched templates. Rather than using a single template for the entire population, a different template is used for every subject, with the template matched to the subject in terms of age, sex, and potentially other parameters (e.g., disease). All subject-matched templates are created from a single generative regression model of atlas appearance, thus providing a priori template-to-template correspondence without registration. We demonstrate that such an approach is technically feasible and significantly improves spatial normalization accuracy over using a single template.

Torsten Rohlfing, Edith V. Sullivan, Adolf Pfefferbaum
Mapping Growth Patterns and Genetic Influences on Early Brain Development in Twins

Despite substantial progress in understanding the anatomical and functional development of the human brain, little is known about the spatio-temporal patterns of, and genetic influences on, white matter maturation in twins. Neuroimaging data acquired from longitudinal twin studies provide a unique platform for investigating such issues. However, the interpretation of neuroimaging data from longitudinal twin studies is hindered by the lack of appropriate image processing and statistical tools. In this study, we developed a statistical framework for analyzing longitudinal twin neuroimaging data, which consists of a generalized estimating equation (GEE2) method and a test procedure. The GEE2 method can jointly model imaging measures with genetic effects, environmental effects, and behavioral and clinical variables. The score test statistic is used to test linear hypotheses such as the association of brain structure and function with the covariates of interest. A resampling method is used to control the family-wise error rate to adjust for multiple comparisons. With diffusion tensor imaging (DTI), we demonstrate the application of our statistical methods in quantifying the spatiotemporal white matter maturation patterns and in detecting genetic effects in a longitudinal neonatal twin study. The proposed approach can be easily applied to longitudinal twin data with multiple outcomes and can accommodate incomplete and unbalanced data, i.e., subjects with different numbers of measurements.

Yasheng Chen, Hongtu Zhu, Dinggang Shen, Hongyu An, John Gilmore, Weili Lin
Tensor-Based Morphometry with Mappings Parameterized by Stationary Velocity Fields in Alzheimer’s Disease Neuroimaging Initiative

Tensor-based morphometry (TBM) is an analysis technique in which anatomical information is characterized by means of the spatial transformations between a customized template and the observed images. Accurate inter-subject non-rigid registration is therefore an essential prerequisite. Further statistical analysis of the spatial transformations is used to highlight useful information, such as local statistical differences among populations. With the advent of recent, powerful non-rigid registration algorithms based on the large-deformation paradigm, TBM is being used increasingly. In this work we evaluate the statistical power of TBM using stationary velocity field diffeomorphic registration in a large population of subjects from the Alzheimer's Disease Neuroimaging Initiative project. The proposed methodology provided atrophy maps with very detailed anatomical resolution and with high significance compared with results published recently on the same data set.

Matías Nicolás Bossa, Ernesto Zacur, Salvador Olmos

Motion Analysis, Physical Based Modelling and Image Reconstruction

High-Quality Model Generation for Finite Element Simulation of Tissue Deformation

In finite element simulation, size, shape, and placement of the elements in a model are significant factors that affect the interpolation and numerical errors of a solution. In medical simulations, such models are desired to have higher accuracy near features such as anatomical boundaries (surfaces) and they are often required to have element faces lying along these surfaces. Conventional modelling schemes consist of a segmentation step delineating the anatomy followed by a meshing step generating elements conforming to this segmentation. In this paper, a one-step energy-based model generation technique is proposed. An objective function is minimized when each element of a mesh covers similar image intensities while, at the same time, having desirable FEM characteristics. Such a mesh becomes essential for accurate models for deformation simulation, especially when the image intensities represent a mechanical feature of the tissue such as the elastic modulus. The use of the proposed mesh optimization is demonstrated on synthetic phantoms, 2D/3D brain MR images, and prostate ultrasound-elastography data.

Orcun Goksel, Septimiu E. Salcudean
Biomechanically-Constrained 4D Estimation of Myocardial Motion

We propose a method for the analysis of cardiac images with the goal of reconstructing the motion of the ventricular walls. The main feature of our method is that the inversion parameter field is the active contraction of the myocardial fibers. This is accomplished with a biophysically-constrained, four-dimensional (space plus time) formulation that aims to complement information that can be gathered from the images by a priori knowledge of cardiac mechanics.

Our main hypothesis is that by incorporating biophysical information, we can generate more informative priors and thus, more accurate predictions of the ventricular wall motion.

In this paper, we outline the formulation, discuss the computational methodology for solving the inverse motion estimation, and present preliminary validation using synthetic and tagged MR images. The overall method uses patient-specific imaging and fiber information to reconstruct the motion. In these preliminary tests, we verify the implementation and conduct a parametric study to test the sensitivity of the model to material properties perturbations, model errors, and incomplete and noisy observations.

Hari Sundar, Christos Davatzikos, George Biros
Predictive Simulation of Bidirectional Glenn Shunt Using a Hybrid Blood Vessel Model

This paper proposes a method for performing predictive simulation of cardiac surgery. It applies a hybrid approach to model the deformation of blood vessels. The hybrid blood vessel model consists of a reference Cosserat rod and a surface mesh. The reference Cosserat rod models the blood vessel's global bending, stretching, twisting and shearing in a physically correct manner, while the surface mesh models the surface details of the blood vessel. In this way, the deformation of blood vessels can be computed efficiently and accurately. Our predictive simulation system can produce complex surgical results given a small amount of user input. It allows the surgeon to easily explore and evaluate various surgical options. Tests of the system using the bidirectional Glenn shunt (BDG) as an application example show that the results produced by the system are similar to real surgical results.

Hao Li, Wee Kheng Leow, Ing-Sh Chiu
Surgical Planning and Patient-Specific Biomechanical Simulation for Tracheal Endoprostheses Interventions

We have developed a system for computer-assisted planning of tracheal surgeries. The system allows the intervention to be planned from CT images of the patient, and includes a virtual database of commercially available prostheses. Automatic segmentation of the trachea and apparent pathological structures is obtained using a modified region-growing algorithm. A method for automatic adaptation of a finite element mesh allows a patient-specific biomechanical model to be built for simulating the expected performance of the implant under physiological movement (swallowing, sneezing). Laboratory experiments were performed to characterise the tissues present in the trachea, and movement models were obtained from fluoroscopic images of a patient. Results are reported on the planning and biomechanical simulation for two patients who underwent surgery at our hospital.

Miguel A. González Ballester, Amaya Pérez del Palomar, José Luís López Villalobos, Laura Lara Rodríguez, Olfa Trabelsi, Frederic Pérez, Ángel Ginel Cañamaque, Emilia Barrot Cortés, Francisco Rodríguez Panadero, Manuel Doblaré Castellano, Javier Herrero Jover
Mesh Generation from 3D Multi-material Images

The problem of generating realistic computer models of objects represented by 3D segmented images is important in many biomedical applications. Labelled 3D images pose particular challenges for meshing algorithms because multi-material junctions form features such as surface patches, edges and corners which need to be preserved in the output mesh. In this paper, we propose a feature-preserving Delaunay refinement algorithm which can be used to generate high-quality tetrahedral meshes from segmented images. The idea is to explicitly sample corners and edges from the input image and to constrain the Delaunay refinement algorithm to preserve these features in addition to the surface patches. Our experimental results on segmented medical images have shown that, within a few seconds, the algorithm outputs a tetrahedral mesh in which each material is represented as a consistent submesh without gaps or overlaps. The optimization property of the Delaunay triangulation makes these meshes suitable for realistic visualization or finite element simulations.

Dobrina Boltcheva, Mariette Yvinec, Jean-Daniel Boissonnat
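
The core feature-preservation idea can be sketched in 2D with SciPy's Delaunay triangulation (an illustration only: the paper's algorithm is a constrained 3D Delaunay refinement, and the point set here is synthetic). Feature points that are explicitly inserted into the point set necessarily survive as vertices of the output mesh:

```python
import numpy as np
from scipy.spatial import Delaunay

# Synthetic 2D point set: random interior samples plus explicitly sampled
# "feature" points (corners of the domain and a mock material junction).
rng = np.random.default_rng(0)
interior = rng.uniform(0.1, 0.9, size=(30, 2))
features = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0],
                     [0.5, 0.5]])
pts = np.vstack([interior, features])

# Delaunay triangulation: every input point becomes a mesh vertex, so the
# sampled features are preserved in the output by construction.
tri = Delaunay(pts)
used = np.unique(tri.simplices)
```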
Interactive Simulation of Flexible Needle Insertions Based on Constraint Models

This paper presents a new modeling method for the insertion of needles and, more generally, thin and flexible medical devices into soft tissues. Several medical procedures, such as biopsy, brachytherapy and deep-brain stimulation, rely on the insertion of slender medical devices. In this paper, the interactions between soft tissues and flexible instruments are reproduced using a set of dedicated complementarity constraints. Each constraint is positioned and applied to the deformable models without requiring any remeshing. Our method allows the 3D simulation of different physical phenomena, such as puncture, cutting, and static and dynamic friction, at interactive frame rates. To obtain realistic simulations, the model can be parametrized using experimental data. Our method is validated through a series of typical simulation examples and new, more complex scenarios.

Christian Duriez, Christophe Guébert, Maud Marchal, Stephane Cotin, Laurent Grisoni
Real-Time Prediction of Brain Shift Using Nonlinear Finite Element Algorithms

Patient-specific biomechanical models implemented using specialized nonlinear (i.e. taking into account material and geometric nonlinearities) finite element procedures were applied to predict the deformation field within the brain for five cases of craniotomy-induced brain shift. The procedures utilize the Total Lagrangian formulation with explicit time stepping. The loading was defined by prescribing deformations on the brain surface under the craniotomy. Application of the computed deformation fields to register the preoperative images with the intraoperative ones indicated that the models very accurately predict the intraoperative positions and deformations of the brain's anatomical structures from limited information about the brain surface deformations. For each case, it took less than 40 s to compute the deformation field using a standard personal computer, and less than 4 s using a Graphics Processing Unit (GPU). The results suggest that nonlinear biomechanical models can be regarded as one possible method of complementing medical image processing techniques when conducting non-rigid registration within the real-time constraints of neurosurgery.

Grand Roman Joldes, Adam Wittek, Mathieu Couton, Simon K. Warfield, Karol Miller
Model-Based Estimation of Ventricular Deformation in the Cat Brain

The estimation of ventricular deformation has important clinical implications for neuro-structural disorders such as hydrocephalus. In this paper, a poroelastic model was used to represent deformation effects arising from the ventricular system and was studied in 5 feline experiments. Chronic or acute hydrocephalus was induced by injection of kaolin into the cisterna magna or saline into the ventricles; a catheter was then inserted in the lateral ventricle to drain the fluid out of the brain. The measured displacement data, extracted from pre-drainage and post-drainage MR images, were incorporated into the model through the Adjoint Equations Method. The results indicate that the computational model of the brain and ventricular system captured 33% of the ventricle deformation on average, and the model-predicted intraventricular pressure was accurate to 90% of the recorded value during the chronic hydrocephalus experiments.

Fenghong Liu, S. Scott Lollis, Songbai Ji, Keith D. Paulsen, Alexander Hartov, David W. Roberts
Contact Studies between Total Knee Replacement Components Developed Using Explicit Finite Elements Analysis

A pneumatic simulator of the knee joint with five degrees of freedom (DOF) was developed to determine the correlation between the kinematics of the knee joint and the wear of the polyethylene component of a total knee replacement (TKR) prosthesis. A physical model of the knee joint with TKR was built by rapid prototyping based on CT images from a patient. A clinically available prosthesis was mounted on the knee model. Using a video analysis system and two force and contact pressure plates, kinematics and kinetics data were recorded during normal walking by the patient. The quadriceps muscle force during movement was computed using the AnyBody software. Joint loadings were generated by the simulator based on the recorded and computed data. Using the video analysis system, the precise kinematics of the artificial joint in the simulator was recorded and used as input for an explicit dynamics FE analysis of the joint. The distribution of the contact stresses in the implant was computed during the walking cycle to analyze the prosthesis behavior. The results suggest that the combination of axial loading and anterior-posterior stress is responsible for the abrasive wear of the polyethylene component of the prosthesis.

Lucian G. Gruionu, Gabriel Gruionu, Stefan Pastrama, Nicolae Iliescu, Taina Avramescu
A Hybrid 1D and 3D Approach to Hemodynamics Modelling for a Patient-Specific Cerebral Vasculature and Aneurysm

In this paper we present a hybrid 1D/3D approach to haemodynamics modelling in a patient-specific cerebral vasculature and aneurysm. The geometric model is constructed from a 3D CTA image. A reduced form of the governing equations for blood flow is coupled with an empirical wall equation and applied to the arterial tree. The equation system is solved using a MacCormack finite difference scheme and the results are used as the boundary conditions for a 3D flow solver. The computed wall shear stress (WSS) agrees with published data.

Harvey Ho, Gregory Sands, Holger Schmid, Kumar Mithraratne, Gordon Mallinson, Peter Hunter
Incompressible Cardiac Motion Estimation of the Left Ventricle Using Tagged MR Images

Interpolation from sparse imaging data is typically required to achieve dense, three-dimensional quantification of left ventricular function. Although the heart muscle is known to be incompressible, this fact is ignored by most previous approaches to this problem. In this paper, we present a method to reconstruct a dense representation of the three-dimensional, incompressible deformation of the left ventricle from tagged MR images acquired in both short-axis and long-axis orientations. The approach applies a smoothing, divergence-free vector spline to interpolate velocity fields at intermediate discrete times such that the collection of velocity fields integrates over time to match the observed displacement components. Through this process, the method yields a dense estimate of a displacement field that matches our observations and also corresponds to an incompressible motion.

Xiaofeng Liu, Khaled Z. Abd-Elmoniem, Jerry L. Prince
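
The incompressibility (divergence-free) constraint has a simple 2D analogue that can be checked numerically. This is a sketch of the property itself, not the authors' 3D vector-spline interpolation: a velocity field derived from a stream function is divergence-free by construction, and the grid size and stream function below are arbitrary choices for the illustration.

```python
import numpy as np

# Sample a smooth stream function psi on a unit-square grid.
n = 64
h = 1.0 / (n - 1)
y, x = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
psi = np.sin(np.pi * x) * np.sin(np.pi * y)

# Velocity from the stream function: v = (dpsi/dy, -dpsi/dx).
vx = np.gradient(psi, h, axis=0)    # axis 0 is y
vy = -np.gradient(psi, h, axis=1)   # axis 1 is x

# Divergence dvx/dx + dvy/dy vanishes because mixed partials commute;
# the discrete difference operators act on different axes, so the
# cancellation holds to floating-point precision.
div = np.gradient(vx, h, axis=1) + np.gradient(vy, h, axis=0)
max_div = float(np.max(np.abs(div)))
```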
Vibro-Elastography for Visualization of the Prostate Region: Method Evaluation

We show that vibro-elastography (VE), an ultrasound-based method that creates images of tissue viscoelasticity contrast, can be used for visualization and segmentation of the prostate. We use MRI as the gold standard and show that VE images yield more accurate 3D volumes of the prostate gland than conventional B-mode imaging. Furthermore, we propose two novel measures characterizing the strength and continuity of edges in noisy images. These measures, as well as the contrast-to-noise ratio, demonstrate the utility of VE as a prostate imaging modality. The results of our study show that, in addition to mapping the viscoelastic properties of tissue, VE can play a central role in improving the anatomic visualization of the prostate region and become an integral component of interventional procedures such as brachytherapy.

Seyedeh Sara Mahdavi, Mehdi Moradi, Xu Wen, William J. Morris, Septimiu E. Salcudean
Modeling Respiratory Motion for Cancer Radiation Therapy Based on Patient-Specific 4DCT Data

Prediction of respiratory motion has the potential to substantially improve cancer radiation therapy. A nonlinear finite element (FE) model of respiratory motion during the full breathing cycle has been developed based on a patient-specific pressure-volume relationship and 4D Computed Tomography (CT) data. For geometric modeling of the lungs and ribcage we have constructed an intermediate CAD surface, which avoids multiple geometric smoothing procedures. For physiologically relevant respiratory motion modeling we have used the pressure-volume (PV) relationship to apply pressure loading on the surface of the model. A hyperelastic soft tissue model, developed from experimental observations, has been used. Additionally, pleural sliding has been considered, which results in accurate deformations in the superior-inferior (SI) direction. The finite element model has been validated using 51 landmarks from the CT data. The average differences in position are 0.07 cm (SD = 0.20 cm), 0.07 cm (0.15 cm), and 0.22 cm (0.18 cm) in the left-right, anterior-posterior, and superior-inferior directions, respectively.

Jaesung Eom, Chengyu Shi, Xie George Xu, Suvranu De
Correlating Chest Surface Motion to Motion of the Liver Using ε-SVR – A Porcine Study

In robotic radiosurgery, compensation for the motion of internal organs is vital. This is currently done in two phases: an external surrogate signal (usually active optical markers placed on the patient's chest) is recorded and subsequently correlated to an internal motion signal obtained using stereoscopic X-ray imaging. This internal signal is sampled very infrequently to minimise the patient's exposure to radiation. We have investigated the correlation of the external signal to the motion of the liver in a porcine study using ε-support vector regression (ε-SVR). IR LEDs were placed on the swines' chests. Gold fiducials were placed in the swines' livers and were recorded using a two-plane X-ray system. The results show that a very good correlation model can be built using ε-SVR, in this test clearly outperforming traditional polynomial models by at least 45% and as much as 74%. Using multiple markers simultaneously can further increase the new model's accuracy.

Floris Ernst, Volker Martens, Stefan Schlichting, Armin Beširević, Markus Kleemann, Christoph Koch, Dirk Petersen, Achim Schweikard
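
The ε-insensitive loss at the heart of ε-SVR can be sketched with a linear model fit by subgradient descent. This is a simplification: the study uses full ε-SVR on real marker and fiducial recordings, whereas the surrogate "chest" and "liver" signals, the tube width, and the learning rate below are synthetic choices for illustration.

```python
import numpy as np

# Synthetic surrogate signal (chest marker) and correlated internal signal
# (liver motion) with an unknown scale and offset plus noise.
rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 4 * np.pi, 200))
y = 1.8 * x + 0.3 + rng.normal(0, 0.02, x.size)

# Epsilon-insensitive loss: residuals inside the tube |r| <= eps cost nothing,
# so the fit ignores noise smaller than eps. Fit w, b by subgradient descent.
eps = 0.05
w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    r = w * x + b - y
    g = np.where(r > eps, 1.0, np.where(r < -eps, -1.0, 0.0))
    w -= lr * np.mean(g * x)
    b -= lr * np.mean(g)

# After fitting, most residuals should lie inside the epsilon tube.
inside = float(np.mean(np.abs(w * x + b - y) <= eps))
```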
Respiratory Motion Estimation from Cone-Beam Projections Using a Prior Model

Respiratory motion introduces uncertainties when planning and delivering radiotherapy for lung cancer patients. Cone-beam projections acquired in the treatment room could provide valuable information for building motion models, useful for gated treatment delivery or motion compensated reconstruction. We propose a method for estimating 3D+T respiratory motion from the 2D+T cone-beam projection sequence by including prior knowledge about the patient’s breathing motion. Motion estimation is accomplished by maximizing the similarity of the projected view of a patient specific model to observed projections of the cone-beam sequence. This is done semi-globally, considering entire breathing cycles. Using realistic patient data, we show that the method is capable of good prediction of the internal patient motion from cone-beam data, even when confronted with interfractional changes in the breathing motion.

Jef Vandemeulebroucke, Jan Kybic, Patrick Clarysse, David Sarrut
Heart Motion Abnormality Detection via an Information Measure and Bayesian Filtering

This study investigates heart wall motion abnormality detection with an information-theoretic measure of heart motion based on Shannon's differential entropy (SDE) and recursive Bayesian filtering. Heart wall motion is generally analyzed using functional images, which are subject to noise and segmentation inaccuracies, so incorporation of prior knowledge is crucial for improving accuracy. The Kalman filter, a well-known recursive Bayesian filter, is used in this study to estimate the left ventricular (LV) cavity points given incomplete and noisy data and a dynamic model. However, due to similarities between the statistical information of normal and abnormal heart motions, detecting and classifying abnormality is a challenging problem, which we propose to investigate with a global measure based on the SDE. We further derive two other possible information-theoretic abnormality detection criteria, one based on Rényi entropy and the other on Fisher information. The proposed method analyzes wall motion quantitatively by constructing distributions of the normalized radial distance estimates of the LV cavity. Using 269×20 segmented LV cavities of short-axis magnetic resonance images obtained from 30 subjects, the experimental analysis demonstrates that the proposed SDE criterion can lead to significant improvement over other features prevalent in the literature related to the LV cavity, namely mean radial displacement and mean radial velocity.

Kumaradevan Punithakumar, Shuo Li, Ismail Ben Ayed, Ian Ross, Ali Islam, Jaron Chong
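
As a minimal sketch of the recursive Bayesian (Kalman) estimation step the paper builds on, the following 1D constant-velocity filter tracks a noisy position signal. The dynamic model, noise levels, and data are invented for the example and are not the paper's LV cavity model.

```python
import numpy as np

# Constant-velocity model: state is (position, velocity); only position
# is observed, with additive Gaussian noise.
rng = np.random.default_rng(2)
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # observation model
Q = 1e-4 * np.eye(2)                    # process noise covariance
R = np.array([[0.25]])                  # measurement noise covariance

true_pos = np.linspace(0.0, 10.0, 50)
z = true_pos + rng.normal(0, 0.5, 50)   # noisy measurements

x = np.array([0.0, 0.0])                # initial state estimate
P = np.eye(2)                           # initial covariance
est = []
for zk in z:
    # Predict step.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update step with the measurement zk.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (np.array([zk]) - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0])

est = np.array(est)
```

After the filter settles, the estimates track the true positions more closely than the raw measurements do.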
Automatic Image-Based Cardiac and Respiratory Cycle Synchronization and Gating of Image Sequences

We propose a novel method to detect the current state of a quasi-periodic system from image sequences, which in turn enables us to synchronize/gate the image sequences to obtain images of the organ system at similar configurations. The method uses the cumulated phase shift in the spectral domain of successive image frames as a measure of the net motion of objects in the scene. The proposed method is applicable to 2D and 3D time-varying sequences and is not specific to the imaging modality. We demonstrate its effectiveness on X-ray angiographic and cardiac and liver ultrasound sequences. Knowledge of the current (cardiac or respiratory) phase of the system opens up the possibility of a purely image-based cardiac and respiratory gating scheme for interventional and radiotherapy procedures.

Hari Sundar, Ali Khamene, Liron Yatziv, Chenyang Xu
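
The cumulated-phase-shift idea rests on the Fourier shift theorem: translating an object multiplies its spectrum by a linear phase. A 1D sketch (synthetic frames, not the paper's image data) recovers the per-frame shift by phase correlation and cumulates the shifts into a gating signal:

```python
import numpy as np

# A Gaussian blob stands in for an "object" in a 1D image frame.
n = 128
base = np.exp(-0.5 * ((np.arange(n) - 40) / 6.0) ** 2)

def phase_shift(f1, f0):
    """Estimate the integer shift taking frame f0 to f1 via the
    normalized cross-power spectrum (phase correlation)."""
    cross = np.fft.fft(f1) * np.conj(np.fft.fft(f0))
    corr = np.fft.ifft(cross / (np.abs(cross) + 1e-12)).real
    k = int(np.argmax(corr))
    return k if k <= n // 2 else k - n   # wrap to a signed shift

# Build a sequence of frames, each a circular shift of the previous one.
shifts = [2, 3, -1, 4]
frames = [base]
for s in shifts:
    frames.append(np.roll(frames[-1], s))

# Per-frame shifts are recovered and cumulated into a net-motion signal.
cumulated = np.cumsum([phase_shift(frames[i + 1], frames[i])
                       for i in range(len(shifts))])
```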
Dynamic Cone Beam Reconstruction Using a New Level Set Formulation

This paper addresses an approach to tomographic reconstruction from rotational angiography data as generated by C-arms in cardiac imaging. Since the rotational acquisition scheme forces a trade-off between consistency of the scene and reasonable baselines, most existing reconstruction techniques fail at recovering the 3D+t scene. We propose a new reconstruction framework based on variational level sets, including a new data term for symbolic reconstruction as well as a novel incorporation of motion into the level set formalism. The resulting simultaneous estimation of shape and motion proves feasible in the presented experiments. Since the proposed formulation offers great flexibility in incorporating other data terms as well as hard or soft constraints, it allows adaptation to a wider range of problems and could be of interest to other reconstruction settings as well.

Andreas Keil, Jakob Vogel, Günter Lauritsch, Nassir Navab
Spatio-temporal Reconstruction of dPET Data Using Complex Wavelet Regularisation

Traditionally, dynamic PET studies reconstruct temporally contiguous PET images using algorithms which ignore the inherent consistency between frames. We present a method which imposes a regularisation constraint based on wavelet denoising. This is achieved efficiently using Kingsbury's Dual-Tree Complex Wavelet Transform (DT-CWT), which has several important advantages over the traditional discrete wavelet transform: shift invariance, an implicit measure of local phase, and directional selectivity. In this paper, we apply the decomposition to the full spatio-temporal volume and use it for the reconstruction of dynamic (spatio-temporal) PET data.

Instead of using traditional wavelet thresholding schemes, we introduce a locally defined, empirically determined cross-scale regularisation technique. We show that wavelet-based regularisation has the potential to produce superior reconstructions, and examine the effect that various levels of boundary enhancement have on the overall images.

We demonstrate that wavelet-based spatio-temporally regularised reconstructions outperform conventional Gaussian smoothing in simulated and clinical experiments, in terms of both signal-to-noise ratio (SNR) and mean square error (MSE), and remove the need to post-smooth the reconstruction.

Andrew McLennan, Michael Brady
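
The shrink-in-the-wavelet-domain idea behind the regularisation can be sketched with a single-level Haar transform and soft thresholding. This is an illustration only: the paper uses the DT-CWT with a cross-scale rule, whereas the Haar filter, threshold value, and kinetics-like test signal below are arbitrary choices for the example.

```python
import numpy as np

# A smooth kinetics-like curve corrupted by Gaussian noise.
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 256)
clean = np.exp(-3 * t) - np.exp(-20 * t)
noisy = clean + rng.normal(0, 0.05, t.size)

def haar(x):
    """Single-level orthonormal Haar transform: averages and differences."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return s, d

def ihaar(s, d):
    """Inverse of the single-level Haar transform."""
    x = np.empty(2 * s.size)
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

# Soft-threshold the detail coefficients: noise lives mostly in the details
# of a smooth signal, so shrinking them denoises with little signal loss.
s, d = haar(noisy)
lam = 0.1
d = np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)
denoised = ihaar(s, d)
```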
Evaluation of q-Space Sampling Strategies for the Diffusion Magnetic Resonance Imaging

We address the problem of efficient sampling of the diffusion space for the Diffusion Magnetic Resonance Imaging (dMRI) modality. While recent scanner improvements enable the acquisition of more and more detailed images, it is still unclear which q-space sampling strategy gives the best performance. We evaluate several q-space sampling distributions by an approach based on the approximation of the MR signal by a series expansion of Spherical Harmonics and Laguerre-Gaussian functions. With the help of synthetic experiments, we identify a subset of sampling distributions which leads to the best reconstructed data.

Haz-Edine Assemlal, David Tschumperlé, Luc Brun
On the Blurring of the Funk–Radon Transform in Q–Ball Imaging

One known issue in Q–Ball imaging is the blurring in the radial integral defining the Orientation Distribution Function of fiber bundles, due to the computation of the Funk–Radon Transform (FRT). Three novel techniques to overcome this problem are presented, all of them based upon different assumptions about the behavior of the attenuation signal outside the sphere densely sampled in HARDI data sets. A systematic study with synthetic data has been carried out to show that the FRT blurring is not as important as the error introduced by some unrealistic assumptions, and only one of the three techniques (the one with the least restrictive assumption) improves the accuracy of Q–Balls.

Antonio Tristán-Vega, Santiago Aja-Fernández, Carl-Fredrik Westin
Multiple Q-Shell ODF Reconstruction in Q-Ball Imaging

Q-ball imaging (QBI) is a high angular resolution diffusion imaging (HARDI) technique which has proven very successful in resolving multiple intravoxel fiber orientations in MR images. The standard computation of the orientation distribution function (ODF, the probability of diffusion in a given direction) from q-ball data uses linear radial projection, neglecting the change in the volume element along the ray, thereby resulting in distributions different from the true ODFs. A new technique has recently been proposed that, by considering the solid angle factor, uses the mathematically correct definition of the ODF and results in a dimensionless and normalized ODF expression from a single q-shell. In this paper, we extend this technique to exploit HARDI data from multiple q-shells. We consider the more flexible multi-exponential model for the diffusion signal, and show how to efficiently compute the ODFs in constant solid angle. We describe our method and demonstrate its improved performance on both artificial and real HARDI data.

Iman Aganj, Christophe Lenglet, Guillermo Sapiro, Essa Yacoub, Kamil Ugurbil, Noam Harel

Neuro, Cell and Multiscale Image Analysis

Lossless Online Ensemble Learning (LOEL) and Its Application to Subcortical Segmentation

In this paper, we study the classification problem in the situation where large volumes of training data become available sequentially (online learning). In medical imaging, this is typical, e.g., a 3D brain MRI dataset may be gradually collected from a patient population, and not all of the data is available when the analysis begins. First, we describe two common ensemble learning algorithms, AdaBoost and bagging, and their corresponding online learning versions. We then show why each is ineffective for segmenting a gradually increasing set of medical images. Instead, we introduce a new ensemble learning algorithm, termed Lossless Online Ensemble Learning (LOEL). This algorithm is lossless in the online case, compared to its batch mode. LOEL outperformed online-AdaBoost and online-bagging when validated on a standardized dataset; it also performed better when used to segment the hippocampus from brain MRI scans of patients with Alzheimer’s Disease and matched healthy subjects. Among those tested, LOEL largely outperformed the alternative online learning algorithms and gave excellent error metrics that were consistent between the online and offline case; it also accurately distinguished AD subjects from healthy controls based on automated measures of hippocampal volume.

Jonathan H. Morra, Zhuowen Tu, Arthur W. Toga, Paul M. Thompson
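
For context, the online-bagging baseline that LOEL is compared against can be sketched as follows (after Oza and Russell's online bagging: since a batch bootstrap replicate contains each example Binomial(n, 1/n) ≈ Poisson(1) times, online bagging presents each arriving example k ~ Poisson(1) times to each base learner). The trivial running-mean "learner" and the Gaussian stream here are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

class RunningMean:
    """A trivial online base learner: an incrementally updated mean."""
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n

# Ten base learners, trained on a stream of examples arriving one at a time.
ensemble = [RunningMean() for _ in range(10)]
stream = rng.normal(5.0, 1.0, 1000)

for x in stream:
    for learner in ensemble:
        # Each learner sees each example k ~ Poisson(1) times, emulating
        # a bootstrap sample without storing the data.
        for _ in range(rng.poisson(1.0)):
            learner.update(x)

# Aggregate the ensemble (here by averaging the learners' outputs).
estimate = float(np.mean([m.mean for m in ensemble]))
```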
Improved Maximum a Posteriori Cortical Segmentation by Iterative Relaxation of Priors

Thickness measurements of the cerebral cortex can aid diagnosis and provide valuable information about the temporal evolution of several diseases such as Alzheimer’s, Huntington’s, Schizophrenia, as well as normal ageing. The presence of deep sulci and ‘collapsed gyri’ (caused by the loss of tissue in patients with neurodegenerative diseases) complicates the tissue segmentation due to partial volume (PV) effects and limited resolution of MRI. We extend existing work to improve the segmentation and thickness estimation in a single framework. We model the PV effect using a maximum a posteriori approach with novel iterative modification of the prior information to enhance deep sulci and gyri delineation. We use a voxel based approach to estimate thickness using the Laplace equation within a Lagrangian-Eulerian framework leading to sub-voxel accuracy. Experiments performed on a new digital phantom and on clinical Alzheimer’s disease MR images show improvements in both accuracy and robustness of the thickness measurements, as well as a reduction of errors in deep sulci and collapsed gyri.

Manuel Jorge Cardoso, Matthew J. Clarkson, Gerard R. Ridgway, Marc Modat, Nick C. Fox, Sebastien Ourselin
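The Laplace-equation approach to cortical thickness mentioned above can be illustrated with a toy example. This is a hedged sketch, not the authors' implementation: a flat 2-D "cortical ribbon" is placed between an inner boundary (potential 0) and an outer boundary (potential 1), and the potential field is solved by Jacobi relaxation; in the full method, thickness is then measured along the streamlines of the potential gradient.

```python
# Toy Laplace-equation potential field for thickness estimation (illustrative).
# Row 0 plays the role of the WM/GM boundary (phi = 0) and row 4 the GM/CSF
# boundary (phi = 1); the interior rows are the "gray matter".
H, W = 5, 6
phi = [[0.0] * W for _ in range(H)]
for c in range(W):
    phi[H - 1][c] = 1.0          # Dirichlet condition on the outer boundary

for _ in range(2000):            # Jacobi relaxation on the interior rows
    new = [row[:] for row in phi]
    for r in range(1, H - 1):
        for c in range(W):
            left = phi[r][max(c - 1, 0)]      # reflecting side boundaries
            right = phi[r][min(c + 1, W - 1)]
            new[r][c] = 0.25 * (phi[r - 1][c] + phi[r + 1][c] + left + right)
    phi = new

# For this flat ribbon the potential converges to a linear ramp phi = r/(H-1),
# so the equipotential layers are evenly spaced between the two boundaries.
middle = phi[2][3]
print(round(middle, 2))          # → 0.5, halfway through the ribbon
```

In real cortical geometry the equipotential surfaces bend to follow sulci and gyri, which is what gives the Laplacian definition of thickness its sub-voxel behavior.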
Anatomically Informed Bayesian Model Selection for fMRI Group Data Analysis

A new approach for fMRI group data analysis is introduced to overcome the limitations of standard voxel-based testing methods, such as Statistical Parametric Mapping (SPM). Using a Bayesian model selection framework, the functional network associated with a certain cognitive task is selected according to the posterior probabilities of mean region activations, given a pre-defined anatomical parcellation of the brain. This approach enables us to control a Bayesian risk that balances false positives and false negatives, unlike the SPM-like approach, which only controls false positives. On data from a mental calculation experiment, it detected the functional network known to be involved in number processing, whereas the SPM-like approach either swelled or missed the different activation regions.

Merlin Keller, Marc Lavielle, Matthieu Perrot, Alexis Roche
A Computational Model of Cerebral Cortex Folding

Folding of the human cerebral cortex has intrigued researchers for many years. Quantitative description of the cortical folding pattern and understanding of the underlying mechanisms have emerged as important research goals. This paper presents a computational 3D geometric model of cerebral cortex folding that is initialized by MRI data of the human fetus brain and deformed under the governance of partial differential equations modeling cortical growth. The simulations of this 3D geometric model provide computational support for the following hypotheses: 1) Mechanical constraints of the brain skull regulate the cortical folding process. 2) The cortical folding pattern is dependent on the global cell growth rate in the whole cortex. 3) The cortical folding pattern is dependent on relative degrees of tethering of different cortical areas and the initial geometry.

Jingxin Nie, Gang Li, Lei Guo, Tianming Liu
Tensor-Based Morphometry of Fibrous Structures with Application to Human Brain White Matter

Tensor-based morphometry (TBM) is a powerful approach for examining shape changes in anatomy both across populations and in time. Our work extends the standard TBM for quantifying local volumetric changes to establish both rich and intuitive descriptors of shape changes in fibrous structures. It leverages the data from diffusion tensor imaging to determine local spatial configuration of fibrous structures and combines this information with spatial transformations derived from image registration to quantify fibrous structure-specific changes, such as local changes in fiber length and in thickness of fiber bundles. In this paper, we describe the theoretical framework of our approach in detail and illustrate its application to study brain white matter. Our results show that additional insights can be gained with the proposed analysis.

Hui Zhang, Paul A. Yushkevich, Daniel Rueckert, James C. Gee
A Fuzzy Region-Based Hidden Markov Model for Partial-Volume Classification in Brain MRI

We present a novel fuzzy region-based hidden Markov model (frbHMM) for unsupervised partial-volume classification in brain magnetic resonance images (MRIs). The primary contribution is an efficient graphical representation of 3D image data in which irregularly-shaped image regions have memberships to a number of classes rather than one discrete class. Our model groups voxels into regions for efficient processing, but also refines the region boundaries to the voxel level for optimal accuracy. This strategy is most effective in data where partial-volume effects due to resolution-limited image acquisition result in intensity ambiguities. Our frbHMM employs a forward-backward scheme for parameter estimation through iterative computation of region class likelihoods. We validate our proposed method on simulated and clinical brain MRIs of both normal and multiple sclerosis subjects. Quantitative results demonstrate the advantages of our fuzzy model over the discrete approach with significant improvements in classification accuracy (30% reduction in mean square error).

Albert Huang, Rafeef Abugharbieh, Roger Tam
Brain Connectivity Using Geodesics in HARDI

We develop an algorithm for brain connectivity assessment using geodesics in HARDI (high angular resolution diffusion imaging). We propose to recast the problem of finding fiber bundles and connectivity maps as the calculation of shortest paths on a Riemannian manifold defined from fiber ODFs computed from HARDI measurements. Several experiments on real data show that our method is able to segment fiber bundles that are not easily recovered by other existing methods.

Mickaël Péchaud, Maxime Descoteaux, Renaud Keriven
Functional Segmentation of fMRI Data Using Adaptive Non-negative Sparse PCA (ANSPCA)

We propose a novel method for functional segmentation of fMRI data that incorporates multiple functional attributes such as activation effects and functional connectivity, under a single framework. Similar to PCA, our method exploits the structure of the correlation matrix but with neighborhood information adaptively integrated to encourage detection of spatially contiguous clusters yet without falsely pooling non-active voxels near the functional boundaries. In addition, our method adaptively combines PCA and replicator dynamics, which we show to be equivalent to non-negative sparse PCA, based on the sparsity of the activation pattern. We validate our method quantitatively on synthetic data and demonstrate that it outperforms methods including replicator dynamics, PCA, Gaussian mixture models, and general linear models. Furthermore, when applied to real fMRI data, our method successfully segmented the Brodmann area 6 into its known functional sub-regions, whereas other conventional methods that we examined failed to attain such delineation.

Bernard Ng, Rafeef Abugharbieh, Martin J. McKeown
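The replicator-dynamics ingredient of the method above can be illustrated in isolation. This is a hedged toy example, not the authors' ANSPCA code: the iteration x ← x · (Cx) / (xᵀCx) on a non-negative correlation matrix C drives the weight vector toward a sparse, non-negative solution concentrated on a strongly inter-correlated subset of voxels, which is the connection to non-negative sparse PCA that the abstract refers to. The 4×4 matrix below is made up for demonstration.

```python
# Replicator dynamics on a toy correlation matrix (illustrative values only).
C = [
    [1.0, 0.9, 0.1, 0.1],   # voxels 0-1: a tightly correlated "active" cluster
    [0.9, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.2],   # voxels 2-3: weakly correlated background
    [0.1, 0.1, 0.2, 1.0],
]
x = [0.25] * 4              # uniform initial weights on the probability simplex

for _ in range(200):
    Cx = [sum(C[i][j] * x[j] for j in range(4)) for i in range(4)]
    fitness = sum(x[i] * Cx[i] for i in range(4))   # x^T C x
    x = [x[i] * Cx[i] / fitness for i in range(4)]  # replicator update

# Mass concentrates on the correlated cluster, yielding a sparse,
# non-negative principal vector supported on voxels 0 and 1.
print([round(w, 2) for w in x])   # → [0.5, 0.5, 0.0, 0.0]
```

Each update multiplies a component by its "fitness ratio" Cxᵢ / xᵀCx, so components whose correlations with the current weight vector are below average decay to zero, producing sparsity without an explicit penalty term.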
Genetics of Anisotropy Asymmetry: Registration and Sample Size Effects

Brain asymmetry has been a topic of interest for neuroscientists for many years. The advent of diffusion tensor imaging (DTI) allows researchers to extend the study of asymmetry to a microscopic scale by examining fiber integrity differences across hemispheres rather than the macroscopic differences in shape or structure volumes. Even so, the power to detect these microarchitectural differences depends on the sample size and on how the brain images are registered. We fluidly registered 4 Tesla DTI scans from 180 healthy adult twins (45 identical and 45 fraternal pairs) to a geometrically-centered population mean template. We computed voxelwise maps of significant asymmetries (left/right hemisphere differences) for common fiber anisotropy indices (FA, GA). Quantitative genetic models revealed that 47-62% of the variance in asymmetry was due to genetic differences in the population. We studied how these heritability estimates varied with the type of registration target (T1- or T2-weighted) and with sample size. All methods consistently found that genetic factors strongly determined the lateralization of fiber anisotropy, facilitating the quest for specific genes that might influence brain asymmetry and fiber integrity.

Neda Jahanshad, Agatha D. Lee, Natasha Leporé, Yi-Yu Chou, Caroline C. Brun, Marina Barysheva, Arthur W. Toga, Katie L. McMahon, Greig I. de Zubicaray, Margaret J. Wright, Paul M. Thompson
Extending Genetic Linkage Analysis to Diffusion Tensor Images to Map Single Gene Effects on Brain Fiber Architecture

We extended genetic linkage analysis - an analysis widely used in quantitative genetics - to 3D images to analyze single gene effects on brain fiber architecture. We collected 4 Tesla diffusion tensor images (DTI) and genotype data from 258 healthy adult twins and their non-twin siblings. After high-dimensional fluid registration, at each voxel we estimated the genetic linkage between the single nucleotide polymorphism (SNP), Val66Met (dbSNP number rs6265), of the BDNF gene (brain-derived neurotrophic factor) with fractional anisotropy (FA) derived from each subject’s DTI scan, by fitting structural equation models (SEM) from quantitative genetics. We also examined how image filtering affects the effect sizes for genetic linkage by examining how the overall significance of voxelwise effects varied with respect to full width at half maximum (FWHM) of the Gaussian smoothing applied to the FA images. Raw FA maps with no smoothing yielded the greatest sensitivity to detect gene effects, when corrected for multiple comparisons using the false discovery rate (FDR) procedure. The BDNF polymorphism significantly contributed to the variation in FA in the posterior cingulate gyrus, where it accounted for around 90-95% of the total variance in FA. Our study generated the first maps to visualize the effect of the BDNF gene on brain fiber integrity, suggesting that common genetic variants may strongly determine white matter integrity.

Ming-Chang Chiang, Christina Avedissian, Marina Barysheva, Arthur W. Toga, Katie L. McMahon, Greig I. de Zubicaray, Margaret J. Wright, Paul M. Thompson
Vascular Territory Image Analysis Using Vessel Encoded Arterial Spin Labeling

Arterial Spin Labeling (ASL) permits the non-invasive assessment of cerebral perfusion, by magnetically labeling all the blood flowing into the brain. Vessel encoded (VE) ASL extends this concept by introducing spatial modulations of the labeling procedure, resulting in different patterns of label applied to the blood from different vessels. Here a Bayesian inference solution to the analysis of VE-ASL is presented based on a description of the relative locations of labeled vessels and a probabilistic classification of brain tissue to vessel source. In simulation and on real data the method is shown to reliably determine vascular territories in the brain, including the case where the number of vessels exceeds the number of independent measurements.

Michael A. Chappell, Thomas W. Okell, Peter Jezzard, Mark W. Woolrich
Predicting MGMT Methylation Status of Glioblastomas from MRI Texture

In glioblastoma (GBM), promoter methylation of the DNA repair gene MGMT is associated with benefit from chemotherapy. Because MGMT promoter methylation status cannot be determined in all cases, a surrogate for the methylation status would be a useful clinical tool. Correlation between methylation status and magnetic resonance imaging features has been reported, suggesting that non-invasive detection of MGMT promoter methylation status is possible. In this work, a retrospective analysis of T2, FLAIR and T1-post contrast MR images in patients with newly diagnosed GBM is performed using L1-regularized neural networks. Tumor texture, assessed quantitatively, was utilized for predicting the MGMT promoter methylation status of a GBM in 59 patients. The texture features were extracted using a space-frequency texture analysis based on the S-transform and utilized by a neural network to predict the methylation status of a GBM. Blinded classification of MGMT promoter methylation status reached an average accuracy of 87.7%, indicating that the proposed technique is accurate enough for clinical use.

Ilya Levner, Sylvia Drabycz, Gloria Roldan, Paula De Robles, J. Gregory Cairncross, Ross Mitchell
Tumor Invasion Margin on the Riemannian Space of Brain Fibers

Gliomas are one of the most challenging tumors to treat or control locally. One of the main challenges is determining which areas of the apparently normal brain contain glioma cells, as gliomas are known to infiltrate for several centimeters beyond the clinically apparent lesion visualized on standard CT or MRI. To ensure that radiation treatment encompasses the whole tumor, including the cancerous cells not revealed by MRI, doctors treat a volume of brain extending 2 cm out from the margin of the visible tumor. This expanded volume often includes healthy, non-cancerous brain tissue.

Knowing that glioma cells preferentially spread along nerve fibers, we propose the use of a geodesic distance on the Riemannian manifold of brain fibers to replace the Euclidean distance used in clinical practice and to correctly identify the tumor invasion margin. To compute the geodesic distance we use actual DTI data from patients with glioma and compare our predicted growth with follow-up MRI scans. Results show improvement in predicting the invasion margin when using the geodesic distance as opposed to the 2 cm conventional Euclidean distance.

Dana Cobzas, Parisa Mosayebi, Albert Murtha, Martin Jagersand
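The geodesic idea above can be sketched on a toy 2-D grid. This is a hedged illustration, not the paper's method: shortest paths are computed with Dijkstra's algorithm on a voxel graph whose edge weights are low along a hypothetical fiber tract and high across it (the 0.2 / 1.0 costs are made-up stand-ins for the DTI-derived Riemannian metric), so the "invasion margin" reaches farther along fibers than a fixed Euclidean expansion would.

```python
# Toy anisotropic geodesic distance via Dijkstra on a 5x5 voxel grid.
import heapq

H, W = 5, 5
FIBER_ROW = 2                               # a horizontal fiber tract at row 2

def step_cost(r, dr):
    # Cheap to step along the fiber (horizontal moves within row 2),
    # expensive otherwise; illustrative weights, not real DTI values.
    return 0.2 if (r == FIBER_ROW and dr == 0) else 1.0

seed = (2, 0)                               # a point on the visible tumor margin
dist = {(r, c): float("inf") for r in range(H) for c in range(W)}
dist[seed] = 0.0
heap = [(0.0, seed)]
while heap:
    d, (r, c) = heapq.heappop(heap)
    if d > dist[(r, c)]:
        continue                            # stale queue entry
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < H and 0 <= nc < W:
            nd = d + step_cost(nr, dr)
            if nd < dist[(nr, nc)]:
                dist[(nr, nc)] = nd
                heapq.heappush(heap, (nd, (nr, nc)))

along_fiber = dist[(2, 4)]                  # 4 voxels away, along the tract
across_fiber = dist[(0, 0)]                 # 2 voxels away, across the tract
print(along_fiber, across_fiber)            # → 0.8 2.0
```

Under this metric a voxel four steps away along the tract (cost 0.8) is geodesically closer than one two steps across it (cost 2.0), mirroring why a geodesic margin can outperform a uniform 2 cm Euclidean one.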
A Conditional Random Field Approach for Coupling Local Registration with Robust Tissue and Structure Segmentation

We consider a general modelling strategy to handle in a unified way a number of tasks essential to MR brain scan analysis. Our approach is based on the explicit definition of a Conditional Random Field (CRF) model decomposed into components to be specified according to the targeted tasks. For a specific illustration, we define a CRF model that combines robust-to-noise and to nonuniformity Markovian tissue and structure segmentations with local affine atlas registration. The evaluation performed on both phantoms and real 3T images shows good results and, in particular, points out the gain in introducing registration as a model component. Besides, our modeling and estimation scheme provide general guidelines to deal with complex joint processes for medical image analysis.

Benoit Scherrer, Florence Forbes, Michel Dojat
Robust Atlas-Based Brain Segmentation Using Multi-structure Confidence-Weighted Registration

We present a robust and accurate atlas-based brain segmentation method which uses multiple initial structure segmentations to simultaneously drive the image registration and achieve anatomically constrained correspondence. We also derive segmentation confidence maps (SCMs) from a given manually segmented training set; these characterize the accuracy of a given set of segmentations as compared to manual segmentations. We incorporate these in our cost term to weight the influence of initial segmentations in the multi-structure registration, such that low confidence regions are given lower weight in the registration. To account for possible correspondence errors in the underlying registration, we present a supervised technique for correcting the atlas segmentation. We applied our multi-structure atlas-based segmentation and supervised atlas correction to segment the amygdala in a set of 23 autistic patients and controls using leave-one-out cross validation, achieving a Dice overlap score of 0.84. We also applied our method to eight subcortical structures in MRI from the Internet Brain Segmentation Repository, with results better than or comparable to competing methods.

Ali R. Khan, Moo K. Chung, Mirza Faisal Beg
Discriminative, Semantic Segmentation of Brain Tissue in MR Images

A new algorithm is presented for the automatic segmentation and classification of brain tissue from 3D MR scans. It uses discriminative Random Decision Forest classification and takes into account partial volume effects. This is combined with correction of intensities for the MR bias field, in conjunction with a learned model of spatial context, to achieve accurate voxel-wise classification. Our quantitative validation, carried out on existing labelled datasets, demonstrates improved results over the state of the art, especially for the cerebrospinal fluid class which is the most difficult to label accurately.

Zhao Yi, Antonio Criminisi, Jamie Shotton, Andrew Blake
Use of Simulated Atrophy for Performance Analysis of Brain Atrophy Estimation Approaches

In this paper, we study the performance of popular brain atrophy estimation algorithms using a simulated gold standard. The availability of a gold standard facilitates a sound evaluation of the measures of atrophy estimation, which is otherwise complicated. Firstly, we propose an approach for the construction of a gold standard. It involves the simulation of realistic brain tissue loss based on the estimation of topology-preserving B-spline based deformation fields. Using this gold standard, we present an evaluation of three standard brain atrophy estimation methods (SIENA, SIENAX and BSI) in the presence of bias field inhomogeneity and noise. The effect of brain lesion load on the measured atrophy is also evaluated. Our experiments demonstrate that SIENA, SIENAX and BSI show a deterioration in their performance in the presence of bias field inhomogeneity and noise. The observed mean absolute errors in the measured Percentage of Brain Volume Change (PBVC) are 0.35%±0.38, 2.03%±1.46 and 0.91%±0.80 for SIENA, SIENAX and BSI, respectively, for simulated whole brain atrophies in the range 0-1%.

Swati Sharma, Vincent Noblet, François Rousseau, Fabrice Heitz, Lucien Rumbach, Jean-Paul Armspach
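The PBVC metric used in the evaluation above is simple enough to state in code. This is an illustrative sketch with made-up volumes, not the paper's data: PBVC is the percentage change between baseline and follow-up brain volume, and a method's quality is summarized by the mean absolute error of its PBVC estimates against the simulated gold standard.

```python
# Percentage of Brain Volume Change (PBVC) and mean absolute error against a
# simulated gold standard. All numbers below are invented for demonstration.

def pbvc(v_baseline, v_followup):
    """Percentage of brain volume change between two time points."""
    return 100.0 * (v_followup - v_baseline) / v_baseline

# Simulated gold-standard atrophy: baseline 1500 ml shrinking slightly.
gold = [pbvc(1500.0, v) for v in (1497.0, 1494.0, 1488.0)]
# Hypothetical PBVC values reported by an atrophy-estimation method.
measured = [-0.25, -0.55, -0.70]

mae = sum(abs(g - m) for g, m in zip(gold, measured)) / len(gold)
print([round(g, 2) for g in gold], round(mae, 2))   # → [-0.2, -0.4, -0.8] 0.1
```

The availability of a simulated gold standard is what makes this MAE computable at all; with real scans the true PBVC is unknown.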
Fast and Robust 3-D MRI Brain Structure Segmentation

We present a novel method for the automatic detection and segmentation of (sub-)cortical gray matter structures in 3-D magnetic resonance images of the human brain. Essentially, the method is a top-down segmentation approach based on the recently introduced concept of Marginal Space Learning (MSL). We show that MSL naturally decomposes the parameter space of anatomy shapes along decreasing levels of geometrical abstraction into subspaces of increasing dimensionality by exploiting parameter invariance. At each level of abstraction, i.e., in each subspace, we build strong discriminative models from annotated training data, and use these models to narrow the range of possible solutions until a final shape can be inferred. Contextual information is introduced into the system by representing candidate shape parameters with high-dimensional vectors of 3-D generalized Haar features and steerable features derived from the observed volume intensities. Our system allows us to detect and segment 8 (sub-)cortical gray matter structures in T1-weighted 3-D MR brain scans from a variety of different scanners in 13.9 sec. on average, which is faster than most of the approaches in the literature. In order to ensure comparability of the achieved results and to validate robustness, we evaluate our method on two publicly available gold standard databases consisting of several T1-weighted 3-D brain MR scans from different scanners and sites. The proposed method achieves an accuracy better than most state-of-the-art approaches using standardized distance and overlap metrics.

Michael Wels, Yefeng Zheng, Gustavo Carneiro, Martin Huber, Joachim Hornegger, Dorin Comaniciu
Multiple Sclerosis Lesion Segmentation Using an Automatic Multimodal Graph Cuts

Graph Cuts have been shown as a powerful interactive segmentation technique in several medical domains. We propose to automate the Graph Cuts in order to automatically segment Multiple Sclerosis (MS) lesions in MRI. We replace the manual interaction with a robust EM-based approach in order to discriminate between MS lesions and the Normal Appearing Brain Tissues (NABT). Evaluation is performed in synthetic and real images showing good agreement between the automatic segmentation and the target segmentation. We compare our algorithm with the state of the art techniques and with several manual segmentations. An advantage of our algorithm over previously published ones is the possibility to semi-automatically improve the segmentation due to the Graph Cuts interactive feature.

Daniel García-Lorenzo, Jeremy Lecoeur, Douglas L. Arnold, D. Louis Collins, Christian Barillot
Towards Accurate, Automatic Segmentation of the Hippocampus and Amygdala from MRI

We describe progress towards fully automatic segmentation of the hippocampus (HC) and amygdala (AG) in human subjects from MRI data. Three methods are described and tested with a set of MRIs from 80 young normal controls, using manual labeling of the HC and AG as a gold standard. The methods include: 1) our ANIMAL atlas-based method that uses non-linear registration to a pre-labeled non-linear average template (ICBM152). HC and AG labels, defined on the template are mapped through the inverse transformation to segment these structures on the subject’s MRI; 2) template-based segmentation, where we select the most similar MRI from the set of 80 labeled datasets to use as a template in the standard ANIMAL segmentation scheme; 3) label fusion methods where we combine segmentations from the ‘n’ most similar templates. The label fusion technique yields the best results with median kappas of 0.886 and 0.826 for HC and AG, respectively.

D. Louis Collins, Jens C. Pruessner
An Object-Based Method for Rician Noise Estimation in MR Images

The estimation of the noise level in MR images is used to assess the consistency of statistical analysis or as an input parameter in some image processing techniques. Most of the existing Rician noise estimation methods are based on background statistics, and as such are sensitive to ghosting artifacts. In this paper, a new object-based method is proposed. This method is based on the adaptation of the Median Absolute Deviation (MAD) estimator in the wavelet domain for Rician noise. The adaptation for Rician noise is performed by using only the wavelet coefficients corresponding to the object and by correcting the estimation with an iterative scheme based on the SNR of the image. A quantitative validation on a synthetic phantom with artifacts is presented and a new validation framework is proposed to perform quantitative validation on real data. The results show the accuracy and the robustness of the proposed method.

Pierrick Coupé, José V. Manjón, Elias Gedamu, Douglas Arnold, Montserrat Robles, D. Louis Collins
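The classical ingredient the method above adapts can be sketched directly. This is a hedged toy example, not the paper's estimator: the MAD rule σ ≈ median(|d|) / 0.6745 applied to first-level Haar wavelet detail coefficients of a noisy signal. The full method additionally restricts the estimate to object coefficients and applies an iterative SNR-based Rician correction, which is not reproduced here.

```python
# MAD noise estimation on first-level Haar detail coefficients (illustrative).
import random
import statistics

random.seed(0)
TRUE_SIGMA = 5.0
signal = [100.0] * 4096                      # flat "object" intensity
noisy = [s + random.gauss(0.0, TRUE_SIGMA) for s in signal]

# First-level Haar detail coefficients: d_k = (x_{2k} - x_{2k+1}) / sqrt(2).
# For additive Gaussian noise on a locally smooth signal, d_k ~ N(0, sigma^2).
detail = [(noisy[2 * k] - noisy[2 * k + 1]) / 2 ** 0.5
          for k in range(len(noisy) // 2)]

# 0.6745 is the median absolute deviation of a standard normal distribution.
sigma_hat = statistics.median(abs(d) for d in detail) / 0.6745
print(round(sigma_hat, 1))                   # close to the true sigma of 5.0
```

Because the median is robust, a sparse set of large "signal" coefficients (edges) barely perturbs the estimate, which is why the MAD estimator works on wavelet details of real images.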
Cell Segmentation Using Front Vector Flow Guided Active Contours

Phase-contrast microscopy is a common approach for studying the dynamics of cell behaviors, such as cell migration. Cell segmentation is the basis of quantitative analysis of the immense volumes of cellular images. However, the complicated cell morphological appearance in phase-contrast microscopy images challenges the existing segmentation methods. This paper proposes a new cell segmentation method for cancer cell migration studies using phase-contrast images. Instead of segmenting cells directly based on commonly used low-level features, e.g. intensity and gradient, we first identify the leading protrusions, a high-level feature, of cancer cells. Based on the identified cell leading protrusions, we introduce a front vector flow guided active contour, which guides the initial cell boundaries to the real boundaries. The experimental validation on a set of breast cancer cell images shows that the proposed method achieves fast, stable, and accurate segmentation for breast cancer cells with a wide range of sizes and shapes.

Fuhai Li, Xiaobo Zhou, Hong Zhao, Stephen T. C. Wong
Segmentation and Classification of Cell Cycle Phases in Fluorescence Imaging

Current chemical biology methods for studying spatiotemporal correlation between biochemical networks and cell cycle phase progression in live-cells typically use fluorescence-based imaging of fusion proteins. Stable cell lines expressing fluorescently tagged protein GFP-PCNA produce rich, dynamically varying sub-cellular foci patterns characterizing the cell cycle phases, including the progress during the S-phase. Variable fluorescence patterns, drastic changes in SNR, shape and position changes and abundance of touching cells require sophisticated algorithms for reliable automatic segmentation and cell cycle classification. We extend the recently proposed graph partitioning active contours (GPAC) for fluorescence-based nucleus segmentation using regional density functions and dramatically improve its efficiency, making it scalable for high content microscopy imaging. We utilize surface shape properties of GFP-PCNA intensity field to obtain descriptors of foci patterns and perform automated cell cycle phase classification, and give quantitative performance by comparing our results to manually labeled data.

Ilker Ersoy, Filiz Bunyak, Vadim Chagin, M. Christina Cardoso, Kannappan Palaniappan
Steerable Features for Statistical 3D Dendrite Detection

Most state-of-the-art algorithms for filament detection in 3-D image-stacks rely on computing the Hessian matrix around individual pixels and labeling these pixels according to its eigenvalues. This approach, while very effective for clean data in which linear structures are nearly cylindrical, loses its effectiveness in the presence of noisy data and irregular structures.

In this paper, we show that using steerable filters to create rotationally invariant features that include higher-order derivatives and training a classifier based on these features lets us handle such irregular structures. This can be done reliably and at acceptable computational cost and yields better results than state-of-the-art methods.

Germán González, François Aguet, François Fleuret, Michael Unser, Pascal Fua
Graph-Based Pancreatic Islet Segmentation for Early Type 2 Diabetes Mellitus on Histopathological Tissue

It is estimated that in 2010 more than 220 million people will be affected by type 2 diabetes mellitus (T2DM). Early evidence indicates that specific markers for alpha and beta cells in pancreatic islets of Langerhans can be used for early T2DM diagnosis. Currently, the analysis of such histological tissues is manually performed by trained pathologists using a light microscope. To objectify classification results and to reduce the processing time of histological tissues, an automated computational pathology framework for segmentation of pancreatic islets from histopathological fluorescence images is proposed. Due to high variability in the staining intensities for alpha and beta cells, classical medical imaging approaches fail in this scenario.

The main contribution of this paper consists of a novel graph-based segmentation approach based on cell nuclei detection with randomized tree ensembles. The algorithm is trained via a cross validation scheme on a ground truth set of islet images manually segmented by 4 expert pathologists. Test errors obtained from the cross validation procedure demonstrate that the graph-based computational pathology analysis proposed is performing competitively to the expert pathologists while outperforming a baseline morphological approach.

Xenofon Floros, Thomas J. Fuchs, Markus P. Rechsteiner, Giatgen Spinas, Holger Moch, Joachim M. Buhmann
Detection of Spatially Correlated Objects in 3D Images Using Appearance Models and Coupled Active Contours

We consider the problem of segmenting 3D images that contain a dense collection of spatially correlated objects, such as fluorescent labeled cells in tissue. Our approach involves an initial modeling phase followed by a data-fitting segmentation phase. In the first phase, cell shape (membrane bound) is modeled implicitly using a parametric distribution of correlation function estimates. The nucleus is modeled for its shape as well as image intensity distribution inspired from the physics of its image formation. In the second phase, we solve the segmentation problem using a variational level-set strategy with coupled active contours to minimize a novel energy functional. We demonstrate the utility of our approach on multispectral fluorescence microscopy images.

Kishore Mosaliganti, Arnaud Gelas, Alexandre Gouaillard, Ramil Noche, Nikolaus Obholzer, Sean Megason
Intra-retinal Layer Segmentation in Optical Coherence Tomography Using an Active Contour Approach

Optical coherence tomography (OCT) is a non-invasive, depth resolved imaging modality that has become a prominent ophthalmic diagnostic technique. We present an automatic segmentation algorithm to detect intra-retinal layers in OCT images acquired from rodent models of retinal degeneration. We adapt Chan–Vese’s energy-minimizing active contours without edges for OCT images, which suffer from low contrast and are highly corrupted by noise. We adopt a multi-phase framework with a circular shape prior in order to model the boundaries of retinal layers and estimate the shape parameters using least squares. We use a contextual scheme to balance the weight of different terms in the energy functional. The results from various synthetic experiments and segmentation results on 20 OCT images from four rats are presented, demonstrating the strength of our method to detect the desired retinal layers with sufficient accuracy and average Dice similarity coefficient of 0.85, specifically 0.94 for the ganglion cell layer, which is the relevant layer for glaucoma diagnosis.

Azadeh Yazdanpanah, Ghassan Hamarneh, Benjamin Smith, Marinko Sarunic
Mapping Tissue Optical Attenuation to Identify Cancer Using Optical Coherence Tomography

The lymphatic system is a common route for the spread of cancer and the identification of lymph node metastases is a key task during cancer surgery. This paper demonstrates the use of optical coherence tomography to construct parametric images of lymph nodes. It describes a method to automatically estimate the optical attenuation coefficient of tissue. By mapping the optical attenuation coefficient at each location in the scan, it is possible to construct a parametric image indicating variations in tissue type. The algorithm is applied to ex vivo samples of human axillary lymph nodes and validated against a histological gold standard. Results are shown illustrating the variation in optical properties between cancerous and healthy tissue.

Robert A. McLaughlin, Loretta Scolaro, Peter Robbins, Christobel Saunders, Steven L. Jacques, David D. Sampson
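A standard way to estimate an optical attenuation coefficient from an OCT depth profile can be sketched as follows. This is a hedged illustration, not the paper's estimator: under a single-scattering model I(z) = I₀ · exp(−2μz), fitting a straight line to log I(z) by least squares gives a slope of −2μ. The tissue values below are invented for demonstration.

```python
# Recovering the attenuation coefficient mu from a noise-free synthetic A-scan.
import math

MU_TRUE, I0 = 1.5, 1000.0                   # mm^-1 and a.u.; made-up values
z = [0.05 * k for k in range(40)]           # depth samples, 0 to ~2 mm
intensity = [I0 * math.exp(-2.0 * MU_TRUE * d) for d in z]

# Log-linear least-squares fit: log I(z) = log I0 - 2*mu*z.
logs = [math.log(i) for i in intensity]
n = len(z)
zbar, lbar = sum(z) / n, sum(logs) / n
slope = (sum((d - zbar) * (l - lbar) for d, l in zip(z, logs))
         / sum((d - zbar) ** 2 for d in z))
mu_hat = -slope / 2.0
print(round(mu_hat, 3))                     # → 1.5 in this noise-free case
```

Mapping such a fit over small windows of every A-scan in a volume yields the kind of parametric attenuation image the paper uses to distinguish cancerous from healthy lymph node tissue.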
Analysis of MR Images of Mice in Preclinical Treatment Monitoring of Polycystic Kidney Disease

A common cause of kidney failure is autosomal dominant polycystic kidney disease (ADPKD). It is characterized by the growth of cysts in the kidneys and hence the growth of the entire kidneys, with eventual failure in most cases by age 50. No preventive treatment for this condition is available. Preclinical drug treatment studies use an in vivo mouse model of the condition. The analysis of mouse imaging data for such studies typically requires extensive manual interaction, which is subjective and not reproducible. In this work, both untreated and treated mice have been imaged with a high-field (9.4 T) MRI animal scanner, and a reliable algorithm for the automated segmentation of the mouse kidneys has been developed. The algorithm first detects the region of interest (ROI) in the image surrounding the kidneys. A parameterized geometric shape for a kidney is registered to the ROI of each kidney. The registered shapes are incorporated as priors to the graph cuts algorithm used to extract the kidneys. The accuracy of the automated segmentation has been demonstrated by comparing it with a manual segmentation. The processing results are also consistent with the literature for previous techniques.

Stathis Hadjidemetriou, Wilfried Reichardt, Martin Buechert, Juergen Hennig, Dominik von Elverfeldt
Actin Filament Tracking Based on Particle Filters and Stretching Open Active Contour Models

We introduce a novel algorithm for actin filament tracking and elongation measurement. Particle Filters (PF) and Stretching Open Active Contours (SOAC) work cooperatively to simplify the modeling of PF in a one-dimensional state space while naturally integrating filament body constraints to tip estimation. Our algorithm reduces the PF state spaces to one-dimensional spaces by tracking filament bodies using SOAC and probabilistically estimating tip locations along the curve length of SOACs. Experimental evaluation on TIRFM image sequences with very low SNRs demonstrates the accuracy and robustness of this approach.

Hongsheng Li, Tian Shen, Dimitrios Vavylonis, Xiaolei Huang
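The coupling of particle filtering with stretching open active contours is specific to this paper, but the underlying bootstrap particle filter for a one-dimensional state, the state space the authors reduce tracking to, can be sketched generically. The function name, noise parameters, and random-walk motion model below are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def particle_filter_1d(observations, n_particles=500, motion_std=1.0, obs_std=1.0, seed=0):
    """Bootstrap particle filter for a scalar state (illustrative sketch)."""
    rng = random.Random(seed)
    # Initialize particles around the first observation.
    particles = [observations[0] + rng.gauss(0.0, obs_std) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # Predict: diffuse each particle with a random-walk motion model.
        particles = [p + rng.gauss(0.0, motion_std) for p in particles]
        # Update: weight particles by the Gaussian likelihood of the observation.
        weights = [math.exp(-((p - z) ** 2) / (2.0 * obs_std ** 2)) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Estimate: posterior mean of the weighted particles.
        estimates.append(sum(p * w for p, w in zip(particles, weights)))
        # Resample (multinomial) to fight weight degeneracy.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

In the paper's setting the state would be a tip position along the SOAC curve length rather than a free scalar, but the predict-update-resample loop is the same.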

Image Analysis and Computer Aided Diagnosis

Toward Early Diagnosis of Lung Cancer

Our long-term research goal is to develop a fully automated, image-based diagnostic system for early diagnosis of pulmonary nodules that may lead to lung cancer. In this paper, we focus on generating new probabilistic models for the estimated growth rate of the detected lung nodules from Low Dose Computed Tomography (LDCT). We propose a new methodology for 3D LDCT data registration which is non-rigid and involves two steps: (i) global target-to-prototype alignment of one scan to another using the learned prior appearance model, followed by (ii) local alignment in order to correct for intricate relative deformations. Visual appearance of these chest images is described using a Markov–Gibbs random field (MGRF) model with multiple pairwise interactions. An affine transformation that globally registers a target to a prototype is estimated by the gradient ascent-based maximization of a special Gibbs energy function. To handle local deformations, we displace each voxel of the target over evolving closed equi-spaced surfaces (iso-surfaces) to closely match the prototype. The evolution of the iso-surfaces is guided by a speed function in the directions that minimize distances between the corresponding voxel pairs on the iso-surfaces in both data sets. Preliminary results show that the proposed accurate registration could lead to precise diagnosis and identification of the development of the detected pulmonary nodules.

Ayman El-Baz, Georgy Gimel’farb, Robert Falk, Mohamed Abou El-Ghar, Sabrina Rainey, David Heredia, Teresa Shaffer
Lung Extraction, Lobe Segmentation and Hierarchical Region Assessment for Quantitative Analysis on High Resolution Computed Tomography Images

Regional assessment of lung disease (such as chronic obstructive pulmonary disease) is a critical component of accurate patient diagnosis. Software tools that enable such analysis are also important for clinical research studies. In this work, we present an image segmentation and data representation framework that enables quantitative analysis specific to different lung regions on high resolution computed tomography (HRCT) datasets. We present an offline, fully automatic image processing chain that generates airway, vessel, and lung mask segmentations in which the left and right lung are delineated. We describe a novel lung lobe segmentation tool that produces reproducible results with minimal user interaction. A usability study performed across twenty datasets (inspiratory and expiratory exams including a range of disease states) demonstrates the tool’s ability to generate results within five to seven minutes on average. We also describe a data representation scheme that involves compact encoding of label maps such that both “regions” (such as lung lobes) and “types” (such as emphysematous parenchyma) can be simultaneously represented at a given location in the HRCT.

James C. Ross, Raúl San José Estépar, Alejandro Díaz, Carl-Fredrik Westin, Ron Kikinis, Edwin K. Silverman, George R. Washko
Learning COPD Sensitive Filters in Pulmonary CT

The standard approaches to analyzing emphysema in computed tomography (CT) images are visual inspection and the relative area of voxels below a threshold (RA). The former approach is subjective and impractical in a large data set and the latter relies on a single threshold and independent voxel information, ignoring any spatial correlation in intensities. In recent years, supervised learning on texture features has been investigated as an alternative to these approaches, showing good results. However, supervised learning requires labeled samples, and these samples are often obtained via subjective and time consuming visual scoring done by human experts.

In this work, we investigate the possibility of applying supervised learning using texture measures on random CT samples where the labels are based on external, non-CT measures. We are not targeting emphysema directly; instead, we focus on learning textural differences that discriminate subjects with chronic obstructive pulmonary disease (COPD) from healthy smokers, and it is expected that emphysema plays a major part in this. The proposed texture-based approach achieves a 69% classification accuracy, which is significantly better than RA’s 55% accuracy.

Lauge Sørensen, Pechin Lo, Haseem Ashraf, Jon Sporring, Mads Nielsen, Marleen de Bruijne
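The RA baseline the authors compare against is simple to state: the fraction of lung voxels whose attenuation falls below a fixed cutoff. A minimal sketch follows; the -950 HU default is a commonly used emphysema threshold in the CT literature, not a value taken from this paper.

```python
def relative_area(lung_voxels_hu, threshold=-950):
    """Fraction of lung voxels whose attenuation (in HU) falls below `threshold`.

    A single global threshold applied to independent voxels, which is exactly
    the property the texture-based approach is designed to improve on.
    """
    voxels = list(lung_voxels_hu)
    below = sum(1 for v in voxels if v < threshold)
    return below / len(voxels)
```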
Automated Anatomical Labeling of Bronchial Branches Extracted from CT Datasets Based on Machine Learning and Combination Optimization and Its Application to Bronchoscope Guidance

This paper presents a method for the automated anatomical labeling of bronchial branches extracted from 3D CT images, based on machine learning and combination optimization. We also show an application of the anatomical labeling in a bronchoscopy guidance system. The labeling procedure consists of four steps: (a) extraction of tree structures of the bronchus regions extracted from CT images, (b) construction of AdaBoost classifiers, (c) computation of candidate names for all branches using the classifiers, and (d) selection of the best combination of anatomical names. We applied the proposed method to 90 cases of 3D CT datasets. The experimental results showed that the proposed method can assign correct anatomical names to 86.9% of the bronchial branches up to the sub-segmental lobe branches. We also overlaid the anatomical names of bronchial branches on real bronchoscopic views to guide real bronchoscopy.

Kensaku Mori, Shunsuke Ota, Daisuke Deguchi, Takayuki Kitasaka, Yasuhito Suenaga, Shingo Iwano, Yoshinori Hasegawa, Hirotsugu Takabatake, Masaki Mori, Hiroshi Natori
Multi-level Ground Glass Nodule Detection and Segmentation in CT Lung Images

Early detection of Ground Glass Nodules (GGN) in lung Computed Tomography (CT) images is important for lung cancer prognosis. Due to their indistinct boundaries, manual detection and segmentation of GGNs is labor-intensive and problematic. In this paper, we propose a novel multi-level learning-based framework for automatic detection and segmentation of GGNs in lung CT images. Our main contributions are: firstly, a multi-level statistical learning-based approach that seamlessly integrates segmentation and detection to improve the overall accuracy of GGN detection (in a subvolume). The classification is done at two levels: voxel-level and object-level. The algorithm starts with a three-phase voxel-level classification step, using volumetric features computed per voxel to generate a GGN class-conditional probability map. GGN candidates are then extracted from this probability map by integrating prior knowledge of shape and location, and the GGN object-level classifier is used to determine the occurrence of a GGN. Secondly, an extensive set of volumetric features is used to capture the GGN appearance. Finally, to the best of our knowledge, the GGN dataset used for experiments is an order of magnitude larger than in previous work. The effectiveness of our method is demonstrated on a dataset of 1100 subvolumes (100 containing GGNs) extracted from about 200 subjects.

Yimo Tao, Le Lu, Maneesh Dewan, Albert Y. Chen, Jason Corso, Jianhua Xuan, Marcos Salganicoff, Arun Krishnan
Global and Local Multi-valued Dissimilarity-Based Classification: Application to Computer-Aided Detection of Tuberculosis

In many applications of computer-aided detection (CAD) it is not possible to precisely localize lesions or affected areas in images that are known to be abnormal. In this paper a novel approach to computer-aided detection is presented that can deal effectively with such weakly labeled data. Our approach is based on multi-valued dissimilarity measures that retain more information about underlying local image features than single-valued dissimilarities. We show how this approach can be extended by applying it locally as well as globally, and by merging the local and global classification results into an overall opinion about the image to be classified. The framework is applied to the detection of tuberculosis (TB) in chest radiographs. This is the first study to apply a CAD system to a large database of digital chest radiographs obtained from a TB screening program, including normal cases, suspect cases and cases with proven TB. The global dissimilarity approach achieved an area under the ROC curve of 0.81. The combination of local and global classifications increased this value to 0.83.

Yulia Arzhaeva, Laurens Hogeweg, Pim A. de Jong, Max A. Viergever, Bram van Ginneken
Noninvasive Imaging of Electrophysiological Substrates in Post Myocardial Infarction

The presence of injured tissues after myocardial infarction (MI) creates substrates responsible for fatal arrhythmia; understanding its arrhythmogenic mechanism requires investigating the correlation of local abnormality between phenomenal electrical functions and inherent electrophysiological properties during normal sinus rhythm. This paper presents a physiological-model-constrained framework for imaging post-MI electrophysiological substrates from noninvasive body surface potential measurements. Using a priori knowledge of general cardiac electrical activity as constraints, it simultaneously reconstructs transmembrane potential dynamics and tissue excitability inside the 3D myocardium, with the central goal of localizing and investigating the abnormality in these two different electrophysiological quantities. It is applied to four post-MI patients, with quantitative validation against gold standards and notable improvements over existing results.

Linwei Wang, Heye Zhang, Ken C. L. Wong, Huafeng Liu, Pengcheng Shi
Unsupervised Inline Analysis of Cardiac Perfusion MRI

In this paper we first discuss the technical challenges preventing automated analysis of cardiac perfusion MR images and then present a fully unsupervised workflow that addresses them. The proposed solution consists of key-frame detection, consecutive motion compensation, surface coil inhomogeneity correction using proton density images, and robust generation of pixel-wise perfusion parameter maps. The entire processing chain has been implemented on clinical MR systems to achieve unsupervised inline analysis of perfusion MRI. Validation results are reported for 260 perfusion time series, demonstrating the feasibility of the approach.

Hui Xue, Sven Zuehlsdorff, Peter Kellman, Andrew Arai, Sonia Nielles-Vallespin, Christophe Chefdhotel, Christine H. Lorenz, Jens Guehring
Pattern Recognition of Abnormal Left Ventricle Wall Motion in Cardiac MR

There are four main problems that limit the application of pattern recognition techniques to the recognition of abnormal cardiac left ventricle (LV) wall motion: 1) normalization of the LV’s size, shape, intensity level and position; 2) defining a spatial correspondence between phases and subjects; 3) extracting features; and 4) discriminating abnormal from normal wall motion. Solving these four problems is required for the application of pattern recognition techniques to classify normal and abnormal LV wall motion. In this work, we introduce a normalization scheme to solve the first and second problems. With this scheme, LVs are normalized to the same position, size, and intensity level. Using the normalized images, we propose an intra-segment classification criterion based on a correlation measure to solve the third and fourth problems. Application of the method to the recognition of abnormal cardiac MR LV wall motion showed promising results.

Yingli Lu, Perry Radau, Kim Connelly, Alexander Dick, Graham Wright
Septal Flash Assessment on CRT Candidates Based on Statistical Atlases of Motion

In this paper, we propose a complete framework for the automatic detection and quantification of abnormal heart motion patterns using Statistical Atlases of Motion built from healthy populations. The method is illustrated on CRT patients with identified cardiac dyssynchrony and abnormal septal motion on 2D ultrasound (US) sequences. The use of the 2D US modality guarantees that the temporal resolution of the image sequences is high enough to work under a small displacements hypothesis. Under this assumption, the computed displacement fields can be directly considered as cardiac velocities. Comparison of subjects acquired with different spatiotemporal resolutions implies the reorientation and temporal normalization of velocity fields in a common space of coordinates. Statistics are then performed on the reoriented vector fields. Results show the ability of the method to correctly detect abnormal motion patterns and quantify their distance to normality. The use of local p-values for quantifying abnormal motion patterns is believed to be a promising strategy for computing new markers of cardiac dyssynchrony for better characterizing CRT candidates.

Nicolas Duchateau, Mathieu De Craene, Etel Silva, Marta Sitges, Bart H. Bijnens, Alejandro F. Frangi
Personalized Modeling and Assessment of the Aortic-Mitral Coupling from 4D TEE and CT

The anatomy, function and hemodynamics of the aortic and mitral valves are known to be strongly interconnected. An integrated quantitative and visual assessment of the aortic-mitral coupling may have an impact on patient evaluation, planning and guidance of minimal invasive procedures. In this paper, we propose a novel model-driven method for functional and morphological characterization of the entire aortic-mitral apparatus. A holistic physiological model is hierarchically defined to represent the anatomy and motion of the two left heart valves. Robust learning-based algorithms are applied to estimate the patient-specific spatial-temporal parameters from four-dimensional TEE and CT data. The piecewise affine location of the valves is initially determined over the whole cardiac cycle using an incremental search performed in marginal spaces. Consequently, efficient spectrum detection in the trajectory space is applied to estimate the cyclic motion of the articulated model. Finally, the full personalized surface model of the aortic-mitral coupling is constructed using statistical shape models and local spatial-temporal refinement. Experiments performed on 65 4D TEE and 69 4D CT sequences demonstrated an average accuracy of 1.45 mm and a speed of 60 seconds for the proposed approach. Initial clinical validation comparing model-based and expert measurements showed the precision to be in the range of the inter-user variability. To the best of our knowledge this is the first time a complete model of the aortic-mitral coupling estimated from TEE and CT data is proposed.

Razvan Ioan Ionasec, Ingmar Voigt, Bogdan Georgescu, Yang Wang, Helene Houle, Joachim Hornegger, Nassir Navab, Dorin Comaniciu
A New 3-D Automated Computational Method to Evaluate In-Stent Neointimal Hyperplasia in In-Vivo Intravascular Optical Coherence Tomography Pullbacks

Detection of stent struts imaged in vivo by optical coherence tomography (OCT) after percutaneous coronary interventions (PCI) and quantification of in-stent neointimal hyperplasia (NIH) are important. In this paper, we present a new computational method to assist the physician in assessing and comparing new (drug-eluting) stents. We developed a new algorithm for stent strut detection and utilized splines to reconstruct the lumen and stent boundaries, which provides automatic measurements of NIH thickness and of lumen and stent area. Our approach is based on two characteristics unique to stent struts: a bright reflection and the shadow behind it. Furthermore, we present, for the first time to our knowledge, a rotation correction method applied across OCT cross-section images for 3D reconstruction and visualization of the reconstructed lumen and stent boundaries, for further analysis in the longitudinal dimension of the coronary artery. Our experiments over OCT cross-sections taken from 7 patients presenting varying degrees of NIH after PCI illustrate good agreement between the computer method and expert evaluations: Bland-Altman analysis revealed a mean difference of 0.11±0.70 mm² for the lumen cross-section area and 0.10±1.28 mm² for the stent cross-section area.

Serhan Gurmeric, Gozde Gul Isguder, Stéphane Carlier, Gozde Unal
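The Bland-Altman agreement figures quoted above (a mean difference with a spread) follow from a standard computation on paired measurements. The sketch below is the textbook definition, not code from the paper.

```python
import math

def bland_altman(method_a, method_b):
    """Mean difference and 95% limits of agreement between paired measurements."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    # Sample standard deviation of the differences.
    sd = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
    # Limits of agreement: mean difference +/- 1.96 standard deviations.
    return mean_diff, (mean_diff - 1.96 * sd, mean_diff + 1.96 * sd)
```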
MKL for Robust Multi-modality AD Classification

We study the problem of classifying mild Alzheimer’s disease (AD) subjects from healthy individuals (controls) using multi-modal image data, to facilitate early identification of AD-related pathologies. Several recent papers have demonstrated that such classification is possible with MR or PET images, using machine learning methods such as SVM and boosting. These algorithms learn the classifier using one type of image data. However, AD is not well characterized by one imaging modality alone, and analysis is typically performed using several image types, each measuring a different type of structural/functional characteristic. This paper explores the AD classification problem using multiple modalities simultaneously. The difficulty here is to assess the relevance of each modality (which cannot be assumed a priori), as well as to optimize the classifier. To tackle this problem, we utilize and adapt a recently developed idea called multi-kernel learning (MKL). Briefly, each imaging modality spawns one (or more) kernels, and we simultaneously solve for the kernel weights and a maximum-margin classifier. To make the model robust, we propose strategies to suppress the influence of a small subset of outliers on the classifier; this yields an alternating-minimization-based algorithm for robust MKL. We present promising multi-modal classification experiments on a large dataset of images from the ADNI project.

Chris Hinrichs, Vikas Singh, Guofan Xu, Sterling Johnson
A New Approach for Creating Customizable Cytoarchitectonic Probabilistic Maps without a Template

We present a novel technique for creating template-free probabilistic maps of cytoarchitectonic areas using a groupwise registration. We use the technique to transform 10 human post-mortem structural MR data sets, together with their corresponding cytoarchitectonic information, to a common space. We have targeted the cytoarchitectonically defined subregions of the primary auditory cortex. Thanks to the template-free groupwise registration, the created maps are not macroanatomically biased towards a specific geometry/topology. The advantage of groupwise versus pairwise registration in avoiding such anatomical bias is better revealed in studies with a small number of subjects and a high degree of variability among the individuals, such as the post-mortem data. A leave-one-out cross-validation method was used to compare the sensitivity, specificity and positive predictive value of the proposed and published maps. We observe a significant improvement in localization of cytoarchitectonically defined subregions in primary auditory cortex using the proposed maps. The proposed maps can be tailored to any subject space by registering the subject image to the average of the groupwise-registered post-mortem images.

Amir M. Tahmasebi, Purang Abolmaesumi, Xiujuan Geng, Patricia Morosan, Katrin Amunts, Gary E. Christensen, Ingrid S. Johnsrude
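The leave-one-out cross-validation used above generalizes to any train/predict pair: each sample is held out once, the model is fit on the rest, and the held-out prediction is scored. The nearest-mean classifier below is a hypothetical stand-in for the paper's map-based evaluation, included only so the sketch is runnable.

```python
def leave_one_out_accuracy(samples, labels, train, predict):
    """Leave-one-out cross-validation: train on n-1 samples, test the held-out one."""
    correct = 0
    for i in range(len(samples)):
        model = train(samples[:i] + samples[i + 1:], labels[:i] + labels[i + 1:])
        correct += predict(model, samples[i]) == labels[i]
    return correct / len(samples)

def train_nearest_mean(xs, ys):
    """Toy classifier: per-class mean of scalar features."""
    means = {}
    for label in set(ys):
        vals = [x for x, y in zip(xs, ys) if y == label]
        means[label] = sum(vals) / len(vals)
    return means

def predict_nearest_mean(means, x):
    # Assign the class whose mean is closest to the sample.
    return min(means, key=lambda label: abs(x - means[label]))
```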
A Computer-Aided Diagnosis System of Nuclear Cataract via Ranking

This paper proposes a novel computer-aided diagnosis system for nuclear cataract based on ranking. The grade of nuclear cataract in a slit-lamp image is predicted based on its neighboring labeled images in a ranked image list, which is obtained using an optimal ranking function. A new ranking evaluation measure is proposed for learning the optimal ranking function via direct optimization. Our system has been tested on a large dataset of 1000 slit-lamp images from 1000 different cases. Both the experimental results and a comparison with several state-of-the-art methods indicate the superiority of our system.

Wei Huang, Huiqi Li, Kap Luk Chan, Joo Hwee Lim, Jiang Liu, Tien Yin Wong
Automated Segmentation of the Femur and Pelvis from 3D CT Data of Diseased Hip Using Hierarchical Statistical Shape Model of Joint Structure

Segmentation of the femur and pelvis from 3D data is a prerequisite for patient-specific planning and simulation in hip surgery. Separation of the femoral head and acetabulum is one of the main difficulties in the diseased hip joint due to deformed shapes and extreme narrowness of the joint space. In this paper, we develop a hierarchical multi-object statistical shape model representing joint structure for automated segmentation of the diseased hip from 3D CT images. In order to represent shape variations as well as pose variations of the femur against the pelvis, both shape and pose variations are embedded in a combined pelvis and femur statistical shape model (SSM). Further, the whole combined SSM is divided into individual pelvis and femur SSMs and a partial combined SSM including only the acetabulum and proximal femur. The partial combined SSM maintains the consistency of the two bones by imposing the constraint that the shapes of the overlapped portions of the individual and partial combined SSMs are identical. The experimental results show that segmentation and separation accuracy of the femur and pelvis was improved using the proposed method compared with independent use of the pelvis and femur SSMs.

Futoshi Yokota, Toshiyuki Okada, Masaki Takao, Nobuhiko Sugano, Yukio Tada, Yoshinobu Sato
Computer-Aided Assessment of Anomalies in the Scoliotic Spine in 3-D MRI Images

The assessment of anomalies in the scoliotic spine using Magnetic Resonance Imaging (MRI) is an essential task during the planning phase of a patient’s treatment and operations. Due to the pathologic bending of the spine, this is an extremely time consuming process, as an orthogonal view onto every vertebra is required. In this article we present a system for computer-aided assessment (CAA) of anomalies in 3-D MRI images of the spine relying on curved planar reformations (CPR). We introduce all necessary steps, from the pre-processing of the data to the visualization component. As the core part of the framework is based on a segmentation of the spinal cord, we focus on this. The proposed segmentation method is an iterative process. In every iteration the segmentation is updated by an energy-based scheme derived from Markov random field (MRF) theory. We evaluate the segmentation results on publicly available, clinically relevant 3-D MRI data sets of scoliosis patients. In order to assess the quality of the segmentation we use the angle between automatically computed planes through the vertebra and planes estimated by medical experts. This results in a mean angle difference of less than six degrees.

Florian Jäger, Joachim Hornegger, Siegfried Schwab, Rolf Janka
Optimal Graph Search Segmentation Using Arc-Weighted Graph for Simultaneous Surface Detection of Bladder and Prostate

We present a novel method for globally optimal surface segmentation of multiple mutually interacting objects, incorporating both edge and shape knowledge in a 3-D graph-theoretic approach. Hard surface interaction constraints are enforced in the interacting regions, preserving the geometric relationship of those partially interacting surfaces. Soft compliance with a priori shape smoothness is introduced into the energy functional to provide shape guidance. The globally optimal surfaces can be achieved simultaneously by solving a maximum flow problem on an arc-weighted graph representation. Representing the segmentation problem in an arc-weighted graph, one can incorporate a wider spectrum of constraints into the formulation, thus increasing segmentation accuracy and robustness in volumetric image data. To the best of our knowledge, our method is the first attempt to introduce the arc-weighted graph representation into the graph-searching approach for simultaneous segmentation of multiple partially interacting objects, which admits a globally optimal solution in low-order polynomial time. Our new approach was applied to the simultaneous surface detection of the bladder and prostate. The results were quite encouraging in spite of the low saliency of the bladder and prostate in CT images.

Qi Song, Xiaodong Wu, Yunlong Liu, Mark Smith, John Buatti, Milan Sonka
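The maximum-flow step at the heart of such graph-search formulations can be illustrated with a plain Edmonds-Karp solver. Production graph-cut code uses far faster specialized algorithms on much larger graphs; this is only a sketch of the principle, with a toy graph in the usage below.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp maximum flow on a dict-of-dicts capacity graph (modified in place)."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in capacity.get(u, {}).items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: the flow is maximal
        # Walk back along the path, find the bottleneck, update residual capacities.
        path = []
        v = sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(capacity[u][v] for u, v in path)
        for u, v in path:
            capacity[u][v] -= bottleneck
            capacity.setdefault(v, {})[u] = capacity.get(v, {}).get(u, 0) + bottleneck
        flow += bottleneck
```

By the max-flow/min-cut theorem, the value returned equals the cost of the minimum cut, which is what such segmentation energies minimize.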
Automated Calibration for Computerized Analysis of Prostate Lesions Using Pharmacokinetic Magnetic Resonance Images

The feasibility of an automated calibration method for estimating the arterial input function when calculating pharmacokinetic parameters from Dynamic Contrast Enhanced MRI is shown. In a previous study [1], it was demonstrated that the computer-aided diagnosis (CADx) system performs optimally when per-patient calibration is used, but this required manual annotation of reference tissue. In this study we propose a fully automated segmentation method that addresses this limitation, and we tested the method with our CADx system when discriminating prostate cancer from benign areas in the peripheral zone.

A method was developed to automatically segment normal peripheral zone tissue (PZ). Context-based segmentation, using the Otsu histogram-based threshold selection method and Hessian-based blob detection, was developed to automatically select PZ as reference tissue for the per-patient calibration.

In 38 consecutive patients, carcinoma, benign and normal tissue were annotated on MR images by a radiologist and a researcher, using whole-mount step-section histopathology as the standard of reference. A feature set comprising pharmacokinetic parameters was computed for each ROI and used to train a support vector machine (SVM) as classifier.

In total, 42 malignant, 29 benign and 37 normal regions were annotated. The diagnostic accuracy for differentiating malignant from benign lesions was 0.65 (0.54-0.76) using a conventional general patient plasma profile. Using the automated-segmentation per-patient calibration method, the diagnostic accuracy improved to 0.80 (0.71-0.88), whereas the manual-segmentation per-patient calibration showed a diagnostic performance of 0.80 (0.70-0.90).

These results show that automated per-patient calibration is feasible, that it yields significantly better discriminating performance than the conventional fixed calibration, and that its diagnostic accuracy is similar to that of manual per-patient calibration.

Pieter C. Vos, Thomas Hambrock, Jelle O. Barenstz, Henkjan J. Huisman
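Otsu's threshold selection, used above to find reference tissue, picks the histogram split that maximizes the between-class variance. A minimal pure-Python implementation (illustrative, not the CADx system's code):

```python
def otsu_threshold(values, bins=256):
    """Otsu's method: choose the histogram split maximizing between-class variance."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0  # guard against a constant image
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_var, best_bin = -1.0, 0
    for t in range(bins):
        w0 += hist[t]          # pixels in the "background" class
        if w0 == 0:
            continue
        w1 = total - w0        # pixels in the "foreground" class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        between = w0 * w1 * (mu0 - mu1) ** 2
        if between > best_var:
            best_var, best_bin = between, t
    return lo + (best_bin + 0.5) * width  # threshold in intensity units
```

For a clearly bimodal intensity distribution the returned threshold falls between the two modes, separating the classes.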
Spectral Embedding Based Probabilistic Boosting Tree (ScEPTre): Classifying High Dimensional Heterogeneous Biomedical Data

The major challenge with classifying high dimensional biomedical data is in identifying the appropriate feature representation to (a) overcome the curse of dimensionality, and (b) facilitate separation between the data classes. Another challenge is to integrate information from two disparate modalities, possibly existing in different dimensional spaces, for improved classification. In this paper, we present a novel data representation, integration and classification scheme, Spectral Embedding based Probabilistic boosting Tree (ScEPTre), which incorporates Spectral Embedding (SE) for data representation and integration and a Probabilistic Boosting Tree classifier for data classification. SE provides an alternate representation of the data by non-linearly transforming high dimensional data into a low dimensional embedding space such that the relative adjacencies between objects are preserved. We demonstrate the utility of ScEPTre to classify and integrate Magnetic Resonance (MR) Spectroscopy (MRS) and Imaging (MRI) data for prostate cancer detection. The area under the receiver operating characteristic curve (AUC), obtained via randomized cross-validation on 15 prostate MRI-MRS studies, suggests that (a) ScEPTre on MRS significantly outperforms a Haar wavelets based classifier, (b) integration of MRI-MRS via ScEPTre performs significantly better compared to using MRI and MRS alone, and (c) data integration via ScEPTre yields superior classification results compared to combining decisions from individual classifiers (or modalities).

Pallavi Tiwari, Mark Rosen, Galen Reed, John Kurhanewicz, Anant Madabhushi
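The SE step, a non-linear map into a low-dimensional space that preserves relative adjacencies, can be sketched as a Laplacian-eigenmap-style embedding. This is an assumption for illustration; the paper's exact SE variant may differ.

```python
import numpy as np

def spectral_embedding(X, n_components=2, sigma=1.0):
    """Laplacian-eigenmap style spectral embedding of the rows of X."""
    # Gaussian affinity between every pair of samples.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetrically normalized graph Laplacian.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L = np.eye(len(X)) - (W * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    # Eigenvectors with the smallest eigenvalues; skip the trivial first one.
    _, vecs = np.linalg.eigh(L)
    return vecs[:, 1:n_components + 1]
```

For two well-separated clusters, the first embedding coordinate (the Fiedler vector) takes opposite signs on the two clusters, which is exactly the adjacency-preserving behavior the abstract describes.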
Automatic Correction of Intensity Nonuniformity from Sparseness of Gradient Distribution in Medical Images

We propose to use the sparseness property of the gradient probability distribution to estimate the intensity nonuniformity in medical images, resulting in two novel automatic methods: a non-parametric method and a parametric method. Our methods are easy to implement because they both solve an iteratively re-weighted least squares problem. They are remarkably accurate, as shown by our experiments on images of different objects from different imaging modalities.

Yuanjie Zheng, Murray Grossman, Suyash P. Awate, James C. Gee
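Both of the paper's estimators reduce to iteratively re-weighted least squares (IRLS): solve a weighted least-squares problem, recompute weights from the residuals, and repeat. The generic mechanics, shown here for a robust (approximately L1) line fit as an illustrative stand-in for the authors' nonuniformity model, look like:

```python
import numpy as np

def irls_line_fit(x, y, n_iter=25, eps=1e-6):
    """Fit y ~ a*x + b robustly via iteratively re-weighted least squares.

    Re-weighting by 1/|residual| turns the squared loss into an approximate
    L1 loss, down-weighting outliers a little more at every iteration.
    """
    A = np.column_stack([x, np.ones_like(x)])
    w = np.ones(len(y))
    coef = np.zeros(2)
    for _ in range(n_iter):
        sw = np.sqrt(w)[:, None]
        # Weighted least-squares solve: minimize sum_i w_i * r_i^2.
        coef, *_ = np.linalg.lstsq(A * sw, y * sw[:, 0], rcond=None)
        residuals = y - A @ coef
        w = 1.0 / (np.abs(residuals) + eps)  # eps avoids division by zero
    return coef  # (slope, intercept)
```

A gross outlier barely perturbs the fit, because its weight collapses after the first iteration; ordinary least squares, by contrast, would be pulled far off.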
Weakly Supervised Group-Wise Model Learning Based on Discrete Optimization

In this paper we propose a method for the weakly supervised learning of sparse appearance models from medical image data based on Markov random fields (MRF). The models are learnt from a single annotated example and additional training samples without annotations. The approach formulates the model learning as solving a set of MRFs. Both the model training and the resulting model are able to cope with complex and repetitive structures. The weakly supervised model learning yields sparse MRF appearance models that perform as well as those trained with manual annotations, thereby eliminating the need for tedious manual training supervision. Evaluation results are reported for hand radiographs and cardiac MRI slices.

René Donner, Horst Wildenauer, Horst Bischof, Georg Langs

Image Segmentation and Analysis

ECOC Random Fields for Lumen Segmentation in Radial Artery IVUS Sequences

The measure of lumen volume on radial arteries can be used to evaluate the vessel response to different vasodilators. In this paper, we present a framework for automatic lumen segmentation in longitudinal-cut images of the radial artery from intravascular ultrasound sequences. The segmentation is tackled as a classification problem where contextual information is exploited by means of Conditional Random Fields (CRFs). A multi-class classification framework is proposed, and inference is achieved by combining binary CRFs according to the Error-Correcting Output Code technique. The results are validated against manually segmented sequences. Finally, the method is compared with other state-of-the-art classifiers.

Francesco Ciompi, Oriol Pujol, Eduard Fernández-Nofrerías, Josepa Mauri, Petia Radeva
Dynamic Layer Separation for Coronary DSA and Enhancement in Fluoroscopic Sequences

This paper presents a new coronary digital subtraction angiography technique that separates layers of moving background structures from dynamic fluoroscopic sequences of the heart and obtains moving layers of coronary arteries. A Bayesian framework combines dense motion estimation, uncertainty propagation and statistical fusion to achieve reliable background layer estimation and motion compensation for coronary sequences. Encouraging results have been achieved on clinically acquired coronary sequences, where the proposed method considerably improves the visibility and perceptibility of coronary arteries undergoing breathing and cardiac movements. The perceptibility improvement is especially significant for very thin vessels. Clinical benefit is expected in the context of obese patients and deep angulation, as well as in the reduction of contrast dose in normal size patients.

Ying Zhu, Simone Prummer, Peng Wang, Terrence Chen, Dorin Comaniciu, Martin Ostermeier
An Inverse Scattering Algorithm for the Segmentation of the Luminal Border on Intravascular Ultrasound Data

Intravascular ultrasound (IVUS) is a catheter-based medical imaging technique that produces cross-sectional images of blood vessels and is particularly useful for studying atherosclerosis. In this paper, we present a novel method for segmentation of the luminal border on IVUS images using the radio frequency (RF) raw signal based on a scattering model and an inversion scheme. The scattering model is based on a random distribution of point scatterers in the vessel. The per-scatterer signal uses a differential backscatter cross-section coefficient (DBC) that depends on the tissue type. Segmentation requires two inversions: a calibration inversion and a reconstruction inversion. In the calibration step, we use a single manually segmented frame and then solve an inverse problem to recover the DBC for the lumen and vessel wall (κ_l and κ_w, respectively) and the width of the impulse signal σ. In the reconstruction step, we use the parameters from the calibration step to solve a new inverse problem: for each angle Θ_i of the IVUS data, we reconstruct the lumen-vessel wall interface. We evaluated our method using three 40 MHz IVUS sequences by comparing with manual segmentations. Our preliminary results indicate that it is possible to segment the luminal border by solving an inverse problem using the IVUS RF raw signal with the scatterer model.

E. Gerardo Mendizabal-Ruiz, George Biros, Ioannis A. Kakadiaris
Image-Driven Cardiac Left Ventricle Segmentation for the Evaluation of Multiview Fused Real-Time 3-Dimensional Echocardiography Images

Real-time 3-dimensional echocardiography (RT3DE) permits the acquisition and visualization of the beating heart in 3D. Despite a number of efforts to automate the left ventricle (LV) delineation from RT3DE images, this remains a challenging problem due to the poor quality of the acquired images, which usually contain missing anatomical information and high speckle noise. Recently, there have been efforts to improve image quality and anatomical definition by acquiring multiple single-view RT3DE images with small probe movements and fusing them together after alignment. In this work, we evaluate the quality of the multiview fused images using an image-driven semi-automatic LV segmentation method. The segmentation method is based on an edge-driven level set framework, where the edges are extracted using a local-phase inspired feature detector for low-contrast echocardiography boundaries. This purely image-driven segmentation method is applied for the evaluation of end-diastolic (ED) and end-systolic (ES) single-view and multiview fused images. Experiments were conducted on 17 cases and the results show that multiview fused images have better image segmentation quality, but large failures were observed on ED (88.2%) and ES (58.8%) single-view images.

Kashif Rajpoot, J. Alison Noble, Vicente Grau, Cezary Szmigielski, Harald Becher
Left Ventricle Segmentation via Graph Cut Distribution Matching

We present a discrete kernel density matching energy for segmenting the left ventricle cavity in cardiac magnetic resonance sequences. The energy and its graph cut optimization based on an original first-order approximation of the Bhattacharyya measure have not been proposed previously, and yield competitive results in nearly real-time. The algorithm seeks a region within each frame by optimization of two priors, one geometric (distance-based) and the other photometric, each measuring a distribution similarity between the region and a model learned from the first frame. Based on global rather than pixelwise information, the proposed algorithm does not require complex training and optimization with respect to geometric transformations. Unlike related active contour methods, it does not compute iterative updates of computationally expensive kernel densities. Furthermore, the proposed first-order analysis can be used for other intractable energies and, therefore, can lead to segmentation algorithms which share the flexibility of active contours and computational advantages of graph cuts. Quantitative evaluations over 2280 images acquired from 20 subjects demonstrated that the results correlate well with independent manual segmentations by an expert.

Ismail Ben Ayed, Kumaradevan Punithakumar, Shuo Li, Ali Islam, Jaron Chong
Combining Registration and Minimum Surfaces for the Segmentation of the Left Ventricle in Cardiac Cine MR Images

This paper describes a system to automatically segment the left ventricle in all slices and all phases of cardiac cine magnetic resonance datasets. After localizing the left ventricle blood pool using motion, thresholding and clustering, slices are segmented sequentially. For each slice, deformable registration is used to align all the phases, candidate contours are recovered in the average image using shortest paths, and a minimal surface is built to generate the final contours. The advantage of our method is that the resulting contours follow the edges in each phase and are consistent over time. We demonstrate using 19 patient examples that the results are very good. The RMS distance between ground truth and our segmentation is only 1.6 pixels (2.7 mm) and the Dice coefficient is 0.89.

Marie-Pierre Jolly, Hui Xue, Leo Grady, Jens Guehring
Left Ventricle Segmentation Using Diffusion Wavelets and Boosting

We propose a method for the segmentation of medical images based on a novel parameterization of prior shape knowledge and a search scheme based on classifying local appearance. The method uses diffusion wavelets to capture arbitrary and continuous interdependencies in the training data and uses them for an efficient shape model. The lack of classic visual consistency in complex medical imaging data is tackled by a manifold learning approach handling optimal high-dimensional local features by Gentle Boosting. Appearance saliency is encoded in the model and segmentation is performed through the extraction and classification of the corresponding features in a new data set, as well as a diffusion wavelet based shape model constraint. Our framework supports hierarchies both in the model and the search space, can encode complex geometric and photometric dependencies of the structure of interest, and can deal with arbitrary topologies. Promising results are reported for heart CT data sets, proving the impact of the soft parameterization, and the efficiency of our approach.

Salma Essafi, Georg Langs, Nikos Paragios
3D Cardiac Segmentation Using Temporal Correlation of Radio Frequency Ultrasound Data

Semi-automatic segmentation of the myocardium in 3D echographic images may substantially support clinical diagnosis of heart disease. Particularly in children with congenital heart disease, segmentation should be based on the echo features solely, since a priori knowledge on the shape of the heart cannot be used. Segmentation of echocardiographic images is challenging because of the poor echogenicity contrast between blood and the myocardium in some regions and the inherent speckle noise from randomly backscattered echoes. Phase information present in the radio frequency (rf) ultrasound data might yield useful, additional features in these regions. A semi-3D technique was used to determine maximum temporal cross-correlation values locally from the rf data. To segment the endocardial surface, maximum cross-correlation values were used as additional external force in a deformable model approach and were tested against and combined with adaptive filtered, demodulated rf data. The method was tested on full volume images (Philips, iE33) of four healthy children and evaluated by comparison with contours obtained from manual segmentation.

Maartje M. Nillesen, Richard G. P. Lopata, Henkjan J. Huisman, Johan M. Thijssen, Livia Kapusta, Chris L. de Korte
Improved Modelling of Ultrasound Contrast Agent Diminution for Blood Perfusion Analysis

Ultrasound contrast imaging is increasingly used to analyze blood perfusion in cases of ischemic or cancerous diseases. Among other imaging methods, diminution harmonic imaging (DHI), which models the diminution of contrast agent due to ultrasound pulses, is the most promising because of its speed. However, the current imaging quality of DHI is insufficient for reliable diagnoses.

In this paper, we extend the mathematical DHI model to include the part of the intensity signal which is due to tissue reflections and other effects not based on the contrast agent and its concentration in the blood. In a phantom experiment with known perfusion ground truth, we show the vast improvement in accuracy of the new model. Our findings also strongly support the theory of a linear relationship between the perfusion speed and the determined perfusion coefficient, which is a large step towards quantitative perfusion measurements.

Christian Kier, Karsten Meyer-Wiethe, Günter Seidel, Alfred Mertins
A Novel 3D Joint Markov-Gibbs Model for Extracting Blood Vessels from PC–MRA Images

New techniques for more accurate segmentation of a 3D cerebrovascular system from phase contrast (PC) magnetic resonance angiography (MRA) data are proposed. In this paper, we describe PC–MRA images and desired maps of regions by a joint Markov-Gibbs random field model (MGRF) of independent image signals and interdependent region labels, but focus on the most accurate model identification. To better specify region borders, each empirical distribution of signals is precisely approximated by a Linear Combination of Discrete Gaussians (LCDG) with positive and negative components. We modified the conventional Expectation-Maximization (EM) algorithm to deal with the LCDG. The initial segmentation based on the LCDG-models is then iteratively refined using a MGRF model with analytically estimated potentials. Experiments with both phantom and real data sets confirm the high accuracy of the proposed approach.

Ayman El-Baz, Georgy Gimel’farb, Robert Falk, Mohamed Abou El-Ghar, Vedant Kumar, David Heredia
Atlas-Based Improved Prediction of Magnetic Field Inhomogeneity for Distortion Correction of EPI Data

We describe a method for atlas-based segmentation of structural MRI for calculation of magnetic fieldmaps. CT data sets are used to construct a probabilistic atlas of the head and corresponding MR is used to train a classifier that segments soft tissue, air, and bone. Subject-specific fieldmaps are computed from the segmentations using a perturbation field model. Previous work has shown that distortion in echo-planar images can be corrected using predicted fieldmaps. We obtain results that agree well with acquired fieldmaps: 90% of voxel shifts from predicted fieldmaps show subvoxel disagreement with those computed from acquired fieldmaps. In addition, our fieldmap predictions show statistically significant improvement following inclusion of the atlas.

Clare Poynton, Mark Jenkinson, William Wells III
3D Prostate Segmentation in Ultrasound Images Based on Tapered and Deformed Ellipsoids

Prostate segmentation from trans-rectal transverse B-mode ultrasound images is required for radiation treatment of prostate cancer. Manual segmentation is a time-consuming task, the results of which are dependent on image quality and physicians’ experience. This paper introduces a semi-automatic 3D method based on super-ellipsoidal shapes. It produces a 3D segmentation in less than 15 seconds using a warped, tapered ellipsoid fit to the prostate. A study of patient images shows good performance and repeatability. This method is currently in clinical use at the Vancouver Cancer Center where it has become the standard segmentation procedure for low dose-rate brachytherapy treatment.

Seyedeh Sara Mahdavi, William J. Morris, Ingrid Spadinger, Nick Chng, Orcun Goksel, Septimiu E. Salcudean
An Interactive Geometric Technique for Upper and Lower Teeth Segmentation

Due to the complexity of dental models in both shape and form, a fully automated method for the separation of the lower and upper teeth is unsuitable, while manual segmentation requires painstaking user intervention. In this paper, we present a novel interactive method to segment the upper and lower teeth. The process is performed on a 3D triangular mesh of the skull and consists of four main steps: reconstruction of the 3D model from dental CT images, curvature estimation, interactive segmentation path planning using a shortest-path algorithm, and performing the actual geometric cut on the 3D model using a graph cut algorithm. The accuracy and efficiency of our method were experimentally validated via comparisons with ground truth (manual segmentation) as well as state-of-the-art interactive mesh segmentation algorithms. We show the presented scheme can dramatically reduce manual effort for users while retaining acceptable quality (with an average 0.29 mm discrepancy from the ideal segmentation).

Binh Huy Le, Zhigang Deng, James Xia, Yu-Bing Chang, Xiaobo Zhou
Enforcing Monotonic Temporal Evolution in Dry Eye Images

We address the problem of identifying dry areas in the tear film as part of a diagnostic tool for dry-eye syndrome. The requirement is to identify and measure the growth of the dry regions to provide a time-evolving map of degrees of dryness. We segment dry regions using a multi-label graph-cut algorithm on the 3D spatio-temporal volume of frames from a video sequence. To capture the fact that dryness increases over the time of the sequence, we use a time-asymmetric cost function that enforces a constraint that the dryness of each pixel monotonically increases. We demonstrate how this increases our estimation’s reliability and robustness. We tested the method on a set of videos and suggest further research using a similar approach.

Tamir Yedidya, Peter Carr, Richard Hartley, Jean-Pierre Guillon
Ultrafast Localization of the Optic Disc Using Dimensionality Reduction of the Search Space

Optic Disc (OD) localization is an important pre-processing step that significantly simplifies subsequent segmentation of the OD and other retinal structures. Current OD localization techniques suffer from impractically high computation times (a few minutes per image). In this work, we present an ultrafast technique that requires less than a second to localize the OD. The technique is based on reducing the dimensionality of the search space by projecting the 2D image feature space onto two orthogonal (x- and y-) axes. This results in two 1D signals that can be used to determine the x- and y-coordinates of the OD. Image features such as retinal vessel orientation and the OD brightness and shape are used in the current method. Four publicly available databases, including STARE and DRIVE, were used to evaluate the proposed technique. The OD was successfully located in 330 out of 340 images (97%) with an average computation time of 0.65 seconds.

Ahmed Essam Mahfouz, Ahmed S. Fahmy
Using Frankenstein’s Creature Paradigm to Build a Patient Specific Atlas

Conformal radiotherapy planning needs accurate delineations of the critical structures. Atlas-based segmentation has been shown to be very efficient to delineate brain structures. It would therefore be very interesting to develop an atlas for the head and neck region, where 7% of cancers arise. However, the construction of an atlas in this region is very difficult due to the high variability of the anatomies. This can generate segmentation errors and over-segmented structures in the atlas. To overcome this drawback, we present an alternative method to build a template locally adapted to the patient’s anatomy. This is done first by selecting in a database the images that are most similar to the patient on predefined regions of interest, using a distance between transformations. The first major contribution is that we do not compute every patient-to-image registration to find the most similar image, but only the registration of the patient towards an average image. This method is therefore computationally very efficient. The second major contribution is a novel method to use the selected images and the predefined regions to build a “Frankenstein’s creature” for segmentation. We present a qualitative and quantitative comparison between the proposed method and a classical atlas-based segmentation method. This evaluation is performed on a subset of 58 patients among a database of 105 head and neck CT images and shows a great improvement in the specificity of the results.

Olivier Commowick, Simon K. Warfield, Grégoire Malandain
Atlas-Based Automated Segmentation of Spleen and Liver Using Adaptive Enhancement Estimation

The paper presents the automated segmentation of spleen and liver from contrast-enhanced CT images of normal and hepato-/splenomegaly populations. The method used four steps: (i) a mean organ model was registered to the patient CT; (ii) the first estimates of the organs were improved by a geodesic active contour; (iii) the contrast enhancements of liver and spleen were estimated to adjust to patient image characteristics, and an adaptive convolution refined the segmentations; (iv) lastly, a normalized probabilistic atlas corrected for shape and location for the precise computation of each organ’s volume and height (mid-hepatic liver height and cephalocaudal spleen height). Results from test data demonstrated the method’s ability to accurately segment the spleen (RMS error = 1.09 mm; DICE/Tanimoto overlaps = 95.2/91) and liver (RMS error = 2.3 mm; DICE/Tanimoto overlaps = 96.2/92.7). The correlations (R²) with clinical/manual height measurements were 0.97 and 0.93 for the spleen and liver, respectively.

Marius George Linguraru, Jesse K. Sandberg, Zhixi Li, John A. Pura, Ronald M. Summers
A Two-Level Approach Towards Semantic Colon Segmentation: Removing Extra-Colonic Findings

Computer aided detection (CAD) of colonic polyps in computed tomographic colonography has tremendously impacted colorectal cancer diagnosis using 3D medical imaging. It is a prerequisite for all CAD systems to extract the air-distended colon segments from 3D abdominal computed tomography scans. In this paper, we present a two-level statistical approach of first separating colon segments from small intestine, stomach and other extra-colonic parts by classification on a new geometric feature set, then evaluating the overall performance confidence using distance and geometry statistics over patients. The proposed method is fully automatic and validated using both the classification results in the first level and its numerical impacts on false positive reduction of extra-colonic findings in a CAD system. It shows superior performance compared to state-of-the-art knowledge- or anatomy-based colon segmentation algorithms [1,2,3].

Le Lu, Matthias Wolf, Jianming Liang, Murat Dundar, Jinbo Bi, Marcos Salganicoff
Segmentation of Lumbar Vertebrae Using Part-Based Graphs and Active Appearance Models

The aim of the work is to provide a fully automatic method of segmenting vertebrae in spinal radiographs. This is of clinical relevance to the diagnosis of osteoporosis by vertebral fracture assessment, and to grading incident fractures in clinical trials. We use a parts-based model of small vertebral patches (e.g. corners). Many potential candidates are found in a global search using multi-resolution normalised correlation. The ambiguity in the possible solution is resolved by applying a graphical model of the connections between parts, and applying geometric constraints. The resulting graph optimisation problem is solved using loopy belief propagation.

The minimum cost solution is used to initialise a second phase of active appearance model search. The method is applied to a clinical data set of computed radiography images of lumbar spines. The accuracy of this fully automatic method is assessed by comparing the results to a gold standard of manual annotation by expert radiologists.

Martin G. Roberts, Tim F. Cootes, Elisa Pacheco, Teik Oh, Judith E. Adams
Utero-Fetal Unit and Pregnant Woman Modeling Using a Computer Graphics Approach for Dosimetry Studies

Potential sanitary effects related to electromagnetic fields exposure raise public concerns, especially for fetuses during pregnancy. Human fetus exposure can only be assessed through simulated dosimetry studies, performed on anthropomorphic models of pregnant women. In this paper, we propose a new methodology to generate a set of detailed utero-fetal unit (UFU) 3D models during the first and third trimesters of pregnancy, based on segmented 3D ultrasound and MRI data. UFU models are built using recent geometry processing methods derived from mesh-based computer graphics techniques and embedded in a synthetic woman body. Nine pregnant woman models have been generated using this approach and validated by obstetricians, for anatomical accuracy and representativeness.

Jérémie Anquez, Tamy Boubekeur, Lazar Bibin, Elsa Angelini, Isabelle Bloch
Cross Modality Deformable Segmentation Using Hierarchical Clustering and Learning

Segmentation of anatomical objects is always a fundamental task for various clinical applications. Although many automatic segmentation methods have been designed to segment specific anatomical objects in a given imaging modality, a more generic solution that is directly applicable to different imaging modalities and different deformable surfaces is desired, if attainable. In this paper, we propose such a framework, which learns from examples the spatially adaptive appearance and shape of a 3D surface (either open or closed). The application to a new object/surface in a new modality requires only the annotation of training examples. Key contributions of our method include: (1) an automatic clustering and learning algorithm to capture the spatial distribution of appearance similarities/variations on the 3D surface. More specifically, the model vertices are hierarchically clustered into a set of anatomical primitives (sub-surfaces) using both geometric and appearance features. The appearance characteristics of each learned anatomical primitive are then captured through a cascaded boosting learning method. (2) To effectively incorporate non-Gaussian shape priors, we cluster the training shapes in order to build multiple statistical shape models. (3) To the best of our knowledge, this is the first time the same segmentation algorithm has been directly employed in two very diverse applications: a. Liver segmentation (closed surface) in PET-CT, in which CT has very low resolution and low contrast; b. Distal femur (condyle) surface (open surface) segmentation in MRI.

Yiqiang Zhan, Maneesh Dewan, Xiang Sean Zhou
3D Multi-branch Tubular Surface and Centerline Extraction with 4D Iterative Key Points

An innovative 3D multi-branch tubular structure and centerline extraction method is proposed in this paper. In contrast to classical minimal path techniques that can only detect a single curve between two pre-defined initial points, this method propagates outward from only one initial seed point to detect 3D multi-branch tubular surfaces and centerlines simultaneously. First, instead of only representing the trajectory of a tubular structure as a 3D curve, the surface of the entire structure is represented as a 4D curve along which every point represents a 3D sphere inside the tubular structure. Then, from any given sphere inside the tubular structure, a novel 4D iterative key point searching scheme is applied, in which the minimal action map and the Euclidean length map are calculated with a 4D freezing fast marching evolution. A set of 4D key points is obtained during the front propagation process. Finally, by sliding back from each key point to the previous one via the minimal action map until all the key points are visited, we are able to fully obtain global minimizing multi-branch tubular surfaces. An additional immediate benefit of this method is a natural notion of a multi-branch tube’s “central curve” by taking only the first three spatial coordinates of the detected 4D multi-branch curve. Experimental results on 2D/3D medical vascular images illustrate the benefits of this method.

Hua Li, Anthony Yezzi, Laurent Cohen
Multimodal Prior Appearance Models Based on Regional Clustering of Intensity Profiles

Model-based image segmentation requires prior information about the appearance of a structure in the image. Instead of relying on Principal Component Analysis such as in Statistical Appearance Models, we propose a method based on a regional clustering of intensity profiles that does not rely on an accurate pointwise registration. Our method is built upon the Expectation-Maximization algorithm with regularized covariance matrices and includes spatial regularization. The number of appearance regions is determined by a novel model order selection criterion. The prior is described on a reference mesh where each vertex has a probability to belong to several intensity profile classes.

François Chung, Hervé Delingette
3D Medical Image Segmentation by Multiple-Surface Active Volume Models

In this paper, we propose Multiple-Surface Active Volume Models (MSAVM) to extract 3D objects from volumetric medical images. Being able to incorporate spatial constraints among multiple objects, MSAVM is more robust and accurate than the original Active Volume Models [1]. The main novelty in MSAVM is that it has two surface-distance based functions to adaptively adjust the weights of contribution from the image-based region information and from spatial constraints among multiple interacting surfaces. These two functions help MSAVM not only overcome local minima but also avoid leakage. Because of the implicit representation of AVM, the spatial information can be calculated based on the model’s signed distance transform map with very low extra computational cost. The MSAVM thus has the efficiency of the original 3D AVM but produces more accurate results. 3D segmentation results, validation and comparison are presented for experiments on volumetric medical images.

Tian Shen, Xiaolei Huang
Model Completion via Deformation Cloning Based on an Explicit Global Deformation Model

Our main focus is the registration and visualization of a pre-built 3D model from preoperative images to the camera view of a minimally invasive surgery (MIS). Accurate estimation of soft-tissue deformations is key to the success of such a registration. This paper proposes an explicit statistical model to represent global non-rigid deformations. The deformation model built from a reference object is cloned to a target object to guide the registration of the pre-built model, which completes the deformed target object when only a part of the object is naturally visible in the camera view. The registered target model is then used to estimate deformations of its substructures. Our method requires a small number of landmarks to be reconstructed from the camera view. The registration is driven by a small set of parameters, making it suitable for real-time visualization.

Qiong Han, Stephen E. Strup, Melody C. Carswell, Duncan Clarke, Williams B. Seales
Supervised Nonparametric Image Parcellation

Segmentation of medical images is commonly formulated as a supervised learning problem, where manually labeled training data are summarized using a parametric atlas. Summarizing the data alleviates the computational burden at the expense of possibly losing valuable information on inter-subject variability. This paper presents a novel framework for Supervised Nonparametric Image Parcellation (SNIP). SNIP models the intensity and label images as samples of a joint distribution estimated from the training data in a non-parametric fashion. By capitalizing on recently developed fast and robust pairwise image alignment tools, SNIP employs the entire training data to segment a new image via Expectation Maximization. The use of multiple registrations increases robustness to occasional registration failures. We report experiments on 39 volumetric brain MRI scans with manual labels for the white matter, cortex and subcortical structures. SNIP yields better segmentation than state-of-the-art algorithms in multiple regions of interest.

Mert R. Sabuncu, B. T. Thomas Yeo, Koen Van Leemput, Bruce Fischl, Polina Golland
Thermal Vision for Sleep Apnea Monitoring

The present paper proposes a novel methodology to monitor sleep apnea through thermal imaging. First, the nostril region is segmented and it is tracked over time via a network of cooperating probabilistic trackers. Then, the mean thermal signal of the nostril region, carrying the breathing information, is analyzed through wavelet decomposition. The experimental set included 22 subjects (12 men and 10 women). The sleep-disordered incidents were detected by both thermal and standard polysomnographic methodologies. The high accuracy confirms the validity of the proposed approach, and brings non-obtrusive clinical monitoring of sleep disorders within reach.

Jin Fei, Ioannis Pavlidis, Jayasimha Murthy
Tissue Tracking in Thermo-physiological Imagery through Spatio-temporal Smoothing

Accurate tracking of facial tissue in thermal infrared imaging is challenging because it is affected not only by positional but also physiological (functional) changes. This article presents a particle filter tracker driven by a probabilistic template function with both spatial and temporal smoothing components, which is capable of adapting to abrupt positional and physiological changes. The method was tested on tracking facial regions of subjects under varying physiological and environmental conditions in 12 thermal clips. It demonstrated robustness and accuracy, outperforming other strategies. This new method promises improved performance in a host of biomedical applications that involve physiological measurements on the face, like unobtrusive sleep studies.

Yan Zhou, Panagiotis Tsiamyrtzis, Ioannis T. Pavlidis
Depth Data Improves Skin Lesion Segmentation

This paper shows that adding 3D depth information to RGB colour images improves segmentation of pigmented and non-pigmented skin lesions. A region-based active contour segmentation approach using a statistical model within the level-set framework is presented. We consider which properties (e.g., colour, depth, texture) are most discriminative. The experiments show that our proposed method, integrating chromatic and geometric information, produces segmentation results for pigmented lesions close to those of dermatologists, and more consistent and accurate results for non-pigmented lesions.

Xiang Li, Ben Aldridge, Lucia Ballerini, Robert Fisher, Jonathan Rees
A Fully Automatic Random Walker Segmentation for Skin Lesions in a Supervised Setting

We present a method for automatically segmenting skin lesions by initializing the random walker algorithm with seed points whose properties, such as colour and texture, have been learnt via a training set. We leverage the speed and robustness of the random walker algorithm and augment it into a fully automatic method by using supervised statistical pattern recognition techniques. We validate our results by comparing the resulting segmentations to the manual segmentations of an expert over 120 cases, including 100 cases which are categorized as difficult (i.e.: low contrast, heavily occluded, etc.). We achieve an F-measure of 0.95 when segmenting easy cases, and an F-measure of 0.85 when segmenting difficult cases.

Paul Wighton, Maryam Sadeghi, Tim K. Lee, M. Stella Atkins
Backmatter
Metadata
Title
Medical Image Computing and Computer-Assisted Intervention – MICCAI 2009
Edited by
Guang-Zhong Yang
David Hawkes
Daniel Rueckert
Alison Noble
Chris Taylor
Copyright year
2009
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-04271-3
Print ISBN
978-3-642-04270-6
DOI
https://doi.org/10.1007/978-3-642-04271-3