2015 | Book

Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015

18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III

Editors: Nassir Navab, Joachim Hornegger, William M. Wells, Alejandro F. Frangi

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science

About this book

The three-volume set LNCS 9349, 9350, and 9351 constitutes the refereed proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2015, held in Munich, Germany, in October 2015. Based on rigorous peer reviews, the program committee carefully selected 263 revised papers from 810 submissions for presentation in three volumes. The papers have been organized in the following topical sections: quantitative image analysis I: segmentation and measurement; computer-aided diagnosis: machine learning; computer-aided diagnosis: automation; quantitative image analysis II: classification, detection, features, and morphology; advanced MRI: diffusion, fMRI, DCE; quantitative image analysis III: motion, deformation, development and degeneration; quantitative image analysis IV: microscopy, fluorescence and histological imagery; registration: method and advanced applications; reconstruction, image formation, advanced acquisition - computational imaging; modelling and simulation for diagnosis and interventional planning; computer-assisted and image-guided interventions.

Table of Contents

Frontmatter

Quantitative Image Analysis I: Segmentation and Measurement

Frontmatter
Deep Convolutional Encoder Networks for Multiple Sclerosis Lesion Segmentation

We propose a novel segmentation approach based on deep convolutional encoder networks and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that has both convolutional and deconvolutional layers, and combines feature extraction and segmentation prediction in a single model. The joint training of the feature extraction and prediction layers allows the model to automatically learn features that are optimized for accuracy for any given combination of image types. In contrast to existing automatic feature learning approaches, which are typically patch-based, our model learns features from entire images, which eliminates patch selection and redundant calculations at the overlap of neighboring patches and thereby speeds up the training. Our network also uses a novel objective function that works well for segmenting underrepresented classes, such as MS lesions. We have evaluated our method on the publicly available labeled cases from the MS lesion segmentation challenge 2008 data set, showing that our method performs comparably to the state-of-the-art. In addition, we have evaluated our method on the images of 500 subjects from an MS clinical trial and varied the number of training samples from 5 to 250 to show that the segmentation performance can be greatly improved by having a representative data set.

Tom Brosch, Youngjin Yoo, Lisa Y. W. Tang, David K. B. Li, Anthony Traboulsee, Roger Tam
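
A minimal sketch of the whole-image encoder-decoder idea described in the abstract above, assuming PyTorch. The layer configuration is illustrative only, and the overlap-based loss shown here is a stand-in for an imbalance-aware objective; it is not the paper's actual network or objective function.

```python
# Minimal sketch (not the authors' implementation) of a convolutional
# encoder-decoder that maps a multi-channel MR volume to a lesion
# probability map, trained with a class-imbalance-aware soft Dice loss.
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    def __init__(self, in_channels=3):           # e.g. T1, T2, FLAIR
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv3d(16, 1, 1),                   # one lesion logit per voxel
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def soft_dice_loss(logits, target, eps=1e-6):
    """Overlap-based loss; less dominated by the abundant background class
    than plain voxel-wise cross-entropy when lesions are rare."""
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum()
    return 1.0 - (2 * inter + eps) / (probs.sum() + target.sum() + eps)

# toy usage on a random volume with a sparse "lesion" mask
net = TinyEncoderDecoder()
x = torch.randn(1, 3, 32, 32, 32)
y = (torch.rand(1, 1, 32, 32, 32) > 0.98).float()
loss = soft_dice_loss(net(x), y)
loss.backward()
```
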
Unsupervised Myocardial Segmentation for Cardiac MRI

Though unsupervised segmentation was a de facto standard for cardiac MRI segmentation early on, the recent cardiac MRI segmentation literature has favored fully supervised techniques such as dictionary learning and atlas-based methods. However, the benefits of unsupervised techniques, e.g., no need for large amounts of training data and better potential to handle variability in anatomy and image contrast, are more evident with emerging cardiac MR modalities. For example, CP-BOLD is a new MRI technique that has been shown to detect ischemia without any contrast agent, at both stress and rest. Although CP-BOLD looks similar to standard CINE, changes in myocardial intensity patterns and shape across cardiac phases, due to the heart’s motion, the BOLD effect, and artifacts, disrupt the mechanisms that fully supervised segmentation techniques rely on, resulting in a significant drop in segmentation accuracy. In this paper, we present a fully unsupervised technique for segmenting myocardium from the background in both standard CINE MR and CP-BOLD MR. We combine appearance with motion information (obtained via optical flow) in a dictionary learning framework to sparsely represent important features in a low-dimensional space and separate myocardium from background accordingly. Our fully automated method learns background-only models, and a one-class classifier provides the myocardial segmentation. The advantages of the proposed technique are demonstrated on a dataset containing CP-BOLD MR and standard CINE MR image sequences acquired in baseline and ischemic conditions across 10 canine subjects, where our method outperforms state-of-the-art supervised segmentation techniques in CP-BOLD MR and performs on par for standard CINE MR.

Anirban Mukhopadhyay, Ilkay Oksuz, Marco Bevilacqua, Rohan Dharmakumar, Sotirios A. Tsaftaris
Multimodal Cortical Parcellation Based on Anatomical and Functional Brain Connectivity

Reliable cortical parcellation is a crucial step in human brain network analysis since incorrect definition of nodes may invalidate the inferences drawn from the network. Cortical parcellation is typically cast as an unsupervised clustering problem on functional magnetic resonance imaging (fMRI) data, which is particularly challenging given the pronounced noise in fMRI acquisitions. This challenge manifests itself in rather inconsistent parcellation maps generated by different methods. To address the need for robust methodologies to parcellate the brain, we propose a multimodal cortical parcellation framework based on fused diffusion MRI (dMRI) and fMRI data analysis. We argue that incorporating anatomical connectivity information into parcellation is beneficial in suppressing spurious correlations commonly observed in fMRI analyses. Our approach adaptively determines the weighting of anatomical and functional connectivity information in a data-driven manner, and incorporates a neighborhood-informed affinity matrix that was recently shown to provide robustness against noise. To validate, we compare parcellations obtained via normalized cuts on unimodal vs. multimodal data from the Human Connectome Project. Results demonstrate that our proposed method better delineates spatially contiguous parcels with higher test-retest reliability and improves inter-subject consistency.

Chendi Wang, Burak Yoldemir, Rafeef Abugharbieh
Slic-Seg: Slice-by-Slice Segmentation Propagation of the Placenta in Fetal MRI Using One-Plane Scribbles and Online Learning

Segmentation of the placenta from fetal MRI is critical for planning of fetal surgical procedures. Unfortunately, it is made difficult by poor image quality due to sparse acquisition, inter-slice motion, and the widely varying position and orientation of the placenta between pregnant women. We propose a minimally interactive online learning-based method named Slic-Seg to obtain accurate placenta segmentations from MRI. An online random forest is first trained on data coming from scribbles provided by the user in one single selected start slice. This then forms the basis for a slice-by-slice framework that segments subsequent slices before incorporating them into the training set on the fly. The proposed method was compared with its offline counterpart, i.e., without retraining, and with two other widely used interactive methods. Experiments show that our method 1) has a high performance in the start slice even in cases where sparse scribbles provided by the user lead to poor results with the competing approaches, 2) has a robust segmentation in subsequent slices, and 3) results in less variability between users.

Guotai Wang, Maria A. Zuluaga, Rosalind Pratt, Michael Aertsen, Anna L. David, Jan Deprest, Tom Vercauteren, Sebastien Ourselin
GPSSI: Gaussian Process for Sampling Segmentations of Images

Medical image segmentation is often a prerequisite for clinical applications. As an ill-posed problem, it leads to uncertain estimations of the region of interest which may have a significant impact on downstream applications, such as therapy planning. To quantify the uncertainty related to image segmentations, a classical approach is to measure the effect of using various plausible segmentations. In this paper, a method for producing such image segmentation samples from a single expert segmentation is introduced. A probability distribution of image segmentation boundaries is defined as a Gaussian process, which leads to segmentations that are spatially coherent and consistent with the presence of salient borders in the image. The proposed approach outperforms previous generative segmentation approaches, and segmentation samples can be generated efficiently. The sample variability is governed by a parameter which is correlated with a simple DICE score. We show how this approach can have multiple useful applications in the field of uncertainty quantification, and an illustration is provided in radiotherapy planning.

Matthieu Lê, Jan Unkelbach, Nicholas Ayache, Hervé Delingette
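
A hedged illustration of the sampling idea in the abstract above: perturb the signed distance map of one expert mask with spatially correlated Gaussian noise and re-threshold at zero. The paper defines the Gaussian process covariance from image content and salient borders, which is not reproduced here; this NumPy/SciPy sketch only shows the spatially coherent sampling mechanism, with the noise amplitude playing the role of the variability parameter.

```python
# Sample plausible segmentation variants from a single expert mask by
# perturbing its signed distance map with smooth (spatially correlated)
# Gaussian noise and thresholding at zero.
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def signed_distance(mask):
    inside = distance_transform_edt(mask)        # distance to the boundary, inside
    outside = distance_transform_edt(~mask)      # distance to the boundary, outside
    return inside - outside

def sample_segmentations(mask, n_samples=5, corr_sigma=5.0, amplitude=2.0,
                         rng=np.random.default_rng(0)):
    sdm = signed_distance(mask)
    samples = []
    for _ in range(n_samples):
        noise = rng.standard_normal(mask.shape)
        noise = gaussian_filter(noise, corr_sigma)      # spatial coherence
        noise *= amplitude / (noise.std() + 1e-12)      # controls sample variability
        samples.append((sdm + noise) > 0)               # perturbed boundary
    return samples

# toy example: a disc-shaped "expert" segmentation
yy, xx = np.mgrid[:128, :128]
expert = (xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2
variants = sample_segmentations(expert)
```
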
Multi-Level Parcellation of the Cerebral Cortex Using Resting-State fMRI

Cortical parcellation is one of the core steps for identifying the functional architecture of the human brain. Despite the increasing number of attempts at developing parcellation algorithms using resting-state fMRI, there still remain challenges to be overcome, such as generating reproducible parcellations at both single-subject and group levels, while sub-dividing the cortex into functionally homogeneous parcels. To address these challenges, we propose a three-layer parcellation framework which deploys a different clustering strategy at each layer. Initially, the cortical vertices are clustered into a relatively large number of supervertices, which constitutes a high-level abstraction of the rs-fMRI data. These supervertices are combined into a tree of hierarchical clusters to generate individual subject parcellations, which are, in turn, used to compute a groupwise parcellation in order to represent the whole population. Using data collected as part of the Human Connectome Project from 100 healthy subjects, we show that our algorithm segregates the cortex into distinctive parcels at different resolutions with high reproducibility and functional homogeneity at both single-subject and group levels, and can therefore be reliably used for network analysis.

Salim Arslan, Daniel Rueckert
Interactive Multi-organ Segmentation Based on Multiple Template Deformation

We present a new method for the segmentation of multiple organs (2D or 3D) which enables user inputs for smart contour editing. By extending the work of [1] with user-provided hard constraints that can be optimized globally or locally, we propose an efficient and user-friendly solution that ensures consistent feedback to the user interactions. We demonstrate the potential of our approach through a user study with 10 medical imaging experts, aiming at the correction of 4 organ segmentations in 10 CT volumes. We provide quantitative and qualitative analysis of the users’ feedback.

Romane Gauriau, David Lesage, Mélanie Chiaradia, Baptiste Morel, Isabelle Bloch
Segmentation of Infant Hippocampus Using Common Feature Representations Learned for Multimodal Longitudinal Data

Aberrant development of the human brain during the first year after birth is known to cause critical implications in later stages of life. In particular, neuropsychiatric disorders, such as attention deficit hyperactivity disorder (ADHD), have been linked with abnormal early development of the hippocampus. Despite its known importance, studying the hippocampus in infant subjects is very challenging due to the significantly smaller brain size, dynamically varying image contrast, and large across-subject variation. In this paper, we present a novel method for effective hippocampus segmentation by using a multi-atlas approach that integrates the complementary multimodal information from longitudinal T1 and T2 MR images. In particular, considering the highly heterogeneous nature of the longitudinal data, we propose to learn their common feature representations by using hierarchical multi-set kernel canonical correlation analysis (CCA). Specifically, we will learn (1) within-time-point common features, by projecting different modality features of each time point to its own modality-free common space, and (2) across-time-point common features, by mapping all time-point-specific common features to a global common space for all time points. These final features are then employed in patch matching across different modalities and time points for hippocampus segmentation, via label propagation and fusion. Experimental results demonstrate the improved performance of our method over the state-of-the-art methods.

Yanrong Guo, Guorong Wu, Pew-Thian Yap, Valerie Jewells, Weili Lin, Dinggang Shen
Measuring Cortical Neurite-Dispersion and Perfusion in Preterm-Born Adolescents Using Multi-modal MRI

As a consequence of a global increase in rates of extremely preterm birth, predicting the long term impact of preterm birth has become an important focus of research. Cohorts of extremely preterm born subjects studied in the 1990s are now beginning to reach adulthood and the long term structural alterations of disrupted neurodevelopment in gestation can now be investigated, for instance with magnetic resonance (MR) imaging. Disruption to normal development as a result of preterm birth is likely to result in both cerebrovascular and microstructural differences compared to term-born controls. Of note, arterial spin labelled MRI provides a marker of cerebral blood flow, whilst multi-compartment diffusion models provide information on the cerebral microstructure, including that of the cortex. We apply these techniques to a cohort of 19-year-old adolescents consisting of both extremely preterm and term-born individuals and investigate the structural and functional correlations of these MR modalities. Work of this type, revealing the long-term structural and functional differences in preterm cohorts, can help better inform on the likely outcomes of contemporary extremely preterm newborns and provides an insight into the lifelong effects of preterm birth.

Andrew Melbourne, Zach Eaton-Rosen, David Owen, Jorge Cardoso, Joanne Beckmann, David Atkinson, Neil Marlow, Sebastien Ourselin
Interactive Whole-Heart Segmentation in Congenital Heart Disease

We present an interactive algorithm to segment the heart chambers and epicardial surfaces, including the great vessel walls, in pediatric cardiac MRI of congenital heart disease. Accurate whole-heart segmentation is necessary to create patient-specific 3D heart models for surgical planning in the presence of complex heart defects. Anatomical variability due to congenital defects precludes fully automatic atlas-based segmentation. Our interactive segmentation method exploits expert segmentations of a small set of short-axis slice regions to automatically delineate the remaining volume using patch-based segmentation. We also investigate the potential of active learning to automatically solicit user input in areas where segmentation error is likely to be high. Validation is performed on four subjects with double outlet right ventricle, a severe congenital heart defect. We show that strategies asking the user to manually segment regions of interest within short-axis slices yield higher accuracy with less user input than those querying entire short-axis slices.

Danielle F. Pace, Adrian V. Dalca, Tal Geva, Andrew J. Powell, Mehdi H. Moghari, Polina Golland
Automatic 3D US Brain Ventricle Segmentation in Pre-Term Neonates Using Multi-phase Geodesic Level-Sets with Shape Prior

Pre-term neonates born with a low birth weight (< 1500 g) are at increased risk for developing intraventricular hemorrhage (IVH). 3D ultrasound (US) imaging has been used to quantitatively monitor the ventricular volume in IVH neonates, instead of the typical 2D US used clinically, which relies on linear measurements from a single slice and visual estimation to determine ventricular dilation. To translate 3D US imaging into the clinical setting, an accurate segmentation algorithm would be desirable to automatically extract the ventricular system from 3D US images. In this paper, we propose an automatic multi-region segmentation approach for delineating the lateral ventricles of pre-term neonates from 3D US images, which makes use of a multi-phase geodesic level-sets (MP-GLS) segmentation technique via a variational region competition principle and a spatial shape prior derived from pre-segmented atlases. Experimental results using 15 IVH patient images show that the proposed GPU-implemented approach is accurate in terms of the Dice similarity coefficient (DSC), the mean absolute surface distance (MAD), and the maximum absolute surface distance (MAXD). To the best of our knowledge, this paper reports the first study on automatic segmentation of the ventricular system of premature neonatal brains from 3D US images.

Wu Qiu, Jing Yuan, Jessica Kishimoto, Yimin Chen, Martin Rajchl, Eranga Ukwatta, Sandrine de Ribaupierre, Aaron Fenster
Multiple Surface Segmentation Using Truncated Convex Priors

Multiple surface segmentation with mutual interaction between surface pairs is a challenging task in medical image analysis. In this paper we report a fast multiple surface segmentation approach with truncated convex priors for segmentation problems in which abrupt surface distance changes occur between mutually interacting surface pairs. A 3-D graph theoretic framework based on local range search is employed. The use of truncated convex priors makes it possible to capture surface discontinuities and rapid changes of surface distances. The method is also capable of enforcing a minimum distance between a surface pair. The solution for multiple surfaces is obtained by iteratively computing a maximum flow for a subset of the voxel domain at each iteration. The proposed method was evaluated on simultaneous intraretinal layer segmentation of optical coherence tomography images of normal eyes and eyes affected by severe drusen due to age-related macular degeneration. Our experiments demonstrated a statistically significant improvement of segmentation accuracy by using our method compared to the optimal surface detection method using convex priors without truncation (OSDC). The mean unsigned surface positioning error obtained by OSDC for normal eyes (4.47 ± 1.10 μm) was improved to 4.29 ± 1.02 μm, and for eyes with drusen was improved from 7.98 ± 4.02 μm to 5.12 ± 1.39 μm using our method. The proposed approach, with an average computation time of 539 sec, is much faster than the 10014 sec taken by OSDC.

Abhay Shah, Junjie Bai, Zhihong Hu, Srinivas Sadda, Xiaodong Wu
Statistical Power in Image Segmentation: Relating Sample Size to Reference Standard Quality

Ideal reference standards for comparing segmentation algorithms balance trade-offs between the data set size, the costs of reference standard creation and the resulting accuracy. As reference standard quality impacts the likelihood of detecting significant improvements (i.e. the statistical power), we derived a sample size formula for segmentation accuracy comparison using an imperfect reference standard. We expressed this formula as a function of algorithm performance and reference standard quality (e.g. measured with a high quality reference standard on pilot data) to reveal the relationship between reference standard quality and statistical power, addressing key study design questions: (1) How many validation images are needed to compare segmentation algorithms? (2) How accurate should the reference standard be? The resulting formula predicted statistical power to within 2% of Monte Carlo simulations across a range of model parameters. A case study, using the PROMISE12 prostate segmentation data set, shows the practical use of the formula.

Eli Gibson, Henkjan J. Huisman, Dean C. Barratt
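
An illustrative Monte Carlo estimate of statistical power under an imperfect reference standard, in the spirit of the simulations against which the abstract says the closed-form formula was validated. The effect size, variance terms, and paired t-test below are assumptions for the sketch, not the paper's model.

```python
# Fraction of simulated studies of size n_images in which an assumed
# per-case accuracy advantage of one algorithm over another reaches
# significance, with extra spread contributed by reference-standard error.
import numpy as np
from scipy import stats

def simulated_power(n_images, true_diff=0.02, algo_sd=0.03,
                    reference_noise_sd=0.02, alpha=0.05, n_sim=2000,
                    rng=np.random.default_rng(1)):
    hits = 0
    for _ in range(n_sim):
        # paired per-image difference in measured accuracy (e.g. Dice)
        diff = rng.normal(true_diff, algo_sd, n_images)
        diff += rng.normal(0.0, reference_noise_sd, n_images)  # imperfect reference
        hits += stats.ttest_1samp(diff, 0.0).pvalue < alpha
    return hits / n_sim

for n in (20, 50, 100):
    print(n, simulated_power(n))   # power grows with validation set size
```
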
Joint Learning of Image Regressor and Classifier for Deformable Segmentation of CT Pelvic Organs

The segmentation of pelvic organs from CT images is an essential step for prostate radiation therapy. However, due to low tissue contrast and large anatomical variations, it is still challenging to accurately segment these organs from CT images. Among various existing methods, deformable models have gained popularity because it is easy to incorporate shape priors to regularize the final segmentation. Despite this advantage, sensitivity to the initialization is a well-known drawback of deformable models. In this paper, we propose a novel way to guide deformable segmentation, which greatly alleviates the problem caused by poor initialization. Specifically, random forest is adopted to jointly learn an image regressor and classifier for each organ. The image regressor predicts the 3D displacement from any image voxel to the organ boundary based on the local appearance of this voxel. It is used as an external force to drive each vertex of the deformable model (3D mesh) towards the target organ boundary. Once the deformable model is close to the boundary, the organ likelihood map, provided by the learned classifier, is used to further refine the segmentation. In the experiments, we applied our method to segmenting the prostate, bladder and rectum from planning CT images. Experimental results show that our method can achieve competitive performance over existing methods, even with very rough initialization.

Yaozong Gao, Jun Lian, Dinggang Shen
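
A rough sketch, using scikit-learn random forests, of the two learned components described in the abstract above: a regressor predicting a voxel-to-boundary displacement (used as the external force on mesh vertices) and a classifier providing an organ likelihood map for refinement. Features and training data are placeholders, not the paper's appearance features.

```python
# Jointly used regressor (boundary displacement) and classifier (organ
# likelihood), each a random forest over per-voxel appearance features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier

rng = np.random.default_rng(0)
n_voxels, n_features = 2000, 50
X = rng.standard_normal((n_voxels, n_features))   # placeholder appearance features
disp = rng.standard_normal((n_voxels, 3))         # displacement to boundary (dx, dy, dz)
inside = rng.integers(0, 2, n_voxels)             # organ / background label

regressor = RandomForestRegressor(n_estimators=50).fit(X, disp)
classifier = RandomForestClassifier(n_estimators=50).fit(X, inside)

def external_force(vertex_features):
    """Drive each mesh vertex along the predicted displacement vector."""
    return regressor.predict(vertex_features)

likelihood_map = classifier.predict_proba(X)[:, 1]   # used for final refinement
print(external_force(X[:5]).shape, likelihood_map.shape)
```
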
Corpus Callosum Segmentation in MS Studies Using Normal Atlases and Optimal Hybridization of Extrinsic and Intrinsic Image Cues

The corpus callosum (CC) is a key brain structure and change in its size and shape is a focal point in the study of neurodegenerative diseases like multiple sclerosis (MS). A number of automatic methods have been proposed for CC segmentation in magnetic resonance images (MRIs) that can be broadly classified as intensity-based and template-based. Imaging artifacts and signal changes due to pathology often cause errors in intensity-based methods. Template-based methods have been proposed to alleviate these problems. However, registration inaccuracies (local mismatch) can occur when the template image has large intensity and morphological differences from the scan to be segmented, such as when using publicly available normal templates for a diseased population. Accordingly, we propose a novel hybrid segmentation framework that performs optimal, spatially variant fusion of multi-atlas-based and intensity-based priors. Our novel coupled graph-labeling formulation effectively optimizes, on a per-voxel basis, the weights that govern the choice of priors so that intensity priors derived from the subject image are emphasized when spatial priors derived from the registered templates are deemed less trustworthy. This stands in contrast to existing hybrid methods that either ignore local registration errors or alternate between the optimization of fusion weights and segmentation results in an expectation-maximization fashion. We evaluated our method using a public dataset and two large in-house MS datasets and found that it gave more accurate results than those achieved by existing methods for CC segmentation.

Lisa Y. W. Tang, Ghassan Hamarneh, Anthony Traboulsee, David Li, Roger Tam
Brain Tissue Segmentation Based on Diffusion MRI Using ℓ0 Sparse-Group Representation Classification

We present a method for automated brain tissue segmentation based on diffusion MRI. This provides information that is complementary to structural MRI and facilitates fusion of information between the two imaging modalities. Unlike existing segmentation approaches that are based on diffusion tensor imaging (DTI), our method explicitly models the coexistence of various diffusion compartments within each voxel owing to different tissue types and different fiber orientations. This results in improved segmentation in regions with white matter crossings and in regions susceptible to partial volume effects. For each voxel, we tease apart possible signal contributions from white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) with the help of diffusion exemplars, which are representative signals associated with each tissue type. Each voxel is then classified by determining which of the WM, GM, or CSF diffusion exemplar groups explains the signal better with the least fitting residual. Fitting is performed using ℓ0 sparse-group approximation, circumventing various reported limitations of ℓ1 fitting. In addition, to promote spatial regularity, we introduce a smoothing technique that is based on ℓ0 gradient minimization, which can be viewed as the ℓ0 version of total variation (TV) smoothing. Compared with the latter, our smoothing technique, which also incorporates multi-channel WM, GM, and CSF concurrent smoothing, yields marked improvement in preserving boundary contrast and consequently reduces segmentation bias caused by smoothing at tissue boundaries. The results produced by our method are in good agreement with segmentation based on T1-weighted images.

Pew-Thian Yap, Yong Zhang, Dinggang Shen
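
An illustrative reduction of the exemplar-group idea from the abstract above: classify a voxel by the tissue group whose exemplars fit the signal with the smallest residual. For compactness the fit below uses per-group non-negative least squares rather than the paper's ℓ0 sparse-group solver, and the exemplar dictionaries are synthetic.

```python
# Voxel-wise tissue classification by least fitting residual over
# WM / GM / CSF exemplar dictionaries (NNLS as a stand-in solver).
import numpy as np
from scipy.optimize import nnls

def classify_voxel(signal, exemplar_groups):
    """exemplar_groups: dict tissue -> (n_measurements, n_exemplars) array."""
    residuals = {}
    for tissue, D in exemplar_groups.items():
        _, res = nnls(D, signal)          # fit the signal with this group's exemplars
        residuals[tissue] = res
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(0)
n_meas = 60
groups = {t: np.abs(rng.standard_normal((n_meas, 20))) for t in ("WM", "GM", "CSF")}
voxel_signal = groups["WM"] @ np.abs(rng.standard_normal(20)) * 0.05
print(classify_voxel(voxel_signal, groups))   # -> "WM" for this synthetic signal
```
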
A Latent Source Model for Patch-Based Image Segmentation

Despite the popularity and empirical success of patch-based nearest-neighbor and weighted majority voting approaches to medical image segmentation, there has been no theoretical development on when, why, and how well these nonparametric methods work. We bridge this gap by providing a theoretical performance guarantee for nearest-neighbor and weighted majority voting segmentation under a new probabilistic model for patch-based image segmentation. Our analysis relies on a new local property for how similar nearby patches are, and fuses existing lines of work on modeling natural imagery patches and theory for nonparametric classification. We use the model to derive a new patch-based segmentation algorithm that iterates between inferring local label patches and merging these local segmentations to produce a globally consistent image segmentation. Many existing patch-based algorithms arise as special cases of the new algorithm.

George H. Chen, Devavrat Shah, Polina Golland
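
A minimal NumPy sketch of the kind of patch-based weighted majority voting covered by the analysis in the abstract above: each training patch votes for its label with a weight that decays with patch dissimilarity. The Gaussian kernel and bandwidth are assumptions for illustration.

```python
# Weighted majority voting over training patches: votes accumulate per
# label, weighted by similarity between the query patch and each exemplar.
import numpy as np

def weighted_vote(query_patch, train_patches, train_labels, sigma=0.5):
    d2 = ((train_patches - query_patch) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))              # similarity weight per exemplar
    score = np.bincount(train_labels, weights=w)    # accumulate votes per label
    return int(score.argmax())

rng = np.random.default_rng(0)
train_patches = rng.standard_normal((500, 27))      # flattened 3x3x3 patches
train_labels = rng.integers(0, 2, 500)
query = train_patches[3] + 0.05 * rng.standard_normal(27)
print(weighted_vote(query, train_patches, train_labels))  # likely train_labels[3]
```
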
Multi-organ Segmentation Using Shape Model Guided Local Phase Analysis

To improve the accuracy of multi-organ segmentation, we propose a model-based segmentation framework that utilizes the local phase information from paired quadrature filters to delineate the organ boundaries. Conventional local phase analysis based on local orientation has the drawback of outputting the same phases for black-to-white and white-to-black edges. This ambiguity could mislead the segmentation when two organs’ borders are too close. Using the gradient of the signed distance map of a statistical shape model, we could distinguish between these two types of edges and avoid the segmentation region leaking into another organ. In addition, we propose a level-set solution that integrates both the edge-based (represented by local phase) and region-based speed functions. Compared with previously proposed methods, the current method uses local adaptive weighting factors based on the confidence of the phase map (energy of the quadrature filters) instead of a global weighting factor to combine these two forces. In our preliminary studies, the proposed method outperformed conventional methods in terms of accuracy in a number of organ segmentation tasks.

Chunliang Wang, Örjan Smedby
Filling Large Discontinuities in 3D Vascular Networks Using Skeleton- and Intensity-Based Information

Segmentation of vasculature is a common task in many areas of medical imaging, but complex morphology and weak signal often lead to incomplete segmentations. In this paper, we present a new gap filling strategy for 3D vascular networks. The novelty of our approach is to combine both skeleton- and intensity-based information to fill large discontinuities. Our approach also does not make any hypothesis on the network topology, which is particularly important for tumour vasculature due to the chaotic arrangement of vessels within tumours. Synthetic results show that using intensity-based information, in addition to skeleton-based information, can make the detection of large discontinuities more robust. Our strategy is also shown to outperform a classic gap filling strategy on 3D Micro-CT images of preclinical tumour models.

Russell Bates, Laurent Risser, Benjamin Irving, Bartłomiej W. Papież, Pavitra Kannan, Veerle Kersemans, Julia A. Schnabel
A Continuous Flow-Maximisation Approach to Connectivity-Driven Cortical Parcellation

Brain connectivity network analysis is a key step towards understanding the processes behind the brain’s development through ageing and disease. Parcellation of the cortical surface into distinct regions is an essential step in order to construct such networks. Anatomical and random parcellations are typically used for this task, but can introduce a bias and may not be aligned with the brain’s underlying organisation. To tackle this challenge, connectivity-driven parcellation methods have received increasing attention. In this paper, we propose a flexible continuous flow maximisation approach for connectivity driven parcellation that iteratively updates the parcels’ boundaries and centres based on connectivity information and smoothness constraints. We evaluate the method on 25 subjects with diffusion MRI data. Quantitative results show that the method is robust with respect to initialisation (average overlap 82%) and significantly outperforms the state of the art in terms of information loss and homogeneity.

Sarah Parisot, Martin Rajchl, Jonathan Passerat-Palmbach, Daniel Rueckert
A 3D Fractal-Based Approach towards Understanding Changes in the Infarcted Heart Microvasculature

The structure and function of the myocardial microvasculature affect cardiac performance. Quantitative assessment of microvascular changes is therefore crucial to understanding heart disease. This paper proposes the use of 3D fractal-based measures to obtain quantitative insight into the changes of the microvasculature in infarcted and non-infarcted (remote) areas, at different time-points, following myocardial infarction. We used thick slices (~100 μm) of pig heart tissue, stained for blood vessels and imaged with a high-resolution microscope. Firstly, the cardiac microvasculature was segmented using a novel 3D multi-scale multi-thresholding approach. We subsequently calculated: i) fractal dimension to assess the complexity of the microvasculature; ii) lacunarity to assess its spatial organization; and iii) succolarity to provide an estimation of the microcirculation flow. The measures were used for statistical change analysis and classification of the distinct vascular patterns in infarcted and remote areas, demonstrating the potential of the approach to extract quantitative knowledge about infarction-related alterations.

Polyxeni Gkontra, Magdalena M. Żak, Kerri-Ann Norton, Andrés Santos, Aleksander S. Popel, Alicia G. Arroyo
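
A hedged sketch of one of the three measures named in the abstract above, the 3D box-counting fractal dimension of a binary vessel mask; lacunarity, succolarity, and the multi-scale multi-thresholding segmentation are not shown, and the box sizes are arbitrary.

```python
# 3D box counting: the fractal dimension is the slope of log(box count)
# versus log(1 / box size) over a range of box sizes.
import numpy as np

def box_count(mask, s):
    # trim so every box is complete, then count boxes containing any vessel voxel
    z, y, x = (d - d % s for d in mask.shape)
    boxes = mask[:z, :y, :x].reshape(z // s, s, y // s, s, x // s, s)
    return np.count_nonzero(boxes.any(axis=(1, 3, 5)))

def fractal_dimension(mask, sizes=(2, 4, 8, 16)):
    counts = np.array([box_count(mask, s) for s in sizes], dtype=float)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
toy_vessels = rng.random((64, 64, 64)) > 0.97      # sparse synthetic vessel mask
print(fractal_dimension(toy_vessels))
```
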
Segmenting the Uterus in Monocular Laparoscopic Images without Manual Input

Automatically segmenting organs in monocular laparoscopic images is an important and challenging research objective in computer-assisted intervention. For the uterus this is difficult because of high inter-patient variability in tissue appearance and low-contrast boundaries with the surrounding peritoneum. We present a framework to segment the uterus which is completely automatic, requires only a single monocular image, and does not require a 3D model. Our idea is to use a patient-independent uterus detector to roughly localize the organ, which is then used as a supervisor to train a patient-specific organ segmenter. The segmenter uses a physically-motivated organ boundary model designed specifically for illumination in laparoscopy, which is fast to compute and gives strong segmentation constraints. Our segmenter uses a lightweight CRF that is solved quickly and globally with a single graphcut. On a dataset of 220 images our method obtains a mean DICE score of 92.9%.

Toby Collins, Adrien Bartoli, Nicolas Bourdel, Michel Canis
Progressive Label Fusion Framework for Multi-atlas Segmentation by Dictionary Evolution

Accurate segmentation of anatomical structures in medical images is very important in neuroscience studies. Recently, multi-atlas patch-based label fusion methods have achieved many successes; they generally represent each target patch from an atlas patch dictionary in the image domain and then predict the latent label by directly applying the estimated representation coefficients in the label domain. However, due to the large gap between these two domains, the representation coefficients estimated in the image domain may not be optimal for label fusion. To overcome this dilemma, we propose a novel label fusion framework that makes the weighting coefficients eventually optimal for label fusion by progressively constructing a dynamic dictionary in a layer-by-layer manner, where a sequence of intermediate patch dictionaries gradually encodes the transition from the patch representation coefficients in the image domain to the optimal weights for label fusion. Our proposed framework is general and can augment the label fusion performance of current state-of-the-art methods. In our experiments, we apply our proposed method to hippocampus segmentation on the ADNI dataset and achieve more accurate labeling results, compared to the counterpart methods with a single-layer dictionary.

Yantao Song, Guorong Wu, Quansen Sun, Khosro Bahrami, Chunming Li, Dinggang Shen
Multi-atlas Based Segmentation Editing with Interaction-Guided Constraints

We propose a novel multi-atlas based segmentation method to address the editing scenario, when given an incomplete segmentation along with a set of training label images. Unlike previous multi-atlas based methods, which depend solely on appearance features, we incorporate interaction-guided constraints to find appropriate training labels and derive their voting weights. Specifically, we divide user interactions, provided on erroneous parts, into multiple local interaction combinations, and then locally search for the training label patches well-matched with each interaction combination and also the previous segmentation. Then, we estimate the new segmentation through the label fusion of selected label patches that have their weights defined with respect to their respective distances to the interactions. Since the label patches are found to be from different combinations in our method, various shape changes can be considered even with limited training labels and few user interactions. Since our method does not need image information or expensive learning steps, it can be conveniently used for most editing problems. To demonstrate the positive performance, we apply our method to editing the segmentation of three challenging data sets: prostate CT, brainstem CT, and hippocampus MR. The results show that our method outperforms the existing editing methods in all three data sets.

Sang Hyun Park, Yaozong Gao, Dinggang Shen

Quantitative Image Analysis IV: Microscopy, Fluorescence and Histological Imagery

Frontmatter
Improving Convenience and Reliability of 5-ALA-Induced Fluorescent Imaging for Brain Tumor Surgery

This paper presents two features that make neurosurgery with 5-ALA-induced fluorescent imaging more convenient and more reliable. The first is a system concept that switches between white light and excitation light rapidly and allows surgeons to easily locate the tumor region on the normal color image. The second is a color signal processing method that yields a stable fluorescent signal independent of the lighting condition. We developed a prototype system and confirmed that the color image display with the fluorescent region worked well for both the brain of a dead swine and a resected tumor from a human brain. We also performed an experiment with physical phantoms of fluorescent objects and confirmed that the calculated fluorophore density-related values were obtained stably under several lighting conditions.

Hiroki Taniguchi, Noriko Kohira, Takashi Ohnishi, Hiroshi Kawahira, Mikael von und zu Fraunberg, Juha E. Jääskeläinen, Markku Hauta-Kasari, Yasuo Iwadate, Hideaki Haneishi
Analysis of High-throughput Microscopy Videos: Catching Up with Cell Dynamics

We present a novel framework for high-throughput cell lineage analysis in time-lapse microscopy images. Our algorithm ties together two fundamental aspects of cell lineage construction, namely cell segmentation and tracking, via Bayesian inference of dynamic models. The proposed contribution extends Kalman inference by estimating the time-wise cell shape uncertainty in addition to the cell trajectory. These inferred cell properties are combined with the observed image measurements within a fast marching (FM) algorithm to achieve posterior probabilities for cell segmentation and association. Highly accurate results on two different cell-tracking datasets are presented.

A. Arbelle, N. Drayman, M. Bray, U. Alon, A. Carpenter, T. Riklin Raviv
Neutrophils Identification by Deep Learning and Voronoi Diagram of Clusters

Neutrophils are a primary type of immune cells, and their identification is critical in clinical diagnosis of active inflammation. However, in H&E histology tissue slides, the appearances of neutrophils are highly variable due to morphology, staining and locations. Further, the noisy and complex tissue environment causes artifacts resembling neutrophils. Thus, it is challenging to design, in a hand-crafted manner, computerized features that help identify neutrophils effectively. To better characterize neutrophils, we propose to extract their features in a learning manner, by constructing a deep convolutional neural network (CNN). In addition, in clinical practice, neutrophils are identified not only based on their individual appearance, but also on the context formed by multiple related cells. It is not quite straightforward for deep learning to capture precisely the rather complex cell context. Hence, we further propose to combine deep learning with Voronoi diagram of clusters (VDC), to extract needed context. Experiments on clinical data show that (1) the learned hierarchical representation of features by CNN outperforms hand-crafted features on characterizing neutrophils, and (2) the combination of CNN and VDC significantly improves over the state-of-the-art methods for neutrophil identification on H&E histology tissue images.

Jiazhuo Wang, John D. MacKenzie, Rageshree Ramachandran, Danny Z. Chen
U-Net: Convolutional Networks for Biomedical Image Segmentation

There is broad consensus that successful training of deep networks requires many thousands of annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC), we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.

Olaf Ronneberger, Philipp Fischer, Thomas Brox
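
A drastically simplified U-Net-style sketch in PyTorch (one resolution level, padded convolutions) to illustrate the contracting path, the expanding path, and the skip connection that concatenates encoder features; the published network is much deeper and uses unpadded convolutions with cropping.

```python
# Tiny U-Net-like model: encode, downsample, decode, and concatenate the
# skip connection before the final per-pixel classification layer.
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc = block(in_ch, 16)                # contracting path
        self.down = nn.MaxPool2d(2)
        self.bottom = block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)                   # 16 upsampled + 16 skipped channels
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e = self.enc(x)
        b = self.bottom(self.down(e))
        u = self.up(b)
        return self.head(self.dec(torch.cat([u, e], dim=1)))   # skip connection

logits = TinyUNet()(torch.randn(1, 1, 128, 128))   # -> shape (1, 2, 128, 128)
```
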
Co-restoring Multimodal Microscopy Images

We propose a novel microscopy image restoration algorithm capable of co-restoring Phase Contrast and Differential Interference Contrast (DIC) microscopy images captured simultaneously on the same cell dish. Cells with different phase retardation and DIC gradient signals are restored into a single image without the halo artifact from phase contrast or the pseudo 3D shadow-casting effect from DIC. The co-restoration integrates the advantages of the two imaging modalities and overcomes the drawbacks of single-modal image restoration. Evaluated on a dataset of five hundred pairs of phase contrast and DIC images, the co-restoration demonstrates its effectiveness in greatly facilitating cell image analysis tasks such as cell segmentation and classification.

Mingzhong Li, Zhaozheng Yin
A 3D Primary Vessel Reconstruction Framework with Serial Microscopy Images

Three dimensional microscopy images present significant potential to enhance biomedical studies. This paper presents an automated method for quantitative analysis of 3D primary vessel structures with histology whole slide images. With registered microscopy images of liver tissue, we identify primary vessels with an improved variational level set framework at each 2D slide. We propose a Vessel Directed Fitting Energy (VDFE) to provide prior information on vessel wall probability in an energy minimization paradigm. We find the optimal vessel cross-section associations along the image sequence with a two-stage procedure. Vessel mappings are first found between each pair of adjacent slides with a similarity function for four association cases. These bi-slide vessel components are further linked by Bayesian Maximum A Posteriori (MAP) estimation where the posterior probability is modeled as a Markov chain. The efficacy of the proposed method is demonstrated with 54 whole slide microscopy images of sequential sections from a human liver.

Yanhui Liang, Fusheng Wang, Darren Treanor, Derek Magee, George Teodoro, Yangyang Zhu, Jun Kong
Adaptive Co-occurrence Differential Texton Space for HEp-2 Cells Classification

The occurrence of antinuclear antibodies in patient serum is strongly related to autoimmune disease, but their identification suffers from the subjectivity of human evaluation. In this study, we propose an automatic classification system for HEp-2 cells. Within this system, a Co-occurrence Differential Texton (CoDT) feature is designed to represent the local image patches, and a generative model is built to adaptively characterize the CoDT feature space. We further exploit a more discriminant representation for the HEp-2 cell images based on the adaptively partitioned feature space, then feed the representation into a linear Support Vector Machine classifier for identifying the staining patterns. The experimental results on two benchmark datasets, the ICPR12 dataset and the ICIP2013 training dataset, verify that our method markedly outperforms other contemporary approaches for HEp-2 cell classification.

Xiang Xu, Feng Lin, Carol Ng, Khai Pang Leong
Learning to Segment: Training Hierarchical Segmentation under a Topological Loss

We propose a generic and efficient learning framework that is applicable to segment images in which individual objects are mainly discernible by boundary cues. Our approach starts by first hierarchically clustering the image and then explaining the image in terms of a cost-minimal subset of non-overlapping segments. The cost of a segmentation is defined as a weighted sum of features of the selected candidates. This formulation allows us to take into account an extensible set of arbitrary features. The maximally discriminative linear combination of features is learned from training data using a margin-rescaled structured SVM. At the core of our formulation is a novel and simple topology-based structured loss which is a combination of counts and geodesic distance of topological errors (splits, merges, false positives and false negatives) relative to the training set. We demonstrate the generality and accuracy of our approach on three challenging 2D cell segmentation problems, where we improve accuracy compared to the current state of the art.

Jan Funke, Fred A. Hamprecht, Chong Zhang
You Should Use Regression to Detect Cells

Automated cell detection in histopathology images is a hard problem due to the large variance of cell shape and appearance. We show that cells can be detected reliably in images by predicting, for each pixel location, a monotonous function of the distance to the center of the closest cell. Cell centers can then be identified by extracting local extrema of the predicted values. This approach results in a very simple method, which is easy to implement. We show on two challenging microscopy image datasets that our approach outperforms state-of-the-art methods in terms of accuracy, reliability, and speed. We also introduce a new dataset that we will make publicly available.

Philipp Kainz, Martin Urschler, Samuel Schulter, Paul Wohlhart, Vincent Lepetit
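
A hedged sketch of the two ingredients described in the abstract above: constructing a per-pixel regression target that is a monotone function of the distance to the nearest cell centre, and recovering centres from a predicted map by local-maximum search. The regressor itself (a learned model in practice) is omitted, and the decay constant and peak-search window are assumptions.

```python
# Build a distance-derived regression target and detect cell centres as
# local maxima of a predicted value map.
import numpy as np
from scipy.ndimage import distance_transform_edt, maximum_filter

def make_target(shape, centers, alpha=0.1):
    seed = np.ones(shape, bool)
    for r, c in centers:
        seed[r, c] = False
    dist = distance_transform_edt(seed)            # distance to the nearest centre
    return np.exp(-alpha * dist)                   # monotone in the distance

def detect_centers(pred, threshold=0.5, size=7):
    peaks = (pred == maximum_filter(pred, size=size)) & (pred > threshold)
    return np.argwhere(peaks)

target = make_target((100, 100), [(20, 30), (70, 60)])
print(detect_centers(target))                      # recovers the two centres
```
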
A Hybrid Approach for Segmentation and Tracking of Myxococcus Xanthus Swarms

Segmentation and tracking of moving cells in time-lapse images is an important problem in biomedical image analysis. For Myxococcus xanthus, rod-like cells with highly coordinated motion, segmentation and tracking are challenging because cells may touch tightly and form dense swarms that are difficult to identify accurately. Common methods fall under two frameworks: detection association and model evolution. Each framework has its own advantages and disadvantages. In this paper, we propose a new hybrid framework combining these two into one and leveraging their complementary advantages. We also propose an active contour model based on the Ribbon Snake and Chan-Vese model, which is seamlessly integrated with our hybrid framework. Our approach outperforms state-of-the-art cell tracking algorithms in identifying complete cell trajectories, and achieves higher segmentation accuracy than some of the best known cell segmentation algorithms.

Jianxu Chen, Shant Mahserejian, Mark Alber, Danny Z. Chen
Fast Background Removal in 3D Fluorescence Microscopy Images Using One-Class Learning

With the recent advances of optical tissue clearing technology, current imaging modalities are able to image large tissue samples in 3D with single-cell resolution. However, the severe background noise remains a significant obstacle to the extraction of quantitative information from these high-resolution 3D images. Additionally, due to the potentially large size of 3D image data (over 10^11 voxels), processing speed is becoming a major bottleneck that limits the applicability of many known background correction methods. In this paper, we present a fast background removal algorithm for large-volume 3D fluorescence microscopy images. By incorporating unsupervised one-class learning into the percentile filtering approach, our algorithm is able to precisely and efficiently remove background noise even when the sizes and appearances of foreground objects vary greatly. Extensive experiments on real 3D datasets show that our method has superior performance and efficiency compared with the current state-of-the-art background correction method and the rolling ball algorithm in ImageJ.

Lin Yang, Yizhe Zhang, Ian H. Guldner, Siyuan Zhang, Danny Z. Chen
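
A hedged sketch of the percentile-filter background estimation and subtraction step mentioned in the abstract above; the paper's actual contribution, the unsupervised one-class learning that adapts this step to varying foreground sizes, is omitted, and the window size and percentile below are arbitrary assumptions.

```python
# Estimate the local background with a low-percentile filter in a large
# window, then subtract it from the fluorescence volume.
import numpy as np
from scipy.ndimage import percentile_filter

def remove_background(volume, window=(1, 31, 31), percentile=25):
    background = percentile_filter(volume, percentile, size=window)
    return np.clip(volume - background, 0, None)

rng = np.random.default_rng(0)
vol = rng.poisson(5, (8, 128, 128)).astype(float)   # noisy background
vol[:, 60:68, 60:68] += 50                          # bright "cells"
clean = remove_background(vol)
```
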
Motion Representation of Ciliated Cell Images with Contour-Alignment for Automated CBF Estimation

Ciliary beating frequency (CBF) estimation is of high interest for the diagnosis and therapeutic assessment of defective mucociliary clearance diseases. Image-based methods have recently become the focus of accurate CBF measurement. However, the movement of the ciliated cell makes the processing a challenging problem. In this work, we present a registration method for cell movement alignment based on cell contour segmentation. We also propose a filter feature-based ciliary motion representation, which can better characterize the periodic changes of beating cilia. Experimental results on time-sequence microscopy images of human primary ciliated cells demonstrate the accuracy of our method for CBF computation.

Fan Zhang, Yang Song, Siqi Liu, Paul Young, Daniela Traini, Lucy Morgan, Hui-Xin Ong, Lachlan Buddle, Sidong Liu, Dagan Feng, Weidong Cai
Multimodal Dictionary Learning and Joint Sparse Representation for HEp-2 Cell Classification

Use of automatic classification for Indirect Immunofluorescence (IIF) images of HEp-2 cells is gaining increasing interest in Antinuclear Autoantibodies (ANAs) detection. In order to improve the classification accuracy, we propose a multi-modal joint dictionary learning method, to obtain a discriminative and reconstructive dictionary while training a classifier simultaneously. Here, the term ‘multi-modal’ refers to features extracted using different algorithms from the same data set. To utilize information fusion between feature modalities, the algorithm is designed so that sparse codes of all modalities of each sample share the same sparsity pattern. The contribution of this paper is two-fold. First, we propose a new framework for multi-modal fusion at the feature level. Second, we impose an additional constraint on the consistency of sparse coefficients among different modalities of the same class. Extensive experiments are conducted on the ICPR2012 and ICIP2013 HEp-2 data sets. All results confirm the higher accuracy of the proposed method compared with the state of the art.

Ali Taalimi, Shahab Ensafi, Hairong Qi, Shijian Lu, Ashraf A. Kassim, Chew Lim Tan
Cell Event Detection in Phase-Contrast Microscopy Sequences from Few Annotations

We study detecting cell events in phase-contrast microscopy sequences from few annotations. We first detect event candidates from the intensity difference of consecutive frames, and then train an unsupervised novelty detector on these candidates. The novelty detector assigns each candidate a degree of surprise. We annotate a tiny number of candidates chosen according to the novelty detector’s output, and finally train a sparse Gaussian process (GP) classifier. We show that the steepest learning curve is achieved when a collaborative multi-output Gaussian process is used as novelty detector, and its predictive mean and variance are used together to measure the degree of surprise. Following this scheme, we closely approximate the fully-supervised event detection accuracy by annotating only 3% of all candidates. The novelty detector based annotation used here clearly outperforms the studied active learning based approaches.

Melih Kandemir, Christian Wojek, Fred A. Hamprecht
Robust Muscle Cell Quantification Using Structured Edge Detection and Hierarchical Segmentation

Morphological characteristics of muscle cells, such as cross-sectional areas (CSAs), are critical factors in determining muscle health. Automatic muscle cell segmentation is often the first prerequisite for quantitative analysis. In this paper, we propose a novel muscle cell segmentation algorithm that contains two steps: 1) a structured edge detection algorithm that can capture the inherent edge patterns of muscle images, and 2) a hierarchical segmentation algorithm. A set of nested partitions is first constructed, and a best subset selection algorithm is then designed to choose a maximum-weight subset of non-overlapping partitions as the final segmentation result. We experimentally demonstrate that the proposed structured edge detection based hierarchical segmentation algorithm outperforms other state-of-the-art methods for muscle image segmentation.

Fujun Liu, Fuyong Xing, Zizhao Zhang, Mason Mcgough, Lin Yang
Fast Cell Segmentation Using Scalable Sparse Manifold Learning and Affine Transform-Approximated Active Contour

Efficient and effective cell segmentation of neuroendocrine tumor (NET) in whole-slide scanned images is a difficult task due to the large number of cells. Weak or misleading cell boundaries also present significant challenges. In this paper, we propose a fast, high-throughput cell segmentation algorithm that combines top-down shape models and bottom-up image appearance information. A scalable sparse manifold learning method is proposed to model multiple subpopulations of different cell shape priors. Following shape clustering on the manifold, a novel affine transform-approximated active contour model is derived to deform contours without solving a large number of computationally expensive Euler-Lagrange equations, which dramatically reduces the computational time. To the best of our knowledge, this is the first report of a high-throughput cell segmentation algorithm for whole-slide scanned pathology specimens using manifold learning to accelerate active contour models. The proposed approach is tested using 12 NET images, and comparative experiments with state-of-the-art methods demonstrate its superior performance in terms of both efficiency and effectiveness.

Fuyong Xing, Lin Yang
Restoring the Invisible Details in Differential Interference Contrast Microscopy Images

Automated image restoration in microscopy, especially in Differential Interference Contrast (DIC) imaging modality, has attracted increasing attention since it greatly facilitates living cell analysis. Previous work is able to restore the nuclei of living cells, but it is very challenging to reconstruct the unnoticeable cytoplasm details in DIC images. In this paper, we propose to extract the tiny movement information of living cells in DIC images and reveal the hidden details in DIC images by magnifying the cell’s motion as well as attenuating the intensity variation from the background. From our restored images, we can clearly observe the previously-invisible details in DIC images. Experiments on two DIC image datasets demonstrate that the motion-based restoration method can reveal the hidden details of living cells, providing promising results on facilitating cell shape and behavior analysis.

Wenchao Jiang, Zhaozheng Yin
A Novel Cell Detection Method Using Deep Convolutional Neural Network and Maximum-Weight Independent Set

Cell detection is an important topic in biomedical image analysis and is often the prerequisite for subsequent segmentation or classification procedures. In this paper, we propose a novel algorithm for the general cell detection problem: Firstly, a set of cell detection candidates is generated using different algorithms with varying parameters. Secondly, each candidate is assigned a score by a trained deep convolutional neural network (DCNN). Finally, a subset of the best detection results is selected from all candidates to compose the final cell detection result. The subset selection task is formulated as a maximum-weight independent set problem, which seeks the heaviest subset of mutually non-adjacent nodes in a graph. Experiments show that the proposed general cell detection algorithm provides detection results that are dramatically better than those of any individual cell detection algorithm.

Fujun Liu, Lin Yang
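
An illustrative greedy pass for selecting a heavy set of mutually non-overlapping candidates, following the abstract above. The paper formulates and solves this exactly as a maximum-weight independent set; the snippet below is only a greedy approximation with an assumed distance-based adjacency rule and made-up scores.

```python
# Greedy selection of high-scoring, mutually non-adjacent detection candidates.
import numpy as np

def overlaps(a, b, min_dist=10):
    return np.hypot(a[0] - b[0], a[1] - b[1]) < min_dist

def greedy_selection(candidates, scores, min_dist=10):
    order = np.argsort(scores)[::-1]               # heaviest candidates first
    chosen = []
    for i in order:
        if all(not overlaps(candidates[i], candidates[j], min_dist) for j in chosen):
            chosen.append(i)
    return chosen

cands = [(10, 10), (12, 11), (50, 60), (52, 58)]   # candidate centres (y, x)
scores = [0.9, 0.7, 0.8, 0.95]                     # e.g. DCNN scores
print(greedy_selection(cands, scores))             # indices of kept detections
```
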
Beyond Classification: Structured Regression for Robust Cell Detection Using Convolutional Neural Network

Robust cell detection serves as a critical prerequisite for many biomedical image analysis applications. In this paper, we present a novel convolutional neural network (CNN) based structured regression model, which is shown to be able to handle touching cells, inhomogeneous background noise, and large variations in size and shape. The proposed method only requires a few training images with weak annotations (just one click near the center of the object). Given an input image patch, instead of providing a single class label like many traditional methods, our algorithm generates structured outputs (referred to as proximity patches). These proximity patches, which exhibit higher values for pixels near cell centers, are then gathered from all testing image patches and fused to obtain the final proximity map, where the maximum positions indicate the cell centroids. The algorithm is tested using three data sets representing different image stains and modalities. The comparative experiments demonstrate the superior performance of this novel method over the existing state of the art.

Yuanpu Xie, Fuyong Xing, Xiangfei Kong, Hai Su, Lin Yang
Joint Kernel-Based Supervised Hashing for Scalable Histopathological Image Analysis

Histopathology is crucial to diagnosis of cancer, yet its interpretation is tedious and challenging. To facilitate this procedure, content-based image retrieval methods have been developed as case-based reasoning tools. Recently, with the rapid growth of histopathological images, hashing-based retrieval approaches are gaining popularity due to their exceptional scalability. In this paper, we exploit a joint kernel-based supervised hashing (JKSH) framework for fusion of complementary features. Specifically, hashing functions are designed based on linearly combined kernel functions associated with individual features, and supervised information is incorporated to bridge the semantic gap between low-level features and high-level diagnosis. An alternating optimization method is utilized to learn the kernel combination and hashing functions. The obtained hashing functions compress high-dimensional features into tens of binary bits, enabling fast retrieval from a large database. Our approach is extensively validated on thousands of breast-tissue histopathological images by distinguishing between actionable and benign cases. It achieves 88.1% retrieval precision and 91.2% classification accuracy within 14.0 ms query time, comparing favorably with traditional methods.

Menglin Jiang, Shaoting Zhang, Junzhou Huang, Lin Yang, Dimitris N. Metaxas
Deep Voting: A Robust Approach Toward Nucleus Localization in Microscopy Images

Robust and accurate nucleus localization in microscopy images can provide crucial clues for accurate computer-aided diagnosis. In this paper, we propose a convolutional neural network (CNN) based Hough voting method to localize nucleus centroids in microscopy images with heavy clutter and morphologic variations. Our method, which we name deep voting, mainly consists of two steps. (1) Given an input image, our method assigns each local patch several pairs of voting offset vectors, which indicate the positions it votes to, together with the corresponding voting confidences (used to weight each vote); our model can thus be viewed as an implicit Hough-voting codebook. (2) We collect the weighted votes from all testing patches and compute the final voting density map in a way similar to Parzen-window estimation. The final nucleus positions are identified by searching for the local maxima of the density map. Our method requires only minimal annotation effort (just one click near the nucleus center). Experimental results on Neuroendocrine Tumor (NET) microscopy images show the proposed method to be state-of-the-art.
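The vote-accumulation step (step 2 above) can be pictured with the following minimal sketch, which assumes the offset vectors and confidences have already been predicted for each patch; the weighted votes are accumulated and smoothed with a Gaussian kernel in the spirit of Parzen-window estimation, and nucleus candidates are the local maxima of the resulting density.

    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    def voting_density(shape, patch_centers, offsets, confidences, sigma=2.0):
        density = np.zeros(shape, dtype=float)
        for c, offs, confs in zip(patch_centers, offsets, confidences):
            for o, w in zip(offs, confs):
                r, k = np.round(np.asarray(c) + np.asarray(o)).astype(int)
                if 0 <= r < shape[0] and 0 <= k < shape[1]:
                    density[r, k] += w               # accumulate the weighted vote
        return gaussian_filter(density, sigma)       # Parzen-window style smoothing

    def nucleus_candidates(density, size=7, threshold=0.1):
        peaks = (density == maximum_filter(density, size=size)) & (density > threshold)
        return np.argwhere(peaks)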

Yuanpu Xie, Xiangfei Kong, Fuyong Xing, Fujun Liu, Hai Su, Lin Yang
Robust Cell Detection and Segmentation in Histopathological Images Using Sparse Reconstruction and Stacked Denoising Autoencoders

Computer-aided diagnosis (CAD) is a promising tool for accurate and consistent diagnosis and prognosis. Cell detection and segmentation are essential steps for CAD. These tasks are challenging due to variations in cell shapes, touching cells, and cluttered background. In this paper, we present a cell detection and segmentation algorithm using sparse reconstruction with trivial templates and a stacked denoising autoencoder (sDAE). The sparse reconstruction handles shape variations by representing a testing patch as a linear combination of shapes in the learned dictionary. Trivial templates are used to model the touching parts. The sDAE, trained with the original data and their structured labels, is used for cell segmentation. To the best of our knowledge, this is the first study to apply sparse reconstruction and sDAE with structured labels to cell detection and segmentation. The proposed method is extensively tested on two data sets containing more than 3000 cells obtained from brain tumor and lung cancer images. Our algorithm achieves the best performance compared with other state-of-the-art methods.

Hai Su, Fuyong Xing, Xiangfei Kong, Yuanpu Xie, Shaoting Zhang, Lin Yang
Automatic in Vivo Cell Detection in MRI

Due to recent advances in cell-based therapies, non-invasive monitoring of in vivo cells in MRI is gaining enormous interest. However, to date, the monitoring and analysis process is conducted manually and is extremely tedious, especially in the clinical arena. Therefore, this paper proposes a novel computer vision-based learning approach that creates superpixel-based 3D models for candidate spots in MRI, extracts a novel set of superfern features, and utilizes a partition-based Bayesian classifier ensemble to distinguish spots from non-spots. Unlike traditional ferns that utilize pixel-based differences, superferns exploit superpixel averages in computing difference-based features despite the absence of any order in superpixel arrangement. To evaluate the proposed approach, we develop the first labeled database, with a total of more than 16 thousand labels on five in vivo and four in vitro MRI scans. Experimental results show the superiority of our approach in comparison to the two most relevant baselines. To the best of our knowledge, this is the first study to utilize a learning-based methodology for in vivo cell detection in MRI.

Muhammad Jamal Afridi, Xiaoming Liu, Erik Shapiro, Arun Ross

Quantitative Image Analysis III: Motion, Deformation, Development and Degeneration

Frontmatter
Automatic Vessel Segmentation from Pulsatile Radial Distension

Identification of vascular structures from medical images is integral to many clinical procedures. Most vessel segmentation techniques ignore the characteristic pulsatile motion of vessels in their formulation. In a recent effort to automatically segment vessels that are hidden under fat, we motivated the use of the magnitude of local pulsatile motion extracted from surgical endoscopic video. In this paper we propose a new approach that leverages the local orientation, in addition to the magnitude, of motion, and demonstrate that the extended computation of motion vectors can improve the segmentation of vascular structures. We implement our approach using two alternatives to magnitude-only motion estimation: traditional optical flow, and the monogenic signal for fast flow estimation. Our evaluations are conducted on both synthetic phantoms and real ultrasound data, showing improved segmentation results (0.36 increase in DSC and 0.11 increase in AUC) with negligible change in computational performance.

Alborz Amir-Khalili, Ghassan Hamarneh, Rafeef Abugharbieh
A Sparse Bayesian Learning Algorithm for Longitudinal Image Data

Longitudinal imaging studies, where serial (multiple) scans are collected on each individual, are becoming increasingly widespread. The field of machine learning has in general neglected the longitudinal design, since many algorithms are built on the assumption that each datapoint is an independent sample. Thus, the application of general purpose machine learning tools to longitudinal image data can be sub-optimal. Here, we present a novel machine learning algorithm designed to handle longitudinal image datasets. Our approach builds on a sparse Bayesian image-based prediction algorithm. Our empirical results demonstrate that the proposed method can offer a significant boost in prediction performance with longitudinal clinical data.

Mert R. Sabuncu
Descriptive and Intuitive Population-Based Cardiac Motion Analysis via Sparsity Constrained Tensor Decomposition

Analysing and understanding population-specific cardiac function is a challenging task due to the complex dynamics observed in both healthy and diseased subjects and the difficulty of quantitatively comparing the motion of different subjects. Affine parameters extracted from a Polyaffine motion model can be used to represent the 3D motion regionally over time for a group of subjects. We propose to construct from these parameters a 4-way tensor of the rotation, stretch, shear, and translation components of each affine matrix defined in an intuitive coordinate system, stacked per region, for each affine component, over time, and for all subjects. From this tensor, Tucker decomposition can be applied with a sparsity constraint on the core tensor in order to extract a few key, easily interpretable modes for each subject. Using this construction of a data tensor, the tensors of multiple groups can be stacked and collectively decomposed in order to compare and discriminate the motion by analysing the different loadings of each combination of modes for each group. The proposed method was applied to study and compare left ventricular dynamics for a group of healthy adult subjects and a group of adults with repaired Tetralogy of Fallot.

K. McLeod, M. Sermesant, P. Beerbaum, X. Pennec
Liver Motion Estimation via Locally Adaptive Over-Segmentation Regularization

Despite significant advances in the development of deformable registration methods, motion correction of deformable organs such as the liver remains a challenging task. This is due not only to low contrast in liver imaging, but also to the particularly complex motion between scans, primarily owing to patient breathing. In this paper, we address abdominal motion estimation using a novel regularization model that advances the state of the art in liver registration in terms of accuracy. We propose a novel regularization of the deformation field based on spatially adaptive over-segmentation, to better model the physiological motion of the abdomen. Our quantitative analysis of abdominal Computed Tomography and dynamic contrast-enhanced Magnetic Resonance Imaging scans shows a significant improvement over state-of-the-art Demons approaches. This work also demonstrates the feasibility of segmentation-free registration between clinical scans that can inherently preserve sliding motion at the lung and liver boundary interfaces.

Bartlomiej W. Papież, Jamie Franklin, Mattias P. Heinrich, Fergus V. Gleeson, Julia A. Schnabel
Motion-Corrected, Super-Resolution Reconstruction for High-Resolution 3D Cardiac Cine MRI

Cardiac cine MRI with 3D isotropic resolution is challenging as it requires efficient data acquisition and motion management. It is proposed to use a 2D balanced SSFP (steady-state free precession) sequence rather than its 3D version as it provides better contrast between blood and tissue. In order to obtain 3D isotropic images, 2D multi-slice datasets are acquired in different orientations (short axis, horizontal long axis and vertical long axis) while the patient is breathing freely. Image reconstruction is performed in two steps: (i) a motion-compensated reconstruction of each image stack corrects for nonrigid cardiac and respiratory motion; (ii) a super-resolution (SR) algorithm combines the three motion-corrected volumes (with low resolution in the slice direction) into a single volume with isotropic resolution. The SR reconstruction was implemented with two regularization schemes including a conventional one (Tikhonov) and a feature-preserving one (Beltrami). The method was validated in 8 volunteers and 10 patients with breathing difficulties. Image sharpness, as assessed by intensity profiles and by objective metrics based on the structure tensor, was improved with both SR techniques. The Beltrami constraint provided efficient denoising without altering the effective resolution.

Freddy Odille, Aurélien Bustin, Bailiang Chen, Pierre-André Vuissoz, Jacques Felblinger
Motion Estimation of Common Carotid Artery Wall Using an H∞ Filter Based Block Matching Method

The movement of the common carotid artery (CCA) vessel wall has been well accepted as one important indicator of atherosclerosis, but it remains a challenge to estimate the motion of the vessel wall from ultrasound images. In this paper, a robust H∞ filter was incorporated into a block matching (BM) method to estimate the motion of the carotid arterial wall. The performance of our method was compared with the standard BM method, a Kalman filter, and a manual tracing method on carotid artery ultrasound images from 50 subjects. Our results showed that the proposed method has a small estimation error (96 μm for the longitudinal motion and 46 μm for the radial motion) and good agreement with the manual tracing method (94.03% of results fall within the 95% confidence interval for the longitudinal motion and 95.53% for the radial motion). These results demonstrate the effectiveness of our method for the motion estimation of the carotid wall in ultrasound images.

Zhifan Gao, Huahua Xiong, Heye Zhang, Dan Wu, Minhua Lu, Wanqing Wu, Kelvin K. L. Wong, Yuan-Ting Zhang
Gated-tracking: Estimation of Respiratory Motion with Confidence

Image-guided radiation therapy during free-breathing requires estimation of the target position and compensation for its motion. Estimation of the observed motion during therapy needs to be reliable and accurate. In this paper we propose a novel, image sequence-specific confidence measure to predict the reliability of the tracking results. The sequence-specific statistical relationship between the image similarities and the feature displacements is learned from the first breathing cycles. A confidence measure is then assigned to the tracking results during the real-time application phase based on the relative closeness to the expected values. The proposed confidence was tested on the results of a learning-based tracking algorithm. The method was assessed on 9 2D B-mode ultrasound sequences of healthy volunteers under free-breathing. Results were evaluated on a total of 15 selected vessel centers in the liver, achieving a mean tracking accuracy of 0.9 mm. When considering only highly-confident results, the mean (95th percentile) tracking error on the test data was reduced by 12% (16%) while duty cycle remained sufficient (60%), achieving a 95% accuracy below 3 mm, which is clinically acceptable. A similar performance was obtained on 10 2D liver MR sequences, showing the applicability of the method to a different image modality.
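To make the idea concrete, here is a small sketch of a sequence-specific confidence measure of this kind; the linear model and the Gaussian weighting are assumptions for illustration, not the paper's exact formulation. The relationship between image similarity and displacement is fit on the first breathing cycles, and later tracking results are scored by their closeness to the expected value.

    import numpy as np

    def fit_expected_displacement(similarities, displacements):
        # simple least-squares line: displacement ~ a * similarity + b
        a, b = np.polyfit(similarities, displacements, deg=1)
        resid_std = np.std(displacements - (a * similarities + b))
        return a, b, resid_std

    def tracking_confidence(similarity, displacement, model):
        a, b, resid_std = model
        expected = a * similarity + b
        # confidence decays with the deviation from the expected displacement
        return float(np.exp(-0.5 * ((displacement - expected) / (resid_std + 1e-8)) ** 2))

    # toy usage on synthetic "first cycles" data
    rng = np.random.default_rng(2)
    sim = rng.uniform(0.6, 1.0, 50)
    disp = 5.0 * (1.0 - sim) + rng.normal(0, 0.1, 50)
    model = fit_expected_displacement(sim, disp)
    print(tracking_confidence(0.9, 0.5, model))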

Valeria De Luca, Gábor Székely, Christine Tanner
Solving Logistic Regression with Group Cardinality Constraints for Time Series Analysis

We propose an algorithm to distinguish 3D+t images of healthy from diseased subjects by solving logistic regression based on cardinality-constrained group sparsity. This method reduces the risk of overfitting by providing an elegant solution to identifying the anatomical regions most impacted by disease. It also ensures consistent identification across the time series by grouping each image feature across time and counting the number of non-zero groupings. While popular in medical imaging, group cardinality constrained problems are generally solved by relaxing counting with summing over the groupings. We instead solve the original problem by generalizing a penalty decomposition algorithm, which alternates between minimizing a logistic regression function with a regularizer based on the Frobenius norm and enforcing sparsity. Applied to 86 cine MRIs of healthy cases and subjects with Tetralogy of Fallot (TOF), our method correctly identifies regions impacted by TOF and obtains a statistically significantly higher classification accuracy than logistic regression without and with relaxed group sparsity constraints.
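A very rough sketch of such an alternating (penalty decomposition style) scheme is given below; the data layout, step sizes, and the simple top-k group projection are illustrative assumptions rather than the authors' algorithm. Weights are grouped per image feature across time, a gradient step reduces the logistic loss plus a Frobenius coupling term, and a projection keeps only the k groups with the largest norms.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

    def project_group_sparse(W, k):
        """Keep the k rows (feature groups across time) with the largest L2 norm."""
        keep = np.argsort(np.linalg.norm(W, axis=1))[::-1][:k]
        P = np.zeros_like(W)
        P[keep] = W[keep]
        return P

    def group_sparse_logreg(X, y, k, n_iter=200, lr=0.1, rho=1.0):
        n, f, t = X.shape                 # subjects x features x time points
        Xf = X.reshape(n, f * t)
        W = np.zeros((f, t))              # unconstrained weights
        Z = np.zeros((f, t))              # group-sparse copy
        for _ in range(n_iter):
            p = sigmoid(Xf @ W.reshape(-1))
            grad = (Xf.T @ (p - y)).reshape(f, t) / n + rho * (W - Z)
            W -= lr * grad                # gradient step on loss + Frobenius coupling
            Z = project_group_sparse(W, k)
        return Z

    rng = np.random.default_rng(3)
    X = rng.standard_normal((60, 20, 5))
    y = (X[:, 0, :].mean(axis=1) > 0).astype(float)
    W = group_sparse_logreg(X, y, k=3)
    print(np.flatnonzero(np.linalg.norm(W, axis=1)))   # indices of selected feature groups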

Yong Zhang, Kilian M. Pohl
Spatiotemporal Parsing of Motor Kinematics for Assessing Stroke Recovery

Stroke is a major cause of adult disability and rehabilitative training is the prevailing approach to enhance motor recovery. However, the way rehabilitation helps to restore lost motor functions by continuous reshaping of kinematics is still an open research question. We follow the established setup of a rat model before/after stroke in the motor cortex to analyze the subtle changes in hand motor function solely based on video. Since nuances of paw articulation are crucial, mere tracking and trajectory analysis is insufficient. Thus, we propose an automatic spatiotemporal parsing of grasping kinematics based on a max-projection of randomized exemplar classifiers. A large ensemble of these discriminative predictors of hand posture is automatically learned and yields a measure of grasping similarity. This non-parametric distributed representation effectively captures the nuances of hand posture and its deformation over time. A max-margin projection then not only quantifies functional deficiencies, but also back-projects them accurately to specific defects in the grasping sequence to provide neuroscience with a better understanding of the precise effects of rehabilitation. Moreover, evaluation shows that our fully automatic approach is reliable and more efficient than the prevalent manual analysis of the day.

Borislav Antic, Uta Büchler, Anna-Sophia Wahl, Martin E. Schwab, Björn Ommer
Longitudinal Analysis of Pre-term Neonatal Brain Ventricle in Ultrasound Images Based on Convex Optimization

Intraventricular hemorrhage (IVH) is a major cause of brain injury in preterm neonates and leads to dilatation of the ventricles. Measuring ventricular volume quantitatively is an important step in monitoring patients and evaluating treatment options. 3D ultrasound (US) has been developed to monitor ventricle volume as a biomarker for ventricular dilatation and deformation. Ventricle volume as a global indicator, however, does not allow for the precise analysis of local ventricular changes. In this work, we propose a 3D+t spatial-temporal nonlinear registration approach, which is used to analyze the detailed local changes of the ventricles of preterm IVH neonates from 3D US images. In particular, a novel sequential convex/dual optimization is introduced to extract the optimal 3D+t spatial-temporal deformable registration. Experiments with five patients, each with images at 4 time points, showed that the proposed registration approach accurately and efficiently recovered the longitudinal deformation of the ventricles from 3D US images. To the best of our knowledge, this paper reports the first study of the longitudinal analysis of the ventricular system of pre-term newborn brains from 3D US images.

Wu Qiu, Jing Yuan, Jessica Kishimoto, Yimin Chen, Martin Rajchl, Eranga Ukwatta, Sandrine de Ribaupierre, Aaron Fenster
Multi-GPU Reconstruction of Dynamic Compressed Sensing MRI

Magnetic resonance imaging (MRI) is a widely used in-vivo imaging technique that is essential to the diagnosis of disease, but its long acquisition time hinders its wide adoption in time-critical applications, such as emergency diagnosis. Recent advances in compressed sensing (CS) research have provided promising theoretical insights to accelerate the MRI acquisition process, but CS reconstruction also poses computational challenges that make MRI less practical. In this paper, we introduce a fast, scalable parallel CS-MRI reconstruction method that runs on graphics processing unit (GPU) cluster systems for dynamic contrast-enhanced (DCE) MRI. We propose a modified Split-Bregman iteration using a variable splitting method for CS-based DCE-MRI. We also propose a parallel GPU Split-Bregman solver that scales well across multiple GPUs to handle large data sizes. We demonstrate the validity of the proposed method on several synthetic and real DCE-MRI datasets and compare with existing methods.

Tran Minh Quan, Sohyun Han, Hyungjoon Cho, Won-Ki Jeong
Prospective Identification of CRT Super Responders Using a Motion Atlas and Random Projection Ensemble Learning

Cardiac Resynchronisation Therapy (CRT) treats patients with heart failure and electrical dyssynchrony. However, many patients do not respond to therapy. We propose a novel framework for the prospective characterisation of CRT ‘super-responders’ based on motion analysis of the Left Ventricle (LV). A spatio-temporal motion atlas for the comparison of the LV motions of different subjects is built using cardiac MR imaging. Patients likely to present a super-response to the therapy are identified using a novel ensemble learning classification method based on random projections of the motion data. Preliminary results on a cohort of 23 patients show a sensitivity and specificity of 70% and 85%.

Devis Peressutti, Wenjia Bai, Thomas Jackson, Manav Sohal, Aldo Rinaldi, Daniel Rueckert, Andrew King
Motion Compensated Abdominal Diffusion Weighted MRI by Simultaneous Image Registration and Model Estimation (SIR-ME)

Non-invasive characterization of water molecules' mobility variations by quantitative analysis of diffusion-weighted MRI (DW-MRI) signal decay in the abdomen has the potential to serve as a biomarker in gastrointestinal and oncological applications. Accurate and reproducible estimation of the signal decay model parameters is challenging due to the presence of respiratory, cardiac, and peristalsis motion. Independent registration of each b-value image to the b-value = 0 s/mm² image prior to parameter estimation might be sub-optimal because of the low SNR and contrast difference between images of varying b-value. In this work, we introduce a motion-compensated parameter estimation framework that simultaneously solves image registration and model estimation (SIR-ME) problems by utilizing the interdependence of acquired volumes along the diffusion weighting dimension. We evaluated the improvement in model parameter estimation accuracy using 16 in-vivo DW-MRI data sets of Crohn's disease patients by comparing parameter estimates obtained using the SIR-ME model to the parameter estimates obtained by fitting the signal decay model to the acquired DW-MRI images. The proposed SIR-ME model reduced the average root-mean-square error between the observed signal and the fitted model by more than 50%. Moreover, the SIR-ME model estimates discriminate between normal and abnormal bowel loops better than the standard parameter estimates.

Sila Kurugol, Moti Freiman, Onur Afacan, Liran Domachevsky, Jeannette M. Perez-Rossello, Michael J. Callahan, Simon K. Warfield
Fast Reconstruction of Accelerated Dynamic MRI Using Manifold Kernel Regression

We present a novel method for fast reconstruction of dynamic MRI from undersampled k-space data, thus enabling highly accelerated acquisition. The method is based on kernel regression along the manifold structure of the sequence derived directly from k-space data. Unlike compressed sensing techniques which require solving a complex optimisation problem, our reconstruction is fast, taking under 5 seconds for a 30 frame sequence on conventional hardware. We demonstrate our method on 10 retrospectively undersampled cardiac cine MR sequences, showing improved performance over state-of-the-art compressed sensing.
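For intuition, a loose sketch of kernel regression along a sequence manifold is given below; the manifold coordinates are assumed to be given (in the paper they are derived from the k-space data), and a Nadaraya-Watson estimator stands in for the actual reconstruction.

    import numpy as np

    def kernel_regression(frames, embedding, target_idx, bandwidth=1.0):
        """Estimate one frame as a kernel-weighted average of the others,
        with weights given by distances in the manifold embedding."""
        d = np.linalg.norm(embedding - embedding[target_idx], axis=1)
        w = np.exp(-(d ** 2) / (2 * bandwidth ** 2))
        w[target_idx] = 0.0                          # leave the target frame out
        w /= w.sum()
        return np.tensordot(w, frames, axes=(0, 0))

    # toy usage: 30 frames of an 8x8 sequence with 2-D manifold coordinates
    rng = np.random.default_rng(4)
    frames = rng.random((30, 8, 8))
    coords = rng.random((30, 2))
    print(kernel_regression(frames, coords, target_idx=5).shape)   # (8, 8)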

Kanwal K. Bhatia, Jose Caballero, Anthony N. Price, Ying Sun, Jo V. Hajnal, Daniel Rueckert
Predictive Modeling of Anatomy with Genetic and Clinical Data

We present a semi-parametric generative model for predicting anatomy of a patient in subsequent scans following a single baseline image. Such predictive modeling promises to facilitate novel analyses in both voxel-level studies and longitudinal biomarker evaluation. We capture anatomical change through a combination of population-wide regression and a non-parametric model of the subject’s health based on individual genetic and clinical indicators. In contrast to classical correlation and longitudinal analysis, we focus on predicting new observations from a single subject observation. We demonstrate prediction of follow-up anatomical scans in the ADNI cohort, and illustrate a novel analysis approach that compares a patient’s scans to the predicted subject-specific healthy anatomical trajectory.

Adrian V. Dalca, Ramesh Sridharan, Mert R. Sabuncu, Polina Golland
Joint Diagnosis and Conversion Time Prediction of Progressive Mild Cognitive Impairment (pMCI) Using Low-Rank Subspace Clustering and Matrix Completion

Identifying progressive mild cognitive impairment (pMCI) patients and predicting when they will convert to Alzheimer's disease (AD) are important for early medical intervention. Multi-modality and longitudinal data provide a great amount of information for improving diagnosis and prognosis. But these data are often incomplete and noisy. To improve the utility of these data for prediction purposes, we propose an approach to denoise the data, impute missing values, and cluster the data into low-dimensional subspaces for pMCI prediction. We assume that the data reside in a space formed by a union of several low-dimensional subspaces and that similar MCI conditions reside in similar subspaces. Therefore, we first use incomplete low-rank representation (ILRR) and spectral clustering to cluster the data according to their representative low-rank subspaces. At the same time, we denoise the data and impute missing values. Then we utilize a low-rank matrix completion (LRMC) framework to identify pMCI patients and their time of conversion. Evaluations using the ADNI dataset indicate that our method outperforms the conventional LRMC method.
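A minimal low-rank matrix completion sketch (soft-impute style iterative singular value thresholding) is shown below to illustrate the kind of LRMC step referred to above; the subspace clustering, denoising, and diagnosis-specific parts of the method are not reproduced, and the threshold and iteration count are arbitrary.

    import numpy as np

    def soft_impute(M, mask, lam=1.0, n_iter=100):
        """Fill missing entries (mask == False) of M with a low-rank estimate."""
        X = np.where(mask, M, 0.0)
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            s = np.maximum(s - lam, 0.0)           # soft-threshold singular values
            Z = (U * s) @ Vt
            X = np.where(mask, M, Z)               # keep observed entries fixed
        return X

    # toy usage: a low-rank matrix with roughly 30% of entries missing
    rng = np.random.default_rng(5)
    M = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 20))
    mask = rng.random(M.shape) > 0.3
    print(np.abs(soft_impute(M, mask, lam=0.5) - M)[~mask].mean())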

Kim-Han Thung, Pew-Thian Yap, Ehsan Adeli-M, Dinggang Shen
Learning with Heterogeneous Data for Longitudinal Studies

Longitudinal studies are very important to understand cerebral structural changes, especially during the course of pathologies. For instance, in the context of mental health research, it is interesting to evaluate how a certain disease degenerates over time in order to discriminate between pathological and normal time-dependent brain deformations. However, longitudinal data are not easily available, and very often they are characterized by a large variability in both the age of subjects and the time between acquisitions (follow-up time). This leads to heterogeneous data that may affect the overall study. In this paper we propose a learning method to deal with this kind of heterogeneous data by exploiting covariate measures in a Multiple Kernel Learning (MKL) framework. Cortical thickness and white matter volume of the left middle temporal region are collected from each subject. Then, a subject-dependent kernel weighting procedure is introduced in order to correct for covariate effects simultaneously with classification. Experiments are reported for First Episode Psychosis detection, showing very promising results.

Letizia Squarcina, Cinzia Perlini, Marcella Bellani, Antonio Lasalvia, Mirella Ruggeri, Paolo Brambilla, Umberto Castellani
Parcellation of Infant Surface Atlas Using Developmental Trajectories of Multidimensional Cortical Attributes

Cortical surface atlases, equipped with anatomically and functionally defined parcellations, are of fundamental importance in neuroimaging studies. Typically, parcellations of surface atlases are derived based on sulcal-gyral landmarks, which are extremely variable across individuals and poorly matched with microstructural and functional boundaries. Cortical developmental trajectories in infants reflect underlying changes of microstructures, which essentially determine the molecular organization and functional principles of the cortex, thus allowing better definition of developmentally, microstructurally, and functionally distinct regions, compared to conventional sulcal-gyral landmarks. Accordingly, a parcellation of the infant cortical surface atlas was previously proposed, based on the developmental trajectories of cortical thickness in infants, revealing regional patterning of cortical growth. However, cortical anatomy is jointly characterized by biologically-distinct, multidimensional cortical attributes, i.e., cortical thickness, surface area, and local gyrification, each with its own distinct genetic underpinning, cellular mechanism, and developmental trajectory. To date, parcellations based on the development of surface area and local gyrification are still missing. To bridge this critical gap, for the first time, we parcellate an infant cortical surface atlas into distinct regions based solely on the developmental trajectories of surface area and local gyrification, respectively. For each cortical attribute, we first nonlinearly fuse the subject-specific similarity matrices of vertices' developmental trajectories of all subjects into a single matrix, which helps better capture common and complementary information of the population than the conventional method of simply averaging all subjects' matrices. Then, we perform spectral clustering based on this fused matrix. We have applied our method to parcellate an infant surface atlas using the developmental trajectories of surface area and local gyrification from 35 healthy infants, each with up to 7 time points in the first two postnatal years, revealing biologically more meaningful growth patterning than the conventional method.
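The clustering step can be pictured with the following sketch, in which plain averaging of the per-subject similarity matrices stands in for the nonlinear fusion used above, and the fused affinity is partitioned with spectral clustering; the Gaussian similarity and the number of regions are illustrative assumptions.

    import numpy as np
    from sklearn.cluster import SpectralClustering

    def trajectory_similarity(trajectories, sigma=1.0):
        """trajectories: (n_vertices, n_timepoints) for one subject."""
        d2 = ((trajectories[:, None, :] - trajectories[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def parcellate(subject_trajectories, n_regions=8):
        fused = np.mean([trajectory_similarity(t) for t in subject_trajectories], axis=0)
        return SpectralClustering(n_clusters=n_regions, affinity='precomputed',
                                  random_state=0).fit_predict(fused)

    # toy usage: 5 subjects, 200 vertices, 7 time points
    rng = np.random.default_rng(6)
    subjects = [rng.random((200, 7)) for _ in range(5)]
    print(np.bincount(parcellate(subjects)))       # vertices per parcel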

Gang Li, Li Wang, John H Gilmore, Weili Lin, Dinggang Shen
Graph-Based Motion-Driven Segmentation of the Carotid Atherosclerotique Plaque in 2D Ultrasound Sequences

Carotid plaque segmentation in ultrasound images is a crucial step in the assessment of carotid atherosclerosis. However, poor image quality, large deformations, and the lack of texture hinder accurate manual or automated carotid segmentation. We propose a novel, fully automated methodology to identify the plaque region by exploiting kinematic dependencies between the atherosclerotic and the normal arterial wall. The proposed methodology exploits group-wise image registration to recover the deformation field, on which information-theoretic criteria are used to determine dominant motion classes and a map reflecting kinematic dependencies, which is then segmented using Markov random fields. The algorithm was evaluated on 120 cases, for which plaque contours manually traced by an experienced physician were available. Promising evaluation results showed the enhanced performance of the algorithm in automatically segmenting the plaque region, while future experiments on additional datasets are expected to further elucidate its potential.

Aimilia Gastounioti, Aristeidis Sotiras, Konstantina S. Nikita, Nikos Paragios
Cortical Surface-Based Construction of Individual Structural Network with Application to Early Brain Development Study

Analysis of anatomical covariance of cortical morphology in individual subjects plays an important role in the study of human brains. However, approaches for constructing individual structural networks have not been well developed yet. Existing methods based on patch-wise image intensity similarity suffer from several major drawbacks, i.e., 1) violation of cortical topological properties, 2) sensitivity to intensity heterogeneity, and 3) influence by patch size heterogeneity. To overcome these limitations, this paper presents a novel cortical surface-based method for constructing individual structural networks. Specifically, our method first maps the cortical surfaces onto a standard spherical surface atlas and then uniformly samples vertices on the spherical surface as the nodes of the networks. The similarity between any two nodes is computed based on biologically-meaningful cortical attributes (e.g., cortical thickness) in the spherical neighborhoods of their sampled vertices. A connection between two nodes is established only if the similarity is larger than a user-specified threshold. By leveraging spherical cortical surface patches, our method generates biologically-meaningful individual networks that are comparable across ages and subjects. The proposed method has been applied to construct cortical-thickness networks for 73 healthy infants, each with two MRI scans at 0 and 1 year of age. The constructed networks at the two ages were compared using various network metrics, such as degree, clustering coefficient, shortest path length, small-world property, global efficiency, and local efficiency. Experimental results demonstrate that our method can effectively construct individual structural networks and reveal meaningful patterns in early brain development.
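For illustration, a small sketch of the network-construction step is given below under simplified assumptions: node similarities are Pearson correlations between cortical-thickness profiles sampled in each node's neighborhood, edges are kept where the similarity exceeds a user-specified threshold, and standard graph metrics are then read off the result.

    import numpy as np
    import networkx as nx

    def build_individual_network(node_patches, threshold=0.6):
        """node_patches: (n_nodes, patch_size) thickness values per node neighborhood."""
        sim = np.corrcoef(node_patches)            # similarity between node profiles
        np.fill_diagonal(sim, 0.0)
        adj = (sim > threshold).astype(int)        # keep only strong connections
        return nx.from_numpy_array(adj)

    # toy usage and two of the metrics mentioned above
    rng = np.random.default_rng(7)
    G = build_individual_network(rng.random((50, 30)))
    print(nx.density(G), nx.average_clustering(G))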

Yu Meng, Gang Li, Weili Lin, John H Gilmore, Dinggang Shen

Quantitative Image Analysis II: Classification, Detection, Features, and Morphology

Frontmatter
NEOCIVET: Extraction of Cortical Surface and Analysis of Neonatal Gyrification Using a Modified CIVET Pipeline

Cerebral cortical gyration becomes dramatically more complex in the fetal brain during the 3rd trimester of gestation; the process proceeds in a similar fashion ex utero in children who are born prematurely. To quantify this morphological development, it is necessary to extract the interface between gray matter and white matter. We employed the well-established CIVET pipeline to extract this cortical surface, with point correspondence across subjects, using a surface-based spherical registration. We developed a variant of the pipeline, called NEOCIVET, that addresses the well-known problems of poor and temporally-varying gray/white contrast in neonatal MRI. NEOCIVET includes a tissue classification strategy that combines i) local and global contrast features, ii) neonatal template construction based on age-specific subgroups, and iii) masking of non-interesting structures using label-fusion approaches. These techniques replaced modules that might be suboptimal for regional analysis of the poor-contrast neonatal cortex. In the analysis of 43 preterm-born neonates, many with multiple scans (n=65; 28-40 wks PMA at scan), NEOCIVET identified increases in cortical folding over time in numerous cortical regions (mean curvature: +0.004/wk), while folding did not change in major sulci that are known to develop early (corrected p<0.05). The increase in cortical folding was disrupted in the presence of severe types of perinatal WM injury. The proposed pipeline successfully mapped cortical structural development, supporting current models of cerebral morphogenesis.

Hosung Kim, Claude Lepage, Alan C. Evans, A. James Barkovich, Duan Xu
Learning-Based Shape Model Matching: Training Accurate Models with Minimal Manual Input

Recent work has shown that statistical model-based methods lead to accurate and robust results when applied to the segmentation of bone shapes from radiographs. To achieve good performance, model-based matching systems require large numbers of annotations, which can be very time-consuming to obtain. Non-rigid registration can be applied to unlabelled images to obtain correspondences from which models can be built. However, such models are rarely as effective as those built from careful manual annotations, and the accuracy of the registration is hard to measure. In this paper, we show that small numbers of manually annotated points can be used to guide the registration, leading to significant improvements in performance of the resulting model matching system, and achieving results close to those of a model built from dense manual annotations. Placing such sparse points manually is much less time-consuming than a full dense annotation, allowing good models to be built for new bone shapes more quickly than before. We describe detailed experiments on varying the number of sparse points, and demonstrate that manually annotating fewer than 30% of the points is sufficient to create robust and accurate models for segmenting hip and knee bones in radiographs. The proposed method includes a very effective and novel way of estimating registration accuracy in the absence of ground truth.

Claudia Lindner, Jessie Thomson, The arcOGEN Consortium, Tim F. Cootes
Scale and Curvature Invariant Ridge Detector for Tortuous and Fragmented Structures

Segmenting dendritic trees and corneal nerve fibres is challenging due to their uneven and irregular appearance in the respective image modalities. State-of-the-art approaches use hand-crafted features based on local assumptions that are often violated by tortuous and point-like structures, e.g., straight tubular shape. We propose a novel ridge detector, SCIRD, which is simultaneously rotation, scale and curvature invariant, and relaxes shape assumptions to achieve enhancement of target image structures. Experimental results on three datasets show that our approach outperforms state-of-the-art hand-crafted methods on tortuous and point-like structures, especially when captured at low resolution or limited signal-to-noise ratio and in the presence of other non-target structures.

Roberto Annunziata, Ahmad Kheirkhah, Pedram Hamrah, Emanuele Trucco
Boosting Hand-Crafted Features for Curvilinear Structure Segmentation by Learning Context Filters

Combining hand-crafted features and learned filters (i.e. feature boosting) for curvilinear structure segmentation has been proposed recently to capture key structure configurations while limiting the number of learned filters. Here, we present a novel combination method pairing hand-crafted appearance features with learned context filters. Unlike recent solutions based only on appearance filters, our method introduces context information in the filter learning process. Moreover, it reduces the potential redundancy of learned appearance filters that may be reconstructed using a combination of hand-crafted filters. Finally, the use of k-means for filter learning makes it fast and easily adaptable to other datasets, even when large dictionary sizes (e.g. 200 filters) are needed to improve performance. Comprehensive experimental results using 3 challenging datasets show that our combination method outperforms recent state-of-the-art HCFs and a recent combination approach for both performance and computational time.

Roberto Annunziata, Ahmad Kheirkhah, Pedram Hamrah, Emanuele Trucco
Kernel-Based Analysis of Functional Brain Connectivity on Grassmann Manifold

Functional Magnetic Resonance Imaging (fMRI) is widely adopted to measure brain activity, aiming at studying brain function in both healthy and pathological subjects. Discrimination and identification of functional alterations in the connectivity characterizing mental disorders are receiving increasing attention in the neuroscience community. We present a kernel-based method that allows classifying functional networks and characterizing the features that are significantly discriminative between two classes. We use a manifold approach based on Grassmannian geometry and graph Laplacians, which permits learning a set of sub-connectivities that can be used in combination with a Support Vector Machine (SVM) to classify functional connectomes and to identify neuroanatomically different connections. We tested our approach on a real dataset of functional connectomes of subjects affected by Autism Spectrum Disorder (ASD), finding results consistent with models of aberrant connections in ASD.

Luca Dodero, Fabio Sambataro, Vittorio Murino, Diego Sona
A Steering Engine: Learning 3-D Anatomy Orientation Using Regression Forests

Anatomical structures have intrinsic orientations along one or more directions, e.g., the tangent directions at points along a vessel centerline, the normal direction of an inter-vertebral disc or the base-to-apex direction of the heart. Although auto-detection of anatomy orientation is critical to various clinical applications, it is much less explored compared to its peer, "auto-detection of anatomy location". In this paper, we propose a novel and generic algorithm, named the "steering engine", to detect anatomy orientation in a robust and efficient way. Our work is distinguished by three main contributions. (1) Regression-based colatitude angle predictor: we use regression forests to model the highly non-linear mapping between appearance features and anatomy colatitude. (2) Iterative colatitude prediction scheme: we propose an algorithm that iteratively queries colatitude until longitude ambiguity is eliminated. (3) Rotation-invariant integral image: we design a spherical coordinates-based integral image from which Haar-like features of any orientation can be calculated efficiently. We validate our method on three diverse applications (different imaging modalities and organ systems), i.e., vertebral column tracing in CT, aorta tracing in CT and spinal cord tracing in MRI. In all applications (tested on a total of 400 scans), our method achieves a success rate above 90%. Experimental results suggest our method is fast, robust and accurate.

Fitsum A. Reda, Yiqiang Zhan, Xiang Sean Zhou
Automated Localization of Fetal Organs in MRI Using Random Forests with Steerable Features

Fetal MRI is an invaluable diagnostic tool complementary to ultrasound thanks to its high contrast and resolution. Motion artifacts and the arbitrary orientation of the fetus are two main challenges of fetal MRI. In this paper, we propose a method based on Random Forests with steerable features to automatically localize the heart, lungs and liver in fetal MRI. During training, all MR images are mapped into a standard coordinate system that is defined by landmarks on the fetal anatomy and normalized for fetal age. Image features are then extracted in this coordinate system. During testing, features are computed for different orientations with a search space constrained by previously detected landmarks. The method was tested on healthy fetuses as well as fetuses with intrauterine growth restriction (IUGR) from 20 to 38 weeks of gestation. The detection rate was above 90% for all organs of healthy fetuses in the absence of motion artifacts. In the presence of motion, the detection rate was 83% for the heart, 78% for the lungs and 67% for the liver. Growth restriction did not decrease the performance of the heart detection but had an impact on the detection of the lungs and liver. The proposed method can be used to initialize subsequent processing steps such as segmentation or motion correction, as well as automatically orient the 3D volume based on the fetal anatomy to facilitate clinical examination.

Kevin Keraudren, Bernhard Kainz, Ozan Oktay, Vanessa Kyriakopoulou, Mary Rutherford, Joseph V. Hajnal, Daniel Rueckert
A Statistical Model for Smooth Shapes in Kendall Shape Space

This paper proposes a novel framework for learning a statistical shape model from image data, automatically without manual annotations. The framework proposes a generative model for image data of individuals within a group, relying on a model of group shape variability. The framework represents shape as an equivalence class of pointsets and models group shape variability in Kendall shape space. The proposed model captures a novel shape-covariance structure that incorporates shape smoothness, relying on Markov regularization. Moreover, the framework employs a novel model for data likelihood, which lends itself to an inference algorithm of low complexity. The framework infers the model via a novel expectation maximization algorithm that samples smooth shapes in the Riemannian space. Furthermore, the inference algorithm normalizes the data (via similarity transforms) by optimal alignment to (sampled) individual shapes. Results on simulated and clinical data show that the proposed framework learns better-fitting compact statistical models as compared to the state of the art.

Akshay V. Gaikwad, Saurabh J. Shigwan, Suyash P. Awate
A Cross Saliency Approach to Asymmetry-Based Tumor Detection

Identifying regions that break the symmetry within an organ or between paired organs is widely used to detect tumors in various modalities. Algorithms for detecting these regions often align the images and compare the corresponding regions using some set of features. This makes them prone to misalignment errors and inaccuracies in the estimation of the symmetry axis. Moreover, they are susceptible to errors induced by both inter- and intra-image correlations. We use recent advances in image saliency and extend them to handle pairs of images by introducing an algorithm for image cross-saliency. Our cross-saliency approach is able to estimate regions that break both within-organ and paired-organ symmetry without using image alignment, and it can handle large errors in symmetry axis estimation. Since our approach is based on internal patch distributions, it returns only statistically informative regions and can be applied as-is to various modalities. We demonstrate the application of our approach on brain MRI and breast mammograms.

Miri Erihov, Sharon Alpert, Pavel Kisilev, Sharbell Hashoul
Grey Matter Sublayer Thickness Estimation in the Mouse Cerebellum

The cerebellar grey matter morphology is an important feature for studying neurodegenerative diseases such as Alzheimer's disease or Down's syndrome. Its volume or thickness is commonly used as a surrogate imaging biomarker for such diseases. Most studies of grey matter thickness estimation have focused on the cortex, and little attention has been paid to the morphology of the cerebellum. Using ex vivo high-resolution MRI, it is now possible to visualise the different cell layers in the mouse cerebellum. In this work, we introduce a framework to extract the Purkinje layer within the grey matter, enabling the estimation of the thickness of the cerebellar grey matter, the granular layer and the molecular layer from gadolinium-enhanced ex vivo mouse brain MRI. Application to a mouse model of Down's syndrome found reduced cortical and layer thicknesses in the transchromosomic group.

Da Ma, Manuel J. Cardoso, Maria A. Zuluaga, Marc Modat, Nick Powell, Frances Wiseman, Victor Tybulewicz, Elizabeth Fisher, Mark F. Lythgoe, Sébastien Ourselin
Unregistered Multiview Mammogram Analysis with Pre-trained Deep Learning Models

We show two important findings on the use of deep convolutional neural networks (CNN) in medical image analysis. First, we show that CNN models that are pre-trained using computer vision databases (e.g., Imagenet) are useful in medical image applications, despite the significant differences in image appearance. Second, we show that multiview classification is possible without the pre-registration of the input images. Rather, we use the high-level features produced by the CNNs trained in each view separately. Focusing on the classification of mammograms using craniocaudal (CC) and mediolateral oblique (MLO) views and their respective mass and micro-calcification segmentations of the same breast, we initially train a separate CNN model for each view and each segmentation map using an Imagenet pre-trained model. Then, using the features learned from each segmentation map and unregistered views, we train a final CNN classifier that estimates the patient’s risk of developing breast cancer using the Breast Imaging-Reporting and Data System (BI-RADS) score. We test our methodology in two publicly available datasets (InBreast and DDSM), containing hundreds of cases, and show that it produces a volume under ROC surface of over 0.9 and an area under ROC curve (for a 2-class problem - benign and malignant) of over 0.9. In general, our approach shows state-of-the-art classification results and demonstrates a new comprehensive way of addressing this challenging classification problem.

Gustavo Carneiro, Jacinto Nascimento, Andrew P. Bradley
Automatic Craniomaxillofacial Landmark Digitization via Segmentation-Guided Partially-Joint Regression Forest Model

Craniomaxillofacial (CMF) deformities involve congenital and acquired deformities of the head and face. Landmark digitization is a critical step in quantifying CMF deformities. In current clinical practice, CMF landmarks have to be manually digitized on 3D models, which is time-consuming. To date, there is no clinically acceptable method that allows automatic landmark digitization, due to morphological variations among different patients and artifacts of cone-beam computed tomography (CBCT) images. To address these challenges, we propose a segmentation-guided partially-joint regression forest model that automatically digitizes CMF landmarks. In this model, a regression voting strategy is first adopted to localize landmarks by aggregating evidence from context locations, thus potentially relieving the problem caused by image artifacts near the landmark. Second, segmentation is also utilized to resolve inconsistent landmark appearances that are caused by morphological variations among different patients, especially on the teeth. Third, a partially-joint model is proposed to separately localize landmarks based on the coherence of landmark positions to improve digitization reliability. The experimental results show that the accuracy of landmarks automatically digitized using our approach is clinically acceptable.

Jun Zhang, Yaozong Gao, Li Wang, Zhen Tang, James J. Xia, Dinggang Shen
Automatic Feature Learning for Glaucoma Detection Based on Deep Learning

Glaucoma is a chronic and irreversible eye disease in which the optic nerve is progressively damaged, leading to deterioration in vision and quality of life. In this paper, we present Automatic feature Learning for glAucoma Detection based on Deep LearnINg (ALADDIN), which uses a deep convolutional neural network (CNN) for feature learning. Different from the traditional convolutional layer that uses linear filters followed by a nonlinear activation function to scan the input, the adopted network embeds micro neural networks (multilayer perceptrons) with more complex structures to abstract the data within the receptive field. Moreover, a contextualizing deep learning structure is proposed in order to obtain a hierarchical representation of fundus images to discriminate between glaucoma and non-glaucoma patterns, where the network takes the outputs of other CNNs as context information to boost the performance. Extensive experiments are performed on the ORIGA and SCES datasets. The results show an area under the receiver operating characteristic curve (AUC) for glaucoma detection of 0.838 and 0.898 on the two databases, much better than state-of-the-art algorithms. The method could be used for glaucoma diagnosis.

Xiangyu Chen, Yanwu Xu, Shuicheng Yan, Damon Wing Kee Wong, Tien Yin Wong, Jiang Liu
Fast Automatic Vertebrae Detection and Localization in Pathological CT Scans - A Deep Learning Approach

Automatic detection and localization of vertebrae in medical images are highly sought after techniques for computer-aided diagnosis systems of the spine. However, the presence of spine pathologies and surgical implants, and limited field-of-view of the spine anatomy in these images, make the development of these techniques challenging. This paper presents an automatic method for detection and localization of vertebrae in volumetric computed tomography (CT) scans. The method makes no assumptions about which section of the vertebral column is visible in the image. An efficient approach based on deep feed-forward neural networks is used to predict the location of each vertebra using its contextual information in the image. The method is evaluated on a public data set of 224 arbitrary-field-of-view CT scans of pathological cases and compared to two state-of-the-art methods. Our method can perform vertebrae detection at a rate of 96% with an overall run time of less than 3 seconds. Its fast and comparably accurate detection makes it appealing for clinical diagnosis and therapy applications.

Amin Suzani, Alexander Seitel, Yuan Liu, Sidney Fels, Robert N. Rohling, Purang Abolmaesumi
Guided Random Forests for Identification of Key Fetal Anatomy and Image Categorization in Ultrasound Scans

In this paper, we propose a novel machine learning based method to categorize unlabeled fetal ultrasound images. The proposed method guides the learning of a Random Forests classifier to extract features from regions inside the images where meaningful structures exist. The new method utilizes a translation and orientation invariant feature which captures the appearance of a region at multiple spatial resolutions. Evaluated on a large real-world clinical dataset (~30K images from a hospital database), our method showed very promising categorization accuracy (75% top-1 accuracy and 91% top-2 accuracy).

Mohammad Yaqub, Brenda Kelly, A. T. Papageorghiou, J. Alison Noble
Dempster-Shafer Theory Based Feature Selection with Sparse Constraint for Outcome Prediction in Cancer Therapy

As a pivotal task in cancer therapy, outcome prediction is the foundation for tailoring and adapting a treatment plan. In this paper, we propose to use image features extracted from PET together with clinical characteristics. Considering that both information sources are imprecise or noisy, a novel prediction model based on Dempster-Shafer theory is developed. Firstly, a specific loss function with sparse regularization is designed for learning an adaptive dissimilarity metric between feature vectors of labeled patients. By minimizing this loss function, a linear low-dimensional transformation of the input features is achieved; meanwhile, thanks to the sparse penalty, the influence of imprecise input features can also be reduced via feature selection. Finally, the learnt dissimilarity metric is used with the Evidential K-Nearest-Neighbor (EK-NN) classifier to predict the outcome. We evaluated the proposed method on two clinical data sets concerning lung and esophageal tumors, showing good performance.

Chunfeng Lian, Su Ruan, Thierry Denœux, Hua Li, Pierre Vera
Disjunctive Normal Shape and Appearance Priors with Applications to Image Segmentation

The use of appearance and shape priors in image segmentation is known to improve accuracy; however, existing techniques have several drawbacks. Active shape and appearance models require landmark points and assume unimodal shape and appearance distributions. Level set based shape priors are limited to global shape similarity. In this paper, we present novel shape and appearance priors for image segmentation based on an implicit parametric shape representation called the disjunctive normal shape model (DNSM). The DNSM is formed by a disjunction of conjunctions of half-spaces defined by discriminants. We learn shape and appearance statistics at varying spatial scales using nonparametric density estimation. Our method can generate a rich set of shape variations by locally combining training shapes. Additionally, by studying the intensity and texture statistics around each discriminant of our shape model, we construct a local appearance probability map. Experiments carried out on both medical and natural image datasets show the potential of the proposed method.
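The representation itself (not the learning) can be written down in a few lines; the sketch below uses a sigmoid relaxation of each half-space and a noisy-or combination, with randomly chosen discriminants purely for illustration.

    import numpy as np

    def dnsm_indicator(points, W, b):
        """points: (n, d); W: (polytopes, halfspaces, d); b: (polytopes, halfspaces)."""
        # sigmoid relaxation of each half-space constraint w.x + b > 0
        h = 1.0 / (1.0 + np.exp(-(np.einsum('pkd,nd->npk', W, points) + b)))
        conj = h.prod(axis=2)                      # AND within a polytope (conjunction)
        return 1.0 - (1.0 - conj).prod(axis=1)     # OR across polytopes (disjunction)

    # toy usage: 2 polytopes of 4 half-spaces in 2-D
    rng = np.random.default_rng(8)
    W = rng.standard_normal((2, 4, 2))
    b = rng.standard_normal((2, 4))
    print(dnsm_indicator(rng.random((5, 2)), W, b))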

Fitsum Mesadi, Mujdat Cetin, Tolga Tasdizen
Positive Delta Detection for Alpha Shape Segmentation of 3D Ultrasound Images of Pathologic Kidneys

Ultrasound is the mainstay of imaging for pediatric hydronephrosis, which appears as dilation of the renal collecting system. However, its potential as a diagnostic tool is limited by the subjective visual interpretation of radiologists. As a result, the severity of hydronephrosis in children is evaluated by invasive and ionizing diuretic renograms. In this paper, we present the first complete framework for the segmentation and quantification of renal structures in 3D ultrasound images, a difficult and barely studied challenge. In particular, we propose a new active contour-based formulation for the segmentation of the renal collecting system, which mimics the propagation of fluid inside the kidney. For this purpose, we introduce a new positive delta detector for ultrasound images that allows identification of the fat of the renal sinus surrounding the dilated collecting system, creating an alpha shape-based patient-specific positional map. Finally, we incorporate a Gabor-based semi-automatic segmentation of the kidney to create the first complete ultrasound-based framework for the quantification of hydronephrosis. The promising results obtained over a dataset of 13 pathological cases (dissimilarity of 2.8 percentage points in the computation of the volumetric hydronephrosis index) demonstrate the potential utility of the new framework for the non-invasive and non-ionizing assessment of hydronephrosis severity in the pediatric population.

Juan J. Cerrolaza, Christopher Meyer, James Jago, Craig Peters, Marius George Linguraru
Non-local Atlas-guided Multi-channel Forest Learning for Human Brain Labeling

Labeling MR brain images into anatomically meaningful regions is important in many quantitative brain studies. In many existing label fusion methods, appearance information is widely used. Meanwhile, recent progress in computer vision suggests that context features are very useful in identifying an object in a complex scene. In light of this, we propose a novel learning-based label fusion method that uses both low-level appearance features (computed from the target image) and high-level context features (computed from warped atlases or tentative labeling maps of the target image). In particular, we employ a multi-channel random forest to learn the nonlinear relationship between these hybrid features and the target labels (i.e., corresponding to certain anatomical structures). Moreover, to accommodate the high inter-subject variation, we further extend our learning-based label fusion to a multi-atlas scenario, i.e., we train a random forest for each atlas and then obtain the final labeling result according to the consensus of all atlases. We have comprehensively evaluated our method on both the LONI-LPBA40 and IXI datasets, and achieved the highest labeling accuracy compared to state-of-the-art methods in the literature.

Guangkai Ma, Yaozong Gao, Guorong Wu, Ligang Wu, Dinggang Shen
Model Criticism for Regression on the Grassmannian

Reliable estimation of model parameters from data requires a suitable model. In this work, we investigate and extend a recent model criticism approach to evaluate regression models on the Grassmann manifold. Model criticism allows us to check if a model fits and if the underlying model assumptions are justified by the observed data. This is a critical step to check model validity which is often neglected in practice. Using synthetic data we demonstrate that the proposed model criticism approach can indeed reject models that are improper for observed data and that the approach can guide the model selection process. We study two real applications: degeneration of corpus callosum shapes during aging and developmental shape changes in the rat calvarium. Our experimental results suggest that the three tested regression models on the Grassmannian (equivalent to linear, time-warped, and cubic-spline regression in ℝⁿ, respectively) can all capture changes of the corpus callosum, but only the cubic-spline model is appropriate for shape changes of the rat calvarium. While our approach is developed for the Grassmannian, the principles are applicable to smooth manifolds in general.
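For context, regression on the Grassmannian operates on subspaces rather than vectors, and distances between subspaces are computed from principal angles. The snippet below is a small numpy-only sketch of that geodesic distance, assuming each point is represented by a matrix with orthonormal columns; it is background material, not part of the model criticism procedure itself.

import numpy as np

def grassmann_distance(U1, U2):
    """Geodesic distance between two k-dimensional subspaces of R^n, each given
    by an n-by-k matrix with orthonormal columns. The principal angles are the
    arccosines of the singular values of U1^T U2."""
    s = np.linalg.svd(U1.T @ U2, compute_uv=False)
    theta = np.arccos(np.clip(s, -1.0, 1.0))   # principal angles in [0, pi/2]
    return np.linalg.norm(theta)

# Two random 2-dimensional subspaces of R^5 (orthonormalized via QR)
rng = np.random.default_rng(0)
U1, _ = np.linalg.qr(rng.normal(size=(5, 2)))
U2, _ = np.linalg.qr(rng.normal(size=(5, 2)))
print(grassmann_distance(U1, U2))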

Yi Hong, Roland Kwitt, Marc Niethammer
Structural Edge Detection for Cardiovascular Modeling

Computational simulations provide detailed hemodynamics and physiological data that can assist in clinical decision-making. However, accurate cardiovascular simulations require complete 3D models constructed from image data. Though edge localization is a key step in pinpointing vessel walls in many segmentation tools, the edge detection algorithms widely utilized by the medical imaging community have remained static. In this paper, we propose a novel approach to medical image edge detection by adopting the powerful structured forest detector and extending its application to the medical imaging domain. First, we specify an effective set of medical imaging driven features. Second, we directly incorporate an adaptive prior to create a robust three-dimensional edge classifier. Last, we boost our accuracy through an intelligent sampling scheme that only samples areas of importance to edge fidelity. Through experimentation, we demonstrate that the proposed method outperforms widely used edge detectors and probabilistic boosting tree edge classifiers and is robust to error in a priori information.
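The sketch below conveys the flavor of the approach with a deliberately simplified per-voxel classifier (a plain random forest in scikit-learn rather than a structured forest): image-derived channels are combined with a noisy prior channel, and training samples are drawn preferentially from voxels near the true boundary. The synthetic volume, feature choices, and sampling ratio are all illustrative assumptions.

import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic 3D image with a bright "vessel" region and its ground-truth edge map
img = rng.normal(size=(32, 32, 32))
img[10:22, 10:22, 10:22] += 3.0
truth = np.zeros(img.shape, dtype=bool); truth[10:22, 10:22, 10:22] = True
edges = truth ^ ndimage.binary_erosion(truth)          # boundary voxels = positive class

# Per-voxel feature channels: raw intensity, gradient magnitude, and a
# hypothetical prior channel (here: noisy distance to the true boundary,
# standing in for distance to an initial model surface)
grad = ndimage.gaussian_gradient_magnitude(img, sigma=1.0)
prior = ndimage.distance_transform_edt(~edges) + rng.normal(scale=2.0, size=img.shape)
X = np.stack([img.ravel(), grad.ravel(), prior.ravel()], axis=1)
y = edges.ravel()

# "Intelligent" sampling: keep all edge voxels plus a random subset of background
pos = np.flatnonzero(y)
neg = rng.choice(np.flatnonzero(~y), size=4 * len(pos), replace=False)
idx = np.concatenate([pos, neg])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[idx], y[idx])
edge_prob = clf.predict_proba(X)[:, 1].reshape(img.shape)
print(edge_prob.max())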

Jameson Merkow, Zhuowen Tu, David Kriegman, Alison Marsden
Sample Size Estimation for Outlier Detection

The study of brain disorders that display spatially heterogeneous patterns of abnormalities has led to a number of techniques aimed at providing subject-specific abnormality (SSA) maps. One popular method to identify SSAs is to calculate, for a set of regions, the z-score between a feature in a test subject and the distribution of this feature in a normative atlas, and to identify regions exceeding a threshold. While sample size estimation and power calculations are well understood for group comparisons, describing the confidence interval of a z-score threshold, or estimating the sample size needed for a desired threshold uncertainty, has not been thoroughly considered in SSA analyses. In this paper, we propose a method to quantify the impact of the size and distribution properties of the control data on the uncertainty of the z-score threshold. The main idea is that a confidence interval for the z-score threshold can be approximated by Gaussian error propagation of the uncertainties associated with the sample mean and standard deviation. In addition, we provide a method to estimate the sample size of the control data required for a given z-score threshold and a desired confidence interval width. We provide both parametric and resampling methods to estimate these confidence intervals and apply our techniques to establish the confidence of SSA maps of diffusion data in subjects with traumatic brain injury.
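A minimal sketch of the central idea follows, using the standard approximations Var(mean) = sd²/n and Var(sd) ≈ sd²/(2(n−1)); propagating these through z = (x − mean)/sd gives Var(z) ≈ 1/n + z²/(2(n−1)). The exact formulation and the resampling variant in the paper may differ, so treat this as an assumption-laden illustration.

import numpy as np

def zscore_ci_halfwidth(z, n, alpha_z=1.96):
    """Approximate 95% CI half-width of a z-score threshold z when the normative
    mean and SD are estimated from n controls, via Gaussian error propagation:
    Var(z) ~ Var(mean)/sd^2 + z^2 * Var(sd)/sd^2 = 1/n + z^2 / (2*(n-1))."""
    return alpha_z * np.sqrt(1.0 / n + z**2 / (2.0 * (n - 1)))

def sample_size_for_halfwidth(z, target_halfwidth, alpha_z=1.96):
    """Smallest n such that the propagated CI half-width of threshold z falls
    below the desired value (simple search; a closed form also exists under
    the approximation n - 1 ~ n)."""
    n = 3
    while zscore_ci_halfwidth(z, n, alpha_z) > target_halfwidth:
        n += 1
    return n

print(zscore_ci_halfwidth(z=2.0, n=30))             # threshold uncertainty with 30 controls
print(sample_size_for_halfwidth(z=2.0, target_halfwidth=0.3))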

Timothy Gebhard, Inga Koerte, Sylvain Bouix
Multi-scale Heat Kernel Based Volumetric Morphology Signature

We introduce a novel multi-scale heat kernel based regional shape statistical approach that may improve statistical power in structural analysis. The analysis is driven by the graph spectrum and heat kernel theory, capturing volumetric geometric information from a constructed tetrahedral mesh. To capture profound volumetric changes, we first use the volumetric Laplace-Beltrami operator to determine the point-pair correspondence between two boundary surfaces by computing streamlines in the tetrahedral mesh. Second, we propose a multi-scale volumetric morphology signature that describes the transition probability of a random walk between the point pairs, reflecting the inherent geometric characteristics. Third, a point distribution model is applied to reduce the dimensionality of the volumetric morphology signatures and generate the internal structure features. These multi-scale, physics-based internal structure features may bring stronger statistical power than traditional methods for volumetric morphology analysis. To validate our method, we apply support vector machines to classify synthetic data and brain MR images. In our experiments, the proposed approach outperformed FreeSurfer thickness features in classifying Alzheimer's disease patients and normal control subjects.
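To illustrate the spectral ingredient, the sketch below computes a multi-scale heat-kernel signature HKS(x, t) = Σᵢ exp(−λᵢ t) φᵢ(x)² from the eigen-decomposition of a plain graph Laplacian; the paper instead uses a volumetric Laplace-Beltrami operator on a tetrahedral mesh and adds point-pair correspondences and a point distribution model, so the adjacency matrix and scale range here are purely illustrative.

import numpy as np
from scipy.sparse import csgraph
from scipy.linalg import eigh

# Hypothetical mesh connectivity given as a small symmetric adjacency matrix
rng = np.random.default_rng(0)
A = (rng.random((30, 30)) < 0.2).astype(float)
A = np.maximum(A, A.T); np.fill_diagonal(A, 0.0)

L = csgraph.laplacian(A)                        # combinatorial graph Laplacian
lam, phi = eigh(L)                              # eigenvalues / eigenvectors

def heat_kernel_signature(lam, phi, t):
    """HKS(x, t) = sum_i exp(-lam_i * t) * phi_i(x)^2 (heat retained at x after time t)."""
    return (np.exp(-lam * t) * phi**2).sum(axis=1)

# Multi-scale signature: stack HKS over a range of diffusion times
scales = np.logspace(-2, 1, num=6)
signature = np.stack([heat_kernel_signature(lam, phi, t) for t in scales], axis=1)
print(signature.shape)                          # (30 vertices, 6 scales)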

Gang Wang, Yalin Wang
Structural Brain Mapping

Brain mapping, which flattens the convoluted cortical surface and exposes its hidden geometric details on a canonical domain, plays an important role in neuroscience and medical imaging. Existing methods such as conformal mappings do not consider the anatomical atlas network structure, and anatomical landmarks, e.g., gyral curves, appear highly curved on the canonical domains. With such maps, it is difficult to recognize connectivity patterns and compare atlases. In this work, we present a novel brain mapping method to efficiently visualize the convoluted and partially invisible cortical surface through a well-structured view, called the structural brain mapping. In computation, the brain atlas network ("node" - the junction of anatomical cortical regions, "edge" - the connecting curve between cortical regions) is first mapped to a planar straight-line graph based on Tutte graph embedding, where all edges are crossing-free and all faces are convex polygons; the brain surface is then mapped to the convex shape domain based on a harmonic map with linear constraints. Experiments on two brain MRI databases, including 250 scans with automatic atlases processed by FreeSurfer and 40 scans with manual atlases from LPBA40, demonstrate the efficiency and efficacy of the algorithm and its practicability for visualizing and comparing brain cortical anatomical structures.
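As background for the first step, the following is a small numpy sketch of Tutte's barycentric embedding: boundary vertices are pinned to a convex polygon and each interior vertex is placed at the average of its neighbors by solving a linear system. The toy graph is hypothetical; constructing the brain atlas network and the subsequent constrained harmonic map are beyond this sketch.

import numpy as np

def tutte_embedding(adj, boundary, boundary_pos):
    """Tutte/barycentric embedding: boundary vertices are pinned to a convex
    polygon and every interior vertex is the average of its neighbors, which
    amounts to solving a linear system for the interior coordinates."""
    n = adj.shape[0]
    interior = np.setdiff1d(np.arange(n), boundary)
    deg = adj.sum(axis=1)
    L = np.diag(deg) - adj                       # combinatorial graph Laplacian
    A = L[np.ix_(interior, interior)]
    B = -L[np.ix_(interior, boundary)]
    pos = np.zeros((n, 2))
    pos[boundary] = boundary_pos
    pos[interior] = np.linalg.solve(A, B @ boundary_pos)
    return pos

# Toy example: a square boundary (nodes 0-3) with one interior node (4)
adj = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 4), (2, 4), (3, 4)]:
    adj[i, j] = adj[j, i] = 1.0
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
print(tutte_embedding(adj, boundary=np.array([0, 1, 2, 3]), boundary_pos=square))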

Muhammad Razib, Zhong-Lin Lu, Wei Zeng
Backmatter
Metadata
Title: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015
Editors: Nassir Navab, Joachim Hornegger, William M. Wells, Alejandro F. Frangi
Copyright Year: 2015
Electronic ISBN: 978-3-319-24574-4
Print ISBN: 978-3-319-24573-7
DOI: https://doi.org/10.1007/978-3-319-24574-4
