
2015 | Book

Machine Learning in Medical Imaging

6th International Workshop, MLMI 2015, Held in Conjunction with MICCAI 2015, Munich, Germany, October 5, 2015, Proceedings


About this Book

This book constitutes the proceedings of the 6th International Workshop on Machine Learning in Medical Imaging, MLMI 2015, held in conjunction with MICCAI 2015, in Munich in October 2015.

The 40 full papers presented in this volume were carefully reviewed and selected from 69 submissions. The workshop focuses on major trends and challenges in the area of machine learning in medical imaging and presents work that aims to identify new cutting-edge techniques and their use in medical imaging.

Table of Contents

Frontmatter
Segmentation of Right Ventricle in Cardiac MR Images Using Shape Regression

Accurate and automatic segmentation of the right ventricle is challenging due to its complex anatomy and the large shape variation observed between patients. In this paper, we explore the ability of shape regression to segment the right ventricle in the presence of large shape variation among patients. We propose a robust and efficient cascaded shape regression method which iteratively learns the final shape from a given initial shape. We use gradient boosted regression trees to learn each regressor in the cascade, taking advantage of their supervised feature selection mechanism. A novel data augmentation method is proposed to generate synthetic training samples to improve regressor performance. In addition, a robust fusion method is proposed to reduce the variance in the predictions given by different initial shapes, which is a major drawback of cascaded regression based methods. The proposed method is evaluated on an image set of 45 patients and shows high segmentation accuracy, with a Dice metric of $$0.87\pm 0.06$$. A comparative study shows that our proposed method performs better than state-of-the-art multi-atlas label fusion based segmentation methods.

Suman Sedai, Pallab Roy, Rahil Garnavi
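The cascade idea above can be pictured in a few lines. This is an illustrative toy, not the authors' method: each stage here simply learns the mean residual over training shapes with a damping factor, standing in for a gradient boosted regression tree conditioned on image features.

```python
# Toy cascaded shape regression: each stage stores a correction learned
# on training data, and the stages are applied one after another at
# test time.  A stage here is just the damped mean residual over the
# training set -- a stand-in for a feature-conditioned regressor.

def train_cascade(inits, targets, n_stages=5, lr=0.5):
    shapes = [list(s) for s in inits]
    stages = []
    for _ in range(n_stages):
        n, d = len(shapes), len(shapes[0])
        mean_res = [sum(t[j] - s[j] for s, t in zip(shapes, targets)) / n
                    for j in range(d)]
        update = [lr * r for r in mean_res]      # damped correction
        stages.append(update)
        shapes = [[s[j] + update[j] for j in range(d)] for s in shapes]
    return stages

def apply_cascade(stages, init):
    shape = list(init)
    for update in stages:
        shape = [s + u for s, u in zip(shape, update)]
    return shape

stages = train_cascade([[0.0, 0.0], [1.0, 1.0]], [[2.0, 2.0], [3.0, 3.0]])
print(apply_cascade(stages, [0.0, 0.0]))   # approaches the mean offset of 2
```

Each later stage corrects what the earlier stages left over, which is the essential structure of cascaded regression.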
Visual Saliency Based Active Learning for Prostate MRI Segmentation

We propose an active learning (AL) approach for prostate segmentation from magnetic resonance (MR) images. Our label query strategy is inspired by the principles of visual saliency, which has similar considerations for choosing the most salient region. These similarities are encoded in a graph using classification maps and low-level features. Random walks identify the most informative node, which is equivalent to the label query sample in AL. Experimental results on the MICCAI 2012 Prostate Segmentation Challenge show the superior performance of our approach over conventional methods using fully supervised learning.

Dwarikanath Mahapatra, Joachim M. Buhmann
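The random-walk query step can be illustrated on a toy affinity graph. The graph, its weights, and the use of the walk's stationary distribution as the "informativeness" score are assumptions made for illustration; the paper's actual graph encodes saliency and classification-map features.

```python
# Toy random-walk query selection: nodes are candidate samples, edge
# weights encode pairwise affinity, and the stationary distribution of
# the walk ranks the nodes; the top-ranked node is queried for a label.

def stationary(weights, n_iter=200):
    n = len(weights)
    # row-normalise the affinities into transition probabilities
    P = []
    for row in weights:
        s = sum(row)
        P.append([w / s for w in row])
    pi = [1.0 / n] * n
    for _ in range(n_iter):                      # power iteration
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

W = [[0, 2, 1],
     [2, 0, 2],
     [1, 2, 0]]
pi = stationary(W)
query = max(range(len(pi)), key=lambda j: pi[j])
print(query)   # the best-connected node is queried first
```

For an undirected graph the stationary probability of a node is proportional to its total edge weight, so the most strongly connected sample is selected.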
Soft-Split Random Forest for Anatomy Labeling

Random Forest (RF) has been widely used in learning-based labeling. In RF, each sample is directed from the root to a leaf based on the decisions made in the interior nodes, also called splitting nodes. The splitting nodes assign a testing sample to either the left or the right child based on the learned splitting function. The final prediction is determined as the average of the label probability distributions stored in all arrived leaf nodes. For ambiguous testing samples, which often lie near the splitting boundaries, the conventional splitting function, also referred to as the hard split function, tends to make wrong assignments, hence leading to wrong predictions. To overcome this limitation, we propose a novel soft-split random forest (SSRF) framework to improve the reliability of node splitting and, ultimately, the accuracy of classification. Specifically, a soft split function is employed to assign a testing sample to both the left and the right child nodes with certain probabilities, which effectively reduces the influence of wrong node assignments on prediction accuracy. As a result, each testing sample can arrive at multiple leaf nodes, and their respective results can be fused to obtain the final prediction according to the weights accumulated along the path from the root node to each leaf node. Besides, considering the importance of context information, we also adopt a Haar-feature based context model to iteratively refine the classification map. We have comprehensively evaluated our method on two public datasets, for labeling the hippocampus in MR images and for labeling three organs in Head & Neck CT images, respectively. Compared with the hard-split RF (HSRF), our method achieved a notable improvement in labeling accuracy.

Guangkai Ma, Yaozong Gao, Li Wang, Ligang Wu, Dinggang Shen
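The soft-split traversal described above can be sketched for a single tree. The sigmoid soft assignment and the toy tree below are illustrative assumptions, not the authors' learned splitting functions.

```python
import math

# Soft-split traversal of one decision tree: instead of a hard
# left/right decision at x[feat] < thresh, a sigmoid of the signed
# margin sends the sample to BOTH children with complementary
# probabilities; leaf distributions are fused by path probability.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def soft_predict(node, x, temperature=1.0, path_prob=1.0):
    if "leaf" in node:                   # leaf stores a class distribution
        return {c: path_prob * p for c, p in node["leaf"].items()}
    margin = (node["thresh"] - x[node["feat"]]) / temperature
    p_left = sigmoid(margin)             # close to 0.5 near the boundary
    out = {}
    for child, p in (("left", p_left), ("right", 1.0 - p_left)):
        for c, q in soft_predict(node[child], x, temperature,
                                 path_prob * p).items():
            out[c] = out.get(c, 0.0) + q
    return out

tree = {"feat": 0, "thresh": 0.0,
        "left":  {"leaf": {"A": 0.9, "B": 0.1}},
        "right": {"leaf": {"A": 0.2, "B": 0.8}}}

print(soft_predict(tree, [0.0]))   # ambiguous sample: blends both leaves
```

A sample right on the boundary receives an average of both children, while a sample far from the boundary recovers the ordinary hard-split behaviour.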
A New Image Data Set and Benchmark for Cervical Dysplasia Classification Evaluation

Cervical cancer is one of the most common types of cancer in women worldwide. Most deaths from cervical cancer occur in less developed areas of the world. In this work, we introduce a new image dataset along with ground truth diagnoses for evaluating image-based cervical disease classification algorithms. We collect a large number of cervigram images from a database provided by the US National Cancer Institute. From these images, we extract three types of complementary image features: the Pyramid Histogram in L*A*B* color space (PLAB), the Pyramid Histogram of Oriented Gradients (PHOG), and the Pyramid Histogram of Local Binary Patterns (PLBP). PLAB captures color information, PHOG encodes edge and gradient information, and PLBP extracts texture information. Using these features, we run seven classic machine-learning algorithms to differentiate images of high-risk patient visits from those of low-risk patient visits. Extensive experiments are conducted on both balanced and imbalanced subsets of the data to compare the seven classifiers. These results can serve as a baseline for future research in cervical dysplasia classification using images. The image-based classifiers also outperform several other screening tests on the same datasets.

Tao Xu, Cheng Xin, L. Rodney Long, Sameer Antani, Zhiyun Xue, Edward Kim, Xiaolei Huang
Machine Learning on High Dimensional Shape Data from Subcortical Brain Surfaces: A Comparison of Feature Selection and Classification Methods

Recently, high-dimensional shape data (HDSD) has been demonstrated to be informative in describing subcortical brain morphometry in several disorders. While HDSD may serve as a biomarker of disease, its high dimensionality may require careful treatment in its application to machine learning. Here, we compare several possible approaches for feature selection and pattern classification using HDSD. We explore the efficacy of three candidate feature selection (FS) methods: Guided Random Forest (GRF), LASSO and no feature selection (NFS). Each feature set was applied to three classifiers: Random Forest (RF), Support Vector Machines (SVM) and Naïve Bayes (NB). Each model was cross-validated using two diagnostic contrasts: Alzheimer’s Disease and mild cognitive impairment; each relative to matched controls. GRF and NFS outperformed LASSO as FS methods and were comparably competitive. NB underperformed relative to RF and SVM, which were comparable in performance. Our results advocate the NFS-RF approach for its speed, simplicity and interpretability.

Benjamin S. C. Wade, Shantanu H. Joshi, Boris A. Gutman, Paul M. Thompson
Node-Based Gaussian Graphical Model for Identifying Discriminative Brain Regions from Connectivity Graphs

Although the bulk of our knowledge of brain function is organized around brain regions, current methods for comparing connectivity graphs largely take an edge-based approach with the aim of identifying discriminative connections. In this paper, we explore a node-based Gaussian Graphical Model (NBGGM) that facilitates identification of the brain regions contributing to the connectivity differences seen between a pair of graphs. To enable group analysis, we propose an extension of NBGGM via incorporation of stability selection. We evaluate NBGGM on two functional magnetic resonance imaging (fMRI) datasets pertaining to within- and between-group studies. We show that NBGGM more consistently selects the same brain regions over random data splits than node-based graph measures. Importantly, the regions found by NBGGM correspond well to those known to be involved in the investigated conditions.

Bernard Ng, Anna-Clare Milazzo, Andre Altmann
BundleMAP: Anatomically Localized Features from dMRI for Detection of Disease

We present BundleMAP, a novel method for extracting features from diffusion MRI (dMRI), which can be used to detect disease with supervised classification. BundleMAP uses manifold learning to aggregate measurements over localized segments of nerve fiber bundles, which are natural anatomical units in this data. We obtain a fully integrated machine learning pipeline by combining this idea with mechanisms for outlier removal and feature selection. We demonstrate that it increases accuracy on a clinical dataset for which classification results have been reported previously, and that it pinpoints the anatomical locations relevant to the classification.

Mohammad Khatami, Tobias Schmidt-Wilcke, Pia C. Sundgren, Amin Abbasloo, Bernhard Schölkopf, Thomas Schultz
FADR: Functional-Anatomical Discriminative Regions for Rest fMRI Characterization

Resting state fMRI is a powerful method of functional brain imaging, which can reveal information about functional connectivity between regions during rest. In this paper, we present a novel method, called Functional-Anatomical Discriminative Regions (FADR), for selecting a discriminative subset of functional-anatomical regions of the brain in order to characterize functional connectivity abnormalities in mental disorders. FADR integrates Independent Component Analysis with a sparse feature selection strategy, namely Elastic Net, in a supervised framework to extract a new sparse representation. In particular, ICA is used for obtaining group Resting State Networks, and functional information is extracted from the subject-specific spatial maps. Anatomical information is incorporated to localize the discriminative regions. Thus, functional-anatomical information is combined in the new descriptor, which characterizes areas of different networks and carries discriminative power. Experimental results on the public ADHD-200 database validate the method, showing that it can automatically extract discriminative areas and extend results from previous studies. The classification ability is evaluated, showing that our method performs better than the average of the teams in the ADHD-200 Global Competition while at the same time giving relevant information about the disease by selecting the most discriminative regions.

Marta Nuñez-Garcia, Sonja Simpraga, Maria Angeles Jurado, Maite Garolera, Roser Pueyo, Laura Igual
Craniomaxillofacial Deformity Correction via Sparse Representation in Coherent Space

Orthognathic surgery is a common treatment for patients with craniomaxillofacial (CMF) deformity. For orthognathic surgical planning, it is critical to have a patient-specific jaw reference model as guidance. One way is to estimate a normal jaw shape for the patient, by first searching for a normal subject with a similar midface and then borrowing his/her (normal) jaw shape as reference. Intuitively, we can search for multiple normal subjects with similar midfaces and then linearly combine them into the final reference. The respective coefficients for linear combination can be estimated, e.g., by sparse representation of the patient's midface by the midfaces of all training normal subjects. However, this approach implicitly assumes that the representation of midface shapes is strongly correlated with the representation of jaw shapes, which is unfortunately difficult to meet in practice due to the generally different data distributions of midface and jaw shapes. To address this limitation, we propose to estimate the patient-specific jaw reference model in a coherent space. Specifically, we first employ canonical correlation analysis (CCA) to map the midface and jaw landmarks of training normal subjects into a coherent space, in which their correlation is maximized. Then, in the coherent space, the mapped midface landmarks of the patient can be sparsely represented by the mapped midface landmarks of the training normal subjects. The learned sparse coefficients can then be used to combine the jaw landmarks of the training normal subjects to estimate the normal jaw landmarks for the patient and build a normal jaw shape reference model. Moreover, we also iteratively maximize the correlation between the midface and jaw shapes in the new coherent space with a multi-layer mapping and refinement (MMR) process. Experimental results on real clinical data show that the proposed method can more accurately reconstruct the normal jaw shape for a patient than the competing methods.

Zuoyong Li, Le An, Jun Zhang, Li Wang, James J. Xia, Dinggang Shen
Nonlinear Graph Fusion for Multi-modal Classification of Alzheimer’s Disease

Recent studies have demonstrated that biomarkers from multiple modalities contain complementary information for the diagnosis of Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI). In order to fuse data from multiple modalities, most previous approaches calculate a mixed kernel or a similarity matrix by linearly combining kernels or similarities from multiple modalities. However, the complementary information from multi-modal data is not necessarily linearly related. In addition, this linear combination is also sensitive to the weights assigned to each modality. In this paper, we propose a nonlinear graph fusion method to efficiently exploit the complementarity in the multi-modal data for the classification of AD. Specifically, a graph is first constructed for each modality individually. Afterwards, a single unified graph is obtained via a nonlinear combination of the graphs in an iterative cross diffusion process. Using the unified graphs, we achieved classification accuracies of 91.8% between AD subjects and normal controls (NC), 79.5% between MCI subjects and NC, and 60.2% in a three-way classification, which are competitive with state-of-the-art results.

Tong Tong, Katherine Gray, Qinquan Gao, Liang Chen, Daniel Rueckert
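The iterative cross diffusion can be sketched with two toy similarity matrices. This is a schematic in the spirit of similarity-network-fusion-style cross diffusion, with assumed normalisation and averaging choices; it is not the authors' exact update rule.

```python
import numpy as np

# Cross-diffusion fusion of two modality similarity graphs: each
# modality's status matrix is repeatedly diffused through the other
# modality's row-normalised graph, and the fused graph is the average
# of the two diffused matrices.

def row_normalise(S):
    return S / S.sum(axis=1, keepdims=True)

def cross_diffuse(S1, S2, n_iter=20):
    P1, P2 = row_normalise(S1), row_normalise(S2)
    Q1, Q2 = P1.copy(), P2.copy()
    for _ in range(n_iter):
        # each status matrix is updated using the OTHER modality
        Q1, Q2 = P1 @ Q2 @ P1.T, P2 @ Q1 @ P2.T
    return (Q1 + Q2) / 2

rng = np.random.default_rng(0)
A = rng.random((4, 4)); S1 = A + A.T + np.eye(4)   # symmetric, positive
B = rng.random((4, 4)); S2 = B + B.T + np.eye(4)
fused = cross_diffuse(S1, S2)
```

Because each update averages entries of the other status matrix with weights summing to one, the iteration stays bounded while letting the two modalities reinforce shared structure.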
HEp-2 Staining Pattern Recognition Using Stacked Fisher Network for Encoding Weber Local Descriptor

This study addresses the recognition problem of the HEp-2 cell using indirect immunofluorescence (IIF) image analysis, which can indicate the presence of autoimmune diseases by finding antibodies in the patient serum. Generally, the method used for IIF analysis remains subjective and depends too heavily on the experience and expertise of the physician. This study aims to explore an automatic HEp-2 cell recognition system, in which the extraction of highly discriminative visual features plays a key role. To this end, our main contributions include: (1) a transformed excitation domain instead of the raw image domain, based on the fact that human perception of a pattern depends not only on the absolute intensity of the stimulus but also on the relative variance of the stimulus; (2) a simple but robust micro-texton without any quantization in the excitation domain, called the Weber local descriptor (WLD); (3) a data-driven coding strategy with a parametric probability process, and the extraction of not only low- but also high-order statistics for image representation, called the Fisher vector; (4) the stacking of the Fisher network into a deep learning framework for more discriminative features. Experiments using the open HEp-2 cell dataset released for the ICIP 2013 contest validate that the proposed strategy achieves much better performance than state-of-the-art approaches, and that the achieved recognition error rate is even significantly below the observed intra-laboratory variability.

Xian-Hua Han, Yen-Wei Chen, Gang Xu
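The Weber-inspired excitation transform in contribution (1) can be sketched for one component of the descriptor. The 3x3 neighbourhood and the handling of zero-intensity pixels are illustrative assumptions; the full WLD also includes an orientation component not shown here.

```python
import math

# Differential excitation of the Weber local descriptor: for each
# pixel, the intensity change of its 8-neighbourhood RELATIVE to the
# pixel itself, compressed by arctan so extreme ratios stay bounded.

def differential_excitation(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            diff = sum(img[y + dy][x + dx] - c
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                       if (dy, dx) != (0, 0))
            # relative (Weber) stimulus, arctan-compressed
            out[y][x] = math.atan(diff / c) if c else math.pi / 2
    return out

img = [[10, 10, 10],
       [10, 50, 10],
       [10, 10, 10]]
# centre pixel: neighbours sum to 80, so diff = 80 - 8*50 = -320
print(differential_excitation(img)[1][1])
```

Dividing by the centre intensity is what makes the response depend on relative rather than absolute change, matching the Weber's-law motivation in the abstract.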
Supervoxel Classification Forests for Estimating Pairwise Image Correspondences

This paper proposes a general method for establishing pairwise correspondences, which is a fundamental problem in image analysis. The method consists of over-segmenting a pair of images into supervoxels. A forest classifier is then trained on one of the images, the source, by using supervoxel indices as voxelwise class labels. Applying the forest to the other image, the target, yields a supervoxel labelling which is then regularized using majority voting within the boundaries of the target's supervoxels. This yields semi-dense correspondences in a fully automatic, efficient and robust manner. The advantage of our approach is that no prior information or manual annotations are required, making it suitable as a general initialisation component for various medical imaging tasks that require coarse correspondences, such as atlas/patch-based segmentation, registration, and atlas construction. Our approach is evaluated on a set of 150 abdominal CT images. In this dataset we use manual organ segmentations for quantitative evaluation. In particular, the quality of the correspondences is determined in a label propagation setting. Comparison to other state-of-the-art methods demonstrates the potential of supervoxel classification forests for estimating image correspondences.

Fahdi Kanavati, Tong Tong, Kazunari Misawa, Michitaka Fujiwara, Kensaku Mori, Daniel Rueckert, Ben Glocker
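The majority-voting regularisation step can be shown in isolation. The flat-list data layout is an assumption for illustration; in the paper the labels come from a forest applied to 3D supervoxels.

```python
from collections import Counter

# Regularising a voxelwise labelling by majority voting inside each
# target supervoxel: every voxel in a supervoxel is reassigned the
# label that the forest predicted most often within that supervoxel.

def regularise(voxel_labels, supervoxel_of):
    """voxel_labels[i]: forest-predicted source supervoxel for voxel i;
    supervoxel_of[i]: the target supervoxel containing voxel i."""
    votes = {}
    for lbl, sv in zip(voxel_labels, supervoxel_of):
        votes.setdefault(sv, Counter())[lbl] += 1
    winner = {sv: c.most_common(1)[0][0] for sv, c in votes.items()}
    return [winner[sv] for sv in supervoxel_of]

labels = ["a", "a", "b", "c", "c", "c"]
svs    = [ 0,   0,   0,   1,   1,   1 ]
print(regularise(labels, svs))   # → ['a', 'a', 'a', 'c', 'c', 'c']
```

The lone "b" prediction inside supervoxel 0 is overruled, which is exactly the noise-suppression role this step plays in the correspondence pipeline.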
Non-rigid Free-Form 2D-3D Registration Using Statistical Deformation Model

This paper presents a non-rigid free-form 2D-3D registration approach using a statistical deformation model (SDM). In our approach, the SDM is first constructed from a set of training data using a non-rigid registration algorithm based on B-spline free-form deformation to encode a priori information about the underlying anatomy. A novel intensity-based non-rigid 2D-3D registration algorithm is then presented to iteratively fit the 3D B-spline-based SDM to the 2D X-ray images of an unseen subject, which requires a computationally expensive inversion of the instantiated deformation in each iteration. In this paper, we propose to solve this challenge with a fast B-spline pseudo-inversion algorithm implemented on the graphics processing unit (GPU). Experiments conducted on C-arm and X-ray images of cadaveric femurs demonstrate the efficacy of the presented approach.

Guoyan Zheng, Weimin Yu
Learning and Combining Image Similarities for Neonatal Brain Population Studies

The characterization of neurodevelopment is challenging due to the complex structural changes of the brain in early childhood. To analyze the changes in a population across time and to relate them to clinical information, manifold learning techniques can be applied. The neighborhood definition used for constructing manifold representations of the population is crucial for preserving the similarity structure in the embedding and is highly application dependent. It has been shown that the combination of several notions of similarity and features can improve the new representation. However, how to combine and weight different similarities and features is non-trivial. In this work, we propose to learn the neighborhood structure and similarity measure used for manifold learning through Neighborhood Approximation Forests (NAFs). The recently proposed NAFs learn a neighborhood structure in a dataset based on a user-defined distance. A characterization of image similarity using NAFs enables us to construct manifold representations based on a previously defined criterion to improve predictions regarding structural and clinical information. In particular, NAFs can be used naturally to combine the affinities learned from multiple distances in a joint manifold towards a more meaningful representation and an improved characterization of the resulting embedding. We demonstrate the utility of NAFs in manifold learning on a population of preterm and term-born neonates for classification regarding structural volume and clinical information.

Veronika A. Zimmer, Ben Glocker, Paul Aljabar, Serena J. Counsell, Mary A. Rutherford, A. David Edwards, Jo V. Hajnal, Miguel Ángel González Ballester, Daniel Rueckert, Gemma Piella
Deep Learning, Sparse Coding, and SVM for Melanoma Recognition in Dermoscopy Images

This work presents an approach for melanoma recognition in dermoscopy images that combines deep learning, sparse coding, and support vector machine (SVM) learning algorithms. One of the beneficial aspects of the proposed approach is that unsupervised learning within the domain, and feature transfer from the domain of natural photographs, eliminates the need for annotated data in the target task to learn good features. The applied feature transfer also allows the system to draw analogies between observations in dermoscopic images and observations in the natural world, mimicking the process clinical experts themselves employ to describe patterns in skin lesions. To evaluate the methodology, performance is measured on a dataset obtained from the International Skin Imaging Collaboration, containing 2624 clinical cases of melanoma (334), atypical nevi (144), and benign lesions (2146). The approach is compared to the prior state-of-the-art method on this dataset. Two-fold cross-validation is performed 20 times for evaluation (40 total experiments), and two discrimination tasks are examined: 1) melanoma vs. all non-melanoma lesions, and 2) melanoma vs. atypical lesions only. The presented approach achieves an accuracy of 93.1% (94.9% sensitivity, and 92.8% specificity) for the first task, and 73.9% accuracy (73.8% sensitivity, and 74.3% specificity) for the second task. In comparison, prior state-of-the-art ensemble modeling approaches alone yield 91.2% accuracy (93.0% sensitivity, and 91.0% specificity) for the first task, and 71.5% accuracy (72.7% sensitivity, and 68.9% specificity) for the second. Differences in performance were statistically significant ($$p < 0.05$$), suggesting the proposed approach is an effective improvement over the prior state-of-the-art.

Noel Codella, Junjie Cai, Mani Abedini, Rahil Garnavi, Alan Halpern, John R. Smith
Predicting Standard-Dose PET Image from Low-Dose PET and Multimodal MR Images Using Mapping-Based Sparse Representation

Positron emission tomography (PET) has been widely used in the clinical diagnosis of diseases and disorders. To reduce the risk of radiation exposure, we propose a mapping-based sparse representation (m-SR) framework for the prediction of a standard-dose PET image from its low-dose counterpart and corresponding multimodal magnetic resonance (MR) images. Compared with conventional patch-based SR, our method uses a mapping strategy to ensure that the sparse coefficients estimated from the low-dose PET and multimodal MR images can be directly applied to the prediction of standard-dose PET images. An incremental refinement framework is also proposed to further improve the performance. Finally, a patch selection based dictionary construction method is used to speed up the prediction process. The proposed method has been validated on a real human brain dataset, showing that it works much better than the state-of-the-art method both qualitatively and quantitatively.

Yan Wang, Pei Zhang, Le An, Guangkai Ma, Jiayin Kang, Xi Wu, Jiliu Zhou, David S. Lalush, Weili Lin, Dinggang Shen
Boosting Convolutional Filters with Entropy Sampling for Optic Cup and Disc Image Segmentation from Fundus Images

We propose a novel convolutional neural network (CNN) based method for optic cup and disc segmentation. To reduce computational complexity, an entropy based sampling technique is introduced that gives superior results over uniform sampling. Filters are learned over several layers with the output of previous layers serving as the input to the next layer. A softmax logistic regression classifier is subsequently trained on the output of all learned filters. In several error metrics, the proposed algorithm outperforms existing methods on the public DRISHTI-GS data set.

Julian G. Zilly, Joachim M. Buhmann, Dwarikanath Mahapatra
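The entropy-based sampling idea can be illustrated on a tiny image: score each pixel by the Shannon entropy of its local intensity histogram and keep the top-scoring positions. The 3x3 window and top-k selection are assumptions for illustration, not the paper's exact sampling scheme.

```python
import math
from collections import Counter

# Entropy-based sampling sketch: pixels whose 3x3 neighbourhood has a
# heterogeneous intensity histogram (high Shannon entropy) are kept as
# training samples; flat regions score low and are discarded.

def local_entropy(img, y, x):
    counts = Counter(img[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    n = 9
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def sample_positions(img, k):
    h, w = len(img), len(img[0])
    scored = [((y, x), local_entropy(img, y, x))
              for y in range(1, h - 1) for x in range(1, w - 1)]
    scored.sort(key=lambda t: -t[1])        # highest entropy first
    return [pos for pos, _ in scored[:k]]

img = [[0, 0, 0, 0],
       [0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 0, 0]]
print(sample_positions(img, 1))   # a pixel on the 0/9 boundary
```

Pixels straddling the boundary between the two intensity regions have the most mixed histograms, so they are sampled first; uniform-sampling would waste most samples on the flat background.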
Brain Fiber Clustering Using Non-negative Kernelized Matching Pursuit

We present a kernel dictionary learning method to cluster fiber tracts obtained from diffusion Magnetic Resonance Imaging (dMRI) data. This method extends the kernelized Orthogonal Matching Pursuit (kOMP) model by adding non-negativity constraints on the dictionary and sparse weights, and uses an efficient technique based on non-negative tri-factorization to compute these parameters. Unlike existing fiber clustering approaches, the proposed method allows fibers to be assigned to more than one cluster and does not need to compute an explicit embedding of the fibers. We evaluate the performance of our method on labeled and multi-subject data, using several fiber distance measures, and compare it with state-of-the-art fiber clustering approaches. Our experiments show that the method is more accurate than the ones we compare against, while being robust to the choice of distance measure and number of clusters.

Kuldeep Kumar, Christian Desrosiers, Kaleem Siddiqi
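The effect of a non-negativity constraint on matching pursuit can be shown in its simplest form. This sketch replaces the paper's kernelised tri-factorization with plain dot products on unit-norm atoms; the dictionary and signal are hypothetical.

```python
# Non-negative matching pursuit sketch: greedily pick the atom with the
# largest POSITIVE correlation with the residual and take a
# non-negative step; atoms anti-correlated with the residual are never
# used, unlike in unconstrained matching pursuit.

def nn_matching_pursuit(signal, atoms, n_steps=10):
    residual = list(signal)
    weights = [0.0] * len(atoms)
    for _ in range(n_steps):
        corrs = [sum(r * a for r, a in zip(residual, atom))
                 for atom in atoms]
        best = max(range(len(atoms)), key=lambda i: corrs[i])
        if corrs[best] <= 0:     # non-negativity: no positive match left
            break
        weights[best] += corrs[best]
        residual = [r - corrs[best] * a
                    for r, a in zip(residual, atoms[best])]
    return weights

atoms = [[1.0, 0.0], [0.0, 1.0]]          # orthonormal toy dictionary
print(nn_matching_pursuit([3.0, -2.0], atoms))   # → [3.0, 0.0]
```

The second coordinate of the signal is negatively correlated with its atom, so the constrained pursuit leaves its weight at zero instead of assigning a negative coefficient.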
Automatic Detection of Good/Bad Colonies of iPS Cells Using Local Features

In this paper we propose a method able to automatically detect good/bad colonies of iPS cells using local patches based on densely extracted SIFT features. Different options for local patch classification based on a kernelized novelty detector, a 2-class SVM and a local Bag-of-Features approach are considered. Experimental results on 33 images of iPS cell colonies have shown that excellent accuracy can be achieved by the proposed approach.

Atsuki Masuda, Bisser Raytchev, Takio Kurita, Toru Imamura, Masashi Suzuki, Toru Tamaki, Kazufumi Kaneda
Detecting Abnormal Cell Division Patterns in Early Stage Human Embryo Development

Recently, it has been shown that early division patterns, such as cell division timing biomarkers, are crucial to predict human embryo viability. Precise and accurate measurement of these markers requires cell lineage analysis to identify normal and abnormal division patterns. However, current approaches to early-stage embryo analysis only focus on estimating the number of cells and their locations, thus failing to detect abnormal division patterns and potentially yielding incorrect timing biomarkers. In this work we propose an automated tool that can perform lineage tree analysis up to the 5-cell stage, which is sufficient to accurately compute all the known important biomarkers. To this end, we introduce a CRF-based cell localization framework. We demonstrate the benefits of our approach on a data set of 22 human embryos, resulting in correct identification of all abnormal division patterns in the data set.

Aisha Khan, Stephen Gould, Mathieu Salzmann
Identification of Infants at Risk for Autism Using Multi-parameter Hierarchical White Matter Connectomes

Autism spectrum disorder (ASD) is a group of developmental disorders that cause life-long communication and social deficits. However, ASD can only be diagnosed in children as early as 2 years of age, while early signs may emerge within the first year. White matter (WM) connectivity abnormalities have been documented in the first year of life of ASD subjects. We introduce a novel multi-kernel support vector machine (SVM) framework to identify infants at high risk for ASD at 6 months of age, by utilizing the diffusion parameters derived from a hierarchical set of WM connectomes. Experiments show that the proposed method achieves an accuracy of 76%, in comparison to 70% with the best single connectome. The complementary information extracted from the hierarchical networks enhances the classification performance, with the top discriminative connections consistent with other studies. Our framework provides essential imaging connectomic markers and contributes to the evaluation of ASD risk as early as 6 months.

Yan Jin, Chong-Yaw Wee, Feng Shi, Kim-Han Thung, Pew-Thian Yap, Dinggang Shen, Infant Brain Imaging Study (IBIS) Network
Group-Constrained Laplacian Eigenmaps: Longitudinal AD Biomarker Learning

Longitudinal modeling of biomarkers to assess a subject's risk of developing Alzheimer's disease (AD), or to determine the current state in the disease trajectory, has recently received increased attention. Here, a new method to estimate the time-to-conversion (TTC) of mild cognitive impaired (MCI) subjects to AD from a low-dimensional representation of the data is proposed. This is achieved via a combination of multilevel feature selection followed by a novel formulation of the Laplacian Eigenmaps manifold learning algorithm that allows the incorporation of group constraints. Feature selection is performed using Magnetic Resonance (MR) images that have been aligned at different detail levels to a template. The suggested group constraints are added to the construction of the neighborhood matrix, which is used to calculate the graph Laplacian in the Laplacian Eigenmaps algorithm. The proposed formulation yields relevant improvements for the prediction of the TTC and for the three-way classification (control/MCI/AD) on the ADNI database.

R. Guerrero, C. Ledig, A. Schmidt-Richberg, D. Rueckert
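The group-constrained neighborhood construction can be pictured on toy data. The details below (a k-NN affinity matrix, an additive within-group weight, an unnormalised Laplacian) are assumed for illustration and differ from the paper's exact formulation.

```python
import numpy as np

# Group-constrained Laplacian Eigenmaps sketch: a k-NN affinity matrix
# is augmented so samples sharing a group label get extra affinity,
# then the embedding is read off the smallest non-trivial eigenvectors
# of the graph Laplacian.

def embed(X, groups, k=2, group_weight=1.0, n_dims=1):
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(D[i])[1:k + 1]:      # k nearest neighbours
            W[i, j] = W[j, i] = 1.0
    for i in range(n):
        for j in range(n):
            if i != j and groups[i] == groups[j]:
                W[i, j] += group_weight          # group constraint
    L = np.diag(W.sum(axis=1)) - W               # unnormalised Laplacian
    vals, vecs = np.linalg.eigh(L)               # ascending eigenvalues
    return vecs[:, 1:1 + n_dims]                 # skip the constant vector

X = np.array([[0.0, 0], [0.1, 0], [5.0, 0], [5.1, 0]])
Y = embed(X, groups=[0, 0, 1, 1])
# samples sharing a group end up close together in the embedding
```

The added within-group affinity pulls same-group subjects together in the low-dimensional representation, which is the mechanism the abstract relies on for the TTC prediction.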
Multi-atlas Context Forests for Knee MR Image Segmentation

It is important, yet challenging, to segment bones and cartilages from knee MR images. In this paper, we propose multi-atlas context forests to first segment bones and then segment cartilages. Specifically, for both the bone and cartilage segmentations, we iteratively train sets of random forests, based on training atlas images, to classify the individual voxels. The random forests rely on (1) appearance features computed directly from the images and (2) context features associated with tentative segmentation results generated by the previous layer of random forests in the iterative framework. To extract context features, multiple atlases (with expert segmentations) are first registered with the tentative segmentation result of the subject under consideration. Then, the spatial priors of the anatomical labels of the registered atlases are computed and used to calculate the context features of the subject. Note that these multi-atlas context features are iteratively refined based on the (updated) tentative segmentation result of the subject. As a better segmentation result leads to more accurate registration between the multiple atlases and the subject, the context features become increasingly more useful for the training of subsequent random forests in the iterative framework. As validated by experiments on the SKI10 dataset, our proposed method achieves high segmentation accuracy.

Qin Liu, Qian Wang, Lichi Zhang, Yaozong Gao, Dinggang Shen
Longitudinal Patch-Based Segmentation of Multiple Sclerosis White Matter Lesions

Segmenting $$T_2$$-weighted white matter lesions from longitudinal MR images is essential in understanding the progression of multiple sclerosis. Most lesion segmentation techniques find lesions independently at each time point, even though there are different noise and image contrast variations at each point in the time series. In this paper, we present a patch-based 4D lesion segmentation method that takes advantage of the temporal component of longitudinal data. For each subject with multiple time points, 4D patches are constructed from the $$T_1$$-w and FLAIR scans of all time points. For every 4D patch from a subject, a few relevant matching 4D patches are found from a reference, such that their convex combination reconstructs the subject's 4D patch. Then the corresponding manual segmentation patches of the reference are combined in a similar manner to generate a 4D membership of lesions of the subject patch. We compare our 4D patch-based segmentation with independent 3D voxel-based and patch-based lesion segmentation algorithms. Based on ground truth segmentations from 30 data sets, we show that the mean Dice coefficients between manual and automated segmentations improve after using the 4D approach, compared to two state-of-the-art 3D segmentation algorithms.

Snehashis Roy, Aaron Carass, Jerry L. Prince, Dzung L. Pham
Hierarchical Multi-modal Image Registration by Learning Common Feature Representations

Mutual information (MI) has been widely used for registering images of different modalities. Since most inter-modality registration methods estimate deformations at a local scale while optimizing MI over the entire image, the estimated deformations for certain structures can be dominated by surrounding unrelated structures. Moreover, since each image often contains multiple structures, the intensity correlation between two images can be complex and highly nonlinear, which makes global MI unable to precisely guide local image deformation. To address these issues, we propose a hierarchical inter-modality registration method based on robust feature matching. Specifically, we first select a small set of key points at salient image locations to drive the entire registration. Since the original image features computed from different modalities are often difficult to compare directly, we propose to learn common feature representations by projecting the features from their native spaces to a common space in which the correlations between corresponding features are maximized. Due to the large heterogeneity between the two high-dimensional feature distributions, we employ kernel CCA (Canonical Correlation Analysis) to reveal such nonlinear feature mappings. Our registration method can then take advantage of the learned common features to reliably establish correspondences for key points across modalities by robust feature matching. As more and more key points take part in the registration, our hierarchical feature-based method efficiently estimates the deformation pathway between two inter-modality images in a global-to-local manner. We have applied the proposed method to prostate CT and MR images, as well as infant MR brain images acquired in the first year of life. Experimental results show that our method achieves more accurate registration than other state-of-the-art image registration methods.

Hongkun Ge, Guorong Wu, Li Wang, Yaozong Gao, Dinggang Shen
Semi-automatic Liver Tumor Segmentation in Dynamic Contrast-Enhanced CT Scans Using Random Forests and Supervoxels

Pre-operative locoregional treatments (PLT) delay tumor progression by inducing necrosis in patients with hepatocellular carcinoma (HCC). Toward an efficient evaluation of PLT response, we address the estimation of liver tumor necrosis (TN) from CT scans. The TN rate could soon supplant standard criteria (RECIST, mRECIST, EASL or WHO), since it has recently shown higher correlation with survival rates. To overcome the inter-expert variability induced by visual qualitative assessment, we propose a semi-automatic method that requires only minimal user interaction to segment parenchyma, tumoral active tissue, and necrotic tissue. By combining SLIC supervoxels and random decision forests, it exploits discriminative multi-phase cluster-wise features extracted from registered dynamic contrast-enhanced CT scans. Quantitative assessment against expert ground-truth annotations confirms the benefits of exploiting multi-phase information from semantic regions to accurately segment HCC liver tumors.

Pierre-Henri Conze, François Rousseau, Vincent Noblet, Fabrice Heitz, Riccardo Memeo, Patrick Pessaux
Flexible and Latent Structured Output Learning
Application to Histology

Malignant tumors that contain a high proportion of regions deprived of adequate oxygen supply (hypoxia) in areas supplied by a microvessel (i.e., a microcirculatory supply unit - MCSU) have been shown to present resistance to common cancer treatments. Given the importance of estimating this proportion for improving the clinical prognosis of such treatments, a manual annotation has been proposed, which uses two image modalities of the same histological specimen and produces the number and proportion of MCSUs classified as normoxia (normal oxygenation level), chronic hypoxia (limited diffusion), and acute hypoxia (transient disruptions in perfusion); however, this manual annotation requires an expertise that is generally not available in clinical settings. Therefore, in this paper, we propose a new methodology that automates this annotation. The major challenge is that the training set comprises weakly labeled samples that contain only the number of MCSU types per sample, which means that we do not have the underlying structure of MCSU locations and classifications. Hence, we formulate this problem as latent structured output learning that minimizes a high-order loss function based on the number of MCSU types, where the underlying MCSU structure is flexible in terms of the number of nodes and connections. Using a database of 89 pairs of weakly annotated images (from eight tumors), we show that our methodology produces numbers and proportions of MCSU types that are highly correlated with the manual annotations.

Gustavo Carneiro, Tingying Peng, Christine Bayer, Nassir Navab
Identifying Abnormal Network Alterations Common to Traumatic Brain Injury and Alzheimer’s Disease Patients Using Functional Connectome Data

The objective of this study is to determine whether patients with traumatic brain injury (TBI) have pathological changes in brain network organization similar to those of patients with Alzheimer’s disease (AD), using functional connectome data reconstructed from resting-state fMRI (rsfMRI). To achieve this objective, a novel machine learning technique is proposed that uses a top-down reverse-engineering approach to identify abnormal network alterations in functional connectome data that are common to patients with AD and TBI. In general, if the proposed approach classifies a TBI connectome as AD, this suggests that a common network pathology exists in the connectomes of AD and TBI. The advantage of the proposed machine learning technique is two-fold: 1) existing longitudinal TBI imaging data are not required, and 2) assessing the potential risk of a TBI patient converting to AD later in life does not require a lengthy and potentially expensive longitudinal imaging study. Experiments show that the AD pathology learned by our connectome-based machine learning technique is able to correctly identify TBI patients with 80% accuracy. In summary, this research may lead to early interventions that can dramatically increase the quality of life for TBI patients who may convert to AD.

Davy Vanderweyen, Brent C. Munsell, Jacobo E. Mintzer, Olga Mintzer, Andy Gajadhar, Xun Zhu, Guorong Wu, Jane Joseph, Alzheimer’s Disease Neuroimaging Initiative
Multimodal Multi-label Transfer Learning for Early Diagnosis of Alzheimer’s Disease

Recent machine learning based studies for early Alzheimer’s disease (AD) diagnosis focus on the joint learning of regression and classification tasks. However, most existing methods use data from only a single domain, and thus cannot exploit the useful correlations among data from related domains. Accordingly, in this paper, we consider the joint learning of multi-domain regression and classification tasks with multimodal features for AD diagnosis. Specifically, we propose a novel multimodal multi-label transfer learning framework, which consists of two key components: 1) a multi-domain multi-label feature selection (MDML) model that selects the most informative feature subset from multi-domain data, and 2) multimodal regression and classification methods that predict clinical scores and identify the conversion of mild cognitive impairment (MCI) to AD, respectively. Experimental results on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database show that the proposed method helps improve the performance of both clinical score prediction and disease status identification, compared with state-of-the-art methods.

Bo Cheng, Mingxia Liu, Daoqiang Zhang
Soft-Split Sparse Regression Based Random Forest for Predicting Future Clinical Scores of Alzheimer’s Disease

In this study, we propose a novel sparse-regression-based random forest (RF) to predict future clinical scores of Alzheimer’s disease (AD) from baseline scores and MRI features. To avoid the stair-like decision boundary caused by the axis-aligned split function in the conventional RF, we present a supervised method to construct an oblique split function: sparse regression selects the informative features and transforms the original features into target-like features that are more discriminative, and the oblique split function is then constructed by applying principal component analysis (PCA) to the transformed target-like features. Furthermore, to reduce the negative impact of potential mis-splits induced by the conventional “hard split”, we introduce a “soft split” technique, in which both the left and right child nodes are visited with certain weights for a given test sample. The experimental results show that the sparse-regression-based RF alone improves the prediction performance of the conventional RF, and that further improvement is achieved when the two techniques are combined.
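The “soft split” idea can be illustrated with a toy recursive traversal (my own formulation; the sigmoid routing weight and the hand-built tree are illustrative assumptions, not the paper’s exact model):

```python
import math

def soft_split_predict(node, x, beta=4.0):
    """Aggregate leaf predictions, visiting both children with sigmoid weights."""
    if "value" in node:                                 # leaf node
        return node["value"]
    margin = x[node["feature"]] - node["threshold"]
    w_right = 1.0 / (1.0 + math.exp(-beta * margin))    # soft routing weight
    return ((1.0 - w_right) * soft_split_predict(node["left"], x, beta)
            + w_right * soft_split_predict(node["right"], x, beta))

# A tiny hand-built regression tree splitting on feature 0 at threshold 0.0.
tree = {"feature": 0, "threshold": 0.0,
        "left": {"value": 0.0}, "right": {"value": 1.0}}

p_far_right = soft_split_predict(tree, [2.0])  # far right of the split: near 1.0
p_on_split = soft_split_predict(tree, [0.0])   # on the boundary: exactly 0.5
```

A sample near the threshold blends both children’s predictions, so a marginal mis-split no longer flips the output abruptly, which is exactly the motivation stated above.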

Lei Huang, Yaozong Gao, Yan Jin, Kim-Han Thung, Dinggang Shen
Multi-view Classification for Identification of Alzheimer’s Disease

In this paper, we propose a multi-view learning method using Magnetic Resonance Imaging (MRI) data for Alzheimer’s Disease (AD) diagnosis. Specifically, we extract both Region-Of-Interest (ROI) features and Histograms of Oriented Gradient (HOG) features from each MRI image, and then propose mapping the HOG features onto the space of the ROI features to make them comparable and to impose high intra-class similarity with low inter-class similarity. Finally, both the mapped HOG features and the original ROI features are input to a support vector machine for AD diagnosis. The purpose of mapping the HOG features onto the space of the ROI features is to provide complementary information, so that features from different views are not only comparable (i.e., homogeneous) but also interpretable. For example, ROI features are robust to noise but fail to reflect small or subtle changes, while HOG features are diverse but less robust to noise. The proposed multi-view learning method is designed to learn the transformation between the two spaces and to separate the classes under the supervision of class labels. Experimental results on MRI images from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset show that the proposed multi-view method helps enhance disease status identification performance, outperforming both baseline methods and state-of-the-art methods.
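The mapping of HOG features onto the ROI feature space could, in its simplest linear form, be sketched as below (assumptions mine: a plain least-squares map on synthetic data, without the class-supervision terms the paper adds, and with hypothetical feature dimensions):

```python
import numpy as np

rng = np.random.default_rng(6)
n, d_hog, d_roi = 150, 20, 8
X_hog = rng.normal(size=(n, d_hog))                  # HOG features (hypothetical dims)
P_true = rng.normal(size=(d_hog, d_roi))
X_roi = X_hog @ P_true + 0.01 * rng.normal(size=(n, d_roi))  # paired ROI features

P, *_ = np.linalg.lstsq(X_hog, X_roi, rcond=None)    # learn the linear map
X_hog_mapped = X_hog @ P                             # HOG expressed in ROI space

err = np.linalg.norm(X_hog_mapped - X_roi) / np.linalg.norm(X_roi)
```

Once both views live in the ROI space, they can be concatenated and fed to a single SVM, which is the step the abstract describes.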

Xiaofeng Zhu, Heung-Il Suk, Yonghua Zhu, Kim-Han Thung, Guorong Wu, Dinggang Shen
Clustering Analysis for Semi-supervised Learning Improves Classification Performance of Digital Pathology

Purpose:

Completely labeled datasets of pathology slides are often difficult and time-consuming to obtain. Semi-supervised learning methods are able to learn reliable models from a small number of labeled instances and large quantities of unlabeled data. In this paper, we explore the potential of clustering analysis for a semi-supervised support vector machine (SVM) classifier.

Method:

A clustering analysis method was proposed to find regions of high density prior to finding the decision boundary with a supervised SVM, and was compared with another state-of-the-art semi-supervised technique. Different percentages of labeled instances were used to train supervised and semi-supervised SVM learners on an image dataset generated from 50 whole-mount images (8 patients) of breast specimens. Their cross-validated classification performances were compared using the area under the ROC curve (AUC).

Result:

Our proposed clustering analysis for semi-supervised learning was able to produce a reliable classification model from small amounts of labeled data. Compared with a well-known implementation of the semi-supervised SVM, our method ran much faster and produced better results.
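The general idea of exploiting cluster structure before fitting a supervised SVM can be sketched as follows (a toy reduction, not the authors’ algorithm: labels are propagated from a cluster’s few labeled members to its unlabeled ones, then a standard SVM is trained on the result):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2.0, 0.5, (100, 2)),   # one high-density region
               rng.normal(2.0, 0.5, (100, 2))])   # another high-density region
y = np.array([0] * 100 + [1] * 100)
labeled = np.array([3, 40, 90, 110, 160, 195])    # only six labeled instances

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
pseudo = np.empty(len(y), dtype=int)
for c in range(2):
    members = labeled[km.labels_[labeled] == c]              # labeled points in cluster c
    pseudo[km.labels_ == c] = np.bincount(y[members]).argmax()  # majority label

svm = SVC(kernel="rbf").fit(X, pseudo)            # supervised SVM on pseudo-labels
accuracy = svm.score(X, y)
```

Because the decision boundary is anchored to high-density regions rather than to the handful of labeled points alone, a reliable model can emerge from very little supervision.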

Mohammad Peikari, Judit Zubovits, Gina Clarke, Anne L. Martel
A Composite of Features for Learning-Based Coronary Artery Segmentation on Cardiac CT Angiography

Coronary artery segmentation is important in quantitative coronary angiography. In this work, a novel method is proposed for coronary artery segmentation. It integrates coronary artery features of density, local shape, and global structure into a learning framework. The density feature is the vessel’s relative density estimated by means of Gaussian mixture models, which suppresses inter-individual variation. The local tube shape of the vessel is measured with a 3-dimensional multi-scale Hessian filter, which enhances small vessels. The global structure feature is predicted by support vector regression from the vessel’s spatial position and emphasizes the geometric attributes of the coronary artery tree running across the surface of the heart. The features are fed into a support vector classifier for vessel segmentation. The proposed methodology was tested on ten 3D cardiac computed tomography angiography datasets, obtaining a sensitivity of 81%, a specificity of 99%, and a Dice coefficient of 84%.

Yanling Chi, Weimin Huang, Jiayin Zhou, Liang Zhong, Swee Yaw Tan, Keng Yung Jih Felix, Low Choon Seng Sheon, Ru San Tan
Ensemble Prostate Tumor Classification in H&E Whole Slide Imaging via Stain Normalization and Cell Density Estimation

Classification of prostate tumor regions in digital histology images requires comparable features across datasets. Here we introduce adaptive cell density estimation and apply H&E stain normalization into a supervised classification framework to improve inter-cohort classifier robustness. The framework uses Random Forest feature selection, class-balanced training example subsampling and support vector machine (SVM) classification to predict the presence of high- and low-grade prostate cancer (HG-PCa and LG-PCa) on image tiles. Using annotated whole-slide prostate digital pathology images to train and test on two separate patient cohorts, classification performance, as measured with area under the ROC curve (AUC), was 0.703 for HG-PCa and 0.705 for LG-PCa. These results improve upon previous work and demonstrate the effectiveness of cell-density and stain normalization on classification of prostate digital slides across cohorts.

Michaela Weingant, Hayley M. Reynolds, Annette Haworth, Catherine Mitchell, Scott Williams, Matthew D. DiFranco
Computer-Assisted Diagnosis of Lung Cancer Using Quantitative Topology Features

In this paper, we propose a computer-aided diagnosis and analysis framework for a challenging and important clinical problem in lung cancer, i.e., the differentiation of two subtypes of non-small cell lung cancer (NSCLC). The proposed framework utilizes both local and topological features from histopathology images. To extract local features, a robust cell detection and segmentation method is first adopted to segment each individual cell in the images. Then a set of extensive local features is extracted using efficient geometry and texture descriptors based on the cell detection results. To investigate the effectiveness of topological features, we calculate architectural properties from labeled nuclei centroids. Experimental results from four popular classifiers suggest that the cellular structure is very important and that the topological descriptors are representative markers for distinguishing between the two subtypes of NSCLC.

Jiawen Yao, Dheeraj Ganti, Xin Luo, Guanghua Xiao, Yang Xie, Shirley Yan, Junzhou Huang
Inherent Structure-Guided Multi-view Learning for Alzheimer’s Disease and Mild Cognitive Impairment Classification

Multi-atlas based morphometric pattern analysis has recently been proposed for the automatic diagnosis of Alzheimer’s disease (AD) and its early stage, i.e., mild cognitive impairment (MCI), where multi-view feature representations of subjects are generated using multiple atlases. However, existing multi-atlas based methods usually assume that each class is represented by a specific type of data distribution (i.e., a single cluster), while the underlying distribution of the data is actually a priori unknown. In this paper, we propose an inherent structure-guided multi-view learning (ISML) method for AD/MCI classification. Specifically, we first extract multi-view features for subjects using multiple selected atlases, and cluster the subjects of each original class into several sub-classes (i.e., clusters) in each atlas space. We then encode each subject with a new label vector, by considering both the original class labels and the coding vectors of those sub-classes, followed by a multi-task feature selection model in each of the multi-atlas spaces. Finally, we learn multiple SVM classifiers based on the selected features and fuse them by an ensemble classification method. Experimental results on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database demonstrate that our method achieves better performance than several state-of-the-art methods in AD/MCI classification.
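The sub-class encoding step can be illustrated with a toy sketch (my own reduction of the ISML idea, with k-means standing in for whatever clustering the authors use and a simple one-hot sub-class code):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(loc, 0.3, (30, 5))
               for loc in (-3.0, -1.0, 1.0, 3.0)])   # two latent sub-classes per class
y = np.array([0] * 60 + [1] * 60)                    # original class labels

n_sub = 2
label_vectors = np.zeros((len(y), 2 * n_sub))
for cls in (0, 1):
    idx = np.where(y == cls)[0]
    sub = KMeans(n_clusters=n_sub, n_init=10, random_state=0).fit_predict(X[idx])
    label_vectors[idx, cls * n_sub + sub] = 1.0      # one-hot (class, sub-class) code
```

The new label vectors encode both the original class and the discovered sub-class, so downstream feature selection and classifiers no longer assume a single cluster per class.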

Mingxia Liu, Daoqiang Zhang, Dinggang Shen
Nonlinear Feature Transformation and Deep Fusion for Alzheimer’s Disease Staging Analysis

In this study, we develop a novel nonlinear metric learning method to improve biomarker identification for Alzheimer’s Disease (AD) and Mild Cognitive Impairment (MCI). Formulated under a constrained optimization framework, the proposed method learns a smooth nonlinear feature space transformation that makes the input data points more linearly separable in SVMs. The thin-plate spline (TPS) is chosen as the geometric model due to its remarkable versatility and representation power in accounting for sophisticated deformations. In addition, a deep network based feature fusion strategy through stacked denoising sparse autoencoder (DSAE) is adopted to integrate cross-sectional and longitudinal features estimated from MR brain images. Using the ADNI dataset, we evaluate the effectiveness of the proposed feature transformation and feature fusion strategies and demonstrate the improvements over the state-of-the-art solutions within the same category.

Yani Chen, Bibo Shi, Charles D. Smith, Jundong Liu
Tumor Classification by Deep Polynomial Network and Multiple Kernel Learning on Small Ultrasound Image Dataset

Ultrasound imaging is one of the most common modalities for tumor detection and diagnosis. Deep learning (DL) algorithms generally suffer from the small-sample problem, so traditional texture feature extraction methods are still commonly used for small ultrasound image datasets. Deep polynomial network (DPN) is a recently proposed DL algorithm with excellent feature representation, which has potential for small datasets. However, the simple concatenation of the learned hierarchical features from different layers in a DPN limits its performance. Since the features from different layers in a DPN can be regarded as heterogeneous features, they can be effectively integrated by multiple kernel learning (MKL) methods. In this work, we propose a DPN- and MKL-based feature learning and classification framework (DPN-MKL) for tumor classification on small ultrasound image datasets. The experimental results show that the DPN-MKL algorithm outperforms commonly used DL algorithms for ultrasound-image-based tumor classification on small datasets.
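The layer-wise kernel combination could be sketched as below; this is my own construction, using kernel-target alignment to weight the kernels rather than a full MKL solver, with random features standing in for DPN layer outputs:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n = 120
y = np.array([0] * 60 + [1] * 60)
feats_layer1 = rng.normal(size=(n, 8)) + y[:, None]        # stand-in for one DPN layer
feats_layer2 = rng.normal(size=(n, 4)) + 0.5 * y[:, None]  # stand-in for another layer

def alignment(K, y):
    """Kernel-target alignment: cosine similarity between K and the label kernel."""
    Y = np.outer(2 * y - 1, 2 * y - 1)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

kernels = [rbf_kernel(feats_layer1), rbf_kernel(feats_layer2)]
w = np.array([alignment(K, y) for K in kernels])
w = w / w.sum()                                   # heuristic kernel weights
K_combined = sum(wi * Ki for wi, Ki in zip(w, kernels))

clf = SVC(kernel="precomputed").fit(K_combined, y)
train_acc = clf.score(K_combined, y)
```

Building one kernel per layer and combining them lets heterogeneous layer features contribute according to how informative they are, instead of being flatly concatenated.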

Xiao Liu, Jun Shi, Qi Zhang
Multi-source Information Gain for Random Forest: An Application to CT Image Prediction from MRI Data

Random forest has been widely recognized as one of the most powerful learning-based predictors in the literature, with a broad range of applications in medical imaging, and notable efforts have been made to enhance the algorithm in multiple facets. In this paper, we present an original concept of multi-source information gain that departs from the conventional notion inherent to random forest. We propose characterizing the information gain in the training process by utilizing multiple beneficial sources of information, instead of the sole governing of the prediction targets as conventionally done. We suggest using location and input image patches as secondary sources of information for guiding the splitting process in random forest, and experiment on the challenging task of predicting CT images from MRI data. The experiments are thoroughly analyzed on two datasets, i.e., human brain and prostate, with performance further validated through integration with the auto-context model. Results show that the multi-source information gain concept effectively helps better guide the training process, with consistent improvement in prediction accuracy.
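A toy version of scoring a candidate split with a multi-source gain might look like this (my own formulation: variance reduction on the prediction target is blended with variance reduction on a secondary source, weighted by a hypothetical parameter `alpha`):

```python
import numpy as np

def variance_reduction(values, mask):
    """Drop in variance from splitting `values` by boolean `mask`."""
    left, right = values[mask], values[~mask]
    n = len(values)
    return (values.var()
            - (len(left) / n) * left.var()
            - (len(right) / n) * right.var())

def multi_source_gain(x, target, secondary, threshold, alpha=0.5):
    """Blend gains on the prediction target and a secondary information source."""
    mask = x < threshold
    if mask.all() or not mask.any():   # degenerate split: everything on one side
        return 0.0
    return ((1 - alpha) * variance_reduction(target, mask)
            + alpha * variance_reduction(secondary, mask))

rng = np.random.default_rng(5)
x = rng.uniform(0.0, 1.0, 300)                                 # candidate split feature
target = (x > 0.5).astype(float) + 0.1 * rng.normal(size=300)  # e.g. CT intensity to predict
secondary = x + 0.05 * rng.normal(size=300)                    # e.g. a voxel-location source
gain = multi_source_gain(x, target, secondary, threshold=0.5)
```

Splits are thus rewarded for organizing both the prediction target and the secondary source, rather than the target alone.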

Tri Huynh, Yaozong Gao, Jiayin Kang, Li Wang, Pei Zhang, Dinggang Shen, Alzheimer’s Disease Neuroimaging Initiative (ADNI)
Joint Learning of Multiple Longitudinal Prediction Models by Exploring Internal Relations

Longitudinal prediction of brain disorders such as Alzheimer’s disease (AD) is important for early detection and early intervention. Given baseline imaging and clinical data, it is of great interest to predict the progression of disease for an individual subject, such as the conversion of Mild Cognitive Impairment (MCI) to AD, in future years. Most existing methods predict different clinical scores using different models, or predict multiple scores at different future time points separately; this misses the chance of coordinated learning of multiple prediction models for jointly predicting multiple clinical scores at multiple future time points. In this paper, we propose a novel method for the joint learning of multiple longitudinal prediction models for multiple clinical scores at multiple future time points. First, for each longitudinal prediction model, we explore three important relationships, among training samples, features, and clinical scores, respectively, to enhance its learning. Then, we further introduce an additional relation among the different longitudinal prediction models, allowing them to select a common set of features from the baseline imaging and clinical data, with an $$l_{2,1}$$ sparsity constraint, for their joint training. We evaluate the performance of our joint prediction models on data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database, showing much better performance than state-of-the-art methods in predicting multiple clinical scores at multiple future time points.
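The role of an $$l_{2,1}$$ sparsity constraint can be made concrete via its proximal operator, which shrinks whole rows of a weight matrix (one row per feature, one column per prediction task), so that a feature is kept or discarded jointly across all models; the sketch below uses my own notation, not the paper’s:

```python
import numpy as np

def prox_l21(W, tau):
    """Proximal operator of tau * sum_i ||W_i||_2: row-wise soft-thresholding."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return scale * W

W = np.array([[3.0, 4.0],    # informative feature: row norm 5, kept (shrunk)
              [0.1, 0.1]])   # weak feature: row norm ~0.14, zeroed for all tasks
W_sparse = prox_l21(W, tau=1.0)
```

Zeroing entire rows is what enforces a common set of selected features across the jointly trained prediction models.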

Baiying Lei, Siping Chen, Dong Ni, Tianfu Wang
Backmatter
Metadata
Title
Machine Learning in Medical Imaging
Edited by
Luping Zhou
Li Wang
Qian Wang
Yinghuan Shi
Copyright Year
2015
Electronic ISBN
978-3-319-24888-2
Print ISBN
978-3-319-24887-5
DOI
https://doi.org/10.1007/978-3-319-24888-2