
2011 | Book

Machine Learning in Medical Imaging

Second International Workshop, MLMI 2011, Held in Conjunction with MICCAI 2011, Toronto, Canada, September 18, 2011. Proceedings

Edited by: Kenji Suzuki, Fei Wang, Dinggang Shen, Pingkun Yan

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this Book

This book constitutes the refereed proceedings of the Second International Workshop on Machine Learning in Medical Imaging, MLMI 2011, held in conjunction with MICCAI 2011 in Toronto, Canada, in September 2011. The 44 revised full papers presented were carefully reviewed and selected from 74 submissions. The papers cover major trends in machine learning for medical imaging, aiming to identify new cutting-edge techniques and their applications in medical imaging.

Table of Contents

Frontmatter
Learning Statistical Correlation of Prostate Deformations for Fast Registration

This paper presents a novel fast registration method for aligning the planning image onto each treatment image of a patient for adaptive radiation therapy of prostate cancer. Specifically, an online correspondence interpolation method is presented to learn the statistical correlation of the deformations between prostate boundary and non-boundary regions from a population of training patients, as well as from the online-collected treatment images of the same patient. With this learned statistical correlation, the estimated boundary deformations can be used to rapidly predict regional deformations between prostates in the planning and treatment images. In particular, the population-based correlation can be used initially to interpolate the dense correspondences when the number of available treatment images from the current patient is small. As more treatment images are acquired from the current patient, the patient-specific information gradually plays a more important role, reflecting the prostate shape changes of the current patient during treatment. Eventually, once a sufficient number of treatment images have been acquired and segmented from the current patient, only the patient-specific correlation is used to guide the regional correspondence prediction. Experimental results show that the proposed method achieves much faster registration while maintaining accuracy comparable to the thin-plate spline (TPS) based interpolation approach.

Yonghong Shi, Shu Liao, Dinggang Shen
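The core idea of predicting dense regional deformations from estimated boundary deformations can be illustrated with a simple regression stand-in. The sketch below uses ridge regression in place of the paper's learned statistical correlation model, and all array names and sizes are placeholders.

    import numpy as np
    from sklearn.linear_model import Ridge

    # Placeholder training data: boundary deformation vectors paired with dense
    # regional deformation fields from a population of training cases.
    rng = np.random.default_rng(0)
    B_train = rng.normal(size=(30, 120))    # 30 cases x boundary degrees of freedom
    R_train = rng.normal(size=(30, 3000))   # 30 cases x regional degrees of freedom

    # Learn the boundary-to-region correlation (ridge regression as a stand-in).
    model = Ridge(alpha=1.0).fit(B_train, R_train)

    # At treatment time, a newly estimated boundary deformation yields an
    # immediate prediction of the dense regional deformation.
    b_new = rng.normal(size=(1, 120))
    R_pred = model.predict(b_new)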
Automatic Segmentation of Vertebrae from Radiographs: A Sample-Driven Active Shape Model Approach

Segmentation of vertebral contours is an essential task in the design of automatic tools for vertebral fracture assessment. In this paper, we propose a novel segmentation technique which does not require operator interaction. The proposed technique solves the segmentation problem in a hierarchical manner. In the first phase, a coarse estimate of the overall spine alignment and the vertebra locations is computed using a shape model sampling scheme. These samples are used to initialize a second phase of active shape model search, under a nonlinear model of vertebra appearance. The search is constrained by a conditional shape model, based on the variability of the coarse spine location estimates. The technique is evaluated on a data set of manually annotated lumbar radiographs. The results compare favorably to previous work in automatic vertebra segmentation, in terms of both segmentation accuracy and failure rate.

Peter Mysling, Kersten Petersen, Mads Nielsen, Martin Lillholm
Computer-Assisted Intramedullary Nailing Using Real-Time Bone Detection in 2D Ultrasound Images

In this paper, we propose a new method for bone surface detection in 2D ultrasound (US) images, and its application in a Computer Assisted Orthopaedic Surgery system to assist the surgeon during the locking of the intramedullary nail in tibia fracture reduction. The method consists of three main steps: first, a vertical gradient is applied to extract potential bone segments from the 2D US images; then, a new shortest-path-based method is used to eliminate all segments that do not belong to the final contour; finally, the contour is closed using least-squares polynomial approximation. A first validation of the method was performed using US images of anterior femoral condyles from 9 healthy volunteers. To assess the accuracy of the method, we compared our results to a manual segmentation performed by an expert. The Misclassification Error (ME) is between 0.10% and 0.26%, and the average computation time was 0.10 seconds per image.

Agnès Masson-Sibut, Amir Nakib, Eric Petit, François Leitner
Multi-Kernel Classification for Integration of Clinical and Imaging Data: Application to Prediction of Cognitive Decline in Older Adults

Diagnosis of neurologic and neuropsychiatric disorders typically involves considerable assessment, including clinical observation, neuroimaging, and biological and neuropsychological measurements. While it is reasonable to expect that integrating neuroimaging data with complementary non-imaging measures is likely to improve early diagnosis on an individual basis, medical image pattern recognition analysis has largely focused on neuroimaging evaluations alone, due to the technical challenges of combining different data types. In this paper, we explore the potential of integrating neuroimaging and clinical information within a pattern classification framework, and propose that the multi-kernel learning (MKL) paradigm may be suitable for building a multimodal classifier of a disorder, as well as for automatic identification of the relevance of each information type. We apply our approach to the problem of detecting cognitive decline in healthy older adults from single-visit evaluations, and show that the performance of a classifier can be improved when neuroimaging and clinical evaluations are used simultaneously within an MKL-based classification framework.

Roman Filipovych, Susan M. Resnick, Christos Davatzikos
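A minimal sketch of the multi-kernel idea, assuming one RBF kernel per data type (imaging and clinical) combined with a fixed mixing weight; a true MKL solver would learn that weight, and all data, kernel parameters and the weight below are illustrative placeholders.

    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X_img = rng.normal(size=(80, 200))   # imaging features (placeholder)
    X_cli = rng.normal(size=(80, 12))    # clinical/neuropsychological measures (placeholder)
    y = rng.integers(0, 2, 80)           # cognitive decline vs. stable (placeholder labels)

    def combined_kernel(Ai, Ac, Bi, Bc, beta=0.6):
        # Convex combination of per-modality kernels; beta would be learned by MKL.
        return beta * rbf_kernel(Ai, Bi, gamma=0.01) + (1 - beta) * rbf_kernel(Ac, Bc, gamma=0.1)

    K = combined_kernel(X_img, X_cli, X_img, X_cli)
    clf = SVC(kernel="precomputed").fit(K, y)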
Automated Selection of Standardized Planes from Ultrasound Volume

The search for the standardized planes in a 3D ultrasound volume is a hard and time consuming process even for expert physicians. A scheme for finding the standardized planes would be beneficial in advancing the use of volumetric ultrasound for clinical diagnosis. In this paper, we propose a new method to automatically select the standard plane from the fetal ultrasound volume for the application of fetal biometry measurement. To our knowledge, this is the first study in the fetal ultrasound domain. The method is based on the AdaBoost learning algorithm and has been evaluated on a set of 30 volumes. The experimental results are promising with a recall rate of 91.29%. We believe this will increase the accuracy and efficiency in patient monitoring and care management in obstetrics, specifically in detecting growth restricted fetuses.

Bahbibi Rahmatullah, Aris Papageorghiou, J. Alison Noble
Maximum Likelihood and James-Stein Edge Estimators for Left Ventricle Tracking in 3D Echocardiography

Accurate and consistent detection of endocardial borders in 3D echocardiography is a challenging task. Part of the reason for this is that the trabeculated structure of the endocardial boundary leads to alternating edge characteristics that vary over a cardiac cycle. The maximum gradient (MG), step criterion (STEP) and max flow/min cut (MFMC) edge detectors have previously been applied for the detection of endocardial edges. In this paper, we combine the responses of these edge detectors based on their confidences using maximum likelihood (MLE) and James-Stein (JS) estimators. We also present a method for utilizing the confidence-based estimates as measurements in a Kalman filter based left ventricle (LV) tracking framework. The effectiveness of the introduced methods is validated via comparative analyses among the MG, STEP, MFMC, MLE and JS.

Engin Dikici, Fredrik Orderud
A Locally Deformable Statistical Shape Model

Statistical shape models are one of the most powerful methods in medical image segmentation problems. However, if the task is to segment complex structures, they are often too constrained to capture the full amount of anatomical variation. This is due to the fact that the number of training samples is limited in general, because generating hand-segmented reference data is a tedious and time-consuming task. To circumvent this problem, we present a Locally Deformable Statistical Shape Model that is able to segment complex structures with only a few training samples at hand. This is achieved by allowing a unique solution in each contour point. Unlike previous approaches, trying to tackle this problem by partitioning the statistical model, we do not need predefined segments. Smoothness constraints ensure that the local solution is restricted to the space of feasible shapes. Very promising results are obtained when we compare our new approach to a global fitting approach.

Carsten Last, Simon Winkelbach, Friedrich M. Wahl, Klaus W. G. Eichhorn, Friedrich Bootz
Monte Carlo Expectation Maximization with Hidden Markov Models to Detect Functional Networks in Resting-State fMRI

We propose a novel Bayesian framework for partitioning the cortex into distinct functional networks based on resting-state fMRI. Spatial coherence within the network clusters is modeled using a hidden Markov random field prior. The normalized time-series data, which lie on a high-dimensional sphere, are modeled with a mixture of von Mises-Fisher distributions. To estimate the parameters of this model, we maximize the posterior using a Monte Carlo expectation maximization (MCEM) algorithm in which the intractable expectation over all possible labelings is approximated using Monte Carlo integration. We show that MCEM solutions on synthetic data are superior to those computed using a mode approximation of the expectation step. Finally, we demonstrate on real fMRI data that our method is able to identify visual, motor, salience, and default mode networks with considerable consistency between subjects.

Wei Liu, Suyash P. Awate, Jeffrey S. Anderson, Deborah Yurgelun-Todd, P. Thomas Fletcher
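For intuition, the sketch below evaluates a mixture of von Mises-Fisher densities on unit-norm time series and computes soft cluster responsibilities; the hidden Markov random field prior and the Monte Carlo E-step of the paper are not reproduced, and all parameter values are placeholders.

    import numpy as np
    from scipy.special import ive  # exponentially scaled modified Bessel function

    def vmf_logpdf(X, mu, kappa):
        # Log-density of a von Mises-Fisher distribution on the unit sphere.
        # X: (n, p) unit-norm rows, mu: (p,) unit-norm mean direction, kappa > 0.
        p = X.shape[1]
        log_norm = ((p / 2 - 1) * np.log(kappa) - (p / 2) * np.log(2 * np.pi)
                    - (np.log(ive(p / 2 - 1, kappa)) + kappa))
        return log_norm + kappa * (X @ mu)

    def responsibilities(X, weights, mus, kappas):
        # Soft assignment of each time series to mixture components (E-step quantity).
        logs = np.stack([np.log(w) + vmf_logpdf(X, m, k)
                         for w, m, k in zip(weights, mus, kappas)], axis=1)
        logs -= logs.max(axis=1, keepdims=True)
        r = np.exp(logs)
        return r / r.sum(axis=1, keepdims=True)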
DCE-MRI Analysis Using Sparse Adaptive Representations

Dynamic contrast-enhanced MRI (DCE-MRI) plays an important role as an imaging method for the diagnosis and evaluation of several diseases. Indeed, clinically relevant, per-voxel quantitative information may be extracted through the analysis of the enhanced MR signal. This paper presents a method for the automated analysis of DCE-MRI data that works by decomposing the enhancement curves as sparse linear combinations of elementary curves learned without supervision from the data. Experimental results show that performance in denoising and unsupervised segmentation improves over parametric methods.

Gabriele Chiusano, Alessandra Staglianò, Curzio Basso, Alessandro Verri
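A minimal sketch of decomposing enhancement curves as sparse combinations of dictionary atoms learned from the data; the dictionary size, sparsity penalty and the random curves are placeholders, not the paper's settings.

    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    curves = np.random.rand(500, 40)   # per-voxel enhancement curves (placeholder)
    dl = DictionaryLearning(n_components=8, alpha=1.0, transform_algorithm="lasso_lars")
    codes = dl.fit_transform(curves)   # sparse coefficients, one row per voxel
    denoised = codes @ dl.components_  # reconstruction doubles as a denoised curve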
Learning Optical Flow Propagation Strategies Using Random Forests for Fast Segmentation in Dynamic 2D & 3D Echocardiography

Fast segmentation of the left ventricular (LV) myocardium in 3D+time echocardiographic sequences can provide quantitative data of heart function that can aid in clinical diagnosis and disease assessment. We present an algorithm for automatic segmentation of the LV myocardium in 2D and 3D sequences which employs learning optical flow (OF) strategies. OF motion estimation is used to propagate single-frame segmentation results of the Random Forest classifier from one frame to the next. The best strategy for propagating between frames is learned on a per-frame basis. We demonstrate that our algorithm is fast and accurate. We also show that OF propagation increases the performance of the method with respect to the static baseline procedure, and that learning the best OF propagation strategy performs better than single-strategy OF propagation.

Michael Verhoek, Mohammad Yaqub, John McManigle, J. Alison Noble
A Non-rigid Registration Framework That Accommodates Pathology Detection

Image-guided external beam radiation therapy (EBRT) for the treatment of cancer enables accurate placement of radiation dose to the cancerous region. However, the deformation of soft tissue during the course of treatment, such as in cervical cancer, presents significant challenges. Furthermore, the presence of pathologies such as tumors may violate registration constraints and cause registration errors. In this paper, we present a novel MAP framework that performs nonrigid registration and pathology detection simultaneously. The matching problem here is defined as a mixture of two distributions which statistically describe image gray-level variations for two pixel classes (i.e., tumor class and normal tissue class). The determinant of the transformation's Jacobian is also constrained, which guarantees the transformation to be smooth and simulates the tumor regression process. We perform experiments on 30 patient MR data sets to validate our approach. Quantitative analysis of the experimental results illustrates the promising performance of this method in comparison to previous techniques.

Chao Lu, James S. Duncan
Segmentation Based Features for Lymph Node Detection from 3-D Chest CT

Lymph nodes routinely need to be considered in clinical practice in all kinds of oncological examinations. Automatic detection of lymph nodes from chest CT data is, however, challenging because of low contrast and clutter. Sliding window detectors using traditional features are easily confused by similar structures like muscles and vessels. It has recently been proposed to combine segmentation and detection to improve detection performance. Features extracted from a segmentation that is initialized with a detection candidate can be used to train a classifier that decides whether the detection is a true or false positive. In this paper, the graph cuts method is adapted to the problem of lymph node segmentation. We propose a setting that requires only a single positive seed and at the same time solves the small cut problem of graph cuts. Furthermore, we propose a feature set that is extracted from the candidate segmentation. A classifier is trained on this feature set and used to reject false alarms. Cross validation on 54 CT datasets showed that the proposed system reaches a detection rate of 60.9% with only 6.1 false alarms per volume image, which is better than the current state of the art in mediastinal lymph node detection.

Johannes Feulner, S. Kevin Zhou, Matthias Hammon, Joachim Hornegger, Dorin Comaniciu
Segmenting Hippocampus from 7.0 Tesla MR Images by Combining Multiple Atlases and Auto-Context Models

In the investigation of neurological diseases, accurate measurement of the hippocampus is very important for differentiating inter-subject differences and subtle longitudinal changes. Although many automatic segmentation methods have been developed, their performance can be limited by the poor image contrast of the hippocampus in MR images acquired from either 1.5T or 3.0T scanners. Recently, the emergence of 7.0T scanners has shed new light on the study of the hippocampus by providing much higher contrast and resolution. However, automatic segmentation algorithms for 7.0T images still lag behind the development of high-resolution imaging techniques. In this paper, we present a learning-based algorithm for segmenting hippocampi from 7.0T images, using a multi-atlas technique and auto-context models. Specifically, for each atlas (along with other aligned atlases), an Auto-Context Model (ACM) is used to iteratively construct a sequence of classifiers by integrating both image appearance and context features in the local patch. Since 7.0T images contain abundant texture information, more advanced texture features are also extracted and incorporated into the ACM during the training stage. With the use of multiple atlases, multiple sequences of ACM-based classifiers are trained, one in each atlas' space. Thus, in the application stage, a new image is segmented by first applying the sequence of learned classifiers of each atlas to it, and then fusing the multiple segmentation results from the multiple atlases (or multiple sequences of classifiers) with a label-fusion technique. Experimental results on six 7.0T images with a voxel size of 0.35×0.35×0.35 mm³ show much better results obtained by our method than by the method using only the conventional auto-context model.

Minjeong Kim, Guorong Wu, Wei Li, Li Wang, Young-Don Son, Zang-Hee Cho, Dinggang Shen
Texture Analysis by a PLS Based Method for Combined Feature Extraction and Selection

We present a methodology that applies machine-learning techniques to guide partial least squares regression (PLS) for feature extraction combined with feature selection. The developed methodology was evaluated in a framework that supports the diagnosis of knee osteoarthritis (OA). Initially, a set of texture features is extracted from the MRI scans. These features are used for segmenting the region-of-interest and as input to the PLS regression. Our method uses the PLS output to rank the features and implements a learning step that iteratively selects the most important features and applies PLS to transform the new feature space. The selected bone texture features are used as input to a linear classifier trained to separate the subjects into healthy and OA groups. The developed algorithm selected 18% of the initial feature set and reached a generalization area under the ROC curve of 0.93, which is higher than established markers known to relate to OA diagnosis.

Joselene Marques, Erik Dam
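The rank-and-prune loop around PLS can be sketched as below; the number of components, the pruning schedule and the synthetic data are illustrative assumptions, not the paper's configuration.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 60))             # texture features (placeholder)
    y = rng.integers(0, 2, 100).astype(float)  # healthy vs. OA (placeholder labels)

    selected = np.arange(X.shape[1])
    for _ in range(3):  # iterate: fit PLS, rank features by coefficient magnitude, keep the best half
        pls = PLSRegression(n_components=2).fit(X[:, selected], y)
        scores = np.abs(np.asarray(pls.coef_)).ravel()
        selected = selected[np.argsort(scores)[-max(5, len(selected) // 2):]]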
An Effective Supervised Framework for Retinal Blood Vessel Segmentation Using Local Standardisation and Bagging

In this paper, we present a supervised framework for extracting blood vessels from retinal images. The local standardisation of the green channel of the retinal image and the Gabor filter responses at four different scales are used as features for pixel classification. A Bayesian classifier is used within a bagging framework to classify each image pixel as vessel or background. A post-processing method is also proposed to correct central reflex artifacts and improve segmentation accuracy. On the public DRIVE database, our method achieves an accuracy of 0.9491, which is higher than existing methods. More importantly, visual inspection of the segmentation results shows that our method provides two important improvements in segmentation quality: vessels are well separated and central reflex artifacts are effectively removed. These are important factors that affect the accuracy of all subsequent vascular analysis.

Uyen T. V. Nguyen, Alauddin Bhuiyan, Kotagiri Ramamohanarao, Laurence A. F. Park
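A rough per-pixel feature and classifier sketch, assuming four Gabor frequencies and a globally standardised green channel in place of the paper's local standardisation; the frequencies and the bagging size are made-up values.

    import numpy as np
    from skimage.filters import gabor
    from sklearn.naive_bayes import GaussianNB
    from sklearn.ensemble import BaggingClassifier

    def pixel_features(green):
        std_img = (green - green.mean()) / (green.std() + 1e-8)  # crude stand-in for local standardisation
        feats = [std_img]
        for freq in (0.05, 0.1, 0.2, 0.4):                       # four scales (assumed values)
            real, imag = gabor(green, frequency=freq)
            feats.append(np.hypot(real, imag))                   # Gabor magnitude response
        return np.stack([f.ravel() for f in feats], axis=1)

    # X: (n_pixels, 5) features, y: manually labelled vessel/background pixels
    clf = BaggingClassifier(GaussianNB(), n_estimators=10)       # bagged Bayesian classifier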
Automated Identification of Thoracolumbar Vertebrae Using Orthogonal Matching Pursuit

A reliable detection and definitive labeling of vertebrae can be difficult due to factors such as the limited imaging coverage and various vertebral anomalies. In this paper, we investigate the problem of identifying the last thoracic vertebra and the first lumbar vertebra in CT images, aiming to improve the accuracy of an automatic spine labeling system especially when the field of view is limited in the lower spine region. We present a dictionary-based classification method using a cascade of simultaneous orthogonal matching pursuit (SOMP) classifiers on 2D vertebral regions extracted from the maximum intensity projection (MIP) images. The performance of the proposed method in terms of accuracy and speed has been validated by experimental results on hundreds of CT images collected from various clinical sites.

Tao Wu, Bing Jian, Xiang Sean Zhou
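Sparse-representation classification of a candidate vertebral region can be sketched with plain orthogonal matching pursuit (a single-signal stand-in for the cascaded SOMP classifiers of the paper); the dictionaries and sparsity level are placeholders.

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def classify_by_residual(x, class_dicts, n_nonzero=10):
        # Code the test region against each class dictionary and pick the class
        # whose sparse reconstruction has the smallest residual.
        residuals = []
        for D in class_dicts:  # D: (n_pixels, n_atoms) training regions of one class
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero).fit(D, x)
            residuals.append(np.linalg.norm(x - D @ omp.coef_))
        return int(np.argmin(residuals))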
Segmentation of Skull Base Tumors from MRI Using a Hybrid Support Vector Machine-Based Method

To achieve robust classification performance with a support vector machine (SVM), it is essential to have balanced and representative samples for both positive and negative classes. A novel three-stage hybrid SVM (HSVM) is proposed and applied to the segmentation of skull base tumors. The main idea of the method is to construct an online hybrid support vector classifier (HSVC), a seamless and natural connection of one-class and binary SVMs, via a boosting tool. An initial tumor region is first pre-segmented by a one-class SVC (OSVC). Then the boosting tool is employed to automatically generate negative (non-tumor) samples according to certain criteria. Subsequently, the pre-segmented initial tumor region and the non-tumor samples are used to train a binary SVC (BSVC). Using the trained BSVC, the final tumor lesion is segmented. This method was tested on 13 MR image data sets. Quantitative results suggest that the developed method achieves significantly higher segmentation accuracy than OSVC and BSVC.

Jiayin Zhou, Qi Tian, Vincent Chong, Wei Xiong, Weimin Huang, Zhimin Wang
Spatial Nonparametric Mixed-Effects Model with Spatial-Varying Coefficients for Analysis of Populations

Voxel-wise comparisons have been largely used in the analysis of populations to identify biomarkers for pathologies or disease progression, or to predict a treatment outcome. On the basis of a good inter-individual spatial alignment, 3D maps are produced, allowing regions where significant differences between groups exist to be localised. However, these techniques have received some criticism as they rely on conditions which are not always met. Firstly, the results may be affected by misregistrations; secondly, the statistics behind the models assume normally distributed data; finally, because of the size of the images, some strategy must be used to control the rate of false detection. In this paper, we propose a spatial (3D) nonparametric mixed-effects model for population analysis. It overcomes some of the issues of classical voxel-based approaches, namely robustness to false positive rates, misregistrations and large variances between groups. Examples on numerical phantoms and real clinical data illustrate the feasibility of the approach. An example application within the development of voxel-wise predictive models of rectal toxicity in prostate cancer radiotherapy is presented. Results demonstrate an improved sensitivity and reliability for group analysis compared with standard voxel-wise methods and open the way for potential applications in computational anatomy.

Juan David Ospina, Oscar Acosta, Gaël Dréan, Guillaume Cazoulat, Antoine Simon, Juan Carlos Correa, Pascal Haigron, Renaud de Crevoisier
A Machine Learning Approach to Tongue Motion Analysis in 2D Ultrasound Image Sequences

Analysis of tongue motions as captured in dynamic ultrasound (US) images has been an important tool in speech research. Previous studies generally required semi-automatic tongue segmentation to perform data analysis. In this paper, we adopt a machine learning approach that does not require tongue segmentation. Specifically, we employ advanced normalization procedures to temporally register the US sequences using their corresponding audio files. To explicitly encode motion, we then register the image frames spatio-temporally to compute a set of deformation fields from which we construct velocity-based and spatio-temporal gestural descriptors, where the latter explicitly encode tongue dynamics during speech. Next, making use of the recently proposed Histogram Intersection Kernel, we perform support vector machine classification to evaluate the extracted descriptors against a set of clinical measures. We applied our method to the prediction of speech abnormalities and tongue gestures. Overall, differentiating tongue motion produced by patients with or without speech impediments, on a dataset of 24 US sequences, was achieved with a classification accuracy of 94%. When applied to another dataset of 90 US sequences for two other classification tasks, accuracies were 86% and 84%.

Lisa Tang, Ghassan Hamarneh, Tim Bressmann
Random Forest-Based Manifold Learning for Classification of Imaging Data in Dementia

Neurodegenerative disorders are characterized by changes in multiple biomarkers, which may provide complementary information for diagnosis and prognosis. We present a framework in which proximities derived from random forests are used to learn a low-dimensional manifold from labelled training data and then to infer the clinical labels of test data mapped to this space. The proposed method facilitates the combination of embeddings from multiple datasets, resulting in the generation of a joint embedding that simultaneously encodes information about all the available features. It is possible to combine different types of data without additional processing, and we demonstrate this key feature by application to voxel-based FDG-PET and region-based MR imaging data from the ADNI study. Classification based on the joint embedding coordinates out-performs classification based on either modality alone. Results are impressive compared with other state-of-the-art machine learning techniques applied to multi-modality imaging data.

Katherine R. Gray, Paul Aljabar, Rolf A. Heckemann, Alexander Hammers, Daniel Rueckert
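The random-forest proximity used for manifold learning can be sketched as follows: two samples are "close" when they fall in the same leaves across many trees. Here the proximities are turned into dissimilarities and embedded with metric MDS as a stand-in for the paper's embedding, on synthetic placeholder data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.manifold import MDS

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 50))      # imaging features (placeholder)
    y = rng.integers(0, 2, 120)         # clinical labels (placeholder)

    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    leaves = rf.apply(X)                                            # (n_samples, n_trees) leaf indices
    prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)  # proximity matrix
    dist = np.sqrt(1.0 - prox)                                      # proximity -> dissimilarity
    coords = MDS(n_components=2, dissimilarity="precomputed").fit_transform(dist)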
Probabilistic Graphical Model of SPECT/MRI

The combination of PET and SPECT with MRI is currently an area of active research and will enable new biological and pathological analysis tools for clinical applications and pre-clinical research. Image processing and reconstruction in multi-modal PET/MRI and SPECT/MRI pose new algorithmic and computational challenges. We investigate the use of Probabilistic Graphical Models (PGM) to construct a system model and to factorize the complex joint distribution that arises from the combination of the two imaging systems. A joint generative system model based on finite mixtures is proposed, and the structural properties of the associated PGM are addressed in order to obtain an iterative algorithm for estimation of activity and multi-modal segmentation. In a SPECT/MRI digital phantom study, the proposed algorithm outperforms a well-established method for multi-modal activity estimation in terms of bias/variance characteristics and identification of lesions.

Stefano Pedemonte, Alexandre Bousse, Brian F. Hutton, Simon Arridge, Sebastien Ourselin
Directed Graph Based Image Registration

In this paper, a novel intermediate templates guided image registration algorithm is proposed to achieve accurate registration results with a more appropriate strategy for intermediate template selection. We first demonstrate that registration directions and paths play a key role in the intermediate template guided registration methods. In light of this, a directed graph is built based on the asymmetric distances defined on all ordered image-pairs in the dataset. The allocated directed path can be used to guide the pairwise registration by successively registering the underlying subject towards the template through all intermediate templates on the path. Moreover, for the groupwise registration, a minimum spanning arborescence (MSA) is built with both the template (the root) and the directed paths (from all images to the template) determined simultaneously. Experiments on synthetic and real datasets show that our method can achieve more accurate registration results than both the traditional pairwise registration and the undirected graph based registration methods.

Hongjun Jia, Guorong Wu, Qian Wang, Yaping Wang, Minjeong Kim, Dinggang Shen
Improving the Classification Accuracy of the Classic RF Method by Intelligent Feature Selection and Weighted Voting of Trees with Application to Medical Image Segmentation

We enhance the classic Random Forests framework to segment 3D objects in different 3D medical imaging modalities. More accurate voxel classification is achieved by intelligently selecting "good" features and neglecting irrelevant ones; this also leads to faster training. Moreover, weighting each tree in the forest is proposed to provide an unbiased and more accurate probabilistic decision during the testing stage. Validation is performed on adult brain MRI and 3D fetal femoral ultrasound datasets. Comparisons between the classic Random Forests and the proposed method show significant improvement in segmentation accuracy. We also compare our work with other techniques to show its applicability.

Mohammad Yaqub, M. Kassim Javaid, Cyrus Cooper, J. Alison Noble
Network-Based Classification Using Cortical Thickness of AD Patients

In this article we propose a framework for establishing individual structural networks. An individual network is established for each subject using the mean cortical thickness of cortical regions as defined by the AAL atlas. Specifically, for each subject, we compute a similarity matrix of mean cortical thickness between pairs of cortical regions, which we refer to hereafter as the individual's network. Such individual networks can be used for classification. We use a combination of two types of feature selection approaches to search for the most discriminative edges. These edges serve as the input to a support vector machine (SVM) for classification. We demonstrate the utility of the proposed method by comparison with classification of the raw cortical thickness data, and of individual networks, using a publicly available dataset. In particular, 83 subjects from the OASIS database were chosen to validate this approach, 39 of whom were diagnosed with either mild cognitive impairment (MCI) or moderate Alzheimer's disease (AD), while the remainder were age-matched controls. While using an SVM on the raw cortical thickness data or on individual networks without hybrid feature selection resulted in classification accuracy of less than or nearly 80%, our approach yielded 90.4% classification accuracy in a leave-one-out analysis.

Dai Dai, Huiguang He, Joshua Vogelstein, Zengguang Hou
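One way to read the "individual network" construction is sketched below: a region-by-region similarity matrix built from mean cortical thickness values, whose upper-triangle edges become classification features. The Gaussian similarity form and its width are assumptions for illustration, not the paper's definition.

    import numpy as np

    def thickness_network(thickness, sigma=0.5):
        # thickness: (n_regions,) mean cortical thickness per AAL region for one subject.
        diff = np.abs(thickness[:, None] - thickness[None, :])
        return np.exp(-diff ** 2 / (2 * sigma ** 2))

    net = thickness_network(np.random.rand(90))   # 90 AAL regions (placeholder values)
    edges = net[np.triu_indices_from(net, k=1)]   # edge features fed to selection + SVM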
Anatomical Regularization on Statistical Manifolds for the Classification of Patients with Alzheimer’s Disease

This paper introduces a continuous framework to spatially regularize support vector machines (SVM) for brain image analysis based on the Fisher metric. We show that, by considering the images as elements of a statistical manifold, one can define a metric that integrates various types of information. Based on this metric, replacing the standard SVM regularization with a Laplace-Beltrami regularization operator allows various types of constraints based on spatial and anatomical information to be integrated into the classifier. The proposed framework is applied to the classification of magnetic resonance (MR) images based on gray matter concentration maps from 137 patients with Alzheimer's disease and 162 elderly controls. The results demonstrate that the proposed classifier generates less-noisy and consequently more interpretable feature maps with no loss of classification performance.

Rémi Cuingnet, Joan Alexis Glaunès, Marie Chupin, Habib Benali, Olivier Colliot
Rapidly Adaptive Cell Detection Using Transfer Learning with a Global Parameter

Recent advances in biomedical imaging have enabled the analysis of many different cell types. Learning-based cell detectors tend to be specific to a particular imaging protocol and cell type. For a new dataset, a tedious re-training process is required. In this paper, we present a novel method of training a cell detector on new datasets with minimal effort. First, we combine the classification rules extracted from existing data with the training samples of new data using transfer learning. Second, a global parameter is incorporated to refine the ranking of the classification rules. We demonstrate that our method achieves the same performance as previous approaches with only 10% of the training effort.

Nhat H. Nguyen, Eric Norris, Mark G. Clemens, Min C. Shin
Automatic Morphological Classification of Lung Cancer Subtypes with Boosting Algorithms for Optimizing Therapy

Patient-targeted therapies have recently been highlighted as important. An important development in the treatment of metastatic non-small cell lung cancer (NSCLC) has been the tailoring of therapy on the basis of histology. A pathology diagnosis of “non-specified NSCLC” is no longer routinely acceptable; an effective approach for classification of adenocarcinoma (AC) and squamous carcinoma (SC) histotypes is needed for optimizing therapy. In this study, we present a robust and objective automatic classification system for real-time classification of AC and SC based on the morphological tissue pattern of H&E images alone, to assist medical experts in the diagnosis of lung cancer. Various original and extended densitometric and Haralick texture features are used to extract image features, and a boosting algorithm is utilized to train the classifier, with an alternating decision tree as the base learner. For evaluation, 369 tissue samples were collected in tissue microarray format, including 97 adenocarcinoma and 272 squamous carcinoma samples. Using 10-fold cross validation, the technique achieved a high accuracy of 92.41%, and we also found that the two boosting algorithms (cw-Boost and AdaBoost.M1) perform consistently well in comparison with other widely adopted machine learning methods, including the support vector machine, neural network, single decision tree and alternating decision tree. This approach offers a robust, objective and rapid procedure for optimized patient-targeted therapies.

Ching-Wei Wang, Cheng-Ping Yu
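A small sketch of GLCM (Haralick-style) texture features plus a boosted classifier; the chosen distances, angles and properties are illustrative, and AdaBoost with a decision stump stands in for the paper's cw-Boost/alternating-decision-tree setup.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier

    def haralick_features(patch):
        # patch: 8-bit grayscale tissue patch
        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        props = ("contrast", "correlation", "energy", "homogeneity")
        return np.array([graycoprops(glcm, p).mean() for p in props])

    # X: per-sample texture features, y: adenocarcinoma vs. squamous carcinoma labels
    clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1), n_estimators=200)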
Hot Spots Conjecture and Its Application to Modeling Tubular Structures

The second eigenfunction of the Laplace-Beltrami operator follows the pattern of the overall shape of an object. This geometric property is well known and used for various applications including mesh processing, feature extraction, manifold learning, data embedding and the minimum linear arrangement problem. Surprisingly, this geometric property has not been mathematically formulated yet. This problem is directly related to the somewhat obscure hot spots conjecture in differential geometry. The aim of the paper is to raise the awareness of this nontrivial issue and formulate the problem more concretely. As an application, we show how the second eigenfunction alone can be used for complex shape modeling of tubular structures such as the human mandible.

Moo K. Chung, Seongho Seo, Nagesh Adluru, Houri K. Vorperian
Fuzzy Statistical Unsupervised Learning Based Total Lesion Metabolic Activity Estimation in Positron Emission Tomography Images

Accurate tumor lesion activity estimation is critical for tumor staging and follow-up studies. Positron emission tomography (PET) successfully images and quantifies lesion metabolic activity. Recently, PET images have been modeled as a fuzzy Gaussian mixture to delineate tumor lesions accurately. Nonetheless, in the course of accurate delineation there is a high chance of underestimating activity because, owing to the limited PET resolution, the reconstructed images suffer from partial volume effects (PVE). In this work, we propose a statistical lesion activity computation (SLAC) approach to robustly estimate the total lesion activity (TLA) directly from the modeled Gaussian partial volume mixtures. To evaluate the proposed method, synthetic lesions were simulated and reconstructed. TLA was estimated with three state-of-the-art PET delineation schemes for comparison. All schemes were evaluated with reference to the ground truth. The experimental results show that SLAC is robust enough for clinical use.

Jose George, Kathleen Vunckx, Sabine Tejpar, Christophe M. Deroose, Johan Nuyts, Dirk Loeckx, Paul Suetens
Predicting Clinical Scores Using Semi-supervised Multimodal Relevance Vector Regression

We present a novel semi-supervised multimodal relevance vector regression (SM-RVR) method for predicting clinical scores of neurological diseases from multimodal brain images, to help evaluate pathological stage and predict future progression of diseases such as Alzheimer's disease (AD). Unlike most existing methods, we predict clinical scores from multimodal (imaging and biological) biomarkers, including MRI, FDG-PET, and CSF. Also, since mild cognitive impairment (MCI) subjects generally have noisier clinical scores than AD and healthy control (HC) subjects, we use only their multimodal data (i.e., MRI, FDG-PET and CSF), not their clinical scores, to train a semi-supervised model for enhancing the estimation of clinical scores for AD and HC subjects. Experimental results on the ADNI dataset validate the efficacy of the proposed method.

Bo Cheng, Daoqiang Zhang, Songcan Chen, Dinggang Shen
Automated Cephalometric Landmark Localization Using Sparse Shape and Appearance Models

In this paper an automated method is presented for the localization of cephalometric landmarks in craniofacial cone-beam computed tomography images. This method makes use of a statistical sparse appearance and shape model obtained from training data. The sparse appearance model captures local image intensity patterns around each landmark. The sparse shape model, on the other hand, is constructed by embedding the landmarks in a graph. The edges of this graph represent pairwise spatial dependencies between landmarks, hence leading to a sparse shape model. The edges connecting different landmarks are defined in an automated way based on the intrinsic topology present in the training data. A maximum a posteriori approach is employed to obtain an energy function. To minimize this energy function, the problem is discretized by considering a finite set of candidate locations for each landmark, leading to a labeling problem. Using a leave-one-out approach on the training data the overall accuracy of the method is assessed. The mean and median error values for all landmarks are equal to 2.41 mm and 1.49 mm, respectively, demonstrating a clear improvement over previously published methods.

Johannes Keustermans, Dirk Smeets, Dirk Vandermeulen, Paul Suetens
A Comparison Study of Inferences on Graphical Model for Registering Surface Model to 3D Image

In this article, we report on a performance comparison of inference methods on graphical models for model-to-image registration. Both Markov chain Monte Carlo (MCMC) and nonparametric belief propagation (NBP) are widely used for inferring marginal posterior distributions of random variables on graphical models. It is known that the accuracy of the inferred distributions changes according to the method used for the inference and to the structure of the graphical model. In this article, we focus on a model-to-image registration method, which registers a surface model to given 3D images based on inference on a graphical model. We applied MCMC and NBP for the inference and compared the accuracy of the registration on different structures of graphical models. In our experiments, MCMC significantly outperformed NBP in registration accuracy.

Yoshihide Sawada, Hidekata Hontani
A Large-Scale Manifold Learning Approach for Brain Tumor Progression Prediction

We present a novel manifold learning approach to efficiently identify low-dimensional structures, known as manifolds, embedded in large-scale, high-dimensional MRI datasets for brain tumor growth prediction. The datasets consist of a series of MRI scans for three patients with tumor and progressed regions identified. We attempt to identify low-dimensional manifolds for tumor, progressed and normal tissues, and most importantly, to verify whether the progression manifold, the bridge between the tumor and normal manifolds, exists. By mapping the bridge manifold back to MRI image space, this method has the potential to predict tumor progression, thereby greatly benefiting patient management. Preliminary results supported our hypothesis: normal and tumor manifolds are well separated in a low-dimensional space, and the progression manifold is found to lie roughly between them, but closer to the tumor manifold.

Loc Tran, Deb Banerjee, Xiaoyan Sun, Jihong Wang, Ashok J. Kumar, David Vinning, Frederic D. McKenzie, Yaohang Li, Jiang Li
Automated Detection of Major Thoracic Structures with a Novel Online Learning Method

This paper presents a novel online learning method for automatically detecting anatomic structures in medical images. Conventional offline learning requires collecting all representative samples before training begins. Our approach eliminates the need for storing historical training samples and is capable of continuously enhancing its performance with new samples. We evaluate our approach on three distinct thoracic structures, demonstrating that it yields performance competitive with the offline approach. This performance is attributed to our novel online learning structure coupled with histograms as weak learners.

Nima Tajbakhsh, Hong Wu, Wenzhe Xue, Jianming Liang
Accurate Regression-Based 4D Mitral Valve Surface Reconstruction from 2D+t MRI Slices

Cardiac MR (CMR) imaging is increasingly accepted as the gold standard for the evaluation of cardiac anatomy, function and mass. The multi-planar capability of CMR makes it a well-suited modality for evaluation of the complex anatomy of the mitral valve (MV). However, the 2D slice-based acquisition paradigm of CMR limits its 4D capabilities for precise and accurate morphological and pathological analysis due to long throughput times and protracted studies. In this paper we propose a new CMR protocol for acquiring MR images for 4D MV analysis. The proposed protocol is optimized with respect to the number and spatial configuration of the 2D CMR slices. Furthermore, we present a learning-based framework for patient-specific 4D MV segmentation from 2D CMR slices (sparse data). The key idea of our Regression-based Surface Reconstruction (RSR) algorithm is to use available MV models from other imaging modalities (CT, US) to train a dynamic regression model that can then infer the information absent from CMR. Extensive experiments on 200 transesophageal echocardiographic (TEE) US and 20 cardiac CT sequences are performed to train the regression model and to define the CMR acquisition protocol. With the proposed acquisition protocol, a stack of 6 parallel long-axis (LA) planes, we acquired CMR patient images and regressed a 4D patient-specific MV model with an accuracy of 1.5±0.2 mm and an average speed of 10 seconds per volume.

Dime Vitanovski, Alexey Tsymbal, Razvan Ioan Ionasec, Michaela Schmidt, Andreas Greiser, Edgar Mueller, Xiaoguang Lu, Gareth Funka-Lea, Joachim Hornegger, Dorin Comaniciu
Tree Structured Model of Skin Lesion Growth Pattern via Color Based Cluster Analysis

This paper presents a novel approach to the analysis and classification of skin lesions based on their growth pattern. Our method constructs a tree structure for every lesion by repeatedly subdividing the image into sub-images using color-based clustering. In this method, segmentation, which is a challenging task, is not required. The obtained multi-scale tree structure provides a framework that allows us to extract a variety of features based on the appearance of the tree structure or the sub-images corresponding to its nodes. Preliminary features (the number of nodes, the number of leaves, the depth of the tree, and 9 compactness indices of the dark spots represented by the sub-images associated with each node of the tree) are used to train a supervised learning algorithm. Results show the strength of the method in classifying lesions into malignant and benign classes. We achieved a precision of 0.855, recall of 0.849, and F-measure of 0.834 using a three-layer perceptron, and a precision of 0.829, recall of 0.832, and F-measure of 0.817 using AdaBoost, on a dataset containing 112 malignant and 298 benign dermoscopic lesion images.

Sina KhakAbi, Tim K. Lee, M. Stella Atkins
Subject-Specific Cardiac Segmentation Based on Reinforcement Learning with Shape Instantiation

Subject-specific segmentation for medical images plays a critical role in translating medical image computing techniques to routine clinical practice. Many current segmentation methods, however, are still focused on building general models, and thus lack the generalizability for unseen, particularly pathological data. In this paper, a reinforcement learning algorithm is proposed to integrate specific human expert behavior for segmenting subject-specific data. The algorithm uses a generic two-layer reinforcement learning framework and incorporates shape instantiation to constrain the target shape geometrically. The learning occurs in the background when the user segments the image in real-time, thus eliminating the need to prepare subject-specific training data. Detailed validation of the algorithm on hypertrophic cardiomyopathy (HCM) datasets demonstrates improved segmentation accuracy, reduced user-input, and thus the potential clinical value of the proposed algorithm.

Lichao Wang, Su-Lin Lee, Robert Merrifield, Guang-Zhong Yang
Faster Segmentation Algorithm for Optical Coherence Tomography Images with Guaranteed Smoothness

This paper considers the problem of segmenting an accurate and smooth surface from 3D volumetric images. Despite extensive studies in the past, the segmentation problem remains challenging in medical imaging, and becomes even harder in highly noisy and edge-weak images. In this paper we present a highly efficient graph-theoretical approach for segmenting a surface from 3D OCT images. Our approach adopts an objective function that combines the weight and the smoothness of the surface so that the resulting segmentation achieves global optimality and smoothness simultaneously. Based on a volumetric graph representation of the 3D images that incorporates curvature information, our approach first generates a set of 2D locally optimal segmentations, and then iteratively improves the solution by fast local computation at regions where significant improvement can be achieved. We show that our approach monotonically improves the quality of the solution and converges rather quickly to the globally optimal solution. To evaluate the convergence and performance of our method, we test it on both artificial data sets and a set of 14 3D OCT images. Our experiments suggest that the proposed method yields optimal (or almost optimal) solutions in 3 to 5 iterations. Compared to existing approaches, our method has a much improved running time and yields almost the same global optimality with much better smoothness, which makes it especially suitable for segmenting highly noisy images. Our approach can easily be generalized to multi-surface detection.

Lei Xu, Branislav Stojkovic, Hu Ding, Qi Song, Xiaodong Wu, Milan Sonka, Jinhui Xu
Automated Nuclear Segmentation of Coherent Anti-Stokes Raman Scattering Microscopy Images by Coupling Superpixel Context Information with Artificial Neural Networks

Coherent anti-Stokes Raman scattering (CARS) microscopy is attracting major scientific attention because its high-resolution, label-free properties have great potential for real-time cancer diagnosis during an image-guided-therapy process. In this study, we develop a nuclear segmentation technique which is essential for the automated analysis of CARS images in the differential diagnosis of lung cancer subtypes. Thus far, no existing automated approaches could effectively segment CARS images due to their low signal-to-noise ratio (SNR) and uneven background. Naturally, manual delineation of cellular structures is time-consuming, subject to individual bias, and restricts the ability to process large datasets. Herein we propose a fully automated nuclear segmentation strategy by coupling superpixel context information and an artificial neural network (ANN), which is, to the best of our knowledge, the first automated nuclear segmentation approach for CARS images. The superpixel technique for local clustering divides an image into small patches by integrating local intensity and position information. It can accurately separate nuclear pixels even when they possess subtly lower contrast with the background. The resulting patches correspond either to cell nuclei or to background. To separate cell nucleus patches from background ones, we introduce rayburst shape descriptors and define a superpixel context index that combines information from a given superpixel and its immediate neighbors, some of which are background superpixels with higher intensity. Finally, we train an ANN to identify the nuclear superpixels from those corresponding to background. Experimental validation on three subtypes of lung cancers demonstrates that the proposed approach is fast, stable, and accurate for segmentation of CARS images, the first step in the clinical use of CARS for differential cancer analysis.

Ahmad A. Hammoudi, Fuhai Li, Liang Gao, Zhiyong Wang, Michael J. Thrall, Yehia Massoud, Stephen T. C. Wong
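The superpixel-then-classify pipeline can be sketched as below; the SLIC parameters, the crude per-superpixel statistics (standing in for the paper's context index and rayburst descriptors) and the network size are assumptions.

    import numpy as np
    from skimage.segmentation import slic
    from sklearn.neural_network import MLPClassifier

    img = np.random.rand(256, 256)                    # placeholder grayscale CARS image
    segments = slic(img, n_segments=400, compactness=0.1, channel_axis=None)

    def superpixel_features(img, segments):
        feats = []
        for label in np.unique(segments):
            mask = segments == label
            feats.append([img[mask].mean(), img[mask].std()])   # simple intensity statistics
        return np.array(feats)

    X_sp = superpixel_features(img, segments)
    ann = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500)  # trained on labelled superpixels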
3D Segmentation in CT Imagery with Conditional Random Fields and Histograms of Oriented Gradients

In this paper we focus on the problem of 3D segmentation in volumetric computed tomography imagery to identify organs in the abdomen. We propose and evaluate different models and modeling strategies for 3D segmentation based on traditional Markov Random Fields (MRFs) and their discriminative counterparts known as Conditional Random Fields (CRFs). We also evaluate the utility of features based on histograms of oriented gradients, or HOG features. CRFs and HOG features have independently produced state-of-the-art performance in many other problem domains, and we believe our work is the first to combine them and use them for medical image segmentation. We construct 3D lattice MRFs and CRFs, and use variational message passing (VMP) for learning and max-product (MP) inference for prediction in the models. These inference and learning approaches allow us to learn pairwise terms in random fields that are non-submodular and are thus very flexible. We focus our experiments on abdominal organ and region segmentation, but our general approach should be useful in other settings. We evaluate our approach on a larger set of anatomical structures found within a publicly available liver database, and we provide these labels for the dataset to the community for future research.

Chetan Bhole, Nicholas Morsillo, Christopher Pal
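HOG descriptors of the kind used as CRF features can be computed per slice as below; the cell and block sizes are assumed values, and the CRF learning and inference (VMP, max-product) are not shown.

    import numpy as np
    from skimage.feature import hog

    slice_2d = np.random.rand(128, 128)   # one axial CT slice (placeholder)
    descriptor = hog(slice_2d, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    # The descriptor would feed the unary potentials of the 3D lattice CRF.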
Automatic Human Knee Cartilage Segmentation from Multi-contrast MR Images Using Extreme Learning Machines and Discriminative Random Fields

Accurate and automatic segmentation of knee cartilage is required for quantitative cartilage measures and is crucial for the assessment of acute injury or osteoarthritis. Unfortunately, current methods remain unsatisfactory. In this paper, we present a novel solution toward automatic cartilage segmentation from multi-contrast magnetic resonance (MR) images using a pixel classification approach. Most previous classification-based works for cartilage segmentation rely only on the labeling by a trained classifier, such as support vector machines (SVM) or k-nearest neighbor, and do not consider spatial interaction. Extreme learning machines (ELM) have been proposed as the training algorithm for generalized single-hidden-layer feedforward networks, which can be used in various regression and classification applications. Works on ELM have shown that ELM for classification not only tends to achieve good generalization performance, but is also easy to implement, since ELM requires little human intervention (only one user-specified parameter needs to be chosen) and admits a direct least-squares solution. To incorporate spatial dependency in classification, we propose a new segmentation method based on the convex optimization of an ELM-based association potential and a discriminative random fields (DRF) based interaction potential for segmenting cartilage automatically from multi-contrast MR images. Our method not only benefits from the good generalization performance of ELM but also incorporates spatial dependencies in classification. We test the proposed method on multi-contrast MR datasets acquired from 11 subjects. Experimental results show that our method outperforms classifiers based solely on DRF, SVM or ELM in segmentation accuracy.

Kunlei Zhang, Wenmiao Lu
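The "direct least-squares solution" of an extreme learning machine is easy to sketch: random input weights, a single nonlinear hidden layer, and a closed-form solve for the output weights. The hidden-layer size and activation below are placeholder choices.

    import numpy as np

    def elm_train(X, y, n_hidden=200, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(X.shape[1], n_hidden))   # random, untrained input weights
        b = rng.normal(size=n_hidden)
        H = np.tanh(X @ W + b)                        # hidden-layer activations
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta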
MultiCost: Multi-stage Cost-sensitive Classification of Alzheimer’s Disease

Most traditional classification methods for Alzheimer's disease (AD) aim to obtain a high accuracy, or equivalently a low classification error rate, which implicitly assumes that the losses of all misclassifications are the same. However, in practical AD diagnosis, the losses of misclassifying healthy subjects and AD patients are usually very different. For example, it may be troublesome if a healthy subject is misclassified as AD, but it could result in a more serious consequence if an AD patient is misclassified as a healthy subject. In this paper, we propose a multi-stage cost-sensitive approach for AD classification via multimodal imaging data and CSF biomarkers. Our approach contains three key components: (1) a cost-sensitive feature selection step which can select more AD-related brain regions by using different costs for different misclassifications in the feature selection stage, (2) a multimodal data fusion step which effectively fuses data from MRI, PET and CSF biomarkers based on a combination of multiple kernels, and (3) a cost-sensitive classifier construction step which further reduces the overall misclassification loss through a threshold-moving strategy. Experimental results on the ADNI dataset show that the proposed approach can significantly reduce the cost of misclassification and simultaneously improve the sensitivity, at the same or even higher classification accuracy compared with conventional methods.

Daoqiang Zhang, Dinggang Shen
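The threshold-moving step can be sketched in a few lines: with unequal misclassification costs, the decision threshold on the posterior probability of AD moves away from 0.5. The cost values below are made-up examples.

    import numpy as np

    def cost_sensitive_decision(prob_ad, cost_fn=5.0, cost_fp=1.0):
        # Predict AD when the expected loss of calling the subject healthy
        # (prob_ad * cost_fn) exceeds that of calling them AD ((1 - prob_ad) * cost_fp),
        # i.e. when prob_ad exceeds cost_fp / (cost_fp + cost_fn).
        threshold = cost_fp / (cost_fp + cost_fn)
        return (np.asarray(prob_ad) >= threshold).astype(int)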
Classifying Small Lesions on Breast MRI through Dynamic Enhancement Pattern Characterization

Dynamic characterization of the lesion enhancement pattern can improve the classification performance of small, diagnostically challenging lesions on dynamic contrast-enhanced MRI. This involves extraction of texture features from all post-contrast images of the lesion rather than using the first post-contrast image alone. In this study, statistical texture features derived from gray-level co-occurrence matrices are extracted from all five post-contrast images of 60 lesions and then used in a supervised learning task with a support vector regressor. Our results show that this approach significantly improves the performance of classifying small lesions (p < 0.05). This suggests that such dynamic characterization of lesion enhancement has significant potential in assisting breast cancer diagnosis for small lesions.

Mahesh B. Nagarajan, Markus B. Huber, Thomas Schlossbauer, Gerda Leinsinger, Andrzej Krol, Axel Wismüller
Computer-Aided Detection of Polyps in CT Colonography with Pixel-Based Machine Learning Techniques

Pixel/voxel-based machine learning techniques have been developed for classification between polyp regions of interest (ROIs) and non-polyp ROIs in computer-aided detection (CADe) of polyps in CT colonography (CTC). Although 2D/3D ROIs can be high-dimensional, they may reside in a lower-dimensional manifold. We investigated the manifold structure of 2D CTC ROIs by use of the Laplacian eigenmaps technique. We compared a support vector machine (SVM) classifier operating on the Laplacian eigenmaps-based dimensionality-reduced ROIs with massive-training support vector regression (MTSVR) in the reduction of false positive (FP) detections. The Laplacian eigenmaps-based SVM classifier removed 16.0% (78/489) of FPs without any loss of polyps in a leave-one-lesion-out cross-validation test, whereas the MTSVR removed 49.9% (244/489), thus yielding a 96.6% by-polyp sensitivity at an FP rate of 2.4 (254/106) per patient.

Jian-Wu Xu, Kenji Suzuki
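The dimensionality-reduction step can be sketched with scikit-learn's SpectralEmbedding (a Laplacian eigenmaps implementation) followed by an SVM; note that this embedding has no built-in out-of-sample extension, and the ROI data and parameters below are placeholders.

    import numpy as np
    from sklearn.manifold import SpectralEmbedding
    from sklearn.svm import SVC

    rois = np.random.rand(300, 24 * 24)      # flattened 2D candidate ROIs (placeholder)
    labels = np.random.randint(0, 2, 300)    # polyp vs. false positive (placeholder)

    emb = SpectralEmbedding(n_components=10, n_neighbors=15).fit_transform(rois)
    clf = SVC(kernel="rbf").fit(emb, labels) # classify candidates in the reduced space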
Backmatter
Metadata
Title
Machine Learning in Medical Imaging
Edited by
Kenji Suzuki
Fei Wang
Dinggang Shen
Pingkun Yan
Copyright year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-24319-6
Print ISBN
978-3-642-24318-9
DOI
https://doi.org/10.1007/978-3-642-24319-6