About this Book

The three-volume set LNCS 9900, 9901, and 9902 constitutes the refereed proceedings of the 19th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2016, held in Athens, Greece, in October 2016. Based on rigorous peer reviews, the program committee carefully selected 228 revised regular papers from 756 submissions for presentation in three volumes. The papers have been organized in the following topical sections: Part I: brain analysis; brain analysis - connectivity; brain analysis - cortical morphology; Alzheimer disease; surgical guidance and tracking; computer aided interventions; ultrasound image analysis; cancer image analysis; Part II: machine learning and feature selection; deep learning in medical imaging; applications of machine learning; segmentation; cell image analysis; Part III: registration and deformation estimation; shape modeling; cardiac and vascular image analysis; image reconstruction; and MR image analysis.

Table of Contents

Frontmatter

Feature Selection Based on Iterative Canonical Correlation Analysis for Automatic Diagnosis of Parkinson’s Disease

Parkinson’s disease (PD) is a major progressive neurodegenerative disorder. Accurate diagnosis of PD is crucial to control the symptoms appropriately. However, its clinical diagnosis mostly relies on the subjective judgment of physicians and the clinical symptoms that often appear late. Recent neuroimaging techniques, along with machine learning methods, provide alternative solutions for PD screening. In this paper, we propose a novel feature selection technique, based on iterative canonical correlation analysis (ICCA), to investigate the roles of different brain regions in PD through T1-weighted MR images. First, gray matter and white matter tissue volumes in brain regions of interest are extracted as two feature vectors. Then, a small group of significant features is selected from both feature vectors using the iterative structure of our proposed ICCA framework. Finally, the selected features are used to build a robust classifier for automatic diagnosis of PD. Experimental results show that the proposed feature selection method results in better diagnosis accuracy, compared to the baseline and state-of-the-art methods.

Luyan Liu, Qian Wang, Ehsan Adeli, Lichi Zhang, Han Zhang, Dinggang Shen
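
The core of the ICCA idea can be illustrated with off-the-shelf tools. Below is a minimal, hypothetical sketch using scikit-learn's CCA: two feature views (stand-ins for the GM and WM ROI volumes) are repeatedly fit, and at each iteration the feature with the largest absolute canonical weight is selected from each view and removed from the candidate pool. The data, sizes, and selection rule are illustrative only, not the authors' exact ICCA update.

```python
# Minimal sketch of iterative CCA-driven feature selection across two views
# (e.g., gray-matter and white-matter ROI volumes). Hypothetical data; the
# paper's exact ICCA update rule is not reproduced here.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X_gm = rng.standard_normal((100, 90))   # 100 subjects x 90 GM ROI volumes
X_wm = rng.standard_normal((100, 90))   # 100 subjects x 90 WM ROI volumes

sel_gm, sel_wm = [], []
rem_gm, rem_wm = list(range(90)), list(range(90))
for _ in range(5):                      # 5 iterations -> 5 features per view
    cca = CCA(n_components=1)
    cca.fit(X_gm[:, rem_gm], X_wm[:, rem_wm])
    # Pick the feature with the largest absolute canonical weight in each
    # view, remove it from the candidate pool, then re-fit on the remainder.
    i = int(np.argmax(np.abs(cca.x_weights_[:, 0])))
    j = int(np.argmax(np.abs(cca.y_weights_[:, 0])))
    sel_gm.append(rem_gm.pop(i))
    sel_wm.append(rem_wm.pop(j))

print("selected GM ROIs:", sel_gm, "selected WM ROIs:", sel_wm)
```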

Identifying Relationships in Functional and Structural Connectome Data Using a Hypergraph Learning Method

The brain connectome provides an unprecedented degree of information about the organization of neuronal network architecture, both at a regional level and across the entire brain network. Over the last several years, the neuroimaging community has made tremendous advancements in the analysis of structural connectomes derived from white matter fiber tractography or functional connectomes derived from time-series blood oxygen level signals. However, computational techniques that combine structural and functional connectome data to discover complex relationships between fiber density and signal synchronization, including their relationship with health and disease, have not been consistently explored. To overcome this shortcoming, a novel connectome feature selection technique is proposed that uses hypergraphs to identify connectivity relationships when structural and functional connectome data are combined. Using publicly available connectome data from the UMCD database, experiments show that SVM classifiers trained with structural and functional connectome features selected by our method are able to correctly identify autism subjects with 88 % accuracy. These results suggest our combined connectome feature selection approach may improve outcome forecasting in the context of autism.

Brent C. Munsell, Guorong Wu, Yue Gao, Nicholas Desisto, Martin Styner
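
As a rough illustration of the final classification step, the sketch below vectorizes the upper triangles of synthetic structural and functional connectivity matrices, concatenates them, and cross-validates a linear SVM. The hypergraph-based feature selection that is the paper's actual contribution is not reproduced; all data here are random stand-ins.

```python
# Minimal sketch: vectorize structural and functional connectivity matrices,
# concatenate them, and cross-validate an SVM. Synthetic stand-in data; the
# hypergraph-based feature selection step itself is not shown.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_rois = 60, 90
struct = rng.random((n_subjects, n_rois, n_rois))   # fiber-density matrices
func = rng.random((n_subjects, n_rois, n_rois))     # correlation matrices
y = rng.integers(0, 2, n_subjects)                  # 0 = control, 1 = autism

iu = np.triu_indices(n_rois, k=1)                   # upper triangle only
X = np.hstack([struct[:, iu[0], iu[1]], func[:, iu[0], iu[1]]])

scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print("accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```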

Ensemble Hierarchical High-Order Functional Connectivity Networks for MCI Classification

Conventional functional connectivity (FC) and the corresponding networks characterize the pairwise correlation between two brain regions, while high-order FC (HOFC) and its networks can model more complex relationships between two brain region “pairs” (i.e., four regions). HOFC is promising for clinical applications because it provides unique, novel information for brain disease classification. Since the number of brain region pairs is very large, clustering is often used to reduce the scale of the HOFC network. However, a single HOFC network, generated by a specific clustering parameter setting, may lose the multifaceted, highly complementary information contained in other HOFC networks. To accurately and comprehensively characterize such complex HOFC towards better discriminability of brain diseases, in this paper, we propose a novel HOFC-based disease diagnosis framework, which hierarchically generates multiple HOFC networks and then ensembles them with a selective feature fusion method. Specifically, we create a multi-layer HOFC network construction strategy, where the networks in upper layers are formed by hierarchically clustering the nodes of the networks in lower layers. In this way, information is passed from lower layers to upper layers by effectively removing the most redundant part of the information while retaining the most unique part. Then, the retained information/features from all HOFC networks are fed into a selective feature fusion method, which combines sequential forward selection and sparse regression, to further select the most discriminative feature subset for classification. Experimental results confirm that our novel method outperforms all the single HOFC networks corresponding to any single parameter setting in the diagnosis of mild cognitive impairment (MCI) subjects.

Xiaobo Chen, Han Zhang, Dinggang Shen
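
The notion of "correlation of correlations" is easy to make concrete. The sketch below computes a low-order FC matrix from synthetic time series, derives a simplified HOFC matrix by correlating whole connectivity profiles, and clusters regions to shrink the network, roughly as one layer of the paper's hierarchy. Everything here is illustrative; the paper's region-pair construction and multi-layer ensemble are only hinted at.

```python
# Minimal sketch of low-order vs. high-order functional connectivity and
# clustering to reduce the HOFC network scale. Synthetic time series.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 90))        # 200 time points x 90 regions

lofc = np.corrcoef(ts.T)                   # low-order FC: region x region
# Simplified high-order FC: correlate whole FC profiles, i.e. "correlation
# of correlations" between regional connectivity patterns.
hofc = np.corrcoef(lofc)

# Cluster regions by their HOFC profiles; a different cluster count gives a
# different HOFC network, which the paper ensembles across layers.
Z = linkage(hofc, method="ward")
labels = fcluster(Z, t=10, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```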

Outcome Prediction for Patient with High-Grade Gliomas from Brain Functional and Structural Networks

High-grade glioma (HGG) is a lethal cancer characterized by very poor prognosis. To help optimize treatment strategy, accurate preoperative prediction of an HGG patient’s outcome (i.e., survival time) is of great clinical value. However, there is huge individual variability in HGG, which produces a large variation in survival time, making prognostic prediction challenging. Previous brain imaging-based outcome prediction studies relied only on the imaging intensity inside or slightly around the tumor, while ignoring any information located far away from the lesion (i.e., the “normal appearing” brain tissue). Notably, in addition to altering MR image intensity, we hypothesize that HGG growth and its mass effect also change both structural brain connectivity (which can be modeled by diffusion tensor imaging (DTI)) and functional brain connectivity (estimated from resting-state functional MRI (rs-fMRI)). Therefore, integrating connectomics information in outcome prediction could improve prediction accuracy. To this end, we devise a machine learning-based HGG prediction framework that extracts valuable features from the complex human brain connectome using network analysis tools, followed by a novel multi-stage feature selection strategy to single out good features while reducing feature redundancy. Ultimately, we use a support vector machine (SVM) to classify HGG outcome as either bad (survival time ≤650 days) or good (survival time >650 days). Our method achieved 75 % prediction accuracy. We also found that functional and structural networks provide complementary information for outcome prediction, leading to increased prediction accuracy compared with the baseline method, which uses only the basic clinical information (63.2 %).

Luyan Liu, Han Zhang, Islem Rekik, Xiaobo Chen, Qian Wang, Dinggang Shen
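
A minimal sketch of the pipeline's final stage: graph-theoretic features (weighted degree, clustering, and global efficiency, computed here with networkx) feeding an SVM that labels survival at the 650-day threshold. The connectomes and labels are synthetic, and the paper's multi-stage feature selection is omitted.

```python
# Minimal sketch: network-analysis features from connectivity matrices,
# classified with an SVM. Synthetic stand-in data throughout.
import numpy as np
import networkx as nx
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def synth_connectome(n):
    A = rng.random((n, n))
    A = (A + A.T) / 2.0              # symmetrize
    A[A < np.median(A)] = 0.0        # keep only the stronger half of edges
    np.fill_diagonal(A, 0.0)
    return A

def network_features(conn):
    """Weighted degree and clustering per node, plus global efficiency."""
    G = nx.from_numpy_array(conn)
    degree = np.array([d for _, d in G.degree(weight="weight")])
    clust = np.array(list(nx.clustering(G, weight="weight").values()))
    eff = nx.global_efficiency(G)    # computed on the binary topology
    return np.concatenate([degree, clust, [eff]])

n_subjects, n_rois = 40, 30
X = np.vstack([network_features(synth_connectome(n_rois))
               for _ in range(n_subjects)])
y = rng.integers(0, 2, n_subjects)   # 0: survival <= 650 days, 1: > 650 days

print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```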

Mammographic Mass Segmentation with Online Learned Shape and Appearance Priors

Automatic segmentation of mammographic masses is an important yet challenging task. Despite the great success of shape priors in biomedical image analysis, existing shape modeling methods are not suitable for mass segmentation. The reason is that masses have no specific biological structure and exhibit complex variation in shape, margin, and size. In addition, it is difficult to preserve the local details of mass boundaries, as masses may have spiculated and obscure boundaries. To solve these problems, we propose to learn shape and appearance priors online via image retrieval. In particular, given a query image, its visually similar training masses are first retrieved via Hough voting of local features. Then, query-specific shape and appearance priors are calculated from these training masses on the fly. Finally, the query mass is segmented using these priors and graph cuts. The proposed approach is extensively validated on a large dataset constructed from DDSM. Results demonstrate that our online learned priors lead to substantial improvement in mass segmentation accuracy, compared with previous systems.

Menglin Jiang, Shaoting Zhang, Yuanjie Zheng, Dimitris N. Metaxas

Differential Dementia Diagnosis on Incomplete Data with Latent Trees

Incomplete patient data is a substantial problem that is not sufficiently addressed in current clinical research. Many published methods assume both completeness and validity of study data. However, this assumption is often violated, as individual features might be unavailable due to missed patient examinations, or distorted or wrong due to inaccurate measurements or human error. In this work, we propose to use the Latent Tree (LT) generative model to address current limitations due to missing data. We show on 491 subjects of a challenging dementia dataset that LT feature estimation is more robust to incomplete data than mean or Gaussian Mixture Model imputation and has a synergistic effect when combined with common classifiers (we use SVM as an example). We show that LTs allow the inclusion of incomplete samples into classifier training. Using LTs, we obtain a balanced accuracy of 62 % for the classification of all patients into five distinct dementia types, even though 20 % of the features are missing in both training and testing data (68 % on complete data). Further, we confirm the potential of LTs to detect outlier samples within the dataset.

Christian Ledig, Sebastian Kaltwang, Antti Tolonen, Juha Koikkalainen, Philip Scheltens, Frederik Barkhof, Hanneke Rhodius-Meester, Betty Tijms, Afina W. Lemstra, Wiesje van der Flier, Jyrki Lötjönen, Daniel Rueckert
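
For context, the imputation baselines that LT is compared against are straightforward to assemble in scikit-learn. The sketch below runs mean imputation and a model-based iterative imputer (a rough stand-in for GMM imputation), each feeding an SVM scored by balanced accuracy on synthetic five-class data with 20 % of entries missing. The Latent Tree model itself is not part of scikit-learn and is not shown.

```python
# Minimal sketch of the imputation baselines the paper compares against:
# mean imputation vs. a model-based imputer, each feeding an SVM.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, IterativeImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 20))
y = rng.integers(0, 5, 300)                  # five dementia types
mask = rng.random(X.shape) < 0.2             # 20 % of feature values missing
X[mask] = np.nan

for imputer in (SimpleImputer(strategy="mean"), IterativeImputer(max_iter=5)):
    pipe = make_pipeline(imputer, SVC())
    score = cross_val_score(pipe, X, y, cv=3, scoring="balanced_accuracy")
    print(type(imputer).__name__, "balanced accuracy: %.2f" % score.mean())
```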

Bridging Computational Features Toward Multiple Semantic Features with Multi-task Regression: A Study of CT Pulmonary Nodules

The gap between computational and semantic features is one of the major factors keeping computer-aided diagnosis (CAD) performance from clinical usage. To bridge this gap, we propose a multi-task regression (MTR) scheme that leverages heterogeneous computational features derived from deep learning models, specifically a stacked denoising autoencoder (SDAE) and a convolutional neural network (CNN), as well as Haar-like features, to approach 8 semantic features of lung CT nodules. We posit that there may exist relations among the semantic features of “spiculation”, “texture”, “margin”, etc., that can be exploited with the multi-task learning technique. The Lung Imaging Database Consortium (LIDC) data is adopted for its rich annotations, where nodules were quantitatively rated for the semantic features by many radiologists. Treating each semantic feature as a task, the MTR selects and regresses the heterogeneous computational features toward the radiologists’ ratings, evaluated with 10-fold cross-validation on 1400 randomly selected LIDC nodules. The experimental results suggest that the predicted semantic scores from MTR are closer to the radiologists’ ratings than the predicted scores from single-task LASSO and elastic net regression methods. The proposed semantic scoring scheme may provide richer quantitative assessments of nodules for deeper analysis and support more sophisticated clinical content retrieval in medical databases.

Sihong Chen, Dong Ni, Jing Qin, Baiying Lei, Tianfu Wang, Jie-Zhi Cheng
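
The multi-task intuition, that the 8 semantic ratings share a common set of useful computational features, can be sketched with scikit-learn's MultiTaskLasso, whose l21 penalty selects the same features for every task, against a per-task Lasso baseline. The data below are synthetic stand-ins, not LIDC features.

```python
# Minimal sketch contrasting per-task LASSO with multi-task regression that
# shares a feature support across 8 semantic ratings. Synthetic data.
import numpy as np
from sklearn.linear_model import Lasso, MultiTaskLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((1400, 200))     # nodules x computational features
W = np.zeros((200, 8))
W[:10] = rng.standard_normal((10, 8))    # the 8 tasks share 10 true features
Y = X @ W + 0.1 * rng.standard_normal((1400, 8))

mtl = MultiTaskLasso(alpha=0.1).fit(X, Y)
# The joint (l21) penalty zeroes whole rows of the coefficient matrix,
# i.e. it selects the same features for every semantic task.
shared_support = np.flatnonzero(np.abs(mtl.coef_).sum(axis=0))
print("features kept by multi-task LASSO:", shared_support)

single = Lasso(alpha=0.1).fit(X, Y[:, 0])  # per-task baseline, one rating
print("features kept by single-task LASSO:", np.flatnonzero(single.coef_))
```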

Robust Cancer Treatment Outcome Prediction Dealing with Small-Sized and Imbalanced Data from FDG-PET Images

Accurately predicting the outcome of cancer therapy is valuable for tailoring and adapting treatment planning. To this end, features extracted from multiple sources of information (e.g., radiomics and clinical characteristics) are potentially profitable. While it is of great interest to select the most informative features from all available ones, small and imbalanced datasets, as often encountered in the medical domain, are a crucial challenge hindering reliable and stable subset selection. We propose a prediction system primarily using radiomic features extracted from FDG-PET images. It incorporates a feature selection method based on Dempster-Shafer theory, a powerful tool for modeling and reasoning with uncertain and/or imprecise information. Utilizing a data rebalancing procedure and specified prior knowledge to enhance the reliability and robustness of selected feature subsets, the proposed method aims to reduce the imprecision and overlaps between different classes in the selected feature subspace, thus improving the prediction accuracy. It has been evaluated on two clinical datasets, showing good performance.

Chunfeng Lian, Su Ruan, Thierry Denœux, Hua Li, Pierre Vera
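
A hypothetical sketch of the rebalance-then-select idea using plain scikit-learn pieces: random oversampling of the minority class followed by univariate feature selection and an SVM. The paper's Dempster-Shafer formulation is not reproduced, and for a real study the rebalancing should happen inside each cross-validation fold to avoid leakage.

```python
# Minimal sketch of data rebalancing plus feature selection on a small,
# imbalanced dataset; a stand-in for the paper's Dempster-Shafer machinery.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 40))        # small radiomics feature set
y = np.array([0] * 40 + [1] * 10)        # imbalanced outcome labels

# Oversample the minority class to the majority size (in a real study, do
# this inside each CV fold rather than once up front).
X_min, X_maj = X[y == 1], X[y == 0]
X_min_up = resample(X_min, n_samples=len(X_maj), random_state=0)
X_bal = np.vstack([X_maj, X_min_up])
y_bal = np.array([0] * len(X_maj) + [1] * len(X_min_up))

pipe = make_pipeline(SelectKBest(f_classif, k=8), SVC(kernel="linear"))
print(cross_val_score(pipe, X_bal, y_bal, cv=5).mean())
```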

Structured Sparse Kernel Learning for Imaging Genetics Based Alzheimer’s Disease Diagnosis

A kernel-learning based method is proposed to integrate multimodal imaging and genetic data for Alzheimer’s disease (AD) diagnosis. To facilitate structured feature learning in kernel space, we represent each feature with a kernel and then group kernels according to modalities. In view of the highly redundant features within each modality and the complementary information across modalities, we introduce a novel structured sparsity regularizer for feature selection and fusion, which is different from conventional lasso and group lasso based methods. Specifically, we enforce a penalty on kernel weights to simultaneously select features sparsely within each modality and densely combine different modalities. We have evaluated the proposed method using magnetic resonance imaging (MRI), positron emission tomography (PET), and single-nucleotide polymorphism (SNP) data of subjects from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. The effectiveness of our method is demonstrated by both the clearly improved prediction accuracy and the discovered brain regions and SNPs relevant to AD.

Jailin Peng, Le An, Xiaofeng Zhu, Yan Jin, Dinggang Shen

Semi-supervised Hierarchical Multimodal Feature and Sample Selection for Alzheimer’s Disease Diagnosis

Alzheimer’s disease (AD) is a progressive neurodegenerative disease that impairs a patient’s memory and other important mental functions. In this paper, we leverage the mutually informative and complementary features from both structural magnetic resonance imaging (MRI) and single nucleotide polymorphism (SNP) for improving the diagnosis. Due to the feature redundancy and sample outliers, direct use of all training data may lead to suboptimal performance in classification. In addition, as redundant features are involved, the most discriminative feature subset may not be identified in a single step, as commonly done in most existing feature selection approaches. Therefore, we formulate a hierarchical multimodal feature and sample selection framework to gradually select informative features and discard ambiguous samples in multiple steps. To positively guide the data manifold preservation, we utilize both labeled and unlabeled data in the learning process, making our method semi-supervised. The finally selected features and samples are then used to train support vector machine (SVM) based classification models. Our method is evaluated on 702 subjects from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset, and the superior classification results in AD related diagnosis demonstrate the effectiveness of our approach as compared to other methods.

Le An, Ehsan Adeli, Mingxia Liu, Jun Zhang, Dinggang Shen

Stability-Weighted Matrix Completion of Incomplete Multi-modal Data for Disease Diagnosis

Effective utilization of heterogeneous multi-modal data for Alzheimer’s Disease (AD) diagnosis and prognosis has always been hampered by incomplete data. One method to deal with this is low-rank matrix completion (LRMC), which simultaneously imputes missing data features and target values of interest. Although LRMC yields reasonable results, it implicitly weights features from all the modalities equally, ignoring the differences in discriminative power of features from different modalities. In this paper, we propose stability-weighted LRMC (swLRMC), an LRMC improvement that weights features and modalities according to their importance and reliability. We introduce a method, called stability weighting, that utilizes subsampling techniques and outcomes from a range of hyper-parameters of sparse feature learning to obtain a stable set of weights. Incorporating these weights into LRMC, swLRMC can better account for differences in features and modalities for improving diagnosis. Experimental results confirm that the proposed method outperforms conventional LRMC, feature-selection based LRMC, and other state-of-the-art methods.

Kim-Han Thung, Ehsan Adeli, Pew-Thian Yap, Dinggang Shen
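
Plain LRMC is simple enough to sketch directly: iterative SVD soft-thresholding (SoftImpute-style) fills the missing entries of a low-rank matrix while keeping observed entries fixed. The stability weighting that swLRMC adds on top is not shown; data and the regularization value are illustrative.

```python
# Minimal sketch of low-rank matrix completion by iterative SVD
# soft-thresholding (SoftImpute-style). Synthetic low-rank data.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 40))  # rank 5
mask = rng.random(M.shape) < 0.3                                  # 30 % missing
X = np.where(mask, np.nan, M)

def soft_impute(X, lam=1.0, n_iter=100):
    obs = ~np.isnan(X)
    Z = np.where(obs, X, 0.0)                  # initialize missing with zeros
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        s = np.maximum(s - lam, 0.0)           # soft-threshold singular values
        Z_low = (U * s) @ Vt
        Z = np.where(obs, X, Z_low)            # keep observed entries fixed
    return Z

X_hat = soft_impute(X)
err = np.sqrt(np.mean((X_hat[mask] - M[mask]) ** 2))
print("RMSE on the imputed entries: %.3f" % err)
```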

Employing Visual Analytics to Aid the Design of White Matter Hyperintensity Classifiers

Accurate segmentation of brain white matter hyperintensities (WMHs) is important for prognosis and disease monitoring. To this end, classifiers are often trained – usually, using T1 and FLAIR weighted MR images. Incorporating additional features, derived from diffusion weighted MRI, could improve classification. However, the multitude of diffusion-derived features requires selecting the most adequate ones. For this, automated feature selection is commonly employed, which can often be sub-optimal. In this work, we propose a different approach, introducing a semi-automated pipeline to interactively select features for WMH classification. The advantage of this solution is the integration of the knowledge and skills of experts in the process. In our pipeline, a Visual Analytics (VA) system is employed to enable user-driven feature selection. The resulting features are T1, FLAIR, Mean Diffusivity (MD), and Radial Diffusivity (RD) – and secondarily, $$C_S$$ and Fractional Anisotropy (FA). The next step in the pipeline is to train a classifier with these features, and compare its results to a similar classifier, used in previous work with automated feature selection. Finally, VA is employed again, to analyze and understand the classifier performance and results.

Renata Georgia Raidou, Hugo J. Kuijf, Neda Sepasian, Nicola Pezzotti, Willem H. Bouvy, Marcel Breeuwer, Anna Vilanova

The Automated Learning of Deep Features for Breast Mass Classification from Mammograms

The classification of breast masses from mammograms into benign or malignant has been commonly addressed with machine learning classifiers that use as input a large set of hand-crafted features, usually based on general geometrical and texture information. In this paper, we propose a novel deep learning method that automatically learns features based directly on the optimisation of breast mass classification from mammograms, where we target an improved classification performance compared to the approach described above. The novelty of our approach lies in the two-step training process that involves a pre-training based on the learning of a regressor that estimates the values of a large set of hand-crafted features, followed by a fine-tuning stage that learns the breast mass classifier. Using the publicly available INbreast dataset, we show that the proposed method produces better classification results, compared with the machine learning model using hand-crafted features and with a deep learning method trained directly for the classification stage without the pre-training stage. We also show that the proposed method produces the current state-of-the-art breast mass classification results for the INbreast dataset. Finally, we integrate the proposed classifier into a fully automated breast mass detection and segmentation system, which shows promising results.

Neeraj Dhungel, Gustavo Carneiro, Andrew P. Bradley
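
The two-step training process can be sketched in a few lines of PyTorch, assuming a toy backbone: first pre-train with a regression head against hand-crafted feature values, then swap in a classification head and fine-tune the whole network. Shapes, epochs, and the architecture are placeholders, not the paper's model.

```python
# Minimal PyTorch sketch of the two-step idea: pre-train a small CNN to
# regress hand-crafted feature values, then swap the head and fine-tune it
# as a mass classifier. All sizes and data are illustrative.
import torch
import torch.nn as nn

backbone = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

x = torch.randn(8, 1, 64, 64)            # batch of mass patches
handcrafted = torch.randn(8, 20)         # e.g. geometry/texture descriptors
labels = torch.randint(0, 2, (8,))       # benign / malignant

# Step 1: pre-train as a regressor of the hand-crafted features.
reg_head = nn.Linear(32, 20)
opt = torch.optim.Adam(list(backbone.parameters()) + list(reg_head.parameters()))
for _ in range(10):
    opt.zero_grad()
    loss = nn.functional.mse_loss(reg_head(backbone(x)), handcrafted)
    loss.backward()
    opt.step()

# Step 2: replace the head and fine-tune the whole network as a classifier.
cls_head = nn.Linear(32, 2)
opt = torch.optim.Adam(list(backbone.parameters()) + list(cls_head.parameters()))
for _ in range(10):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(cls_head(backbone(x)), labels)
    loss.backward()
    opt.step()
```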

Multimodal Deep Learning for Cervical Dysplasia Diagnosis

To improve the diagnostic accuracy of cervical dysplasia, it is important to fuse multimodal information collected during a patient’s screening visit. However, current multimodal frameworks suffer from low sensitivity at high specificity levels, due to their limitations in learning correlations among highly heterogeneous modalities. In this paper, we design a deep learning framework for cervical dysplasia diagnosis by leveraging multimodal information. We first employ the convolutional neural network (CNN) to convert the low-level image data into a feature vector fusible with other non-image modalities. We then jointly learn the non-linear correlations among all modalities in a deep neural network. Our multimodal framework is an end-to-end deep network which can learn better complementary features from the image and non-image modalities. It automatically gives the final diagnosis for cervical dysplasia with 87.83 % sensitivity at 90 % specificity on a large dataset, which significantly outperforms methods using any single source of information alone and previous multimodal frameworks.

Tao Xu, Han Zhang, Xiaolei Huang, Shaoting Zhang, Dimitris N. Metaxas

Learning from Experts: Developing Transferable Deep Features for Patient-Level Lung Cancer Prediction

Due to recent progress in Convolutional Neural Networks (CNNs), developing image-based CNN models for predictive diagnosis is gaining enormous interest. However, to date, insufficient imaging samples with pathologically-proven labels impede the evaluation of CNN models at scale. In this paper, we formulate a domain-adaptation framework that learns transferable deep features for patient-level lung cancer malignancy prediction. The presented work learns CNN-based features from a large discovery set (2272 lung nodules) with malignancy likelihood labels involving multiple radiologists’ assessments, and then tests the transferable predictability of these CNN-based features on a diagnosis-definite set (115 cases) with true pathologically-proven lung cancer labels. We evaluate our approach on the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset, where both human expert labeling information on cancer malignancy likelihood and a set of pathologically-proven malignancy labels were provided. Experimental results demonstrate the superior predictive performance of the transferable deep features on predicting true patient-level lung cancer malignancy (Acc = 70.69 %, AUC = 0.66), which outperforms a nodule-level CNN model (Acc = 65.38 %, AUC = 0.63) and is even comparable to that of using the radiologists’ knowledge (Acc = 72.41 %, AUC = 0.76). The proposed model can largely reduce the demand for pathologically-proven data, holding promise to empower cancer diagnosis by leveraging multi-source CT imaging datasets.

Wei Shen, Mu Zhou, Feng Yang, Di Dong, Caiyun Yang, Yali Zang, Jie Tian

DeepVessel: Retinal Vessel Segmentation via Deep Learning and Conditional Random Field

Retinal vessel segmentation is a fundamental step for various ocular imaging applications. In this paper, we formulate the retinal vessel segmentation problem as a boundary detection task and solve it using a novel deep learning architecture. Our method is based on two key ideas: (1) applying a multi-scale and multi-level Convolutional Neural Network (CNN) with a side-output layer to learn a rich hierarchical representation, and (2) utilizing a Conditional Random Field (CRF) to model the long-range interactions between pixels. We combine the CNN and CRF layers into an integrated deep network called DeepVessel. Our experiments show that the DeepVessel system achieves state-of-the-art retinal vessel segmentation performance on the DRIVE, STARE, and CHASE_DB1 datasets with an efficient running time.

Huazhu Fu, Yanwu Xu, Stephen Lin, Damon Wing Kee Wong, Jiang Liu

Deep Retinal Image Understanding

This paper presents Deep Retinal Image Understanding (DRIU), a unified framework of retinal image analysis that provides both retinal vessel and optic disc segmentation. We make use of deep Convolutional Neural Networks (CNNs), which have proven revolutionary in other fields of computer vision such as object detection and image classification, and we bring their power to the study of eye fundus images. DRIU uses a base network architecture on which two sets of specialized layers are trained to solve both the retinal vessel and optic disc segmentation tasks. We present experimental validation, both qualitative and quantitative, on four public datasets for these tasks. In all of them, DRIU presents super-human performance, that is, it shows results more consistent with a gold standard than a second human annotator used as control.

Kevis-Kokitsi Maninis, Jordi Pont-Tuset, Pablo Arbeláez, Luc Van Gool

3D Deeply Supervised Network for Automatic Liver Segmentation from CT Volumes

Automatic liver segmentation from CT volumes is a crucial prerequisite yet challenging task for computer-aided hepatic disease diagnosis and treatment. In this paper, we present a novel 3D deeply supervised network (3D DSN) to address this challenging task. The proposed 3D DSN takes advantage of a fully convolutional architecture which performs efficient end-to-end learning and inference. More importantly, we introduce a deep supervision mechanism during the learning process to combat potential optimization difficulties, and thus the model can acquire a much faster convergence rate and more powerful discrimination capability. On top of the high-quality score map produced by the 3D DSN, a conditional random field model is further employed to obtain refined segmentation results. We evaluated our framework on the public MICCAI-SLiver07 dataset. Extensive experiments demonstrated that our method achieves competitive segmentation results to state-of-the-art approaches with a much faster processing speed.

Qi Dou, Hao Chen, Yueming Jin, Lequan Yu, Jing Qin, Pheng-Ann Heng
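
The deep supervision mechanism amounts to attaching auxiliary prediction heads to intermediate layers and summing weighted losses, so gradients reach early layers through short paths. A toy PyTorch sketch with made-up 3D shapes and an assumed auxiliary weight of 0.3:

```python
# Minimal PyTorch sketch of deep supervision: an auxiliary prediction branch
# on a hidden layer contributes a weighted loss alongside the final one.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDSN(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = nn.Conv3d(1, 8, 3, padding=1)
        self.block2 = nn.Conv3d(8, 16, 3, padding=1)
        self.aux_head = nn.Conv3d(8, 2, 1)    # supervision on the hidden layer
        self.main_head = nn.Conv3d(16, 2, 1)  # final voxel-wise score map

    def forward(self, x):
        h1 = F.relu(self.block1(x))
        h2 = F.relu(self.block2(h1))
        return self.main_head(h2), self.aux_head(h1)

net = TinyDSN()
x = torch.randn(2, 1, 16, 32, 32)              # batch of CT sub-volumes
target = torch.randint(0, 2, (2, 16, 32, 32))  # voxel labels (1 = liver)

main_out, aux_out = net(x)
loss = F.cross_entropy(main_out, target) + 0.3 * F.cross_entropy(aux_out, target)
loss.backward()   # gradients reach the early layers through both paths
```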

Deep Neural Networks for Fast Segmentation of 3D Medical Images

In recent years, Deep Learning and especially Convolutional Neural Networks (CNN) have set new standards for different computer vision tasks like image classification and semantic segmentation. In this paper, a CNN for 3D volume segmentation based on recently introduced deep learning components is presented. In addition to using image patches as input for a CNN, the usage of orthogonal patches, which combine shape and locality information with intensity information for CNN training, is evaluated. For this purpose, a publicly available CT dataset of the head-neck region has been used, and the results have been compared with other state-of-the-art atlas- and model-based segmentation approaches. The presented approach is fully automated, fast, and not restricted to specific anatomical structures. Quantitative evaluation provides good results and shows the great potential of deep learning approaches for the segmentation of medical images.

Karl Fritscher, Patrik Raudaschl, Paolo Zaffino, Maria Francesca Spadea, Gregory C. Sharp, Rainer Schubert
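
Extracting the orthogonal ("2.5D") patches is mostly indexing. A minimal numpy sketch, with illustrative sizes, that stacks the axial, coronal, and sagittal planes through a voxel into a three-channel CNN input:

```python
# Minimal sketch of orthogonal patch extraction around a voxel; sizes and
# the synthetic volume are illustrative only.
import numpy as np

def orthogonal_patches(vol, z, y, x, half=16):
    """Return the three 2D patches centered on (z, y, x)."""
    axial = vol[z, y - half:y + half, x - half:x + half]
    coronal = vol[z - half:z + half, y, x - half:x + half]
    sagittal = vol[z - half:z + half, y - half:y + half, x]
    return np.stack([axial, coronal, sagittal])     # (3, 32, 32) CNN input

vol = np.random.default_rng(0).random((64, 128, 128))   # CT volume (z, y, x)
patches = orthogonal_patches(vol, z=32, y=64, x=64)
print(patches.shape)   # (3, 32, 32): one channel per anatomical plane
```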

SpineNet: Automatically Pinpointing Classification Evidence in Spinal MRIs

We describe a method to automatically predict radiological scores in spinal Magnetic Resonance Images (MRIs). Furthermore, we also identify and localize the pathologies that are the reasons for these scores. We term these pathological regions the “evidence hotspots”. Our contributions are two-fold: (i) a Convolutional Neural Network (CNN) architecture and training scheme to predict multiple radiological scores on multi-slice sagittal MRIs. The scheme uses multi-task CNN training with augmentation, and handles the class imbalance common in medical classification tasks. (ii) the prediction of a heat-map of evidence hotspots for each score. For both of these, all that is required for training is the class label of the disc or vertebrae; no stronger supervision (such as slice labels) is needed. We report state-of-the-art and near-human performances across multiple radiological scorings including: Pfirrmann grading, disc narrowing, endplate defects, and marrow changes.

Amir Jamaludin, Timor Kadir, Andrew Zisserman

A Deep Learning Approach for Semantic Segmentation in Histology Tissue Images

To make reliable diagnosis, pathologists often need to identify certain special regions in medical images. In inflammatory bowel disease (IBD) diagnosis via histology tissue image examination, muscle regions are known to have no immune cell infiltration, and thus are ignored by pathologists. Also, messy regions (e.g., due to distortion and poor staining) are low in diagnostic yield. Hence, excluding muscle and messy regions to focus on vital regions is crucial for accurate diagnosis of IBD. In this paper, we propose a novel deep neural network based on fully convolutional networks (FCN) to identify muscle and messy regions, in an end-to-end fashion. First, we address the challenge of having limited medical training data, for training our deep neural network (a common problem for medical image processing, which may impede the application of the powerful deep learning method). Second, to deal with target regions of largely different sizes and arbitrary shapes, our deep neural network explores multi-scale information and structural information. Experimental results on clinical images show that our approach outperforms the state-of-the-art FCN for semantic segmentation of muscle and messy regions. Our approach may be readily extended to identify other types of regions in a variety of medical imaging applications.

Jiazhuo Wang, John D. MacKenzie, Rageshree Ramachandran, Danny Z. Chen

Spatial Clockwork Recurrent Neural Network for Muscle Perimysium Segmentation

Accurate segmentation of the perimysium plays an important role in the early diagnosis of many muscle diseases, because many of these diseases involve inflammation of the perimysium. However, it remains a challenging task due to the complex appearance of the perimysium morphology and its ambiguity with the background area. The muscle perimysium also exhibits a strong structure spanning the entire tissue, which makes it difficult for current local patch-based methods to capture this long-range context information. In this paper, we propose a novel spatial clockwork recurrent neural network (spatial CW-RNN) to address these issues. Specifically, we split the entire image into a set of non-overlapping image patches, and the semantic dependencies among them are modeled by the proposed spatial CW-RNN. Our method directly takes the 2D structure of the image into consideration and is capable of encoding the context information of the entire image into the local representation of each patch. Meanwhile, we leverage structured regression to assign one prediction mask, rather than a single class label, to each local patch, which enables both efficient training and testing. We extensively test our method on perimysium segmentation using digitized muscle microscopy images. Experimental results demonstrate the superiority of the novel spatial CW-RNN over existing state-of-the-art methods.

Yuanpu Xie, Zizhao Zhang, Manish Sapkota, Lin Yang

Automated Age Estimation from Hand MRI Volumes Using Deep Learning

Biological age (BA) estimation from radiologic data is an important topic in clinical medicine, e.g. in determining endocrinological diseases or planning paediatric orthopaedic surgeries, while in legal medicine it is employed to approximate chronological age. In this work, we propose the use of deep convolutional neural networks (DCNN) for automatic BA estimation from hand MRI volumes, inspired by the way radiologists visually perform age estimation using established staging schemes that follow physical maturation. In our results we outperform the state of the art automatic BA estimation method, achieving a mean error between estimated and ground truth BA of $$0.36\,\pm \,0.30$$ years, which is in line with radiologists doing visual BA estimation.

Darko Štern, Christian Payer, Vincent Lepetit, Martin Urschler

Real-Time Standard Scan Plane Detection and Localisation in Fetal Ultrasound Using Fully Convolutional Neural Networks

Fetal mid-pregnancy scans are typically carried out according to fixed protocols. Accurate detection of abnormalities and correct biometric measurements hinge on the correct acquisition of clearly defined standard scan planes. Locating these standard planes requires a high level of expertise. However, there is a worldwide shortage of expert sonographers. In this paper, we consider a fully automated system based on convolutional neural networks which can detect twelve standard scan planes as defined by the UK fetal abnormality screening programme. The network design allows real-time inference and can be naturally extended to provide an approximate localisation of the fetal anatomy in the image. Such a framework can be used to automate or assist with scan plane selection, or for the retrospective retrieval of scan planes from recorded videos. The method is evaluated on a large database of 1003 volunteer mid-pregnancy scans. We show that standard planes acquired in a clinical scenario are robustly detected with a precision and recall of 69 % and 80 %, which is superior to the current state-of-the-art. Furthermore, we show that it can retrospectively retrieve correct scan planes with an accuracy of 71 % for cardiac views and 81 % for non-cardiac views.

Christian F. Baumgartner, Konstantinos Kamnitsas, Jacqueline Matthew, Sandra Smith, Bernhard Kainz, Daniel Rueckert

3D Deep Learning for Multi-modal Imaging-Guided Survival Time Prediction of Brain Tumor Patients

High-grade glioma is the most aggressive and severe brain tumor, leading to the death of almost 50 % of patients in 1–2 years. Thus, accurate prognosis for glioma patients would provide essential guidelines for their treatment planning. Conventional survival prediction generally utilizes clinical information and limited handcrafted features from magnetic resonance images (MRI), which is often time-consuming, laborious and subjective. In this paper, we propose using deep learning frameworks to automatically extract features from multi-modal preoperative brain images (i.e., T1 MRI, fMRI and DTI) of high-grade glioma patients. Specifically, we adopt 3D convolutional neural networks (CNNs) and also propose a new network architecture for using multi-channel data and learning supervised features. Along with the pivotal clinical features, we finally train a support vector machine to predict whether the patient has a long or short overall survival (OS) time. Experimental results demonstrate that our methods can achieve an accuracy as high as 89.9 %. We also find that the learned features from fMRI and DTI play more important roles in accurately predicting the OS time, which provides valuable insights into functional neuro-oncological applications.

Dong Nie, Han Zhang, Ehsan Adeli, Luyan Liu, Dinggang Shen

From Local to Global Random Regression Forests: Exploring Anatomical Landmark Localization

State-of-the-art anatomical landmark localization algorithms pair local Random Forest (RF) detection with disambiguation of locally similar structures by including high-level knowledge about relative landmark locations. In this work, we pursue the question of how much high-level knowledge is needed, in addition to a single landmark localization RF, to implicitly model the global configuration of multiple, potentially ambiguous landmarks. We further propose a novel RF localization algorithm that distinguishes locally similar structures by automatically identifying them, exploring the back-projection of the response from accurate local RF predictions. In our experiments, we show that this approach achieves competitive results in single- and multi-landmark localization when applied to 2D hand radiographic and 3D teeth MRI data sets. Additionally, when combined with a simple Markov Random Field model, we are able to outperform state-of-the-art methods.

Darko Štern, Thomas Ebner, Martin Urschler

Regressing Heatmaps for Multiple Landmark Localization Using CNNs

We explore the applicability of deep convolutional neural networks (CNNs) for multiple landmark localization in medical image data. Exploiting the idea of regressing heatmaps for individual landmark locations, we investigate several fully convolutional 2D and 3D CNN architectures by training them in an end-to-end manner. We further propose a novel SpatialConfiguration-Net architecture that effectively combines accurate local appearance responses with spatial landmark configurations that model anatomical variation. Evaluation of our different architectures on 2D and 3D hand image datasets shows that heatmap regression based on CNNs achieves state-of-the-art landmark localization performance, with SpatialConfiguration-Net being robust even in the case of limited amounts of training data.

Christian Payer, Darko Štern, Horst Bischof, Martin Urschler
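
The heatmap regression targets themselves are simple to construct: one Gaussian blob per landmark, which the CNN is then trained to reproduce and from which coordinates are read off as the per-channel argmax. A minimal numpy sketch with illustrative sizes and sigma:

```python
# Minimal sketch of building Gaussian heatmap regression targets: one 2D
# map per landmark, peaked at the annotated coordinate.
import numpy as np

def landmark_heatmaps(landmarks, height, width, sigma=3.0):
    """landmarks: (N, 2) array of (row, col) coordinates."""
    rows, cols = np.mgrid[0:height, 0:width]
    maps = [
        np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2.0 * sigma ** 2))
        for r, c in landmarks
    ]
    return np.stack(maps)      # (N, height, width), one channel per landmark

hm = landmark_heatmaps(np.array([[40, 30], [80, 90]]), 128, 128)
print(hm.shape, hm.max())      # a CNN regresses these maps; landmarks are
                               # then read off as the per-channel argmax
```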

Self-Transfer Learning for Weakly Supervised Lesion Localization

Recent advances in deep learning have achieved remarkable performance in various computer vision tasks, including weakly supervised object localization. Weakly supervised object localization is practically useful since it does not require fine-grained annotations. Current approaches overcome the difficulties of weak supervision via transfer learning from models pre-trained on large-scale general image collections such as ImageNet. However, they cannot be utilized in the medical image domain, where such priors do not exist. In this work, we present a novel weakly supervised learning framework for lesion localization named self-transfer learning (STL). STL jointly optimizes both classification and localization networks to help the localization network focus on correct lesions without any type of prior. We evaluate the STL framework on chest X-rays and mammograms, and achieve significantly better localization performance compared to previous weakly supervised localization approaches.

Sangheum Hwang, Hyo-Eun Kim

Automatic Cystocele Severity Grading in Ultrasound by Spatio-Temporal Regression

Cystocele is a common disease in women. Accurate assessment of cystocele severity is very important for treatment options. Transperineal ultrasound (US) has recently emerged as an alternative tool for cystocele grading. Cystocele severity is usually evaluated with the manual measurement of the maximal descent of the bladder (MDB) relative to the symphysis pubis (SP) during the Valsalva maneuver. However, this process is time-consuming and operator-dependent. In this study, we propose an automatic scheme for cystocele grading from transperineal US video. A two-layer spatio-temporal regression model is proposed to identify the middle axis and lower tip of the SP, and to segment the bladder, which are essential tasks for the measurement of the MDB. Both appearance and context features are extracted in the spatio-temporal domain to help the anatomy detection. Experimental results on 85 transperineal US videos show that our method significantly outperforms the state-of-the-art regression method.

Dong Ni, Xing Ji, Yaozong Gao, Jie-Zhi Cheng, Huifang Wang, Jing Qin, Baiying Lei, Tianfu Wang, Guorong Wu, Dinggang Shen

Graphical Modeling of Ultrasound Propagation in Tissue for Automatic Bone Segmentation

Bone surface identification and localization in ultrasound have been widely studied in the contexts of computer-assisted orthopedic surgeries, trauma diagnosis, and post-operative follow-up. Nevertheless, the (semi-)automatic bone surface segmentation methods proposed so far either require manual interaction or complex parametrizations, while failing to deliver accuracy fit for clinical purposes. In this paper, we utilize the physics of ultrasound propagation in human tissue by encoding this in a factor graph formulation for an automatic bone surface segmentation approach. We comparatively evaluate our method on annotated in-vivo ultrasound images of bones from several anatomical locations. Our method yields a root-mean-square error of 0.59 mm, far superior to state-of-the-art approaches.

Firat Ozdemir, Ece Ozkan, Orcun Goksel

Bayesian Image Quality Transfer

Image quality transfer (IQT) aims to enhance clinical images of relatively low quality by learning and propagating high-quality structural information from expensive or rare data sets. However, the original framework gives no indication of confidence in its output, which is a significant barrier to adoption in clinical practice and downstream processing. In this article, we present a general Bayesian extension of IQT which enables efficient and accurate quantification of uncertainty, providing users with an essential prediction of the accuracy of enhanced images. We demonstrate the efficacy of the uncertainty quantification through super-resolution of diffusion tensor images of healthy and pathological brains. In addition, the new method displays improved performance over the original IQT and standard interpolation techniques in both reconstruction accuracy and robustness to anomalies in input images.

Ryutaro Tanno, Aurobrata Ghosh, Francesco Grussu, Enrico Kaden, Antonio Criminisi, Daniel C. Alexander
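
While the paper's Bayesian treatment is specific to IQT, the general idea of exposing predictive uncertainty can be illustrated with the spread across a random forest's trees (IQT's original regressor family). A hypothetical sketch on synthetic patch data:

```python
# Minimal sketch: per-prediction uncertainty from the spread over a random
# forest's trees. A stand-in for the paper's Bayesian IQT, not its method.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 27))     # low-res 3x3x3 patch, flattened
y = X @ rng.standard_normal(27) + 0.1 * rng.standard_normal(500)

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
X_new = rng.standard_normal((5, 27))
per_tree = np.stack([t.predict(X_new) for t in forest.estimators_])
mean, std = per_tree.mean(axis=0), per_tree.std(axis=0)
for m, s in zip(mean, std):
    print("enhanced value %.2f +/- %.2f" % (m, s))   # flag high-std outputs
```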

Wavelet Appearance Pyramids for Landmark Detection and Pathology Classification: Application to Lumbar Spinal Stenosis

Appearance representation and feature extraction of anatomy or anatomical features is a key step for segmentation and classification tasks. We focus on an advanced appearance model in which an object is decomposed into pyramidal complementary channels, and each channel is represented by a part-based model. We apply it to landmark detection and pathology classification on the problem of lumbar spinal stenosis. The performance is evaluated on 200 routine clinical scans with varied pathologies. Experimental results show an improvement on both tasks in comparison with other appearance models. We achieve a robust landmark detection performance with average point-to-boundary distances lower than 2 pixels, and image-level anatomical classification with accuracies around $$85\,\%$$.

Qiang Zhang, Abhir Bhalerao, Caron Parsons, Emma Helm, Charles Hutchinson

A Learning-Free Approach to Whole Spine Vertebra Localization in MRI

In recent years, analysis of magnetic resonance images of the spine has gained considerable interest, with vertebra localization being a key step for higher-level analysis. Approaches based on trained appearance, which are the de facto standard, may be inappropriate for certain tasks, because processing usually takes several minutes or training data is unavailable. Learning-free approaches have yet to show their competitiveness for whole-spine localization. Our work fills this gap. We combine a fast engineered detector with a novel vertebrae appearance similarity concept. The latter can compete with trained appearance, which we show on a data set of 64 $$T_1$$- and 64 $$T_2$$-weighted images. Our detection took $$27.7 \pm 3.78$$ s with a detection rate of 96.0 % and a distance to ground truth of $$3.45 \pm 2.2$$ mm, which is well below the slice thickness.

Marko Rak, Klaus-Dietz Tönnies

Automatic Quality Control for Population Imaging: A Generic Unsupervised Approach

Population imaging studies have opened new opportunities for a comprehensive characterization of disease phenotypes by providing large-scale databases. A major challenge is the ability to process this vast amount of data automatically and accurately, and hence to develop streamlined image analysis pipelines that are robust to the varying image quality. This requires a generic and fully unsupervised quality assessment technique. However, existing methods are designed for specific types of artefacts and cannot detect incidental unforeseen artefacts. Furthermore, they require manual annotations, which is a demanding task, prone to error, and in some cases ambiguous. In this study, we propose a generic unsupervised approach to simultaneously detect and localize artefacts. We learn the normal image properties from a large dataset by introducing a new image representation approach based on an optimal coverage of images with the learned visual dictionary. The artefacts are then detected and localized as outliers. We tested our method on a femoral DXA dataset with 1300 scans. The sensitivity and specificity are 81.82 % and 94.12 %, respectively.

Mohsen Farzi, Jose M. Pozo, Eugene V. McCloskey, J. Mark Wilkinson, Alejandro F. Frangi
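
The dictionary-plus-outlier recipe can be approximated with standard components: learn a visual vocabulary from patches with k-means, describe each image by its distances to the vocabulary, and flag outliers with an unsupervised detector. The sketch below uses synthetic data and IsolationForest in place of the paper's optimal-coverage formulation.

```python
# Minimal sketch: visual dictionary from patches, per-image descriptors,
# and unsupervised outlier detection. Synthetic stand-in data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
patches = rng.standard_normal((5000, 64))       # 8x8 patches from many scans
dictionary = KMeans(n_clusters=50, n_init=4, random_state=0).fit(patches)

def describe(image_patches):
    """Per-image descriptor: mean distance to each visual word."""
    d = dictionary.transform(image_patches)     # (n_patches, 50) distances
    return d.mean(axis=0)

images = [rng.standard_normal((100, 64)) for _ in range(200)]
X = np.vstack([describe(p) for p in images])
scores = IsolationForest(random_state=0).fit(X).decision_function(X)
print("most artefact-like image index:", int(np.argmin(scores)))
```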

A Cross-Modality Neural Network Transform for Semi-automatic Medical Image Annotation

There is a pressing need in the medical imaging community to build large-scale datasets that are annotated with semantic descriptors. Given the cost of expert-produced annotations, we propose an automatic methodology to produce semantic descriptors for images. These can then be used as weakly labeled instances or reviewed and corrected by clinicians. Our solution is in the form of a neural network that maps a given image to a new space formed by a large number of text paragraphs written by a human expert about similar, but different, images. We then extract semantic descriptors from the text paragraphs closest to the output of the transform network to describe the input image. We used deep learning to learn mappings between images/texts and their corresponding fixed-size spaces, but a shallow network as the transform between the image and text spaces. This limits the complexity of the transform model and reduces the amount of data, in the form of image and text pairs, needed for training it. We report promising results for the proposed model in automatic descriptor generation in the case of Doppler images of cardiac valves and show that the system catches up to 91 % of the disease instances and 77 % of disease severity modifiers.

Mehdi Moradi, Yufan Guo, Yaniv Gur, Mohammadreza Negahdar, Tanveer Syeda-Mahmood

Sub-category Classifiers for Multiple-instance Learning and Its Application to Retinal Nerve Fiber Layer Visibility Classification

We propose a novel multiple instance learning method to assess the visibility (visible/not visible) of the retinal nerve fiber layer (RNFL) in fundus camera images. Using only image-level labels, our approach learns to classify the images as well as to localize the RNFL visible regions. We transform the original feature space to a discriminative subspace, and learn a region-level classifier in that subspace. We propose a margin-based loss function to jointly learn this subspace and the region-level classifier. Experiments with an RNFL dataset containing 576 images annotated by two experienced ophthalmologists give agreements (kappa values) of 0.65 and 0.58 respectively, with an inter-annotator agreement of 0.62. Note that our system gives higher agreement with the more experienced annotator. Comparative tests with three public datasets (MESSIDOR and DR for diabetic retinopathy, UCSB for breast cancer) show improved performance over the state-of-the-art.

Siyamalan Manivannan, Caroline Cobb, Stephen Burgess, Emanuele Trucco

Vision-Based Classification of Developmental Disorders Using Eye-Movements

This paper proposes a system for fine-grained classification of developmental disorders via measurements of individuals’ eye-movements using multi-modal visual data. While the system is engineered to solve a psychiatric problem, we believe the underlying principles and general methodology will be of interest not only to psychiatrists but to researchers and engineers in medical machine vision. The idea is to build features from different visual sources that capture information not contained in either modality. Using an eye-tracker and a camera in a setup involving two individuals speaking, we build temporal attention features that describe the semantic location that one person is focused on relative to the other person’s face. In our clinical context, these temporal attention features describe a patient’s gaze on finely discretized regions of an interviewing clinician’s face, and are used to classify their particular developmental disorder.

Guido Pusiol, Andre Esteva, Scott S. Hall, Michael Frank, Arnold Milstein, Li Fei-Fei

Scalable Unsupervised Domain Adaptation for Electron Microscopy

While Machine Learning algorithms are key to automating organelle segmentation in large EM stacks, they require annotated data, which is hard to come by in sufficient quantities. Furthermore, images acquired from one part of the brain are not always representative of another due to the variability in the acquisition and staining processes. Therefore, a classifier trained on the first may perform poorly on the second and additional annotations may be required. To remove this cumbersome requirement, we introduce an Unsupervised Domain Adaptation approach that can leverage annotated data from one brain area to train a classifier that applies to another for which no labeled data is available. To this end, we establish noisy visual correspondences between the two areas and develop a Multiple Instance Learning approach to exploiting them. We demonstrate the benefits of our approach over several baselines for the purpose of synapse and mitochondria segmentation in EM stacks of different parts of mouse brains.

Róger Bermúdez-Chacón, Carlos Becker, Mathieu Salzmann, Pascal Fua

Automated Diagnosis of Neural Foraminal Stenosis Using Synchronized Superpixels Representation

Neural foramina stenosis (NFS), as a common spine disease, affects $$80\,\%$$ of people. Clinical diagnosis by physicians’ manual segmentation is inefficient and laborious. Automated diagnosis is highly desirable but faces the class-overlapping problem derived from diverse shapes and sizes. In this paper, a fully automated diagnosis approach is proposed for NFS. It is based on a newly proposed synchronized superpixels representation (SSR) model, where a highly discriminative feature space is obtained for accurately and easily classifying neural foramina into normal and stenosed classes. To achieve this, class labels (0: normal, 1: stenosed) are integrated to guide a manifold alignment which correlates images from the same class, so that the intra-class difference is reduced and the inter-class margin is maximized. The overall result reaches a high accuracy ($$98.52\,\%$$) on 110 mid-sagittal MR spine images collected from 110 subjects. Hence, with our approach, an efficient and accurate clinical tool is provided to greatly reduce the burden on physicians and ensure the timely treatment of NFS.

Xiaoxu He, Yilong Yin, Manas Sharma, Gary Brahm, Ashley Mercado, Shuo Li

Automated Segmentation of Knee MRI Using Hierarchical Classifiers and Just Enough Interaction Based Learning: Data from Osteoarthritis Initiative

We present a fully automated learning-based approach for segmenting knee cartilage in the presence of osteoarthritis (OA). The algorithm employs a hierarchical set of two random forest classifiers. The first is a neighborhood approximation forest, the output probability map of which is utilized as a feature set for the second random forest (RF) classifier. The output probabilities of the hierarchical approach are used as cost functions in a Layered Optimal Graph Segmentation of Multiple Objects and Surfaces (LOGISMOS). In this work, we highlight a novel post-processing interaction called just-enough interaction (JEI), which enables quick and accurate generation of a large set of training examples. Disjoint sets of 15 and 13 subjects were used for training, and the method was tested on another disjoint set of 53 knee datasets. All images were acquired using a double echo steady state (DESS) MRI sequence and are from the osteoarthritis initiative (OAI) database. Segmentation performance using the learning-based cost function showed a significant reduction in segmentation errors ($$p< 0.05$$) in comparison with conventional gradient-based cost functions.

Satyananda Kashyap, Ipek Oguz, Honghai Zhang, Milan Sonka

Dynamically Balanced Online Random Forests for Interactive Scribble-Based Segmentation

Interactive scribble-and-learning-based segmentation is attractive for its good performance and reduced amount of user interaction. Scribbles for foreground and background are often imbalanced, and with the arrival of new scribbles, the imbalance ratio may change largely. Failing to deal with imbalanced training data and a changing imbalance ratio may lead to decreased sensitivity and accuracy for segmentation. We propose a generic Dynamically Balanced Online Random Forest (DyBa ORF) to deal with these problems, combining a dynamically balanced online Bagging method with a tree growing and shrinking strategy to update the random forests. We validated DyBa ORF on UCI machine learning data sets and applied it to two different clinical applications: 2D segmentation of the placenta from fetal MRI and of adult lungs from radiographic images. Experiments show that it outperforms traditional ORF in dealing with imbalanced data with a changing imbalance ratio, while maintaining a comparable accuracy and a higher efficiency compared with its offline counterpart. Our results demonstrate that DyBa ORF is more suitable than existing ORF for learning-based interactive image segmentation.

Guotai Wang, Maria A. Zuluaga, Rosalind Pratt, Michael Aertsen, Tom Doel, Maria Klusmann, Anna L. David, Jan Deprest, Tom Vercauteren, Sébastien Ourselin

Orientation-Sensitive Overlap Measures for the Validation of Medical Image Segmentations

Validation is a key concept in the development and assessment of medical image segmentation algorithms. However, the proliferation of modern, non-deterministic segmentation algorithms has not been met by an equivalent improvement in validation strategies. In this paper, we briefly examine the state of the art in validation, and propose an improved validation method for non-deterministic segmentations, showing that it improves validation precision and accuracy on both synthetic and clinical sets, compared to more traditional (but still widely used) methods and the state of the art.

Tasos Papastylianou, Erica Dall’ Armellina, Vicente Grau
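
For reference, the standard, orientation-blind overlap measure that this paper extends is the Dice coefficient; a minimal numpy implementation on binary masks:

```python
# Plain Dice overlap between two binary segmentation masks.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

seg = np.zeros((64, 64), bool); seg[20:40, 20:40] = True
ref = np.zeros((64, 64), bool); ref[22:42, 22:42] = True
print("Dice = %.3f" % dice(seg, ref))
```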

High-Throughput Glomeruli Analysis of CT Kidney Images Using Tree Priors and Scalable Sparse Computation

Kidney-related diseases have incrementally become one major cause of death. Glomeruli are the physiological units in the kidney responsible for blood filtration. Therefore, their statistics, including number and volume, directly describe the efficiency and health state of the kidney. Stereology is the current quantification method, relying on histological sectioning, sampling and further 2D analysis, being laborious and sample-destructive. New micro-Computed Tomography ($$\mu $$CT) imaging protocols resolve structures down to the capillary level. However, large-scale glomeruli analysis remains challenging due to object identifiability, allotted memory resources and computational time. We present a methodology for high-throughput glomeruli analysis that incorporates physiological a-priori information relating the kidney vasculature with estimates of glomeruli counts. We propose an effective sampling strategy that exploits scalable sparse segmentation of kidney regions for refined estimates of both glomeruli count and volume. We evaluated the proposed approach on a database of $$\mu $$CT datasets, yielding a segmentation accuracy comparable to an exhaustive supervised learning method. Furthermore, we show the ability of the proposed sampling strategy to result in improved estimates of glomeruli counts and volume without requiring an exhaustive segmentation of the $$\mu $$CT image. This approach can potentially be applied to analogous anatomical organizations, such as the quantification of alveoli in the lungs.

Carlos Correa Shokiche, Philipp Baumann, Ruslan Hlushchuk, Valentin Djonov, Mauricio Reyes

A Surface Patch-Based Segmentation Method for Hippocampal Subfields

Several neurological disorders are associated with hippocampal pathology. As changes may be localized to specific subfields or span across different subfields, accurate subfield segmentation may improve non-invasive diagnostics. We propose an automated subfield segmentation procedure, which combines surface-based processing with a patch-based template library and feature matching. Validation experiments in 25 healthy individuals showed high segmentation accuracy (Dice >82 % across all subfields) and robustness to variations in the template library size. Applying the algorithm to a cohort of patients with temporal lobe epilepsy and hippocampal sclerosis, we correctly lateralized the seizure focus in >90 % of cases. This compares advantageously to classifiers relying on volumes retrieved from other state-of-the-art algorithms.

Benoit Caldairou, Boris C. Bernhardt, Jessie Kulaga-Yoskovitz, Hosung Kim, Neda Bernasconi, Andrea Bernasconi

Automatic Lymph Node Cluster Segmentation Using Holistically-Nested Neural Networks and Structured Optimization in CT Images

Lymph node segmentation is an important yet challenging problem in medical image analysis. The presence of enlarged lymph nodes (LNs) signals the onset or progression of a malignant disease or infection. In the thoracoabdominal (TA) body region, neighboring enlarged LNs often spatially collapse into “swollen” lymph node clusters (LNCs) (up to 9 LNs in our dataset). Accurate segmentation of TA LNCs is complicated by the noticeably poor intensity and texture contrast among neighboring LNs and surrounding tissues, and has not been addressed in previous work. This paper presents a novel approach to TA LNC segmentation that combines holistically-nested neural networks (HNNs) and structured optimization (SO). Two HNNs, built upon recent fully convolutional networks (FCNs) and deeply supervised networks (DSNs), are trained to learn the LNC appearance (HNN-A) or contour (HNN-C) probabilistic output maps, respectively. Like an FCN, each HNN produces class label maps at the same resolution as the input image. The HNN predictions for LNC appearance and contour cues are then formulated into the unary and pairwise terms of a conditional random field (CRF), which is subsequently solved using one of three different SO methods: dense CRF, graph cuts, and boundary neural fields (BNF). BNF yields the highest quantitative results: its mean Dice coefficient between segmented and ground truth LN volumes is 82.1 % ± 9.6 %, compared to 73.0 % ± 17.6 % for HNN-A alone. The LNC relative volume ($$cm^3$$) difference is 13.7 % ± 13.1 %, a promising result for the development of LN imaging biomarkers based on volumetric measurements.

Isabella Nogues, Le Lu, Xiaosong Wang, Holger Roth, Gedas Bertasius, Nathan Lay, Jianbo Shi, Yohannes Tsehay, Ronald M. Summers
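
The fusion of the two HNN outputs can be pictured as a CRF energy in which the appearance map drives the unary terms and the contour map modulates the pairwise terms. The sketch below, with an assumed Potts-style pairwise weight `w_pair`, evaluates such an energy for a given labeling; the actual solvers used in the paper (dense CRF, graph cuts, BNF) are omitted.

```python
import numpy as np

def crf_energy(labels, p_app, p_contour, w_pair=2.0):
    """Hedged sketch of the structured-optimization step (not the paper's
    exact formulation): the HNN-A appearance map supplies unary terms and
    the HNN-C contour map attenuates the pairwise Potts penalty, so label
    changes are cheap where a boundary is predicted."""
    eps = 1e-6
    # Unary: negative log-likelihood from the appearance probability map.
    unary = np.where(labels == 1,
                     -np.log(p_app + eps),
                     -np.log(1.0 - p_app + eps)).sum()
    # Pairwise: Potts penalty on 4-neighbours, weakened across contours.
    changed = labels[1:, :] != labels[:-1, :]
    contour = np.maximum(p_contour[1:, :], p_contour[:-1, :])
    pairwise = (w_pair * (1.0 - contour) * changed).sum()
    changed = labels[:, 1:] != labels[:, :-1]
    contour = np.maximum(p_contour[:, 1:], p_contour[:, :-1])
    pairwise += (w_pair * (1.0 - contour) * changed).sum()
    return unary + pairwise

labels = np.zeros((8, 8), dtype=int); labels[2:6, 2:6] = 1
p_app = labels * 0.8 + 0.1            # toy appearance map
p_contour = np.zeros((8, 8))          # toy contour map
print(crf_energy(labels, p_app, p_contour))
```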

Evaluation-Oriented Training via Surrogate Metrics for Multiple Sclerosis Segmentation

In current approaches to automatic segmentation of multiple sclerosis (MS) lesions, the segmentation model is not optimized with respect to all relevant evaluation metrics at once, leading to unspecific training. One obstacle is that the computation of the relevant metrics is three-dimensional (3D), and the high computational cost of 3D metrics makes them impractical as learning targets for iterative training. In this paper, we propose an evaluation-oriented training strategy that employs cheap 2D metrics as surrogates for expensive 3D metrics. We optimize a simple multilayer perceptron (MLP) network as the segmentation model, study the fidelity and efficiency of the surrogate 2D metrics, and compare oriented training to unspecific training. The results show that oriented training produces a better balance between metrics, surpassing unspecific training on average. The segmentation quality obtained with a simple MLP through oriented training is comparable to the state of the art, including a recent work using a deep neural network, a more complex model. By optimizing all relevant evaluation metrics at once, oriented training can improve MS lesion segmentation.

Michel M. Santos, Paula R. B. Diniz, Abel G. Silva-Filho, Wellington P. Santos
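
The surrogate idea can be illustrated by replacing volumetric Dice with the average of per-slice 2D Dice scores, which are much cheaper to evaluate inside a training loop. This is a hedged sketch of one plausible surrogate; the paper's exact choice of 2D metrics may differ.

```python
import numpy as np

def dice2d(a, b, eps=1e-8):
    """Standard 2D Dice overlap between two binary masks."""
    return (2.0 * np.logical_and(a, b).sum() + eps) / (a.sum() + b.sum() + eps)

def surrogate_dice(pred, gt, axis=0):
    """Average of cheap per-slice 2D Dice scores, used as a training-time
    surrogate for the expensive volumetric 3D Dice; empty slices are skipped."""
    scores = []
    for i in range(pred.shape[axis]):
        p = np.take(pred, i, axis=axis)
        g = np.take(gt, i, axis=axis)
        if p.any() or g.any():
            scores.append(dice2d(p, g))
    return float(np.mean(scores)) if scores else 1.0

pred = np.zeros((4, 16, 16), bool); pred[1:3, 4:10, 4:10] = True
gt = np.zeros((4, 16, 16), bool); gt[1:3, 5:11, 5:11] = True
print(surrogate_dice(pred, gt))
```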

Corpus Callosum Segmentation in Brain MRIs via Robust Target-Localization and Joint Supervised Feature Extraction and Prediction

Accurate segmentation of the mid-sagittal corpus callosum as captured in magnetic resonance images is an important step in many clinical research studies for various neurological disorders. This task can be challenging, however, especially in clinical studies, such as those involving multiple sclerosis patients, whose brain structures may have undergone significant changes, rendering accurate registration, and hence (multi-)atlas-based segmentation algorithms, inapplicable. Furthermore, the MRI scans to be segmented often vary significantly in image quality, rendering many generic unsupervised segmentation methods insufficient, as demonstrated in a recent work. In this paper, we hypothesize that adopting a supervised approach to the segmentation task may bring a breakthrough in performance. By employing a discriminative learning framework, our method automatically learns a set of latent features useful for identifying the target structure that prove to generalize well across various datasets, as our experiments demonstrate. Our evaluations, conducted on four large datasets collected from different sources and totaling 2,033 scans, demonstrate that our method achieves an average Dice similarity score of 0.93 on test sets when the models were trained on at most 300 images, while the top-performing unsupervised method only achieves an average Dice score of 0.77.

Lisa Y. W. Tang, Tom Brosch, XingTong Liu, Youngjin Yoo, Anthony Traboulsee, David Li, Roger Tam

Automatic Liver and Lesion Segmentation in CT Using Cascaded Fully Convolutional Neural Networks and 3D Conditional Random Fields

Automatic segmentation of the liver and its lesions is an important step towards deriving quantitative biomarkers for accurate clinical diagnosis and computer-aided decision support systems. This paper presents a method to automatically segment the liver and its lesions in abdominal CT images using cascaded fully convolutional neural networks (CFCNs) and dense 3D conditional random fields (CRFs). We train and cascade two FCNs for a combined segmentation of the liver and its lesions. In the first step, we train an FCN to segment the liver as the ROI input for a second FCN, which solely segments lesions from the predicted liver ROIs of the first step. We refine the segmentations of the CFCN using a dense 3D CRF that accounts for both spatial coherence and appearance. CFCN models were trained with 2-fold cross-validation on the abdominal CT dataset 3DIRCAD comprising 15 hepatic tumor volumes. Our results show that CFCN-based semantic liver and lesion segmentation achieves Dice scores over $$94\,\%$$ for the liver with computation times below 100 s per volume. We experimentally demonstrate the robustness of the proposed method as a decision support system with high accuracy and speed for use in daily clinical routine.

Patrick Ferdinand Christ, Mohamed Ezzeldin A. Elshaer, Florian Ettlinger, Sunil Tatavarty, Marc Bickel, Patrick Bilic, Markus Rempfler, Marco Armbruster, Felix Hofmann, Melvin D’Anastasi, Wieland H. Sommer, Seyed-Ahmad Ahmadi, Bjoern H. Menze
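
The cascade itself is simple to express in code. In the sketch below, `liver_net` and `lesion_net` are hypothetical callables standing in for the two trained FCNs (each mapping a volume to a probability map of the same shape); the 3D CRF refinement step is omitted.

```python
import numpy as np

def cascaded_segmentation(volume, liver_net, lesion_net):
    """Minimal sketch of the two-step cascade (CFCN idea):
    step 1 segments the liver, step 2 only sees the liver ROI."""
    liver_prob = liver_net(volume)
    liver_mask = liver_prob > 0.5
    # Step 2 operates on intensities inside the predicted liver ROI only.
    roi = np.where(liver_mask, volume, 0.0)
    lesion_prob = lesion_net(roi)
    # Lesions are constrained to lie within the liver mask.
    lesion_mask = np.logical_and(lesion_prob > 0.5, liver_mask)
    return liver_mask, lesion_mask

vol = np.random.rand(16, 64, 64)
fake_liver = lambda v: (v > 0.4).astype(float)   # hypothetical stand-ins for
fake_lesion = lambda v: (v > 0.9).astype(float)  # the trained FCNs
liver, lesion = cascaded_segmentation(vol, fake_liver, fake_lesion)
print(liver.sum(), lesion.sum())
```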

3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation

This paper introduces a network for volumetric segmentation that learns from sparsely annotated volumetric images. We outline two attractive use cases of this method: (1) In a semi-automated setup, the user annotates some slices in the volume to be segmented. The network learns from these sparse annotations and provides a dense 3D segmentation. (2) In a fully-automated setup, we assume that a representative, sparsely annotated training set exists. Trained on this data set, the network densely segments new volumetric images. The proposed network extends the previous u-net architecture from Ronneberger et al. by replacing all 2D operations with their 3D counterparts. The implementation performs on-the-fly elastic deformations for efficient data augmentation during training. It is trained end-to-end from scratch, i.e., no pre-trained network is required. We test the performance of the proposed method on a complex, highly variable 3D structure, the Xenopus kidney, and achieve good results for both use cases.

Özgün Çiçek, Ahmed Abdulkadir, Soeren S. Lienkamp, Thomas Brox, Olaf Ronneberger
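
The architectural change is essentially mechanical: every 2D layer becomes its 3D counterpart. A minimal PyTorch sketch, with illustrative channel sizes (not the paper's exact configuration):

```python
import torch
import torch.nn as nn

# Sketch of the core 3D U-Net idea: the u-net building block with every
# 2D operation replaced by its 3D counterpart (Conv2d -> Conv3d, etc.).
def block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

down = nn.MaxPool3d(2)                        # 2x2 pooling becomes 2x2x2
up = nn.ConvTranspose3d(64, 32, 2, stride=2)  # 2D up-convolution becomes 3D

x = torch.randn(1, 1, 32, 64, 64)  # (batch, channel, depth, height, width)
print(down(block(1, 32)(x)).shape)            # encoder path halves all 3 axes
print(up(torch.randn(1, 64, 8, 16, 16)).shape)  # decoder path doubles them
```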

Model-Based Segmentation of Vertebral Bodies from MR Images with 3D CNNs

We propose an automated method for supervised segmentation of vertebral bodies (VBs) from three-dimensional (3D) magnetic resonance (MR) spine images that is based on coupling deformable models with convolutional neural networks (CNNs). We designed a 3D CNN architecture that learns the appearance from a training set of VBs to generate 3D spatial VB probability maps, which guide deformable models towards VB boundaries. The proposed method was applied to segment 161 VBs from 3D MR spine images of 23 subjects, and the results were compared to reference segmentations. By yielding an overall Dice similarity coefficient of $$93.4\,{\pm }\,1.7\,\%$$, mean symmetric surface distance of $$0.54\,{\pm }\,0.14\,\text {mm}$$ and Hausdorff distance of $$3.83\,{\pm }\,1.04\,\text {mm}$$, the proposed method proved superior to existing VB segmentation methods.

Robert Korez, Boštjan Likar, Franjo Pernuš, Tomaž Vrtovec

Pancreas Segmentation in MRI Using Graph-Based Decision Fusion on Convolutional Neural Networks

Automated pancreas segmentation in medical images is a prerequisite for many clinical applications, such as diabetes inspection, pancreatic cancer diagnosis, and surgical planning. In this paper, we formulate pancreas segmentation in magnetic resonance imaging (MRI) scans as a graph-based decision fusion process combined with deep convolutional neural networks (CNNs). Our approach conducts pancreatic detection and boundary segmentation with two types of CNN models: (1) a tissue detection step to differentiate pancreas and non-pancreas tissue using spatial intensity context; and (2) a boundary detection step to localize the semantic boundaries of the pancreas. The detection results of the two networks are fused together as the initialization of a conditional random field (CRF) framework to obtain the final segmentation output. Our approach achieves a mean Dice similarity coefficient (DSC) of $$76.1\,\%$$ with a standard deviation of $$8.7\,\%$$ on a dataset containing 78 abdominal MRI scans, the best result compared with other state-of-the-art methods.

Jinzheng Cai, Le Lu, Zizhao Zhang, Fuyong Xing, Lin Yang, Qian Yin

Spatial Aggregation of Holistically-Nested Networks for Automated Pancreas Segmentation

Accurate automatic organ segmentation is an important yet challenging problem for medical image analysis. The pancreas is an abdominal organ with very high anatomical variability. This inhibits traditional segmentation methods from achieving high accuracies, especially compared to other organs such as the liver, heart or kidneys. In this paper, we present a holistic learning approach that integrates semantic mid-level cues of deeply-learned organ interior and boundary maps via robust spatial aggregation using random forest. Our method generates boundary preserving pixel-wise class labels for pancreas segmentation. Quantitative evaluation is performed on CT scans of 82 patients in 4-fold cross-validation. We achieve a (mean ± std. dev.) Dice Similarity Coefficient of 78.01 %±8.2 % in testing which significantly outperforms the previous state-of-the-art approach of 71.8 %±10.7 % under the same evaluation criterion.

Holger R. Roth, Le Lu, Amal Farag, Andrew Sohn, Ronald M. Summers

Topology Aware Fully Convolutional Networks for Histology Gland Segmentation

The recent success of deep learning techniques in classification and object detection tasks has been leveraged for segmentation tasks. However, a weakness of these deep segmentation models is their limited ability to encode high level shape priors, such as smoothness and preservation of complex interactions between object regions, which can result in implausible segmentations. In this work, by formulating and optimizing a new loss, we introduce the first deep network trained to encode geometric and topological priors of containment and detachment. Our results on the segmentation of histology glands from a dataset of 165 images demonstrate the advantage of our novel loss terms and show how our topology aware architecture outperforms competing methods by up to 10 % in both pixel-level accuracy and object-level Dice.

Aïcha BenTaieb, Ghassan Hamarneh

HeMIS: Hetero-Modal Image Segmentation

We introduce a deep learning image segmentation framework that is extremely robust to missing imaging modalities. Instead of attempting to impute or synthesize missing data, the proposed approach learns, for each modality, an embedding of the input image into a single latent vector space for which arithmetic operations (such as taking the mean) are well defined. Points in that space, which are averaged over modalities available at inference time, can then be further processed to yield the desired segmentation. As such, any combinatorial subset of available modalities can be provided as input, without having to learn a combinatorial number of imputation models. Evaluated on two neurological MRI datasets (brain tumors and MS lesions), the approach yields state-of-the-art segmentation results when provided with all modalities; moreover, its performance degrades remarkably gracefully when modalities are removed, significantly more so than alternative mean-filling or other synthesis approaches.

Mohammad Havaei, Nicolas Guizard, Nicolas Chapados, Yoshua Bengio
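
The key trick is that the fused representation is built from moments that are defined for any number of inputs. A small PyTorch sketch of this hetero-modal fusion, with toy encoders and layer sizes chosen purely for illustration:

```python
import torch
import torch.nn as nn

class HeMISFusion(nn.Module):
    """Sketch of the hetero-modal idea: each modality gets its own small
    encoder; the embeddings of the modalities that are available are fused
    via first and second moments (mean and variance), which are well
    defined for any subset of modalities. Sizes are illustrative."""
    def __init__(self, n_modalities=4, ch=16):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Conv2d(1, ch, 3, padding=1) for _ in range(n_modalities)])
        self.head = nn.Conv2d(2 * ch, 2, 1)  # e.g. background / lesion

    def forward(self, images, available):
        # images: list of (B,1,H,W) tensors; available: modality indices
        feats = torch.stack([self.encoders[i](images[i]) for i in available])
        mean = feats.mean(dim=0)
        var = feats.var(dim=0, unbiased=False)  # zero if only one modality
        return self.head(torch.cat([mean, var], dim=1))

net = HeMISFusion()
imgs = [torch.randn(1, 1, 64, 64) for _ in range(4)]
print(net(imgs, available=[0, 2]).shape)  # works with any modality subset
```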

Deep Learning for Multi-task Medical Image Segmentation in Multiple Modalities

Automatic segmentation of medical images is an important task for many clinical applications. In practice, a wide range of anatomical structures are visualised using different imaging modalities. In this paper, we investigate whether a single convolutional neural network (CNN) can be trained to perform different segmentation tasks. A single CNN is trained to segment six tissues in MR brain images, the pectoral muscle in MR breast images, and the coronary arteries in cardiac CTA. The CNN therefore learns to identify the imaging modality, the visualised anatomical structures, and the tissue classes. For each of the three tasks (brain MRI, breast MRI and cardiac CTA), this combined training procedure resulted in a segmentation performance equivalent to that of a CNN trained specifically for that task, demonstrating the high capacity of CNN architectures. Hence, a single system could be used in clinical practice to automatically perform diverse segmentation tasks without task-specific training.

Pim Moeskops, Jelmer M. Wolterink, Bas H. M. van der Velden, Kenneth G. A. Gilhuijs, Tim Leiner, Max A. Viergever, Ivana Išgum

Iterative Multi-domain Regularized Deep Learning for Anatomical Structure Detection and Segmentation from Ultrasound Images

Accurate detection and segmentation of anatomical structures from ultrasound images are crucial for clinical diagnosis and biometric measurements. Although ultrasound imaging has been widely used thanks to its low cost and portability, the fuzzy border definition and the abundance of artifacts pose great challenges for automatically detecting and segmenting complex anatomical structures. In this paper, we propose a multi-domain regularized deep learning method to address this challenging problem. By leveraging transfer learning across domains, the feature representations are effectively enhanced, and the results are further improved by iterative refinement. Moreover, our method is quite efficient, as it takes advantage of a fully convolutional network formulated as an end-to-end learning framework for detection and segmentation. Extensive experimental results on a large-scale database corroborate that our method achieves superior detection and segmentation accuracy, outperforming other methods by a significant margin and demonstrating competitive capability even compared to human performance.

Hao Chen, Yefeng Zheng, Jin-Hyeong Park, Pheng-Ann Heng, S. Kevin Zhou

Gland Instance Segmentation by Deep Multichannel Side Supervision

In this paper, we propose a new image instance segmentation method that segments individual glands (instances) in colon histology images. This is a task called instance segmentation that has recently become increasingly important. The problem is challenging since not only do the glands need to be segmented from the complex background, they are also required to be individually identified. Here we leverage the idea of image-to-image prediction in recent deep learning by building a framework that automatically exploits and fuses complex multichannel information, regional and boundary patterns, with side supervision (deep supervision on side responses) in gland histology images. Our proposed system, deep multichannel side supervision (DMCS), alleviates heavy feature design due to the use of convolutional neural networks guided by side supervision. Compared to methods reported in the 2015 MICCAI Gland Segmentation Challenge, we observe state-of-the-art results based on a number of evaluation metrics.

Yan Xu, Yang Li, Mingyuan Liu, Yipei Wang, Maode Lai, Eric I-Chao Chang

Enhanced Probabilistic Label Fusion by Estimating Label Confidences Through Discriminative Learning

Multiple-atlas segmentation has recently shown success in automatic segmentation of brain images. It consists in registering the labelmaps from a set of atlases to the anatomy of a target image, and then fusing the multiple labelmaps into a consensus segmentation on the target image. Accurately estimating the confidence of each atlas decision is key for the success of label fusion. Common approaches either rely on local patch similarity, probabilistic statistical frameworks or a combination of both. We present a probabilistic label fusion framework that takes into account label confidence at each point. Maximum likelihood atlas confidences are estimated by explicitly modelling the relationship between image appearance and segmentation errors. We also propose a novel type of label-dependent appearance features based on atlas labelmaps. Our results indicate that the proposed label fusion framework achieves state-of-the-art performance in the segmentation of subcortical structures.

Oualid M. Benkarim, Gemma Piella, Miguel Angel González Ballester, Gerard Sanroma
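
Once per-atlas confidences are available, the fusion step reduces to confidence-weighted voting. The sketch below assumes the confidence maps are given; learning them from appearance features, which is the paper's actual contribution, is not shown.

```python
import numpy as np

def confidence_weighted_fusion(atlas_labels, atlas_conf):
    """Sketch of confidence-weighted label fusion. The learned confidence
    model of the paper is replaced by a given per-atlas, per-voxel weight.
    atlas_labels: (n_atlases, ...) integer label maps registered to target
    atlas_conf:   (n_atlases, ...) confidence in each atlas decision"""
    n_labels = int(atlas_labels.max()) + 1
    votes = np.zeros((n_labels,) + atlas_labels.shape[1:])
    for lab in range(n_labels):
        # Each atlas votes for its label, weighted by its local confidence.
        votes[lab] = ((atlas_labels == lab) * atlas_conf).sum(axis=0)
    return votes.argmax(axis=0)

labels = np.random.randint(0, 3, size=(5, 8, 8))  # 5 toy atlases
conf = np.random.rand(5, 8, 8)
print(confidence_weighted_fusion(labels, conf).shape)
```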

Feature Sensitive Label Fusion with Random Walker for Atlas-Based Image Segmentation

In this paper, a novel label fusion method is proposed and formulated on a graph, which embraces both label priors from atlases and anatomical priors from the target image. To represent a pixel comprehensively, three kinds of feature vectors are generated: intensity, gradient and structural signature. Feature Sensitive Label Prior (FSLP), which takes both the consistency and the variety of different features into consideration, is proposed to gather atlas priors. As FSLP is a non-convex problem, a heuristic approach is further designed to solve it efficiently. Moreover, based on anatomical knowledge, some of the target pixels are also employed as graph seeds to assist the label fusion process. Experiments carried out on two publicly available databases demonstrate that the proposed method obtains better segmentation quality.

Siqi Bao, Albert C. S. Chung

Deep Fusion Net for Multi-atlas Segmentation: Application to Cardiac MR Images

Atlas selection and label fusion are two major challenges in multi-atlas segmentation. In this paper, we propose a novel deep fusion net to better address these challenges. The deep fusion net is a deep architecture that concatenates a feature extraction subnet and a non-local patch-based label fusion (NL-PLF) subnet in a single network. This network is trained end-to-end to automatically learn deep features that achieve optimal performance in an NL-PLF framework. The learned deep features are further utilized in defining a similarity measure for atlas selection. Experimental results on cardiac MR images for left ventricular segmentation demonstrate that our approach is effective both in atlas selection and in multi-atlas label fusion, and achieves state-of-the-art performance.

Heran Yang, Jian Sun, Huibin Li, Lisheng Wang, Zongben Xu

Prior-Based Coregistration and Cosegmentation

We propose a modular and scalable framework for dense coregistration and cosegmentation with two key characteristics: first, we substitute ground truth data with the semantic map output of a classifier; second, we combine this output with population deformable registration to improve both alignment and segmentation. Our approach deforms all volumes towards consensus, taking into account image similarities and label consistency. Our pipeline can incorporate any classifier and similarity metric. Results on two datasets, containing annotations of challenging brain structures, demonstrate the potential of our method.

Mahsa Shakeri, Enzo Ferrante, Stavros Tsogkas, Sarah Lippé, Samuel Kadoury, Iasonas Kokkinos, Nikos Paragios

Globally Optimal Label Fusion with Shape Priors

Multi-atlas label fusion methods have gained popularity in a variety of segmentation tasks given their attractive performance. Graph-based segmentation methods are widely used given their global optimality guarantee. We propose a novel approach, GOLF, that combines the strengths of these two approaches. GOLF incorporates shape priors to the label-fusion problem and provides a globally optimal solution even for the multi-label scenario, while also leveraging the highly accurate posterior maps from a multi-atlas label fusion approach. We demonstrate GOLF for the joint segmentation of the left and right pairs of caudate, putamen, globus pallidus and nucleus accumbens. Compared to the FreeSurfer and FIRST approaches, GOLF is significantly more accurate on all reported indices for all 8 structures. We also present comparisons to a multi-atlas approach, which reveals further insights on the contributions of the different components of the proposed framework.

Ipek Oguz, Satyananda Kashyap, Hongzhi Wang, Paul Yushkevich, Milan Sonka

Joint Segmentation and CT Synthesis for MRI-only Radiotherapy Treatment Planning

Accurate organ localization and tissue attenuation properties are the two essential components for performing radiotherapy treatment planning (RTP). Computed tomography (CT) has been the modality of choice for RTP, as it readily provides electron density information. However, its low soft tissue contrast limits the accuracy of organ delineation. Magnetic resonance (MR), on the contrary, provides images with excellent soft tissue contrast, but its use for RTP is limited by the fact that it does not readily provide tissue attenuation information. In this work we propose a multi-atlas information propagation scheme that jointly segments the organs at risk and generates pseudo CT data from MR images. We demonstrate that the proposed framework is able to automatically generate accurate pseudo CT images and segmentations in the pelvic region, bypassing the need for a CT scan for accurate RTP.

Ninon Burgos, Filipa Guerreiro, Jamie McClelland, Simeon Nill, David Dearnaley, Nandita deSouza, Uwe Oelfke, Antje-Christin Knopf, Sébastien Ourselin, M. Jorge Cardoso

Regression Forest-Based Atlas Localization and Direction Specific Atlas Generation for Pancreas Segmentation

This paper proposes a fully automated atlas-based pancreas segmentation method for CT volumes, utilizing atlas localization by regression forest and atlas generation using blood vessel information. Previous probabilistic atlas-based pancreas segmentation methods cannot deal well with the spatial variations that are commonly found in the pancreas, and shape variations are not represented by an averaged atlas. We propose a fully automated pancreas segmentation method that deals with both types of variation. The position and size of the pancreas are estimated using a regression forest technique. After localization, a patient-specific probabilistic atlas is generated based on a new image similarity that reflects the position and direction of the blood vessels around the pancreas. We segment the pancreas using the EM algorithm with the atlas as a prior, followed by graph cut refinement. In an evaluation using 147 CT volumes, the Jaccard index and Dice overlap of the proposed method were 62.1 % and 75.1 %, respectively. Although all segmentation processes are automated, the segmentation results were superior to other state-of-the-art methods in terms of Dice overlap.

Masahiro Oda, Natsuki Shimizu, Ken’ichi Karasawa, Yukitaka Nimura, Takayuki Kitasaka, Kazunari Misawa, Michitaka Fujiwara, Daniel Rueckert, Kensaku Mori

Accounting for the Confound of Meninges in Segmenting Entorhinal and Perirhinal Cortices in T1-Weighted MRI

Quantification of medial temporal lobe (MTL) cortices, including the entorhinal cortex (ERC) and perirhinal cortex (PRC), from in vivo MRI is desirable for studying the human memory system as well as for early diagnosis and monitoring of Alzheimer’s disease. However, the ERC and PRC are commonly over-segmented in T1-weighted (T1w) MRI because the adjacent meninges have intensity similar to gray matter in T1 contrast. This introduces errors in the quantification and could potentially confound imaging studies of the ERC/PRC. In this paper, we propose to segment MTL cortices along with the adjacent meninges in T1w MRI using an established multi-atlas segmentation framework together with a super-resolution technique. Experimental results comparing the proposed pipeline with existing pipelines support the notion that a large portion of the meninges is segmented as gray matter by existing algorithms but not by ours. Cross-validation experiments demonstrate promising segmentation accuracy. Further, agreement between the volume and thickness measures from the proposed pipeline and those from manual segmentations increases dramatically as a result of accounting for the confound of the meninges. Evaluated in the context of group discrimination between patients with amnestic mild cognitive impairment and normal controls, the proposed pipeline generates more biologically plausible results and improves the statistical power in discriminating groups in absolute terms compared to other techniques using T1w MRI. Although its performance is inferior to that achieved using T2-weighted MRI, which is optimized to image MTL sub-structures, the proposed pipeline could still provide important utility in analyzing many existing large datasets that only have T1w MRI available.

Long Xie, Laura E. M. Wisse, Sandhitsu R. Das, Hongzhi Wang, David A. Wolk, Jose V. Manjón, Paul A. Yushkevich

7T-Guided Learning Framework for Improving the Segmentation of 3T MR Images

The emerging era of ultra-high-field MRI using 7T scanners has dramatically improved sensitivity, image resolution, and tissue contrast compared to 3T scanners when examining various anatomical structures. Among the advantages of these high-resolution MR images is higher segmentation accuracy for brain tissues. However, access to 7T MRI scanners currently remains much more limited than to 3T scanners due to technological and economic constraints. Hence, we propose in this work the first learning-based model that improves the segmentation of an input 3T MR image with any conventional segmentation method, through the reconstruction of a higher-quality 7T-like MR image, without actually acquiring ultra-high-field 7T MRI. Our proposed framework comprises two main steps. First, we estimate a non-linear mapping from 3T MRI to 7T MRI space, using a random forest regression model with novel weighting and ensembling schemes, to reconstruct initial 7T-like MR images. Second, we use a group sparse representation with a new pre-selection approach to further refine the 7T-like MR image reconstruction. We evaluated our 7T MRI reconstruction results, along with their segmentation results, using 13 subjects acquired with both 3T and 7T MR images. For tissue segmentation, we applied two widely used segmentation methods (FAST and SPM). Our results show (1) improved WM, GM and CSF brain tissue segmentation when guided by reconstructed 7T-like images compared to 3T MR images, and (2) that the proposed 7T MRI reconstruction method outperforms other state-of-the-art methods.

Khosro Bahrami, Islem Rekik, Feng Shi, Yaozong Gao, Dinggang Shen

Multivariate Mixture Model for Cardiac Segmentation from Multi-Sequence MRI

Cardiac segmentation is commonly a prerequisite for functional analysis of the heart, for example to identify and quantify infarcts and edema relative to normal myocardium using late-enhanced (LE) and T2-weighted MRI. Automatic delineation of the myocardium is, however, challenging due to the heterogeneous intensity distributions and indistinct boundaries in the images. In this work, we present a multivariate mixture model (MvMM) for tissue classification, which combines the complementary information from multi-sequence (MS) cardiac MRI and performs their segmentation simultaneously. The expectation maximization (EM) method is adopted to estimate the segmentation and model parameters from the log-likelihood (LL) of the mixture model, with a probabilistic atlas used for initialization. Furthermore, to correct intra- and inter-image misalignments, we formulate the MvMM with transformations, which are embedded into the LL framework and can thus be optimized by the iterated conditional modes approach. We applied the MvMM to segment eighteen subjects with three sequences and obtained promising results. Compared with two conventional methods, the improvements in segmentation performance on LE and T2 MRI were evident and statistically significant.

Xiahai Zhuang
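
At its core the method alternates an E-step and an M-step over multi-sequence intensities. The toy sketch below runs EM for a diagonal-covariance Gaussian mixture on stacked sequence intensities; the registration transformations and atlas prior of the full MvMM are deliberately omitted.

```python
import numpy as np

def em_mixture(X, K=3, n_iter=50, seed=0):
    """Toy EM for a multivariate Gaussian mixture over stacked multi-sequence
    intensities (one feature per MR sequence). A simplified stand-in for the
    MvMM E-step/M-step, without transformations or atlas prior."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, K, replace=False)]      # initial means
    var = np.ones((K, d)) * X.var(axis=0)        # diagonal covariances
    pi = np.full(K, 1.0 / K)                     # mixture weights
    for _ in range(n_iter):
        # E-step: responsibilities under diagonal-covariance Gaussians.
        log_p = (-0.5 * (((X[:, None, :] - mu) ** 2) / var
                         + np.log(2 * np.pi * var)).sum(axis=2)
                 + np.log(pi))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means and variances.
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ (X ** 2)) / nk[:, None] - mu ** 2 + 1e-6
    return r.argmax(axis=1)

X = np.vstack([np.random.randn(100, 2) + c for c in (0, 4, 8)])
print(np.bincount(em_mixture(X)))  # roughly 100 samples per component
```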

Fast Fully Automatic Segmentation of the Human Placenta from Motion Corrupted MRI

Recently, magnetic resonance imaging has proven to be important for evaluating the placenta's health during pregnancy. Quantitative assessment of the placenta requires segmentation, which proves challenging because of the high variability of its position, orientation, shape and appearance. Moreover, image acquisition is corrupted by motion artifacts from both fetal and maternal movements. In this paper we propose a fully automatic segmentation framework for the placenta from structural T2-weighted scans of the whole uterus, as well as an extension that provides an intuitive pre-natal view of this vital organ. We adopt a 3D multi-scale convolutional neural network to automatically identify placental candidate pixels. The resulting classification is subsequently refined by a 3D dense conditional random field, so that a high-resolution placental volume can be reconstructed from multiple overlapping stacks of slices. Our segmentation framework has been tested on 66 subjects at gestational ages of 20–38 weeks, achieving a Dice score of $$71.95\pm 19.79\,\%$$ for healthy fetuses with a fixed scan sequence and $$66.89\pm 15.35\,\%$$ for a cohort mixed with cases of intrauterine fetal growth restriction using varying scan parameters.

Amir Alansary, Konstantinos Kamnitsas, Alice Davidson, Rostislav Khlebnikov, Martin Rajchl, Christina Malamateniou, Mary Rutherford, Joseph V. Hajnal, Ben Glocker, Daniel Rueckert, Bernhard Kainz

Multi-organ Segmentation Using Vantage Point Forests and Binary Context Features

Dense segmentation of large medical image volumes using a labelled training dataset requires strong classifiers. Ensembles of random decision trees have been shown to achieve good segmentation accuracies with very fast computation times. However, smaller anatomical structures such as muscles or organs with high shape variability present a challenge to them, especially when relying on axis-parallel split functions, which make finding joint relations among features difficult. Recent work has shown that structural and contextual information can be well captured using a large number of simple pairwise intensity comparisons stored in binary vectors. In this work, we propose to overcome current limitations of random forest classifiers by devising new decision trees, which use the entire feature vector at each split node and may thus be able to find representative patterns in high-dimensional feature spaces. Our approach called vantage point forests is related to cluster trees that have been successfully applied to space partitioning. It can be further improved by discarding training samples with a large Hamming distance compared to the test sample. Our method achieves state-of-the-art segmentation accuracy of $$\ge $$90 % Dice for liver and kidneys in abdominal CT, with significant improvements over random forest, in under a minute.

Mattias P. Heinrich, Maximilian Blendowski
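
The binary context features and the Hamming-distance pruning can be sketched in a few lines. The descriptor length, patch size and keep fraction below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def binary_context_features(patch_values, pairs):
    """Binary vector of pairwise intensity comparisons (BRIEF-like),
    capturing structural context; `pairs` are sampled index pairs
    inside a flattened patch."""
    return (patch_values[pairs[:, 0]] > patch_values[pairs[:, 1]]).astype(np.uint8)

def hamming_filter(train_feats, train_labels, test_feat, keep_frac=0.25):
    """Sketch of the refinement step: keep only the training samples whose
    binary descriptors are close to the test sample in Hamming distance
    before voting on the label."""
    dist = (train_feats != test_feat).sum(axis=1)
    keep = np.argsort(dist)[: max(1, int(keep_frac * len(dist)))]
    return np.bincount(train_labels[keep]).argmax()

rng = np.random.default_rng(1)
pairs = rng.integers(0, 125, size=(64, 2))      # pairs inside a 5x5x5 patch
patches = rng.random((200, 125))                # flattened intensity patches
feats = np.stack([binary_context_features(p, pairs) for p in patches])
labels = rng.integers(0, 5, size=200)
print(hamming_filter(feats, labels, feats[0]))
```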

Multiple Object Segmentation and Tracking by Bayes Risk Minimization

Motion analysis of cells and subcellular particles such as vesicles, microtubules or membrane receptors is essential for understanding various processes that take place in living tissue. Manual detection and tracking is usually infeasible due to the large number of particles. In addition, the images are often distorted by noise caused by the limited resolution of optical microscopes, which makes the analysis even more challenging. In this paper we formulate the task of detecting and tracking small objects as a Bayes risk minimization. We introduce a novel spatio-temporal probabilistic graphical model which models the dynamics of individual particles as well as their relations, and propose a loss function suitable for this task. The performance of our method is evaluated on artificial but highly realistic data from the 2012 ISBI Particle Tracking Challenge [8]. We show that our approach is fully comparable with or even outperforms state-of-the-art methods.

Tomáš Sixta, Boris Flach

Crowd-Algorithm Collaboration for Large-Scale Endoscopic Image Annotation with Confidence

With the recent breakthrough success of machine learning based solutions for automatic image annotation, the availability of reference image annotations for algorithm training is one of the major bottlenecks in medical image segmentation and many other fields. Crowdsourcing has evolved as a valuable option for annotating large amounts of data while sparing the resources of experts, yet, segmentation of objects from scratch is relatively time-consuming and typically requires an initialization of the contour. The purpose of this paper is to investigate whether the concept of crowd-algorithm collaboration can be used to simultaneously (1) speed up crowd annotation and (2) improve algorithm performance based on the feedback of the crowd. Our contribution in this context is two-fold: Using benchmarking data from the MICCAI 2015 endoscopic vision challenge we show that atlas forests extended by a novel superpixel-based confidence measure are well-suited for medical instrument segmentation in laparoscopic video data. We further demonstrate that the new algorithm and the crowd can mutually benefit from each other in a collaborative annotation process. Our method can be adapted to various applications and thus holds high potential to be used for large-scale low-cost data annotation.

L. Maier-Hein, T. Ross, J. Gröhl, B. Glocker, S. Bodenstedt, C. Stock, E. Heim, M. Götz, S. Wirkert, H. Kenngott, S. Speidel, K. Maier-Hein

Emphysema Quantification on Cardiac CT Scans Using Hidden Markov Measure Field Model: The MESA Lung Study

Cardiac computed tomography (CT) scans include approximately 2/3 of the lung and can be obtained with low radiation exposure. Large population-based research studies have reported high correlations in emphysema quantification between full-lung (FL) and cardiac CT scans using thresholding-based measurements. This work extends a hidden Markov measure field (HMMF) model-based segmentation method for automated emphysema quantification on cardiac CT scans. We show that the HMMF-based method, compared with several types of thresholding, provides more reproducible emphysema segmentation on repeated cardiac scans, and more consistent measurements between longitudinal cardiac and FL scans, from a diverse pool of scanner types and thousands of subjects with tens of thousands of scans.

Jie Yang, Elsa D. Angelini, Pallavi P. Balte, Eric A. Hoffman, Colin O. Wu, Bharath A. Venkatesh, R. Graham Barr, Andrew F. Laine

Cutting Out the Middleman: Measuring Nuclear Area in Histopathology Slides Without Segmentation

The size of nuclei in histological preparations from excised breast tumors is predictive of patient outcome (large nuclei indicate poor outcome). Pathologists take into account nuclear size when performing breast cancer grading. In addition, the mean nuclear area (MNA) has been shown to have independent prognostic value. The straightforward approach to measuring nuclear size is by performing nuclei segmentation. We hypothesize that given an image of a tumor region with known nuclei locations, the area of the individual nuclei and region statistics such as the MNA can be reliably computed directly from the image data by employing a machine learning model, without the intermediate step of nuclei segmentation. Towards this goal, we train a deep convolutional neural network model that is applied locally at each nucleus location, and can reliably measure the area of the individual nuclei and the MNA. Furthermore, we show how such an approach can be extended to perform combined nuclei detection and measurement, which is reminiscent of granulometry.

Mitko Veta, Paul J. van Diest, Josien P. W. Pluim
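
The regression-instead-of-segmentation idea amounts to a small CNN that maps a patch centred on a nucleus directly to a scalar area. A hedged PyTorch sketch with an illustrative architecture and patch size (not the paper's exact model):

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a small CNN applied at each known nucleus location
# that regresses the nuclear area directly, skipping segmentation.
area_net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 1),   # scalar area estimate for a 32x32 patch
)

patches = torch.randn(64, 3, 32, 32)   # RGB patches centred on nuclei
areas = area_net(patches).squeeze(1)   # one area per nucleus
mna = areas.mean()                     # mean nuclear area of the region
print(areas.shape, float(mna))
```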

Subtype Cell Detection with an Accelerated Deep Convolution Neural Network

Robust cell detection in histopathological images is a crucial step in computer-assisted diagnosis methods. In addition, recent studies show that subtypes play a significant role in better characterizing tumor growth and predicting outcome. In this paper, we propose a novel subtype cell detection method with an accelerated deep convolutional neural network. The proposed method not only detects cells but also classifies the subtype of each detected cell. Based on the subtype cell detection results, we extract subtype-cell-related features and use them in survival prediction. We demonstrate that our proposed method has excellent subtype cell detection performance and that our subtype cell features achieve more accurate survival prediction.

Sheng Wang, Jiawen Yao, Zheng Xu, Junzhou Huang

Imaging Biomarker Discovery for Lung Cancer Survival Prediction

Solid tumors are heterogeneous tissues composed of a mixture of cells with special tissue architectures. However, cellular heterogeneity, i.e., the differences among cell types, is generally not reflected in molecular profiling or in recent histopathological image-based analyses of lung cancer, leaving such information underused. This paper presents a computational approach for H&E-stained pathological images that quantitatively describes cellular heterogeneity across different types of cells. In our work, a deep learning approach is first used for cell subtype classification. We then introduce a set of quantitative features to describe cellular information, and use several feature selection methods to discover significant imaging biomarkers for survival prediction. The discovered imaging biomarkers are consistent with pathological and biological evidence. Experimental results on two lung cancer data sets demonstrate that survival models built from the discovered imaging biomarkers have better predictive power than state-of-the-art methods using molecular profiling data and traditional imaging biomarkers.

Jiawen Yao, Sheng Wang, Xinliang Zhu, Junzhou Huang

3D Segmentation of Glial Cells Using Fully Convolutional Networks and k-Terminal Cut

Glial cells play an important role in regulating synaptogenesis, development of blood-brain barrier, and brain tumor metastasis. Quantitative analysis of glial cells can offer new insights to many studies. However, the complicated morphology of the protrusions of glial cells and the entangled cell-to-cell network cause significant difficulties to extracting quantitative information in images. In this paper, we present a new method for instance-level segmentation of glial cells in 3D images. First, we obtain accurate voxel-level segmentation by leveraging the recent advances of fully convolutional networks (FCN). Then we develop a k-terminal cut algorithm to disentangle the complex cell-to-cell connections. During the cell cutting process, to better capture the nature of glial cells, a shape prior computed based on a multiplicative Voronoi diagram is exploited. Extensive experiments using real 3D images show that our method has superior performance over the state-of-the-art methods.

Lin Yang, Yizhe Zhang, Ian H. Guldner, Siyuan Zhang, Danny Z. Chen

Detection of Differentiated vs. Undifferentiated Colonies of iPS Cells Using Random Forests Modeled with the Multivariate Polya Distribution

In this paper we propose a novel method for automatic detection of undifferentiated vs. differentiated colonies of iPS cells, which is able to achieve excellent accuracy of detection using only a few training images. Local patches in the images are represented through the responses of texture-layout filters over texton maps and learned using Random Forests. Additionally, we propose a novel method for probabilistic modeling of the information available at the leaves of the individual trees in the forest, based on the multivariate Polya distribution.

Bisser Raytchev, Atsuki Masuda, Masatoshi Minakawa, Kojiro Tanaka, Takio Kurita, Toru Imamura, Masashi Suzuki, Toru Tamaki, Kazufumi Kaneda

Detecting 10,000 Cells in One Second

In this paper, we present a generalized distributed deep neural network architecture to detect cells in whole-slide high-resolution histopathological images, which usually hold $$10^{8}$$ to $$10^{10}$$ pixels. Our framework can adapt and accelerate any deep convolutional neural network pixel-wise cell detector to perform whole-slide cell detection within a reasonable time limit. We accelerate the convolutional neural network forwarding through a sparse kernel technique, eliminating almost all of the redundant computation among connected patches. Since disk I/O becomes a bottleneck as the image size grows, we propose an asynchronous prefetching technique to diminish a large portion of the disk I/O time. An unbalanced distributed sampling strategy is proposed to enhance the scalability and communication efficiency in distributed computing. Blending the advantages of the sparse kernel, asynchronous prefetching and distributed sampling techniques, our framework is able to accelerate the conventional convolutional deep learning method by nearly 10,000 times with the same accuracy. Specifically, our method detects cells in a $$10^{8}$$-pixel ($$10^4\times 10^4$$) image in 20 s (approximately 10,000 cells per second) on a single workstation, which is an encouraging result in whole-slide imaging practice.

Zheng Xu, Junzhou Huang
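
The sparse kernel acceleration exploits the classical equivalence between a patch-wise classifier with fully connected layers and a fully convolutional network. The sketch below shows this conversion for a toy 17×17-patch detector; the paper's actual architecture, prefetching and distributed sampling are not reproduced.

```python
import torch
import torch.nn as nn

# A patch-wise classifier with a fully connected head...
patch_net = nn.Sequential(
    nn.Conv2d(1, 8, 5), nn.ReLU(),
    nn.Conv2d(8, 16, 5), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 9 * 9, 2),   # trained on 17x17 patches -> 9x9 feature map
)

# ...is recast as a fully convolutional net, so overlapping patches share
# their convolution work and one forward pass scores every pixel location.
dense_net = nn.Sequential(
    nn.Conv2d(1, 8, 5), nn.ReLU(),
    nn.Conv2d(8, 16, 5), nn.ReLU(),
    nn.Conv2d(16, 2, 9),        # the Linear layer recast as a 9x9 convolution
)
# Reuse trained weights: the FC weight matrix reshapes into the conv kernel.
dense_net[4].weight.data = patch_net[5].weight.data.view(2, 16, 9, 9)
dense_net[4].bias.data = patch_net[5].bias.data

img = torch.randn(1, 1, 256, 256)
print(dense_net(img).shape)  # one pass yields a dense (2, 240, 240) score map
```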

A Hierarchical Convolutional Neural Network for Mitosis Detection in Phase-Contrast Microscopy Images

We propose a Hierarchical Convolutional Neural Network (HCNN) for mitosis event detection in time-lapse phase-contrast microscopy. Our method contains two stages: first, we extract candidate spatio-temporal patch sequences from the input image sequences which potentially contain mitosis events; then, we identify whether each patch sequence contains a mitosis event using a hierarchical convolutional neural network. In the experiments, we validate the design of our proposed architecture and evaluate the mitosis event detection performance. Our method achieves 99.1 % precision and 97.2 % recall on very challenging image sequences of multipolar-shaped C3H10T1/2 mesenchymal stem cells and outperforms other state-of-the-art methods. Furthermore, the proposed method does not depend on hand-crafted feature design or cell tracking, and can be straightforwardly adapted to event detection for other cell types.

Yunxiang Mao, Zhaozheng Yin

Erratum to: A Learning-Free Approach to Whole Spine Vertebra Localization in MRI

Marko Rak, Klaus-Dietz Tönnies

Backmatter
