About this book

This book constitutes revised selected papers from the Third International MICCAI Brainlesion Workshop, BrainLes 2017, as well as the International Multimodal Brain Tumor Segmentation (BraTS) and White Matter Hyperintensities (WMH) segmentation challenges, which were held jointly at the Medical Image Computing and Computer Assisted Intervention Conference, MICCAI, in Quebec City, Canada, in September 2017.

The 40 papers presented in this volume were carefully reviewed and selected from 46 submissions. They were organized in topical sections named: brain lesion image analysis; brain tumor image segmentation; and ischemic stroke lesion image segmentation.

Table of Contents


Invited Talks


Dice Overlap Measures for Objects of Unknown Number: Application to Lesion Segmentation

The Dice overlap ratio is commonly used to evaluate the performance of image segmentation algorithms. While Dice overlap is very useful as a standardized quantitative measure of segmentation accuracy in many applications, it offers a very limited picture of segmentation quality in complex segmentation tasks where the number of target objects is not known a priori, such as the segmentation of white matter lesions or lung nodules. While Dice overlap can still be used in these applications, segmentation algorithms may perform quite differently in ways not reflected by differences in their Dice score. Here we propose a new set of evaluation techniques that offer new insights into the behavior of segmentation algorithms. We illustrate these techniques with a case study comparing two popular multiple sclerosis (MS) lesion segmentation algorithms: OASIS and LesionTOADS.

Ipek Oguz, Aaron Carass, Dzung L. Pham, Snehashis Roy, Nagesh Subbana, Peter A. Calabresi, Paul A. Yushkevich, Russell T. Shinohara, Jerry L. Prince
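For reference, the hard Dice overlap this talk builds on can be computed from two binary masks in a few lines (a minimal illustration, not code from the paper):

```python
import numpy as np

def dice(a, b):
    """Dice overlap of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

seg = np.array([[1, 1, 0], [0, 1, 0]])
ref = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(seg, ref))  # 2*2 / (3+3) ≈ 0.667
```

As the abstract notes, this single ratio says nothing about how many distinct lesions were found or missed, which motivates the additional evaluation techniques the talk proposes.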

Lesion Detection, Segmentation and Prediction in Multiple Sclerosis Clinical Trials

A variety of automatic segmentation techniques have been successfully applied to the delineation of larger T2 lesions in patient MRI in the context of Multiple Sclerosis (MS), assisting in the estimation of lesion volume, a common clinical measure of disease activity and stage. In the context of clinical trials, however, a wider range of metrics is required to determine the “burden of disease” and activity in order to measure treatment efficacy. These include: (1) the number and volume of T2 lesions in MRI, (2) the number of new and enlarging T2 lesions in longitudinal MRI, and (3) the number of gadolinium-enhancing lesions in T1 MRI, the portion of lesions that enhance in T1w MRI after injection of a contrast agent, often associated with active inflammation. In this context, accurate lesion detection must ensure that even small lesions (e.g. 3 to 10 voxels) are detected, as they are prevalent in trials. Manual or semi-manual approaches are too time-consuming, inconsistent and expensive to be practical in large clinical trials. To this end, we present a series of fully-automatic, probabilistic machine learning frameworks to detect and segment all lesions in patient MRI, and show their accuracy and robustness on large multi-center, multi-scanner, clinical trial datasets. Several of these algorithms have been placed into a commercial software analysis pipeline, where they have assisted in improving the efficiency and precision of the development of most new MS treatments worldwide. Recent work has shown how a new Bag-of-Lesions brain representation can be used in the context of clinical trials to automatically predict the probability of future disease activity and potential treatment responders, leading to the possibility of personalized medicine.

Andrew Doyle, Colm Elliott, Zahra Karimaghaloo, Nagesh Subbanna, Douglas L. Arnold, Tal Arbel

Brain Lesion Image Analysis


Automated Segmentation of Multiple Sclerosis Lesions Using Multi-dimensional Gated Recurrent Units

We analyze the performance of multi-dimensional gated recurrent units on automated lesion segmentation in multiple sclerosis. The segmentation of these pathologic structures is not trivial, since location, shape and size can be arbitrary. Furthermore, the inherent class imbalance of about 1 lesion voxel to 10 000 healthy voxels further complicates correct segmentation. We introduce a new MD-GRU setup, using established techniques from the deep learning community as well as our own adaptations. We evaluate these modifications by comparing them to a standard MD-GRU network. We demonstrate that using data augmentation, selective sampling, residual learning and/or DropConnect on the RNN state can produce better segmentation results. Reaching rank #1 in the ISBI 2015 longitudinal multiple sclerosis lesion segmentation challenge, we show that a setup which combines these techniques can outperform the state of the art in automated lesion segmentation.

Simon Andermatt, Simon Pezold, Philippe C. Cattin

Joint Intensity Fusion Image Synthesis Applied to Multiple Sclerosis Lesion Segmentation

We propose a new approach to Multiple Sclerosis lesion segmentation that utilizes synthesized images. A new method of image synthesis is considered: joint intensity fusion (JIF). JIF synthesizes an image from a library of deformably registered and intensity normalized atlases. Each location in the synthesized image is a weighted average of the registered atlases; atlas weights vary spatially. The weights are determined using the joint label fusion (JLF) framework. The primary methodological contribution is the application of JLF to MRI signal directly rather than labels. Synthesized images are then used as additional features in a lesion segmentation task using the OASIS classifier, a logistic regression model on intensities from multiple modalities. The addition of JIF synthesized images improved the Dice–Sørensen coefficient (relative to manually drawn gold standards) of lesion segmentations over the standard model segmentations by 0.0462 ± 0.0050 (mean ± standard deviation) at optimal threshold over all subjects and 10 separate training/testing folds.

Greg M. Fleishman, Alessandra Valcarcel, Dzung L. Pham, Snehashis Roy, Peter A. Calabresi, Paul Yushkevich, Russell T. Shinohara, Ipek Oguz
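The spatially weighted atlas averaging that JIF performs can be sketched as follows (a toy illustration with random stand-in data; in the actual method the weights come from the joint label fusion framework):

```python
import numpy as np

rng = np.random.default_rng(1)

# Three registered, intensity-normalized atlas images (here random 4x4
# stand-ins) and one spatially varying weight map per atlas; the
# JLF-derived weights sum to 1 at every voxel.
atlases = rng.random((3, 4, 4))
w = rng.random((3, 4, 4))
w /= w.sum(axis=0, keepdims=True)

# JIF-style synthesis: per-voxel weighted average of the atlases.
synthesized = (w * atlases).sum(axis=0)
print(synthesized.shape)  # (4, 4)
```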

MARCEL (Inter-Modality Affine Registration with CorrELation Ratio): An Application for Brain Shift Correction in Ultrasound-Guided Brain Tumor Resection

Tissue deformation during brain tumor removal often renders the original surgical plan invalid. This can greatly affect the quality of resection, and thus threaten the patient’s survival rate. Therefore, correction of such deformation is needed, which can be achieved through image registration between pre- and intra-operative images. We propose a novel automatic inter-modal affine registration technique based on the correlation ratio (CR) similarity metric. The technique was demonstrated by registering intra-operative ultrasound (US) scans with magnetic resonance (MR) images of patients who underwent brain glioma resection. Evaluated with landmark-based mean target registration errors (TRE), our technique reduced the error from an initial 5.13 ± 2.78 mm to 2.32 ± 0.68 mm.

Nima Masoumi, Yiming Xiao, Hassan Rivaz

Generalised Wasserstein Dice Score for Imbalanced Multi-class Segmentation Using Holistic Convolutional Networks

The Dice score is widely used for binary segmentation due to its robustness to class imbalance. Soft generalisations of the Dice score allow it to be used as a loss function for training convolutional neural networks (CNN). Although CNNs trained using mean-class Dice score achieve state-of-the-art results on multi-class segmentation, this loss function takes advantage of neither inter-class relationships nor multi-scale information. We argue that an improved loss function should balance misclassifications to favour predictions that are semantically meaningful. This paper investigates these issues in the context of multi-class brain tumour segmentation. Our contribution is threefold. (1) We propose a semantically-informed generalisation of the Dice score for multi-class segmentation based on the Wasserstein distance on the probabilistic label space. (2) We propose a holistic CNN that embeds spatial information at multiple scales with deep supervision. (3) We show that the joint use of holistic CNNs and generalised Wasserstein Dice score achieves segmentations that are more semantically meaningful for brain tumour segmentation.

Lucas Fidon, Wenqi Li, Luis C. Garcia-Peraza-Herrera, Jinendra Ekanayake, Neil Kitchen, Sébastien Ourselin, Tom Vercauteren
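The core idea, a Wasserstein distance on the label space, can be sketched for the common case of a crisp ground truth, where the per-voxel error reduces to an expected inter-class distance. The distance matrix below is purely illustrative, not the one from the paper:

```python
import numpy as np

# Hypothetical inter-class distance matrix over four labels
# (background, edema, core, enhancing) -- values are illustrative only:
# confusing core with enhancing (0.3) is penalised less than
# confusing either with background (1.0).
M = np.array([
    [0.0, 1.0, 1.0, 1.0],
    [1.0, 0.0, 0.6, 0.5],
    [1.0, 0.6, 0.0, 0.3],
    [1.0, 0.5, 0.3, 0.0],
])

def wasserstein_error(probs, gt):
    """Per-voxel error between softmax outputs `probs` (V x L) and crisp
    ground-truth labels `gt` (V,). With a crisp ground truth the
    Wasserstein distance reduces to sum_l M[gt, l] * probs[l]."""
    return (M[gt] * probs).sum(axis=1)

probs = np.array([[0.1, 0.8, 0.05, 0.05],   # predicted mostly edema
                  [0.0, 0.1, 0.2, 0.7]])    # predicted mostly enhancing
gt = np.array([1, 2])                        # true labels: edema, core
print(wasserstein_error(probs, gt))
```

Semantically close mistakes thus incur smaller errors, which is exactly the behaviour the mean-class Dice loss cannot express.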

Overall Survival Time Prediction for High Grade Gliomas Based on Sparse Representation Framework

Accurate prognosis for high grade glioma (HGG) is of great clinical value, since it would provide optimized guidelines for treatment planning. Previous imaging-based survival prediction generally relies on features guided by clinical experience, which limits the full utilization of biomedical images. In this paper, we propose a sparse representation-based radiomics framework to predict the overall survival (OS) time of HGG patients. First, we develop a patch-based sparse representation method to extract high-throughput tumor texture features. Then, we propose to combine locality preserving projection and sparse representation to select discriminative features. Finally, we treat OS time prediction as a classification task and apply sparse representation to classification. Experimental results show that, with 10-fold cross-validation, the proposed method achieves accuracies of 94.83% and 95.69% using T1 contrast-enhanced and T2-weighted magnetic resonance images, respectively.

Guoqing Wu, Yuanyuan Wang, Jinhua Yu

Traumatic Brain Lesion Quantification Based on Mean Diffusivity Changes

We report the evaluation of an automated method for quantification of brain tissue damage, caused by a severe traumatic brain injury, using mean diffusivity (MD) computed from MR diffusion images. Our automatic results obtained on realistic phantoms and real patient images 10 days post-event, provided by nine different centers, were coherent with lesions manually identified by four experts. For realistic phantoms, the automated method's scores were 0.77, 0.77 and 0.83 for Dice, Precision and Sensitivity respectively, compared to 0.78, 0.72 and 0.86 for the experts. The intraclass correlation coefficient (ICC) was 0.79. For 7/9 real cases, 0.57, 0.50 and 0.70 were respectively obtained for the automated method, compared to 0.60, 0.52 and 0.78 for the experts, with ICC = 0.71. Additionally, we detail the quality control module used to pool data from various image provider centers. This study clearly demonstrates the validity of the proposed automated method to eventually compute, in a multi-centre project, the lesional load following brain trauma based on MD changes.

Christophe Maggia, Thomas Mistral, Senan Doyle, Florence Forbes, Alexandre Krainik, Damien Galanaud, Emmanuelle Schmitt, Stéphane Kremer, Irène Troprès, Emmanuel L. Barbier, Jean-François Payen, Michel Dojat

Pairwise, Ordinal Outlier Detection of Traumatic Brain Injuries

Because mild Traumatic Brain Injuries (mTBI) are heterogeneous, classification methods perform outlier detection from a model of healthy tissue. Such a model is challenging to construct. Instead, we utilize region-specific pairwise (person-to-person) comparisons. Each person-region is characterized by a distribution of Fractional Anisotropy, and comparisons are made via Median, Mean, Bhattacharyya and Kullback–Leibler distances. Additionally, we examine an ordinal decision rule which compares a subject’s n-th most atypical region to a healthy control’s. Ordinal comparison is motivated by mTBI’s heterogeneity; each mTBI has some set of damaged tissue which is not necessarily spatially consistent. These improvements correctly distinguish Persistent Post-Concussive Symptoms in a small dataset but achieve only a 0.74 AUC in identifying mTBI subjects with milder symptoms. Finally, we perform subject-specific simulations which characterize which injuries are detected and which are missed.

Matt Higger, Martha Shenton, Sylvain Bouix
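The ordinal decision rule described above can be sketched in a few lines (with hypothetical per-region atypicality scores; `nth_most_atypical` is an illustrative helper, not the authors' code):

```python
import numpy as np

def nth_most_atypical(region_distances, n):
    """Return the n-th largest region-wise atypicality score for one
    subject (n = 1 -> most atypical region)."""
    return np.sort(region_distances)[-n]

# Hypothetical per-region distance-to-normative scores.
subject = np.array([0.2, 0.9, 0.3, 0.8, 0.1])
control = np.array([0.3, 0.4, 0.2, 0.5, 0.3])

n = 2
flagged = nth_most_atypical(subject, n) > nth_most_atypical(control, n)
print(flagged)  # True: subject's 2nd most atypical region (0.8) > control's (0.4)
```

Because only the rank of the most atypical regions matters, the rule tolerates the spatial inconsistency of mTBI damage that the abstract highlights.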

Sub-acute and Chronic Ischemic Stroke Lesion MRI Segmentation

Automatic segmentation of chronic stroke lesion from magnetic resonance images (MRI) is motivated by the increasing need for reproducible and repeatable endpoints in clinical trials. The task is non-trivial, due to a number of confounding factors, including heterogeneous lesion intensity, irregular shape, and large deformations that render the conventional use of prior probabilistic atlases challenging. In this paper, we introduce a hidden Markov random field model that avails of a novel prior probabilistic vascular territory atlas to describe the natural vascular constraints in the brain. The vascular territory atlas is deformed in a joint registration-segmentation framework to overcome subject-specific morphological variability. T1-w and Flair sequences are used to populate our model, and a variational approach is implemented to find a solution. The performance of our model is demonstrated on two datasets, and compared to manual delineations by expert raters.

Senan Doyle, Florence Forbes, Assia Jaillard, Olivier Heck, Olivier Detante, Michel Dojat

Brain Tumor Segmentation Using an Adversarial Network

Recently, the convolutional neural network (CNN) has been successfully applied to the task of brain tumor segmentation. However, the effectiveness of a CNN-based method is limited by its small receptive field, and the segmentation results lack spatial contiguity. Therefore, many attempts have been made to strengthen the spatial contiguity of the network output. In this paper, we propose an adversarial training approach to train the CNN. A discriminator network is trained along with a generator network which produces the synthetic segmentation results. The discriminator network is encouraged to discriminate the synthetic labels from the ground truth labels. Adversarial adjustments provided by the discriminator network are fed back to the generator network to help reduce the differences between the synthetic labels and the ground truth labels and reinforce the spatial contiguity with high-order loss terms. The presented method is evaluated on the BraTS 2017 training dataset. The experimental results demonstrate that the presented method can enhance the spatial contiguity of the segmentation results and improve the segmentation accuracy.

Zeju Li, Yuanyuan Wang, Jinhua Yu

Brain Cancer Imaging Phenomics Toolkit (brain-CaPTk): An Interactive Platform for Quantitative Analysis of Glioblastoma

Quantitative research, especially in the field of radio(geno)mics, has helped us understand fundamental mechanisms of neurologic diseases. Such research is integrally based on advanced algorithms to derive extensive radiomic features and integrate them into diagnostic and predictive models. To exploit the benefit of such complex algorithms, their swift translation into clinical practice is required, currently hindered by their complicated nature. brain-CaPTk is a modular platform, with components spanning across image processing, segmentation, feature extraction, and machine learning, that facilitates such translation, enabling quantitative analyses without requiring substantial computational background. Thus, brain-CaPTk can be seamlessly integrated into the typical quantification, analysis and reporting workflow of a radiologist, underscoring its clinical potential. This paper describes currently available components of brain-CaPTk and example results from their application in glioblastoma.

Saima Rathore, Spyridon Bakas, Sarthak Pati, Hamed Akbari, Ratheesh Kalarot, Patmaa Sridharan, Martin Rozycki, Mark Bergman, Birkan Tunc, Ragini Verma, Michel Bilello, Christos Davatzikos

Brain Tumor Image Segmentation


Deep Learning Based Multimodal Brain Tumor Diagnosis

Brain tumor segmentation plays an important role in disease diagnosis. In this paper, we propose deep learning frameworks, i.e. MvNet and SPNet, to address the challenges of multimodal brain tumor segmentation. The proposed multi-view deep learning framework (MvNet) uses three multi-branch fully-convolutional residual networks (Mb-FCRN) to segment multimodal brain images from different viewpoints, i.e. slices along the x, y and z axes. The three sub-networks produce independent segmentation results and vote for the final outcome. The SPNet is a CNN-based framework developed to predict the survival time of patients. The proposed deep learning frameworks were evaluated on the BraTS 17 validation set and achieved competitive results for tumor segmentation: Dice scores of 0.88, 0.75 and 0.71 were achieved for whole tumor, enhancing tumor and tumor core, respectively, and an accuracy of 0.55 was obtained for survival prediction.

Yuexiang Li, Linlin Shen

Multimodal Brain Tumor Segmentation Using Ensemble of Forest Method

In this paper, we propose a cascaded ensemble method based on Random Forests, named Ensemble-of-Forest (EoF). Instead of classifying a huge amount of data with a single forest, we propose a two-stage ensemble method for the multimodal brain tumor segmentation problem. Identification of the tumor region and its sub-regions is challenging due to variations in intensity, location, etc. from patient to patient. We identify the initial region of interest (ROI) by a linear combination of the FLAIR and T2 modalities. For each training scan/ROI, we define a Random Forest as the first stage of the ensemble method. For a test ROI, we collect a set of similar ROIs, and hence forests, based on a mutual information criterion, and use majority voting to classify its voxels. We report results on the BraTS 2017 dataset.

Ashish Phophalia, Pradipta Maji

Pooling-Free Fully Convolutional Networks with Dense Skip Connections for Semantic Segmentation, with Application to Brain Tumor Segmentation

Segmentation of medical images requires multi-scale information, combining local boundary detection with global context. State-of-the-art convolutional neural network (CNN) architectures for semantic segmentation are often composed of a downsampling path which computes features at multiple scales, followed by an upsampling path, required to recover those features at the same scale as the input image. Skip connections allow features discovered in the downward path to be integrated in the upward path. The downsampling mechanism is typically a pooling operation. However, pooling was introduced in CNNs to enable translation invariance, which is not desirable in segmentation tasks. For this reason, we propose an architecture, based on the recently proposed DenseNet, for semantic segmentation, in which pooling has been replaced with dilated convolutions. We also present a variant approach, used in the 2017 BRATS challenge, in which a cascade of densely connected nets is used to first exclude non-brain tissue, and then segment tumor structures. We present results on the validation dataset of the Multimodal Brain Tumor Segmentation Challenge 2017.

Richard McKinley, Alain Jungo, Roland Wiest, Mauricio Reyes
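The key property of dilated convolutions, a growing receptive field without any downsampling, can be illustrated with a minimal 1-D implementation (a sketch of the operation itself, not the paper's network code):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Valid' 1-D convolution with holes: kernel taps are spaced
    `dilation` samples apart, enlarging the receptive field while the
    output stays at the input resolution (no pooling)."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field of one output
    out_len = len(x) - span + 1
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(out_len)
    ])

x = np.arange(10, dtype=float)
print(dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=1))  # receptive field 3
print(dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=2))  # receptive field 5
```

Stacking layers with increasing dilation grows the receptive field exponentially, which is how such networks gather context without the pooling-induced loss of resolution.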

Automatic Brain Tumor Segmentation Using Cascaded Anisotropic Convolutional Neural Networks

A cascade of fully convolutional neural networks is proposed to segment multi-modal Magnetic Resonance (MR) images with brain tumor into background and three hierarchical regions: whole tumor, tumor core and enhancing tumor core. The cascade is designed to decompose the multi-class segmentation problem into a sequence of three binary segmentation problems according to the subregion hierarchy. The whole tumor is segmented in the first step and the bounding box of the result is used for the tumor core segmentation in the second step. The enhancing tumor core is then segmented based on the bounding box of the tumor core segmentation result. Our networks consist of multiple layers of anisotropic and dilated convolution filters, and they are combined with multi-view fusion to reduce false positives. Residual connections and multi-scale predictions are employed in these networks to boost the segmentation performance. Experiments with BraTS 2017 validation set show that the proposed method achieved average Dice scores of 0.7859, 0.9050, 0.8378 for enhancing tumor core, whole tumor and tumor core, respectively. The corresponding values for BraTS 2017 testing set were 0.7831, 0.8739, and 0.7748, respectively.

Guotai Wang, Wenqi Li, Sébastien Ourselin, Tom Vercauteren
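The decomposition into nested binary targets that drives the cascade can be illustrated on a toy label map (the label values follow the common BraTS convention and are used here only for illustration):

```python
import numpy as np

# Toy BraTS-style label map: 0 background, 1 necrotic/non-enhancing core,
# 2 edema, 4 enhancing tumor.
labels = np.array([[0, 2, 2],
                   [1, 4, 2],
                   [0, 1, 0]])

# The three nested binary targets of the cascade:
# whole tumor ⊇ tumor core ⊇ enhancing core.
whole_tumor    = np.isin(labels, [1, 2, 4])
tumor_core     = np.isin(labels, [1, 4])
enhancing_core = labels == 4

assert (tumor_core <= whole_tumor).all()      # nesting holds
assert (enhancing_core <= tumor_core).all()
```

Each stage then only needs to solve a binary problem inside the bounding box produced by the previous, coarser stage.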

3D Brain Tumor Segmentation Through Integrating Multiple 2D FCNNs

The Magnetic Resonance Images (MRI) used to segment brain tumors are 3D images. To make use of 3D information, we integrate the segmentation results of three 2D Fully Convolutional Neural Networks (FCNNs), each trained to segment brain tumor images from the axial, coronal, and sagittal views respectively. Integrating multiple FCNN models by fusing their segmentation results, rather than merging them into one deep network, ensures that each FCNN model can still be tested on 2D slices, preserving testing efficiency. An averaging strategy is applied for the fusion. This method can easily be extended to integrate more FCNN models trained to segment brain tumor images from additional views, without retraining the FCNN models we already have. In addition, 3D Conditional Random Fields (CRFs) are applied to optimize the fused segmentation results. Experimental results show that integrating the segmentation results of multiple 2D FCNNs clearly improves the segmentation accuracy, and the 3D CRF greatly reduces false positives and improves the accuracy of tumor boundaries.

Xiaomei Zhao, Yihong Wu, Guidong Song, Zhenye Li, Yazhuo Zhang, Yong Fan
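The averaging fusion of per-view probability maps can be sketched as follows (random stand-ins for the three FCNN outputs; the 0.5 threshold is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tumor-probability volumes from three FCNNs trained on axial, coronal
# and sagittal slices (here: random D x H x W stand-ins).
p_axial    = rng.random((4, 4, 4))
p_coronal  = rng.random((4, 4, 4))
p_sagittal = rng.random((4, 4, 4))

# Fuse by simple voxel-wise averaging, then threshold to a binary mask.
fused = (p_axial + p_coronal + p_sagittal) / 3.0
segmentation = fused > 0.5

print(fused.shape, segmentation.dtype)
```

Because fusion happens on the outputs, a fourth view could be added to the average without touching the three trained networks, which is the extensibility the abstract points out.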

MRI Brain Tumor Segmentation and Patient Survival Prediction Using Random Forests and Fully Convolutional Networks

In this paper, we propose a learning-based method for automated segmentation of brain tumors in multimodal MRI images, which incorporates two sets of features: machine-learned and hand-crafted. Fully convolutional networks (FCN) form the machine-learned features, and texton-based histograms are the hand-crafted features. A random forest (RF) is used to classify the MRI image voxels into normal brain tissues and different parts of the tumor. Volumetric features from the segmented tumor tissues, together with patient age, are fed to an RF to predict the survival time. The method was evaluated on the MICCAI-BRATS 2017 challenge dataset. The mean Dice overlap measures for segmentation of the validation dataset are 0.86, 0.78 and 0.66 for whole tumor, core and enhancing tumor, respectively. The validation Hausdorff values are 7.61, 8.70 and 3.76. For the survival prediction task, the classification accuracy, pairwise mean square error and Spearman rank are 0.485, 198749 and 0.334, respectively.

Mohammadreza Soltaninejad, Lei Zhang, Tryphon Lambrou, Guang Yang, Nigel Allinson, Xujiong Ye

Automatic Segmentation and Overall Survival Prediction in Gliomas Using Fully Convolutional Neural Network and Texture Analysis

In this paper, we use a Fully Convolutional Neural Network (FCNN) for the segmentation of gliomas from Magnetic Resonance Images (MRI). A fully automatic, voxel-based classification was achieved by training a 23-layer deep FCNN on 2-D slices extracted from patient volumes. The network was trained on slices extracted from 130 patients and validated on 50 patients. For the task of survival prediction, texture and shape based features were extracted from the T1 post-contrast volume to train an Extreme Gradient Boosting (XGBoost) regressor. On the BraTS 2017 validation set, the proposed scheme achieved mean Dice scores of 0.83, 0.69 and 0.69 for whole tumor, tumor core and active tumor respectively, while for the task of overall survival prediction, the proposed scheme achieved an accuracy of 52%.

Varghese Alex, Mohammed Safwan, Ganapathy Krishnamurthi

Multimodal Brain Tumor Segmentation Using 3D Convolutional Networks

Volume segmentation is one of the most time-consuming, and therefore error-prone, tasks in the field of medicine. The construction of a good segmentation requires cross-validation from highly trained professionals. In order to address this problem we propose the use of 3D deep convolutional networks (DCN). Using a two-step procedure, we first segment the whole tumor from a low-resolution volume and then feed a second step which performs the fine tissue segmentation. The advantage of using a 3D-DCN is that it extracts 3D features from all neighbouring voxels. In this method all parameters are self-learned during a single training procedure, and its accuracy can improve by feeding new examples to the trained network. The training dice-loss values reach 0.85 and 0.9 for the coarse and fine segmentation networks respectively. The obtained validation and testing mean dice for the Whole Tumor class are 0.86 and 0.82 respectively.

R. G. Rodríguez Colmeiro, C. A. Verrastro, T. Grosges

A Conditional Adversarial Network for Semantic Segmentation of Brain Tumor

Automated brain lesion detection is an important and very challenging clinical diagnostic task, due to the lesions’ different sizes, shapes, contrasts, and locations. Recently deep learning has shown promising progress in many application fields, thereby motivating us to apply this technique to such an important problem. In this paper, we propose an automatic end-to-end trainable architecture for heterogeneous brain tumor segmentation through adversarial training for the BraTS-2017 challenge. Inspired by the classical generative adversarial network, the proposed network has two components: the “Discriminator” and the “Generator”. We use a patient-wise fully convolutional network (FCN) as the segmentor network to generate segmentation label maps. The discriminator network is a patient-wise FCN with an L1 loss that discriminates whether segmentation maps come from the ground truth or from the segmentor network. We also propose an end-to-end trainable CNN for survival day prediction based on deep learning techniques. The experimental results demonstrate the ability of the proposed approaches on both tasks of the BraTS-2017 challenge. Our patient-wise cGAN achieved competitive results in the BraTS-2017 challenge.

Mina Rezaei, Konstantin Harmuth, Willi Gierke, Thomas Kellermeier, Martin Fischer, Haojin Yang, Christoph Meinel

Dilated Convolutions for Brain Tumor Segmentation in MRI Scans

We present a method to detect and segment brain tumors in Magnetic Resonance Imaging scans using a novel network based on the Dilated Residual Network. Dilated convolutions provide efficient multi-scale analysis for dense prediction tasks without losing resolution by downsampling the input. To the best of our knowledge, our work is the first to evaluate a dilated residual network for brain tumor segmentation in magnetic resonance imaging scans. We train and evaluate our method on the Brain Tumor Segmentation (BraTS) 2017 challenge dataset. To address the severe label imbalance in the data, we adopt a balanced, patch-based sampling approach for training. An ablation study establishes the importance of residual connections to the performance of our network.

Marc Moreno Lopez, Jonathan Ventura

Residual Encoder and Convolutional Decoder Neural Network for Glioma Segmentation

A deep learning approach to glioma segmentation is presented. An encoder–decoder deep learning network is designed which takes T1, T2, T1-CE (contrast enhanced) and T2-Flair (fluid attenuation inversion recovery) images as input and outputs the segmented labels. The encoder is a 49-layer deep residual learning architecture that encodes the 240 × 240 × 4 input images into 8 × 8 × 2048 feature maps. The decoder network takes these feature maps and extracts the segmented labels. The decoder is a fully convolutional network consisting of convolutional and upsampling layers. Additionally, the input images are downsampled using bilinear interpolation and are inserted into the decoder network through concatenation. This concatenation step provides spatial information of the tumor to the decoder, which was lost due to pooling/downsampling during encoding. The network is trained on the BRATS-17 training dataset and validated on the validation dataset. The dice score, sensitivity and specificity of the segmented whole tumor, core tumor and enhancing tumor are computed on the validation dataset. The mean dice scores for whole tumor, core tumor and enhancing tumor on the validation dataset were 0.824, 0.627 and 0.575, respectively.

Kamlesh Pawar, Zhaolin Chen, N. Jon Shah, Gary Egan

TPCNN: Two-Phase Patch-Based Convolutional Neural Network for Automatic Brain Tumor Segmentation and Survival Prediction

The aim of this paper is to integrate some advanced statistical methods with modern deep learning methods for tumor segmentation and survival time prediction in the BraTS 2017 challenge. The goals of the BraTS 2017 challenge are to utilize multi-institutional pre-operative MRI scans to segment out different tumor subregions and then to use tumor information to predict patient’s overall survival. We build a two-phase patch-based convolutional neural network (TPCNN) model to classify all the pixels in the brain and further refine the segmentation results by using XGBoost and a post-processing procedure. The segmentation results are then used to extract various informative radiomic features for prediction of the survival time by using the XGBoost method.

Fan Zhou, Tengfei Li, Heng Li, Hongtu Zhu

Brain Tumor Segmentation and Radiomics Survival Prediction: Contribution to the BRATS 2017 Challenge

Quantitative analysis of brain tumors is critical for clinical decision making. While manual segmentation is tedious, time consuming and subjective, this task is at the same time very challenging to solve for automatic segmentation methods. In this paper we present our most recent effort on developing a robust segmentation algorithm in the form of a convolutional neural network. Our network architecture was inspired by the popular U-Net and has been carefully modified to maximize brain tumor segmentation performance. We use a dice loss function to cope with class imbalances and use extensive data augmentation to successfully prevent overfitting. Our method beats the current state of the art on BraTS 2015, is one of the leading methods on the BraTS 2017 validation set (dice scores of 0.896, 0.797 and 0.732 for whole tumor, tumor core and enhancing tumor, respectively) and achieves very good Dice scores on the test set (0.858 for whole, 0.775 for core and 0.647 for enhancing tumor). We furthermore take part in the survival prediction subchallenge by training an ensemble of a random forest regressor and multilayer perceptrons on shape features describing the tumor subregions. Our approach achieves 52.6% accuracy, a Spearman correlation coefficient of 0.496 and a mean square error of 209607 on the test set.

Fabian Isensee, Philipp Kickingereder, Wolfgang Wick, Martin Bendszus, Klaus H. Maier-Hein
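The Dice loss mentioned above is commonly implemented as a "soft" Dice on predicted probabilities; here is a minimal sketch (not the authors' exact formulation, which may add smoothing or class weighting):

```python
import numpy as np

def soft_dice_loss(probs, target, eps=1e-6):
    """Soft Dice loss between a predicted probability map and a binary
    target: 1 - 2*sum(p*t) / (sum(p) + sum(t)). Being a ratio of
    foreground quantities, it is largely insensitive to how much
    background dominates the volume."""
    intersection = (probs * target).sum()
    return 1.0 - 2.0 * intersection / (probs.sum() + target.sum() + eps)

probs  = np.array([0.9, 0.8, 0.1, 0.2])  # network outputs in [0, 1]
target = np.array([1.0, 1.0, 0.0, 0.0])  # binary ground truth
print(soft_dice_loss(probs, target))
```

Because the background terms cancel out of the ratio, the loss stays informative even when lesion voxels are a tiny fraction of the volume, which is why it is favoured over plain cross-entropy for such imbalanced tasks.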

Multi-modal PixelNet for Brain Tumor Segmentation

Brain tumor segmentation using multi-modal MRI data sets is important for diagnosis, surgery and follow-up evaluation. In this paper, a convolutional neural network (CNN) with hypercolumn features (i.e. PixelNet) is utilized for automatic segmentation of brain tumors, including low- and high-grade glioblastomas. Though pixel-level convolutional predictors like CNNs are computationally efficient, such approaches are not statistically efficient during learning, precisely because spatial redundancy limits the information learned from neighboring pixels. PixelNet extracts features from multiple layers that correspond to the same pixel and samples a modest number of pixels across a small number of images for each SGD (stochastic gradient descent) batch update. PixelNet achieved whole tumor Dice scores of 87.6% and 85.8% for the validation and testing data respectively in the BraTS 2017 challenge.

Mobarakol Islam, Hongliang Ren
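
The sparse pixel sampling that PixelNet relies on can be illustrated with a small NumPy sketch; the feature-map shapes and the uniform sampling scheme below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def sample_hypercolumns(feature_maps, n_pixels, rng):
    """Sample hypercolumn features for a few random pixels.

    feature_maps : list of (C_i, H, W) arrays from different layers,
                   assumed already upsampled to a common H x W grid.
    Returns an (n_pixels, sum(C_i)) matrix of stacked features.
    """
    _, h, w = feature_maps[0].shape
    ys = rng.integers(0, h, size=n_pixels)
    xs = rng.integers(0, w, size=n_pixels)
    # Concatenate, per sampled pixel, the features from every layer.
    cols = [fm[:, ys, xs].T for fm in feature_maps]  # each (n_pixels, C_i)
    return np.concatenate(cols, axis=1)

rng = np.random.default_rng(0)
fmaps = [rng.normal(size=(4, 8, 8)), rng.normal(size=(6, 8, 8))]
hc = sample_hypercolumns(fmaps, n_pixels=32, rng=rng)
print(hc.shape)  # -> (32, 10)
```

Training on a modest set of sampled hypercolumns per batch, instead of every pixel, reduces the spatial redundancy mentioned in the abstract.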

Brain Tumor Segmentation Using Dense Fully Convolutional Neural Network

Manual segmentation of brain tumors is often time consuming, and the quality of the segmentation varies with the operator's experience. This motivates the need for a fully automatic brain tumor segmentation method. In this paper, we propose using the 100-layer Tiramisu architecture for segmenting brain tumors from multi-modal MR images, built by integrating a densely connected fully convolutional neural network (FCNN), followed by post-processing using a dense conditional random field (DCRF). The network consists of blocks of densely connected layers, transition-down layers in the down-sampling path and transition-up layers in the up-sampling path. The method was tested on the dataset provided by the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2017. The training data is composed of 210 high-grade and 74 low-grade brain tumor cases. The proposed network achieves mean whole tumor, tumor core and active tumor Dice scores of 0.87, 0.68 and 0.65, respectively, on the BraTS '17 validation set, and 0.83, 0.65 and 0.65 on the BraTS '17 test set.

Mazhar Shaikh, Ganesh Anand, Gagan Acharya, Abhijit Amrutkar, Varghese Alex, Ganapathy Krishnamurthi

Brain Tumor Segmentation in MRI Scans Using Deeply-Supervised Neural Networks

Gliomas are the most frequent primary brain tumors in adults. Improved quantification of the various aspects of a glioma requires accurate segmentation of the tumor in magnetic resonance images (MRI). Since manual segmentation is time-consuming and subject to human error and irreproducibility, automatic segmentation has received a lot of attention in recent years. This paper presents a fully automated method capable of segmenting brain tumors from multi-modal MRI scans. The proposed method comprises a deeply-supervised neural network based on the Holistically-Nested Edge Detection (HED) network. The HED method, originally developed for the binary classification task of image edge detection, is extended to multiple-class segmentation. The classes of interest include the whole tumor, tumor core, and enhancing tumor. The dataset provided by the 2017 Multimodal Brain Tumor Image Segmentation Benchmark (BraTS) challenge is used for training the neural network and for performance evaluation. Experiments on the BraTS 2017 challenge datasets demonstrate that the method performs well compared to existing works. The assessments revealed Dice scores of 0.86, 0.60, and 0.69 for the whole tumor, tumor core, and enhancing tumor classes, respectively.

Reza Pourreza, Ying Zhuge, Holly Ning, Robert Miller

Brain Tumor Segmentation and Parsing on MRIs Using Multiresolution Neural Networks

Brain lesion segmentation is a critical application of computer vision to biomedical image analysis. The difficulty derives from the great variance between instances and the high computational cost of processing three-dimensional data. We introduce a neural network for brain tumor semantic segmentation that parses the tumors' internal structures and is capable of processing volumetric data from multiple MRI modalities simultaneously. As a result, the method is able to learn from small training datasets. We develop an architecture that has four parallel pathways with residual connections. It receives patches from images with different spatial resolutions and analyzes them independently. The results are then combined using fully-connected layers to obtain a semantic segmentation of the brain tumor. We evaluated our method using the 2017 BraTS Challenge dataset, reaching average Dice coefficients of 89%, 88% and 86% over the training, validation and test images, respectively.

Laura Silvana Castillo, Laura Alexandra Daza, Luis Carlos Rivera, Pablo Arbeláez

Brain Tumor Segmentation Using Deep Fully Convolutional Neural Networks

In this study, brain tumor substructures are segmented using 2D fully convolutional neural networks. A number of modifications, such as double convolution layers, inception modules, and dense modules, were added to a U-Net to achieve a deep architecture and to test whether the increased depth improves performance. The experiments show that the deeper architectures do improve performance. Performance is further enhanced by ensembling across models trained on images in different orientations and across models with different architectures. Even without any data augmentation, the ensembled model achieves competitive performance and generalizes well to a new dataset. The resulting mean 3D Dice scores (ET/WT/TC) on the BRATS17 validation and test sets are 0.75/0.88/0.73 and 0.72/0.86/0.73.

Geena Kim

Glioblastoma and Survival Prediction

Glioblastoma is a stage IV highly invasive astrocytoma tumor. Its heterogeneous appearance in MRI poses a critical challenge in diagnosis, prognosis and survival prediction. This work proposes an automated survival prediction method that utilizes different types of texture and other features. The method tests feature significance and prognostic value, and then utilizes the most significant features with a Random Forest regression model to perform survival prediction. We use 163 cases from the BraTS17 training dataset to evaluate the proposed model. Ten-fold cross validation on the training dataset yields a normalized root mean square error of 30% and a cross-validated accuracy of 67%. Finally, the proposed model ranked first in the Survival Prediction task of the global Brain Tumor Segmentation Challenge (BraTS) 2017, achieving an accuracy of 57.9%.

Zeina A. Shboul, Lasitha Vidyaratne, Mahbubul Alam, Khan M. Iftekharuddin

MRI Augmentation via Elastic Registration for Brain Lesions Segmentation

Datasets for medical image segmentation usually contain a very limited number of training examples, yet deep learning methods prove to be very competitive for such data analysis problems. Surprisingly, quite limited data augmentation is used during training. We presume that this is due to historical reasons: standardization and normalization of medical images dominate over methods for enlarging a training set by artificial transformation of images. We also assume that it is partly caused by the absence of methods which preserve the properties of adequately preprocessed medical images. In this paper, we propose a new method for brain MRI augmentation, which allows us to map a lesion from an original image onto a healthy brain. We compare the performance of U-Net and DeepMedic, two popular deep learning architectures, using the proposed method, a set of classical image augmentation methods, and a combination of both approaches. Our results suggest that at least one of the individual strategies, as well as their combination, provides an increase in the accuracy of brain lesion segmentation when the training sample is relatively small.

Egor Krivov, Maxim Pisov, Mikhail Belyaev
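
The paper's augmentation maps lesions between subjects via elastic registration; as a simpler, related illustration, the generic random elastic deformation commonly used for medical image augmentation can be sketched with SciPy (the parameters below are arbitrary choices, not the authors' method):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, alpha=8.0, sigma=4.0, seed=0):
    """Apply a smooth random elastic deformation to a 2D image.

    alpha scales displacement magnitude; sigma controls smoothness.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # Smooth random displacement fields, one per axis.
    dy = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    dx = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([ys + dy, xs + dx])
    # Resample the image at the displaced coordinates.
    return map_coordinates(image, coords, order=1, mode="reflect")

img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0  # a toy square "lesion"
warped = elastic_deform(img)
print(warped.shape)  # -> (32, 32)
```

Each call with a different seed yields a new plausible variant of the training image, cheaply enlarging a small dataset.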

Cascaded V-Net Using ROI Masks for Brain Tumor Segmentation

In this work we approach the brain tumor segmentation problem with a cascade of two CNNs inspired by the V-Net architecture [13], reformulating residual connections and making use of ROI masks to constrain the networks to train only on relevant voxels. This architecture allows dense training on problems with highly skewed class distributions, such as brain tumor segmentation, by focusing training only on the vicinity of the tumor area. We report results on the BraTS 2017 Training and Validation sets.

Adrià Casamitjana, Marcel Catà, Irina Sánchez, Marc Combalia, Verónica Vilaplana

Brain Tumor Segmentation Using a 3D FCN with Multi-scale Loss

In this work, we use a 3D Fully Convolutional Network (FCN) architecture for brain tumor segmentation. Our method includes a multi-scale loss function on the predictions given at each resolution of the FCN. Using this approach, the higher-resolution features can be combined with the initial segmentation at a lower resolution so that the FCN models context in both the image and label domains. The model is trained using the multi-scale loss function, and a curriculum on sample weights is employed to address class imbalance. We achieved competitive results during the testing phase of the BraTS 2017 Challenge, with Dice scores of 0.710, 0.860, and 0.783 for enhancing tumor, whole tumor, and tumor core, respectively.

Andrew Jesson, Tal Arbel
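
A multi-scale loss of the kind described can be sketched by pooling the ground-truth mask to each prediction resolution and summing weighted per-scale losses. The soft Dice term, 2x average pooling and weights below are assumptions for illustration, not the authors' exact formulation:

```python
import numpy as np

def downsample2x(x):
    """2x average pooling of a 2D map (H and W assumed even)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def multiscale_dice_loss(preds, target, weights):
    """Weighted sum of soft Dice losses over several resolutions.

    preds   : list of probability maps, full resolution first, then coarser
    target  : full-resolution binary mask, pooled to match each scale
    weights : one weight per scale
    """
    total, t = 0.0, target.astype(float)
    for w_k, p in zip(weights, preds):
        inter = (p * t).sum()
        dice = (2.0 * inter + 1e-7) / (p.sum() + t.sum() + 1e-7)
        total += w_k * (1.0 - dice)
        t = downsample2x(t)  # soft (fractional) mask at the next scale
    return total

mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1
loss = multiscale_dice_loss([mask, downsample2x(mask)], mask, [1.0, 0.5])
print(round(loss, 6))  # -> 0.0
```

Supervising every resolution pushes the coarse decoder stages toward sensible intermediate segmentations instead of relying on the final output alone.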

Brain Tumor Segmentation Using a Multi-path CNN Based Method

In this paper an automatic brain tumor segmentation approach based on a multi-path Convolutional Neural Network (CNN) is presented. The method was motivated by the success of multi-path CNNs such as DeepMedic [1] and the method presented in [2], where local and contextual information for segmentation is obtained from multi-scale regions. In addition, the method exploits the fact that a tumor very often introduces high asymmetry into the brain. To help the model distinguish between brain lesions and healthy brain structures such as sulci, gyri and ventricles, the model is provided with spatial information as well. Training and hyper-parameter tuning were performed on the BraTS 2017 training dataset, validation was done on the BraTS 2017 validation dataset, and the final results are reported on the BraTS 2017 testing dataset. The average Dice scores obtained on the testing dataset are 0.6049, 0.8436 and 0.6938 for enhancing tumor, whole tumor and tumor core, respectively.

Sara Sedlar

3D Deep Neural Network-Based Brain Tumor Segmentation Using Multimodality Magnetic Resonance Sequences

Brain tumor segmentation plays a pivotal role in clinical practice and research settings. In this paper, we propose a 3D deep neural network-based algorithm for joint brain tumor detection and intra-tumor structure segmentation, including necrosis, edema, non-enhancing and enhancing tumor, using multimodal magnetic resonance imaging sequences. An ensemble of cascaded U-Nets is designed to detect the tumor and a deep convolutional neural network is constructed for patch-based intra-tumor structure segmentation. This algorithm has been evaluated on the BraTS 2017 Challenge dataset and achieved Dice similarity coefficients of 0.81, 0.69 and 0.55 in the segmentation of whole tumor, core tumor and enhancing tumor, respectively. Our results suggest that the proposed algorithm has promising performance in automated brain tumor segmentation.

Yan Hu, Yong Xia

Automated Brain Tumor Segmentation on Magnetic Resonance Images and Patient’s Overall Survival Prediction Using Support Vector Machines

This study aimed to develop two algorithms, for glioma tumor segmentation and for patients' overall survival (OS) prediction, using machine learning approaches. The segmentation algorithm is fully automated to accurately and efficiently delineate the whole tumor on a magnetic resonance imaging (MRI) scan for radiotherapy treatment planning. The survival algorithm predicts the OS of glioblastoma multiforme (GBM) patients based on regression and classification principles. Multi-institutional BRATS 2017 data comprising MRI scans from 477 patients with high-grade and lower-grade glioma (HGG/LGG) were used in this study. Clinical survival data were available for 291 glioblastoma multiforme (GBM) patients. Support vector machines (SVMs) were used to develop both algorithms. The segmentation chain comprises pre-processing for noise removal, extraction of image-intensity features, segmentation using a non-linear classifier with a Gaussian kernel, and post-processing to enhance the segmentation morphology. The OS prediction algorithm involves two steps: extraction of the patient's age and the segmented tumor's size and location features, followed by prediction using a non-linear classifier and a linear regression model with Gaussian kernels. The algorithms were trained, validated and tested on the BRATS 2017 training, validation and testing datasets. The average Dice for whole tumor segmentation obtained on the validation and testing datasets is 0.53 ± 0.31 (median 0.60), which indicates the consistency of the proposed algorithm on new, "unseen" data. For OS prediction, the mean accuracy is 0.49 for the validation dataset and 0.35 for the testing dataset based on the regression principle, whereas an overall accuracy of 1.00 was achieved in classification into short-, medium- and long-survivor classes for a designed validation dataset. The automated segmentation algorithm takes approximately 3 min.
In its present form, the segmentation tool is fully automated, fast, and provides reasonable segmentation accuracy on the multi-institutional dataset.

Alexander F. I. Osman

Ensembles of Multiple Models and Architectures for Robust Brain Tumour Segmentation

Deep learning approaches such as convolutional neural nets have consistently outperformed previous methods on challenging tasks such as dense semantic segmentation. However, the various proposed networks perform differently, with behaviour largely influenced by architectural choices and training settings. This paper explores Ensembles of Multiple Models and Architectures (EMMA) for robust performance through aggregation of predictions from a wide range of methods. The approach reduces the influence of the meta-parameters of individual models and the risk of overfitting the configuration to a particular database. EMMA can be seen as an unbiased, generic deep learning model which is shown to yield excellent performance, winning the first position in the BRATS 2017 competition among 50+ participating teams.

K. Kamnitsas, W. Bai, E. Ferrante, S. McDonagh, M. Sinclair, N. Pawlowski, M. Rajchl, M. Lee, B. Kainz, D. Rueckert, B. Glocker
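
The core mechanism behind EMMA, aggregating the predictions of heterogeneous models, reduces to averaging their class-probability maps before taking the argmax. A toy NumPy sketch:

```python
import numpy as np

def ensemble_segmentation(prob_maps):
    """Average class-probability maps from several models, then argmax.

    prob_maps : list of (n_classes, H, W) softmax outputs, one per model.
    Averaging marginalizes over each model's architectural and training
    choices, which is the essence of ensembling heterogeneous models.
    """
    mean_probs = np.mean(np.stack(prob_maps), axis=0)
    return np.argmax(mean_probs, axis=0)

# Two toy "models" over 2 classes on a 2x2 image; averaging settles
# the pixels they disagree on.
m1 = np.array([[[0.9, 0.2], [0.8, 0.3]], [[0.1, 0.8], [0.2, 0.7]]])
m2 = np.array([[[0.7, 0.6], [0.9, 0.4]], [[0.3, 0.4], [0.1, 0.6]]])
print(ensemble_segmentation([m1, m2]))
```

In practice each model would be a trained network evaluated on the same volume; only the averaging step is shown here.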

Tumor Segmentation from Multimodal MRI Using Random Forest with Superpixel and Tensor Based Feature Extraction

Identification and localization of brain tumor tissues plays an important role in the diagnosis and treatment planning of gliomas. A fully automated, superpixel-wise, two-stage tumor tissue segmentation algorithm using random forests is proposed in this paper. The first stage identifies the total tumor, and the second stage segments the sub-regions. Features for the random forest classifier are extracted by constructing a tensor from the multimodal MRI data and applying multi-linear singular value decomposition. The proposed method is tested on the BRATS 2017 validation and test datasets. The first-stage model has a Dice score of 83% for the whole tumor on the validation dataset. The total model achieves Dice scores of 77%, 50% and 61% for whole tumor, enhancing tumor and tumor core, respectively, on the test dataset.

H. N. Bharath, S. Colleman, D. M. Sima, S. Van Huffel

Towards Uncertainty-Assisted Brain Tumor Segmentation and Survival Prediction

Uncertainty measures of medical image analysis technologies, such as deep learning, are expected to facilitate their clinical acceptance and synergies with human expertise. Therefore, we propose a full-resolution residual convolutional neural network (FRRN) for brain tumor segmentation and examine the principle of Monte Carlo (MC) Dropout for uncertainty quantification, focusing on the Dropout position and rate. We further feed the resulting brain tumor segmentation into a survival prediction model, which is built on age and a subset of 26 image-derived geometrical features such as volume, volume ratios, surface, surface irregularity and statistics of the enhancing tumor rim width. The results show comparable segmentation performance between the MC Dropout models and a standard weight-scaling Dropout model. A qualitative evaluation further suggests that informative uncertainty can be obtained by applying MC Dropout after each convolution layer. For survival prediction, the results suggest using only a few features besides age. In the BraTS17 challenge, our method achieved 2nd place in the survival task and completed the segmentation task in the 3rd best-performing cluster of statistically different approaches.

Alain Jungo, Richard McKinley, Raphael Meier, Urspeter Knecht, Luis Vera, Julián Pérez-Beteta, David Molina-García, Víctor M. Pérez-García, Roland Wiest, Mauricio Reyes
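
Monte Carlo Dropout keeps dropout active at test time and repeats the stochastic forward pass; the mean of the passes gives the prediction and their spread gives an uncertainty estimate. The toy linear layer below is an illustration only (the paper applies MC Dropout inside an FRRN, not a linear model):

```python
import numpy as np

def mc_dropout_predict(features, weights, p_drop, n_samples, seed=0):
    """Monte Carlo Dropout on a toy linear 'layer'.

    Repeats a stochastic forward pass n_samples times and returns the
    per-output predictive mean and standard deviation (uncertainty).
    """
    rng = np.random.default_rng(seed)
    outs = []
    for _ in range(n_samples):
        mask = rng.random(features.shape) >= p_drop
        dropped = features * mask / (1.0 - p_drop)  # inverted dropout
        outs.append(dropped @ weights)
    outs = np.stack(outs)
    return outs.mean(axis=0), outs.std(axis=0)

x = np.ones(16)
w = np.ones((16, 2))
mean, std = mc_dropout_predict(x, w, p_drop=0.5, n_samples=200)
print(mean.shape, std.shape)  # -> (2,) (2,)
```

In a segmentation network the same idea yields a per-voxel uncertainty map alongside the mean prediction.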

Ischemic Stroke Lesion Image Segmentation


WMH Segmentation Challenge: A Texture-Based Classification Approach

This Grand Challenge at MICCAI 2017 aims to directly compare methods for the automatic segmentation of White Matter Hyperintensities (WMH) of presumed vascular origin. Our method automatically segments WMHs using texture-based classification of pixels within the brain white matter. It uses no a priori information about WMH size, contrast or location. The main goal is to compute the probability of each pixel being normal or WMH tissue, generating a probability map; based on this probability map, we automatically segment the WMHs.

Mariana Bento, Roberto de Souza, Roberto Lotufo, Richard Frayne, Letícia Rittner

White Matter Hyperintensities Segmentation in a Few Seconds Using Fully Convolutional Network and Transfer Learning

In this paper, we propose a fast automatic method that segments white matter hyperintensities (WMH) in 3D brain MR images, using a fully convolutional network (FCN) and transfer learning. The FCN is the Visual Geometry Group network (VGG) pre-trained on ImageNet for natural image classification and fine-tuned on the training dataset of the MICCAI WMH Challenge. We consider three images for each slice of the volume to segment: the T1 slice, the FLAIR slice, and the result of a morphological operator that emphasizes small bright structures. These three 2D images are assembled into a 2D color image, which is fed into the FCN to obtain the 2D segmentation of the corresponding slice. We process all slices and stack the results to form the 3D output segmentation. With this technique, segmenting the WMH in a 3D brain volume takes about 10 s, including pre-processing. Our technique ranked 6th among 20 participants at the MICCAI WMH Challenge.

Yongchao Xu, Thierry Géraud, Élodie Puybareau, Isabelle Bloch, Joseph Chazalon
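
Assembling the three per-slice images into one pseudo-color input for an ImageNet-pretrained network can be sketched as follows; the white top-hat below is an assumed choice for the unnamed "morphological operator that emphasizes small bright structures":

```python
import numpy as np
from scipy.ndimage import white_tophat

def slice_to_rgb(t1_slice, flair_slice, tophat_size=5):
    """Assemble one pseudo-color slice for a 2D FCN.

    Channels: T1, FLAIR, and a white top-hat of FLAIR, which keeps
    bright structures smaller than the structuring element.
    """
    bright = white_tophat(flair_slice, size=tophat_size)
    rgb = np.stack([t1_slice, flair_slice, bright], axis=-1)
    # Rescale each channel to [0, 255], the range an ImageNet-pretrained
    # network expects.
    mins = rgb.min(axis=(0, 1))
    maxs = rgb.max(axis=(0, 1))
    return (rgb - mins) / np.maximum(maxs - mins, 1e-7) * 255.0

rng = np.random.default_rng(0)
t1 = rng.normal(size=(64, 64))
flair = rng.normal(size=(64, 64))
print(slice_to_rgb(t1, flair).shape)  # -> (64, 64, 3)
```

Stacking the per-slice 2D segmentations back along the slice axis then yields the 3D output volume described in the abstract.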

