2021 | Book

Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries

6th International Workshop, BrainLes 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4, 2020, Revised Selected Papers, Part II


About this book

This two-volume set LNCS 12658 and 12659 constitutes the thoroughly refereed proceedings of the 6th International MICCAI Brainlesion Workshop, BrainLes 2020, the International Multimodal Brain Tumor Segmentation (BraTS) challenge, and the Computational Precision Medicine: Radiology-Pathology Challenge on Brain Tumor Classification (CPM-RadPath) challenge. These were held jointly at the 23rd International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2020, in Lima, Peru, in October 2020.*

The revised selected papers presented in these volumes were organized in the following topical sections: brain lesion image analysis (16 selected papers from 21 submissions); brain tumor image segmentation (69 selected papers from 75 submissions); and computational precision medicine: radiology-pathology challenge on brain tumor classification (6 selected papers from 6 submissions).

*The workshop and challenges were held virtually.

Table of Contents


Brain Tumor Segmentation

Lightweight U-Nets for Brain Tumor Segmentation

Automated brain tumor segmentation is a vital topic due to its clinical applications. We propose to exploit a lightweight U-Net-based deep architecture called Skinny for this task—it was originally employed for skin detection from color images, and benefits from a wider spatial context. We train multiple Skinny networks over all image planes (axial, coronal, and sagittal), and form an ensemble containing such models. The experiments showed that our approach allows us to obtain accurate brain tumor delineation from multi-modal magnetic resonance images.

Tomasz Tarasiewicz, Michal Kawulok, Jakub Nalepa
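The plane-wise ensembling described in the abstract above can be sketched as simple probability averaging over the axial, coronal, and sagittal models. This is an illustrative reconstruction, not the authors' exact fusion rule; the function name and the equal-weight average are assumptions.

```python
import numpy as np

def ensemble_plane_predictions(prob_axial, prob_coronal, prob_sagittal):
    """Fuse class-probability volumes of shape (C, D, H, W) predicted by
    models trained on the axial, coronal, and sagittal planes, then take
    the per-voxel argmax as the final label map."""
    mean_prob = (prob_axial + prob_coronal + prob_sagittal) / 3.0
    return mean_prob.argmax(axis=0)  # (C, D, H, W) -> (D, H, W)
```

In practice each input would come from a trained Skinny network applied slice-wise along its plane and re-stacked into a volume before fusion.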
Efficient Brain Tumour Segmentation Using Co-registered Data and Ensembles of Specialised Learners

Gliomas are the most common and aggressive form of all brain tumours, leading to a very short survival time at their highest grade. Hence, swift and accurate treatment planning is key. Magnetic resonance imaging (MRI) is a widely used imaging technique for the assessment of these tumours, but the large amount of data it generates prevents rapid manual segmentation, the task of dividing visual input into tumorous and non-tumorous regions. Hence, reliable automatic segmentation methods are required. This paper proposes, tests and validates two different approaches to achieving this. Firstly, it is hypothesised that co-registering multiple MRI modalities into a single volume results in a more time- and memory-efficient approach which captures the same, if not more, information, resulting in accurate segmentation. Secondly, it is hypothesised that training models independently on different MRI modalities allows models to specialise on certain labels or regions, and that these models can then be ensembled to achieve improved predictions. These hypotheses were tested by training and evaluating 3D U-Net models on the BraTS 2020 data set. The experiments show that these hypotheses are indeed valid.

Beenitaben Shah, Harish Tayyar Madabushi
Efficient MRI Brain Tumor Segmentation Using Multi-resolution Encoder-Decoder Networks

In this paper, we propose an automated three-dimensional (3D) deep learning approach for the segmentation of gliomas in pre-operative brain MRI scans. We introduce a state-of-the-art multi-resolution encoder-decoder architecture which comprises separate branches to incorporate local high-resolution image features and wider low-resolution contextual information. We also use a unified multi-task loss function to provide end-to-end segmentation training. For the task of survival prediction, we propose a regression algorithm based on random forests to predict the survival days of the patients. Our proposed network is fully automated and designed to take patches as input, so it can work on input images of any size. We trained our proposed network on the BraTS 2020 challenge dataset, which consists of 369 training cases, then validated it on 125 unseen validation cases and tested it on 166 unseen cases from the testing dataset using a blind testing approach. The quantitative and qualitative results demonstrate that our proposed network provides efficient segmentation of brain tumors. The mean Dice overlap measures for automatic brain tumor segmentation of the validation dataset against ground truth are 0.87, 0.80, and 0.66 for the whole tumor, core, and enhancing tumor, respectively. The corresponding results for the testing dataset are 0.78, 0.70, and 0.66, respectively. The accuracy measures of the proposed model for the survival prediction task are 0.45 and 0.505 for the validation and testing datasets, respectively.

Mohammadreza Soltaninejad, Tony Pridmore, Michael Pound
Trialing U-Net Training Modifications for Segmenting Gliomas Using Open Source Deep Learning Framework

Automatic brain segmentation has the potential to save time and resources for researchers and clinicians. We aimed to improve upon previously proposed methods by implementing the U-Net model and trialing various modifications to the training and inference strategies. The trials were performed and tested on the Multimodal Brain Tumor Segmentation dataset that provides MR images of brain tumors along with manual segmentations for hundreds of subjects. The U-Net models were trained on a training set of MR images from 369 subjects and then tested against a validation set of images from 125 subjects. The proposed modifications included predicting the labeled region contours, permutations of the input data via rotation and reflection, grouping labels together, as well as creating an ensemble of models. The ensemble of models provided the best results compared to any of the other methods, but the other modifications did not demonstrate improvement. Future work will look at reducing the level of the training augmentation so that the models are better able to generalize to the validation set. Overall, our open source deep learning framework allowed us to quickly implement and test multiple U-Net training modifications. The code for this project is available at .

David G. Ellis, Michele R. Aizenberg
HI-Net: Hyperdense Inception 3D UNet for Brain Tumor Segmentation

The brain tumor segmentation task aims to classify tissue into the whole tumor (WT), tumor core (TC) and enhancing tumor (ET) classes using multimodal MRI images. Quantitative analysis of brain tumors is critical for clinical decision making. While manual segmentation is tedious, time-consuming, and subjective, the task is at the same time very challenging for automatic segmentation methods. Thanks to their powerful learning ability, convolutional neural networks (CNNs), mainly fully convolutional networks, have shown promising brain tumor segmentation performance. This paper further boosts the performance of brain tumor segmentation by proposing the hyperdense inception 3D UNet (HI-Net), which captures multi-scale information by stacking factorizations of 3D weighted convolutional layers in the residual inception block. We use hyperdense connections among factorized convolutional layers to extract more contextual information, with the help of feature reusability. We use a dice loss function to cope with class imbalances. We validate the proposed architecture on the multi-modal brain tumor segmentation challenge (BRATS) 2020 testing dataset. Preliminary results on the BRATS 2020 testing set show that our proposed approach achieves dice (DSC) scores of 0.79457, 0.87494, and 0.83712 for ET, WT, and TC, respectively.

Saqib Qamar, Parvez Ahmad, Linlin Shen
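The Dice loss mentioned in the abstract above is a standard choice for class-imbalanced segmentation; here is a minimal numpy sketch. The smoothing constant and per-sample averaging are common conventions, not details taken from the paper.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over a batch of probability maps.
    pred and target have shape (N, ...) with values in [0, 1];
    the loss is 1 minus the mean per-sample Dice coefficient."""
    axes = tuple(range(1, pred.ndim))
    intersection = np.sum(pred * target, axis=axes)
    denom = np.sum(pred, axis=axes) + np.sum(target, axis=axes)
    dice = (2.0 * intersection + eps) / (denom + eps)
    return float(np.mean(1.0 - dice))
```

Because the loss is driven by overlap rather than per-voxel accuracy, rare classes such as enhancing tumor contribute on equal footing with the dominant background.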
HNF-Net for Brain Tumor Segmentation Using Multimodal MR Imaging: 2nd Place Solution to BraTS Challenge 2020 Segmentation Task

In this paper, we propose a Hybrid High-resolution and Non-local Feature Network (H²NF-Net) to segment brain tumors in multimodal MR images. Our H²NF-Net uses single and cascaded HNF-Nets to segment different brain tumor sub-regions and combines the predictions as the final segmentation. We trained and evaluated our model on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 dataset. The results on the test set show that the combination of the single and cascaded models achieved average Dice scores of 0.78751, 0.91290, and 0.85461, as well as Hausdorff distances (95%) of 26.57525, 4.18426, and 4.97162 for the enhancing tumor, whole tumor, and tumor core, respectively. Our method won second place in the BraTS 2020 challenge segmentation task out of nearly 80 participants.

Haozhe Jia, Weidong Cai, Heng Huang, Yong Xia
2D Dense-UNet: A Clinically Valid Approach to Automated Glioma Segmentation

Brain tumour segmentation is a requirement of many quantitative MRI analyses involving glioma. This paper argues that 2D slice-wise approaches to brain tumour segmentation may be more compatible with current MRI acquisition protocols than 3D methods because clinical MRI is most commonly a slice-based modality. A 2D Dense-UNet segmentation model was trained on the BraTS 2020 dataset. Mean Dice values achieved on the test dataset were: 0.859 (WT), 0.788 (TC) and 0.766 (ET). Median test data Dice values were: 0.902 (WT), 0.887 (TC) and 0.823 (ET). Results were comparable to previous high performing BraTS entries. 2D segmentation may have advantages over 3D methods in clinical MRI datasets where volumetric sequences are not universally available.

Hugh McHugh, Gonzalo Maso Talou, Alan Wang
Attention U-Net with Dimension-Hybridized Fast Data Density Functional Theory for Automatic Brain Tumor Image Segmentation

In this article, we propose a hybridized method for brain tumor image segmentation that fuses topological heterogeneities of images with the attention mechanism in neural networks. The three-dimensional image datasets were first pre-processed using histogram normalization to standardize pixel intensities. The normalized images were then fed in parallel into affine transformations and feature pre-extraction. The technique of fast data density functional theory (fDDFT) was adopted for topological feature extraction. Under the framework of fDDFT, 3-dimensional topological features were extracted and used for 2-dimensional tumor image segmentation; the significant 2-dimensional images were then reconstructed back into 3-dimensional intensity feature maps using physical perceptrons, filtering out undesired image components in the process. Thus, at the pre-processing stage, the proposed framework simultaneously provides dimension-hybridized intensity feature maps and affine-transformed image sets. The feature maps and transformed images were concatenated and became the inputs of the attention U-Net. By employing gate control of the data flow, the encoder can act as a masked feature tracker to concatenate the features produced by the decoder. Under the proposed algorithmic scheme, we constructed a fast method of dimension-hybridized feature pre-extraction for the neural network training procedure, so the model size as well as the computational complexity can be safely reduced.

Zi-Jun Su, Tang-Chen Chang, Yen-Ling Tai, Shu-Jung Chang, Chien-Chang Chen
MVP U-Net: Multi-View Pointwise U-Net for Brain Tumor Segmentation

It is a challenging task to segment brain tumors from multi-modality MRI scans, and how to segment and reconstruct brain tumors more accurately and faster remains an open question. The key is to effectively model the spatial-temporal information that resides in the input volumetric data. In this paper, we propose the Multi-View Pointwise U-Net (MVP U-Net) for brain tumor segmentation. Our segmentation approach follows the encoder-decoder based 3D U-Net architecture, in which each 3D convolution is replaced by three 2D multi-view convolutions over the three orthogonal views (axial, sagittal, coronal) of the input data to learn spatial features, plus one pointwise convolution to learn channel features. Further, we suitably modify the Squeeze-and-Excitation (SE) block and introduce it into our original MVP U-Net after the concatenation section. In this way, the generalization ability of the model can be improved while the number of parameters is reduced. On the BraTS 2020 testing dataset, the mean Dice scores of the proposed method were 0.715, 0.839, and 0.768 for enhancing tumor, whole tumor, and tumor core, respectively. The results show the effectiveness of the proposed MVP U-Net with the SE block for multi-modal brain tumor segmentation.

Changchen Zhao, Zhiming Zhao, Qingrun Zeng, Yuanjing Feng
Glioma Segmentation with 3D U-Net Backed with Energy-Based Post-Processing

This paper proposes a glioma segmentation method based on neural networks. The base of the network is a U-Net, expanded by residual blocks. Several preprocessing steps were applied before training, such as intensity normalization, high intensity cutting, cropping, and random flips. 2D and 3D solutions were implemented and tested; results show that the 3D network outperforms the 2D one, so we kept the 3D approach. The novelty of the method is the energy-based post-processing: snakes [10] and conditional random fields (CRF) [11] were applied to the neural network's predictions. A snake, or active contour, needs an initial outline around the object (e.g. the network's predicted outline) and can then correct the contours of the tumor by finding an energy minimum based on the intensity values in a given area. A CRF is a specific type of graphical model; it uses the network's prediction and the raw image features to estimate the posterior distribution (the tumor contour) using energy function minimization. The proposed methods are evaluated within the framework of the BRATS 2020 challenge. Measured on the test dataset, the mean dice scores of the whole tumor (WT), tumor core (TC) and enhancing tumor (ET) are 86.9%, 83.2% and 81.8%, respectively. The results show high performance and promise for future work in tumor segmentation, even outside of the brain.

Richard Zsamboki, Petra Takacs, Borbala Deak-Karancsi
nnU-Net for Brain Tumor Segmentation

We apply nnU-Net to the segmentation task of the BraTS 2020 challenge. The unmodified nnU-Net baseline configuration already achieves a respectable result. By incorporating BraTS-specific modifications regarding postprocessing, region-based training, a more aggressive data augmentation as well as several minor modifications to the nnU-Net pipeline we are able to improve its segmentation performance substantially. We furthermore re-implement the BraTS ranking scheme to determine which of our nnU-Net variants best fits the requirements imposed by it. Our method took the first place in the BraTS 2020 competition with Dice scores of 88.95, 85.06 and 82.03 and HD95 values of 8.498, 17.337 and 17.805 for whole tumor, tumor core and enhancing tumor, respectively.

Fabian Isensee, Paul F. Jäger, Peter M. Full, Philipp Vollmuth, Klaus H. Maier-Hein
A Deep Random Forest Approach for Multimodal Brain Tumor Segmentation

Locating a brain tumor and its various sub-regions is crucial for treating the tumor in humans. The challenge lies in taking cues for the identification of tumors of different size, shape, and location in the brain using multimodal data. Numerous works have been published in the recent BRATS challenges [16]. In this work, an ensemble-based approach using Deep Random Forests [23] in an incremental learning mechanism is deployed. The proposed approach divides data and features into disjoint subsets and learns in chunks as a cascading architecture of multi-layer RFs. Each layer is itself a combination of RFs, each using a sample of the data to capture the diversity present. Given the huge amount of data, the proposed approach is fast and parallelizable. In addition, we propose a new kind of Local Binary Pattern (LBP) features with rotation, along with further handcrafted features: primarily texture-based, appearance-based, and statistics-based features. The experiments are performed only on the MICCAI BRATS 2020 dataset.

Sameer Shaikh, Ashish Phophalia
Brain Tumor Segmentation and Associated Uncertainty Evaluation Using Multi-sequences MRI Mixture Data Preprocessing

Brain tumor segmentation is one of the crucial tasks in daily clinical workflow, requiring a lot of effort when studying computed tomography (CT) or structural magnetic resonance imaging (MRI) scans of patients with various pathologies. MRI is the most common method of primary detection and non-invasive diagnostics, and a source of recommendations for further treatment of brain diseases. The brain is a complex structure, different areas of which have different functional significance. In this paper, we extend our previous work on robust pre-processing methods which consider all available information from MRI scans by composing the T1, T1C, T2 and T2-FLAIR sequences into a unique input. This approach enriches the input data for the segmentation process and helps to improve segmentation accuracy and the associated uncertainty evaluation performance. The proposed method demonstrates a strong improvement on the segmentation problem with respect to the Dice metric, Sensitivity and Specificity, compared to an identical training/validation procedure based on any single sequence, regardless of the chosen neural network architecture. The obtained results demonstrate significant performance improvement from combining three MRI sequences into a 3-channel RGB-like image for the considered brain tumor segmentation tasks. In this work we also provide a comparison of various gradient descent optimization methods and of different backbone architectures.

Vladimir Groza, Bair Tuchinov, Evgeniya Amelina, Evgeniy Pavlovskiy, Nikolay Tolstokulakov, Mikhail Amelin, Sergey Golushko, Andrey Letyagin
A Deep Supervision CNN Network for Brain Tumor Segmentation

Brain tumor segmentation is essential for the diagnosis and treatment of brain diseases. However, most current 3D deep learning technologies require a large number of magnetic resonance images (MRIs). In order to make full use of a small dataset like BraTS 2020, we propose a deep supervision-based 2D residual U-Net for efficient and automatic brain tumor segmentation. In our network, residual blocks are used to alleviate the gradient dispersion caused by excessive network depth, while multiple deep supervision branches serve as regularization of the network; they improve training stability and enable the encoder to extract richer visual features. Evaluation of the segmentation results by CBICA's IPP verifies the effectiveness of our method. The average Dice scores of ET, WT and TC are 0.7593, 0.8726 and 0.7879, respectively.

Shiqiang Ma, Zehua Zhang, Jiaqi Ding, Xuejian Li, Jijun Tang, Fei Guo
Multi-threshold Attention U-Net (MTAU) Based Model for Multimodal Brain Tumor Segmentation in MRI Scans

Gliomas are one of the most frequent brain tumors and are classified into high-grade and low-grade gliomas. The segmentation of various regions, such as the tumor core and enhancing tumor, plays an important role in determining severity and prognosis. Here, we have developed a multi-threshold model based on the attention U-Net for identification of various regions of the tumor in magnetic resonance imaging (MRI). We propose multi-path segmentation and build three separate models for the different regions of interest. The proposed model achieved mean Dice coefficients of 0.59, 0.72, and 0.61 for enhancing tumor, whole tumor and tumor core, respectively, on the training dataset. The same model gave mean Dice coefficients of 0.57, 0.73, and 0.61 on the validation dataset and 0.59, 0.72, and 0.57 on the test dataset.

Navchetan Awasthi, Rohit Pardasani, Swati Gupta
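Region-wise models like the three above produce independent binary probability maps that must be merged into one label map. A minimal sketch of such a merge, assuming the standard BraTS nesting (whole tumor contains tumor core, which contains enhancing tumor); the merge order, threshold, and function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def merge_region_probs(p_wt, p_tc, p_et, thr=0.5):
    """Combine three binary probability maps (whole tumor, tumor core,
    enhancing tumor) into a single label map, writing inner regions last
    so they override outer ones. Label codes follow the BraTS convention
    (2 = edema, 1 = necrotic/non-enhancing core, 4 = enhancing tumor)."""
    labels = np.zeros(p_wt.shape, dtype=np.int32)
    labels[p_wt > thr] = 2
    labels[p_tc > thr] = 1
    labels[p_et > thr] = 4
    return labels
```

Writing the maps in outer-to-inner order enforces the nesting even when the three models disagree slightly at region boundaries.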
Multi-stage Deep Layer Aggregation for Brain Tumor Segmentation

Gliomas are among the most aggressive and deadly brain tumors. This paper details the proposed Deep Neural Network architecture for brain tumor segmentation from Magnetic Resonance Images. The architecture consists of a cascade of three Deep Layer Aggregation neural networks, where each stage elaborates the response using the feature maps and the probabilities of the previous stage, together with the MRI channels, as inputs. The neuroimaging data are part of the publicly available Brain Tumor Segmentation (BraTS) 2020 challenge dataset, on whose Validation and Test sets we evaluated our proposal. On the Test set, the experimental results achieved Dice scores of 0.8858, 0.8297 and 0.7900, with Hausdorff distances of 5.32 mm, 22.32 mm and 20.44 mm for the whole tumor, tumor core and enhancing tumor, respectively.

Carlos A. Silva, Adriano Pinto, Sérgio Pereira, Ana Lopes
Glioma Segmentation Using Ensemble of 2D/3D U-Nets and Survival Prediction Using Multiple Features Fusion

Automatic segmentation of gliomas from brain Magnetic Resonance Imaging (MRI) volumes is an essential step for tumor detection. Various 2D Convolutional Neural Network (2D-CNN) based architectures and their 3D variants (3D-CNNs) have been proposed in previous studies to capture contextual information. The 3D models capture depth information, making them a natural choice for glioma segmentation from 3D MRI images. However, the 2D models can be trained in a relatively shorter time, making their parameter tuning easier. Considering these facts, we propose an ensemble of 2D and 3D models to better utilize their respective benefits. After segmentation, prediction of Overall Survival (OS) time was performed on segmented tumor sub-regions. For this task, multiple radiomic and image-based features were extracted from MRI volumes and segmented sub-regions and fused to predict the OS time of patients. Experimental results on the BraTS 2020 testing dataset achieved a Dice score of 0.79 on Enhancing Tumor (ET), 0.87 on Whole Tumor (WT), and 0.83 on Tumor Core (TC). For the OS prediction task, results on the BraTS 2020 testing leaderboard achieved an accuracy of 0.57, a Mean Square Error (MSE) of 392,963.189, a Median SE of 162,006.3, and a Spearman R correlation score of −0.084.

Muhammad Junaid Ali, Muhammad Tahir Akram, Hira Saleem, Basit Raza, Ahmad Raza Shahid
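Survival pipelines like the one above typically start from simple volumetric features computed on the segmented sub-regions before fusing in richer radiomics. A hypothetical numpy sketch; the feature names, the voxel-volume parameter, and the derived ratio are assumptions for illustration, not the paper's feature set.

```python
import numpy as np

def volumetric_features(labels, voxel_volume_mm3=1.0):
    """Extract simple volumetric features from a BraTS-style label map
    (1 = necrotic/non-enhancing core, 2 = edema, 4 = enhancing tumor)."""
    feats = {}
    for name, lab in [("necrotic", 1), ("edema", 2), ("enhancing", 4)]:
        feats[f"vol_{name}"] = float(np.sum(labels == lab)) * voxel_volume_mm3
    whole = sum(feats.values())  # whole tumor = union of the three regions
    feats["vol_whole"] = whole
    feats["frac_enhancing"] = feats["vol_enhancing"] / whole if whole else 0.0
    return feats
```

Feature vectors of this kind, concatenated with age and image-based descriptors, are what a downstream regressor would consume for OS prediction.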
Generalized Wasserstein Dice Score, Distributionally Robust Deep Learning, and Ranger for Brain Tumor Segmentation: BraTS 2020 Challenge

Training a deep neural network is an optimization problem with four main ingredients: the design of the deep neural network, the per-sample loss function, the population loss function, and the optimizer. However, methods developed to compete in recent BraTS challenges tend to focus only on the design of deep neural network architectures, while paying less attention to the three other aspects. In this paper, we experimented with adopting the opposite approach. We stuck to a generic and state-of-the-art 3D U-Net architecture and experimented with a non-standard per-sample loss function, the generalized Wasserstein Dice loss, a non-standard population loss function, corresponding to distributionally robust optimization, and a non-standard optimizer, Ranger. Those variations were selected specifically for the problem of multi-class brain tumor segmentation. The generalized Wasserstein Dice loss is a per-sample loss function that allows taking advantage of the hierarchical structure of the tumor regions labeled in BraTS. Distributionally robust optimization is a generalization of empirical risk minimization that accounts for the presence of underrepresented subdomains in the training dataset. Ranger is a generalization of the widely used Adam optimizer that is more stable with small batch size and noisy labels. We found that each of those variations of the optimization of deep neural networks for brain tumor segmentation leads to improvements in terms of Dice scores and Hausdorff distances. With an ensemble of three deep neural networks trained with various optimization procedures, we achieved promising results on the validation dataset and the testing dataset of the BraTS 2020 challenge. Our ensemble ranked fourth out of 78 for the segmentation task of the BraTS 2020 challenge with mean Dice scores of 88.9, 84.1, and 81.4, and mean Hausdorff distances at 95% of 6.4, 19.4, and 15.8 for the whole tumor, the tumor core, and the enhancing tumor.

Lucas Fidon, Sébastien Ourselin, Tom Vercauteren
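Distributionally robust optimization as described above is often realized by re-weighting training samples exponentially by their loss, so underrepresented, hard cases dominate the population loss. A minimal sketch of such hardness weighting; the exponential form and the `beta` hyperparameter are a common formulation, not necessarily the paper's exact one.

```python
import numpy as np

def dro_weights(per_sample_losses, beta=1.0):
    """Hardness weighting for distributionally robust optimization:
    samples with higher loss receive exponentially larger weight.
    beta -> 0 recovers plain empirical risk minimization (uniform weights)."""
    losses = np.asarray(per_sample_losses, dtype=float)
    w = np.exp(beta * (losses - losses.max()))  # subtract max for stability
    return w / w.sum()
```

During training, these weights would either rescale per-sample gradients or drive a hardness-weighted sampler over the dataset.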
3D Semantic Segmentation of Brain Tumor for Overall Survival Prediction

Glioma, a malignant brain tumor, requires immediate treatment to improve the survival of patients. The heterogeneous nature of glioma makes its segmentation difficult, especially for sub-regions like necrosis, enhancing tumor, non-enhancing tumor, and edema. Deep neural networks like fully convolutional neural networks and ensembles of fully convolutional neural networks have been successful for glioma segmentation. This paper demonstrates the use of a 3D fully convolutional neural network with a three-layer encoder-decoder approach. The dense connections within each layer help in diversified feature learning. The network takes 3D patches from the T1, T2, T1c, and FLAIR modalities as input. The loss function combines the dice loss and focal loss functions. The Dice similarity coefficients on the training and validation sets are 0.88, 0.83, 0.78 and 0.87, 0.75, 0.76 for the whole tumor, tumor core and enhancing tumor, respectively. The network achieves comparable performance with other state-of-the-art ensemble approaches. A random forest regressor is trained on shape, volumetric, and age features extracted from the ground truth for overall survival prediction. The regressor achieves an accuracy of 56.8% and 51.7% on the training and validation sets.

Rupal R. Agravat, Mehul S. Raval
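A combined dice-plus-focal objective like the one described above can be sketched in numpy as follows. The equal weighting of the two terms and the `gamma` value are common defaults, not values taken from the paper.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 minus the overlap-based Dice coefficient."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-7):
    """Binary focal loss: down-weights easy voxels so training
    concentrates on hard, misclassified ones."""
    pred = np.clip(pred, eps, 1.0 - eps)
    pt = np.where(target == 1, pred, 1.0 - pred)
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))

def combined_loss(pred, target):
    """Sum of the overlap term and the hard-voxel term."""
    return float(dice_loss(pred, target)) + focal_loss(pred, target)
```

The Dice term handles class imbalance globally, while the focal term sharpens gradients at ambiguous boundary voxels; summing them is the simplest way to get both behaviors.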
Segmentation, Survival Prediction, and Uncertainty Estimation of Gliomas from Multimodal 3D MRI Using Selective Kernel Networks

Segmentation of gliomas into distinct sub-regions can help guide clinicians in tasks such as surgical planning, prognosis, and treatment response assessment. Manual delineation is time-consuming and prone to inter-rater variability. In this work, we propose a deep learning based automatic segmentation method that takes T1-pre, T1-post, T2, and FLAIR MRI as input and outputs a segmentation map of the sub-regions of interest (enhancing tumor (ET), whole tumor (WT), and tumor core (TC)). Our U-Net based architecture incorporates a modified selective kernel block to enable the network to adjust its receptive field via an attention mechanism, enabling more robust segmentation of gliomas of all appearances, shapes, and scales. Using this approach on the official BraTS 2020 testing set, we obtain Dice scores of .822, .889, and .834, and Hausdorff distances (95%) of 11.588, 4.812, and 21.984 for ET, WT, and TC, respectively. For prediction of overall survival, we extract deep features from the bottleneck layer of this network and train a Cox Proportional Hazards model, obtaining .495 accuracy. For uncertainty prediction, we achieve AUCs of .850, .914, and .854 for ET, WT, and TC, respectively, which earned us third place for this task.

Jay Patel, Ken Chang, Katharina Hoebel, Mishka Gidwani, Nishanth Arun, Sharut Gupta, Mehak Aggarwal, Praveer Singh, Bruce R. Rosen, Elizabeth R. Gerstner, Jayashree Kalpathy-Cramer
3D Brain Tumor Segmentation and Survival Prediction Using Ensembles of Convolutional Neural Networks

Convolutional Neural Networks (CNNs) are the state of the art in many medical image applications, including brain tumor segmentation. However, no successful studies using CNNs have been reported for survival prediction in glioma patients. In this work, we present two different solutions: one for tumor segmentation and the other for survival prediction. We propose using an ensemble of asymmetric U-Net-like architectures to improve segmentation results in the enhancing tumor region, and a DenseNet model for survival prognosis. We quantitatively compare deep learning with classical regression and classification models based on radiomics features and tumor growth model features for survival prediction on the BraTS 2020 database, and we provide insight into the limitations of these models in accurately predicting survival. Our method's current performance on the BraTS 2020 test set is Dice scores of 0.80, 0.87, and 0.80 for enhancing tumor, whole tumor, and tumor core, respectively, with an overall Dice of 0.82. For the survival prediction task, we obtained an accuracy of 0.57. In addition, we propose a voxel-wise uncertainty estimation of our segmentation method that can be used effectively to improve brain tumor segmentation.

S. Rosas González, I. Zemmoura, C. Tauber
Brain Tumour Segmentation Using Probabilistic U-Net

We describe our approach towards the segmentation task of the BRATS 2020 challenge. We use the Probabilistic UNet to explore the effect of sampling different segmentation maps, which may be useful to experts when the opinions of different experts vary. We use 2D segmentation models and approach the problem in a slice-by-slice manner. To explore the possibility of designing robust models, we use self attention in the UNet, and the prior and posterior networks, and explore the effect of varying the number of attention blocks on the quality of the segmentation. Our model achieves Dice scores of 0.81898 on Whole Tumour, 0.71681 on Tumour Core, and 0.68893 on Enhancing Tumour on the Validation data, and 0.7988 on Whole Tumour, 0.7771 on Tumour Core, and 0.7249 on Enhancing Tumour on the Testing data. Our code is available at .

Chinmay Savadikar, Rahul Kulhalli, Bhushan Garware
Segmenting Brain Tumors from MRI Using Cascaded 3D U-Nets

In this paper, we exploit a cascaded 3D U-Net architecture to perform detection and segmentation of brain tumors (low- and high-grade gliomas) from multi-modal magnetic resonance scans. First, we detect tumors in a binary-classification setting, and they later undergo multi-class segmentation. To provide high-quality generalization, we investigate several regularization techniques that help improve the segmentation performance obtained for the unseen scans, and benefit from the expert knowledge of a senior radiologist captured in a form of several post-processing routines. Our preliminary experiments elaborated over the BraTS’20 validation set revealed that our approach delivers high-quality tumor delineation.

Krzysztof Kotowski, Szymon Adamski, Wojciech Malara, Bartosz Machura, Lukasz Zarudzki, Jakub Nalepa
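The two-stage cascade above can be sketched as a simple gating of the multi-class output by the binary detector. This is an illustrative reconstruction; the threshold and function names are assumptions, and the paper's actual post-processing is richer (expert-derived routines).

```python
import numpy as np

def cascade_segment(binary_prob, multiclass_prob, det_thr=0.5):
    """Stage 1 detects tumor voxels (binary_prob, shape (D, H, W));
    stage 2 assigns class labels (multiclass_prob, shape (C, D, H, W))
    only where the detector fired; everything else stays background 0."""
    mask = binary_prob > det_thr
    labels = multiclass_prob.argmax(axis=0)  # (C, D, H, W) -> (D, H, W)
    return np.where(mask, labels, 0)
```

Gating by the detector suppresses isolated false-positive class labels far from any detected tumor, which is the main benefit of running detection before multi-class segmentation.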
A Deep Supervised U-Attention Net for Pixel-Wise Brain Tumor Segmentation

Glioblastoma (GBM) is one of the leading causes of cancer death. Imaging diagnostics are critical for all phases in the treatment of brain tumors. However, manually-checked output by a radiologist has several limitations, such as tedious annotation, time consumption, and subjective biases, which influence the delineation of the brain tumor affected region. Therefore, the development of automatic segmentation frameworks has attracted much attention from both clinical and academic researchers. Most recent state-of-the-art algorithms are derived from deep learning methodologies such as the U-Net and attention networks. In this paper, we propose a deep supervised U-Attention Net framework for pixel-wise brain tumor segmentation, which combines the U-Net, an attention network, and a deep supervised multistage layer. This allows us to achieve both low-resolution and high-resolution feature representations, even for small tumor regions. Preliminary results of our method on the training data have mean dice coefficients of about 0.75, 0.88, and 0.80; the validation data achieve mean dice coefficients of 0.67, 0.86, and 0.70, for enhancing tumor (ET), whole tumor (WT), and tumor core (TC), respectively.

Jia Hua Xu, Wai Po Kevin Teng, Xiong Jun Wang, Andreas Nürnberger
A Two-Stage Atrous Convolution Neural Network for Brain Tumor Segmentation and Survival Prediction

Glioma is a type of heterogeneous tumor originating in the brain, characterized by the coexistence of multiple subregions with different phenotypic characteristics, which further determine heterogeneous profiles, likely to respond variably to treatment. Identifying spatial variations of gliomas is necessary for targeted therapy. The current paper proposes a neural network composed of heterogeneous building blocks to identify the different histologic sub-regions of gliomas in multi-parametric MRIs and further extracts radiomic features to estimate a patient’s prognosis. The model is evaluated on the BraTS 2020 dataset.

Radu Miron, Ramona Albert, Mihaela Breaban
TwoPath U-Net for Automatic Brain Tumor Segmentation from Multimodal MRI Data

A novel encoder-decoder deep learning network called TwoPath U-Net for multi-class automatic brain tumor segmentation is presented. The network uses cascaded local and global feature extraction paths in its down-sampling path, which allows it to learn different aspects of both low-level and high-level features. The proposed network architecture, using full images and patches as input, was trained on the BraTS2020 training dataset. We tested the network performance using the BraTS2019 validation dataset and obtained mean dice scores of 0.76, 0.64, and 0.58 and 95% Hausdorff distances of 25.05, 32.83, and 37.57 for the whole tumor, tumor core, and enhancing tumor regions, respectively.

Keerati Kaewrak, John Soraghan, Gaetano Di Caterina, Derek Grose
Brain Tumor Segmentation and Survival Prediction Using Automatic Hard Mining in 3D CNN Architecture

We utilize 3D fully convolutional neural networks (CNN) to segment gliomas and their constituents from multimodal Magnetic Resonance Images (MRI). The architecture uses dense connectivity patterns and residual connections to reduce the number of weights, and is initialized with weights obtained by training the model on the BraTS 2018 dataset. Hard mining is performed during training to focus on difficult segmentation cases: the dice similarity coefficient (DSC) threshold used to select hard cases is increased as the epoch count grows. On the BraTS 2020 validation data (n = 125), this architecture achieved tumor core, whole tumor, and active tumor dice scores of 0.744, 0.876, and 0.714, respectively. On the test dataset, the DSC of the tumor core and active tumor improves by approximately 7%: our network achieves DSCs of 0.775, 0.815, and 0.85 on the BraTS 2020 test data for enhancing tumor, tumor core, and whole tumor, respectively. Overall survival of a subject is determined using conventional machine learning on radiomics features extracted from the generated segmentation masks. Our approach achieves accuracies of 0.448 and 0.452 on the validation and test datasets, respectively.
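The hard-mining schedule described above can be sketched as follows; the base threshold, step size, and cap are assumed values for illustration, not the authors' settings:

```python
def hard_mining_threshold(epoch, base=0.5, step=0.05, cap=0.9):
    # The DSC threshold grows with the epoch, so cases that were once
    # "easy enough" re-enter the hard pool as training progresses.
    return min(base + step * epoch, cap)

def select_hard_cases(dsc_per_case, epoch):
    # A case counts as hard if its current DSC falls below the threshold.
    thr = hard_mining_threshold(epoch)
    return [i for i, d in enumerate(dsc_per_case) if d < thr]
```

Raising the bar over time keeps the network training on progressively harder examples instead of saturating on cases it already segments well.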

Vikas Kumar Anand, Sanjeev Grampurohit, Pranav Aurangabadkar, Avinash Kori, Mahendra Khened, Raghavendra S. Bhat, Ganapathy Krishnamurthi
Some New Tricks for Deep Glioma Segmentation

This manuscript outlines the design of methods, and initial progress, on automatic detection of glioma from MRI using deep neural networks, applied and evaluated for the 2020 Brain Tumor Segmentation (BraTS) Challenge. Our approach builds on existing work using U-Net architectures and evaluates a variety of deep learning techniques, including model averaging and adaptive learning rates.

Chase Duncan, Francis Roxas, Neel Jani, Jane Maksimovic, Matthew Bramlet, Brad Sutton, Sanmi Koyejo
PieceNet: A Redundant UNet Ensemble

Segmentation of gliomas is essential to aid clinical diagnosis and treatment; however, imaging artifacts and heterogeneous shape complicate this task. In the last few years, researchers have shown the effectiveness of 3D UNets on this problem. They have found success using 3D patches to predict the class label for the center voxel; however, even a single patch-based UNet may miss representations that another UNet could learn. To circumvent this issue, I developed PieceNet, a deep learning model using a novel ensemble of patch-based 3D UNets. In particular, I used uncorrected modalities to train a standard 3D UNet for all label classes as well as one 3D UNet for each individual label class. Initial results indicate this 4-network ensemble is potentially a superior technique to a traditional patch-based 3D UNet on uncorrected images; however, further work needs to be done to allow for more competitive enhancing tumor segmentation. Moreover, I developed a linear probability model using radiomic and non-imaging features that predicts post-surgery survival.
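A minimal sketch of the described 4-network ensemble's inference step, assuming the all-class network and the per-class networks output probability maps of matching shape; the combination rule shown here (an equal-weight average) is an assumption, not necessarily PieceNet's:

```python
import numpy as np

def piecewise_ensemble(multi_probs, binary_probs, alpha=0.5):
    # multi_probs: (C, ...) softmax output of the all-class 3D UNet
    # binary_probs: (C, ...) stacked sigmoid outputs of the per-class UNets
    combined = alpha * multi_probs + (1 - alpha) * binary_probs
    return combined.argmax(axis=0)
```

The per-class networks can thus raise a label's score in regions the shared multi-class network under-represents.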

Vikas L. Bommineni
Cerberus: A Multi-headed Network for Brain Tumor Segmentation

The automated analysis of medical images requires robust and accurate algorithms that address the inherent challenges of identifying heterogeneous anatomical and pathological structures, such as brain tumors, in large volumetric images. In this paper, we present Cerberus, a single lightweight convolutional neural network model for the segmentation of fine-grained brain tumor regions in multichannel MRIs. Cerberus has an encoder-decoder architecture that takes advantage of a shared encoding phase to learn common representations for these regions and, then, uses specialized decoders to produce detailed segmentations. Cerberus learns to combine the weights learned for each category to produce a final multi-label segmentation. We evaluate our approach on the official test set of the Brain Tumor Segmentation Challenge 2020, and we obtain dice scores of 0.807 for enhancing tumor, 0.867 for whole tumor and 0.826 for tumor core.
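Cerberus's specialized decoders each produce one tumor-region map. One simple way to merge such nested region probabilities into a single BraTS label map (a common convention in BraTS pipelines, not necessarily the paper's learned combination) is:

```python
import numpy as np

def merge_region_maps(wt, tc, et, thr=0.5):
    # Nested region probabilities -> BraTS labels:
    # 2 = peritumoral edema (WT only), 1 = non-enhancing tumor core,
    # 4 = enhancing tumor. Later assignments overwrite earlier ones,
    # exploiting the nesting ET ⊂ TC ⊂ WT.
    seg = np.zeros(wt.shape, dtype=np.uint8)
    seg[wt > thr] = 2
    seg[tc > thr] = 1
    seg[et > thr] = 4
    return seg
```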

Laura Daza, Catalina Gómez, Pablo Arbeláez
An Automatic Overall Survival Time Prediction System for Glioma Brain Tumor Patients Based on Volumetric and Shape Features

An automatic overall survival time prediction system for glioma brain tumor patients is proposed and developed based on volumetric, location, and shape features. The proposed system consists of three stages: segmentation of brain tumor sub-regions, feature extraction, and overall survival time prediction. A deep learning structure based on a modified 3-dimensional (3D) U-Net is proposed to develop an accurate segmentation model that identifies and localizes the three glioma brain tumor sub-regions: gadolinium (GD)-enhancing tumor, peritumoral edema, and the necrotic and non-enhancing tumor core (NCR/NET). The best segmentation performance is achieved by the modified 3D U-Net based on an Accumulated Encoder (U-Net AE) with a Generalized Dice-Loss (GDL) function trained by the ADAM optimization algorithm. This model achieves Average Dice-Similarity (ADS) scores of 0.8898, 0.8819, and 0.8524 for Whole Tumor (WT), Tumor Core (TC), and Enhancing Tumor (ET), respectively, on the training dataset of the Multimodal Brain Tumor Segmentation challenge (BraTS) 2020. Various combinations of volumetric (based on brain functionality regions), shape, and location features are extracted to train an overall survival time classification model using a Neural Network (NN). The model classifies the data into three classes: short-survivors, mid-survivors, and long-survivors. An information fusion strategy based on feature-level fusion and decision-level fusion is used to produce the best prediction model. The best performance is achieved by the ensemble model and the shape features model, both with an accuracy of 55.2% on the BraTS 2020 validation dataset. The ensemble model achieves a competitive accuracy of 55.1% on the BraTS 2020 test dataset.
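The volumetric part of the feature extraction stage can be sketched as below. The label values follow the BraTS convention; returning per-label volumes together with their ratios to the whole tumor is an illustrative choice, not the paper's exact feature set:

```python
import numpy as np

def volumetric_features(seg, voxel_volume=1.0):
    # seg uses BraTS labels: 1 = NCR/NET, 2 = peritumoral edema, 4 = enhancing
    vols = {l: float((seg == l).sum()) * voxel_volume for l in (1, 2, 4)}
    whole = sum(vols.values())
    ratios = {l: (v / whole if whole else 0.0) for l, v in vols.items()}
    return vols, whole, ratios
```

Such volumes and ratios, possibly broken down further by brain functionality regions, then feed the NN survival classifier.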

Lina Chato, Pushkin Kachroo, Shahram Latifi
Squeeze-and-Excitation Normalization for Brain Tumor Segmentation

In this paper we describe our approach to glioma segmentation in multi-sequence magnetic resonance imaging (MRI) in the context of the MICCAI 2020 Brain Tumor Segmentation Challenge (BraTS). We propose an architecture based on U-Net with a new computational unit termed “SE Norm” that brings significant improvements in segmentation quality. Our approach obtained competitive results on the validation (Dice scores of 0.780, 0.911, 0.863) and test (Dice scores of 0.805, 0.887, 0.843) sets for the enhancing tumor, whole tumor and tumor core sub-regions. The full implementation and trained models are available at .
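The squeeze-and-excitation idea behind a unit like SE Norm can be sketched in NumPy as: instance-normalize each channel, then rescale with channel gates produced by a small squeeze-and-excitation MLP. In the paper the unit supplies normalization parameters; this sketch applies a gated scale only, and the weight shapes are illustrative assumptions:

```python
import numpy as np

def se_gate(x, w1, w2):
    # Squeeze: global average pool per channel; excite: tiny MLP + sigmoid.
    s = x.mean(axis=(1, 2))                   # (C,) channel descriptor
    h = np.maximum(0.0, w1 @ s)               # (C // r,) reduced, ReLU
    return 1.0 / (1.0 + np.exp(-(w2 @ h)))    # (C,) gates in (0, 1)

def se_norm(x, w1, w2, eps=1e-5):
    # Instance-normalize each channel of x: (C, H, W), then rescale
    # with the SE-derived per-channel gates.
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    return (x - mean) / (std + eps) * se_gate(x, w1, w2)[:, None, None]
```

The gate lets the normalization adapt per input: channels the excitation MLP deems uninformative are suppressed rather than rescaled uniformly.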

Andrei Iantsen, Vincent Jaouen, Dimitris Visvikis, Mathieu Hatt
Modified MobileNet for Patient Survival Prediction

Glioblastoma is a type of malignant tumor that varies significantly in size, shape, and location. The study of this type of tumor, including the prediction of a patient’s survival, is beneficial for patient treatment. However, the supporting data for survival prediction models are minimal, so methods that handle small datasets well are needed. In this study, we propose an architecture for predicting patient survival using MobileNet combined with a linear survival prediction model (SPM). Several variations of MobileNet are tested to obtain the best results, including MobileNet V1 with frozen or unfrozen layers, and MobileNet V2 with frozen or unfrozen layers, connected to the SPM. The dataset used for the trial came from BraTS 2020. Based on the test results, a modification of the MobileNet V2 architecture with frozen layers was selected. Testing the proposed architecture with 95 training samples and 23 validation samples yielded an MSE loss of 78374.17. The online evaluation on the 29-case validation dataset yielded an MSE loss of 149764.866 and an accuracy of 0.345. On the testing dataset, accuracy increased to 0.402. These results are promising for further architectural development.

Agus Subhan Akbar, Chastine Fatichah, Nanik Suciati
Memory Efficient 3D U-Net with Reversible Mobile Inverted Bottlenecks for Brain Tumor Segmentation

We propose combining memory saving techniques with traditional U-Net architectures to increase the complexity of the models on the Brain Tumor Segmentation (BraTS) challenge. The BraTS challenge consists of a 3D segmentation of a 240 × 240 × 155 × 4 input image into a set of tumor classes. Because of the large volume and the need for 3D convolutional layers, this task is very memory intensive. To address this, prior approaches use smaller cropped images while constraining the model’s depth and width. Our 3D U-Net uses a reversible version of the mobile inverted bottleneck block defined in the MobileNetV2, MnasNet, and more recent EfficientNet architectures to save activation memory during training. Using reversible layers enables the model to recompute input activations given the outputs of a layer, saving memory by eliminating the need to store activations during the forward pass. The inverted residual bottleneck block uses lightweight depthwise separable convolutions to reduce computation by decomposing convolutions into a pointwise convolution and a depthwise convolution. Further, this block inverts traditional bottleneck blocks by placing an intermediate expansion layer between the input and output linear 1 × 1 convolutions, reducing the total number of channels. Given a fixed memory budget, with these memory saving techniques, we are able to train image volumes up to 3x larger, models with 25% more depth, or models with up to 2x the number of channels compared to a corresponding non-reversible network.
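The parameter saving from the depthwise-separable, inverted design can be illustrated with a quick count. The expansion factor t = 6 follows MobileNetV2's default and is an assumption here; note also that the memory saving from reversibility concerns activations rather than parameters and is not modeled in this sketch:

```python
def conv3d_params(cin, cout, k=3):
    # Dense 3D convolution: every input channel connects to every
    # output channel through a full k x k x k kernel.
    return cin * cout * k ** 3

def inverted_bottleneck_params(c, t=6, k=3):
    # Expand 1x1x1 -> depthwise k x k x k -> linear project 1x1x1.
    # The spatial kernel touches each expanded channel independently.
    mid = c * t
    return c * mid + mid * k ** 3 + mid * c
```

Even with a 6x channel expansion in the middle, the block stays cheaper than one dense 3D convolution at the same width (17472 vs. 27648 weights at 32 channels).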

Mihir Pendse, Vithursan Thangarasa, Vitaliy Chiley, Ryan Holmdahl, Joel Hestness, Dennis DeCoste
Brain Tumor Segmentation and Survival Prediction Using Patch Based Modified 3D U-Net

Brain tumor segmentation is a vital clinical requirement. Recent years have seen deep learning become prevalent in medical image processing, and automated brain tumor segmentation can reduce diagnosis time and increase the potential for clinical intervention. In this work, we use a modified U-Net deep learning architecture with appropriate normalization and patch selection methods for the brain tumor segmentation task of the BraTS 2020 challenge. Two-phase network training was implemented with the patch selection methods. The performance of our deep learning-based brain tumor segmentation approach was evaluated on CBICA’s Image Processing Portal. We achieved Dice scores of 0.795, 0.886, and 0.827 in the testing phase for the enhancing tumor, whole tumor, and tumor core, respectively. The segmentation outcome, along with various radiomic features, was used for overall survival (OS) prediction, for which we achieved an accuracy of 0.570 in the testing phase. The algorithm can be further improved for tumor inter-class segmentation and OS prediction with various network implementation strategies; since the OS prediction results are based on the segmentation, improving the segmentation offers scope to improve OS prediction as well.

Bhavesh Parmar, Mehul Parikh
DR-Unet104 for Multimodal MRI Brain Tumor Segmentation

In this paper we propose a 2D deep residual Unet with 104 convolutional layers (DR-Unet104) for lesion segmentation in brain MRIs. We make multiple additions to the Unet architecture, including adding the ‘bottleneck’ residual block to the Unet encoder and adding dropout after each convolution block stack. We verified the effect of dropout regularization with a small rate (e.g. 0.2) on the architecture, and found that a dropout of 0.2 improved overall performance compared to no dropout or a dropout of 0.5. We evaluated the proposed architecture as part of the Multimodal Brain Tumor Segmentation (BraTS) 2020 Challenge and compared our method to DeepLabV3+ with a ResNet-V2-152 backbone. On validation data, DR-Unet104 achieved mean dice similarity coefficients of 0.8862, 0.6756, and 0.6721 for the whole tumor, enhancing tumor, and tumor core, respectively, an overall improvement on the 0.8770, 0.65242, and 0.68134 achieved by DeepLabV3+. Our method produced final mean DSCs of 0.8673, 0.7514, and 0.7983 for the whole tumor, enhancing tumor, and tumor core on the challenge’s testing data. We produce a competitive lesion segmentation architecture despite using only 2D convolutions, with the added benefit that it can be used on lower-power computers than a 3D architecture. The source code and trained model for this work are openly available at .

Jordan Colman, Lei Zhang, Wenting Duan, Xujiong Ye
Glioma Sub-region Segmentation on Multi-parameter MRI with Label Dropout

Gliomas are the most common primary brain tumor. Accurate segmentation of the clinical sub-regions, including enhancing tumor (ET), tumor core (TC), and whole tumor (WT), has great clinical importance throughout diagnosis, treatment planning, delivery, and prognosis. Machine learning algorithms, particularly neural network based methods, have been successful in many medical image segmentation applications. In this paper, we trained a patch-based 3D UNet model with a hybrid loss combining soft dice loss, generalized dice loss, and multi-class cross-entropy loss. We also propose a label dropout process that randomly discards inner segment labels and their corresponding network outputs during training to overcome the heavy class imbalance issue. On the BraTS 2020 final test data, we achieved dice scores of 0.823, 0.886, and 0.843 for ET, WT, and TC, respectively.
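The label dropout idea can be sketched as below: with some probability, an inner class's channel is zeroed in both the prediction and the one-hot target, so that class contributes nothing to the loss for that sample. The channel layout and drop probability are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def apply_label_dropout(logits, onehot, inner_channels, p, rng):
    # logits, onehot: (C, ...) arrays. For each inner class channel,
    # zero it in both prediction and target with probability p, so the
    # dice/cross-entropy terms for that class vanish for this sample.
    logits, onehot = logits.copy(), onehot.copy()
    for c in inner_channels:
        if rng.random() < p:
            logits[c] = 0.0
            onehot[c] = 0.0
    return logits, onehot
```

Occasionally hiding the small inner classes forces the network to also learn the coarser, better-balanced region boundaries.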

Kun Cheng, Caihao Hu, Pengyu Yin, Qianlan Su, Guancheng Zhou, Xian Wu, Xiaohui Wang, Wei Yang
Variational-Autoencoder Regularized 3D MultiResUNet for the BraTS 2020 Brain Tumor Segmentation

Tumor segmentation is an important research topic in medical image segmentation. With the fast development of deep learning in computer vision, automated segmentation of brain tumors using deep neural networks has become increasingly popular. U-Net is the most widely used network in applications of automated image segmentation, and many well-performing models are built on it. In this paper, we devise a model that combines the variational-autoencoder regularized 3D U-Net model [10] and the MultiResUNet model [7]. The model is trained on the 2020 Multimodal Brain Tumor Segmentation Challenge (BraTS) dataset and evaluated on the validation set. Our results show that the modified 3D MultiResUNet performs better than the previous 3D U-Net.

Jiarui Tang, Tengfei Li, Hai Shu, Hongtu Zhu
Learning Dynamic Convolutions for Multi-modal 3D MRI Brain Tumor Segmentation

Accurate automated brain tumor segmentation from 3D Magnetic Resonance Images (MRIs) liberates doctors from tedious annotation work and supports disease monitoring and prompt treatment. Many recent Deep Convolutional Neural Networks (DCNNs) achieve tremendous success in medical image analysis, especially tumor segmentation, but they usually use static networks that do not account for the inherent diversity of multi-modal inputs. In this paper, we introduce a dynamic convolutional module into brain tumor segmentation to learn input-adaptive parameters for specific multi-modal images. To the best of our knowledge, this is the first work to adopt dynamic convolutional networks for brain tumor segmentation with 3D MRI data. In addition, we employ multiple branches to learn low-level features from multi-modal inputs in an end-to-end fashion. We further investigate boundary information and propose a boundary-aware module that enforces our model to pay more attention to important pixels. Experimental results on the testing dataset and a cross-validation dataset split from the training dataset of the BraTS 2020 Challenge demonstrate that our proposed framework obtains competitive Dice scores compared with state-of-the-art approaches.
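A dynamic convolution typically forms its kernel as an attention-weighted mixture of several candidate kernels, conditioned on the input. A minimal sketch, in which the pooled context vector and the attention weight matrix are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def dynamic_kernel(context, experts, w_att):
    # context: (C,) pooled descriptor of the current input
    # experts: (K, ...) candidate (expert) kernels
    # w_att: (K, C) attention weights over experts
    logits = w_att @ context                 # (K,) attention logits
    a = np.exp(logits - logits.max())
    a = a / a.sum()                          # softmax over experts
    return np.tensordot(a, experts, axes=1)  # input-adaptive kernel
```

Each input thus receives its own effective kernel, letting the same layer specialize to the modality mix of the current scan.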

Qiushi Yang, Yixuan Yuan

Computational Precision Medicine: Radiology-Pathology Challenge on Brain Tumor Classification

Automatic Glioma Grading Based on Two-Stage Networks by Integrating Pathology and MRI Images

Glioma, with its high incidence, is one of the most common brain cancers. In the clinic, pathologists diagnose glioma types by examining whole-slide images (WSIs) at different magnifications, which is time-consuming, laborious, and experience-dependent. Automatic glioma grading based on WSIs can provide aided diagnosis for clinicians. This paper proposes two fully convolutional networks, used respectively on WSIs and MRI images, to achieve automatic glioma grading (astrocytoma (lower-grade A), oligodendroglioma (middle-grade O), and glioblastoma (higher-grade G)). The final classification result is the probability average of the two networks. In the clinic, and also in our multi-modality image representation, grades A and O are difficult to distinguish. This work proposes a two-stage training strategy that excludes the distraction of grade G and focuses on the classification of grades A and O. The experimental results show that the proposed model achieves high glioma classification performance, with a balanced accuracy of 0.889, Cohen’s Kappa of 0.903, and F1-score of 0.943 on the validation set.

Xiyue Wang, Sen Yang, Xiyi Wu
Brain Tumor Classification Based on MRI Images and Noise Reduced Pathology Images

Gliomas are the most common and severe malignant tumors of the brain. The diagnosis and grading of gliomas are typically based on MRI images and pathology images. To improve diagnostic accuracy and efficiency, we design a framework for computer-aided diagnosis combining the two modalities. Without loss of generality, we first take an individual network for each modality to extract features, then fuse them to predict the subtype of glioma. For MRI images, we directly apply a 3D-CNN to extract features, supervised by a cross-entropy loss function. Whole slide pathology images (WSIs) of abnormal tissue contain many normal regions, which hinder the training of pathology features. We refer to these normal regions as noise regions and propose two ideas to reduce them. First, we introduce a nucleus segmentation model trained on public datasets; regions containing few nuclei are excluded from the subsequent training of tumor classification. Second, we apply a noise-rank module to further suppress the noise regions. After noise reduction, we train a glioma classification model on the remaining regions and obtain the features of the pathology images. Finally, we fuse the features of the two modalities with a linear weighted module. We evaluate the proposed framework on CPM-RadPath 2020 and achieve the first rank on the validation set.
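The first noise-reduction step, discarding WSI patches with too few detected nuclei, can be sketched as follows; the minimum-nuclei threshold is an assumed parameter, as the abstract does not state one:

```python
def filter_noise_patches(patches, nucleus_counts, min_nuclei):
    # Keep only WSI patches with enough nuclei detected by the
    # segmentation model; sparse patches are treated as "noise"
    # (likely normal tissue) and excluded from classifier training.
    return [p for p, n in zip(patches, nucleus_counts) if n >= min_nuclei]
```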

Baocai Yin, Hu Cheng, Fengyan Wang, Zengfu Wang
Multimodal Brain Tumor Classification

Cancer is a complex disease that provides various types of information depending on the scale of observation. While most tumor diagnostics are performed by examining histopathological slides, radiology images can yield additional knowledge that improves the efficacy of cancer diagnostics. This work investigates a deep learning method combining whole slide images and magnetic resonance images to classify tumors. In particular, our solution comprises a powerful, generic, and modular architecture for whole slide image classification. Experiments are prospectively conducted on the 2020 Computational Precision Medicine challenge, a 3-class unbalanced classification task. We report cross-validation (resp. validation) balanced accuracy, kappa, and F1 of 0.913, 0.897, and 0.951 (resp. 0.91, 0.90, and 0.94). For research purposes, including reproducibility and direct performance comparisons, our final submitted models are usable off-the-shelf in a Docker image available at .

Marvin Lerousseau, Eric Deutsch, Nikos Paragios
A Hybrid Convolutional Neural Network Based-Method for Brain Tumor Classification Using mMRI and WSI

In this paper, we propose a hybrid deep learning-based method for brain tumor classification using whole slide images (WSIs) and multimodal magnetic resonance images (mMRI). It comprises two methods: a WSI-based method and an mMRI-based method. For the WSI-based method, many patches are sampled from each WSI per category as the training dataset. However, without annotations by pathologists, not all sampled patches are representative of the category to which their corresponding WSI belongs; therefore, error tolerance schemes were applied when training the classification model to achieve better generalization. For the mMRI-based method, we first apply a 3D convolutional neural network (3DCNN) to the mMRI for brain tumor segmentation, which distinguishes brain tumors from healthy tissues; the segmented tumors are then used for tumor subtype classification with another 3DCNN. Lastly, an ensemble scheme over the two methods was applied to reach a consensus for the final predictions. We evaluate the proposed method on the patient dataset from the Computational Precision Medicine: Radiology-Pathology Challenge (CPM: Rad-Path) on Brain Tumor Classification 2020. The prediction results on the validation set reached 0.886 in f1_micro, 0.801 in kappa, 0.8 in balance_acc, and 0.829 in the overall average. The experimental results show that considering both MRI and WSI outperforms using a single image type; accordingly, fusing the two image datasets provides the system with richer diagnostic information.

Linmin Pei, Wei-Wen Hsu, Ling-An Chiang, Jing-Ming Guo, Khan M. Iftekharuddin, Rivka Colen
CNN-Based Fully Automatic Glioma Classification with Multi-modal Medical Images

The accurate classification of gliomas is essential in clinical practice. It is valuable for clinical practitioners and patients to choose the appropriate management accordingly, promoting the development of personalized medicine. In the MICCAI 2020 Combined Radiology and Pathology Classification Challenge, 4 MRI sequences and a WSI image are provided for each patient, and participants are required to use the multi-modal images to predict the subtype of glioma. In this paper, we propose a fully automated pipeline for glioma classification. Our proposed model consists of two parts, feature extraction and feature fusion, which are respectively responsible for extracting representative image features and making predictions. Specifically, we propose a segmentation-free, self-supervised feature extraction network for the 3D MRI volume, and a feature extraction model for the H&E-stained WSI that combines traditional image processing methods with a convolutional neural network. Finally, we fuse the extracted features from the multi-modal images and use a densely connected neural network to predict the final classification results. We evaluate the proposed model with F1-Score, Cohen’s Kappa, and Balanced Accuracy on the validation set, achieving 0.943, 0.903, and 0.889, respectively.

Bingchao Zhao, Jia Huang, Changhong Liang, Zaiyi Liu, Chu Han
Glioma Classification Using Multimodal Radiology and Histology Data

Gliomas are brain tumours with a high mortality rate. There are various grades and sub-types of this tumour, and the treatment procedure varies accordingly. Clinicians and oncologists diagnose and categorise these tumours based on visual inspection of radiology and histology data; however, this process can be time-consuming and subjective. Computer-assisted methods can help clinicians make better and faster decisions. In this paper, we propose a pipeline for automatic classification of gliomas into three sub-types, oligodendroglioma, astrocytoma, and glioblastoma, using both radiology and histopathology images. The proposed approach implements distinct classification models for the radiographic and histologic modalities and combines them through an ensemble method. The classification algorithm initially carries out tile-level (for histology) and slice-level (for radiology) classification via a deep learning method; tile- and slice-level latent features are then combined for whole-slide and whole-volume sub-type prediction. The classification algorithm was evaluated using the data set provided in the CPM-RadPath 2020 challenge. The proposed pipeline achieved an F1-Score of 0.886, a Cohen’s Kappa score of 0.811, and a balanced accuracy of 0.860. The proposed model’s ability to learn diverse features end-to-end enables it to give comparable predictions of glioma tumour sub-types.
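The pipeline above aggregates tile-level latent features into a whole-slide prediction; a simpler baseline for that aggregation step, majority voting over tile-level class predictions, can be sketched as:

```python
from collections import Counter

def slide_level_prediction(tile_preds):
    # Aggregate per-tile class predictions into one whole-slide label
    # by majority vote (ties resolved by first-encountered class).
    return Counter(tile_preds).most_common(1)[0][0]
```

Feature-level aggregation, as used in the paper, generally outperforms such hard voting because it preserves each tile's uncertainty.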

Azam Hamidinekoo, Tomasz Pieciak, Maryam Afzali, Otar Akanyeti, Yinyin Yuan
Editors: Alessandro Crimi, Spyridon Bakas