
2021 | Book

Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries

6th International Workshop, BrainLes 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4, 2020, Revised Selected Papers, Part I


About this Book

This two-volume set LNCS 12658 and 12659 constitutes the thoroughly refereed proceedings of the 6th International MICCAI Brainlesion Workshop, BrainLes 2020, the International Multimodal Brain Tumor Segmentation (BraTS) challenge, and the Computational Precision Medicine: Radiology-Pathology Challenge on Brain Tumor Classification (CPM-RadPath) challenge. These were held jointly at the 23rd International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2020, in Lima, Peru, in October 2020.*

The revised selected papers presented in these volumes were organized in the following topical sections: brain lesion image analysis (16 selected papers from 21 submissions); brain tumor image segmentation (69 selected papers from 75 submissions); and computational precision medicine: radiology-pathology challenge on brain tumor classification (6 selected papers from 6 submissions).

*The workshop and challenges were held virtually.

Table of Contents

Frontmatter

Invited Papers

Frontmatter
Glioma Diagnosis and Classification: Illuminating the Gold Standard

Accurate glioma classification is essential for optimal patient care, and requires integration of histological and immunohistochemical findings with molecular features, in the context of imaging and demographic information. This paper will introduce classic histologic features of gliomas in contrast to nonneoplastic brain parenchyma, describe the basic clinical algorithm used to classify infiltrating gliomas, and demonstrate how the classification is reflected in the diagnostic reporting structure. Key molecular features include IDH mutational status and 1p/19q codeletion. In addition, molecular changes may indicate poor prognosis despite lower grade histology, such as findings consistent with two grade IV infiltrating gliomas: molecular glioblastoma and diffuse midline gliomas. Detailed molecular characterization aids in optimization of treatment with the goal of improved patient outcomes.

MacLean P. Nasrallah
Multiple Sclerosis Lesion Segmentation - A Survey of Supervised CNN-Based Methods

Lesion segmentation is a core task for quantitative analysis of MRI scans of Multiple Sclerosis patients. The recent success of deep learning techniques in a variety of medical image analysis applications has renewed community interest in this challenging problem and led to a burst of activity for new algorithm development. In this survey, we investigate the supervised CNN-based methods for MS lesion segmentation. We decouple these reviewed works into their algorithmic components and discuss each separately. For methods that provide evaluations on public benchmark datasets, we report comparisons between their results.

Huahong Zhang, Ipek Oguz
Computational Diagnostics of GBM Tumors in the Era of Radiomics and Radiogenomics

Machine learning (ML) integrated with medical imaging has introduced new perspectives in precision diagnostics of GBM tumors, through radiomics and radiogenomics. This has raised hopes for developing non-invasive and in-vivo biomarkers for prediction of patient survival, tumor recurrence, or molecular characterization, thereby encouraging treatments tailored to individual needs. Characterization of tumor infiltration based on pre-operative multi-parametric magnetic resonance imaging (MP-MRI) scans would help in predicting the loci of future tumor recurrence, thereby aiding in planning the course of treatment, such as extending the resection or escalating the dose of radiation. Specifying molecular properties of GBM tumors and predicting their changes over time and with treatment would help characterize the molecular heterogeneity of a tumor and potentially inform a corresponding combination treatment. In this article, we provide examples of our work on radiomics and radiogenomics, aiming to offer personalized treatments to patients with GBM tumors.

Anahita Fathi Kazerooni, Christos Davatzikos

Brain Lesion Image Analysis

Frontmatter
Automatic Segmentation of Non-tumor Tissues in Glioma MR Brain Images Using Deformable Registration with Partial Convolutional Networks

In brain tumor diagnosis and surgical planning, segmentation of tumor regions and accurate analysis of surrounding normal tissues are necessary for physicians. Pathological variability often makes it difficult to register a well-labeled normal atlas to such images and to automatically segment/label the surrounding normal brain tissues. In this paper, we propose a new registration approach that first segments the brain tumor using a U-Net and then synthesizes the missing normal tissues within the tumor region using a partial convolutional network. A standard normal brain atlas image is then registered onto these tumor-removed images in order to segment/label the normal brain tissues. In this way, our new approach greatly reduces the effects of pathological variability in deformable registration and segments the normal tissues surrounding the brain tumor well. In experiments, we used MICCAI BraTS2018 T1 and FLAIR images to evaluate the proposed algorithm. Compared with direct registration, the proposed algorithm significantly improved the Dice coefficient of gray matter in the surrounding normal brain tissues.

Zhongqiang Liu, Dongdong Gu, Yu Zhang, Xiaohuan Cao, Zhong Xue
Convolutional Neural Network with Asymmetric Encoding and Decoding Structure for Brain Vessel Segmentation on Computed Tomographic Angiography

Segmenting 3D brain vessels on computed tomographic angiography (CTA) is critical for the early diagnosis of stroke. However, traditional filter- and optimization-based methods are ineffective for this challenging task due to limits of imaging quality and structural complexity, and learning-based methods are difficult to apply because manual labeling is extremely time-consuming and labeled open datasets are lacking. To address this, we develop an asymmetric encoding- and decoding-based convolutional neural network for accurate vessel segmentation on CTA. In the network, a 3D encoding module is designed to comprehensively extract 3D vascular structure information, and three 2D decoding modules are designed to optimally identify vessels on each 2D plane, so that the network can learn more complex vascular structures and better distinguish vessels from normal regions. Moreover, to address the insufficient segmentation of fine vessels caused by pixel-wise loss functions, we develop a centerline loss that guides the model to pay equal attention to small and large vessels, improving the segmentation accuracy of small vessels. Compared to two state-of-the-art approaches, our model achieved superior performance in both whole-vessel segmentation and small-vessel preservation.

Guoqing Wu, Liqiong Zhang, Xi Chen, Jixian Lin, Yuanyuan Wang, Jinhua Yu
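The centerline loss is described only at a high level in the abstract above. One plausible reading, sketched below under the assumption that a binary centerline mask is available for each scan, is to evaluate a voxel-wise loss once over the full volume and once restricted to centerline voxels, so that thin vessels contribute as strongly as large ones. Names and the weighting factor are illustrative, not the authors' implementation.

```python
import numpy as np

def bce(probs, target, eps=1e-7):
    """Mean binary cross-entropy over the given voxels."""
    p = np.clip(probs, eps, 1.0 - eps)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

def centerline_weighted_loss(probs, target, centerline, alpha=0.5):
    """Combine a volume-wide BCE with a BCE restricted to centerline
    voxels, so small vessels are not drowned out by large ones.
    `centerline` is a binary mask of vessel centerline voxels
    (assumed precomputed, e.g. by skeletonizing the ground truth)."""
    full = bce(probs, target)
    cl = bce(probs[centerline > 0], target[centerline > 0])
    return (1 - alpha) * full + alpha * cl
```

A perfect prediction drives both terms toward zero, while missing a thin vessel is penalized through the centerline term regardless of how few voxels it occupies.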
Volume Preserving Brain Lesion Segmentation

Automatic brain lesion segmentation plays an important role in clinical diagnosis and treatment. Convolutional neural networks (CNNs) have become an increasingly popular tool for brain lesion segmentation due to their accuracy and efficiency. CNNs are generally trained with loss functions that measure segmentation accuracy, such as the cross-entropy loss and the Dice loss. However, lesion load is a crucial measurement for disease analysis, and these loss functions do not guarantee that the volume of lesions given by CNNs agrees with that of the gold standard. In this work, we seek to address this challenge and propose volume-preserving brain lesion segmentation, where a volume constraint is imposed on network outputs during the training process. Specifically, we design a differentiable mapping that approximates the volume of lesions using the segmentation probabilities. This mapping is then integrated into the training loss so that the preservation of brain lesion volume is encouraged. For demonstration, the proposed method was applied to ischemic stroke lesion segmentation, and experimental results show that our method better preserves the volume of brain lesions and improves segmentation accuracy.

Yanlin Liu, Xiangzhu Zeng, Chuyang Ye
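The differentiable volume approximation described above can be sketched in a few lines: the soft volume of the predicted lesion is the sum of its voxel-wise probabilities, and a penalty on the mismatch with the reference volume is added to a standard Dice-style loss. This is a minimal NumPy illustration of the idea, not the paper's implementation; the penalty weight is an arbitrary choice.

```python
import numpy as np

def soft_dice_loss(probs, target, eps=1e-6):
    """Standard soft Dice loss on voxel-wise probabilities."""
    inter = np.sum(probs * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(probs) + np.sum(target) + eps)

def volume_preserving_loss(probs, target, weight=0.1, eps=1e-6):
    """Dice loss plus a penalty on the mismatch between the soft
    predicted volume (sum of probabilities) and the reference volume."""
    v_pred = np.sum(probs)   # differentiable volume estimate
    v_true = np.sum(target)  # gold-standard lesion volume
    vol_term = np.abs(v_pred - v_true) / (v_true + eps)
    return soft_dice_loss(probs, target, eps) + weight * vol_term
```

Because the volume estimate is a plain sum of probabilities, the extra term is differentiable and can be backpropagated alongside the Dice term in any framework.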
Microstructural Modulations in the Hippocampus Allow to Characterizing Relapsing-Remitting Versus Primary Progressive Multiple Sclerosis

Whether gray matter (GM) regions are differentially vulnerable in Relapsing-Remitting and Primary Progressive Multiple Sclerosis (RRMS and PPMS) is still unknown. The objective of this study was to evaluate morphometric and microstructural properties based on structural and diffusion magnetic resonance imaging (dMRI) data in these MS phenotypes, and to verify whether selective intra-pathological alterations characterize GM structures. Diffusion Tensor Imaging (DTI) and 3D Simple Harmonics Oscillator based Reconstruction and Estimation (3D-SHORE) models were used to fit the dMRI signals, and several features were subsequently extracted from the regional value distributions (e.g., mean, median, skewness). Statistical analyses were conducted to test for group differences and possible correlations with physical disability scores. Results highlighted the sensitivity of 3D-SHORE to microstructural differences in the hippocampus, which were also significantly correlated with physical disability. Conversely, morphometric measurements did not reach statistical significance. Our study emphasizes the potential of dMRI, and in particular the importance of advanced models such as 3D-SHORE with respect to DTI, in characterizing the two MS types. In addition, the hippocampus emerged as particularly relevant for distinguishing RRMS from PPMS and calls for further investigation.

Lorenza Brusini, Ilaria Boscolo Galazzo, Muge Akinci, Federica Cruciani, Marco Pitteri, Stefano Ziccardi, Albulena Bajrami, Marco Castellaro, Ahmed M. A. Salih, Francesca B. Pizzini, Jorge Jovicich, Massimiliano Calabrese, Gloria Menegaz
Symmetric-Constrained Irregular Structure Inpainting for Brain MRI Registration with Tumor Pathology

Deformable registration of magnetic resonance images between patients with brain tumors and healthy subjects has been an important tool to specify tumor geometry through location alignment and facilitate pathological analysis. Since the tumor region does not match any ordinary brain tissue, it has been difficult to deformably register a patient’s brain to a normal one. Many patient images are associated with irregularly distributed lesions, resulting in further distortion of normal tissue structures and complicating the registration similarity measure. In this work, we follow a multi-step context-aware image inpainting framework to generate synthetic tissue intensities in the tumor region. A coarse image-to-image translation is applied to make a rough inference of the missing parts. Then, a feature-level patch-match refinement module refines the details by modeling the semantic relevance between patch-wise features. A symmetry constraint, reflecting the large degree of anatomical symmetry in the brain, is further proposed to achieve better structure understanding. Deformable registration is applied between inpainted patient images and normal brains, and the resulting deformation field is eventually used to deform the original patient data for the final alignment. The method was applied to the Multimodal Brain Tumor Segmentation (BraTS) 2018 challenge database and compared against three existing inpainting methods. The proposed method yielded results with increased peak signal-to-noise ratio, structural similarity index, and inception score, and reduced L1 error, leading to successful patient-to-normal brain image registration.

Xiaofeng Liu, Fangxu Xing, Chao Yang, C.-C. Jay Kuo, Georges El Fakhri, Jonghye Woo
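The symmetry constraint above can be illustrated with a toy penalty. The sketch below is one interpretation, not the paper's exact formulation: the inpainted volume is compared against its left-right mirror within the inpainted region, encouraging the filled-in tissue to agree with the contralateral hemisphere. The axis convention and mask semantics are assumptions.

```python
import numpy as np

def symmetry_penalty(volume, mask, lr_axis=0):
    """Mean absolute difference between inpainted voxels and their
    mirrored counterparts across the left-right axis. `mask` marks the
    inpainted (former tumor) region; the image is assumed roughly
    aligned so that `lr_axis` is the hemispheric axis."""
    mirrored = np.flip(volume, axis=lr_axis)
    diff = np.abs(volume - mirrored)
    return float(np.sum(diff * mask) / (np.sum(mask) + 1e-8))
```

In practice such a term would be added to the inpainting loss with a small weight, since real brains are only approximately symmetric.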
Multivariate Analysis is Sufficient for Lesion-Behaviour Mapping

Lesion-behaviour mapping aims at predicting individual behavioural deficits, given a certain pattern of brain lesions. It also brings fundamental insights on brain organization, as lesions can be understood as interventions on normal brain function. We focus here on the case of stroke. The most standard approach to lesion-behaviour mapping is mass-univariate analysis, but it is inaccurate due to correlations between the different brain regions induced by vascularisation. Recently, it has been claimed that multivariate methods are also subject to lesion-anatomical bias, and that a move towards a causal approach is necessary to eliminate that bias. In this paper, we reframe the lesion-behaviour brain mapping problem using classical causal inference tools. We show that, in the absence of additional clinical data and if only one region has an effect on the behavioural scores, suitable multivariate methods are sufficient to address lesion-anatomical bias. This is a commonly encountered situation when working with public datasets, which very often lack general health data. We support our claim with a set of simulated experiments using a publicly available lesion imaging dataset, on which we show that adequate multivariate models provide state-of-the art results.

Lucas Martin, Julie Josse, Bertrand Thirion
Label-Efficient Multi-task Segmentation Using Contrastive Learning

Obtaining annotations for 3D medical images is expensive and time-consuming, despite its importance for automating segmentation tasks. Although multi-task learning is considered an effective method for training segmentation models using small amounts of annotated data, a systematic understanding of various subtasks is still lacking. In this study, we propose a multi-task segmentation model with a contrastive learning based subtask and compare its performance with other multi-task models, varying the number of labeled data for training. We further extend our model so that it can utilize unlabeled data through the regularization branch in a semi-supervised manner. We experimentally show that our proposed method outperforms other multi-task methods including the state-of-the-art fully supervised model when the amount of annotated data is limited.

Junichiro Iwasawa, Yuichiro Hirano, Yohei Sugawara
Spatio-Temporal Learning from Longitudinal Data for Multiple Sclerosis Lesion Segmentation

Segmentation of Multiple Sclerosis (MS) lesions in longitudinal brain MR scans is performed for monitoring the progression of MS lesions. We hypothesize that the spatio-temporal cues in longitudinal data can aid the segmentation algorithm. Therefore, we propose a multi-task learning approach by defining an auxiliary self-supervised task of deformable registration between two time-points to guide the neural network toward learning from spatio-temporal changes. We show the efficacy of our method on a clinical dataset comprised of 70 patients with one follow-up study for each patient. Our results show that spatio-temporal information in longitudinal data is a beneficial cue for improving segmentation. We improve the result of current state-of-the-art by 2.6% in terms of overall score (p < 0.05). Code is publicly available ( https://github.com/StefanDenn3r/Spatio-temporal-MS-Lesion-Segmentation ).

Stefan Denner, Ashkan Khakzar, Moiz Sajid, Mahdi Saleh, Ziga Spiclin, Seong Tae Kim, Nassir Navab
MMSSD: Multi-scale and Multi-level Single Shot Detector for Brain Metastases Detection

Stereotactic radiosurgery (SRS) is the preferred treatment for brain metastases (BM), in which the delineation of metastatic lesions is one of the critical steps. Considering that BM typically have clear boundaries with surrounding tissues but very small volumes, the main difficulty of delineation lies in detection rather than segmentation. In this paper, we present a novel lesion detection framework, called the Multi-scale and Multi-level Single Shot Detector (MMSSD), to detect BM targets accurately and effectively. In MMSSD, we take advantage of multi-scale feature maps while paying more attention to the shallow layers for small objects. Specifically, we first preserve only the applicable large- and middle-scale features in SSD, then generate new feature representations with a multi-level feature fusion module, and finally make predictions on those feature maps. The proposed MMSSD framework was evaluated on a clinical dataset, and the experimental results demonstrate that our method outperforms existing popular detectors for BM detection.

Hui Yu, Wenjun Xia, Yan Liu, Xuejun Gu, Jiliu Zhou, Yi Zhang
Unsupervised 3D Brain Anomaly Detection

Anomaly detection (AD) is the identification of data samples that do not fit a learned data distribution. As such, AD systems can help physicians to determine the presence, severity, and extension of a pathology. Deep generative models, such as Generative Adversarial Networks (GANs), can be exploited to capture anatomical variability. Consequently, any outlier (i.e., sample falling outside of the learned distribution) can be detected as an abnormality in an unsupervised fashion. By using this method, we can not only detect expected or known lesions, but we can even unveil previously unrecognized biomarkers. To the best of our knowledge, this study exemplifies the first AD approach that can efficiently handle volumetric data and detect 3D brain anomalies in one single model. Our proposal is a volumetric and high-detail extension of the 2D f-AnoGAN model obtained by combining a state-of-the-art 3D GAN with refinement training steps. In experiments using non-contrast computed tomography images from traumatic brain injury (TBI) patients, the model detects and localizes TBI abnormalities with an area under the ROC curve of ~75%. Moreover, we test the potential of the method for detecting other anomalies such as low quality images, preprocessing inaccuracies, artifacts, and even the presence of post-operative signs (such as a craniectomy or a brain shunt). The method has potential for rapidly labeling abnormalities in massive imaging datasets, as well as identifying new biomarkers.

Jaime Simarro Viana, Ezequiel de la Rosa, Thijs Vande Vyvere, David Robben, Diana M. Sima, CENTER-TBI Participants and Investigators
Assessing Lesion Segmentation Bias of Neural Networks on Motion Corrupted Brain MRI

Patient motion during the magnetic resonance imaging (MRI) acquisition process results in motion artifacts, which limit the ability of radiologists to provide a quantitative assessment of the visualized condition. Often, radiologists either “see through” the artifacts with reduced diagnostic confidence, or the MR scans are rejected and patients must be recalled and re-scanned. Many published approaches focus on MRI artifact detection and correction. However, the key question of the bias these algorithms exhibit on motion-corrupted MRI images is still unanswered. In this paper, we seek to quantify this bias in terms of the impact that different levels of motion artifacts have on the performance of neural networks engaged in a lesion segmentation task. Additionally, we explore the effect of a different learning strategy, curriculum learning, on segmentation performance. Our results suggest that a network trained using curriculum learning effectively compensates for different levels of motion artifacts, improving segmentation performance by ~9%–15% (p < 0.05) compared against a conventional shuffled learning strategy on the same motion data. Within each motion category, it either improved or maintained the Dice score. To the best of our knowledge, we are the first to quantitatively assess the segmentation bias on various levels of motion artifacts present in a brain MRI image.

Tejas Sudharshan Mathai, Yi Wang, Nathan Cross
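The curriculum strategy compared above amounts to presenting training scans in order of increasing motion severity rather than in shuffled order. A minimal sketch follows; per-sample severity scores are assumed to be given (e.g., by the artifact simulation level), and the batching scheme is illustrative.

```python
def curriculum_order(samples, severity):
    """Return sample indices sorted from least to most motion-corrupted,
    so training starts on easy (artifact-free) scans and gradually
    introduces heavily corrupted ones."""
    return sorted(range(len(samples)), key=lambda i: severity[i])

def curriculum_batches(samples, severity, batch_size):
    """Yield training batches that follow the easy-to-hard curriculum."""
    order = curriculum_order(samples, severity)
    for start in range(0, len(order), batch_size):
        yield [samples[i] for i in order[start:start + batch_size]]
```

A shuffled baseline corresponds to randomly permuting the indices instead of sorting them, which is the comparison the abstract reports.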
Estimating Glioblastoma Biophysical Growth Parameters Using Deep Learning Regression

Glioblastoma (GBM) is arguably the most aggressive, infiltrative, and heterogeneous type of adult brain tumor. Biophysical modeling of GBM growth has contributed to more informed clinical decision-making. However, deploying a biophysical model to a clinical environment is challenging since underlying computations are quite expensive and can take several hours using existing technologies. Here we present a scheme to accelerate the computation. In particular, we present a deep learning (DL)-based logistic regression model to estimate the GBM’s biophysical growth in seconds. This growth is defined by three tumor-specific parameters: 1) a diffusion coefficient in white matter (Dw), which prescribes the rate of infiltration of tumor cells in white matter, 2) a mass-effect parameter (Mp), which defines the average tumor expansion, and 3) the estimated time (T) in number of days that the tumor has been growing. Preoperative structural multi-parametric MRI (mpMRI) scans from n = 135 subjects of the TCGA-GBM imaging collection are used to quantitatively evaluate our approach. We consider the mpMRI intensities within the region defined by the abnormal FLAIR signal envelope for training one DL model for each of the tumor-specific growth parameters. We train and validate the DL-based predictions against parameters derived from biophysical inversion models. The average Pearson correlation coefficients between our DL-based estimations and the biophysical parameters are 0.85 for Dw, 0.90 for Mp, and 0.94 for T, respectively. This study unlocks the power of tumor-specific parameters from biophysical tumor growth estimation. It paves the way towards their clinical translation and opens the door for leveraging advanced radiomic descriptors in future studies by means of a significantly faster parameter reconstruction compared to biophysical growth modeling approaches.

Sarthak Pati, Vaibhav Sharma, Heena Aslam, Siddhesh P. Thakur, Hamed Akbari, Andreas Mang, Shashank Subramanian, George Biros, Christos Davatzikos, Spyridon Bakas
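Agreement between the DL-estimated and biophysically inverted parameters is reported above as Pearson correlation coefficients. The evaluation step is standard and can be sketched with NumPy; the example values below are made up and serve only to show the calling convention.

```python
import numpy as np

def pearson_r(pred, ref):
    """Pearson correlation coefficient between predicted and reference
    growth parameter values across subjects."""
    return float(np.corrcoef(np.asarray(pred), np.asarray(ref))[0, 1])
```

One coefficient would be computed per parameter (Dw, Mp, T), each over the per-subject predictions of the corresponding DL model.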
Bayesian Skip Net: Building on Prior Information for the Prediction and Segmentation of Stroke Lesions

Perfusion CT is widely used in acute ischemic stroke to determine eligibility for acute treatment, by defining an ischemic core and penumbra. In this work, we propose a novel way of building on prior information for the automatic prediction and segmentation of stroke lesions. To this end, we reformulate the task to identify differences from a prior segmentation by extending a three-dimensional Attention Gated Unet with a skip connection allowing only an unchanged prior to bypass most of the network. We show that this technique improves results obtained by a baseline Attention Gated Unet on both the Geneva Stroke Dataset and the ISLES 2018 dataset.

Julian Klug, Guillaume Leclerc, Elisabeth Dirren, Maria Giulia Preti, Dimitri Van De Ville, Emmanuel Carrera

Brain Tumor Segmentation

Frontmatter
Brain Tumor Segmentation Using Dual-Path Attention U-Net in 3D MRI Images

Semantic segmentation plays an essential role in brain tumor diagnosis and treatment planning, yet manual segmentation is a time-consuming task. This motivates the use of deep neural networks for brain tumor segmentation. In this work, we propose a variant of the 3D U-Net that achieves comparable segmentation accuracy with a smaller graphics-memory cost. Specifically, our model employs a modified attention block, consisting of parallel spatial and channel attention blocks, to refine the feature-map representation along the skip-connection bridge. Dice coefficients for the enhancing tumor, whole tumor, and tumor core reached 0.752, 0.879, and 0.779, respectively, on the BraTS 2020 validation dataset.

Wen Jun, Xu Haoxiang, Zhang Wang
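The Dice coefficients quoted throughout these abstracts follow the standard overlap definition, 2|A ∩ B| / (|A| + |B|). For reference, a minimal implementation on binary masks:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary segmentation
    masks: twice the intersection over the sum of the mask sizes."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return float((2.0 * inter + eps) / (pred.sum() + target.sum() + eps))
```

In the BraTS setting this is evaluated separately for each region (whole tumor, tumor core, enhancing tumor), which is why three scores are reported per method.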
Multimodal Brain Image Analysis and Survival Prediction Using Neuromorphic Attention-Based Neural Networks

Accurate analysis of brain tumors from 3D Magnetic Resonance Imaging (MRI) is necessary for diagnosis and treatment planning, and recent methods using deep neural networks have become of great clinical importance because of their effective and accurate performance. The 3D nature of multimodal MRI demands large-scale memory and computation, and variants of the 3D U-Net are widely adopted for medical image segmentation. In this study, a 2D U-Net inspired by neuromorphic neural networks is applied to tumor segmentation and survival-period prediction. The new method introduces a neuromorphic saliency map to enhance the image analysis: by mimicking the visual cortex and implementing neuromorphic preprocessing, a map of attention and saliency is generated and applied to improve the accuracy and speed of medical image analysis. Through the BraTS 2020 challenge, the performance of the renewed neuromorphic algorithm is evaluated, and an overall review is conducted of previous neuromorphic processing and other approaches. The overall survival prediction accuracy is 55.2% for the validation data and 43% for the test data.

Il Song Han
Context Aware 3D UNet for Brain Tumor Segmentation

Deep convolutional neural networks (CNNs) achieve remarkable performance for medical image analysis. The U-Net is the primary backbone of 3D CNN architectures for medical imaging tasks, including brain tumor segmentation. The skip connections in the U-Net architecture concatenate features from the encoder and decoder paths to extract multi-contextual information from image data. These multi-scale features play an essential role in brain tumor segmentation. However, limited use of features can degrade the performance of the U-Net approach. In this paper, we propose a modified U-Net architecture for brain tumor segmentation. In the proposed architecture, we use densely connected blocks in both the encoder and decoder paths to extract multi-contextual information through feature reusability. In addition, residual-inception blocks (RIB) are used to extract local and global information by merging features of different kernel sizes. We validate the proposed architecture on the multimodal brain tumor segmentation challenge (BraTS) 2020 testing dataset. The Dice (DSC) scores for the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) are 89.12%, 84.74%, and 79.12%, respectively.

Parvez Ahmad, Saqib Qamar, Linlin Shen, Adnan Saeed
Brain Tumor Segmentation Network Using Attention-Based Fusion and Spatial Relationship Constraint

Delineating the brain tumor from magnetic resonance (MR) images is critical for the treatment of gliomas. However, automatic delineation is challenging due to the complex appearance and ambiguous outlines of tumors. Considering that multi-modal MR images can reflect different tumor biological properties, we develop a novel multi-modal tumor segmentation network (MMTSN) to robustly segment brain tumors based on multi-modal MR images. The MMTSN is composed of three sub-branches and a main branch. Specifically, the sub-branches are used to capture different tumor features from multi-modal images, while in the main branch, we design a spatial-channel fusion block (SCFB) to effectively aggregate multi-modal features. Additionally, inspired by the fact that the spatial relationship between sub-regions of the tumor is relatively fixed, e.g., the enhancing tumor is always within the tumor core, we propose a spatial loss to constrain the relationship between different sub-regions of the tumor. We evaluate our method on the test set of the multimodal brain tumor segmentation challenge 2020 (BraTS 2020). The method achieves Dice scores of 0.8764, 0.8243, and 0.773 for the whole tumor, tumor core, and enhancing tumor, respectively.

Chenyu Liu, Wangbin Ding, Lei Li, Zhen Zhang, Chenhao Pei, Liqin Huang, Xiahai Zhuang
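The spatial constraint described above (the enhancing tumor lies inside the tumor core) can be encoded as a soft penalty. The sketch below is an interpretation of that idea, not the authors' exact loss: any enhancing-tumor probability mass falling outside the predicted tumor core is penalized.

```python
import numpy as np

def containment_loss(p_et, p_tc):
    """Penalize enhancing-tumor probability that falls outside the
    predicted tumor core. `p_et` and `p_tc` are voxel-wise probability
    maps for enhancing tumor and tumor core; the product
    p_et * (1 - p_tc) is nonzero only where the containment
    assumption is violated."""
    return float(np.mean(p_et * (1.0 - p_tc)))
```

The term is zero whenever the enhancing-tumor prediction is fully covered by the tumor-core prediction, so it can be added to the segmentation loss without biasing well-behaved outputs.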
Modality-Pairing Learning for Brain Tumor Segmentation

Automatic brain tumor segmentation from multi-modality Magnetic Resonance Images (MRI) using deep learning methods plays an important role in assisting the diagnosis and treatment of brain tumors. However, previous methods mostly ignore the latent relationships among different modalities. In this work, we propose a novel end-to-end Modality-Pairing learning method for brain tumor segmentation. Parallel branches are designed to exploit different modality features, and a series of layer connections are utilized to capture complex relationships and abundant information among modalities. We also use a consistency loss to minimize the prediction variance between the two branches. In addition, a learning-rate warmup strategy is adopted to address training instability and early over-fitting. Finally, we use an average ensemble of multiple models and some post-processing techniques to obtain the final results. Our method was tested on the BraTS 2020 online testing dataset, obtaining promising segmentation performance, with average Dice scores of 0.891, 0.842, and 0.816 for the whole tumor, tumor core, and enhancing tumor, respectively. We won second place in the BraTS 2020 Challenge for the tumor segmentation task.

Yixin Wang, Yao Zhang, Feng Hou, Yang Liu, Jiang Tian, Cheng Zhong, Yang Zhang, Zhiqiang He
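The warmup strategy mentioned above linearly ramps the learning rate from zero over the first training steps before handing over to the main schedule, which damps unstable early updates. A generic, framework-agnostic sketch follows; the warmup length, base rate, and decay factor are illustrative, not the paper's settings.

```python
def warmup_lr(step, base_lr, warmup_steps, decay=1.0):
    """Linear learning-rate warmup: ramp from ~0 up to base_lr over
    warmup_steps, then apply an optional multiplicative decay per step."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr * decay ** (step - warmup_steps)
```

In a training loop this function would be called once per step (or per epoch) to set the optimizer's learning rate before each update.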
Transfer Learning for Brain Tumor Segmentation

Gliomas are the most common malignant brain tumors and are treated with chemoradiotherapy and surgery. Magnetic Resonance Imaging (MRI) is used by radiotherapists to manually segment brain lesions and to observe their development throughout the therapy. The manual image segmentation process is time-consuming, and results tend to vary among different human raters. Therefore, there is a substantial demand for automatic image segmentation algorithms that produce a reliable and accurate segmentation of various brain tissue types. Recent advances in deep learning have led to convolutional neural network architectures that excel at various visual recognition tasks. They have been successfully applied to the medical context, including medical image segmentation. In particular, fully convolutional networks (FCNs) such as the U-Net produce state-of-the-art results in the automatic segmentation of brain tumors. MRI brain scans are volumetric and exist in various co-registered modalities that serve as input channels for these FCN architectures. Training algorithms for brain tumor segmentation on this complex input requires large amounts of computational resources and is prone to overfitting. In this work, we construct FCNs with pretrained convolutional encoders. We show that we can stabilize the training process this way and achieve an improvement with respect to Dice scores and Hausdorff distances. We also test our method on a privately obtained clinical dataset.

Jonas Wacker, Marcelo Ladeira, Jose Eduardo Vaz Nascimento
Efficient Embedding Network for 3D Brain Tumor Segmentation

3D medical image processing with deep learning greatly suffers from a lack of data. Thus, studies in this field are limited compared to works on 2D natural image analysis, where very large datasets exist. As a result, powerful and efficient 2D convolutional neural networks have been developed and trained. In this paper, we investigate a way to transfer the performance of a two-dimensional classification network to three-dimensional semantic segmentation of brain tumors. We propose an asymmetric U-Net that incorporates the EfficientNet model as part of the encoding branch. As the input data is 3D, the first layers of the encoder are devoted to reducing the third dimension in order to fit the input of the EfficientNet network. Experimental results on validation and test data from the BraTS 2020 challenge demonstrate that the proposed method achieves promising performance.

Hicham Messaoudi, Ahror Belaid, Mohamed Lamine Allaoui, Ahcene Zetout, Mohand Said Allili, Souhil Tliba, Douraied Ben Salem, Pierre-Henri Conze
Segmentation of the Multimodal Brain Tumor Images Used Res-U-Net

Gliomas are the most common brain tumors and have a high mortality. Magnetic resonance imaging (MRI) is useful for assessing gliomas, and segmentation of multimodal brain tissues in 3D medical images is of great significance for brain diagnosis. Because manual segmentation is time-consuming, an automated and accurate segmentation method is required. Accurately segmenting multimodal brain images remains a challenging task. To address this problem, we employ residual neural blocks and a U-Net architecture to build a novel network. We have evaluated the performance of different primary residual neural blocks in building U-Net. Our proposed method was evaluated on the validation set of BraTS 2020, where our model produces an effective segmentation of the complete, core and enhancing tumor regions, with Dice Similarity Coefficient (DSC) values of 0.89, 0.78 and 0.72. On the testing set, our model achieved DSC results of 0.87, 0.82 and 0.80. The residual convolutional block is especially useful for improving performance when building the model. Our proposed method is inherently general and is a powerful tool for studies of medical images of brain tumors.

Jindong Sun, Yanjun Peng, Dapeng Li, Yanfei Guo
Vox2Vox: 3D-GAN for Brain Tumour Segmentation

Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histological sub-regions, i.e., peritumoral edema, necrotic core, enhancing and non-enhancing tumour core. Although brain tumours can easily be detected using multi-modal MRI, accurate tumor segmentation is a challenging task. Hence, using the data provided by the BraTS Challenge 2020, we propose a 3D volume-to-volume Generative Adversarial Network for segmentation of brain tumours. The model, called Vox2Vox, generates realistic segmentation outputs from multi-channel 3D MR images, segmenting the whole, core and enhancing tumor with mean Dice scores of 87.20%, 81.14%, and 78.67%, and 95th-percentile Hausdorff distances of 6.44 mm, 24.36 mm, and 18.95 mm for the BraTS testing set, after ensembling 10 Vox2Vox models obtained with a 10-fold cross-validation. The code is available at https://github.com/mdciri/Vox2Vox .

Marco Domenico Cirillo, David Abramian, Anders Eklund
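The 95th-percentile Hausdorff distance quoted above (and by several entries below) measures how far two segmentation boundaries diverge while discarding the worst 5% of outliers. A deliberately simplified NumPy sketch, using all foreground voxels instead of surface voxels only; `hd95` and the toy masks are illustrative, not an evaluation-grade implementation:

```python
import numpy as np

def hd95(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Simplified 95th-percentile Hausdorff distance between two binary masks.

    Uses all foreground voxels rather than only surface voxels, so it is a
    toy illustration of the metric, not a drop-in evaluation tool.
    """
    pts_a = np.argwhere(mask_a)
    pts_b = np.argwhere(mask_b)
    # Pairwise Euclidean distances between the two point clouds.
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    # Directed nearest-neighbor distances in both directions,
    # then the 95th percentile over all of them.
    a_to_b = d.min(axis=1)
    b_to_a = d.min(axis=0)
    return float(np.percentile(np.concatenate([a_to_b, b_to_a]), 95))

m1 = np.zeros((8, 8), dtype=bool); m1[2:5, 2:5] = True
m2 = np.zeros((8, 8), dtype=bool); m2[3:6, 3:6] = True
print(hd95(m1, m1))  # 0.0 for identical masks
```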
Automatic Brain Tumor Segmentation with Scale Attention Network

Automatic segmentation of brain tumors is an essential but challenging step for extracting quantitative imaging biomarkers for accurate tumor detection, diagnosis, prognosis, treatment planning and assessment. The Multimodal Brain Tumor Segmentation Challenge 2020 (BraTS 2020) provides a common platform for comparing different automatic algorithms on multi-parametric Magnetic Resonance Imaging (mpMRI) in the tasks of 1) brain tumor segmentation in MRI scans; 2) prediction of patient overall survival (OS) from pre-operative MRI scans; 3) distinction of true tumor recurrence from treatment-related effects; and 4) evaluation of uncertainty measures in segmentation. We participated in the image segmentation challenge by developing a fully automatic segmentation network based on an encoder-decoder architecture. In order to better integrate information across different scales, we propose a dynamic scale attention mechanism that incorporates low-level details with high-level semantics from feature maps at different scales. Our framework was trained using the 369 challenge training cases provided by BraTS 2020, and achieved an average Dice Similarity Coefficient (DSC) of 0.8828, 0.8433 and 0.8177, as well as a 95% Hausdorff distance (in millimeters) of 5.2176, 17.9697 and 13.4298 on 166 testing cases for whole tumor, tumor core and enhanced tumor, respectively, which ranked 3rd among 693 registrations in the BraTS 2020 challenge.

Yading Yuan
Impact of Spherical Coordinates Transformation Pre-processing in Deep Convolution Neural Networks for Brain Tumor Segmentation and Survival Prediction

Pre-processing and data augmentation play an important role in deep convolutional neural networks (DCNNs). While several methods aim for standardization and augmentation of the dataset, we here propose a novel method that feeds DCNNs with spherical-space-transformed input data, which could facilitate feature learning better than standard Cartesian-space images and volumes. In this work, the spherical coordinates transformation has been applied as a pre-processing method that, used in conjunction with normal MRI volumes, improves the accuracy of brain tumor segmentation and patient overall survival (OS) prediction on the Brain Tumor Segmentation (BraTS) Challenge 2020 dataset. The LesionEncoder framework was then applied to automatically extract features from the DCNN models, achieving an OS prediction accuracy of 0.586 on the validation dataset, which is one of the best results according to the BraTS 2020 leaderboard.

Carlo Russo, Sidong Liu, Antonio Di Ieva
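The spherical-coordinates pre-processing described above amounts to resampling the volume on an (r, θ, φ) grid centered in the image, so that radial structure becomes axis-aligned. A toy nearest-neighbor version (the function `to_spherical` and all grid sizes are assumptions, not the authors' implementation):

```python
import numpy as np

def to_spherical(vol: np.ndarray, n_r=16, n_theta=16, n_phi=16) -> np.ndarray:
    """Resample a cubic volume onto an (r, theta, phi) grid (nearest neighbor).

    Rays from the volume centre are sampled at increasing radius, so each
    output axis corresponds to radius, polar angle or azimuth.
    """
    c = (np.array(vol.shape) - 1) / 2.0                       # volume centre
    r = np.linspace(0, min(vol.shape) / 2.0 - 1, n_r)
    theta = np.linspace(0, np.pi, n_theta)                    # polar angle
    phi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)    # azimuth
    R, T, P = np.meshgrid(r, theta, phi, indexing="ij")
    x = c[0] + R * np.sin(T) * np.cos(P)
    y = c[1] + R * np.sin(T) * np.sin(P)
    z = c[2] + R * np.cos(T)
    idx = np.rint(np.stack([x, y, z])).astype(int)
    idx = np.clip(idx, 0, np.array(vol.shape)[:, None, None, None] - 1)
    return vol[idx[0], idx[1], idx[2]]

vol = np.random.rand(32, 32, 32)
sph = to_spherical(vol)
print(sph.shape)  # (16, 16, 16)
```

At r = 0 every ray starts from the same centre voxel, so the first radial slice of the output is constant; trilinear interpolation would replace the `rint` rounding in a less crude version.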
Overall Survival Prediction for Glioblastoma on Pre-treatment MRI Using Robust Radiomics and Priors

Patients with glioblastoma multiforme (GBM) have a very low overall survival (OS) time, due to the rapid growth and invasiveness of this brain tumor. As a contribution to the overall survival prediction task within the Brain Tumor Segmentation Challenge (BraTS), we classify GBM patients into overall survival classes based on information derived from pre-treatment Magnetic Resonance Imaging (MRI). The top-ranked methods from past years almost exclusively used shape and position features. This is a remarkable contrast to current advances in GBM radiomics, which show a benefit of intensity-based features. This discrepancy may be caused by the inconsistent acquisition parameters in a multi-center setting. In this contribution, we test whether normalizing the images based on the healthy tissue intensities enables the robust use of intensity features in this challenge. Based on these normalized images, we test the performance of 176 combinations of feature selection techniques and classifiers. Additionally, we test the incorporation of a sequence and robustness prior to limit the performance drop when models are applied to unseen data. The most robust performance on the training data (accuracy: 0.52 ± 0.09) was achieved with random forest regression, but this accuracy could not be maintained on the test set.

Yannick Suter, Urspeter Knecht, Roland Wiest, Mauricio Reyes
Glioma Segmentation Using Encoder-Decoder Network and Survival Prediction Based on Cox Analysis

Glioma imaging analysis is a challenging task. In this paper, we used an encoder-decoder structure to perform glioma segmentation. The most important characteristic of the presented segmentation structure is that it can extract richer features while greatly reducing the number of network parameters and the consumption of computing resources. Texture, first-order statistics and shape-based features were extracted from the BraTS 2020 dataset. Then, we use Cox survival analysis to perform feature selection on the extracted features. Finally, we use a random forest regression model to predict the survival time of the patients. The result of survival prediction with five-fold cross-validation on the training dataset is better than that of the baseline system.

Enshuai Pang, Wei Shi, Xuan Li, Qiang Wu
Brain Tumor Segmentation with Self-ensembled, Deeply-Supervised 3D U-Net Neural Networks: A BraTS 2020 Challenge Solution

Brain tumor segmentation is a critical task for a patient's disease management. In order to automate and standardize this task, we trained multiple U-Net-like neural networks, mainly with deep supervision and stochastic weight averaging, on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 training dataset. Two independent ensembles of models from two different training pipelines were trained, and each produced a brain tumor segmentation map. These two label maps per patient were then merged, taking into account the performance of each ensemble for specific tumor subregions. Our performance on the online validation dataset with test-time augmentation was as follows: Dice of 0.81, 0.91 and 0.85; Hausdorff (95%) of 20.6, 4.3 and 5.7 mm for the enhancing tumor, whole tumor and tumor core, respectively. Similarly, our solution achieved a Dice of 0.79, 0.89 and 0.84, as well as Hausdorff (95%) of 20.4, 6.7 and 19.5 mm on the final test dataset, ranking us among the top ten teams. More complicated training schemes and neural network architectures were investigated, without significant performance gain, at the cost of greatly increased training time. Overall, our approach yielded good and balanced performance for each tumor subregion. Our solution is open sourced at https://github.com/lescientifik/open_brats2020 .

Théophraste Henry, Alexandre Carré, Marvin Lerousseau, Théo Estienne, Charlotte Robert, Nikos Paragios, Eric Deutsch
Brain Tumour Segmentation Using a Triplanar Ensemble of U-Nets on MR Images

Gliomas appear with wide variation in their characteristics both in terms of their appearance and location on brain MR images, which makes robust tumour segmentation highly challenging and leads to high inter-rater variability even in manual segmentations. In this work, we propose a triplanar ensemble network, with an independent tumour core prediction module, for accurate segmentation of these tumours and their sub-regions. On evaluating our method on the MICCAI Brain Tumor Segmentation (BraTS) challenge validation dataset, for tumour sub-regions, we achieved a Dice similarity coefficient of 0.77 for both enhancing tumour (ET) and tumour core (TC). In the case of the whole tumour (WT) region, we achieved a Dice value of 0.89, which is on par with the top-ranking methods from BraTS’17-19. Our method achieved an evaluation score that was the equal 5th highest value (with our method ranking in 10th place) in the BraTS’20 challenge, with mean Dice values of 0.81, 0.89 and 0.84 on ET, WT and TC regions respectively on the BraTS’20 unseen test dataset.

Vaanathi Sundaresan, Ludovica Griffanti, Mark Jenkinson
MRI Brain Tumor Segmentation Using a 2D-3D U-Net Ensemble

Three 2D networks, one for each patient plane (axial, sagittal and coronal), plus a 3D network were ensembled for tumor segmentation of MRI images, with final Dice scores of 0.75 for the enhancing tumor (ET), 0.81 for the whole tumor (WT) and 0.78 for the tumor core (TC). A survival prediction model was designed in Matlab, based on features extracted from the automatic segmentation. Gross tumor size and location seem to play a major role in survival prediction. A final accuracy of 0.617 was achieved.

Jaime Marti Asenjo, Alfonso Martinez-Larraz Solís
Multimodal Brain Tumor Segmentation and Survival Prediction Using a 3D Self-ensemble ResUNet

In this paper, we propose a 3D self-ensemble ResUNet (srUNet) deep neural network architecture for brain tumor segmentation, and a machine-learning-based method for overall survival prediction of patients with gliomas. The UNet architecture has been widely used for semantic image segmentation, including medical imaging segmentation tasks such as brain tumor segmentation. In this work, we utilize the srUNet to segment brain tumors, and the segmented tumors are then used for survival prediction. We apply the proposed method to the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 validation dataset for both tumor segmentation and survival prediction. The tumor segmentation result shows Dice similarity coefficients (DSC) of 0.7634, 0.899, and 0.816 for enhancing tumor (ET), whole tumor (WT), and tumor core (TC), respectively. For the survival prediction method, we achieve 56.4% classification accuracy with a mean square error (MSE) of 101697, and 55.2% accuracy with an MSE of 56169 for training and validation, respectively. In the testing phase, the proposed method achieves DSCs of 0.786, 0.881, and 0.823 for ET, WT, and TC, respectively. It also achieves an accuracy of 0.43 for overall survival prediction.

Linmin Pei, A. K. Murat, Rivka Colen
MRI Brain Tumor Segmentation and Uncertainty Estimation Using 3D-UNet Architectures

Automation of brain tumor segmentation in 3D magnetic resonance images (MRIs) is key to assessing the diagnosis and treatment of the disease. In recent years, convolutional neural networks (CNNs) have shown improved results in the task. However, high memory consumption is still a problem in 3D-CNNs. Moreover, most methods do not include uncertainty information, which is especially critical in medical diagnosis. This work studies 3D encoder-decoder architectures trained with patch-based techniques to reduce memory consumption and decrease the effect of unbalanced data. The different trained models are then used to create an ensemble that leverages the properties of each model, thus increasing the performance. We also introduce voxel-wise uncertainty information, both epistemic and aleatoric, using test-time dropout (TTD) and test-time data augmentation (TTA), respectively. In addition, a hybrid approach is proposed that helps increase the accuracy of the segmentation. The model and uncertainty estimation measurements proposed in this work have been used in the BraTS’20 Challenge for tasks 1 and 3, regarding tumor segmentation and uncertainty estimation.

Laura Mora Ballestar, Veronica Vilaplana
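The TTD/TTA idea above reduces to aggregating several stochastic forward passes per voxel and reading the spread of the predictions as uncertainty. A sketch with a stand-in stochastic predictor in place of a real dropout-enabled network (all names and the noise model are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_predict(x: np.ndarray) -> np.ndarray:
    """Stand-in for one stochastic forward pass (e.g. with dropout left on)."""
    logits = x + rng.normal(scale=0.3, size=x.shape)   # dropout-like perturbation
    return 1.0 / (1.0 + np.exp(-logits))               # sigmoid probability

def mc_uncertainty(x: np.ndarray, n_samples: int = 20):
    """Voxel-wise mean prediction and predictive entropy over MC samples."""
    probs = np.stack([stochastic_predict(x) for _ in range(n_samples)])
    mean = probs.mean(axis=0)
    eps = 1e-7
    entropy = -(mean * np.log(mean + eps) + (1 - mean) * np.log(1 - mean + eps))
    return mean, entropy

x = np.full((4, 4, 4), 3.0)           # voxels the "model" is confident about
x[0] = 0.0                            # an ambiguous slab (logit near 0)
mean, ent = mc_uncertainty(x)
print(ent[0].mean() > ent[1].mean())  # ambiguous voxels are more uncertain
```

For test-time augmentation the perturbation is applied to the input (flips, shifts) and inverted on the output before averaging, rather than injected into the network.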
Utility of Brain Parcellation in Enhancing Brain Tumor Segmentation and Survival Prediction

In this paper, we propose a UNet-based brain tumor segmentation method and a linear-model-based survival prediction method. The effectiveness of UNet in automatically segmenting brain tumors from multimodal magnetic resonance (MR) images has been validated. Rather than on the network architecture, we focused on making use of additional information (brain parcellation), a coarse-to-fine training and testing strategy, and ensemble techniques to improve the segmentation performance. We then developed a linear classification model for survival prediction. Different from previous studies that mainly employ features from brain tumor segmentation, we also extracted features from brain parcellation, which further improved the prediction accuracy. On the challenge testing dataset, the proposed approach yielded average Dice scores of 88.43%, 84.51%, and 78.93% for the whole tumor, tumor core, and enhancing tumor in the segmentation task, and an overall accuracy of 0.533 in the survival prediction task.

Yue Zhang, Jiewei Wu, Weikai Huang, Yifan Chen, Ed. X. Wu, Xiaoying Tang
Uncertainty-Driven Refinement of Tumor-Core Segmentation Using 3D-to-2D Networks with Label Uncertainty

The BraTS dataset contains a mixture of high-grade and low-grade gliomas, which have a rather different appearance: previous studies have shown that performance can be improved by separated training on low-grade gliomas (LGGs) and high-grade gliomas (HGGs), but in practice this information is not available at test time to decide which model to use. By contrast with HGGs, LGGs often present no sharp boundary between the tumor core and the surrounding edema, but rather a gradual reduction of tumor-cell density. Utilizing our 3D-to-2D fully convolutional architecture, DeepSCAN, which ranked highly in the 2019 BraTS challenge and was trained using an uncertainty-aware loss, we separate cases into those with a confidently segmented core, and those with a vaguely segmented or missing core. Since by assumption every tumor has a core, we reduce the threshold for classification of core tissue in those cases where the core, as segmented by the classifier, is vaguely defined or missing. We then predict survival of high-grade glioma patients using a fusion of linear regression and random forest classification, based on age, number of distinct tumor components, and number of distinct tumor cores. We present results on the validation dataset of the Multimodal Brain Tumor Segmentation Challenge 2020 (segmentation and uncertainty challenge), and on the testing set, where the method achieved 4th place in segmentation, 1st place in uncertainty estimation, and 1st place in survival prediction.

Richard McKinley, Micheal Rebsamen, Katrin Dätwyler, Raphael Meier, Piotr Radojewski, Roland Wiest
Multi-decoder Networks with Multi-denoising Inputs for Tumor Segmentation

Automatic segmentation of brain glioma from multimodal MRI scans plays a key role in clinical trials and practice. Unfortunately, manual segmentation is very challenging, time-consuming, costly, and often inaccurate despite human expertise due to the high variance and high uncertainty in the human annotations. In the present work, we develop an end-to-end deep-learning-based segmentation method using a multi-decoder architecture by jointly learning three separate sub-problems using a partly shared encoder. We also propose to apply smoothing methods to the input images to generate denoised versions as additional inputs to the network. The validation performance indicates an improvement when using the proposed method. The proposed method was ranked 2nd in the task of Quantification of Uncertainty in Segmentation in the Brain Tumors in Multimodal Magnetic Resonance Imaging Challenge 2020.

Minh H. Vu, Tufve Nyholm, Tommy Löfstedt
MultiATTUNet: Brain Tumor Segmentation and Survival Multitasking

Segmentation of glioma from three-dimensional magnetic resonance imaging (MRI) is useful for diagnosis and surgical treatment of patients with brain tumors. Manual segmentation is expensive, requiring medical specialists. In recent years, the Brain Tumor Segmentation Challenge (BraTS) has been calling on researchers to submit automated glioma segmentation and survival prediction methods for evaluation and discussion over its public, multimodality MRI dataset with manual annotations. This work presents an exploration of different solutions to the problem, using 3D UNets with self-attention for multitasking both predictions, as well as training (2D) EfficientDet-derived segmentations, with the best results submitted to the official challenge leaderboard. We show that end-to-end multitasking of survival and segmentation led, in this case, to better results.

Diedre Carmo, Leticia Rittner, Roberto Lotufo
A Two-Stage Cascade Model with Variational Autoencoders and Attention Gates for MRI Brain Tumor Segmentation

Automatic MRI brain tumor segmentation is of vital importance for the disease diagnosis, monitoring, and treatment planning. In this paper, we propose a two-stage encoder-decoder based model for brain tumor subregional segmentation. Variational autoencoder regularization is utilized in both stages to prevent the overfitting issue. The second-stage network adopts attention gates and is trained additionally using an expanded dataset formed by the first-stage outputs. On the BraTS 2020 validation dataset, the proposed method achieves the mean Dice score of 0.9041, 0.8350, and 0.7958, and Hausdorff distance (95%) of 4.953, 6.299, 23.608 for the whole tumor, tumor core, and enhancing tumor, respectively. The corresponding results on the BraTS 2020 testing dataset are 0.8729, 0.8357, and 0.8205 for Dice score, and 11.4288, 19.9690, and 15.6711 for Hausdorff distance. The code is publicly available at https://github.com/shu-hai/two-stage-VAE-Attention-gate-BraTS2020 .

Chenggang Lyu, Hai Shu
Multidimensional and Multiresolution Ensemble Networks for Brain Tumor Segmentation

In this work, we developed multiple 2D and 3D segmentation models with multiresolution input to segment brain tumor components, and then ensembled them to obtain robust segmentation maps. Ensembling reduced overfitting and resulted in a more generalized model. Multiparametric MR images of 335 subjects from the BraTS 2019 challenge were used for training the models. Further, we tested a classical machine learning algorithm with features extracted from the segmentation maps to classify the subjects' survival range. Preliminary results on the BraTS 2019 validation dataset demonstrated excellent performance, with Dice scores of 0.898, 0.784 and 0.779 for the whole tumor (WT), tumor core (TC), and enhancing tumor (ET), respectively, and an accuracy of 34.5% for predicting survival. The ensemble of multiresolution 2D networks achieved Dice scores of 88.75%, 83.28% and 79.34% for WT, TC and ET, respectively, on a test dataset of 166 subjects.

Gowtham Krishnan Murugesan, Sahil Nalawade, Chandan Ganesh, Ben Wagner, Fang F. Yu, Baowei Fei, Ananth J. Madhuranthakam, Joseph A. Maldjian
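Ensembling per-model label maps, as in the entry above, is often done by voxel-wise majority voting. A minimal sketch (the function `majority_vote` and its tie-breaking rule are illustrative, not the authors' exact scheme):

```python
import numpy as np

def majority_vote(label_maps) -> np.ndarray:
    """Voxel-wise majority vote over integer label maps (ties -> lowest label)."""
    stacked = np.stack(label_maps)            # shape: (n_models, ...) of int labels
    n_labels = int(stacked.max()) + 1
    # Count, for each voxel, how many models voted for each label.
    votes = np.stack([(stacked == label).sum(axis=0) for label in range(n_labels)])
    return votes.argmax(axis=0)

m1 = np.array([[0, 1], [2, 2]])
m2 = np.array([[0, 1], [1, 2]])
m3 = np.array([[0, 2], [1, 2]])
print(majority_vote([m1, m2, m3]))
# [[0 1]
#  [1 2]]
```

Averaging per-class probability maps before the argmax is a common softer alternative when the models output probabilities rather than hard labels.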
Cascaded Coarse-to-Fine Neural Network for Brain Tumor Segmentation

A cascaded framework of coarse-to-fine networks is proposed to segment brain tumors from multi-modality MR images into three subregions: enhancing tumor, whole tumor and tumor core. The framework is designed to decompose this multi-class segmentation into two sequential tasks according to the hierarchical relationship among these regions. In the first task, a coarse-to-fine model based on Global Context Network predicts the segmentation of the whole tumor, which provides a bounding box of all three substructures to crop the input MR images. In the second task, cropped multi-modality MR images are fed into another two coarse-to-fine models based on NvNet, trained on small patches, to generate segmentations of the tumor core and enhancing tumor, respectively. Experiments with the BraTS 2020 validation set show that the proposed method achieves average Dice scores of 0.8003, 0.9123 and 0.8630 for enhancing tumor, whole tumor and tumor core, respectively. The corresponding values for the BraTS 2020 testing set were 0.81715, 0.88229 and 0.83085, respectively.

Shuojue Yang, Dong Guo, Lu Wang, Guotai Wang
Low-Rank Convolutional Networks for Brain Tumor Segmentation

The automated segmentation of brain tumors is crucial for various clinical purposes from diagnosis to treatment planning to follow-up evaluations. The vast majority of effective models for tumor segmentation are based on convolutional neural networks with millions of parameters being trained. Such complex models can be highly prone to overfitting especially in cases where the amount of training data is insufficient. In this work, we devise a 3D U-Net-style architecture with residual blocks, in which low-rank constraints are imposed on weights of the convolutional layers in order to reduce overfitting. Within the same architecture, this helps to design networks with several times fewer parameters. We investigate the effectiveness of the proposed technique on the BraTS 2020 challenge.

Pooya Ashtari, Frederik Maes, Sabine Van Huffel
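The low-rank constraint described above can be pictured on a single convolutional layer: flatten its weight tensor to a (c_out) × (c_in·k·k) matrix and hold that matrix at rank r, which caps the parameter count. A back-of-the-envelope sketch (function names and layer sizes are illustrative, not taken from the paper):

```python
import numpy as np

def conv_params(c_in, c_out, k):
    """Parameters in a full k x k convolution layer (no bias)."""
    return c_in * c_out * k * k

def low_rank_conv_params(c_in, c_out, k, rank):
    """Same layer with its (c_out) x (c_in*k*k) weight matrix factored as
    W = U @ V, with U: (c_out, rank) and V: (rank, c_in*k*k)."""
    return c_out * rank + rank * c_in * k * k

full = conv_params(128, 128, 3)                   # 128*128*9 = 147456
lowr = low_rank_conv_params(128, 128, 3, 16)      # 2048 + 18432 = 20480
print(full, lowr, round(full / lowr, 1))          # ~7x fewer parameters

# A rank-16 weight built explicitly: its matrix rank can never exceed 16.
U = np.random.randn(128, 16)
V = np.random.randn(16, 128 * 3 * 3)
W = U @ V
print(np.linalg.matrix_rank(W) <= 16)  # True
```

In a network the factorization is typically realized as two stacked convolutions (a k × k conv to `rank` channels followed by a 1 × 1 conv), so the constraint is enforced by construction rather than by projection.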
Automated Brain Tumour Segmentation Using Cascaded 3D Densely-Connected U-Net

Accurate brain tumour segmentation is a crucial step towards improving disease diagnosis and proper treatment planning. In this paper, we propose a deep-learning based method to segment a brain tumour into its subregions: whole tumour, tumour core and enhancing tumour. The proposed architecture is a 3D convolutional neural network based on a variant of the U-Net architecture of Ronneberger et al. [17] with three main modifications: (i) a heavy encoder, light decoder structure using residual blocks; (ii) employment of dense blocks instead of skip connections; and (iii) utilization of self-ensembling in the decoder part of the network. The network was trained and tested using two different approaches: a multitask framework to segment all tumour subregions at the same time, and a three-stage cascaded framework to segment one subregion at a time. An ensemble of the results from both frameworks was also computed. To address the class imbalance issue, appropriate patch extraction was employed in a pre-processing step. Connected component analysis was utilized in the post-processing step to reduce false positive predictions. Experimental results on the BraTS 2020 validation dataset demonstrate that the proposed model achieved average Dice scores of 0.90, 0.83, and 0.78 for whole tumour, tumour core and enhancing tumour, respectively.

Mina Ghaffari, Arcot Sowmya, Ruth Oliver
Segmentation then Prediction: A Multi-task Solution to Brain Tumor Segmentation and Survival Prediction

Accurate brain tumor segmentation and survival prediction are two fundamental but challenging tasks in the computer-aided diagnosis of gliomas. Traditionally, these two tasks were performed independently, without considering the correlation between them. We believe that both tasks should be performed under a unified framework so that they can mutually benefit each other. In this paper, we propose a multi-task deep learning model called segmentation then prediction (STP) to segment brain tumors and predict patient overall survival time. The STP model is composed of a segmentation module and a survival prediction module. The former uses 3D U-Net as its backbone, and the latter uses both local and global features. The local features are extracted by the last layer of the segmentation encoder, while the global features are produced by a global branch, which uses 3D ResNet-50 as its backbone. The STP model is jointly optimized for both tasks. We evaluated the proposed STP model on the BraTS 2020 validation dataset and achieved an average Dice similarity coefficient (DSC) of 0.790, 0.910, 0.851 for the segmentation of enhancing tumor, whole tumor, and tumor core, respectively, and an accuracy of 65.5% for survival prediction.

Guojing Zhao, Bowen Jiang, Jianpeng Zhang, Yong Xia
Enhancing MRI Brain Tumor Segmentation with an Additional Classification Network

Brain tumor segmentation plays an essential role in medical image analysis. In recent studies, deep convolution neural networks (DCNNs) have proven extremely powerful for tackling tumor segmentation tasks. In this paper, we propose a novel training method that enhances the segmentation results by adding an additional classification branch to the network. The whole network was trained end-to-end on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 training dataset. On the BraTS test set, it achieved an average Dice score of 80.57%, 85.67% and 82.00%, as well as Hausdorff distances (95%) of 14.22, 7.36 and 23.27, respectively, for the enhancing tumor, the whole tumor and the tumor core.

Hieu T. Nguyen, Tung T. Le, Thang V. Nguyen, Nhan T. Nguyen
Self-training for Brain Tumour Segmentation with Uncertainty Estimation and Biophysics-Guided Survival Prediction

Gliomas are among the most common types of malignant brain tumours in adults. Given the intrinsic heterogeneity of gliomas, multi-parametric magnetic resonance imaging (mpMRI) is the most effective technique for characterising gliomas and their sub-regions. Accurate segmentation of the tumour sub-regions on mpMRI is of clinical significance, as it provides valuable information for treatment planning and survival prediction. Thanks to recent developments in deep learning, the accuracy of automated medical image segmentation has improved significantly. In this paper, we leverage the widely used attention and self-training techniques to conduct reliable brain tumour segmentation and uncertainty estimation. Based on the segmentation result, we present a biophysics-guided prognostic model for the prediction of overall survival. Our method of uncertainty estimation won second place in the MICCAI 2020 BraTS Challenge.

Chengliang Dai, Shuo Wang, Hadrien Raynaud, Yuanhan Mo, Elsa Angelini, Yike Guo, Wenjia Bai
Backmatter
Metadata
Title
Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries
Edited by
Alessandro Crimi
Spyridon Bakas, Ph.D.
Copyright year
2021
Electronic ISBN
978-3-030-72084-1
Print ISBN
978-3-030-72083-4
DOI
https://doi.org/10.1007/978-3-030-72084-1
