
2019 | Book

Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries

4th International Workshop, BrainLes 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 16, 2018, Revised Selected Papers, Part I

Edited by: Alessandro Crimi, Spyridon Bakas, Hugo Kuijf, Farahani Keyvan, Mauricio Reyes, Theo van Walsum

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This two-volume set LNCS 11383 and 11384 constitutes revised selected papers from the 4th International MICCAI Brainlesion Workshop, BrainLes 2018, as well as the International Multimodal Brain Tumor Segmentation (BraTS), Ischemic Stroke Lesion Segmentation (ISLES), MR Brain Image Segmentation (MRBrainS18), Computational Precision Medicine (CPM), and Stroke Workshop on Imaging and Treatment Challenges (SWITCH) challenges, which were held jointly at the Medical Image Computing and Computer Assisted Intervention (MICCAI) conference in Granada, Spain, in September 2018.
The 92 papers presented in this volume were carefully reviewed and selected from 95 submissions. They were organized in topical sections named: brain lesion image analysis; brain tumor image segmentation; ischemic stroke lesion image segmentation; grand challenge on MR brain segmentation; computational precision medicine; stroke workshop on imaging and treatment challenges.

Table of contents

Frontmatter

Invited Talk

Frontmatter
Multimodal Patho-Connectomics of Brain Injury

The paper introduces the concept of patho-connectomics, an injury-specific connectome creation and analysis paradigm that treats injuries as a diffuse disease pervading the whole brain network. The foundation of the “patho-connectomic” ideology of analysis is that no part of the brain can function in isolation, and that abnormality in the brain network is a combination of structural and functional anomalies. Brain injuries introduce anomalies in this brain network that can affect the quality of brain tissue, break a pathway, and lead to disrupted connectivity in neural circuits, which in turn affects functionality. Thus, patho-connectomes go beyond the traditional connectome and include information on tissue quality and on structural and functional connectivity, forming a comprehensive map of the brain network. Information from diffusion and functional MRI is combined to create these patho-connectomes. The creation and analysis of patho-connectomes are discussed for brain tumors, which suffer from the challenges of mass effect and infiltration of the peritumoral region that in turn affect the surgical and radiation plan, and for traumatic brain injury, where the exact injury may be difficult to determine but the effect is diffuse, manifesting in heterogeneous symptoms. A network-based approach to analyzing both of these forms of injury helps determine the effect of pathology on the whole brain, while incorporating recovery and plasticity. Thus, patho-connectomics, with its broad network perspective on brain injuries, has the potential to cause a major paradigm shift in brain injury research, facilitating subject-specific analysis and paving the way for precision medicine.

Ragini Verma, Yusuf Osmanlioglu, Abdol Aziz Ould Ismail
CT Brain Perfusion: A Clinical Perspective

Computed tomography perfusion (CTP) is an important exam performed in neuroradiology that adds functional information regarding hemodynamics to that obtained from morphological imaging, and thereby supports clinical decision-making in several vascular and non-vascular conditions. This paper outlines the clinical applications of CTP and its advantages and disadvantages compared with MRI. Factors affecting the results of CTP are also discussed. Finally, a clinically oriented overview of the calculated perfusion parameters and their value is provided.

Arsany Hakim, Roland Wiest
Adverse Effects of Image Tiling on Convolutional Neural Networks

Convolutional neural network models achieve state-of-the-art accuracy on image classification, localization, and segmentation tasks. A fully convolutional topology, such as U-Net, may be trained on images of one size and perform inference on images of another size. This feature allows researchers to work with images too large to fit into memory by simply dividing the image into small tiles, making predictions on these tiles, and stitching the tiles back together as the prediction for the whole image. We compare tiled predictions of a U-Net model with predictions based on the whole image. Our results show that using tiling to perform inference leads to a significant increase in both false positive and false negative predictions compared to using the whole image for inference. We are able to modestly improve the predictions by increasing both the tile size and the amount of tile overlap, but this comes at a greater computational cost and still produces results inferior to using the whole image. Although tiling has been used to produce acceptable segmentation results in the past, we recommend performing inference on the whole image to achieve the best results and advance the state-of-the-art accuracy of CNNs.

G. Anthony Reina, Ravi Panchumarthy
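To make the tiling comparison concrete, here is a minimal NumPy sketch of overlap-and-average tiled inference; the `model` argument is a hypothetical callable that maps a (tile, tile) array to a same-sized probability map, and the sketch is not the authors' implementation.

```python
import numpy as np

def _tile_positions(size, tile, stride):
    """Top-left coordinates of tiles covering [0, size); assumes size >= tile."""
    pos = list(range(0, size - tile + 1, stride))
    if pos[-1] != size - tile:          # make sure the image edge is covered
        pos.append(size - tile)
    return pos

def predict_tiled(image, model, tile=128, overlap=32):
    """Overlap-and-average tiled inference.

    `model` is a hypothetical callable mapping a (tile, tile) array to a
    same-sized probability map.  The seam artifacts of this scheme are what
    the paper measures against whole-image inference."""
    h, w = image.shape
    stride = tile - overlap
    pred = np.zeros((h, w), dtype=np.float32)
    count = np.zeros((h, w), dtype=np.float32)
    for y in _tile_positions(h, tile, stride):
        for x in _tile_positions(w, tile, stride):
            pred[y:y + tile, x:x + tile] += model(image[y:y + tile, x:x + tile])
            count[y:y + tile, x:x + tile] += 1.0
    return pred / count                  # average predictions where tiles overlap
```

Increasing `tile` and `overlap` corresponds to the mitigation reported above: seams are averaged over more predictions, at a higher computational cost.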
An Update on Machine Learning in Neuro-Oncology Diagnostics

Imaging biomarkers in neuro-oncology are used for diagnosis, prognosis and treatment response monitoring. Magnetic resonance imaging is typically used throughout the patient pathway because routine structural imaging provides detailed anatomical and pathological information and advanced techniques provide additional physiological detail. Following image feature extraction, machine learning allows accurate classification in a variety of scenarios. Machine learning also enables image feature extraction de novo, although the low prevalence of brain tumours makes such approaches challenging. Much research is devoted to determining molecular profiles, histological tumour grade and prognosis at the time that patients first present with a brain tumour. Following treatment, differentiating a treatment response from a post-treatment related effect is clinically important and also an area of study. Most of the evidence is low level, having been obtained retrospectively and in single centres.

Thomas C. Booth

Brain Lesion Image Analysis

Frontmatter
MIMoSA: An Approach to Automatically Segment T2 Hyperintense and T1 Hypointense Lesions in Multiple Sclerosis

Magnetic resonance imaging (MRI) is crucial for in vivo detection and characterization of white matter lesions (WML) in multiple sclerosis (MS). The most widely established MRI outcome measure is the volume of hyperintense lesions on T2-weighted images (T2L). Unfortunately, T2L are non-specific for the level of tissue destruction and show a weak relationship to clinical status. Interest in lesions appearing hypointense on T1-weighted images (T1L) (“black holes”), which provide more specificity for axonal loss and a closer link to neurologic disability, has thus grown. The technical difficulty of T1L segmentation has led investigators to rely on time-consuming manual assessments prone to inter- and intra-rater variability. We implement MIMoSA, a current T2L automatic segmentation approach, to delineate T1L. Using cross-validation, MIMoSA proved robust for segmenting both T2L and T1L. For T2L, a Sørensen-Dice coefficient (DSC) of 0.6 and partial AUC (pAUC) up to 1% false positive rate of 0.69 were achieved. For T1L, 0.48 DSC and 0.63 pAUC were achieved. The correlations between EDSS and manual versus automatic volumes were similar for T1L (0.32 manual vs. 0.34 MIMoSA) and T2L (0.34 vs. 0.34).

Alessandra M. Valcarcel, Kristin A. Linn, Fariha Khalid, Simon N. Vandekar, Shahamat Tauhid, Theodore D. Satterthwaite, John Muschelli, Rohit Bakshi, Russell T. Shinohara
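For reference, the Sørensen-Dice coefficient reported above can be computed from two binary masks as in the following plain-NumPy sketch (illustrative, not the authors' code).

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Sørensen-Dice coefficient between two binary lesion masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # convention: two empty masks agree perfectly
    return 2.0 * intersection / denom if denom > 0 else 1.0
```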
CNN Prediction of Future Disease Activity for Multiple Sclerosis Patients from Baseline MRI and Lesion Labels

New T2w and gadolinium-enhancing lesions in Magnetic Resonance Images (MRI) are indicators of new disease activity in Multiple Sclerosis (MS) patients. Predicting future disease activity could help predict the progression of the disease as well as the efficacy of treatment. We introduce a convolutional neural network (CNN) framework for future MRI disease activity prediction in relapsing-remitting MS (RRMS) patients from multi-modal MR images at baseline, and illustrate how the inclusion of T2w lesion labels at baseline can significantly improve prediction accuracy by drawing the attention of the network to the location of lesions. Next, we develop a segmentation network to automatically infer lesion labels when semi-manual expert lesion labels are unavailable. Both prediction and segmentation networks are trained and tested on a large, proprietary, multi-center, multi-modal, clinical trial dataset consisting of 1068 patients. Testing based on a dataset of 95 patients shows that our framework reaches very high performance levels (sensitivities of 80.11% and specificities of 79.16%) when semi-manual expert labels are included as input at baseline in addition to multi-modal MRI. Even with inferred lesion labels replacing semi-manual labels, the method significantly outperforms an identical end-to-end CNN which only includes baseline multi-modal MRI.

Nazanin Mohammadi Sepahvand, Tal Hassner, Douglas L. Arnold, Tal Arbel
Learning Data Augmentation for Brain Tumor Segmentation with Coarse-to-Fine Generative Adversarial Networks

There is a common belief that the successful training of deep neural networks requires many annotated training samples, which are often expensive and difficult to obtain, especially in the biomedical imaging field. While it is often easy for researchers to use data augmentation to expand the size of training sets, constructing and generating generic augmented data that teaches the network the desired invariance and robustness properties is challenging in practice with traditional data augmentation techniques. In this paper, we propose a novel automatic data augmentation method that uses generative adversarial networks to learn augmentations that enable machine-learning-based methods to learn from the available annotated samples more efficiently. The architecture consists of a coarse-to-fine generator to capture the manifold of the training sets and generate generic augmented data. In our experiments, we show the efficacy of our approach on Magnetic Resonance Imaging (MRI) data, achieving an improvement of 3.5% in Dice coefficient on the BRATS15 Challenge dataset compared to traditional augmentation approaches. Our proposed method also boosts a common segmentation network to reach state-of-the-art performance on the BRATS15 Challenge.

Tony C. W. Mok, Albert C. S. Chung
Multipath Densely Connected Convolutional Neural Network for Brain Tumor Segmentation

This paper presents a novel Multipath Densely Connected Convolutional Neural Network (MDCNN) for automatically segmenting gliomas with unknown sizes, shapes and positions. Our network architecture is based on the Multipath Convolutional Neural Network [21], which considers both local and contextual patches of segmentation information, including original MRI images, symmetry information and spatial information. Motivated by the goal of reducing the feature loss induced by under-utilization of feature maps, we propose to fuse feature maps from the original local and contextual paths at three different units and introduce three more densely connected paths. Consequently, three auxiliary segmentation paths together with the original local and contextual paths form the complete segmentation network. The model's training and validation are performed on the BraTS2017 dataset. Experimental results demonstrate that the proposed network is capable of effectively extracting more accurate tumor locations and contours with improved stability.

Cong Liu, Weixin Si, Yinling Qian, Xiangyun Liao, Qiong Wang, Yong Guo, Pheng-Ann Heng
Multi-institutional Deep Learning Modeling Without Sharing Patient Data: A Feasibility Study on Brain Tumor Segmentation

Deep learning models for semantic segmentation of images require large amounts of data. In the medical imaging domain, acquiring sufficient data is a significant challenge. Labeling medical image data requires expert knowledge. Collaboration between institutions could address this challenge, but sharing medical data with a centralized location faces various legal, privacy, technical, and data-ownership challenges, especially among international institutions. In this study, we introduce the first use of federated learning for multi-institutional collaboration, enabling deep learning modeling without sharing patient data. Our quantitative results demonstrate that the performance of federated semantic segmentation models (Dice = 0.852) on multimodal brain scans is similar to that of models trained by sharing data (Dice = 0.862). We compare federated learning with two alternative collaborative learning methods and find that they fail to match the performance of federated learning.

Micah J. Sheller, G. Anthony Reina, Brandon Edwards, Jason Martin, Spyridon Bakas
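A minimal sketch of one federated-averaging round illustrates the idea of sharing model updates instead of patient data; it assumes hypothetical `institution_loaders` that yield (image, label) batches suitable for a cross-entropy loss and equal weighting of institutions, which may differ from the aggregation actually used in the paper.

```python
import copy
import torch

def federated_round(global_model, institution_loaders, local_epochs=1, lr=1e-3):
    """One round of federated averaging: each institution trains a copy of the
    model locally on its own data; only weights (never patient data) are
    returned and averaged into the shared model."""
    local_states = []
    for loader in institution_loaders:
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        local.train()
        for _ in range(local_epochs):
            for images, labels in loader:
                opt.zero_grad()
                loss = torch.nn.functional.cross_entropy(local(images), labels)
                loss.backward()
                opt.step()
        local_states.append(local.state_dict())
    # equal-weight parameter averaging across institutions (a simplification)
    avg_state = {k: torch.stack([s[k].float() for s in local_states]).mean(dim=0)
                 for k in local_states[0]}
    global_model.load_state_dict(avg_state)
    return global_model
```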
Patient-Specific Registration of Pre-operative and Post-recurrence Brain Tumor MRI Scans

Registering brain magnetic resonance imaging (MRI) scans containing pathologies is challenging primarily due to large deformations caused by the pathologies, leading to missing correspondences between scans. However, the registration task is important and directly related to personalized medicine, as registering between baseline pre-operative and post-recurrence scans may allow the evaluation of tumor infiltration and recurrence. While many registration methods exist, most of them do not specifically account for pathologies. Here, we propose a framework for the registration of longitudinal image-pairs of individual patients diagnosed with glioblastoma. Specifically, we present a combined image registration/reconstruction approach, which makes use of a patient-specific principal component analysis (PCA) model of image appearance to register baseline pre-operative and post-recurrence brain tumor scans. Our approach uses the post-recurrence scan to construct a patient-specific model, which then guides the registration of the pre-operative scan. Quantitative and qualitative evaluations of our framework on 10 patient image-pairs indicate that it provides excellent registration performance without requiring (1) any human intervention or (2) prior knowledge of tumor location, growth or appearance.

Xu Han, Spyridon Bakas, Roland Kwitt, Stephen Aylward, Hamed Akbari, Michel Bilello, Christos Davatzikos, Marc Niethammer
Segmentation of Post-operative Glioblastoma in MRI by U-Net with Patient-Specific Interactive Refinement

Accurate volumetric change estimation of glioblastoma is very important for post-surgical treatment follow-up. In this paper, an interactive segmentation method was developed and evaluated with the aim of guiding volumetric estimation of glioblastoma. A U-Net-based fully convolutional network is used for initial segmentation of glioblastoma from post-contrast MR images. The max-flow algorithm is applied to the probability map of the U-Net to update the initial segmentation, and the result is displayed to the user for interactive refinement. Network updates are performed based on the corrected contour, using patient-specific learning to deal with large context variations among different images. The proposed method is evaluated on a clinical MR image database of 15 glioblastoma patients with longitudinal scan data. The experimental results show an improvement in segmentation performance due to patient-specific fine-tuning. The proposed method is computationally fast and efficient compared to state-of-the-art interactive segmentation tools. This tool could be useful for post-surgical treatment follow-up with minimal user intervention.

Ashis Kumar Dhara, Kalyan Ram Ayyalasomayajula, Erik Arvids, Markus Fahlström, Johan Wikström, Elna-Marie Larsson, Robin Strand
Characterizing Peritumoral Tissue Using DTI-Based Free Water Elimination

Finding an accurate microstructural characterization of the peritumoral region is essential for distinguishing edema from infiltration, for differentiating tumor types, and for improving tractography in this region. Characterization of healthy versus pathological tissue is a key concern when modeling tissue microstructure in the peritumoral area, which is confounded by the presence of free water (e.g., edema). Although diffusion MRI (dMRI) is being used to obtain the microstructural characterization of tissue, most methods are based on advanced dMRI acquisition schemes that are infeasible in the clinical environment, which predominantly uses diffusion tensor imaging (DTI), and are designed mostly for healthy tissue. In this paper, we propose a novel approach for microstructural characterization of peritumoral tissue that involves multi-compartment modeling and a robust free water elimination (FWE) method to improve the estimation of free water in both healthy and pathological tissue. As FWE requires the fitting of two compartments, it is an ill-posed problem in DTI acquisitions. Solving this problem requires an optimization routine, which in turn relies on an initialization step; unlike existing schemes, we choose this initialization to explicitly model the presence of edema and infiltration. We have validated the method extensively on simulated data, and applied it to data from brain tumor patients to demonstrate the improvement in tractography in the peritumoral region, which is important for surgical planning.

Abdol Aziz Ould Ismail, Drew Parker, Moises Hernandez-Fernandez, Steven Brem, Simon Alexander, Ofer Pasternak, Emmanuel Caruyer, Ragini Verma
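The two-compartment model underlying free water elimination can be written down directly. The sketch below uses an isotropic (scalar) tissue diffusivity and a brute-force grid search purely for illustration, whereas the paper fits full tensors with a tailored initialization.

```python
import numpy as np

D_WATER = 3.0e-3   # free-water diffusivity at body temperature, mm^2/s

def two_compartment_signal(bvals, d_tissue, f_water, s0=1.0):
    """Bi-exponential free-water model (isotropic simplification):
    S(b) = S0 * [(1 - f) * exp(-b * D_tissue) + f * exp(-b * D_water)]."""
    return s0 * ((1.0 - f_water) * np.exp(-bvals * d_tissue)
                 + f_water * np.exp(-bvals * D_WATER))

def fit_free_water_fraction(signal, bvals, s0=1.0):
    """Brute-force grid search over (D_tissue, f_water); the flat error surface
    seen for single-shell DTI data is the ill-posedness that motivates the
    careful initialization described in the paper."""
    best_d, best_f, best_err = None, None, np.inf
    for d in np.linspace(0.1e-3, 2.5e-3, 50):
        for f in np.linspace(0.0, 1.0, 51):
            err = np.sum((two_compartment_signal(bvals, d, f, s0) - signal) ** 2)
            if err < best_err:
                best_d, best_f, best_err = d, f, err
    return best_d, best_f
```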
Deep 2D Encoder-Decoder Convolutional Neural Network for Multiple Sclerosis Lesion Segmentation in Brain MRI

In this paper, we propose an automated segmentation approach based on a deep two-dimensional fully convolutional neural network to segment brain multiple sclerosis lesions from multimodal magnetic resonance images. The proposed model combines two deep subnetworks. An encoding network extracts feature maps at various resolutions, and a decoding network upconvolves the feature maps, combining them through shortcut connections during upsampling. To the best of our knowledge, the proposed model is the first slice-based fully convolutional neural network for multiple sclerosis lesion segmentation. We evaluated our network on a freely available dataset from the ISBI MS challenge with encouraging results from a clinical perspective.

Shahab Aslani, Michael Dayan, Vittorio Murino, Diego Sona
Shallow vs Deep Learning Architectures for White Matter Lesion Segmentation in the Early Stages of Multiple Sclerosis

In this work, we present a comparison of a shallow and a deep learning architecture for the automated segmentation of white matter lesions in MR images of multiple sclerosis patients. In particular, we train and test both methods on early stage disease patients, to verify their performance in challenging conditions, more similar to a clinical setting than what is typically provided in multiple sclerosis segmentation challenges. Furthermore, we evaluate a prototype naive combination of the two methods, which refines the final segmentation. All methods were trained on 32 patients, and the evaluation was performed on a pure test set of 73 cases. Results show low lesion-wise false positives (30%) for the deep learning architecture, whereas the shallow architecture yields the best Dice coefficient (63%) and volume difference (19%). Combining both shallow and deep architectures further improves the lesion-wise metrics (69% and 26% lesion-wise true and false positive rate, respectively).

Francesco La Rosa, Mário João Fartaria, Tobias Kober, Jonas Richiardi, Cristina Granziera, Jean-Philippe Thiran, Meritxell Bach Cuadra
Detection of Midline Brain Abnormalities Using Convolutional Neural Networks

Patients with mental diseases have an increased prevalence of abnormalities in midline brain structures. One of these abnormalities is the cavum septum pellucidum (CSP), which occurs when the septum pellucidum fails to fuse. The detection and study of these brain abnormalities in Magnetic Resonance Imaging requires a tedious and time-consuming process of manual image analysis, and results can differ when the same abnormality is analyzed manually by different experts applying different criteria. In this context, it would be useful to develop an automatic method for locating the abnormality and providing a measure of its depth. In this work, we explore, for the first time in the literature, an automated detection method based on CNNs. In particular, we compare different CNN models and classical machine learning classification algorithms on a dataset of 861 subjects (639 patients with mood or psychotic disorders and 223 healthy controls) and obtain very promising results, reaching over 99% accuracy, sensitivity and specificity.

Aleix Solanes, Joaquim Radua, Laura Igual
Deep Autoencoding Models for Unsupervised Anomaly Segmentation in Brain MR Images

Reliably modeling normality and differentiating abnormal appearances from normal cases is a very appealing approach for detecting pathologies in medical images. A plethora of such unsupervised anomaly detection approaches has been proposed in the medical domain, based on statistical methods, content-based retrieval, clustering and, recently, deep learning. Previous approaches to deep unsupervised anomaly detection model local patches of normal anatomy with variants of autoencoders or GANs, and detect anomalies either as outliers in the learned feature space or from large reconstruction errors. In contrast to these patch-based approaches, we show that deep spatial autoencoding models can efficiently capture the normal anatomical variability of entire 2D brain MR slices. A variety of experiments on real MR data containing MS lesions corroborates our hypothesis that we can detect and even delineate anomalies in brain MR images by simply comparing input images to their reconstructions. Results show that constraints on the latent space and adversarial training can further improve segmentation performance over standard deep representation learning.

Christoph Baur, Benedikt Wiestler, Shadi Albarqouni, Nassir Navab
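The core detection rule, comparing an input slice to its reconstruction, reduces to a few lines; `autoencoder` below is a hypothetical callable trained on healthy slices only, and the fixed threshold is an illustrative stand-in for whatever post-processing the authors apply.

```python
import numpy as np

def anomaly_map(slice_2d, autoencoder, threshold=0.1):
    """Residual-based anomaly detection: a model trained only on healthy slices
    reconstructs normal anatomy well, so large reconstruction errors flag
    candidate lesions.  `autoencoder` maps a slice to its reconstruction."""
    reconstruction = autoencoder(slice_2d)
    residual = np.abs(slice_2d - reconstruction)   # per-pixel reconstruction error
    return residual, residual > threshold          # error map and binary segmentation
```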
Brain Tumor Detection and Classification from Multi-sequence MRI: Study Using ConvNets

In this paper, we thoroughly investigate the power of Deep Convolutional Neural Networks (ConvNets) for classification of brain tumors using multi-sequence MR images. First we propose three ConvNets, trained from scratch, on MRI patches, slices, and multi-planar volumetric slices. The suitability of transfer learning for the task is studied next by applying two existing ConvNet models (VGGNet and ResNet) pre-trained on the ImageNet dataset, fine-tuning the last few layers. A leave-one-patient-out (LOPO) testing scheme is used to evaluate the performance of the ConvNets. Results demonstrate that the ConvNet trained on the multi-planar volumetric dataset achieves the best accuracy in all cases. Unlike conventional models, it obtains a testing accuracy of 97% without any additional effort towards extraction and selection of features. We also study the properties of self-learned kernels/filters in different layers, through visualization of the intermediate layer outputs.

Subhashis Banerjee, Sushmita Mitra, Francesco Masulli, Stefano Rovetta
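A sketch of the transfer-learning variant (an ImageNet-pretrained ResNet with only the last layers fine-tuned) using torchvision; the choice of ResNet-50, the frozen/unfrozen split, and the two-class head are illustrative assumptions rather than the exact configuration in the paper.

```python
import torch.nn as nn
import torchvision

def build_finetune_resnet(num_classes=2):
    """ImageNet-pretrained ResNet-50 with all layers frozen except the last
    residual stage and a newly created classifier head (the 'fine-tune the
    last few layers' strategy compared against training from scratch)."""
    model = torchvision.models.resnet50(pretrained=True)  # newer torchvision prefers weights=...
    for param in model.parameters():
        param.requires_grad = False
    for param in model.layer4.parameters():                # unfreeze last residual stage
        param.requires_grad = True
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new head, trainable by default
    return model
```

Only the parameters with `requires_grad=True` need to be handed to the optimizer during fine-tuning.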
Voxel-Wise Comparison with a-contrario Analysis for Automated Segmentation of Multiple Sclerosis Lesions from Multimodal MRI

We introduce a new framework for the automated and unsupervised segmentation of Multiple Sclerosis lesions from multimodal Magnetic Resonance images. It relies on a voxel-wise approach to detect local white matter abnormalities, combined with an a-contrario analysis that takes local information into account. First, a voxel-wise comparison of multimodal patient images to a set of controls is performed. Then, region-based probabilities are estimated using an a-contrario approach. Finally, correction for multiple testing is performed. Validation was undertaken on a multi-site clinical dataset of 53 MS patients with varying numbers and volumes of lesions. We show that the proposed framework outperforms the widely used FDR correction for this type of analysis, particularly for low lesion loads.

Francesca Galassi, Olivier Commowick, Emmanuel Vallee, Christian Barillot
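The voxel-wise comparison step, together with the Benjamini-Hochberg FDR correction that the a-contrario framework is compared against, can be sketched as follows (NumPy/SciPy; the a-contrario region analysis itself is not reproduced here).

```python
import numpy as np
from scipy.stats import norm

def voxelwise_abnormality(patient, controls, alpha=0.05):
    """Voxel-wise comparison of one patient image to a stack of control images,
    followed by Benjamini-Hochberg FDR correction (the baseline the a-contrario
    framework is evaluated against).  `controls` has shape (n_controls, ...)."""
    mu = controls.mean(axis=0)
    sigma = controls.std(axis=0, ddof=1) + 1e-6
    z = (patient - mu) / sigma                      # one z-score per voxel
    p = norm.sf(z).ravel()                          # one-sided p-values (hyperintense voxels)
    order = np.argsort(p)
    n = p.size
    bh_thresholds = alpha * np.arange(1, n + 1) / n
    passed = p[order] <= bh_thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    detected = np.zeros(n, dtype=bool)
    detected[order[:k]] = True                      # reject the k smallest p-values
    return detected.reshape(patient.shape)
```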
A Graph Based Similarity Measure for Assessing Altered Connectivity in Traumatic Brain Injury

Traumatic brain injury (TBI) arises from disruptions in the structural connectivity of the brain, which further manifest as alterations in functional connectivity, eventually leading to cognitive and behavioral deficits. Although patient-specific measures quantifying the severity of disease are crucial due to the heterogeneous character of the disease, neuroimaging-based measures that can assess the level of injury in TBI using structural and functional connectivity are very scarce. Taking a graph theoretical approach, we propose a measure to quantify how dissimilar a TBI patient is relative to healthy subjects using their structural and functional connectomes. On a TBI dataset of 39 moderate-to-severe TBI patients examined 3, 6, and 12 months post injury, and 35 healthy controls, we demonstrate that the dissimilarity scores obtained by the proposed measure distinguish patients from controls using both modalities. We also show that the dissimilarity scores significantly correlate with post-traumatic amnesia, processing speed, and executive function among TBI patients. Our results indicate the applicability of the proposed measure in quantitatively assessing the extent of injury. The measure is applicable to both structural and functional connectivity, paving the way for a joint analysis in the future.

Yusuf Osmanlıoğlu, Jacob A. Alappatt, Drew Parker, Junghoon Kim, Ragini Verma
Multi-scale Convolutional-Stack Aggregation for Robust White Matter Hyperintensities Segmentation

Segmentation of both large and small white matter hyperintensities/lesions in brain MR images is a challenging task which has drawn much attention in recent years. We propose a multi-scale aggregation model framework to deal with volume-varied lesions. Firstly, we present a specifically designed network for small lesion segmentation called Stack-Net, in which multiple convolutional layers are connected 'one by one', aiming to preserve rich local spatial information of small lesions before the sub-sampling layer. Secondly, we aggregate multi-scale Stack-Nets with different receptive fields to learn multi-scale contextual information of both large and small lesions. Our model is evaluated on the recent MICCAI WMH Challenge dataset and outperforms the state of the art on lesion recall and lesion F1-score under 5-fold cross-validation. It claimed first place on the hidden test set after independent evaluation by the challenge organizers. In addition, we further test our pre-trained models on a Multiple Sclerosis lesion dataset with 30 subjects under cross-center evaluation. Results show that the aggregation model is effective in learning multi-scale spatial information.

Hongwei Li, Jianguo Zhang, Mark Muehlau, Jan Kirschke, Bjoern Menze
Holistic Brain Tumor Screening and Classification Based on DenseNet and Recurrent Neural Network

We present a holistic brain tumor screening and classification method for detecting and distinguishing multiple types of brain tumors on MR images. The challenges arise from the significant variations in location, shape, size, and contrast of these tumors. The proposed algorithm starts with feature extraction from axial slices using dense convolutional neural networks; the obtained sequential features of multiple frames are then fed into a recurrent neural network for classification. Unlike most other brain tumor classification algorithms, our framework is free from manual or automatic region-of-interest segmentation. The results reported on a public dataset and a population of 422 proprietary MRI scans diagnosed as normal, gliomas, meningiomas and metastatic brain tumors demonstrate the effectiveness and efficiency of our method.

Yufan Zhou, Zheshuo Li, Hong Zhu, Changyou Chen, Mingchen Gao, Kai Xu, Jinhui Xu
3D Texture Feature Learning for Noninvasive Estimation of Gliomas Pathological Subtype

Pathological subtype serves as an important marker in gliomas, with considerable diagnostic and prognostic value. However, previous identification of pathological subtype relies on tumor samples, which is invasive. In this paper, we propose a 3D texture feature learning method based on sparse representation (SR) theory to noninvasively estimate the pathological subtype of gliomas. Firstly, we develop a 3D patch-based SR model to extract 3D tumor texture features from magnetic resonance (MR) images. Then, by considering the physical meaning and characteristics of the extracted features, instead of performing feature selection directly, we further extract deep features describing the statistical differences between the texture features of different tumors for subtype estimation. 213 subjects are divided into a cross-validation cohort and an independent testing cohort to validate the proposed method. The proposed method achieves encouraging performance, with accuracies of 91.43% and 88.57% using T1 contrast-enhanced and T2-FLAIR MR images, respectively.

Guoqing Wu, Yuanyuan Wang, Jinhua Yu
Pathology Segmentation Using Distributional Differences to Images of Healthy Origin

Fully supervised segmentation methods require a large training cohort of already segmented images, providing information at the pixel level of each image. We present a method to automatically segment and model pathologies in medical images, trained solely on data labelled at the image level as either healthy or containing a visual defect. We base our method on CycleGAN, an image-to-image translation technique, to translate images between the domains of healthy and pathological images. We extend the core idea with two key contributions. Implementing the generators as residual generators allows us to explicitly model the segmentation of the pathology. Realizing the translation from the healthy to the pathological domain with a variational autoencoder allows us to specify one representation of the pathology, as this transformation is otherwise not unique. Our model hence not only allows us to create pixelwise semantic segmentations, it is also able to create inpaintings for the segmentations to render the pathological image healthy. Furthermore, we can draw new unseen pathology samples from this model based on the distribution in the data. We show quantitatively that our method is able to segment pathologies with surprising accuracy, being only slightly inferior to a state-of-the-art fully supervised method, although the latter has per-pixel rather than per-image training information. Moreover, we show qualitative results of both the segmentations and inpaintings. Our findings motivate further research into weakly supervised segmentation using image-level annotations, allowing for faster and cheaper acquisition of training data without a large sacrifice in segmentation accuracy.

Simon Andermatt, Antal Horváth, Simon Pezold, Philippe Cattin
Multi-stage Association Analysis of Glioblastoma Gene Expressions with Texture and Spatial Patterns

Glioblastoma is the most aggressive malignant primary brain tumor, with a poor prognosis. The heterogeneous neuroimaging, pathologic, and molecular features of glioblastoma provide opportunities for subclassification, prognostication, and the development of targeted therapies. Magnetic resonance imaging has the capability of quantifying specific phenotypic imaging features of these tumors, and additional insight into disease mechanism can be gained by exploring genetic foundations. Here, we use gene expressions to evaluate associations with various quantitative imaging phenomic features extracted from magnetic resonance imaging. We highlight novel correlations by carrying out multi-stage genome-wide association tests at the gene level through a non-parametric correlation framework that allows testing multiple hypotheses about the integrated imaging phenotype-genotype relationship more efficiently and at lower computational cost. Our results showed several genes previously associated with glioblastoma and other types of cancer, such as LRRC46 (chromosome 17), EPGN (chromosome 4) and TUBA1C (chromosome 12), all associated with our radiographic tumor features.

Samar S. M. Elsheikh, Spyridon Bakas, Nicola J. Mulder, Emile R. Chimusa, Christos Davatzikos, Alessandro Crimi

Ischemic Stroke Lesion Image Segmentation

Frontmatter
Stroke Lesion Segmentation with 2D Novel CNN Pipeline and Novel Loss Function

Recently, CT perfusion (CTP) has been used to triage ischemic stroke patients in the early stage because of its speed, availability, and lack of contraindications. However, CTP data alone, even with the generated perfusion maps, are not enough to describe the precise location of the infarct core or penumbra. Considering the good performance demonstrated on Diffusion Weighted Imaging (DWI), we propose a CTP data analysis technique that uses Generative Adversarial Networks (GAN) [2] to generate DWI and then segments the regions of ischemic stroke lesion on top of the generated DWI using a convolutional neural network (CNN) and a novel loss function. Specifically, our CNN structure consists of a generator, a discriminator and a segmentator. The generator synthesizes DWI from CT, producing a high-quality representation for the subsequent segmentator, while the discriminator competes with the generator to identify whether its input DWI is real or generated. We propose a novel segmentation loss function that combines a weighted cross-entropy loss and a generalized Dice loss [1] to balance the positive and negative loss in the training phase; the weighted cross-entropy loss highlights the area of the stroke lesion to enhance structural contrast. We also introduce other techniques, such as GN [13] and Maxout [14], in the proposed network, and data augmentation is used in the training phase. In our experiments, an average Dice coefficient of 60.65% is achieved with four-fold cross-validation. These results show that the proposed network combined with the proposed loss function achieves better performance in CTP data analysis than traditional methods using CTP perfusion parameters.

Pengbo Liu
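A sketch of a combined loss of the kind described, weighted binary cross-entropy plus a soft Dice term, in PyTorch; the weighting constants are placeholders, not the values used in the paper.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, target, pos_weight=5.0, dice_weight=1.0, eps=1e-6):
    """Weighted binary cross-entropy + soft Dice loss for lesion segmentation.
    `logits` and `target` have the same shape; `target` is a {0,1} mask.
    The positive-class weight counteracts the heavy background imbalance."""
    bce = F.binary_cross_entropy_with_logits(
        logits, target.float(),
        pos_weight=torch.tensor(pos_weight, device=logits.device))
    probs = torch.sigmoid(logits)
    intersection = (probs * target).sum()
    dice = (2.0 * intersection + eps) / (probs.sum() + target.sum() + eps)
    return bce + dice_weight * (1.0 - dice)
```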
Contra-Lateral Information CNN for Core Lesion Segmentation Based on Native CTP in Acute Stroke

Stroke is an important neuro-vascular disease, for which distinguishing necrotic from salvageable brain tissue is a useful, albeit challenging task. In light of the Ischemic Stroke Lesion Segmentation challenge (ISLES) of 2018, we propose a deep learning-based method to automatically segment necrotic brain tissue at the time of acute imaging based on CT perfusion (CTP) imaging. The proposed convolutional neural network (CNN) makes a voxelwise segmentation of the core lesion. In order to predict the tissue status in one voxel, it processes CTP information from the surrounding spatial context of both this voxel and a corresponding voxel at the contra-lateral side of the brain. The contra-lateral CTP information is obtained by registering the reflection w.r.t. a sagittal plane through the geometric center. Preprocessed training data were augmented during training, and five-fold cross-validation was used to find the optimal hyperparameters. We used weighted binary cross-entropy and re-calibrated the probabilities upon prediction. The final segmentations were obtained by thresholding at 0.50 the probabilities from the model that performed best w.r.t. the Dice score during training. The proposed method achieves an average validation Dice score of 0.45. Our method slightly underperformed on the ISLES 2018 challenge test dataset, with the average Dice score dropping to 0.38.

Jeroen Bertels, David Robben, Dirk Vandermeulen, Paul Suetens
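The contralateral input can be sketched as a simple left-right flip stacked as a second channel; the paper additionally reflects about a sagittal plane through the geometric centre and registers the reflection, which is omitted in this illustration.

```python
import numpy as np

def add_contralateral_channel(volume, lr_axis=0):
    """Pair each CTP volume with its left-right mirrored copy so that, for
    every voxel, the network also sees the corresponding contralateral context.
    Assumes the volume is already aligned so the mid-sagittal plane sits at the
    centre of `lr_axis`; the paper additionally registers the reflection."""
    mirrored = np.flip(volume, axis=lr_axis)
    return np.stack([volume, mirrored], axis=0)   # channel-first (2, ...) pair
```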
Dense Multi-path U-Net for Ischemic Stroke Lesion Segmentation in Multiple Image Modalities

Delineating infarcted tissue in ischemic stroke lesions is crucial to determine the extent of damage and the optimal treatment for this life-threatening condition. However, this problem remains challenging due to the high variability of ischemic stroke location and shape.

Jose Dolz, Ismail Ben Ayed, Christian Desrosiers
Multi-scale Deep Convolutional Neural Network for Stroke Lesions Segmentation on CT Images

Ischemic stroke is the leading cerebral vascular disease causing disability and death worldwide. Accurate and automatic segmentation of stroke lesions can assist diagnosis and treatment planning. However, manual segmentation is time-consuming and subjective for neurologists. In this study, we propose a novel deep convolutional neural network developed for the segmentation of stroke lesions from CT perfusion images. The main structure of the network is U-shaped. We embed dense blocks into the U-shaped network to alleviate the over-fitting problem. In order to acquire larger receptive fields, we use multiple kernels to divide the network into two paths, and use the dropout regularization method to achieve effective feature mapping. In addition, we use multi-scale features to obtain more spatial information, which helps improve segmentation performance. In the post-processing stage of soft segmentation, we use image median filtering to eliminate specific noise and make the segmentation edges smoother. We evaluated our method in the Ischemic Stroke Lesion Segmentation Challenge (ISLES) 2018, where our approach ranked highly on the testing data.

Liangliang Liu, Shuai Yang, Li Meng, Min Li, Jianxin Wang
Ischemic Stroke Lesion Segmentation Using Adversarial Learning

Ischemic stroke occurs when clogged blood vessels block the blood supply to the brain. Segmentation of the stroke lesion is vital to improve diagnosis, outcome assessment and treatment planning. In this work, we propose a segmentation model with adversarial learning for ischemic lesion segmentation. We adopt a U-Net with skip connections and dropout as the baseline segmentation network and a fully convolutional network (FCN) as the discriminator network. The discriminator network consists of 5 convolution layers followed by leaky-ReLU and an upsampling layer to rescale the output to the size of the input map. Training a segmentation network along with an adversarial network can detect and correct higher-order inconsistencies between the ground-truth segmentation maps and those produced by the segmentor. We exploit three modalities (CT, DPWI, CBF) of acute computed tomography (CT) perfusion data provided in ISLES 2018 (Ischemic Stroke Lesion Segmentation) for ischemic lesion segmentation. Our model achieved a Dice accuracy of 42.10% with cross-validation on the training data and 39% on the testing data.

Mobarakol Islam, N. Rajiv Vaidyanathan, V. Jeya Maria Jose, Hongliang Ren
V-Net and U-Net for Ischemic Stroke Lesion Segmentation in a Small Dataset of Perfusion Data

Ischemic stroke is the result of an obstruction within a brain blood vessel, blocking the fresh blood flow and resulting in a tissue lesion. Early prediction of the ischemic stroke lesion region is important because it can help to choose the most suitable treatment. However, that is not trivial since current medical data, such as CT and MRI, have no explicit information about the future extension of the permanent lesion. A step towards efficiently using these data to predict the lesions is the use of Deep Convolutional Neural Networks, as they are able to extract “hidden” information from the data when a reasonable amount of labeled data is available and the deep networks are used properly. In order to try to extract this information, we have tested two different deep network architectures that are the state of the art in segmentation problems: V-Net and U-Net. In both networks, we tried different configurations, such as depth variations, pixel interpolations, and MRI image combinations, among others. Experiments showed the following: normalizing the voxel sizes results in better training and predictions; a deeper U-Net performs slightly better than a shallower U-Net, but requires much more computation for only a small gain in accuracy; the inclusion of the CT modality improved the results slightly; the use of only perfusion maps brought much better results than the use of raw perfusion data; and smaller lesions are harder to detect properly.

Gustavo Retuci Pinheiro, Raphael Voltoline, Mariana Bento, Leticia Rittner
Integrated Extractor, Generator and Segmentor for Ischemic Stroke Lesion Segmentation

The Ischemic Stroke Lesion Segmentation challenge 2018 asks for methods that allow the segmentation of stroke lesions based on acute CT perfusion data, and provides a dataset of 103 stroke patients and matching expert segmentations. In this paper, a novel deep learning framework with an extractor, a generator and a segmentor for ischemic stroke lesion segmentation is proposed. First, the extractor extracts feature maps from the processed perfusion-weighted imaging (PWI). Second, the output of the extractor, i.e. cerebral blood volume (CBV), cerebral blood flow (CBF), mean transit time (MTT), time to peak of the residue function (Tmax), etc., is used as input to the generator to synthesize the diffusion-weighted imaging (DWI) modality. Finally, the segmentor precisely segments the ischemic stroke lesion using the generated data. In order to prevent over-fitting, data augmentation (e.g. random rotations, random crops and radial distortion) is used in the training phase. In addition, generalized Dice loss combined with cross-entropy is used as the loss function to handle the unbalanced data. All networks are trained end-to-end from scratch using the 2018 Ischemic Stroke Lesion Challenge dataset, which contains a training set of 63 patients and a testing set of 40 patients. Our method achieves state-of-the-art segmentation accuracy on the testing set.

Tao Song, Ning Huang
ISLES Challenge: U-Shaped Convolution Neural Network with Dilated Convolution for 3D Stroke Lesion Segmentation

In this paper, we propose an algorithm for stroke lesion segmentation based on a deep convolutional neural network (CNN). The model is based on a U-shaped CNN, which has been applied successfully to other medical image segmentation tasks. The network architecture was derived from the model presented in Isensee et al. [1] and is capable of processing whole 3D images. The model incorporates convolution layers with upsampled filters – also known as dilated convolutions. This change enlarges the filter's field of view and allows the network to integrate larger context into the computation. We add the dilated convolution into different parts of the network architecture and study the impact on overall model performance. The best model, which uses the dilated convolution at the input of the network, outperforms the original architecture in nearly all evaluation metrics used. The code and trained models can be found on GitHub: http://github.com/tureckova/ISLES2018/ .

Alzbeta Tureckova, Antonio J. Rodríguez-Sánchez
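Dilated convolution enlarges the receptive field without adding weights; a small PyTorch example with illustrative layer sizes (not the paper's architecture):

```python
import torch
import torch.nn as nn

# A 3x3x3 convolution with dilation 2 covers a 5x5x5 neighbourhood with the
# same number of weights as an ordinary 3x3x3 kernel; padding=dilation keeps
# the spatial size unchanged, as required inside a U-shaped network.
dilated = nn.Conv3d(in_channels=32, out_channels=32,
                    kernel_size=3, dilation=2, padding=2)

x = torch.randn(1, 32, 24, 24, 24)    # (batch, channels, D, H, W)
print(dilated(x).shape)               # torch.Size([1, 32, 24, 24, 24])
```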
Fully Automatic Segmentation for Ischemic Stroke Using CT Perfusion Maps

We propose an algorithm for automatic segmentation of ischemic lesions using CT perfusion maps. Our method is based on an encoder-decoder fully convolutional neural network. The pre-processing step involves skull stripping, standardization of the perfusion maps, and extraction of slices with lesions as the training data. These CT perfusion maps are used to train the proposed network for automatic segmentation of stroke lesions. The network is trained by minimizing a weighted combination of cross-entropy and Dice losses. Our algorithm achieves a Dice of 0.43, precision of 0.53, and recall of 0.45 on the challenge test dataset.

Vikas Kumar Anand, Mahendra Khened, Varghese Alex, Ganapathy Krishnamurthi
Combining Good Old Random Forest and DeepLabv3+ for ISLES 2018 CT-Based Stroke Segmentation

Recent years' segmentation challenges on Ischemic Stroke Lesion Segmentation (ISLES) have attracted great interest in the medical image computing domain, reflected in >80 citations of the 2017 summary article of the initial ISLES 2015 challenge [1]. While the 2015–2017 ISLES challenges focussed on MRI images, the 2018 challenge takes into account the clinical relevance of (perfusion) CT to triage stroke patients. Thus, from a methodological point of view, it is now to be analyzed whether and to what extent the 2015–2017 methods can be adapted to automated core lesion segmentation using acute stroke CT perfusion imaging. We strive to deliver a baseline for ISLES 2018 by using two well-established machine learning-based segmentation approaches already applied in the initial ISLES 2015 challenge: random forests (RF) with classical hand-crafted image features (i.e. the most frequently used type of algorithm in ISLES 2015) and encoder-decoder-style convolutional neural networks (CNNs). In detail, for CNN-based segmentation, we employ the DeepLabv3+ architecture. The performance of the individual segmentation approaches as well as their combination is evaluated based on the ISLES 2018 training data set, and the respective results are presented. Aiming at an ISLES 2018-specific performance baseline, we neither make use of additional data other than the provided challenge data nor perform extensive data augmentation. The results highlight the potential to improve stroke lesion segmentation accuracy by combining RF and CNN information.

Lasse Böhme, Frederic Madesta, Thilo Sentker, René Werner
Volumetric Adversarial Training for Ischemic Stroke Lesion Segmentation

Ischemic stroke is one of the most common and yet deadly cerebrovascular diseases. Identifying the lesion area is an essential step for stroke management and outcome assessment. Currently, manual delineation is the gold standard for clinical diagnosis. However, inter-annotator variance and the labor-intensive nature of manual labeling can lead to observer bias or disagreement between annotators. While incorporating a computer-aided diagnosis system may alleviate these issues, other challenges such as highly varying shapes and difficult boundaries in the lesion area make the design of such a system non-trivial. To address these issues, we propose a novel adversarial training paradigm for segmenting ischemic stroke lesions. The training procedure involves the main segmentation network and an auxiliary critique network. The segmentation network is a 3D residual U-Net that produces a segmentation mask in each training iteration, while the critique network enforces high-level constraints on the segmentation network to produce predictions that mimic the ground-truth distribution. We applied the proposed model to the 2018 ISLES stroke lesion segmentation challenge dataset and achieved competitive results on the training dataset.

Hao-Yu Yang
Ischemic Stroke Lesion Segmentation in CT Perfusion Scans Using Pyramid Pooling and Focal Loss

We present a fully convolutional neural network for segmenting ischemic stroke lesions in CT perfusion images for the ISLES 2018 challenge. Treatment of stroke is time-sensitive, and current standards for lesion identification require manual segmentation, a time-consuming and challenging process. Automatic segmentation methods present the possibility of accurately identifying lesions and improving treatment planning. Our model is based on the PSPNet, a network architecture that makes use of pyramid pooling to provide global and local contextual information. To learn the varying shapes of the lesions, we train our network using focal loss, a loss function designed for the network to focus on learning the more difficult samples. We compare our model to networks trained using the U-Net and V-Net architectures. Our approach demonstrates effective performance in lesion segmentation and ranked among the top performers at the challenge conclusion.

S. Mazdak Abulnaga, Jonathan Rubin
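The focal loss used to emphasize hard voxels follows Lin et al.; here is a binary PyTorch sketch with the commonly used default parameters, which may differ from the settings in the paper.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0, alpha=0.25):
    """Binary focal loss: down-weights easy, well-classified voxels so training
    concentrates on hard lesion voxels.  `target` is a {0,1} mask with the
    same shape as `logits`."""
    target = target.float()
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * target + (1.0 - p) * (1.0 - target)            # prob. of the true class
    alpha_t = alpha * target + (1.0 - alpha) * (1.0 - target)
    return (alpha_t * (1.0 - p_t) ** gamma * bce).mean()
```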

Grand Challenge on MR Brain Segmentation

Frontmatter
MixNet: Multi-modality Mix Network for Brain Segmentation

Automated brain structure segmentation is important for many clinical quantitative analyses and diagnoses. In this work, we introduce MixNet, a 2D semantic-wise deep convolutional neural network to segment brain structures in multi-modality MRI images. The network is composed of our modified deep residual learning units. In these units, we replace the traditional convolution layer with a dilated convolutional layer, which avoids the use of pooling layers and deconvolutional layers, reducing the number of network parameters. Final predictions are made by aggregating information from multiple scales and modalities. A pyramid pooling module is used to capture spatial information of the anatomical structures at the output end. In addition, we test three architectures (MixNetv1, MixNetv2 and MixNetv3), which fuse the modalities differently, to see the effect on the results. Our network achieves state-of-the-art performance. MixNetv2 was submitted to the MRBrainS challenge at MICCAI 2018 and won 3rd place in the 3-label task. On the MRBrainS2018 dataset, which includes subjects with a variety of pathologies, overall DSCs (Dice coefficients) of 84.7% (gray matter), 87.3% (white matter) and 83.4% (cerebrospinal fluid) were obtained with only 7 subjects as training data.

Long Chen, Dorit Merhof
A Skip-Connected 3D DenseNet Networks with Adversarial Training for Volumetric Segmentation

In this paper, we propose a novel end-to-end adversarial training scheme for volumetric brain segmentation that enforces long-range spatial label contiguity and label consistency. The proposed architecture consists of two networks: a generator and a discriminator. The generator network takes a volumetric image as input and provides a volumetric probability map for each tissue. The discriminator network then learns to differentiate ground-truth maps from the probability maps of the generator network. We design the discriminator in a fully convolutional manner to differentiate the predicted probability maps from the ground-truth segmentation distribution while considering spatial information at the voxel level, which makes the discriminator difficult to learn. To overcome this, the proposed discriminator provides a 3D confidence map which indicates which regions of the probability maps are close to the ground truth. Based on the 3D confidence map information, the generator network refines its predictions toward the ground-truth maps in a high-order structure.

Toan Duc Bui, Sang-il Ahn, Yongwoo Lee, Jitae Shin
Automatic Brain Structures Segmentation Using Deep Residual Dilated U-Net

Brain image segmentation is used for visualizing and quantifying anatomical structures of the brain. We present an automated approach using 2D deep residual dilated networks which capture rich context information of different tissues for the segmentation of eight brain structures. The proposed system was evaluated in the MICCAI Brain Segmentation Challenge ( http://mrbrains18.isi.uu.nl/ ) and ranked 9th out of 22 teams. We further compared the method with a traditional U-Net using a leave-one-subject-out cross-validation setting on the public dataset. Experimental results show that the proposed method outperforms the traditional U-Net (i.e. 80.9% vs 78.3% in averaged Dice score, 4.35 mm vs 11.59 mm in averaged robust Hausdorff distance) and is computationally efficient.

Hongwei Li, Andrii Zhygallo, Bjoern Menze
3D Patchwise U-Net with Transition Layers for MR Brain Segmentation

We propose a new patch-based 3D convolutional neural network to automatically segment multiple brain structures on Magnetic Resonance (MR) images. The proposed network consists of encoding layers to extract informative features and decoding layers to reconstruct the segmentation labels. Unlike the conventional U-Net model, we use transition layers between the encoding layers and the decoding layers to emphasize the impact of feature maps in the decoding layers. Moreover, we use batch normalization on every convolution layer to make a well-generalized model. Finally, we utilize a new loss function which normalizes the categorical cross-entropy to accurately segment the relatively small regions of interest that are prone to misclassification. The proposed method ranked 1st among 22 participants at the MRBrainS18 segmentation challenge at MICCAI 2018.

Miguel Luna, Sang Hyun Park

Computational Precision Medicine

Frontmatter
Dropout-Enabled Ensemble Learning for Multi-scale Biomedical Data

Leveraging information from multiple scales is crucial to understanding complex diseases such as cancer where this could have a significant impact in improving diagnoses, patient management and treatment decisions. Recent advances in Convolutional Neural Networks (CNNs) have enabled major breakthroughs in biomedical image analysis, in particular for histopathology and radiology images. Our main contribution is a methodology to combine independent CNN models built for these two types of images in order to improve diagnostic accuracy. We train separate CNN models and combine them using a Dropout-Enabled meta-classifier. Our framework achieved second place in the MICCAI 2018 Computational Precision Medicine Challenge.

Alexandre Momeni, Marc Thibault, Olivier Gevaert
A Combined Radio-Histological Approach for Classification of Low Grade Gliomas

Deep learning based techniques have been shown to be beneficial for automating various medical image tasks, such as segmentation of lesions and automation of disease diagnosis. In this work, we demonstrate the utility of deep learning and radiomics features for classification of low grade gliomas (LGG) into astrocytoma and oligodendroglioma. The objective of this study is to use whole-slide H&E stained images and Magnetic Resonance (MR) images of the brain to make a prediction about the class of the glioma. We treat the pathology and radiology datasets separately for in-depth analysis and then combine the predictions made by the individual models to get the final class label for a patient. The pre-processing of the whole slide images involved region of interest detection, stain normalization and patch extraction. An autoencoder was trained to extract features from each patch, and these features were then used to find anomaly patches among the entire set of patches for a single Whole Slide Image. These anomaly patches from all the training slides form the dataset for training the classification model. A deep neural network based classification model was used to classify individual patches between the two classes. For the radiology dataset based analysis, each MRI scan was fed into a pre-processing pipeline which involved skull-stripping, co-registration of MR sequences to T1c, re-sampling of MR volumes to isotropic voxels and segmentation of the brain lesion. The lesions in the MR volumes were automatically segmented using a fully convolutional neural network (CNN) trained on the BraTS-2018 segmentation challenge dataset. From the segmentation maps, 64 × 64 × 64 cube patches centered around the tumor were extracted from the T1 MR images for extraction of high-level radiomic features. These features were then used to train a logistic regression classifier. After developing the two models, we used a confidence-based prediction methodology to get the final class labels for each patient. This combined approach achieved a classification accuracy of 90% on the challenge test set (n = 20). These results showcase the emerging role of deep learning and radiomics in analyzing whole-slide images and MR scans for lesion characterization.

Aditya Bagari, Ashish Kumar, Avinash Kori, Mahendra Khened, Ganapathy Krishnamurthi
Robust Segmentation of Nucleus in Histopathology Images via Mask R-CNN

Nuclei segmentation plays an important role in histopathology image analysis. Deep learning approaches have shown their strength for histopathology image processing in various studies. In this paper, we propose a novel deep learning framework for automatic nuclei segmentation. The framework adopts Mask R-CNN as the backbone and employs structure-preserving color normalization (SPCN) and watershed for pre- and post-processing. The proposed framework achieved a Dice score of 90.46% on the validation set, which demonstrates its competitive segmentation performance.

Xinpeng Xie, Yuexiang Li, Menglu Zhang, Linlin Shen

Stroke Workshop on Imaging and Treatment Challenges

Frontmatter
Perfusion Parameter Estimation Using Neural Networks and Data Augmentation

Perfusion imaging plays a crucial role in acute stroke diagnosis and treatment decision making. Current perfusion analysis relies on deconvolution of the measured signals, an operation that is mathematically ill-conditioned and requires strong regularization. We propose a neural network and a data augmentation approach to predict perfusion parameters directly from the native measurements. A comparison on simulated CT perfusion data shows that the neural network provides better estimates of both CBF and Tmax than a state-of-the-art deconvolution method, over a wide range of noise levels. The proposed data augmentation makes it possible to achieve these results with fewer than 100 datasets.

David Robben, Paul Suetens
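The deconvolution that the network avoids inverts the standard indicator-dilution forward model; below is a NumPy sketch of that forward model with a toy arterial input function and an exponential residue function (illustrative values, not from the paper).

```python
import numpy as np

def tissue_curve(aif, cbf, residue, dt=1.0):
    """Indicator-dilution forward model used in perfusion imaging:
        C_tissue(t) = CBF * (AIF convolved with R)(t)
    where R is the residue function (R(0)=1, monotonically decreasing).
    Recovering CBF*R from measured C_tissue and AIF is the ill-conditioned
    deconvolution that the proposed network sidesteps."""
    return cbf * np.convolve(aif, residue)[:len(aif)] * dt

# toy example: gamma-like bolus AIF and exponential residue with MTT = 4 s
t = np.arange(0, 60, 1.0)                       # sampled every second
aif = np.exp(-((t - 10.0) ** 2) / 20.0)
residue = np.exp(-t / 4.0)
curve = tissue_curve(aif, cbf=0.5, residue=residue)
```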
Synthetic Perfusion Maps: Imaging Perfusion Deficits in DSC-MRI with Deep Learning

In this work, we present a novel convolutional neural network based method for perfusion map generation in dynamic susceptibility contrast-enhanced perfusion imaging. The proposed architecture is trained end-to-end and solely relies on raw perfusion data for inference. We used a dataset of 151 acute ischemic stroke cases for evaluation. Our method generates perfusion maps that are comparable to the target maps used for clinical routine, while being model-free, fast, and less noisy.

Andreas Hess, Raphael Meier, Johannes Kaesmacher, Simon Jung, Fabien Scalzo, David Liebeskind, Roland Wiest, Richard McKinley
ICHNet: Intracerebral Hemorrhage (ICH) Segmentation Using Deep Learning

We develop a deep learning approach for automated intracerebral hemorrhage (ICH) segmentation from 3D computed tomography (CT) scans. Our model, ICHNet, integrates a dilated convolutional neural network (CNN) with hypercolumn features, where a modest number of pixels are sampled and the corresponding features from multiple layers are concatenated. Because pixels are sampled freely rather than as image patches, the model trains within the brain region and ignores the CT background padding. This improves the convergence time and accuracy by learning only from healthy and lesioned brain tissue. To overcome the class imbalance problem, we sample an equal number of pixels from each class. We also incorporate a 3D conditional random field (3D CRF) to smooth the predicted segmentation as a post-processing step. ICHNet achieves 87.6% Dice accuracy in hemorrhage segmentation, which is comparable to radiologists.

Mobarakol Islam, Parita Sanghani, Angela An Qi See, Michael Lucas James, Nicolas Kon Kam King, Hongliang Ren
Can Diffusion MRI Reveal Stroke-Induced Microstructural Changes in GM?

The development of noninvasive techniques to image the human brain has enabled the demonstration of structural plasticity in response to motor learning. In recent years, evidence has emerged on the potential of some measures derived from diffusion Magnetic Resonance Imaging (dMRI) as numerical biomarkers of tissue changes in regions involved in the motor network. In these works, the descriptors were extensively analysed in contralateral white matter (WM), along both single connections and networks, relying on tract-based analyses and statistical evaluation. However, their ability to detect changes in gray matter (GM) has been scarcely investigated. This work aims to assess propagator-based microstructural indices in capturing GM changes and to relate such changes to functional recovery at six months from the injury, focusing on the Diffusion Tensor Imaging (DTI) and three-dimensional Simple Harmonics Oscillator based Reconstruction and Estimation (3D-SHORE) models.

Lorenza Brusini, Ilaria Boscolo Galazzo, Mauro Zucchelli, Cristina Granziera, Gloria Menegaz
Backmatter
Metadata
Title
Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries
Edited by
Alessandro Crimi
Spyridon Bakas
Hugo Kuijf
Farahani Keyvan
Mauricio Reyes
Theo van Walsum
Copyright year
2019
Electronic ISBN
978-3-030-11723-8
Print ISBN
978-3-030-11722-1
DOI
https://doi.org/10.1007/978-3-030-11723-8