
2019 | OriginalPaper | Chapter

Improving Pathological Structure Segmentation via Transfer Learning Across Diseases

Authors : Barleen Kaur, Paul Lemaître, Raghav Mehta, Nazanin Mohammadi Sepahvand, Doina Precup, Douglas Arnold, Tal Arbel

Published in: Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data

Publisher: Springer International Publishing


Abstract

One of the biggest challenges in developing robust machine learning techniques for medical image analysis is the lack of access to the large-scale annotated image datasets needed for supervised learning. When the task is to segment pathological structures (e.g. lesions, tumors) from patient images, training on a dataset with few samples is particularly challenging due to the large class imbalance and high inter-subject variability. In this paper, we explore how best to leverage a segmentation model pre-trained on a large dataset of patient images with one disease in order to train a deep learning pathology segmentation model for a different disease, for which only a relatively small patient dataset is available. Specifically, we train a UNet model on a large-scale, proprietary, multi-center, multi-scanner Multiple Sclerosis (MS) clinical trial dataset containing over 3500 multi-modal MRI samples with expert-derived lesion labels. We then explore several transfer learning approaches that leverage the learned MS model for the task of multi-class brain tumor segmentation on the BraTS 2018 dataset. Our results indicate that adapting and fine-tuning the encoder and decoder of the network trained on the larger MS dataset improves brain tumor segmentation when few instances are available. This form of transfer learning outperforms both training the network on the BraTS dataset from scratch and several other transfer learning approaches, particularly when only a small subset of the dataset is available.
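The adaptation step described above can be sketched in a minimal, framework-free form. This is an illustration only, not the authors' implementation: the parameter names, shapes, and the `transfer_weights` helper are all hypothetical. Parameters whose shapes match are copied from the MS-trained model; the output layer, whose shape changes from binary lesion segmentation to multi-class tumor segmentation, is freshly initialized, and every parameter is then left trainable (the FT-All setting).

```python
import numpy as np

def transfer_weights(pretrained, target, rng=None):
    """Copy shape-matching parameters from `pretrained` into `target`.

    Parameters whose shapes differ (e.g. a final classification layer
    growing from 2 lesion classes to 4 tumor sub-classes) are
    re-initialized instead of copied. All parameters remain trainable,
    mirroring an FT-All fine-tuning setup.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    adapted = {}
    for name, w in target.items():
        src = pretrained.get(name)
        if src is not None and src.shape == w.shape:
            adapted[name] = src.copy()  # reuse weights learned on the MS data
        else:
            adapted[name] = rng.normal(0.0, 0.01, size=w.shape)  # fresh init
    return adapted

# Hypothetical shapes: an encoder conv layer shared across both tasks, plus
# an output layer mapping 16 features to 2 classes (MS) vs. 4 (tumor labels).
ms_model    = {"enc.conv": np.ones((16, 4, 3, 3)), "out": np.ones((2, 16, 1, 1))}
brats_model = {"enc.conv": np.zeros((16, 4, 3, 3)), "out": np.zeros((4, 16, 1, 1))}

adapted = transfer_weights(ms_model, brats_model)
```

After the transfer, the adapted network would be trained end-to-end on the (smaller) target-disease dataset, rather than freezing the copied layers.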
Footnotes
1
Note that predictions on the BraTS 2018 Validation set must contain all four tumor sub-classes; the predictions are then uploaded to the BraTS web portal for evaluation.
 
4
The percentage improvement is calculated as the difference between the FT-All and baseline Dice scores, divided by the baseline Dice score.
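Written out, the formula in this footnote is a simple relative change; this one-liner is an illustration of the stated formula, not code from the paper:

```python
def pct_improvement(baseline_dice: float, ft_all_dice: float) -> float:
    """Percentage improvement of the FT-All Dice score over the baseline."""
    return 100.0 * (ft_all_dice - baseline_dice) / baseline_dice

# e.g. a baseline Dice of 0.50 improved to 0.60 is a 20% improvement
```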
 
Metadata
Title
Improving Pathological Structure Segmentation via Transfer Learning Across Diseases
Authors
Barleen Kaur
Paul Lemaître
Raghav Mehta
Nazanin Mohammadi Sepahvand
Doina Precup
Douglas Arnold
Tal Arbel
Copyright Year
2019
DOI
https://doi.org/10.1007/978-3-030-33391-1_11
