
2017 | Original Paper | Book Chapter

Synthesis of Positron Emission Tomography (PET) Images via Multi-channel Generative Adversarial Networks (GANs)

Authors: Lei Bi, Jinman Kim, Ashnil Kumar, Dagan Feng, Michael Fulham

Published in: Molecular Imaging, Reconstruction and Analysis of Moving Body Organs, and Stroke Imaging and Treatment

Publisher: Springer International Publishing


Abstract

Positron emission tomography (PET) imaging is widely used for staging and monitoring treatment in a variety of cancers, including the lymphomas and lung cancer. Recently, there has been a marked increase in the accuracy and robustness of machine learning methods and their application to computer-aided diagnosis (CAD) systems, e.g., the automated detection and quantification of abnormalities in medical images. Successful machine learning methods require large amounts of training data; hence, the synthesis of PET images could play an important role in enlarging training sets and ultimately improving the accuracy of PET-based CAD systems. Existing approaches, such as atlas-based methods or those based on simulated or physical phantoms, have difficulty reproducing the low resolution and low signal-to-noise ratio inherent in PET images. In addition, these methods usually have limited capacity to produce a variety of synthetic PET images with large anatomical and functional differences. Hence, we propose a new method to synthesize PET data via multi-channel generative adversarial networks (M-GAN) to address these limitations. In contrast to existing medical image synthesis methods that rely on low-level features, our M-GAN approach can capture feature representations carrying high-level semantic information, based on the adversarial learning concept. In a single framework, our M-GAN takes input from the annotation (label) to synthesize regions of high uptake, e.g., tumors, and from the computed tomography (CT) images to constrain appearance consistency via CT-derived anatomical information, and outputs the synthetic PET images directly. Our experimental data from 50 lung cancer PET-CT studies show that our method produces more realistic PET images than conventional GAN methods.
Further, a PET tumor detection model trained with our synthetic PET data performed competitively with a detection model trained with real PET data (2.79% lower recall). We suggest that our approach, when real and synthetic images are used in combination, boosts the training data available for machine learning methods.
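The core idea of the abstract — a conditional GAN whose generator receives the tumor annotation and the CT image as separate input channels, and whose discriminator scores (conditioning, PET) pairs — can be sketched in a toy form. This is a minimal illustrative sketch, not the authors' implementation: the single dense layer stands in for the paper's convolutional generator, the image size and all parameter names (`Wg`, `Wd`, etc.) are invented for the example, and no training loop is shown.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 8  # toy image size; real PET slices are far larger

def generator(label, ct, params):
    """Map a 2-channel (annotation, CT) input to one synthetic PET image.

    The channels are concatenated, mirroring the multi-channel input
    described in the abstract; tanh bounds the synthetic intensities.
    """
    x = np.concatenate([label.ravel(), ct.ravel()])
    pet = np.tanh(params["Wg"] @ x + params["bg"])
    return pet.reshape(H, W)

def discriminator(label, ct, pet, params):
    """Score how 'real' a (conditioning, PET) pair looks (conditional GAN)."""
    x = np.concatenate([label.ravel(), ct.ravel(), pet.ravel()])
    return 1.0 / (1.0 + np.exp(-(params["Wd"] @ x + params["bd"])))  # sigmoid

params = {
    "Wg": rng.normal(scale=0.01, size=(H * W, 2 * H * W)),
    "bg": np.zeros(H * W),
    "Wd": rng.normal(scale=0.01, size=3 * H * W),
    "bd": 0.0,
}

label = (rng.random((H, W)) > 0.9).astype(float)  # tumor annotation mask
ct = rng.random((H, W))                           # CT-derived anatomy channel
real_pet = rng.random((H, W))                     # a real PET slice

fake_pet = generator(label, ct, params)
d_real = discriminator(label, ct, real_pet, params)
d_fake = discriminator(label, ct, fake_pet, params)

# Standard adversarial objectives (Goodfellow et al., 2014): the
# discriminator pushes d_real toward 1 and d_fake toward 0, while the
# generator is trained to push d_fake toward 1.
d_loss = -np.log(d_real) - np.log(1.0 - d_fake)
g_loss = -np.log(d_fake)
```

In the paper's actual method the conditioning channels constrain the generator so that synthesized high-uptake regions follow the annotation and the overall appearance follows the CT anatomy; here that coupling is only structural (shared input), since no training is performed.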


Metadata
Title
Synthesis of Positron Emission Tomography (PET) Images via Multi-channel Generative Adversarial Networks (GANs)
Authors
Lei Bi
Jinman Kim
Ashnil Kumar
Dagan Feng
Michael Fulham
Copyright Year
2017
DOI
https://doi.org/10.1007/978-3-319-67564-0_5