Published in: Wireless Personal Communications 4/2021

29.03.2021

Cross-Modality Breast Image Translation with Improved Resolution Using Generative Adversarial Networks

Authors: Akanksha Sharma, Neeru Jindal


Abstract

Unpaired cross-domain medical image translation is a challenging problem because the target image modality cannot be mapped directly from the input data distribution. The best approach to date has been the Cycle generative adversarial network (CycleGAN), which uses a cycle-consistency loss to perform the task. Although efficient, it produces result images that are small and blurry. Recent trends show that lifestyle changes and increased exposure to carcinogens in various forms have raised the incidence of cancer, and statistics indicate that roughly one in eight women will develop breast cancer at some stage of her life. Hence, this paper combines two GANs, CycleGAN and the super-resolution GAN (SRGAN), in two stages to obtain translated breast images with improved resolution. The proposed model is tested on images of breast cancer patients to estimate a CT scan from a PET scan and vice versa, so that patients are not exposed to an extremely potent dose of radiation. To verify the presence of a tumour in the estimated image, a simplified U-Net feature extractor is also used. Quantitative studies are carried out for both simulation stages to establish the efficiency of the proposed model.
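The two-stage design described in the abstract can be illustrated with a minimal PyTorch sketch: a CycleGAN-style generator pair trained with a cycle-consistency loss for unpaired PET-to-CT and CT-to-PET translation, whose translated output is then passed to an SRGAN-style generator for upscaling. This is not the authors' implementation; the tiny stand-in networks, the names G_pet2ct, G_ct2pet and sr_gen, and the image sizes are illustrative assumptions, and the adversarial and U-Net feature-consistency terms are omitted for brevity.

```python
# Sketch of the two-stage pipeline (illustrative only, not the paper's code).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class TinyGenerator(nn.Module):
    """Stand-in for a ResNet-based CycleGAN generator (1-channel grayscale slices)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class TinySRGenerator(nn.Module):
    """Stand-in for an SRGAN generator: 2x upsampling via sub-pixel convolution."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.PReLU(),
                                 nn.Conv2d(64, 4, 3, padding=1),
                                 nn.PixelShuffle(2), nn.Tanh())
    def forward(self, x):
        return self.net(x)

G_pet2ct, G_ct2pet, sr_gen = TinyGenerator(), TinyGenerator(), TinySRGenerator()
l1 = nn.L1Loss()

pet = torch.rand(1, 1, 128, 128)   # unpaired PET slice (illustrative size)
ct = torch.rand(1, 1, 128, 128)    # unpaired CT slice

# Stage 1: CycleGAN-style cycle-consistency loss (adversarial terms omitted).
fake_ct, fake_pet = G_pet2ct(pet), G_ct2pet(ct)
cycle_loss = l1(G_ct2pet(fake_ct), pet) + l1(G_pet2ct(fake_pet), ct)

# Stage 2: the translated image is fed to the super-resolution generator.
hr_fake_ct = sr_gen(fake_ct)        # 256x256 output
print(cycle_loss.item(), hr_fake_ct.shape)
```

In the full method, the stage-1 objective would also include adversarial losses from two discriminators, and the simplified U-Net feature extractor would be used to check that tumour features survive the translation before the stage-2 upscaling.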


Metadata
Title
Cross-Modality Breast Image Translation with Improved Resolution Using Generative Adversarial Networks
Authors
Akanksha Sharma
Neeru Jindal
Publication date
29.03.2021
Publisher
Springer US
Published in
Wireless Personal Communications / Issue 4/2021
Print ISSN: 0929-6212
Electronic ISSN: 1572-834X
DOI
https://doi.org/10.1007/s11277-021-08376-5
