Published in: Multimedia Systems 6/2023

16.07.2023 | Special Issue Paper

Learning intra-inter-modality complementary for brain tumor segmentation

Authors: Jiangpeng Zheng, Fan Shi, Meng Zhao, Chen Jia, Congcong Wang


Abstract

Multi-modal MRI has become a valuable tool in medical imaging for diagnosing and investigating brain tumors, as it provides complementary information across modalities. However, traditional UNet-based methods for multi-modal MRI segmentation typically fuse the modalities at an early or middle stage of the network, without modeling inter-modal feature fusion or dependencies. To address this, we propose CMMFNet (cross-modal multi-scale fusion network), which explores both intra-modality and inter-modality relationships for brain tumor segmentation. The network is built on a transformer-based multi-encoder, single-decoder structure that performs nested multi-modal fusion on the high-level representations of the different modalities. In addition, CMMFNet uses a focusing mechanism that extracts larger receptive fields at the low-level scale and connects them effectively to the decoding layers. The multi-modal feature fusion module nests modality-aware feature aggregation, and the multi-modal features are fused more effectively through long-range dependencies modeled within each modality by the self-attention layers and across modalities by the cross-attention layers. Experiments show that CMMFNet outperforms state-of-the-art methods in brain tumor segmentation on the BraTS2020 benchmark dataset.
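To make the fusion pattern concrete, the following minimal PyTorch sketch shows one way the described intra-/inter-modality scheme can be arranged: self-attention within each modality's token sequence, cross-attention between modalities, and a modality-aware aggregation of the two streams. This is an illustrative assumption, not the authors' CMMFNet implementation; the module and parameter names (ModalityFusionBlock, dim, num_heads) are hypothetical, and the actual network additionally uses a multi-encoder, single-decoder transformer backbone and a low-level focusing mechanism not shown here.

```python
# Hedged sketch (not the authors' code): per-modality self-attention followed by
# cross-attention fusion, illustrating the intra-/inter-modality idea from the abstract.
import torch
import torch.nn as nn


class ModalityFusionBlock(nn.Module):
    """Fuses token sequences from two MRI modalities.

    Each modality first models long-range dependencies within itself
    (self-attention), then queries the other modality (cross-attention).
    """

    def __init__(self, dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.self_attn_a = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.self_attn_b = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_a = nn.LayerNorm(dim)
        self.norm_b = nn.LayerNorm(dim)
        self.proj = nn.Linear(2 * dim, dim)  # aggregate the two fused streams

    def forward(self, tok_a: torch.Tensor, tok_b: torch.Tensor) -> torch.Tensor:
        # Intra-modality: self-attention within each modality's tokens.
        a = self.norm_a(tok_a + self.self_attn_a(tok_a, tok_a, tok_a)[0])
        b = self.norm_b(tok_b + self.self_attn_b(tok_b, tok_b, tok_b)[0])
        # Inter-modality: modality A queries modality B, and vice versa.
        a2b = self.cross_attn(a, b, b)[0]
        b2a = self.cross_attn(b, a, a)[0]
        # Modality-aware aggregation of both directions into one feature map.
        return self.proj(torch.cat([a + a2b, b + b2a], dim=-1))


if __name__ == "__main__":
    # Toy example: 2 volumes, 64 tokens per modality, 128-dim features.
    block = ModalityFusionBlock(dim=128, num_heads=4)
    t1, flair = torch.randn(2, 64, 128), torch.randn(2, 64, 128)
    print(block(t1, flair).shape)  # torch.Size([2, 64, 128])
```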


Metadata
Title
Learning intra-inter-modality complementary for brain tumor segmentation
Authors
Jiangpeng Zheng
Fan Shi
Meng Zhao
Chen Jia
Congcong Wang
Publication date
16.07.2023
Publisher
Springer Berlin Heidelberg
Published in
Multimedia Systems / Issue 6/2023
Print ISSN: 0942-4962
Electronic ISSN: 1432-1882
DOI
https://doi.org/10.1007/s00530-023-01138-2
