
2024 | Original Paper | Book Chapter

Fast and Efficient Brain Extraction with Recursive MLP Based 3D UNet

Authors: Guoqing Shangguan, Hao Xiong, Dong Liu, Hualei Shen

Published in: Neural Information Processing

Publisher: Springer Nature Singapore


Abstract

Extracting the brain from surrounding non-brain tissue is an essential step in neuroimage analyses such as brain volume estimation. Transformer- and 3D UNet-based methods achieve strong performance using attention and 3D convolutions, but they typically have complex architectures and are therefore computationally slow, so they can hardly be deployed in resource-constrained environments. To achieve rapid segmentation, the recent UNeXt reduces the number of convolution filters and introduces Multilayer Perceptron (MLP) blocks built on simpler, linear MLP operations. To further boost performance, it shifts the feature channels in its MLP blocks so as to focus on learning local dependencies. However, it segments 2D medical images rather than 3D volumes. In this paper, we propose a recursive MLP based 3D UNet that efficiently extracts the brain from a 3D head volume. Our network combines 3D convolution blocks and MLP blocks to capture both long-range information and local dependencies, while leveraging the simplicity of MLPs for computational efficiency. Unlike UNeXt, which extracts a single locality, we apply several shifts to capture multiple localities representing different local dependencies, and we introduce a recursive design to aggregate them. To save computational cost, the shifts introduce no parameters and the MLP parameters are shared across recursions. Extensive experiments on two public datasets demonstrate the superiority of our approach over other state-of-the-art methods in both accuracy and CPU inference time.
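
To make the shift-and-recurse idea concrete, below is a minimal PyTorch sketch of a 3D shifted-MLP block with recursive aggregation. It is not the authors' released implementation: the class name ShiftedMLP3D, the particular shift offsets, the recursion depth, and the residual form of the aggregation are illustrative assumptions, chosen only to show how parameter-free channel shifts and weight sharing across recursions can be combined.

    # Sketch only: illustrates parameter-free shifts + shared-weight recursion,
    # not the paper's exact architecture.
    import torch
    import torch.nn as nn

    class ShiftedMLP3D(nn.Module):
        def __init__(self, channels: int, shifts=(-2, -1, 1, 2), recursions: int = 3):
            super().__init__()
            self.shifts = shifts          # parameter-free channel-group shifts (assumed offsets)
            self.recursions = recursions  # recursion depth; the layers below are reused every pass
            self.norm = nn.LayerNorm(channels)
            self.mlp = nn.Sequential(nn.Linear(channels, channels), nn.GELU(),
                                     nn.Linear(channels, channels))

        def _shift(self, x: torch.Tensor, dim: int) -> torch.Tensor:
            # Split channels into groups and roll each group by a different offset
            # along one spatial axis, so each group sees a different locality.
            groups = torch.chunk(x, len(self.shifts), dim=1)
            rolled = [torch.roll(g, s, dims=dim) for g, s in zip(groups, self.shifts)]
            return torch.cat(rolled, dim=1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, channels, D, H, W)
            out = x
            for _ in range(self.recursions):        # same weights reused across recursions
                shifted = self._shift(out, dim=2)   # shift along the depth axis (one locality)
                tokens = shifted.permute(0, 2, 3, 4, 1)    # channels last for LayerNorm/Linear
                tokens = self.mlp(self.norm(tokens))
                out = out + tokens.permute(0, 4, 1, 2, 3)  # residual aggregation of localities
            return out

    # Example: a feature volume from one 3D UNet encoder stage.
    feats = torch.randn(1, 32, 16, 32, 32)
    block = ShiftedMLP3D(channels=32)
    print(block(feats).shape)  # torch.Size([1, 32, 16, 32, 32])

Because the same linear layers are reused in every recursion, the parameter count stays that of a single MLP block, while each additional pass aggregates information from a further-shifted locality.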


Metadata
Title
Fast and Efficient Brain Extraction with Recursive MLP Based 3D UNet
Authors
Guoqing Shangguan
Hao Xiong
Dong Liu
Hualei Shen
Copyright Year
2024
Publisher
Springer Nature Singapore
DOI
https://doi.org/10.1007/978-981-99-8067-3_43