
25-08-2023 | Original Article

MBRARN: multibranch residual attention reconstruction network for medical image fusion

Authors: Weihao Zhang, Yuting Lu, Haodong Zheng, Lei Yu

Published in: Medical & Biological Engineering & Computing | Issue 11/2023

Abstract

Medical image fusion aims to integrate complementary information from multimodal medical images and has been widely applied in medicine, for example in clinical diagnosis, pathology analysis, and examinations of healing progress. Feature extraction is a crucial step in the fusion task. To capture the significant information embedded in medical images, many deep learning-based algorithms have recently been proposed and have achieved good fusion results. However, most of them struggle to capture independent and underlying features, which leads to unsatisfactory fusion results. To address these issues, a multibranch residual attention reconstruction network (MBRARN) is proposed for the medical image fusion task. The proposed network consists of three main parts: feature extraction, feature fusion, and feature reconstruction. First, the input medical images are decomposed into three scales by an image pyramid operation and fed into the three branches of the network, respectively, so that both local detail and global structural information are captured. Then, convolutions with residual attention modules are designed, which not only enhance the salient captured features but also help the network converge quickly and stably. Finally, feature fusion is performed with the designed fusion strategy; for MRI-SPECT fusion, a new and more effective strategy based on the Euclidean norm, called the feature distance ratio (FDR), is designed for this step. Experimental results on the Harvard whole brain atlas dataset demonstrate that the proposed network achieves better results than several state-of-the-art medical image fusion algorithms in terms of both subjective and objective evaluation.
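To make the pipeline described above concrete, the sketch below (PyTorch) illustrates the three ingredients the abstract names: a three-scale image pyramid feeding separate branches, a convolutional block with residual attention, and a feature-distance-ratio style fusion weight derived from Euclidean feature norms. It is a minimal illustration rather than the authors' implementation: the layer widths, the average-pooling pyramid, and the exact norm-ratio formula used for FDR are assumptions.

```python
# Minimal sketch of the ideas in the abstract, NOT the published MBRARN code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualAttentionBlock(nn.Module):
    """Convolution block with a residual connection and a simple spatial attention gate."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # 1x1 convolution producing a per-pixel attention map in [0, 1]
        self.attention = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.body(x)
        mask = self.attention(features)
        return x + features * mask  # residual path keeps training stable


def image_pyramid(image: torch.Tensor, levels: int = 3):
    """Three-scale pyramid; each branch of the network receives one scale.
    Average pooling stands in for a proper blur-and-downsample step."""
    scales = [image]
    for _ in range(levels - 1):
        scales.append(F.avg_pool2d(scales[-1], kernel_size=2))
    return scales


def fdr_fusion(feat_mri: torch.Tensor, feat_spect: torch.Tensor) -> torch.Tensor:
    """Hypothetical feature-distance-ratio fusion: weight each modality by the
    ratio of its per-channel Euclidean (L2) feature norm (assumed formula)."""
    norm_a = feat_mri.flatten(2).norm(p=2, dim=2, keepdim=True).unsqueeze(-1)
    norm_b = feat_spect.flatten(2).norm(p=2, dim=2, keepdim=True).unsqueeze(-1)
    w_a = norm_a / (norm_a + norm_b + 1e-8)
    return w_a * feat_mri + (1.0 - w_a) * feat_spect


if __name__ == "__main__":
    scales = image_pyramid(torch.randn(1, 1, 256, 256))  # full, 1/2, 1/4 resolution
    print([s.shape[-1] for s in scales])                  # [256, 128, 64]

    block = ResidualAttentionBlock(channels=16)
    mri = torch.randn(1, 16, 64, 64)    # stand-in MRI feature map
    spect = torch.randn(1, 16, 64, 64)  # stand-in SPECT feature map
    fused = fdr_fusion(block(mri), block(spect))
    print(fused.shape)                  # torch.Size([1, 16, 64, 64])
```

In this reading of FDR, the modality whose features carry more energy (a larger L2 norm) receives a proportionally larger fusion weight, which matches the abstract's description of a Euclidean-norm-based ratio; the published formula may differ in detail.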

Metadata
Title: MBRARN: multibranch residual attention reconstruction network for medical image fusion
Authors: Weihao Zhang, Yuting Lu, Haodong Zheng, Lei Yu
Publication date: 25-08-2023
Publisher: Springer Berlin Heidelberg
Published in: Medical & Biological Engineering & Computing / Issue 11/2023
Print ISSN: 0140-0118
Electronic ISSN: 1741-0444
DOI: https://doi.org/10.1007/s11517-023-02902-2
