
24.03.2018 | S.I.: Deep Learning for Biomedical and Healthcare Applications

Convolutional neural network-based multimodal image fusion via similarity learning in the shearlet domain

Authors: Haithem Hermessi, Olfa Mourali, Ezzeddine Zagrouba

Published in: Neural Computing and Applications | Issue 7/2018

Abstract

Recently, deep learning has shown effectiveness in multimodal image fusion. In this paper, we propose a fusion method for CT and MR medical images based on a convolutional neural network (CNN) in the shearlet domain. We initialize the Siamese fully convolutional neural network with a pre-trained architecture learned from natural data; then, we train it with medical images in a transfer learning fashion. The training dataset consists of positive and negative patch pairs of shearlet coefficients. Examples are fed into the two-stream deep CNN to extract feature maps; similarity metric learning based on cross-correlation is then performed to learn the mapping between features. The logistic loss objective function is minimized with stochastic gradient descent. The fusion process starts by decomposing the source CT and MR images into several subimages with the non-subsampled shearlet transform. High-frequency subbands are fused based on the weighted normalized cross-correlation between feature maps produced by the extraction part of the CNN, while low-frequency coefficients are combined using local energy. The training and test datasets include pairs of pre-registered CT and MR images taken from the Harvard Medical School database. Visual analysis and objective assessment show that the proposed deep architecture delivers state-of-the-art performance. The experiments also exhibit the potential of the proposed CNN for multi-focus image fusion.
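
The two fusion rules summarized above can be illustrated compactly. The NumPy sketch below is illustrative only and is not the authors' implementation: the non-subsampled shearlet decomposition is assumed to have been computed elsewhere, `feat_ct` and `feat_mr` are hypothetical placeholders for the feature maps extracted by the Siamese CNN, and a single global weight stands in for the paper's weighted normalized cross-correlation rule.

```python
# Illustrative sketch of the two fusion rules described in the abstract
# (not the authors' code). Assumes the NSST subbands and CNN feature maps
# have already been computed; only NumPy/SciPy are used here.
import numpy as np
from scipy.ndimage import uniform_filter


def local_energy(band, win=3):
    """Local energy of a subband: mean of squared coefficients in a win x win window."""
    return uniform_filter(band ** 2, size=win)


def fuse_low(low_ct, low_mr, win=3):
    """Low-frequency rule: keep, per pixel, the coefficient with the larger local energy."""
    e_ct, e_mr = local_energy(low_ct, win), local_energy(low_mr, win)
    return np.where(e_ct >= e_mr, low_ct, low_mr)


def ncc_weight(feat_ct, feat_mr, eps=1e-8):
    """Normalized cross-correlation of two feature maps, rescaled from [-1, 1] to [0, 1]."""
    a = feat_ct - feat_ct.mean()
    b = feat_mr - feat_mr.mean()
    ncc = (a * b).sum() / (np.sqrt((a ** 2).sum() * (b ** 2).sum()) + eps)
    return 0.5 * (ncc + 1.0)


def fuse_high(high_ct, high_mr, feat_ct, feat_mr):
    """High-frequency rule (simplified): blend the subbands with a weight
    derived from the cross-correlation of the CNN feature maps."""
    w = ncc_weight(feat_ct, feat_mr)
    return w * high_ct + (1.0 - w) * high_mr


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    low_ct, low_mr = rng.standard_normal((2, 64, 64))
    high_ct, high_mr = rng.standard_normal((2, 64, 64))
    feat_ct, feat_mr = rng.standard_normal((2, 64, 64))  # stand-ins for CNN feature maps
    fused_low = fuse_low(low_ct, low_mr)
    fused_high = fuse_high(high_ct, high_mr, feat_ct, feat_mr)
    print(fused_low.shape, fused_high.shape)
```

In the actual method, the fused low- and high-frequency subbands would then be recombined by the inverse non-subsampled shearlet transform to obtain the fused image.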


Metadata
Title
Convolutional neural network-based multimodal image fusion via similarity learning in the shearlet domain
Authors
Haithem Hermessi
Olfa Mourali
Ezzeddine Zagrouba
Publication date
24.03.2018
Publisher
Springer London
Published in
Neural Computing and Applications / Issue 7/2018
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI
https://doi.org/10.1007/s00521-018-3441-1
