
A Review on Deep Learning in Medical Image Reconstruction

Published in: Journal of the Operations Research Society of China

Abstract

Medical imaging is crucial in modern clinics for guiding the diagnosis and treatment of diseases. Medical image reconstruction is one of the most fundamental and important components of medical imaging, whose major objective is to acquire high-quality medical images for clinical use at minimal cost and risk to the patients. Mathematical models have long played a prominent role in medical image reconstruction and, more generally, in image restoration in computer vision. Earlier mathematical models were mostly designed from human knowledge or hypotheses about the image to be reconstructed; we shall call these handcrafted models. Later, handcrafted-plus-data-driven modeling started to emerge, which still relies mostly on human designs, while part of the model is learned from the observed data. More recently, as more data and computational resources have become available, deep-learning-based models (or deep models) have pushed data-driven modeling to the extreme, where the models are mostly based on learning with minimal human design. Both handcrafted and data-driven modeling have their own advantages and disadvantages. Typical handcrafted models are well interpretable, with solid theoretical support for robustness, recoverability, complexity, etc., whereas they may not be flexible or sophisticated enough to fully leverage large data sets. Data-driven models, especially deep models, on the other hand, are generally much more flexible and effective at extracting useful information from large data sets, while they currently still lack theoretical foundations. Therefore, one of the major research trends in medical imaging is to combine handcrafted modeling with deep modeling so that we can enjoy the benefits of both approaches. The major part of this article provides a conceptual review of some recent works on deep modeling from the unrolling-dynamics viewpoint. This viewpoint stimulates new designs of neural network architectures with inspiration from optimization algorithms and numerical differential equations. Given the popularity of deep modeling, vast challenges remain in the field, as well as opportunities, which we shall discuss at the end of this article.
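
To make the unrolling-dynamics viewpoint concrete, below is a minimal sketch of an unrolled sparse-coding network in the spirit of learned ISTA (LISTA, Gregor and LeCun 2010): each layer mimics one ISTA iteration for the LASSO model min_x (1/2)||Ax - y||^2 + lambda||x||_1, but the update operators and thresholds are learned from data. This is an illustrative sketch, not one of the specific networks reviewed in this article; the names UnrolledISTA, n_layers, and theta are assumptions made here.

```python
# Illustrative sketch of algorithm unrolling (LISTA-style); names are hypothetical.
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    def __init__(self, m, n, n_layers=10):
        super().__init__()
        # One learned update operator and one threshold per unrolled iteration.
        self.W = nn.ModuleList(nn.Linear(m, n, bias=False) for _ in range(n_layers))
        self.theta = nn.Parameter(torch.full((n_layers,), 0.1))

    def forward(self, y, A):
        # y: (batch, m) measurements; A: (m, n) known forward model (the physics).
        x = y.new_zeros(y.shape[0], A.shape[1])
        for Wk, theta in zip(self.W, self.theta):
            r = x @ A.T - y   # handcrafted part: data-fidelity residual A x - y
            z = x - Wk(r)     # learned part: gradient-like update direction
            x = torch.sign(z) * torch.clamp(z.abs() - theta, min=0.0)  # soft threshold
        return x
```

The forward operator A is kept from the imaging physics (the handcrafted part), while the per-layer operators W_k and thresholds theta_k are learned (the data-driven part), which is precisely the combination of handcrafted and deep modeling advocated above. The same viewpoint connects architectures to numerical differential equations: a residual block computes x_{k+1} = x_k + h f(x_k), i.e., one forward-Euler step of the ODE dx/dt = f(x), and other solvers suggest other architectures. A hedged sketch follows, with EulerBlock and the step size h as illustrative assumptions:

```python
# Illustrative sketch of the ResNet-as-forward-Euler view; names are hypothetical.
import torch.nn as nn

class EulerBlock(nn.Module):
    def __init__(self, channels, h=1.0):
        super().__init__()
        self.h = h  # step size of the forward-Euler discretization
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.h * self.f(x)  # x_{k+1} = x_k + h * f(x_k)
```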



Author information

Corresponding author: Bin Dong.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

The work of Hai-Miao Zhang was funded by the China Postdoctoral Science Foundation (No. 2018M641056). The work of Bin Dong was supported in part by the National Natural Science Foundation of China (No. 11831002) and the Natural Science Foundation of Beijing (No. Z180001).


Cite this article

Zhang, HM., Dong, B. A Review on Deep Learning in Medical Image Reconstruction. J. Oper. Res. Soc. China 8, 311–340 (2020). https://doi.org/10.1007/s40305-019-00287-4

