2019 | OriginalPaper | Chapter

Synthesising Images and Labels Between MR Sequence Types with CycleGAN

Authors: Eric Kerfoot, Esther Puyol-Antón, Bram Ruijsink, Rina Ariga, Ernesto Zacur, Pablo Lamata, Julia Schnabel

Published in: Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data

Publisher: Springer International Publishing


Abstract

Real-time (RT) sequences for cardiac magnetic resonance imaging (CMR) have recently been proposed as alternatives to standard cine CMR sequences for subjects who are unable to hold their breath or who suffer from arrhythmia. RT acquisitions during free breathing produce comparatively poor-quality images, a trade-off necessary to achieve the high temporal resolution RT imaging requires, and they are hence less suitable for the clinical assessment of cardiac function. We demonstrate the application of a CycleGAN architecture to train autoencoder networks that synthesise cine-like images from RT images and vice versa. Applying this conversion to real-time data produces clearer images with sharper distinctions between myocardial and surrounding tissues, giving clinicians a more precise means of visually inspecting subjects. Furthermore, applying the transformation to segmented cine data to produce pseudo-real-time images allows this label information to be transferred to the real-time image domain. We demonstrate the feasibility of this approach by training a U-net based architecture on these pseudo-real-time images, which can then effectively segment actual real-time images.
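The CycleGAN training described above hinges on a cycle-consistency term: translating an RT image to the cine domain and back should recover the original image, and likewise for cine images. As a minimal NumPy sketch of that objective (not the authors' implementation), the snippet below uses toy invertible linear maps in place of the two translation networks, so the cycle is exact and the loss is zero by construction; in the actual method, G and F would be the trained autoencoder generators and this term would be combined with adversarial losses.

```python
import numpy as np

def cycle_consistency_loss(G, F, x_rt, x_cine, lam=10.0):
    """L1 cycle loss: F(G(x)) should recover x, and G(F(y)) should recover y.

    G maps RT-domain images to the cine domain; F maps cine to RT.
    lam is the usual weighting of the cycle term against adversarial terms.
    """
    forward = np.abs(F(G(x_rt)) - x_rt).mean()       # RT -> cine -> RT
    backward = np.abs(G(F(x_cine)) - x_cine).mean()  # cine -> RT -> cine
    return lam * (forward + backward)

# Toy stand-ins for the two generators: exact inverses of each other,
# so the round trip reproduces the input and the loss is ~0.
G = lambda x: 2.0 * x + 1.0      # "RT -> cine"
F = lambda y: (y - 1.0) / 2.0    # "cine -> RT"

rt = np.random.rand(4, 64, 64)    # batch of fake RT frames
cine = np.random.rand(4, 64, 64)  # batch of fake cine frames
loss = cycle_consistency_loss(G, F, rt, cine)
```

With real (non-invertible) networks this loss is non-zero and is minimised jointly with the discriminators' adversarial losses, which is what lets label maps transferred through the cycle remain spatially aligned with the synthesised images.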
Metadata
DOI
https://doi.org/10.1007/978-3-030-33391-1_6
