
2019 | Original Paper | Book Chapter

Synthesising Images and Labels Between MR Sequence Types with CycleGAN

Authors: Eric Kerfoot, Esther Puyol-Antón, Bram Ruijsink, Rina Ariga, Ernesto Zacur, Pablo Lamata, Julia Schnabel

Published in: Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data

Publisher: Springer International Publishing


Abstract

Real-time (RT) sequences for cardiac magnetic resonance (CMR) imaging have recently been proposed as alternatives to standard cine CMR sequences for subjects who are unable to hold their breath or who suffer from arrhythmia. RT acquisitions during free breathing produce comparatively poor-quality images, a trade-off necessary to achieve the high temporal resolution RT imaging requires, and are therefore less suitable for the clinical assessment of cardiac function. We demonstrate the application of a CycleGAN architecture to train autoencoder networks that synthesise cine-like images from RT images and vice versa. Applying this conversion to real-time data produces clearer images with sharper distinctions between myocardial and surrounding tissues, giving clinicians a more precise means of visually inspecting subjects. Furthermore, applying the transformation to segmented cine data to produce pseudo-real-time images allows label information to be transferred to the real-time image domain. We demonstrate the feasibility of this approach by training a U-net based architecture on these pseudo-real-time images, which can then effectively segment actual real-time images.
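Since this page reproduces only the abstract, the sketch below merely illustrates the cycle-consistent translation objective it describes: two generators map between the RT and cine domains, and each round trip is penalised for not reproducing the input. This is a minimal PyTorch sketch under stated assumptions; the tiny generator and discriminator stubs, the least-squares adversarial loss, and the cycle weight lambda_cyc = 10.0 are illustrative choices, not the authors' actual models or hyperparameters.

import torch
import torch.nn as nn

def make_generator():
    # Tiny stand-in for an encoder-decoder generator; the chapter trains
    # autoencoder networks whose exact architecture is not reproduced here.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
    )

def make_discriminator():
    # Patch-style discriminator stub producing a map of real/fake scores.
    return nn.Sequential(
        nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(16, 1, 4, stride=2, padding=1),
    )

G_rt2cine, G_cine2rt = make_generator(), make_generator()  # RT->cine, cine->RT
D_cine, D_rt = make_discriminator(), make_discriminator()

adv_loss, cyc_loss = nn.MSELoss(), nn.L1Loss()  # least-squares GAN + L1 cycle
lambda_cyc = 10.0  # assumed weight on the cycle-consistency term

def generator_loss(rt, cine):
    """Generator objective for one unpaired batch of RT and cine slices."""
    fake_cine, fake_rt = G_rt2cine(rt), G_cine2rt(cine)
    # Adversarial terms: each translated image should fool its discriminator.
    adv = (adv_loss(D_cine(fake_cine), torch.ones_like(D_cine(fake_cine)))
           + adv_loss(D_rt(fake_rt), torch.ones_like(D_rt(fake_rt))))
    # Cycle terms: translating to the other domain and back should recover
    # the original image, which is what makes unpaired training possible.
    cyc = (cyc_loss(G_cine2rt(fake_cine), rt)
           + cyc_loss(G_rt2cine(fake_rt), cine))
    return adv + lambda_cyc * cyc

# Usage with unpaired 2D slices from the two sequence types.
rt_batch, cine_batch = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
generator_loss(rt_batch, cine_batch).backward()

In the same spirit, the label-transfer step the abstract describes would apply G_cine2rt to segmented cine images, pair the resulting pseudo-real-time images with the existing cine labels, and use those pairs to train a segmentation network for the RT domain.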
Metadata
Title
Synthesising Images and Labels Between MR Sequence Types with CycleGAN
Authors
Eric Kerfoot
Esther Puyol-Antón
Bram Ruijsink
Rina Ariga
Ernesto Zacur
Pablo Lamata
Julia Schnabel
Copyright Year
2019
DOI
https://doi.org/10.1007/978-3-030-33391-1_6
