
2018 | Original Paper | Book Chapter

Cross-Modality Image Synthesis from Unpaired Data Using CycleGAN

Effects of Gradient Consistency Loss and Training Data Size

Authors: Yuta Hiasa, Yoshito Otake, Masaki Takao, Takumi Matsuoka, Kazuma Takashima, Aaron Carass, Jerry L. Prince, Nobuhiko Sugano, Yoshinobu Sato

Published in: Simulation and Synthesis in Medical Imaging

Publisher: Springer International Publishing


Abstract

CT is commonly used in orthopedic procedures. MRI is used along with CT to identify muscle structures and diagnose osteonecrosis because of its superior soft tissue contrast. However, MRI has poor contrast for bone structures. A corresponding CT would therefore be helpful, as bone boundaries are seen more clearly and CT has a standardized (i.e., Hounsfield) unit. We therefore aim at MR-to-CT synthesis. While CycleGAN has been successfully applied to unpaired CT and MR images of the head, those images show less variation in intensity pairs than images of the pelvic region, owing to the presence of joints and muscles. In this paper, we extended the CycleGAN approach by adding a gradient consistency loss to improve accuracy at the boundaries. We conducted two experiments. To evaluate image synthesis, we investigated the dependency of synthesis accuracy on (1) the amount of training data and (2) the incorporation of the gradient consistency loss. To demonstrate the applicability of our method, we also investigated segmentation accuracy on the synthesized images.
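The key addition described here is a gradient consistency term in the CycleGAN objective, intended to keep bone and muscle boundaries aligned between the input MR image and the synthesized CT. As an illustration only, the following Python (PyTorch-style) sketch shows one plausible formulation, assuming the term rewards a high normalized correlation between the image gradients of the input and of its synthesis; the Sobel-based gradients, function names, and the "1 - correlation" form are assumptions for this sketch, not the authors' exact implementation.

    import torch
    import torch.nn.functional as F

    def image_gradients(img):
        # Horizontal and vertical Sobel gradients of a batch of
        # single-channel images with shape [B, 1, H, W].
        kx = torch.tensor([[-1., 0., 1.],
                           [-2., 0., 2.],
                           [-1., 0., 1.]], device=img.device).view(1, 1, 3, 3)
        ky = kx.transpose(2, 3)
        gx = F.conv2d(img, kx, padding=1)
        gy = F.conv2d(img, ky, padding=1)
        return gx, gy

    def gradient_correlation(a, b, eps=1e-8):
        # Normalized cross correlation between two gradient maps,
        # computed per image and averaged over the batch.
        a = a - a.mean(dim=(2, 3), keepdim=True)
        b = b - b.mean(dim=(2, 3), keepdim=True)
        num = (a * b).sum(dim=(2, 3))
        den = torch.sqrt((a ** 2).sum(dim=(2, 3)) * (b ** 2).sum(dim=(2, 3)) + eps)
        return (num / den).mean()

    def gradient_consistency_loss(real, synth):
        # Penalize low gradient correlation between the input image and
        # its synthesized counterpart (0 when gradients match perfectly).
        rx, ry = image_gradients(real)
        sx, sy = image_gradients(synth)
        gc = 0.5 * (gradient_correlation(rx, sx) + gradient_correlation(ry, sy))
        return 1.0 - gc

In a CycleGAN training loop, such a term would typically be added to the adversarial and cycle-consistency losses with its own weighting hyperparameter, e.g. total_loss = adv_loss + lambda_cyc * cycle_loss + lambda_gc * gradient_consistency_loss(real_mr, fake_ct); lambda_gc here is a hypothetical name for that weight.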
Metadata
Title
Cross-Modality Image Synthesis from Unpaired Data Using CycleGAN
Authors
Yuta Hiasa
Yoshito Otake
Masaki Takao
Takumi Matsuoka
Kazuma Takashima
Aaron Carass
Jerry L. Prince
Nobuhiko Sugano
Yoshinobu Sato
Copyright year
2018
DOI
https://doi.org/10.1007/978-3-030-00536-8_4