
2020 | Original Paper | Book Chapter

Unified Cross-Modality Feature Disentangler for Unsupervised Multi-domain MRI Abdomen Organs Segmentation


Abstract

Our contribution is a unified cross-modality feature disentangling approach for multi-domain image translation and multiple organ segmentation. Using CT as the labeled source domain, our approach learns to segment multi-modal (T1-weighted and T2-weighted) MRI with no labeled data. Our approach uses a variational auto-encoder (VAE) to disentangle the image content from style. The VAE constrains the style feature encoding to match a universal prior (Gaussian) that is assumed to span the styles of all the source and target modalities. The extracted image style is converted into a latent style scaling code, which, together with the target domain code, modulates the generator to produce multi-modality images from the image content features. Finally, we introduce a joint distribution matching discriminator that combines the translated images with task-relevant segmentation probability maps to further constrain and regularize image-to-image (I2I) translations. We performed extensive comparisons to multiple state-of-the-art I2I translation and segmentation methods. Our approach resulted in the lowest average multi-domain image reconstruction error of 1.34 ± 0.04. Our approach produced an average Dice similarity coefficient (DSC) of 0.85 for T1w and 0.90 for T2w MRI for multi-organ segmentation, which was highly comparable to a fully supervised MRI multi-organ segmentation network (DSC of 0.86 for T1w and 0.90 for T2w MRI).
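The two core mechanisms named in the abstract, a VAE that pushes style codes of every domain toward one Gaussian prior, and a latent style scaling code that modulates generator features channel-wise, can be illustrated with a minimal NumPy toy. All names, dimensions, and the `tanh` gain mapping below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar, rng):
    # VAE reparameterization trick: sample style z ~ N(mu, sigma^2)
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def kl_to_standard_normal(mu, logvar):
    # KL(q(z|x) || N(0, I)): constrains the style codes of all source
    # and target modalities to the same universal Gaussian prior
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def style_scale(features, z_style, W):
    # latent style scaling: map the style code to per-channel gains
    # that modulate the generator's content feature maps
    gains = 1.0 + np.tanh(W @ z_style)       # shape (C,)
    return features * gains[:, None, None]   # shape (C, H, W)

# toy dimensions: 8-dim style code, 4-channel 16x16 content features
mu, logvar = np.zeros(8), np.zeros(8)
z = reparameterize(mu, logvar, rng)
feat = rng.standard_normal((4, 16, 16))
W = 0.1 * rng.standard_normal((4, 8))        # hypothetical scaling head
out = style_scale(feat, z, W)
```

In the full model, a different style code (or domain code) would yield different gains over the same content features, which is what lets one generator emit CT-like, T1w-like, or T2w-like images from a shared content representation.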


Metadata
Title
Unified Cross-Modality Feature Disentangler for Unsupervised Multi-domain MRI Abdomen Organs Segmentation
Authors
Jue Jiang
Harini Veeraraghavan
Copyright year
2020
DOI
https://doi.org/10.1007/978-3-030-59713-9_34