2019 | OriginalPaper | Chapter

Temporal Consistency Objectives Regularize the Learning of Disentangled Representations

Authors: Gabriele Valvano, Agisilaos Chartsias, Andrea Leo, Sotirios A. Tsaftaris

Published in: Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data

Publisher: Springer International Publishing

Abstract

There has been increasing focus on learning interpretable feature representations, particularly in applications such as medical image analysis that require explainability, whilst relying less on annotated data (since annotations can be tedious and costly). Here we build on recent innovations in style-content representations to learn anatomy, imaging characteristics (appearance) and temporal correlations. By introducing a self-supervised objective of predicting future cardiac phases, we improve disentanglement. We propose a temporal transformer architecture that, given an image and conditioned on the phase difference, predicts a future frame. This forces the anatomical decomposition to be consistent with the temporal cardiac contraction in cine MRI and to have semantic meaning with less need for annotations. We demonstrate that with this regularization we achieve competitive results and improve semi-supervised segmentation, especially when very few labelled data are available. Specifically, we show Dice increases of up to 19% and 7% over supervised and semi-supervised approaches respectively on the ACDC dataset. Code is available at: https://github.com/gvalvano/sdtnet.
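
To make the abstract's mechanism concrete, the sketch below is a minimal, hypothetical PyTorch rendition of a temporal consistency objective on a disentangled anatomy factor. The module names (AnatomyEncoder, TemporalTransformer) and the loss wiring are our assumptions for illustration, not the authors' API; the reference implementation lives at https://github.com/gvalvano/sdtnet. The idea: an encoder extracts a spatial anatomy factor, a transformer network conditioned on the phase difference predicts the anatomy of a future frame, and a self-supervised loss ties the prediction to the encoded future frame.

```python
# Minimal sketch (assumed, not the authors' code) of a temporal
# consistency objective on a disentangled anatomy factor.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnatomyEncoder(nn.Module):
    """Maps an image to a spatial, channel-wise 'anatomy' factor."""
    def __init__(self, n_channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_channels, 3, padding=1),
        )

    def forward(self, x):
        # Softmax over channels yields a near-categorical spatial factor.
        return F.softmax(self.net(x), dim=1)

class TemporalTransformer(nn.Module):
    """Predicts the anatomy factor of a future frame, conditioned on the
    (scalar) phase difference between the two frames."""
    def __init__(self, n_channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_channels + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_channels, 3, padding=1),
        )

    def forward(self, s_t, dt):
        # Broadcast the phase difference into an extra conditioning channel.
        b, _, h, w = s_t.shape
        dt_map = dt.view(b, 1, 1, 1).expand(b, 1, h, w)
        return F.softmax(self.net(torch.cat([s_t, dt_map], dim=1)), dim=1)

def temporal_consistency_loss(enc, trans, x_t, x_future, dt):
    """Self-supervised objective: the transformed anatomy of frame t
    should match the encoded anatomy of the future frame."""
    s_pred = trans(enc(x_t), dt)
    return F.mse_loss(s_pred, enc(x_future))

# Usage with dummy cine-MRI-like tensors:
enc, trans = AnatomyEncoder(), TemporalTransformer()
x_t = torch.randn(4, 1, 64, 64)    # frames at phase t
x_fut = torch.randn(4, 1, 64, 64)  # frames at phase t + dt
dt = torch.rand(4)                 # normalized phase differences
loss = temporal_consistency_loss(enc, trans, x_t, x_fut, dt)
loss.backward()
```

Broadcasting the scalar phase difference into an extra input channel is one simple conditioning choice; the essential point is that the objective needs no labels, only temporally ordered frames, which is why it can regularize semi-supervised segmentation when annotations are scarce.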
Metadata
Title
Temporal Consistency Objectives Regularize the Learning of Disentangled Representations
Authors
Gabriele Valvano
Agisilaos Chartsias
Andrea Leo
Sotirios A. Tsaftaris
Copyright Year
2019
DOI
https://doi.org/10.1007/978-3-030-33391-1_2
