Published in: International Journal of Computer Vision 10-11/2020

21.03.2020

Adversarial Confidence Learning for Medical Image Segmentation and Synthesis

Authors: Dong Nie, Dinggang Shen


Abstract

Generative adversarial networks (GANs) are widely used in medical image analysis tasks such as medical image segmentation and synthesis. In these works, adversarial learning is applied directly to the original supervised segmentation (synthesis) network. Adversarial learning is effective at improving visual perception because it acts as a realism regularizer for the supervised generator. However, the quantitative performance often does not improve as much as the qualitative performance, and in some cases it even degrades. In this paper, we explore how to take better advantage of adversarial learning in supervised segmentation (synthesis) models and propose an adversarial confidence learning framework to better model these problems. We analyze the roles of the discriminator in classic GANs and compare them with those in supervised adversarial systems. Based on this analysis, we propose adversarial confidence learning: besides using adversarial learning to emphasize visual perception, we use the confidence information provided by the adversarial network to enhance the design of the supervised segmentation (synthesis) network. In particular, we propose a fully convolutional adversarial network for confidence learning that provides voxel-wise and region-wise confidence information to the segmentation (synthesis) network. With these settings, we propose a difficulty-aware attention mechanism that handles hard samples or regions by taking structural information into account, so that we can better deal with the irregular distribution of medical data. Furthermore, we investigate the loss functions of various GANs and propose using the binary cross-entropy loss to train the proposed adversarial system, retaining the unlimited modeling capacity of the discriminator.
Experimental results on clinical and challenge datasets show that the proposed network achieves state-of-the-art segmentation (synthesis) accuracy. Further analysis also indicates that adversarial confidence learning improves both the visual perception performance and the quantitative performance.
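The voxel-wise confidence weighting described in the abstract can be illustrated with a minimal sketch. This is a hypothetical NumPy illustration, not the authors' implementation: `weighted_bce` and all parameter names are our own, and the discriminator's confidence map is taken as a given array rather than produced by a real network. The idea shown is only the difficulty-aware weighting itself: voxels that the adversarial network scores as less realistic (low confidence) receive larger loss weights in the supervised objective.

```python
import numpy as np

def weighted_bce(pred, target, confidence, eps=1e-7):
    """Binary cross-entropy weighted by per-voxel difficulty.

    pred       -- predicted foreground probabilities, in (0, 1)
    target     -- ground-truth labels, 0 or 1
    confidence -- discriminator's per-voxel realism score, in [0, 1]
    """
    # Difficulty-aware weighting: voxels the discriminator scores as
    # unrealistic (low confidence) are treated as hard and up-weighted.
    weight = 1.0 - confidence
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    bce = -(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
    return float(np.mean((1.0 + weight) * bce))

# Toy usage: identical predictions, but low confidence doubles the loss.
pred = np.array([0.6, 0.4])
target = np.array([1.0, 0.0])
easy = weighted_bce(pred, target, confidence=np.ones(2))   # weight factor 1
hard = weighted_bce(pred, target, confidence=np.zeros(2))  # weight factor 2
```

In this sketch a confidence of 1 everywhere reduces the loss to plain BCE, while a confidence of 0 everywhere doubles it, so gradient updates concentrate on regions the discriminator flags as hard.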


Metadata
Title
Adversarial Confidence Learning for Medical Image Segmentation and Synthesis
Authors
Dong Nie
Dinggang Shen
Publication date
21.03.2020
Publisher
Springer US
Published in
International Journal of Computer Vision / Issue 10-11/2020
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI
https://doi.org/10.1007/s11263-020-01321-2
