2021 | OriginalPaper | Chapter

Towards Robust General Medical Image Segmentation

Authors: Laura Daza, Juan C. Pérez, Pablo Arbeláez

Published in: Medical Image Computing and Computer Assisted Intervention – MICCAI 2021

Publisher: Springer International Publishing


Abstract

The reliability of Deep Learning systems depends not only on their accuracy but also on their robustness against adversarial perturbations to the input data. Several attacks and defenses have been proposed to improve the performance of Deep Neural Networks in the presence of adversarial noise in the natural image domain. However, robustness in computer-aided diagnosis for volumetric data has only been explored for specific tasks and with limited attacks. We propose a new framework to assess the robustness of general medical image segmentation systems. Our contributions are two-fold: (i) we propose a new benchmark to evaluate robustness in the context of the Medical Segmentation Decathlon (MSD) by extending the recent AutoAttack natural image classification framework to the domain of volumetric data segmentation, and (ii) we present a novel lattice architecture for RObust Generic medical image segmentation (ROG). Our results show that ROG is capable of generalizing across different tasks of the MSD and largely surpasses the state-of-the-art under sophisticated adversarial attacks.
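
As an informal illustration of the kind of adversarial perturbations the abstract refers to, the sketch below implements a standard L-infinity projected gradient descent (PGD) attack against a volumetric segmentation network in PyTorch. This is a minimal sketch under assumed names and hyper-parameters (model, pgd_attack, eps, alpha, steps); it is not the AutoAttack-based benchmark or the ROG architecture proposed in the chapter.

    # Illustrative sketch only: basic L-infinity PGD on a 3D segmentation model.
    # All names and hyper-parameters are placeholders, not the authors' setup.
    import torch
    import torch.nn.functional as F

    def pgd_attack(model, volume, target, eps=8/255, alpha=2/255, steps=10):
        """Return an adversarially perturbed copy of `volume` (shape N,C,D,H,W).

        `target` holds per-voxel class labels (shape N,D,H,W, dtype long).
        """
        adv = volume.clone().detach()
        # Random start inside the eps-ball around the clean volume.
        adv = adv + torch.empty_like(adv).uniform_(-eps, eps)
        for _ in range(steps):
            adv.requires_grad_(True)
            logits = model(adv)                     # per-voxel class logits
            loss = F.cross_entropy(logits, target)  # maximize segmentation loss
            grad = torch.autograd.grad(loss, adv)[0]
            with torch.no_grad():
                adv = adv + alpha * grad.sign()                  # gradient-sign ascent step
                adv = volume + (adv - volume).clamp(-eps, eps)   # project back into the eps-ball
            adv = adv.detach()
        return adv

A benchmark in the spirit of contribution (i) would then report segmentation quality (e.g., Dice scores) of the attacked model on such perturbed volumes across the MSD tasks and a range of perturbation budgets; the exact attack suite used in the chapter is the extended AutoAttack framework rather than this single PGD sketch.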

Metadata
Title
Towards Robust General Medical Image Segmentation
Authors
Laura Daza
Juan C. Pérez
Pablo Arbeláez
Copyright Year
2021
DOI
https://doi.org/10.1007/978-3-030-87199-4_1