
2021 | Original paper | Book chapter

EndoUDA: A Modality Independent Segmentation Approach for Endoscopy Imaging

Authors: Numan Celik, Sharib Ali, Soumya Gupta, Barbara Braden, Jens Rittscher

Published in: Medical Image Computing and Computer Assisted Intervention – MICCAI 2021

Publisher: Springer International Publishing


Abstract

Gastrointestinal (GI) cancer precursors require frequent monitoring for risk stratification of patients. Automated segmentation methods can help to assess risk areas more accurately and can assist in therapeutic procedures or even removal. In clinical practice, complementary modalities such as narrow-band imaging (NBI) and fluorescence imaging are used in addition to conventional white-light imaging (WLI). While most segmentation approaches today are supervised and concentrate on a single-modality dataset, this work uses a target-independent unsupervised domain adaptation (UDA) technique that is capable of generalizing to an unseen target modality. In this context, we propose a novel UDA-based segmentation method that couples a variational autoencoder and a U-Net with a common EfficientNet-B4 backbone, and uses a joint loss for latent-space optimization on target samples. We show that our model can generalize to the unseen NBI (target) modality when trained using only the WLI (source) modality. Our experiments on both upper and lower GI endoscopy data show the effectiveness of our approach compared to a naive supervised approach and state-of-the-art UDA segmentation methods.
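The joint-loss idea described in the abstract (a segmentation term plus VAE terms optimized over a shared latent space) can be illustrated with a minimal sketch. This is not the authors' exact formulation; the loss weights `w_seg`, `w_rec`, `w_kl` and the choice of soft Dice for segmentation and mean-squared error for reconstruction are illustrative assumptions.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss for a binary segmentation mask (illustrative choice).
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def kl_divergence(mu, logvar):
    # KL(q(z|x) || N(0, I)) for a diagonal-Gaussian VAE posterior.
    return -0.5 * np.mean(1.0 + logvar - mu**2 - np.exp(logvar))

def joint_loss(pred_mask, true_mask, recon, image, mu, logvar,
               w_seg=1.0, w_rec=1.0, w_kl=0.1):
    # Weighted sum of segmentation, reconstruction, and KL terms;
    # the weights are hypothetical hyperparameters, not from the paper.
    l_seg = dice_loss(pred_mask, true_mask)
    l_rec = np.mean((recon - image) ** 2)   # VAE reconstruction term
    l_kl = kl_divergence(mu, logvar)        # latent-space regularizer
    return w_seg * l_seg + w_rec * l_rec + w_kl * l_kl
```

In the paper's setting, such a combined objective is what lets the shared encoder produce latent codes that serve both the U-Net segmentation head (on labeled WLI source images) and the VAE branch, whose latent space is then searched/optimized for unlabeled NBI target samples.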


Metadata
Title: EndoUDA: A Modality Independent Segmentation Approach for Endoscopy Imaging
Authors: Numan Celik, Sharib Ali, Soumya Gupta, Barbara Braden, Jens Rittscher
Copyright year: 2021
DOI: https://doi.org/10.1007/978-3-030-87199-4_29