2019 | OriginalPaper | Book Chapter

Deep Residual Learning for Instrument Segmentation in Robotic Surgery

Authors: Daniil Pakhomov, Vittal Premachandran, Max Allan, Mahdi Azizian, Nassir Navab

Published in: Machine Learning in Medical Imaging

Publisher: Springer International Publishing

Abstract

Detection, tracking, and pose estimation of surgical instruments provide critical information that can be used to correct inaccuracies in kinematic data in robotic-assisted surgery. Such information can serve various purposes, including the integration of pre- and intra-operative images into the endoscopic view. In some cases, automatic segmentation of surgical instruments is a crucial step towards full instrument pose estimation, but it can also be used on its own to improve user interaction with the robotic system. In our work we focus on binary instrument segmentation, where the objective is to label every pixel as either instrument or background, and on instrument part segmentation, where semantically distinct parts of the instrument are labeled. We improve upon previous work by leveraging recent techniques such as deep residual learning and dilated convolutions, and advance both binary segmentation and instrument part segmentation performance on the EndoVis 2017 Robotic Instruments dataset. The source code for the experiments reported in the paper has been made public (https://github.com/warmspringwinds/pytorch-segmentation-detection).
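
The approach combines a residual backbone with dilated convolutions so that a denser feature map is available for per-pixel classification. As a rough, minimal sketch of that idea (not the authors' actual model; their implementation lives in the linked repository), the following PyTorch code turns a torchvision ResNet-50 into a fully convolutional segmentation network by replacing the strides of the last two stages with dilation and bilinearly upsampling the output of a 1x1 classifier. The class name DilatedResNetSeg, the choice of ResNet-50, and the input size are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class DilatedResNetSeg(nn.Module):
    """Illustrative sketch: ResNet-50 backbone whose last two stages use
    dilated convolutions instead of strides (output stride 8), followed by
    a 1x1 classifier and bilinear upsampling to the input resolution."""

    def __init__(self, num_classes=2):
        super().__init__()
        # replace_stride_with_dilation converts the strides of layer3/layer4
        # into dilation, keeping the feature map at 1/8 of the input size.
        backbone = torchvision.models.resnet50(
            replace_stride_with_dilation=[False, True, True])
        # Drop the global average pooling and the fully connected layer.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.classifier = nn.Conv2d(2048, num_classes, kernel_size=1)

    def forward(self, x):
        size = x.shape[-2:]
        feats = self.features(x)         # (N, 2048, H/8, W/8)
        logits = self.classifier(feats)  # per-pixel class scores
        return F.interpolate(logits, size=size,
                             mode="bilinear", align_corners=False)


if __name__ == "__main__":
    # Binary segmentation: instrument vs. background.
    model = DilatedResNetSeg(num_classes=2)
    out = model(torch.randn(1, 3, 256, 320))
    print(out.shape)  # torch.Size([1, 2, 256, 320])
```

For instrument part segmentation, the same sketch applies with num_classes set to the number of semantic parts plus background, trained with a standard per-pixel cross-entropy loss.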

Footnotes
1
We follow the practice of previous work and use a simplified definition without mirroring and centering the filter [5].
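
For reference, the simplified 1-D form of dilated (atrous) convolution used in [5] can be written as

$$ y[i] = \sum_{k=1}^{K} x[i + r \cdot k]\, w[k], $$

where $x$ is the input signal, $w$ is a filter of length $K$, and $r$ is the dilation rate; $r = 1$ recovers standard (non-dilated) convolution, up to the omitted mirroring and centering of the filter.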
 
References
3.
Bhayani, S.B., Andriole, G.L.: Three-dimensional (3D) vision: does it improve laparoscopic skills? An assessment of a 3D head-mounted visualization system. Rev. Urol. 7(4), 211 (2005)
4.
Bouget, D., Benenson, R., Omran, M., Riffaud, L., Schiele, B., Jannin, P.: Detecting surgical tools by modelling local appearance and global shape. IEEE Trans. Med. Imaging 34(12), 2603–2617 (2015)
5.
Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. arXiv preprint arXiv:1606.00915 (2016)
7.
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
10.
Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440 (2015)
11.
Okamura, A.M.: Haptic feedback in robot-assisted minimally invasive surgery. Curr. Opin. Urol. 19(1), 102 (2009)
12.
Pezzementi, Z., Voros, S., Hager, G.D.: Articulated object tracking by rendering consistent appearance parts. In: 2009 IEEE International Conference on Robotics and Automation, ICRA 2009, pp. 3940–3947. IEEE (2009)
13.
Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. CoRR, abs/1505.04597 (2015)
14.
Speidel, S., Delles, M., Gutt, C., Dillmann, R.: Tracking of instruments in minimally invasive surgery for surgical skill analysis. In: Yang, G.-Z., Jiang, T.Z., Shen, D., Gu, L., Yang, J. (eds.) MIAR 2006. LNCS, vol. 4091, pp. 148–155. Springer, Heidelberg (2006). https://doi.org/10.1007/11812715_19
16.
Tonet, O., Ramesh, T.U., Megali, G., Dario, P.: Tracking endoscopic instruments without localizer: image analysis-based approach. Stud. Health Technol. Inform. 119, 544–549 (2005)
Metadata
Title
Deep Residual Learning for Instrument Segmentation in Robotic Surgery
Authors
Daniil Pakhomov
Vittal Premachandran
Max Allan
Mahdi Azizian
Nassir Navab
Copyright Year
2019
DOI
https://doi.org/10.1007/978-3-030-32692-0_65
