
2020 | OriginalPaper | Chapter

Semi-supervised Learning for Instrument Detection with a Class Imbalanced Dataset

Authors: Jihun Yoon, Jiwon Lee, SungHyun Park, Woo Jin Hyung, Min-Kook Choi

Published in: Interpretable and Annotation-Efficient Learning for Medical Image Computing

Publisher: Springer International Publishing


Abstract

The automated recognition of surgical instruments in surgical videos is essential for the evaluation and analysis of surgery. Information about where instruments are located can support surgical assessment and intraoperative decision making. To localize surgical instruments, we trained an object detector on bounding box labels of the tools shown in surgical video. In this study, we propose a semi-supervised training method that addresses the class imbalance between surgical instruments, which makes the detectors difficult to train. First, we labeled initial bounding boxes of the surgical instruments in videos of 24 robotic gastrectomy cases for gastric cancer. Next, the trained instrument detector was run on the unlabeled videos, and new labels were added for the tools responsible for the class imbalance, guided by the label statistics of the previously labeled videos. We also generated labels by object tracking in the spatio-temporal domain to obtain accurate label information from the unlabeled videos in an automated manner. By tracking objects bidirectionally with a single-object tracker, we generated dense labels for the under-labeled instruments and thereby achieved improved instrument detection in a fully or semi-automated manner.
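
The pipeline described above can be summarized, at a very high level, as pseudo-labelling minority instrument classes with the trained detector and then densifying those labels by tracking each confident detection forward and backward in time. The sketch below (not the authors' released code) illustrates that loop; the `Detector.predict` interface, the tracker `init`/`update` interface, the class-imbalance ratio, the confidence threshold, and the tracking span are all illustrative assumptions.

```python
# Minimal sketch of the pseudo-labelling + bidirectional-tracking step.
# `detector` and `tracker_factory` are hypothetical placeholders for a trained
# instrument detector and a single-object tracker; thresholds are illustrative.
from collections import Counter
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

@dataclass
class Label:
    frame_idx: int
    cls: str
    box: Box

def minority_classes(labels: List[Label], ratio: float = 0.5) -> set:
    """Classes whose label count falls below `ratio` times the largest class count."""
    counts = Counter(l.cls for l in labels)
    if not counts:
        return set()
    max_count = max(counts.values())
    return {c for c, n in counts.items() if n < ratio * max_count}

def pseudo_label(detector, tracker_factory, frames, labeled: List[Label],
                 score_thr: float = 0.8, span: int = 15) -> List[Label]:
    """Add pseudo-labels for minority classes on unlabeled frames and densify
    them by bidirectional single-object tracking over +/- `span` frames."""
    rare = minority_classes(labeled)
    new_labels: List[Label] = []
    for t, frame in enumerate(frames):
        for cls, box, score in detector.predict(frame):      # hypothetical API
            if cls not in rare or score < score_thr:
                continue
            new_labels.append(Label(t, cls, box))
            # Propagate the confident detection forward and backward in time.
            for direction in (+1, -1):
                tracker = tracker_factory()                   # hypothetical API
                tracker.init(frame, box)
                for dt in range(1, span + 1):
                    idx = t + direction * dt
                    if not 0 <= idx < len(frames):
                        break
                    ok, tracked_box = tracker.update(frames[idx])
                    if not ok:
                        break
                    new_labels.append(Label(idx, cls, tracked_box))
    return new_labels
```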


Metadata
Title
Semi-supervised Learning for Instrument Detection with a Class Imbalanced Dataset
Authors
Jihun Yoon
Jiwon Lee
SungHyun Park
Woo Jin Hyung
Min-Kook Choi
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-61166-8_28
