AI in Surgical Robotics

  • Living reference work entry
Artificial Intelligence in Medicine

Abstract

The future of surgery is tightly bound to the evolution of artificial intelligence (AI) and its deepening role in surgical robotics. Robotics long ago became integral to manufacturing, but healthcare adds several further layers of complexity. In this chapter we examine the broad range of issues that arise when a robotic system enters the surgical theater and interacts with human surgeons, from overcoming the limitations of minimally invasive surgery to enhancing performance in open surgery. We present the latest advances in cognitive surgical robots, focusing on proprioception, intraoperative decision-making, and, ultimately, autonomy. Specifically, we discuss how AI has advanced surgical tool tracking, haptic feedback and tissue-interaction sensing, advanced intraoperative visualization, and robot-assisted task execution, concluding with the crucial development of context-aware decision support.




Author information

Correspondence to Samyakh Tukra.


Copyright information

© 2021 Springer Nature Switzerland AG

About this entry


Cite this entry

Tukra, S., Lidströmer, N., Ashrafian, H., Giannarou, S. (2021). AI in Surgical Robotics. In: Lidströmer, N., Ashrafian, H. (eds) Artificial Intelligence in Medicine. Springer, Cham. https://doi.org/10.1007/978-3-030-58080-3_323-1

  • DOI: https://doi.org/10.1007/978-3-030-58080-3_323-1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58080-3

  • Online ISBN: 978-3-030-58080-3

  • eBook Packages: Springer Reference Medicine, Reference Module Medicine
