
2012 | Book

Information Processing in Computer-Assisted Interventions

Third International Conference, IPCAI 2012, Pisa, Italy, June 27, 2012. Proceedings

Editors: Purang Abolmaesumi, Leo Joskowicz, Nassir Navab, Pierre Jannin

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the proceedings of the Third International Conference on Information Processing in Computer-Assisted Interventions, IPCAI 2012, held in Pisa, Italy, on June 27, 2012. The 17 papers presented were carefully reviewed and selected from 31 submissions during two rounds of reviewing and improvement. The papers present novel technical concepts, clinical needs and applications, as well as hardware, software, and systems, together with their validation. The main technological focus is on patient-specific modeling and its use in interventions, image-guided and robotic surgery, and real-time tracking and imaging.

Table of Contents

Frontmatter

Laparoscopic Surgery

Template-Based Conformal Shape-from-Motion-and-Shading for Laparoscopy
Abstract
Shape-from-Shading (SfS) is one of the fundamental techniques to recover depth from a single view. Such a method has shown encouraging but limited results in laparoscopic surgery due to the complex reflectance properties of the organ tissues. On the other hand, Template-Based Deformable-Shape-from-Motion (DSfM) has been recently used to recover a coarse 3D shape in laparoscopy.
We propose to combine both geometric and photometric cues to robustly reconstruct 3D human organs. Our method is dubbed Deformable-Shape-from-Motion-and-Shading (DSfMS). It tackles the limits of classical SfS and DSfM methods: First, the photometric template is reconstructed using rigid SfM (Shape-from-Motion) while the surgeon is exploring, but not deforming, the peritoneal environment. Second, a rough 3D deformed shape is computed using a recent method for elastic surface reconstruction from a single laparoscopic image. Third, a fine 3D deformed shape is recovered using shading and specularities.
The proposed approach has been validated on both synthetic data and in-vivo laparoscopic videos of a uterus. Experimental results illustrate its effectiveness compared to SfS and DSfM.
Abed Malti, Adrien Bartoli, Toby Collins
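The coarse-then-fine idea behind DSfMS can be illustrated with a toy example. The sketch below is not the authors' algorithm: it is a 1D Lambertian model (assumed light along the viewing axis, unit albedo) showing how shading recovers the slope magnitude while a coarse motion-based shape resolves the sign ambiguity that shading alone cannot.

```python
import numpy as np

# 1D Lambertian toy: the image encodes slope magnitude but not its sign.
def shading_from_depth(z):
    return 1.0 / np.sqrt(1.0 + np.gradient(z) ** 2)  # I = n_z for frontal light

def refine_with_shading(coarse_z, image):
    slope_mag = np.sqrt(np.clip(1.0 / image**2 - 1.0, 0.0, None))
    sign = np.sign(np.gradient(coarse_z))      # coarse shape resolves the sign
    fine = np.cumsum(sign * slope_mag)         # integrate the slope field
    return fine - fine.mean() + coarse_z.mean()  # anchor the depth offset

x = np.linspace(0.0, 2.0 * np.pi, 400)
true_z = np.sin(x)                   # "fine" deformed surface
coarse_z = 0.5 * np.sin(x)           # coarse estimate: right shape, wrong amplitude
image = shading_from_depth(true_z)   # observed shading
refined = refine_with_shading(coarse_z, image)
```

The refined profile recovers the amplitude the coarse estimate missed, because the shading cue carries the high-frequency slope information.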
Towards Live Monocular 3D Laparoscopy Using Shading and Specularity Information
Abstract
We present steps toward the first real-time system for computing and visualising 3D surfaces viewed in live monocular laparoscopy video. Our method estimates 3D shape from shading and specularity information, and seeks to push current Shape-from-Shading (SfS) boundaries towards practical, reliable reconstruction. We present an accurate method to model any laparoscope's light source, and a highly parallelised SfS algorithm that outperforms the fastest current method. We give details of its GPU implementation, which facilitates real-time performance at an average frame rate of 23 fps. Our system also incorporates live 3D visualisation with virtual stereoscopic synthesis. We have evaluated the system on real laparoscopic data with ground truth, and we present the successful in-vivo reconstruction of the human uterus. We nevertheless conclude that the shading cue alone is insufficient to reliably handle arbitrary laparoscopic images.
Toby Collins, Adrien Bartoli
Automatic Detection and Localization of da Vinci Tool Tips in 3D Ultrasound
Abstract
Radical prostatectomy (RP) is viewed by many as the gold-standard treatment for clinically localized prostate cancer. State-of-the-art radical prostatectomy involves the da Vinci surgical system, a laparoscopic robot that provides the surgeon with excellent 3D visualization of the surgical site and improved dexterity over standard laparoscopic instruments. Given the limited field of view of the surgical site in Robot-Assisted Laparoscopic Radical Prostatectomy (RALRP), several groups have proposed integrating Transrectal Ultrasound (TRUS) imaging into the surgical workflow to assist with the resection of the prostate and the sparing of the Neuro-Vascular Bundle (NVB). Rapid and automatic registration of TRUS imaging coordinates to the da Vinci tools or camera is a critical component of this integration. We propose a fully automatic registration technique based on accurate and automatic localization, in 3D TRUS, of robot tool tips pressed against the air-tissue boundary of the prostate. The detection approach uses a multi-scale filtering technique to uniquely identify and localize the tool tip in the ultrasound volume, and could also be used to detect other surface fiducials in 3D ultrasound. Feasibility experiments using a phantom and two ex vivo tissue samples yield promising results, with a target registration error (defined as the root mean square distance of corresponding points after registration) of 1.80 mm, demonstrating the system's accuracy for registering 3D TRUS to the da Vinci surgical system.
Omid Mohareri, Mahdi Ramezani, Troy Adebar, Purang Abolmaesumi, Septimiu Salcudean
Real-Time Methods for Long-Term Tissue Feature Tracking in Endoscopic Scenes
Abstract
Salient feature tracking for endoscopic images has been investigated in the past for 3D reconstruction of endoscopic scenes as well as for tracking tissue through a video sequence. Recent work in the field has shown success in acquiring dense salient feature profiling of the scene. However, there has been relatively little work on long-term feature tracking for capturing tissue deformation. In addition, real-time solutions for tracking tissue features yield sparse feature densities, rely on restrictive scene and camera assumptions, or are limited in feature distinctiveness. In this paper, we develop a novel framework to enable long-term tracking of image features. We implement two fast and robust feature algorithms, STAR and BRIEF, for application to endoscopic images. We show that we are able to acquire dense sets of salient features at real-time speeds, and to track their positions for long periods of time.
Michael C. Yip, David G. Lowe, Septimiu E. Salcudean, Robert N. Rohling, Christopher Y. Nguan
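BRIEF descriptors are binary strings, so the core matching step in a tracker like this reduces to nearest-neighbour search under Hamming distance. The following numpy sketch (illustrative only; the sizes, noise level, and index choices are invented for the example) shows why this is fast and robust to a few flipped bits:

```python
import numpy as np

def hamming_match(queries, database):
    """Nearest-neighbour matching of binary (BRIEF-style) descriptors by
    Hamming distance: count differing bits, take the argmin."""
    dist = (queries[:, None, :] != database[None, :, :]).sum(axis=2)
    return dist.argmin(axis=1), dist.min(axis=1)

rng = np.random.default_rng(8)
db = rng.integers(0, 2, size=(100, 256), dtype=np.uint8)       # tracked features
probes = db[[3, 42, 7]].copy()                                  # re-detections
probes ^= (rng.random(probes.shape) < 0.05).astype(np.uint8)    # 5% bit noise
idx, dist = hamming_match(probes, db)
```

Even with 5% of the 256 bits corrupted, the true match (expected distance ≈ 13) is far closer than any unrelated descriptor (expected distance ≈ 128), which is what makes long-term re-association of features feasible.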

Imaging and Tracking

A Closed-Form Differential Formulation for Ultrasound Spatial Calibration
Abstract
Purpose: Calibration is essential in tracked freehand 3D ultrasound (US) to find the spatial transformation from the image coordinates to the reference coordinate system. Calibration accuracy is especially important in image-guided interventions. Method: We introduce a new mathematical framework that substantially improves the US calibration accuracy. It achieves this by using accurate measurements of axial differences in 1D US signals of a multi-wedge phantom, in a closed-form solution. Results: We report a point reconstruction accuracy of 0.3 mm using 300 independent measurements. Conclusion: The measured calibration errors are significantly lower than currently reported calibration accuracies.
Mohammad Najafi, Narges Afsham, Purang Abolmaesumi, Robert Rohling
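The paper's multi-wedge differential formulation is not reproduced here, but the general appeal of closed-form calibration can be illustrated with the classic closed-form rigid registration (Kabsch/Procrustes): given point correspondences, the optimal rotation and translation follow from one SVD, with no iterative optimization or initial guess. A minimal sketch:

```python
import numpy as np

def rigid_fit(P, Q):
    """Closed-form least-squares rigid transform (Kabsch/Procrustes):
    returns R, t minimising ||R @ P + t - Q|| over 3xN point sets."""
    Pc, Qc = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - Pc) @ (Q - Qc).T                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Qc - R @ Pc
    return R, t

rng = np.random.default_rng(1)
R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(R_true) < 0:
    R_true = -R_true                           # force a proper rotation
t_true = rng.standard_normal((3, 1))
P = rng.standard_normal((3, 25))               # points in image coordinates
Q = R_true @ P + t_true                        # same points in tracker coordinates
R, t = rigid_fit(P, Q)
```

With noise-free correspondences the transform is recovered to machine precision, which is the property that makes closed-form solutions attractive for calibration.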
Model-Based Respiratory Motion Compensation in MRgHIFU
Abstract
Magnetic Resonance guided High Intensity Focused Ultrasound (MRgHIFU) is an emerging non-invasive technology for the treatment of pathological tissue. The possibility of depositing sharply localised energy deep within the body without affecting the surrounding tissue requires exact knowledge of the target's position. Cyclic respiratory organ motion renders targeting challenging, as the treatment focus has to be continuously adapted to the target's current displacement in 3D space. In this paper, a combination of a patient-specific dynamic breathing model and a population-based statistical motion model is used to compensate for respiration-induced organ motion. The population-based statistical motion model replaces the acquisition of a patient-specific 3D motion model, while still allowing for precise motion compensation.
Patrik Arnold, Frank Preiswerk, Beat Fasel, Rares Salomir, Klaus Scheffler, Philippe Cattin
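Population-based statistical motion models of this kind are commonly built with principal component analysis over training motion fields; the sketch below is a generic PCA illustration under invented toy data, not the authors' model. A new subject's motion is expressed in the dominant modes learned from the population:

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy population: each training sample is a stacked vector of landmark
# displacements driven by two latent breathing modes plus noise.
n_subjects, n_dims = 40, 30
modes_true = rng.standard_normal((2, n_dims))
weights = rng.standard_normal((n_subjects, 2))
training = weights @ modes_true + 0.01 * rng.standard_normal((n_subjects, n_dims))

mean = training.mean(axis=0)
_, _, Vt = np.linalg.svd(training - mean, full_matrices=False)
modes = Vt[:2]                                     # dominant motion modes (PCA)

def model_reconstruct(sample, k=2):
    """Project a new motion sample onto the population model's k modes."""
    c = (sample - mean) @ modes[:k].T
    return mean + c @ modes[:k]

observed = 1.5 * modes_true[0] - 0.7 * modes_true[1]   # unseen subject's motion
recon = model_reconstruct(observed)
```

Because the unseen motion lies close to the population subspace, a handful of mode coefficients suffices to describe it, which is what lets a population model stand in for a patient-specific 3D acquisition.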
Non-iterative Multi-modal Partial View to Full View Image Registration Using Local Phase-Based Image Projections
Abstract
Accurate registration of patient anatomy, obtained from intra-operative ultrasound (US) and preoperative computed tomography (CT) images, is an essential step in successful US-guided computer-assisted orthopaedic surgery (CAOS). Most state-of-the-art registration methods in CAOS either require significant manual interaction from the user or are not robust to typical US artifacts. Furthermore, one of the major stumbling blocks facing existing methods is the requirement of an optimization procedure during registration, which is time-consuming and generally breaks down when the initial misalignment between the two data sets is large. Finally, due to the limited field of view of US imaging, obtaining scans of the full anatomy is problematic, which causes difficulties during registration. In this paper, we present a new method that registers local phase-based bone features in the frequency domain using image projections calculated from the three-dimensional (3D) Radon transform. The method is fully automatic, non-iterative, and requires no initial alignment between the two datasets. We also show the method's capability to register partial-view US data to full-view CT data. Experiments, carried out on a phantom and six clinical pelvis scans, show an average root-mean-square registration error of 0.8 mm.
Ilker Hacihaliloglu, David R. Wilson, Michael Gilbart, Michael Hunt, Purang Abolmaesumi
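The non-iterative, frequency-domain flavour of this kind of registration can be illustrated with plain 2D phase correlation (the paper's Radon-projection and local-phase machinery is not reproduced here). The translation between two images falls out of a single FFT round-trip, with no optimizer and no initial alignment:

```python
import numpy as np

def phase_correlate(a, b):
    """Non-iterative translation estimate via phase correlation:
    returns the circular shift s such that b == np.roll(a, s, axis=(0, 1))."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross /= np.abs(cross) + 1e-12             # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    return tuple(int(-p % n) for p, n in zip(peak, a.shape))

rng = np.random.default_rng(5)
fixed = rng.standard_normal((64, 64))          # stand-in "CT" slice
moving = np.roll(fixed, (5, 12), axis=(0, 1))  # stand-in shifted "US" slice
shift = phase_correlate(fixed, moving)
```

The normalized cross-power spectrum reduces to a pure phase ramp whose inverse transform is a delta at the shift, which is why the estimate needs no iteration.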
A Hierarchical Strategy for Reconstruction of 3D Acetabular Surface Models from 2D Calibrated X-Ray Images
Abstract
Recent studies have shown the advantage of performing range-of-motion experiments based on three-dimensional (3D) bone models to diagnose femoro-acetabular impingement (FAI). The relative motion of pelvic and femoral surface models is assessed dynamically in order to analyze potential bony conflicts. 3D surface models are normally retrieved from 3D imaging modalities such as computed tomography (CT) or magnetic resonance imaging (MRI). Despite the obvious advantage of using these modalities, surgeons still rely on the acquisition of planar X-ray radiographs to diagnose orthopedic impairments like FAI. Although X-ray imaging has advantages such as accessibility, low cost and low radiation exposure, it only provides two-dimensional information. A 3D reconstruction of the hip joint based on planar X-ray radiographs would therefore bring an enormous benefit to the diagnosis and planning of FAI-related problems. In this paper we present a new approach to calibrate conventional X-ray images and to reconstruct a 3D surface model of the acetabulum. Starting from the registration of a statistical shape model (SSM) of the hemi-pelvis, a localized patch-SSM is matched to the calibrated X-ray scene in order to recover the acetabular shape. We validated the proposed approach with X-ray radiographs acquired from 6 different cadaveric hips.
Steffen Schumann, Moritz Tannast, Mathias Bergmann, Michael Thali, Lutz-P. Nolte, Guoyan Zheng

Cardiac Applications

A Navigation Platform for Guidance of Beating Heart Transapical Mitral Valve Repair
Abstract
Traditional approaches to repairing and replacing mitral valves have relied on placing the patient on cardiopulmonary bypass (on-pump) and accessing the arrested heart directly via a median sternotomy. However, because this approach carries the potential for adverse neurological, vascular and immunological sequelae, there is a push towards performing such procedures in a minimally invasive fashion. Nevertheless, preliminary experience in animals and humans has indicated that ultrasound guidance alone is often not sufficient. This paper describes the first porcine trial of the NeoChord DS1000 (Minnetonka, MN), employed to attach neochords to a mitral valve leaflet, in which the traditional ultrasound-guided protocol has been augmented by dynamic virtual geometric models. In addition to demonstrating that the procedure can be performed with significantly increased precision and speed (up to 6 times), we also record and compare the trajectories used by each of five surgeons to navigate the NeoChord instrument.
John Moore, Chris Wedlake, Daniel Bainbridge, Gerard Guiraudon, Michael Chu, Bob Kiaii, Pencilla Lang, Martin Rajchl, Terry Peters
Motion Estimation Model for Cardiac and Respiratory Motion Compensation
Abstract
Catheter ablation is widely accepted as the best remaining option for the treatment of atrial fibrillation if drug therapy fails. Ablation procedures can be guided by 3D overlay images projected onto live fluoroscopic X-ray images. These overlay images are generated from MR, CT, or C-arm CT volumes. As the alignment of the overlay is often compromised by cardiac and respiratory motion, motion compensation methods are desirable. The most recent and promising approaches use either a catheter in the coronary sinus vein or a circumferential mapping catheter placed at the ostium of one of the pulmonary veins. As both methods suffer from different problems, we propose a novel method to achieve motion compensation for fluoroscopy-guided cardiac ablation procedures. Our new method localizes the coronary sinus catheter and, based on this information, estimates the position of the circumferential mapping catheter. As the mapping catheter is placed at the site of ablation, it provides a good surrogate for respiratory and cardiac motion. To correlate the motion of the two catheters, our method includes a training phase in which both catheters are tracked together. The training information is then used to estimate the cardiac and respiratory motion of the left atrium by observing the coronary sinus catheter only. The approach yields an average 2D estimation error of 1.99 ± 1.20 mm.
Sebastian Kaeppler, Alexander Brost, Martin Koch, Wen Wu, Felix Bourier, Terrence Chen, Klaus Kurzidim, Joachim Hornegger, Norbert Strobel
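The train-then-estimate idea (track both catheters together, then predict one from the other) can be sketched with a simple least-squares surrogate model. This is an illustrative toy, not the paper's estimator: it assumes, for the example, that the mapping-catheter position is an affine function of the coronary sinus (CS) catheter position plus noise.

```python
import numpy as np

rng = np.random.default_rng(3)

# Training phase: both catheters tracked together over T frames.
T = 200
cs = rng.standard_normal((T, 3))                       # CS catheter positions
W_true, b_true = rng.standard_normal((3, 3)), rng.standard_normal(3)
mapping = cs @ W_true.T + b_true + 0.05 * rng.standard_normal((T, 3))

# Fit an affine surrogate model by linear least squares.
X = np.hstack([cs, np.ones((T, 1))])                   # affine design matrix
coef, *_ = np.linalg.lstsq(X, mapping, rcond=None)

def estimate_mapping(cs_pos):
    """Application phase: estimate the mapping-catheter position
    from the CS catheter alone."""
    return np.append(cs_pos, 1.0) @ coef
```

Once trained, only the CS catheter needs to be localized in each fluoroscopic frame, mirroring the paper's strategy of using it as a surrogate for the motion at the ablation site.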
Cardiac Unfold: A Novel Technique for Image-Guided Cardiac Catheterization Procedures
Abstract
X-ray fluoroscopically guided cardiac catheterization procedures are commonly carried out for the treatment of cardiac arrhythmias, such as atrial fibrillation (AF), and for cardiac resynchronization therapy (CRT). X-ray images have poor soft-tissue contrast and, for this reason, the overlay of a 3D roadmap derived from pre-procedure volumetric image data can be used to add anatomical information. However, current overlay technologies have the limitation that 3D information is displayed on a 2D screen, so it is not possible for the cardiologist to appreciate the true positional relationship between anatomical/functional data and the position of the interventional devices. We propose a navigation methodology, called cardiac unfold, in which an entire cardiac chamber is unfolded from 3D to 2D along with all relevant anatomical and functional information and coupled to real-time device tracking. This allows more intuitive navigation, since the entire 3D scene is displayed simultaneously on a 2D plot. A real-time unfold guidance platform for CRT was developed in which navigation is performed using the standard AHA 16-segment bull's-eye plot of the left ventricle (LV). The accuracy of the unfold navigation was assessed on 13 patient data sets by computing the registration errors of the LV pacing lead electrodes, and was found to be 2.2 ± 0.9 mm. An unfold method was also developed for the left atrium (LA) using trimmed B-spline surfaces. The method was applied to 5 patient data sets, and its utility was demonstrated by displaying information from delayed-enhancement MRI of patients who had undergone radio-frequency ablation.
YingLiang Ma, Rashed Karim, R. James Housden, Geert Gijsbers, Roland Bullens, Christopher Aldo Rinaldi, Reza Razavi, Tobias Schaeffter, Kawal S. Rhode
Enabling 3D Ultrasound Procedure Guidance through Enhanced Visualization
Abstract
Real-time 3D ultrasound (3DUS) imaging offers improved spatial orientation information relative to 2D ultrasound. However, to improve its efficacy in guiding minimally invasive intra-cardiac procedures, where real-time visual feedback of the instrument tip location is crucial, 3DUS volume visualization alone is inadequate. This paper presents a set of enhanced visualization functionalities that track the tip of an instrument in slice views in real time. A user study with an in vitro porcine heart indicates a speedup of over 30% in task completion time.
Laura J. Brattain, Nikolay V. Vasilyev, Robert D. Howe

Neurosurgery, Surgical Workflow and Skill Evaluation

Fast and Robust Registration Based on Gradient Orientations: Case Study Matching Intra-operative Ultrasound to Pre-operative MRI in Neurosurgery
Abstract
We present a novel approach to the rigid registration of pre-operative magnetic resonance images to intra-operative ultrasound in the context of image-guided neurosurgery. Our framework maximizes gradient orientation alignment in locations with minimal uncertainty in the orientation estimates, permitting fast and robust performance. We evaluated our method on 14 clinical neurosurgical cases of patients with brain tumors, including low-grade and high-grade gliomas. We demonstrate processing times as low as 7 seconds and improved performance relative to competing intensity-based methods.
Dante De Nigris, D. Louis Collins, Tal Arbel
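A gradient-orientation similarity of the kind this framework maximizes can be sketched in a few lines. The toy metric below (illustrative, not the paper's exact formulation or uncertainty weighting) averages |cos| between gradient orientations where both gradients are non-negligible; taking the absolute value makes it insensitive to contrast inversion, which is the property that matters for multi-modal MR-to-US matching:

```python
import numpy as np

def orientation_alignment(fixed, moving, mag_floor=1e-6):
    """Mean |cos| between gradient orientations of two images, evaluated only
    where both gradient magnitudes are reliable; insensitive to contrast
    inversion between modalities."""
    fy, fx = np.gradient(fixed)
    my, mx = np.gradient(moving)
    mf, mm = np.hypot(fx, fy), np.hypot(mx, my)
    mask = (mf > mag_floor) & (mm > mag_floor)
    cos = (fx * mx + fy * my)[mask] / (mf * mm)[mask]
    return float(np.mean(np.abs(cos)))

rng = np.random.default_rng(6)
img = rng.standard_normal((64, 64))
self_score = orientation_alignment(img, img)
inverted_score = orientation_alignment(img, -img)  # multi-modal analogue
random_score = orientation_alignment(img, rng.standard_normal((64, 64)))
```

A registration loop would maximize this score over rigid transforms of the moving image; note that the score is as high for a contrast-inverted copy as for the image itself, while unrelated images score far lower.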
Atlas-Based Segmentation of the Subthalamic Nucleus, Red Nucleus, and Substantia Nigra for Deep Brain Stimulation by Incorporating Multiple MRI Contrasts
Abstract
Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is an effective therapy for drug-resistant Parkinson's disease (PD). Pre-operative identification of the STN is an important step in this neurosurgical procedure, but it is very challenging because of the accuracy needed to stimulate only a sub-region of the small STN in order to achieve the maximum therapeutic benefit without serious side effects. Previously, a technique was proposed that automatically identified basal ganglia structures by registering a 3D histological atlas to the subject's anatomy as shown in the T1w MRI. In this paper, we improve the accuracy of this technique for the segmentation of the STN, substantia nigra (SN), and red nucleus (RN). This is achieved by using additional MRI contrasts (i.e., T2*w and T2w images) that visualize these nuclei better than the T1w contrast used previously. Through validation against a silver-standard ground truth obtained for 6 subjects (3 healthy and 3 with PD), we observe significant improvements in segmenting the STN, SN, and RN with the new technique compared to the previous method, which uses only T1w-T1w inter-subject registration.
Yiming Xiao, Lara Bailey, M. Mallar Chakravarty, Silvain Beriault, Abbas F. Sadikot, G. Bruce Pike, D. Louis Collins
Towards Systematic Usability Evaluations for the OR: An Introduction to OR-Use Framework
Abstract
It has been shown that the usability of intra-operative computer-based systems has a direct impact on patient outcome and patient safety. While several tools facilitate usability testing in other domains, no such tool exists for the operating room (OR). In this work, after investigating the features of existing tools in other domains and observing the practical requirements specific to the OR, we summarize the key functionalities that a usability-testing support tool for intra-operative devices should provide. Furthermore, we introduce the OR-Use framework, a tool developed to support usability evaluation in the OR and designed to fulfill these requirements. Finally, we report on several tests performed to evaluate the framework, as well as on the usability tests that have been conducted with it to date.
Ali Bigdelou, Aslı Okur, Max-Emanuel Hoffmann, Bamshad Azizi, Nassir Navab
Improving the Development of Surgical Skills with Virtual Fixtures in Simulation
Abstract
This paper focuses on the use of virtual fixtures to improve the learning of basic skills for laparoscopic surgery. Five virtual fixtures are defined, integrated into a virtual surgical simulator, and used to define an experimental setup based on a trajectory-following task.
Forty-six subjects, both surgeons and residents, underwent a training session based on the proposed setup. Their performance was logged and used to identify the effect of virtual fixtures on the learning curve in terms of accuracy and completion time.
Virtual fixtures prove effective in improving learning, and they affect accuracy and completion time differently. This suggests the possibility of tailoring virtual fixtures to the requirements of a specific task.
Albert Hernansanz, Davide Zerbato, Lorenza Gasperotti, Michele Scandola, Paolo Fiorini, Alicia Casals
Sparse Hidden Markov Models for Surgical Gesture Classification and Skill Evaluation
Abstract
We consider the problem of classifying surgical gestures and skill level in robotic surgical tasks. Prior work in this area models gestures as states of a hidden Markov model (HMM) whose observations are discrete, Gaussian or factor analyzed. While successful, these approaches are limited in expressive power due to the use of discrete or Gaussian observations. In this paper, we propose a new model called sparse HMMs whose observations are sparse linear combinations of elements from a dictionary of basic surgical motions. Given motion data from many surgeons with different skill levels, we propose an algorithm for learning a dictionary for each gesture together with an HMM grammar describing the transitions among different gestures. We then use these dictionaries and the grammar to represent and classify new motion data. Experiments on a database of surgical motions acquired with the da Vinci system show that our method performs on par with or better than state-of-the-art methods. This suggests that learning a grammar based on sparse motion dictionaries is important in gesture and skill classification.
Lingling Tao, Ehsan Elhamifar, Sanjeev Khudanpur, Gregory D. Hager, René Vidal
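The core building block, coding a motion sample sparsely over per-gesture dictionaries, can be sketched with orthogonal matching pursuit (OMP). The example below is a standalone illustration with invented random dictionaries and hypothetical gesture names ("reach", "grasp"), not the authors' learned dictionaries or HMM grammar: a sample is assigned to the gesture whose dictionary reconstructs it with the smaller sparse-coding residual.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: greedy k-sparse approximation of x over
    the unit-norm columns of dictionary D; returns the final residual."""
    residual, support = x.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sub = D[:, support]
        coef, *_ = np.linalg.lstsq(sub, x, rcond=None)
        residual = x - sub @ coef
    return residual

rng = np.random.default_rng(7)
dim, atoms = 20, 50
def gesture_dictionary():
    D = rng.standard_normal((dim, atoms))
    return D / np.linalg.norm(D, axis=0)

D_reach, D_grasp = gesture_dictionary(), gesture_dictionary()  # hypothetical gestures
x = D_reach[:, 3] * 1.0 - D_reach[:, 17] * 0.8   # motion sample from "reach"
r_reach = np.linalg.norm(omp(D_reach, x, k=2))
r_grasp = np.linalg.norm(omp(D_grasp, x, k=2))
label = "reach" if r_reach < r_grasp else "grasp"
```

In the full model these per-frame sparse codes become the HMM observations, and the learned transition grammar ties them into gesture sequences.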
Backmatter
Metadata
Title
Information Processing in Computer-Assisted Interventions
Editors
Purang Abolmaesumi
Leo Joskowicz
Nassir Navab
Pierre Jannin
Copyright Year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-30618-1
Print ISBN
978-3-642-30617-4
DOI
https://doi.org/10.1007/978-3-642-30618-1