
About this Book

This book constitutes the proceedings of the 4th International Conference on Information Processing in Computer-Assisted Interventions, IPCAI 2013, held in Heidelberg, Germany, on June 26, 2013. The 11 papers presented were carefully reviewed and selected from 20 submissions. They are organized in topical sections on simulation, neurosurgical interventions, ultrasound-guided interventions, and image-guided interventions.

Table of Contents

Frontmatter

Simulation

Development and Procedural Evaluation of Immersive Medical Simulation Environments

Abstract
We present a method for designing a medical simulation environment based on task and crisis analysis of the surgical workflow. The environment consists of real surgical tools and instruments augmented with realistic haptic feedback and VR capabilities. It addresses a broad spectrum of human sensory channels, including tactile, auditory, and visual, in real time. Moreover, the proposed approach provides a simulation environment facilitating deliberate exposure to adverse events, enabling training of error-recovery strategies. To assess the face validity of our simulator design, we chose a spinal procedure, vertebroplasty, and immersed four expert surgeons in our medical simulation environment. Using a Likert-scale questionnaire, face validity was evaluated by investigating surgeon behavior and workflow response. The results of the user study support our medical simulation concept of integrating VR and human multisensory responses into the surgical workflow.
Patrick Wucherer, Philipp Stefan, Simon Weidert, Pascal Fallavollita, Nassir Navab

Generation of Synthetic 4D Cardiac CT Images for Guidance of Minimally Invasive Beating Heart Interventions

Abstract
Off-pump beating heart surgery requires a guidance system that shows both pertinent cardiac anatomy and dynamic motion, peri- and intra-operatively. Optimally, the guidance system should provide high-quality images and models cost-effectively and be easily integrated into the standard clinical workflow. However, such a goal is difficult to accomplish with a single imaging modality. In this paper we introduce a method for generating a synthetic 4D cardiac CT dataset from a single (static) CT image together with 4D ultrasound images. These synthetic images can be combined with intra-operative ultrasound during surgery to provide an intuitive and effective augmented-virtuality guidance system. The generation method obtains patient-specific cardiac motion information by performing non-rigid registrations between pre-operative 4D ultrasound images and applies the resulting deformations to a static CT image, deforming it into a series of dynamic CT images. Validation was performed by comparing the synthetic CT images to real dynamic CT images.
Feng P. Li, Martin Rajchl, James A. White, Aashish Goela, Terry M. Peters
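The core deformation step described in the abstract can be illustrated with a minimal sketch: given a dense displacement field estimated between two ultrasound phases, the static CT volume is backward-warped into that phase. The function name and the nearest-neighbour sampling below are illustrative simplifications, not the authors' implementation, which would use proper interpolation:

```python
import numpy as np

def warp_volume(vol, disp):
    """Backward-warp a 3D volume with a per-voxel displacement field.

    vol  : (Z, Y, X) intensity volume, e.g. the single static CT.
    disp : (3, Z, Y, X) displacement in voxel units, e.g. estimated by
           non-rigid registration between two 4D-ultrasound phases.
    Nearest-neighbour sampling keeps the sketch dependency-free; a real
    pipeline would use (at least) trilinear interpolation.
    """
    grid = np.indices(vol.shape)               # identity coordinates
    coords = np.rint(grid + disp).astype(int)  # where each output voxel samples
    for ax, size in enumerate(vol.shape):      # clamp samples to the volume
        coords[ax] = np.clip(coords[ax], 0, size - 1)
    return vol[tuple(coords)]

# Toy check: a uniform +1 displacement along z shifts content one voxel.
vol = np.zeros((4, 4, 4))
vol[1, 2, 2] = 1.0
disp = np.zeros((3, 4, 4, 4))
disp[0] += 1.0
warped = warp_volume(vol, disp)  # warped[0, 2, 2] == 1.0
```

Repeating this warp with the displacement field of each ultrasound phase yields the series of synthetic dynamic CT frames.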

Neurosurgical Interventions

Intra-operative Identification of the Subthalamic Nucleus Motor Zone Using Goniometers

Abstract
The current state of the art for identifying motor-related neural activity during deep brain stimulation (DBS) surgery relies on manual movements of the patient's joints while observing the recorded raw data of a single electrode. Here we describe an intra-operative method for detecting the motor territory of the subthalamic nucleus (STN) during DBS surgery. The method incorporates eight goniometers that continuously monitor and measure the angles of the wrist, elbow, knee, and ankle, bilaterally. The joint movement data and microelectrode recordings from the STN are synchronized, enabling objective intra-operative assessment of movement-related STN activity. This method is now used routinely in DBS surgery at our institute. Advantages include objective identification of motor areas, simultaneous detection of movement for all joints, detection of movement at a joint that is not under examination, shorter surgery time, and continuous monitoring of STN activity for patients with tremor.
Reuben R. Shamir, Renana Eitan, Sivan Sheffer, Odeya Marmor-Levin, Dan Valsky, Shay Moshel, Adam Zaidel, Hagai Bergman, Zvi Israel

Model-Guided Placement of Cerebral Ventricular Catheters

Abstract
Purpose: Freehand placement of external ventricular drains is not sufficiently accurate or precise. In the absence of high-quality pre-operative 3D images, we propose the use of an average model for guidance of ventricular catheters. Methods: The model was segmented to extract the ventricles and registered to five normal volunteers using a combination of landmark-based and surface-based registration. The proposed method was validated by comparing the use of the average model to the use of volunteer-specific images. Results: The position and orientation of the ventricles were compared, and the distances between the target points at the left and right foramen of Monro were computed (mean±SD: 5.65±1.60 mm and 6.05±1.34 mm for the left and right sides, respectively). Conclusions: Although an average model for guidance of a surgical procedure has a number of limitations, our initial experiments show that such a model might provide sufficient guidance for determining the angle of insertion. Future work will include further clinical testing and possible refinement of the model.
Ingerid Reinertsen, Asgeir Jakola, Ole Solheim, Frank Lindseth, Geirmund Unsgård

Ultrasound-Guided Interventions

Ultrasound-Based Image Guidance for Robot-Assisted Laparoscopic Radical Prostatectomy: Initial in-vivo Results

Abstract
This paper describes the initial clinical evaluation of a real-time ultrasound-based guidance system for robot-assisted laparoscopic radical prostatectomy (RALRP). The surgical procedure was performed on a live anaesthetized canine with a da Vinci Si robot. Intraoperative imaging was performed using a robotic transrectal ultrasound (TRUS) manipulator and a bi-plane TRUS transducer. Two registration methods were implemented and tested: (i) using specialized fiducials placed at the air-tissue boundary, 3D TRUS data were registered to the da Vinci stereo endoscope with an average TRE of 2.37 ± 1.06 mm; (ii) using localizations of the da Vinci manipulator tips in 3D TRUS images, 3D TRUS data were registered to the kinematic frame of the da Vinci manipulators with an average TRE of 1.88 ± 0.88 mm using manual tool tip localization, and an average TRE of 2.68 ± 0.98 mm using an automatic tool tip localization algorithm. Registration time was consistently less than 2 minutes when performed by two experienced surgeons after limited learning. The location of the TRUS probe was remotely controlled through part of the procedure by a da Vinci tool, with the corresponding ultrasound images displayed on the surgeon console using TilePro. Automatic tool tracking was achieved with an angular accuracy of 1.65 ± 1.24 deg. This work demonstrates, for the first time, the in-vivo use of a robotically controlled TRUS probe calibrated to the da Vinci robot, allowing the da Vinci tools to be tracked for safety and used as pointers to regions of interest to be imaged by ultrasound.
Omid Mohareri, Caitlin Schneider, Troy K. Adebar, Mike C. Yip, Peter Black, Christopher Y. Nguan, Dale Bergman, Jonathan Seroger, Simon DiMaio, Septimiu E. Salcudean
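The target registration errors (TRE) reported above can be computed with a standard pipeline: fit a least-squares rigid transform to corresponding fiducial points (the Kabsch algorithm) and then measure residual distances at held-out target points. This is a generic sketch under assumed point correspondences, not the authors' code; the function names are illustrative:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch) such that R @ src_i + t ~ dst_i."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_mean - R @ src_mean
    return R, t

def mean_tre(R, t, targets_src, targets_dst):
    """Mean target registration error at held-out target points (same units)."""
    mapped = targets_src @ R.T + t
    return np.linalg.norm(mapped - targets_dst, axis=1).mean()
```

In practice the transform is estimated from the fiducials only, and the TRE is evaluated at anatomical targets that were not used in the fit.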

Augmentation of Paramedian 3D Ultrasound Images of the Spine

Abstract
The blind placement of an epidural needle is among the most difficult regional anesthetic techniques. The challenge is to insert the needle in the mid-sagittal plane and to avoid overshooting the needle into the spinal cord. Prepuncture 2D ultrasound scanning has been introduced as a reliable tool to localize the target and facilitate epidural needle placement. Ideally, real-time ultrasound should be used during needle insertion. However, several issues inhibit the use of standard 2D ultrasound, including obstruction of the puncture site by the ultrasound probe, low visibility of the target in ultrasound images, and increased pain due to a longer needle trajectory. An alternative is to use 3D ultrasound imaging, where the needle and target can be visible within the same reslice of a 3D volume; however, novice ultrasound users (i.e., many anesthesiologists) still have difficulty interpreting ultrasound images of the spine and identifying the target epidural space. In this paper, we propose to augment 3D ultrasound images by registering a multi-vertebrae statistical shape+pose model. We use this augmentation for enhanced interpretation of the ultrasound and identification of the mid-sagittal plane for needle insertion. Validation is performed on synthetic data derived from CT images and on 64 in vivo ultrasound volumes.
Abtin Rasoulian, Robert N. Rohling, Purang Abolmaesumi

3D Segmentation of Curved Needles Using Doppler Ultrasound and Vibration

Abstract
A method for segmenting the 3D shape of curved needles in solid tissue is described. An actuator attached to the needle outside the tissue vibrates at frequencies between 600 Hz and 6500 Hz, while 3D power Doppler ultrasound imaging is applied to detect the resulting motion of the needle shaft and surrounding tissue. The cross section of the vibrating needle is detected across the series of 2D images produced by a mechanical 3D ultrasound transducer, and the needle shape is reconstructed by fitting a 3D curve to the resulting points. The sensitivity of segmentation accuracy to tissue composition, vibration frequency, and Doppler pulse repetition frequency (PRF) was examined. Comparison with manual segmentation demonstrates that this method results in an average error of 1.09 mm in ex vivo tissue. This segmentation method may be useful in the future for providing feedback on curved needle shape for control of robotic needle steering systems.
Troy K. Adebar, Allison M. Okamura
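The final reconstruction step, fitting a 3D curve to the detected needle cross-sections, can be sketched as fitting an independent low-order polynomial to each coordinate of the ordered points. The function name, parameterization, and polynomial degree below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def fit_needle_curve(points, degree=2):
    """Fit a 3D polynomial curve to ordered needle cross-section points.

    points : (N, 3) detections ordered roughly along the insertion axis.
    Returns a callable mapping a parameter u in [0, 1] to an (x, y, z) point.
    """
    s = np.linspace(0.0, 1.0, len(points))  # chord-length would be more robust
    coeffs = [np.polyfit(s, points[:, dim], degree) for dim in range(3)]
    return lambda u: np.array([np.polyval(c, u) for c in coeffs])

# Example: eight detections along a straight insertion path.
pts = np.stack([np.linspace(0, 10, 8), np.zeros(8), np.linspace(0, 5, 8)], axis=1)
curve = fit_needle_curve(pts)
midpoint = curve(0.5)  # close to (5.0, 0.0, 2.5)
```

The fitted curve can then be compared point-wise against a manual segmentation to obtain error statistics such as the 1.09 mm reported above.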

Mobile EM Field Generator for Ultrasound Guided Navigated Needle Insertions

Abstract
Needle insertions are an elementary tool for diagnostic and therapeutic purposes. Critical success factors are precise needle placement, avoidance of critical structures, and short intervention time. Navigation solutions for ultrasound-based needle insertions have been presented but have not found widespread clinical application. This can be attributed to the complexity and higher costs introduced by additional tracking-related equipment. Using a new compact electromagnetic (EM) field generator (FG), we present the first navigated intervention method that combines the field generator and ultrasound (US) probe into a single mobile imaging modality enabling tracking of both needle and patient. In a phantom study, we applied the system to navigated needle insertion, achieving a hit rate of 92% and a mean accuracy of 3.1 mm (n=24). These results demonstrate the potential of the new combined modality to facilitate US-guided needle insertions.
Keno März, Alfred M. Franz, Bram Stieltjes, Justin Iszatt, Alexander Seitel, Boris Radeleff, Hans-Peter Meinzer, Lena Maier-Hein

Image-Guided Interventions

First Flexible Robotic Intra-operative Nuclear Imaging for Image-Guided Surgery

Abstract
Functional imaging systems for intra-operative use, such as freehand SPECT, have been successfully demonstrated in the past with remarkable results. These results, while very positive in some cases, suffer from high variability depending on the expertise of the operator: a well-trained operator can produce datasets that lead to a reconstruction rivaling a conventional SPECT machine, while an untrained one cannot achieve such results. In this paper we present a flexible robotic functional nuclear imaging setup for intra-operative use, replacing the operator in the scanning process with a robotic arm. The robot can ensure good coverage of the area of interest, producing a consistent scanning pattern that can be reproduced with high accuracy, and provides the option to compensate for radioactive decay. We show first results on phantoms demonstrating the feasibility of the setup for 3D nuclear imaging suitable for the operating room.
José Gardiazabal, Tobias Reichl, Aslı Okur, Tobias Lasser, Nassir Navab

Robust Real-Time Image-Guided Endoscopy: A New Discriminative Structural Similarity Measure for Video to Volume Registration

Abstract
This paper proposes a fully automatic, real-time, robust image-guided endoscopy method that uses a new discriminative structural similarity measure for registration of pre- and intra-operative data. Current approaches are limited in clinical application by two major bottlenecks: (1) weak continuity, i.e., endoscopic guidance may be interrupted when a similarity measure incorrectly characterizes video images and virtual renderings generated from pre-operative volume data, resulting in registration failure; and (2) slow computation, since volume rendering is a time-consuming step in the registration. To address the first drawback, we introduce a robust similarity measure based on the degradation of structural information, which considers image correlation or structure, luminance, and contrast to characterize images. Moreover, we use graphics processing unit (GPU) techniques to accelerate the volume rendering step. We evaluated our method on patient datasets. The experimental results demonstrate a promising method, potentially applicable in the operating room, for guiding endoscopy accurately and robustly in real time: compared to current image-guided methods, the average position and orientation accuracy improved from (14.6 mm, 51.2°) to (4.45 mm, 12.3°), and the runtime was about 32 frames per second.
Xiongbiao Luo, Hirotsugu Takabatake, Hiroshi Natori, Kensaku Mori
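The similarity measure described above combines luminance, contrast, and structure terms in the spirit of SSIM. The following is a minimal global (non-windowed) sketch of such a measure, with illustrative stabilizing constants; it is not the paper's exact discriminative formulation:

```python
import numpy as np

def structural_similarity(a, b, c1=1e-4, c2=9e-4):
    """Global SSIM-style score: a luminance term times a combined
    contrast/structure term.

    c1 and c2 stabilize the ratios when means or variances are near zero.
    A windowed version would compute this locally and average the map,
    which is what registration objectives typically use.
    """
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    luminance = (2 * mu_a * mu_b + c1) / (mu_a**2 + mu_b**2 + c1)
    contrast_structure = (2 * cov + c2) / (var_a + var_b + c2)
    return luminance * contrast_structure
```

During registration, a score like this would be maximized over camera pose, comparing each video frame against virtual renderings of the pre-operative volume.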

Declustering n-Connected Components for Segmentation of Iodine Implants in C-Arm Fluoroscopy Images

Abstract
Dynamic dosimetry is becoming the standard for evaluating the quality of radioactive implants during brachytherapy. It is essential to obtain a 3D visualization of the implanted seeds and their position relative to the prostate. For this, a robust and precise segmentation of the seeds in 2D X-ray images is required. First, implanted seeds are segmented using a region-based implicit active contour approach. Then, n-seed clusters are resolved using an efficient template-based approach. A collection of 55 C-arm images from 10 patients was used to validate the proposed algorithm. Compared to manual ground-truth segmentation of 6002 seeds, 98.7% of seeds were automatically detected and declustered, with a false-positive rate of only 1.7%. The results indicate that the proposed method can identify and annotate seeds on par with a human expert, constituting a viable alternative to the traditional manual segmentation approach.
Chiara Amat di San Filippo, Gabor Fichtinger, William James Morris, Septimiu E. Salcudean, Ehsan Dehghan, Pascal Fallavollita
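The declustering problem, deciding how many seeds an n-connected blob contains, can be illustrated with a crude area-ratio heuristic. The paper's template-based approach is more sophisticated, so treat this purely as a sketch of the underlying idea; the function name and heuristic are assumptions:

```python
import numpy as np

def seeds_in_cluster(cluster_mask, single_seed_area):
    """Crude estimate of how many seeds an n-connected blob contains,
    by comparing its pixel area with a single-seed template's area.

    cluster_mask     : 2D boolean mask of one connected component.
    single_seed_area : expected pixel area of an isolated seed.
    """
    n = round(cluster_mask.sum() / single_seed_area)
    return max(1, n)  # every detected blob holds at least one seed
```

A template-based decluster would go further, fitting oriented seed templates inside the blob to recover each seed's position and orientation rather than only a count.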

Backmatter
