
2012 | Book

Augmented Environments for Computer-Assisted Interventions

6th International Workshop, AE-CAI 2011, Held in Conjunction with MICCAI 2011, Toronto, ON, Canada, September 22, 2011, Revised Selected Papers

Edited by: Cristian A. Linte, John T. Moore, Elvis C. S. Chen, David R. Holmes III

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this Book

This book constitutes the refereed proceedings of the 6th International Workshop on Augmented Environments for Computer-Assisted Interventions, AE-CAI 2011, held in conjunction with MICCAI 2011 in Toronto, Canada, in September 2011. The 13 revised full papers presented were carefully reviewed and selected from 21 submissions. The papers cover the following topics: image registration and fusion, calibration, visualisation and 3D perception, hardware and optical design, real-time implementations, validation, clinical applications and clinical evaluation.

Table of Contents

Frontmatter
2D/3D Registration of a Preoperative Model with Endoscopic Video Using Colour-Consistency
Abstract
Image-guided surgery needs an effective and efficient registration between 2D video images of the surgical scene and a preoperative model of a patient from 3D MRI or CT scans. Such an alignment process is difficult due to the lack of robustly trackable features on the operative surface as well as tissue deformation and specularity. In this paper, we propose a novel approach to perform the registration using PTAM camera tracking and colour-consistency. PTAM provides a set of video images with the corresponding camera positions. Registration of the 3D model to the video images can then be achieved by maximization of colour-consistency between all 2D pixels corresponding to a given 3D surface point. An improved algorithm for calculation of visible surface points is provided. It is hoped that PTAM camera tracking using a reduced set of points can be combined with colour-consistency to provide a robust registration. A ground truth simulation test bed has been developed for validating the proposed algorithm and empirical studies have shown that the approach is feasible, with ground truth simulation data providing a capture range of ±9mm/° with a TRE less than 2mm. Our intended application is robot-assisted laparoscopic prostatectomy.
Ping-Lin Chang, Dongbin Chen, Daniel Cohen, Philip “Eddie” Edwards
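The colour-consistency objective described above can be illustrated with a short sketch: each visible 3D model point is projected into every PTAM-tracked frame, and the variance of the sampled colours is penalised. This is a minimal numpy illustration under simplifying assumptions (pinhole cameras, nearest-pixel sampling, a hypothetical project helper), not the authors' implementation.

```python
# Illustrative colour-consistency cost for 2D/3D registration (numpy only).
# Assumes pinhole cameras with shared intrinsics K and PTAM-estimated poses
# (R_i, t_i); the registration would minimise this cost over the model pose.
import numpy as np

def project(K, R, t, X):
    """Project Nx3 world points into pixel coordinates for one camera."""
    x = (K @ (R @ X.T + t[:, None])).T   # Nx3 homogeneous image points
    return x[:, :2] / x[:, 2:3]          # perspective divide -> Nx2 pixels

def colour_consistency_cost(model_pts, cameras, images, K):
    """Sum of per-point colour variance across all views that see the point."""
    cost = 0.0
    for X in model_pts:                  # one visible 3D surface point
        samples = []
        for (R, t), img in zip(cameras, images):
            u, v = project(K, R, t, X[None, :])[0]
            if 0 <= int(v) < img.shape[0] and 0 <= int(u) < img.shape[1]:
                samples.append(img[int(v), int(u)])  # nearest-pixel colour
        if len(samples) > 1:
            cost += np.var(np.asarray(samples, float), axis=0).sum()
    return cost
```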
A Realistic Test and Development Environment for Mixed Reality in Neurosurgery
Abstract
In a mixed reality visualization, physical and virtual environments are merged to produce new visualizations where both real and virtual objects are displayed together. In image guided surgery (IGS), surgical tools and data sets are fused into a mixed reality visualization providing the surgeon with a view beyond the visible anatomical surface of the patient, thereby reducing patient trauma, and potentially improving clinical outcomes. To date, few mixed reality systems are used on a regular basis for surgery. One possible reason for this is that little research has been done on which visualization methods and techniques are best and how they should be incorporated into the surgical workflow. There is a strong need for evaluation of different visualization methods that may show the clinical usefulness of such systems. In this work we present a test and development environment for augmented reality visualization techniques and provide an example of the system use for image guided neurovascular surgery. The system was developed using open source software and off-the-shelf hardware.
Simon Drouin, Marta Kersten-Oertel, Sean Jy-Shyang Chen, D. Louis Collins
Visual Search Behaviour and Analysis of Augmented Visualisation for Minimally Invasive Surgery
Abstract
Disorientation has been one of the key issues hampering natural orifice translumenal endoscopic surgery (NOTES) adoption. A new Dynamic View Expansion (DVE) technique was recently introduced as a method to increase the field-of-view, as well as to provide temporal visual cues to encode the camera motion trajectory. This paper presents a systematic analysis of visual search behaviour during the use of DVE for NOTES navigation. The study compares spatial orientation and latency with and without the use of the new DVE technique with motion trajectory encoding. Eye tracking data was recorded and modelled using Markov chains to characterise the visual search behaviour, where a new region of interest (ROI) definition was used to determine the states in the transition graphs. Resultant state transition graphs formed from the participants’ eye movements showed a marked difference in visual search behaviour with increased cross-referencing between grey and less grey regions. The results demonstrate the advantages of using motion trajectory encoding for DVE.
Kenko Fujii, Johannes Totz, Guang-Zhong Yang
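As an illustration of the Markov-chain modelling described above, a first-order transition matrix can be estimated by counting transitions between ROI-labelled fixations. A minimal sketch, assuming gaze samples have already been mapped to integer ROI labels; the ROI names in the example are hypothetical.

```python
import numpy as np

def transition_matrix(roi_sequence, n_states):
    """Row-normalised first-order Markov transition matrix from a gaze ROI sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(roi_sequence[:-1], roi_sequence[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# e.g. fixations mapped to 3 hypothetical ROIs
# (0: expanded periphery, 1: live endoscope view, 2: trajectory cue)
P = transition_matrix([0, 1, 1, 2, 1, 0, 2, 2, 1], n_states=3)
```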
Augmented Reality Image Overlay Projection for Image Guided Open Liver Ablation of Metastatic Liver Cancer
Abstract
This work presents an evaluation of a novel augmented reality approach for the visualisation of real time guidance of an ablation tool to a tumor in open liver surgery. The approach uses a portable image overlay device, directly integrated into a liver surgical navigation system, to display guidance graphics along with underlying anatomical structures directly on the liver surface. The guidance application generates trajectories from the current ablation needle tip to the centre of the tumor. Needle alignment guidance and depth information are displayed directly on the liver surface, providing intuitive real-time feedback for guiding the ablation tool tip to the targeted tumor. Validation of the guidance visual feedback on porcine liver tissue showed that the system was useful in trajectory planning and tumor targeting. The augmented reality guidance was easily visible, removed the need for sight diversion, and was implemented without imposing any additional time or procedural overhead compared to the navigated procedure itself.
Kate Alicia Gavaghan, Sylvain Anderegg, Matthias Peterhans, Thiago Oliveira-Santos, Stefan Weber
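The needle-alignment and depth cues described above reduce to simple vector geometry: the angle between the current needle axis and the tip-to-tumor trajectory, plus the remaining distance to the target. A minimal sketch under that assumption (coordinates in millimetres; not the authors' implementation):

```python
import numpy as np

def needle_guidance(tip, tumor_center, needle_axis):
    """Return (alignment angle in degrees, remaining depth in mm) for overlay display."""
    to_target = tumor_center - tip
    depth = np.linalg.norm(to_target)                 # distance left to advance
    cosang = np.dot(needle_axis, to_target) / (np.linalg.norm(needle_axis) * depth)
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return angle, depth

angle, depth = needle_guidance(np.array([10., 0., 5.]),
                               np.array([40., 20., 15.]),
                               np.array([1., 0.5, 0.3]))
```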
Tissue Deformation Recovery with Gaussian Mixture Model Based Structure from Motion
Abstract
Accurate 3D reconstruction of the surgical scene is important in intra-operative guidance. Existing methods are often based on the assumption that the camera is static or the tissue is deforming with periodic motion. In minimally invasive surgery, these assumptions do not always hold due to free-form tissue deformation induced by instrument-tissue interaction and camera motion required for continuous exploration of the surgical scene, particularly for intraluminal procedures. The aim of this work is to propose a novel framework for intra-operative free-form deformation recovery. The proposed method builds on a compact scene representation scheme that is suitable for both surgical episode identification and instrument-tissue motion modeling. Unlike previous approaches, it does not impose explicit models on tissue deformation, allowing realistic free-form deformation recovery. Validation is provided on both synthetic and phantom data. The practical value of the method is further demonstrated by deformation recovery on in vivo data recorded from a robotic assisted minimally invasive surgical procedure.
Stamatia Giannarou, Guang-Zhong Yang
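As a loose illustration of the compact scene representation idea, a Gaussian mixture can be fitted to the tracked surface features of each frame; the paper's actual framework couples such a representation with instrument-tissue motion modelling, which is not reproduced here. A sketch using scikit-learn, with random stand-in data:

```python
# A minimal sketch, assuming tracked 3D surface features are available per frame.
import numpy as np
from sklearn.mixture import GaussianMixture

features_t = np.random.rand(200, 3)   # stand-in for one frame's tracked features
gmm = GaussianMixture(n_components=8, covariance_type="full").fit(features_t)
# gmm.means_ / gmm.covariances_ give a compact per-frame scene descriptor;
# comparing descriptors across frames exposes free-form tissue deformation.
```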
Towards an Ultrasound Probe with Vision: Structured Light to Determine Surface Orientation
Abstract
Over the past decade, we have developed an augmented reality system called the Sonic Flashlight (SF), which merges ultrasound with the operator’s vision using a half-silvered mirror and a miniature display attached to the ultrasound probe. We now add a small video camera and a structured laser light source so that computer vision algorithms can determine the location of the surface of the patient being scanned, to aid in analysis of the ultrasound data. In particular, we intend to determine the angle of the ultrasound probe relative to the surface to disambiguate Doppler information from arteries and veins running parallel to, and beneath, that surface. The initial demonstration presented here finds the orientation of a flat-surfaced ultrasound phantom. This is a first step towards integrating more sophisticated computer vision methods into automated ultrasound analysis, with the ultimate goal of creating a symbiotic human/machine system that shares both ultrasound and visual data.
Samantha Horvath, John Galeotti, Bo Wang, Matt Perich, Jihang Wang, Mel Siegel, Patrick Vescovi, George Stetten
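The surface-orientation step described above amounts to fitting a plane to the 3D points recovered from the structured laser light and comparing its normal with the probe axis. A minimal sketch of that geometry (numpy only; not the authors' implementation):

```python
import numpy as np

def fit_plane_normal(points):
    """Least-squares plane normal of Nx3 laser-line surface points via SVD."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]                        # direction of least variance

def probe_angle_deg(probe_axis, surface_normal):
    """Angle between the ultrasound probe axis and the skin-surface normal."""
    c = np.dot(probe_axis, surface_normal)
    c /= np.linalg.norm(probe_axis) * np.linalg.norm(surface_normal)
    return np.degrees(np.arccos(np.clip(abs(c), 0.0, 1.0)))
```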
Markov Modeling of Colonoscopy Gestures to Develop Skill Trainers
Abstract
Colonoscopy is a complex procedure which requires considerable skill by the clinician in guiding the scope safely and accurately through the colon, and in a manner tolerable to the patient. Excessive pressure can cause the colon walls to distend, leading to excruciating pain or perforation. Concerted efforts by the ASGE have led to guidelines stipulating the training required for trainees to reach the necessary expertise. In this paper, we have analyzed the motion of the colonoscope by collecting kinematics data using 4 electromagnetic position sensors. Further, 36 feature vectors have been defined to capture all possible gestures. These feature vectors are used to train Hidden Markov Models to identify critical gestures that differentiate expertise. Five expert attending clinicians and four fellows were recruited as part of this study. Experimental results show that the roll of the scope provides the greatest differentiation of expertise.
Jagadeesan Jayender, Inbar Spofford, Balazs I. Lengyel, Christopher C. Thompson, Kirby G. Vosburgh
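Scoring a gesture sequence under competing HMMs, as described above, relies on the forward algorithm. A minimal sketch for a discrete-observation HMM with a scaled recursion; the expert/novice model pairing in the comment is illustrative, and the paper's models are trained on the 36 kinematic feature vectors rather than a discrete alphabet.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM (pi, A, B)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum()); alpha /= alpha.sum()   # scaled forward pass
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum()); alpha /= alpha.sum()
    return loglik

# A quantised gesture sequence can then be classified by the larger of two
# trained models: forward_loglik(seq, *expert_hmm) vs forward_loglik(seq, *novice_hmm)
```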
Volume Visualization in the Clinical Practice
Abstract
Volumetric data is common in medicine, geology and engineering, but the O(n³) complexity in data and algorithms has prevented the widespread use of volume graphics. Recently, 3D image processing and visualization algorithms have been parallelized and ported to graphics processing units. Today, medical diagnostics highly depends on volumetric imaging methods that must be visualized in real-time. However, daily clinical practice shows that physicians still prefer simple 2D multi-planar reconstructions over 3D visualizations for intervention planning. Therefore, a very basic question in this context is whether real-time 3D image synthesis is necessary at all. This paper makes four main observations in a clinical context, which are evaluated with 24 independent physicians from three different European hospitals.
Bernhard Kainz, Rupert H. Portugaller, Daniel Seider, Michael Moche, Philipp Stiegler, Dieter Schmalstieg
CT-US Registration for Guidance of Transcatheter Aortic Valve Implantation
Abstract
Transcatheter aortic valve implantation is a minimally invasive procedure that delivers a replacement aortic valve to a beating heart using a valved stent in a catheter delivered either transapically or via the femoral artery. Current image guidance of this procedure relies on fluoroscopy, which provides poor visualization of cardiac anatomy. Combining a preoperative CT with intraoperative TEE images into a common environment would allow for improved image-guidance using only standard clinical images. A fast intraoperative registration of these images based on an iterative closest point registration is proposed and validated on human images. Three methods of extracting surface points from the TEE images are compared: 1) Biplane images outlined manually, 2) Multiple planes from the 3D TEE outlined manually, and 3) An automatically extracted surface from 3D TEE. Results from this initial validation demonstrate that a target registration error of less than 5mm may be achieved.
Pencilla Lang, Michael W. Chu, Daniel Bainbridge, Elvis C. S. Chen, Terry M. Peters
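The iterative closest point registration at the core of this method alternates nearest-neighbour matching with a closed-form rigid update (Kabsch/SVD). A minimal point-to-point sketch, assuming surface points have already been extracted from the TEE and CT images; the three methods compared in the paper differ in how those points are obtained, not in this core loop.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(source, target, iters=50):
    """Point-to-point ICP aligning TEE surface points (source) to a CT surface (target)."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        nn = target[tree.query(src)[1]]          # closest CT point per TEE point
        mu_s, mu_n = src.mean(0), nn.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (nn - mu_n))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # avoid reflections
            Vt[-1] *= -1; R = Vt.T @ U.T
        t = mu_n - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```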
Enhanced Planning of Interventions for Spinal Deformity Correction Using Virtual Modeling and Visualization Techniques
Abstract
Traditionally spinal correction procedures have been planned using 2D radiographs or image slices extracted from conventional computed tomography scans. Such images prove inadequate for accurately and precisely planning interventions, mainly due to the complex 3D anatomy of the spinal column, as well as the close proximity of nerve bundles and vascular structures that must be avoided during the procedure. To address these limitations and provide the surgeon with more representative information while taking full advantage of the 3D volumetric imaging data, we have developed a clinician-friendly application for spine surgery planning. This tool enables rapid oblique reformatting of each individual vertebral image, 3D rendering of each or multiple vertebrae, as well as interactive templating and placement of virtual implants. Preliminary studies have demonstrated improved accuracy and confidence of pre-operative measurements and implant localization and suggest that the proposed application may lead to increased procedure efficiency, safety, shorter intra-operative time, and lower costs.
Cristian A. Linte, Kurt E. Augustine, Paul M. Huddleston, Anthony A. Stans, David R. Holmes III, Richard A. Robb
Alignment of 4D Coronary CTA with Monoplane X-ray Angiography
Abstract
We propose a 3D+t/2D+t registration strategy to relate preoperative CTA to intraoperative X-ray angiography for improved image guidance during coronary interventions. We first derive 4D coronary models from CTA and then align these models both temporally and spatially to the intraoperative images. Temporal alignment is based on aligning ECG signals, whereas the spatial alignment uses a similarity metric based on centerline projection and fuzzy X-ray segmentation. In the spatial alignment step we use information from multiple time points simultaneously and take into account rigid respiratory motion. Evaluation shows improved performance compared to a 3D/2D approach with respect to both registration success and reproducibility.
Coert Metz, Michiel Schaap, Stefan Klein, Peter Rijnbeek, Lisan Neefjes, Nico Mollet, Carl Schultz, Patrick Serruys, Wiro Niessen, Theo van Walsum
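The ECG-based temporal alignment can be illustrated by assigning each X-ray frame a fractional cardiac phase from the surrounding R-peaks and pairing it with the nearest CTA reconstruction phase. A minimal sketch under that assumption (times in seconds; not the authors' implementation):

```python
import numpy as np

def cardiac_phase(frame_times, r_peaks):
    """Fractional cardiac phase in [0,1) for each X-ray frame, from ECG R-peak times."""
    phases = []
    for t in frame_times:
        i = np.searchsorted(r_peaks, t) - 1      # R-peak preceding the frame
        i = np.clip(i, 0, len(r_peaks) - 2)
        phases.append((t - r_peaks[i]) / (r_peaks[i + 1] - r_peaks[i]))
    return np.asarray(phases)

# Each frame is then paired with the 4D CTA model whose reconstruction phase
# (e.g. 0%, 10%, ..., 90% of the R-R interval) is closest to its computed phase.
```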
VR Training System for Endoscopic Surgery Robot
Development of a System Enabling 4D Analysis of Surgical Technique Training
Abstract
Our research group is currently developing an endoscopic surgical robot for digestive organs. In the current study, we sought to train surgeons to manipulate the system we are developing for clinical applications. To this end, we are developing a training system with the same interface as the real system, so that surgeons in training can practice basic manipulations and surgical techniques using organ models. To learn the basic manipulations of the system, we emphasized training the surgeon to operate the robotic arms, as this is the biggest difference from conventional surgical techniques. We set up several types of tasks for the trainee, so that a beginner trainee could get used to operating the robot arms of the system. We developed a surgical training method using a stomach model reconstructed from MRI data sets. In addition to basic surgical techniques such as grabbing, lifting and cutting open soft tissue with the robot arm, we enabled the training system to perform techniques necessary for the surgical system, such as delivering water to the surgical field in the event of bleeding and clipping incision sites. We added a function to record the performance of the trainee, enabling the system to analyze changes in the surgical field and robot arms in four dimensions during training.
Naoki Suzuki, Asaki Hattori, Satoshi Ieiri, Morimasa Tomikawa, Hajime Kenmotsu, Makoto Hashizume
Brain Parcellation Aids in Electrode Localization in Epileptic Patients
Abstract
Purpose: We aim to enhance the electrode localization with reference to spatial neuroanatomy in intracranial electroencephalogram (IEEG), which provides greater spatial resolution than traditional scalp electroencephalogram.
Methods: CT-MR rigid registration, MR non-rigid registration and prior-based segmentation are employed in an image processing pipeline to normalize patient CT, patient MR and an external labeled atlas to the same space.
Results: Despite possible abnormal cerebral structure and postoperative brain deformation, the proposed pipeline is able to automatically superimpose all the electrodes on the patient’s parcellated brain and visualize epileptogenic foci and IEEG events.
Conclusion: This work will greatly diminish epileptologists' manual work and the potential for human error, allowing for automated and accurate detection of the anatomical position of electrodes. It also demonstrates the feasibility of applying brain parcellation to epileptic brains.
Jue Wu, Kathryn Davis, Allan Azarion, Yuanjie Zheng, Hongzhi Wang, Brian Litt, James Gee
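The first stage of the pipeline, CT-to-MR rigid registration, can be sketched with SimpleITK using a mutual-information metric suited to multi-modal images. File names and parameter values below are placeholders, and the MR non-rigid registration and prior-based segmentation stages of the pipeline are not shown.

```python
# A minimal sketch of CT-to-MR rigid registration with SimpleITK;
# file names and optimizer settings are illustrative placeholders.
import SimpleITK as sitk

fixed  = sitk.ReadImage("patient_mr.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("patient_ct.nii.gz", sitk.sitkFloat32)

init = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)  # multi-modal metric
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                             minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInitialTransform(init, inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)

tx = reg.Execute(fixed, moving)
ct_in_mr = sitk.Resample(moving, fixed, tx, sitk.sitkLinear, 0.0)  # CT in MR space
```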
Backmatter
Metadata
Title
Augmented Environments for Computer-Assisted Interventions
Edited by
Cristian A. Linte
John T. Moore
Elvis C. S. Chen
David R. Holmes III
Copyright Year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-32630-1
Print ISBN
978-3-642-32629-5
DOI
https://doi.org/10.1007/978-3-642-32630-1
