
2015 | Book

Augmented Environments for Computer-Assisted Interventions

10th International Workshop, AE-CAI 2015, Held in Conjunction with MICCAI 2015, Munich, Germany, October 9, 2015. Proceedings


About this book

This book constitutes the refereed proceedings of the 10th International Workshop on Augmented Environments for Computer-Assisted Interventions, held in conjunction with MICCAI 2015 in Munich, Germany, in October 2015. The 15 revised full papers presented were carefully reviewed and selected from 21 submissions. The objective of the AE-CAI workshop was to attract scientific contributions that offer solutions to technical problems in the area of augmented and virtual environments for computer-assisted interventions, and to provide a venue for the dissemination of papers describing both complete systems and clinical applications.

Table of Contents

Frontmatter
Ultrasound-Guided Navigation System for Orthognathic Surgery
Abstract
Around 1–2 % of the US population has craniofacial deformities severe enough to be disabling and stigmatizing, and could benefit from orthognathic surgery. This surgery involves repositioning the jaws and is complicated by the unique features of each patient’s teeth, jaws, and joints. Approximately 20 % of patients who have undergone mandibular advancement surgery experience moderate relapse 1–5 years after surgery. We believe ultrasound is a promising imaging technology for orthognathic surgery guidance: it can help surgeons visualize the condyle/ramus segment in order to guide it into its pre-surgical, biologically stable position. This paper explores the role of 3D ultrasound imaging as real-time surgical guidance to improve treatment outcomes in orthognathic surgery. We describe the design of a 3D ultrasound volume reconstruction system and present results demonstrating its ability to capture the bony structures of the mandible, compared with the same structures reconstructed from pre-surgical Cone Beam Computed Tomography (CBCT).
Beatriz Paniagua, Dženan Zukic, Ricardo Ortiz, Stephen Aylward, Brent Golden, Tung Nguyen, Andinet Enquobahrie
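To make the volume reconstruction step concrete, here is a minimal sketch of pixel nearest-neighbor (PNN) compounding, one standard way to assemble a 3D volume from tracked 2D ultrasound frames. It is not taken from the paper; the pose convention, voxel spacing, and volume shape are assumptions for illustration.

```python
import numpy as np

def reconstruct_volume(frames, poses, spacing=0.5, shape=(128, 128, 128)):
    """Pixel nearest-neighbor (PNN) compounding: scatter every tracked 2D
    ultrasound pixel into the voxel it lands in, averaging overlaps.

    frames : list of 2D uint8 ultrasound images
    poses  : list of 4x4 image-to-world transforms (tracker pose combined
             with the probe calibration); volume origin assumed at world origin
    """
    acc = np.zeros(shape, dtype=np.float64)   # intensity accumulator
    cnt = np.zeros(shape, dtype=np.int64)     # hits per voxel

    for img, T in zip(frames, poses):
        h, w = img.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        # Homogeneous pixel coordinates; the image plane sits at z = 0.
        pts = np.stack([u.ravel(), v.ravel(),
                        np.zeros(u.size), np.ones(u.size)])
        world = T @ pts                        # pixels -> world space
        ijk = np.round(world[:3] / spacing).astype(int)

        inside = np.all((ijk >= 0) & (ijk < np.array(shape)[:, None]), axis=0)
        i, j, k = ijk[:, inside]
        np.add.at(acc, (i, j, k), img.ravel()[inside])
        np.add.at(cnt, (i, j, k), 1)

    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
```

PNN leaves holes wherever no pixel maps to a voxel, so practical pipelines usually add a hole-filling or interpolation pass afterwards.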
Augmented Reality Ultrasound Guidance for Central Line Procedures: Preliminary Results
Abstract
Central line procedures are interventions in which a needle is placed in the jugular vein in the patient’s neck, inferior to the carotid bifurcation. In these procedures, avoiding puncture of the carotid artery is of utmost importance, as such a puncture can cause severe neurological consequences or death. Often, these procedures are performed under ultrasound guidance: a linear ultrasound probe is held against the patient’s neck, allowing the interventionalist to visualize both the carotid artery and the jugular vein. However, due to the geometry of the interventional scene, the needle must be placed out-of-plane with the ultrasound, so the needle cannot be fully visualized, only a cross-section thereof. This lack of visualization can lead to difficulty gauging the correct penetration depth. This paper presents preliminary results on an augmented reality (AR) needle guidance system in which a tracked needle and the ultrasound fan are simultaneously visualized in their entirety. This AR guidance system is compared against traditional ultrasound-only guidance on a neck phantom. The use of the AR system significantly reduces the intervention time (average decrease of \(3.51\pm 1.44\) s) and the normalized path length (average decrease of \(150\pm 40\,\%\)), implying that such a system makes the procedure easier for the interventionalist (\(n=36\), \(p \le 0.05\)). This AR system has gained regulatory approval and is scheduled for clinical trials in humans.
Golafsoun Ameri, John S. H. Baxter, A. Jonathan McLeod, Terry M. Peters, Elvis C. S. Chen
Interaction-Based Registration Correction for Improved Augmented Reality Overlay in Neurosurgery
Abstract
In image-guided neurosurgery, the patient is registered to the reference of a tracking system and to preoperative data before sterile draping. Due to several factors extensively reported in the literature, the accuracy of this registration can deteriorate considerably after the initial phases of the surgery. In this paper, we present a simple method that allows the surgeon to correct the initial registration by tracing corresponding features in the real and virtual parts of an augmented reality view of the surgical field using a tracked pointer. Results of a preliminary study on a phantom yielded a target registration error of 4.06 ± 0.91 mm, which is comparable to results for initial landmark registration reported in the literature.
Simon Drouin, Marta Kersten-Oertel, D. Louis Collins
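The correction transform for a set of traced point pairs has a closed-form least-squares solution. The sketch below shows the standard Arun/Kabsch SVD fit; it is a plausible, assumed realization of the update step, not code from the paper.

```python
import numpy as np

def rigid_correction(real_pts, virtual_pts):
    """Least-squares rigid transform (Arun/Kabsch) mapping the virtual
    feature points onto the traced real ones; such a correction would
    typically be composed with the initial patient-to-image registration.

    real_pts, virtual_pts : (N, 3) arrays of corresponding points.
    Returns a 4x4 homogeneous transform.
    """
    cr, cv = real_pts.mean(0), virtual_pts.mean(0)
    H = (virtual_pts - cv).T @ (real_pts - cr)    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = cr - R @ cv
    return T
```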
‘On the Fly’ Reconstruction and Tracking System for Patient Setup in Radiation Therapy
Abstract
Range imaging devices have already shown their value for patient setup and motion management in external beam radiation therapy. However, current systems need several range imaging devices recording the patient’s surface from different viewpoints to achieve the required stability and accuracy. Since range imaging devices come as add-ons to regular linear accelerators, they have to share the limited space in the treatment room with other sensors while maintaining a line of sight to the patient’s isocenter. The objective of this work is to describe a new registration framework that enables stable tracking using only one range imager. We present the design of our solution to the problem of tracking a patient over long trajectories and large viewpoint changes, including surface acquisition, pose estimation, and simultaneous surface reconstruction. We evaluate the performance of the system using three clinically motivated experiments: (i) motion management, (ii) non-coplanar patient setup, and (iii) tracking over very large angles. We compare our framework to the state-of-the-art ICP algorithm and to a ground-truth stereoscopic X-ray system from BrainLab. Results demonstrate that we could track subtle movements of up to 2.5 cm with a mean target registration error of 0.44 mm and 0.02°. Subsequent non-coplanar field setup with 30° rotation and 2 cm motion yielded a target registration error of 2.88 mm, which is within clinical tolerances. Our sensor design demonstrates the potential of simultaneous reconstruction and tracking algorithms and their use for patient setup and motion management in radiation therapy.
Hagen Kaiser, Pascal Fallavollita, Nassir Navab
3D Catheter Tip Tracking in 2D X-Ray Image Sequences Using a Hidden Markov Model and 3D Rotational Angiography
Abstract
Integration of pre- or peri-operative images may improve image guidance in minimally invasive interventions. In abdominal catheterization procedures such as transcatheter arterial chemoembolization, 3D pre-/peri-operative images contain relevant information, such as the complete 3D vasculature, that is not directly available from 2D imaging. Accurate knowledge of the catheter tip position in 3D is currently not available, and after registration of 3D information to 2D images (angiographies), the registration is invalidated by breathing motion and thus requires continuous updates. We propose a hidden Markov model based method to track the 3D catheter position, using 2D fluoroscopic image sequences and a 3D vessel tree obtained from 3D Rotational Angiography. Such tracking facilitates display of the catheter in the 3D anatomy, and it enables the use of the 3D vessels as a roadmap in 2D imaging. The tracking is initialized with the first 2D image of the sequence. For the subsequent images, based on a state transition probability distribution and the registration observations, the catheter tip position is tracked in the 3D vessel tree using registrations to the 2D fluoroscopic images. The method is evaluated on simulated data and two clinical sequences. In the simulations, we obtain median tip position accuracies of up to 2.9 mm. On the clinical sequences, the distance between the catheter and the projected vessels after registration is below 1.9 mm.
Pierre Ambrosini, Ihor Smal, Daniel Ruijters, Wiro J. Niessen, Adriaan Moelker, Theo van Walsum
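The decoding half of such a hidden Markov model can be illustrated with a generic Viterbi pass over discretized positions on the vessel tree. The transition and observation terms below are placeholders standing in for the paper’s state transition distribution and 2D/3D registration likelihoods; this is a schematic, not the authors’ implementation, which tracks online, frame by frame.

```python
import numpy as np

def viterbi_track(log_trans, log_obs):
    """Generic Viterbi decoding: the most probable sequence of states
    (catheter-tip positions on a discretized vessel tree) given per-frame
    observations. A uniform prior over states is assumed at frame 0.

    log_trans : (S, S) log transition probabilities between tree positions
    log_obs   : (T, S) log observation likelihoods per frame and position
    """
    T, S = log_obs.shape
    delta = log_obs[0].copy()                  # best log-score per state
    back = np.zeros((T, S), dtype=int)         # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans    # indexed [prev, next]
        back[t] = scores.argmax(0)             # best predecessor per state
        delta = scores.max(0) + log_obs[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):              # trace the path backwards
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```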
Human-PnP: Ergonomic AR Interaction Paradigm for Manual Placement of Rigid Bodies
Abstract
The human perception of the three-dimensional world is influenced by the mutual integration of physiological and psychological depth cues, whose complexity is still an unresolved issue per se; even more so if we wish to mimic the perceptive efficiency of the human visual system within augmented reality (AR) based surgical navigation systems. In this work we present a novel and ergonomic AR interaction paradigm that aids the manual placement of a non-tracked rigid body in space by manually minimizing the reprojection residuals between a set of corresponding virtual and real feature points. Our paradigm draws its inspiration from the general problem of estimating camera pose from a set of n correspondences, i.e. the perspective-n-point (PnP) problem. In a recent work, positive results were achieved in terms of geometric error by applying the proposed strategy to the validation of a wearable AR system for aiding manual maxillary repositioning.
Fabrizio Cutolo, Giovanni Badiali, Vincenzo Ferrari
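The quantity the operator implicitly minimizes, i.e. the reprojection residual between matched virtual and real feature points, can be computed as in the sketch below. The OpenCV-based helper and its parameter names are illustrative assumptions, not the authors’ implementation.

```python
import numpy as np
import cv2

def reprojection_residuals(obj_pts, img_pts, rvec, tvec, K, dist=None):
    """Residuals between detected 2D features and the projection of their
    3D counterparts under the current (hand-adjusted) rigid-body pose. In a
    Human-PnP-style display these residuals would be drawn as error vectors
    that the operator drives toward zero by moving the object.

    obj_pts : (N, 3) model points, img_pts : (N, 2) detected image points,
    rvec/tvec : pose of the model in the camera frame (Rodrigues, translation),
    K : 3x3 camera intrinsics, dist : distortion coefficients or None.
    """
    proj, _ = cv2.projectPoints(obj_pts.astype(np.float64), rvec, tvec, K, dist)
    res = proj.reshape(-1, 2) - img_pts
    return res, float(np.sqrt((res ** 2).sum(1)).mean())  # mean pixel error
```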
Real-Time Markerless Respiratory Motion Management Using Thermal Sensor Data
Abstract
We present a novel approach to jointly generate multidimensional breathing signals and identify unintentional patient motion based on thermal torso imaging. The system can operate at least 30 % faster than the currently fastest optical surface imaging systems. It provides easily obtainable point-to-point correspondences on the patient’s surface, which makes the current use of computationally heavy non-rigid surface registration algorithms obsolete in our setup, as we show that 2D tracking is sufficient to solve our problem. In a volunteer study with five subjects, we show that we can use this information to automatically separate unintentional movement, due to pain or coughing, from breathing motion and to signal the user that it is necessary to re-register the patient. The method is validated on ground-truth annotated thermal videos and against a clinical IR respiratory motion tracking system.
Hagen Kaiser, Pascal Fallavollita, Nassir Navab
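Because the paper argues that 2D tracking is sufficient, a deliberately crude sketch of the idea is to track surface points with pyramidal Lucas-Kanade optical flow and read a breathing surrogate off their mean vertical displacement. The incoherent-motion flag below is a simplistic assumed criterion, not the authors’ classifier.

```python
import numpy as np
import cv2

def breathing_signal(frames, max_pts=100, scatter_factor=3.0):
    """Track salient points across 8-bit grayscale thermal frames with
    pyramidal Lucas-Kanade flow; the mean vertical displacement serves as
    a 1D breathing surrogate. Frames whose per-point displacements scatter
    widely around that mean are flagged as non-breathing motion (assumed
    heuristic: coherent motion suggests breathing, incoherent does not).
    """
    prev = frames[0]
    pts = cv2.goodFeaturesToTrack(prev, max_pts, 0.01, 10)
    signal, flags = [0.0], [False]
    for cur in frames[1:]:
        nxt, ok, _ = cv2.calcOpticalFlowPyrLK(prev, cur, pts, None)
        good = ok.ravel() == 1
        dy = (nxt - pts)[good][:, 0, 1]        # vertical flow components
        signal.append(float(dy.mean()))
        flags.append(bool(dy.std() > scatter_factor * (abs(dy.mean()) + 1e-3)))
        prev, pts = cur, nxt[good]
    return np.array(signal), flags
```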
An Iterative Closest Point Framework for Ultrasound Calibration
Abstract
We introduce an Iterative Closest Point framework for ultrasound calibration based on a hollow-line phantom. The main novelty of our approach is the application of a hollow-tube fiducial made from hyperechoic material, which allows for highly accurate fiducial localization via both manual and automatic segmentation. By reducing fiducial localization error, this framework is able to achieve sub-millimeter target registration error. The calibration phantom introduced can be manufactured inexpensively and precisely. Using a Monte Carlo approach, our calibration framework achieved 0.5 mm mean target registration error, with a standard deviation of 0.24 mm, using 12 or more tracked ultrasound images. This suggests that our framework is approaching the accuracy limit imposed by the tracking device used.
Elvis C. S. Chen, A. Jonathan McLeod, John S. H. Baxter, Terry M. Peters
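A minimal point-to-line ICP loop, assuming fiducial points already mapped into tracker space and a known tube axis, might look like the sketch below (the closed-form rigid fit is the usual Arun/Kabsch step). Note that a single line constrains neither translation along its axis nor rotation about it; the actual framework avoids such degeneracies by imaging the tube over many probe poses.

```python
import numpy as np

def point_to_line_icp(src, a, d, iters=20):
    """Point-to-line ICP: rigidly align 3D points `src` (segmented
    hollow-tube fiducials mapped through the current calibration) to the
    phantom line a + t*d, with d a unit direction vector. Each iteration
    pairs every point with its closest point on the line, then solves the
    least-squares rigid update in closed form via SVD.
    """
    T = np.eye(4)
    pts = src.copy()
    for _ in range(iters):
        t = (pts - a) @ d
        tgt = a + np.outer(t, d)              # closest points on the line
        cs, ct = pts.mean(0), tgt.mean(0)
        H = (pts - cs).T @ (tgt - ct)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        dt = ct - R @ cs
        pts = pts @ R.T + dt                  # apply the incremental update
        Ti = np.eye(4)
        Ti[:3, :3] = R
        Ti[:3, 3] = dt
        T = Ti @ T                            # accumulate the transform
    return T, pts
```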
Development of 4D Human Body Model that Enables Deformation of Skin, Organ and Blood Vessel According to Dynamic Change
Abstract
Any part of a living human body undergoes constant dynamic change along both the spatial and temporal axes. We therefore believe that analyzing and understanding the four-dimensional dynamics of the human body will contribute new inputs to medical diagnosis and to physicians’ judgment for effective treatment. To achieve this objective, we are developing a quantitative 4D human body model with inner structures such as the skeleton and major organs. We aim to grasp, from various viewpoints, the spatiotemporal (four-dimensional) changes of the anatomical structures of a living human. The model in this research is constructed from a subject measured by MRI. The aim is to have the model’s inner structures change according to the subject’s full-body movement data. In addition, we aim to provide a function in which the shape of the skin surface changes in synchrony with those data.
In this research, we examine the possibilities of clinical application of our 4D model, which includes not only the skeletal, major-organ, and blood-vessel systems but also the muscular system as inner structures.
Naoki Suzuki, Asaki Hattori, Makoto Hashizume
Augmented Reality for Specific Neurovascular Surgical Tasks
Abstract
Augmented reality has the potential to aid surgeons with particular surgical tasks in image-guided surgery. In augmented reality (AR) visualization for neurosurgery, the live view of the surgical scene is merged with preoperative patient data, aiding the surgeon in mapping patient images from the image-guidance system onto the real patient. Furthermore, AR visualization allows the surgeon to see beyond the visible surface of the head or brain to the anatomy that is relevant at different stages of surgery. In this paper, we describe the particular surgical tasks that have benefited from AR visualization according to the neurosurgeons who have used our system. These tasks include: tailoring a craniotomy, localizing the anatomy of interest, planning a resection corridor, and determining a surgical strategy. We present each of these surgical tasks and provide examples of how AR was used in the operating room.
Marta Kersten-Oertel, Ian J. Gerard, Simon Drouin, Kelvin Mok, Denis Sirhan, David S. Sinclair, D. Louis Collins
Layer Separation for Vessel Enhancement in Interventional X-ray Angiograms Using Morphological Filtering and Robust PCA
Abstract
Automatic vessel extraction from X-ray angiograms (XA) for percutaneous coronary interventions is often hampered by low contrast and by the presence of background structures, e.g. the diaphragm, guiding catheters, and stitches. In this paper, we present a novel layer separation technique for vessel enhancement in XA to address this problem. The method uses morphological filtering and Robust PCA to separate interventional XA images into three layers: a large-scale breathing-structure layer, a quasi-static background layer, and a layer containing the vessel structures, which could potentially improve the quality of vessel extraction from XA. The method is evaluated on several clinical XA sequences. The results show that the proposed method significantly increases the visibility of vessels in XA and outperforms other background-removal methods.
Hua Ma, Gerardo Dibildox, Jyotirmoy Banerjee, Wiro Niessen, Carl Schultz, Evelyn Regar, Theo van Walsum
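The Robust PCA step can be sketched with a standard inexact-ALM principal component pursuit, applied to a matrix whose columns are vectorized (and, per the paper, morphologically pre-filtered) frames. The parameter choices below follow common defaults and are assumptions, not the paper’s settings.

```python
import numpy as np

def rpca(D, lam=None, mu=None, iters=100, tol=1e-7):
    """Principal Component Pursuit via inexact augmented Lagrange
    multipliers: split data matrix D (one vectorized X-ray frame per
    column) into a low-rank background L and a sparse layer S.
    """
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D)                       # Frobenius norm
    spec = np.linalg.norm(D, 2)                      # spectral norm
    mu = mu if mu is not None else 1.25 / spec
    Y = D / max(spec, np.abs(D).max() / lam)         # dual variable init
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0)
    for _ in range(iters):
        # Low-rank update: singular value thresholding.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt
        # Sparse update: entrywise soft-thresholding.
        S = shrink(D - L + Y / mu, lam / mu)
        Z = D - L - S
        Y = Y + mu * Z
        if np.linalg.norm(Z) / norm_D < tol:
            break
    return L, S
```

Reshaping each column of S back to image dimensions yields the candidate vessel layer; L holds the quasi-static background.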
Automatic Guide-Wire Detection for Neurointerventions Using Low-Rank Sparse Matrix Decomposition and Denoising
Abstract
In neuro-interventional surgeries, physicians rely on fluoroscopic video sequences to guide tools through the vascular system to the region of interest. Due to the low signal-to-noise ratio of low-dose images and the presence of many line-like structures in the brain, the guide-wire and other tools are difficult to see. In this work we propose an effective method to detect guide-wires in fluoroscopic videos, aiming to enhance the visualization for better intervention guidance. In contrast to prior work, we rely neither on a specific model of the catheter (e.g. shape, intensity, etc.) nor on prior statistical learning. Instead, we base our approach on motion cues, making use of recent advances in low-rank and sparse matrix decomposition, which we then combine with denoising. An evaluation on 651 X-ray images from 5 patients shows that our guide-wire tip detection is precise and within clinical tolerance for guide-wire inter-frame motions as large as 6 mm.
Markus Zweng, Pascal Fallavollita, Stefanie Demirci, Markus Kowarschik, Nassir Navab, Diana Mateus
3D Surgical Overlay with Markerless Image Registration Using a Single Camera
Abstract
Minimally invasive surgery can benefit from surgical visualization, which is achieved by either virtual reality or augmented reality. We previously proposed an integrated 3D image overlay based surgical visualization solution including 3D image rendering, distortion correction, and spatial projection. For correct spatial projection of the 3D image, image registration is necessary. In this paper we present a 3D image overlay based augmented reality surgical navigation system with markerless image registration using a single camera. The innovation compared with our previous work lies in the single-camera-based image registration method for 3D image overlay. The 3D mesh model of the patient’s teeth, created from preoperative CT data, is matched with the intraoperative image captured by a single optical camera to determine the six-degree-of-freedom pose of the model with respect to the camera. The obtained pose is used to superimpose the 3D image of critical hidden tissues directly on the patient’s body via a translucent mirror for surgical visualization. The image registration is performed automatically within approximately 0.2 s, which enables real-time updates to accommodate patient movement. Experimental results show that the registration accuracy is about 1 mm and confirm the feasibility of the 3D surgical overlay system.
Junchen Wang, Hideyuki Suenaga, Liangjing Yang, Hongen Liao, Takehiro Ando, Etsuko Kobayashi, Ichiro Sakuma
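Once 2D-3D correspondences between the teeth model and the camera image are established, the six-degree-of-freedom pose can be recovered with a robust PnP solve. The OpenCV-based sketch below (solvePnPRefineLM requires OpenCV >= 4.1) abstracts away the paper’s actual matching step and is illustrative only.

```python
import numpy as np
import cv2

def estimate_pose(model_pts, image_pts, K, dist=None):
    """6-DoF pose of the preoperative teeth model in the camera frame from
    matched 3D model points and their 2D detections. RANSAC rejects
    residual mismatches; Levenberg-Marquardt then minimizes the
    reprojection error over the inliers.
    """
    obj = np.ascontiguousarray(model_pts, dtype=np.float64)
    img = np.ascontiguousarray(image_pts, dtype=np.float64)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj, img, K, dist, reprojectionError=2.0)
    if not ok:
        raise RuntimeError("PnP pose estimation failed")
    idx = inliers.ravel()
    rvec, tvec = cv2.solvePnPRefineLM(obj[idx], img[idx], K, dist, rvec, tvec)
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = tvec.ravel()
    return T   # model-to-camera transform used to drive the overlay
```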
Simultaneous Estimation of Feature Correspondence and Stereo Object Pose with Application to Ultrasound Augmented Robotic Laparoscopy
Abstract
In-situ visualization of ultrasound in robot-assisted surgery requires robust, real-time computation of the pose of the intra-corporeal ultrasound (US) probe with respect to the stereo-laparoscopic camera. Image based, intrinsic methods of computing this relative pose need to overcome challenges due to irregular illumination, partial feature occlusion and clutter that are unavoidable in practical robotic-laparoscopy. In this paper, we extend a state-of-the-art simultaneous monocular pose and correspondence estimation framework to a stereo imaging model. The method is robust to partial feature occlusion and clutter, and does not require explicit feature matching. Through exhaustive experiments, we demonstrate that in terms of accuracy, the proposed method outperforms the conventional stereo pose estimation approach and the state-of-the-art monocular camera-based method. Both quantitative and qualitative results are presented.
Uditha L. Jayarathne, Xiongbiao Luo, Elvis C. S. Chen, Terry M. Peters
Patient Adapted Augmented Reality System for Real-Time Echocardiographic Applications
Abstract
When compared to other imaging modalities, the acquisition and interpretation of ultrasound data is challenging for new users. The main aim of this work is to investigate whether augmented reality can provide a patient-specific correspondence between the ultrasound image and a virtual anatomic model of the heart, and thus simplify the live echocardiographic scanning process.
The proposed augmented view contains the image of the scene (acquired with a tablet’s camera), a generic heart model, and the ultrasound image plane. The position and orientation of the ultrasound probe and the patient are tracked with the Vuforia framework, using fiducial markers that are visible in the tablet’s camera image. A customized renderer for augmentation purposes was implemented in OpenGL ES. Data streaming from a high-end ultrasound scanner to the tablet device was implemented, and the position of the heart model is continuously updated to match the ultrasound data by tracking and matching a set of anatomic landmarks. The prototype was tested in the echo laboratory, and real-time performance was achievable, with no significant lag between the scanner image and the one presented on the tablet. Furthermore, the suitability of Vuforia for fiducial tracking was evaluated and deemed sufficiently accurate for this application. The presented prototype was tested by an experienced echocardiographer and was considered beneficial both for teaching purposes and as a help tool for inexperienced ultrasound users.
Gabriel Kiss, Cameron Lowell Palmer, Hans Torp
Backmatter
Metadata
Title
Augmented Environments for Computer-Assisted Interventions
Edited by
Cristian A. Linte
Ziv Yaniv
Pascal Fallavollita
Copyright Year
2015
Electronic ISBN
978-3-319-24601-7
Print ISBN
978-3-319-24600-0
DOI
https://doi.org/10.1007/978-3-319-24601-7