
2016 | Book

Medical Imaging and Augmented Reality

7th International Conference, MIAR 2016, Bern, Switzerland, August 24-26, 2016, Proceedings

Editors: Guoyan Zheng, Hongen Liao, Pierre Jannin, Philippe Cattin, Su-Lin Lee

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

The 7th International Conference on Medical Imaging and Augmented Reality, MIAR 2016, was held in Bern, Switzerland, in August 2016.

The aim of MIAR is to bring together researchers in computer vision, graphics, robotics, and medical imaging to present state-of-the-art developments in this ever-growing research area, covering topics such as:

Medical Image Formation, Analysis and Interpretation

Augmented Reality, Visualization and Simulation

Computer Assisted Intervention and Robotics, Surgical Planning

Systematic Extra- and Intra-corporeal Imaging Modalities

General Biological and Neuroscience Image Computing

Table of Contents

Frontmatter
Erratum to: Medical Imaging and Augmented Reality
Guoyan Zheng, Hongen Liao, Pierre Jannin, Philippe Cattin, Su-Lin Lee

Computer Assisted Interventions

Frontmatter
A Novel Computer-Aided Surgical Simulation (CASS) System to Streamline Orthognathic Surgical Planning

Orthognathic surgery is a surgical procedure to correct jaw deformities. It requires extensive presurgical planning. We developed a novel computer-aided surgical simulation (CASS) system, the AnatomicAligner, that allows doctors to plan the entire orthognathic surgical procedure on the computer following our streamlined clinical protocol. The computerized plan can be transferred to the patient at the time of surgery using digitally designed surgical splints. The system includes six modules: image segmentation and three-dimensional (3D) model reconstruction; registration and reorientation of the models to neutral head posture (NHP) space; 3D cephalometric analysis; virtual osteotomy; surgical simulation; and surgical splint design. The system has been validated using five sets of patient datasets. The AnatomicAligner system will soon be freely available to the broader clinical and research communities.

Peng Yuan, Dennis Chun-Yu Ho, Chien-Ming Chang, Jianfu Li, Huaming Mai, Daeseung Kim, Shunyao Shen, Xiaoyan Zhang, Xiaobo Zhou, Zixiang Xiong, Jaime Gateno, James J. Xia
Computer Assisted Planning, Simulation and Navigation of Periacetabular Osteotomy

Periacetabular osteotomy (PAO) is an effective approach for surgical treatment of hip dysplasia in young adults. However, achieving an optimal acetabular reorientation during PAO is the most critical and challenging step. Routinely, the correct positioning of the acetabular fragment largely depends on the surgeon's experience and is done under fluoroscopy to provide the surgeon with continuous live X-ray guidance. To address these challenges, we developed a computer assisted system. Our system starts with a fully automatic detection of the acetabular rim, which allows for quantifying the acetabular 3D morphology with parameters such as acetabular orientation, femoral head Extrusion Index (EI), Lateral Center Edge (LCE) angle, and total and regional femoral head coverage (FHC) ratio for computer assisted diagnosis, planning and simulation of PAO. Intra-operative navigation is used to implement the pre-operative plan. Two validation studies were conducted on four sawbone models to evaluate the efficacy of the system intra-operatively and post-operatively. By comparing the pre-operatively planned situation with the intra-operatively achieved situation, average errors of $$0.6^\circ \pm 0.3^\circ $$, $$0.3^\circ \pm 0.2^\circ $$ and $$1.1^\circ \pm 1.1^\circ $$ were found respectively along three motion directions (Flexion/Extension, Abduction/Adduction and External Rotation/Internal Rotation). In addition, by comparing the pre-operatively planned situation with the post-operative results, average errors of $$0.9^\circ \pm 0.3^\circ $$ and $$0.9^\circ \pm 0.7^\circ $$ were found for inclination and anteversion, respectively.

Li Liu, Timo M. Ecker, Klaus-A. Siebenrock, Guoyan Zheng
FEM Simulation with Realistic Sliding Effect to Improve Facial-Soft-Tissue-Change Prediction Accuracy for Orthognathic Surgery

It is clinically important to accurately predict facial soft tissue changes following bone movements in orthognathic surgical planning. However, current simulation methods are still problematic, especially in clinically critical regions, e.g., the nose, lips and chin. In this study, a finite element method (FEM) simulation model with realistic tissue sliding effects was developed to increase the prediction accuracy in critical regions. First, the facial soft tissue change following bone movements was simulated using FEM with a sliding effect under a nodal force constraint. Subsequently, a sliding effect with a nodal displacement constraint was implemented by reassigning the bone-soft-tissue mapping and boundary conditions for realistic sliding movement simulation. Our method has been quantitatively evaluated using 30 patient datasets. The FEM simulation method with realistic sliding effects showed significant accuracy improvement in the whole face and the critical areas (i.e., lips, nose and chin) in comparison with the traditional FEM method.

Daeseung Kim, Huaming Mai, Chien-Ming Chang, Dennis Chun-Yu Ho, Xiaoyan Zhang, Shunyao Shen, Peng Yuan, Guangming Zhang, Jaime Gateno, Xiaobo Zhou, Michael A.K. Liebschner, James J. Xia
CathNets: Detection and Single-View Depth Prediction of Catheter Electrodes

The recent success of convolutional neural networks in many computer vision tasks implies that their application could also be beneficial for vision tasks in cardiac electrophysiology procedures, which are commonly carried out under guidance of C-arm fluoroscopy. Many efforts for catheter detection and reconstruction have been made, but robust detection of catheters in X-ray images in real time in particular is still not entirely solved. We propose two novel methods for (i) fully automatic electrophysiology catheter electrode detection in interventional X-ray images and (ii) single-view depth estimation of such electrodes based on convolutional neural networks. For (i), experiments on 24 different fluoroscopy sequences (1650 X-ray images) yielded a detection rate > 99 %. Our experiments on (ii) depth prediction using 20 images with depth information available revealed that we are able to estimate the depth of catheter tips in the lateral view with a remarkable mean error of $$6.08\,\pm \,4.66$$ mm.

Christoph Baur, Shadi Albarqouni, Stefanie Demirci, Nassir Navab, Pascal Fallavollita
Inference of Tissue Haemoglobin Concentration from Stereo RGB

Multispectral imaging (MSI) can provide information about tissue oxygenation, perfusion and potentially function during surgery. In this paper we present a novel, near real-time technique for intrinsic measurements of total haemoglobin (THb) and blood oxygenation (SO$$_2$$) in tissue using only RGB images from a stereo laparoscope. The high degree of spectral overlap between channels makes inference of haemoglobin concentration challenging, non-linear and under-constrained. We decompose the problem into two constrained linear sub-problems and show that with Tikhonov regularisation the estimation significantly improves, giving robust estimation of the THb. We demonstrate that, by using co-registered stereo image data from two cameras, it is possible to obtain robust SO$$_2$$ estimation as well. Our method is closed form, providing computational efficiency even with multiple cameras. The method we present requires only spectral response calibration of each camera, without modification of existing laparoscopic imaging hardware. We validate our technique on synthetic data from Monte Carlo simulation and further, in vivo, on a multispectral porcine data set.
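The decomposition into constrained linear sub-problems solved with Tikhonov regularisation can be illustrated with a minimal sketch. All values below are synthetic stand-ins, not calibrated camera responses:

```python
import numpy as np

# Hypothetical forward model: rows = RGB channels, columns = chromophores.
# Values are illustrative, not real spectral response calibrations.
A = np.array([[0.9, 0.2],
              [0.5, 0.6],
              [0.1, 0.8]])
x_true = np.array([0.7, 0.3])                  # e.g. [HbO2, Hb] fractions
rng = np.random.default_rng(0)
y = A @ x_true + 0.01 * rng.normal(size=3)     # noisy RGB measurement

# Tikhonov-regularised least squares:
#   min_x ||A x - y||^2 + lam^2 ||x||^2
# has the closed-form solution below, which stabilises the inversion
# when channels overlap strongly (ill-conditioned A).
lam = 0.1
x_hat = np.linalg.solve(A.T @ A + lam**2 * np.eye(A.shape[1]), A.T @ y)
```

With strongly overlapping channels, the unregularised normal equations amplify noise; the `lam` term trades a small bias for a much lower variance, which is the effect the paper exploits.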

Geoffrey Jones, Neil T. Clancy, Simon Arridge, Daniel S. Elson, Danail Stoyanov
Radiation-Free 3D Navigation and Vascular Reconstruction for Aortic Stent Graft Deployment

We propose a radiation-free three-dimensional (3D) navigation and vasculature reconstruction method to reduce aortic stent graft deployment time and repeated exposure to high doses of X-ray radiation. In the proposed system, 2D intraoperative ultrasound (US) images are fused with a 3D preoperative magnetic resonance (MR) image to provide intuitive 3D navigation for deployment of the stent graft. In addition, calibrated 2D US images track the catheter's tip, and an intraoperative 3D aortic model is constructed by combining segmented intravascular ultrasound (IVUS) images. The constructed 3D aortic models can be used to quantitatively assess morphological characteristics of the aorta to assist deployment of the stent graft. The system was validated using in vitro cardiac and aorta phantoms. The mean target registration error of the 2D US-3D MR registration was 2.70 mm and the average tracking error of the IVUS catheter's tip position was 1.12 mm. Accurate contour detection results in IVUS images were obtained. Meanwhile, Hausdorff distances were 0.78 mm and 0.59 mm for outer and inner contours, respectively; Dice coefficients were 90.21 % and 89.96 %. Experimental results demonstrate that our radiation-free navigation and 3D vasculature reconstruction method is promising for stent graft deployment in in vivo studies.
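The contour-quality metrics reported above (Hausdorff distance, Dice coefficient) are standard; a small numpy sketch, using toy masks rather than IVUS segmentations, shows how they are typically computed:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap of two boolean masks: 2|A∩B| / (|A|+|B|)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two (N, 2) point sets."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy example: two overlapping 4x4 squares on a 10x10 grid
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), dtype=bool); b[3:7, 3:7] = True
d_coeff = dice(a, b)                              # 2*9 / (16+16)
h_dist = hausdorff(np.argwhere(a).astype(float),
                   np.argwhere(b).astype(float))
```

For real contours the point sets would come from the segmented IVUS boundaries; the brute-force pairwise distance matrix is fine at contour sizes but would be replaced by a distance transform for dense 3D masks.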

Fang Chen, Jia Liu, Hongen Liao
Electromagnetic Guided In-Situ Laser Fenestration of Endovascular Stent-Graft: Endovascular Tools Sensorization Strategy and Preliminary Laser Testing

In-situ endograft fenestration, a possible surgical option for the minimally invasive treatment of aneurysms with unfavorable anatomy, is today limited by difficulties in targeting the fenestration site and by the lack of a safe method to perforate the graft. In this work we suggest the use of a 3D electromagnetic (EM) navigator, to accurately guide the endovascular instruments to the target, and a laser system, to selectively perforate the graft. More specifically, we propose to integrate a laser fiber into a sensorized guidewire and we describe an EM sensorization strategy to accurately guide the laser tool. Finally, we preliminarily explore different laser irradiation conditions to achieve a successful endograft fenestration and we verify that the heating generated by the laser does not damage the EM coils.

Sara Condino, Roberta Piazza, Filippo Micheletti, Francesca Rossi, Roberto Pini, Raffaella Berchiolli, Aldo Alberti, Vincenzo Ferrari, Mauro Ferrari
A Cost-Effective Navigation System for Peri-acetabular Osteotomy Surgery

Purpose. To develop and evaluate a low-cost surgical navigation solution for periacetabular osteotomy (PAO) surgery.

Methods. A commercially available low-cost miniature computer is used together with a camera board (Raspberry Pi 2 Model B, Camera Module PiNoir) to track planar markers (ArUco markers). The overall setup of the tracking unit is small enough to be attached directly to the patient's pelvis. The patient's pelvis is registered by estimating the pose of a planar marker which is attached to an anterior pelvic plane (APP) digitization device. Next, one marker is attached to the acetabular fragment and the initial orientation of the fragment is recorded. The estimated orientation of the fragment is transmitted to the host computer for visualization.

Results. A plastic bone study (eight hip joints) was performed to validate the proposed system. The comparison with a previously developed optical tracking-based system showed no statistically significant difference between measurements obtained from the two systems. In all eight hip joints the mean absolute difference was below 2$$^\circ $$ for both anteversion and inclination, and a very strong correlation was observed.

Conclusions. We show that with our proof-of-principle system, we are able to compute the acetabular orientation accurately.

Silvio Pflugi, Rakesh Vasireddy, Li Liu, Timo M. Ecker, Till Lerch, Klaus Siebenrock, Guoyan Zheng
Motion-Based Technical Skills Assessment in Transoesophageal Echocardiography

This paper presents a novel approach for evaluating technical skills in Transoesophageal Echocardiography (TEE). Our core assumption is that operational competency can be objectively expressed by specific motion-based measures. TEE experiments were carried out with an augmented reality simulation platform involving both novice trainees and expert radiologists. Probe motion data were collected and used to formulate various kinematic parameters. Subsequent analysis showed that statistically significant differences exist between the two groups for the majority of the metrics investigated. Experts exhibited lower completion times and higher average velocity and acceleration, attributed to their refined ability for efficient and economical probe manipulation. In addition, their navigation pattern is characterised by increased smoothness and fluidity, evaluated through the measures of dimensionless jerk and spectral arc length. Utilised as inputs to well-known clustering algorithms, the derived metrics are capable of discriminating experience levels with high accuracy (>84 %).
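Dimensionless-jerk smoothness measures like those used above can be sketched in a few lines. This is one common formulation (conventions vary across the motor-control literature), applied to synthetic velocity profiles rather than real probe motion:

```python
import numpy as np

def log_dimensionless_jerk(vel, dt):
    """Log dimensionless jerk (LDLJ) of a 1-D velocity profile.
    Less negative = smoother. One common formulation; exact
    conventions differ between papers."""
    jerk = np.gradient(np.gradient(vel, dt), dt)   # d^2(velocity)/dt^2
    duration = dt * (len(vel) - 1)
    v_peak = np.abs(vel).max()
    dlj = (duration**3 / v_peak**2) * np.sum(jerk**2) * dt
    return -np.log(dlj)

# Synthetic probe-speed profiles (hypothetical, not TEE data): a smooth
# bell shape vs. the same profile with superimposed tremor.
t = np.linspace(0.0, 1.0, 200)
dt = t[1] - t[0]
smooth = np.sin(np.pi * t)
shaky = smooth + 0.05 * np.sin(40 * np.pi * t)
```

The tremor adds high-frequency content whose squared jerk dominates the integral, so the shaky profile scores markedly lower, mirroring how the metric separates novices from experts.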

Evangelos B. Mazomenos, Francisco Vasconcelos, Jeremy Smelt, Henry Prescott, Marjan Jahangiri, Bruce Martin, Andrew Smith, Susan Wright, Danail Stoyanov
Advanced Design System for Infantile Cranium Shape Model Growth Prediction

We present a longitudinal statistical shape/volumetric model of the infant cranium and use it as prior information to support the design of cranial shape correction helmets. In addition, a logical approach based on natural brain growth is used to derive the necessary formulation to calculate the required treatment duration. The respective morphological analysis and statistical model will be integrated into the current haptic-based design pipeline of a company to produce an effective computer-assisted design system.

Kamal Shahim, Mauricio Reyes, Ruben Simon, Philipp Jürgens, Christoph Blecher

Augmented Reality and Virtual Reality

Frontmatter
Interactive Mixed Reality for Muscle Structure and Function Learning

Understanding the structure and function of the muscular system is important in many different medical scenarios, such as student and patient education, muscle training, and rehabilitation of patients with kinetic problems. The structure and function of muscles are difficult to learn, as muscle movement is imperceptible with traditional methods. In this paper, an interactive mixed reality system is proposed for facilitating the learning of the muscles of the upper extremities. The proposed system consists of two main components: an AR view, overlaying the virtual model of the arm on top of the video stream, and a VR view, providing a more detailed understanding of the muscles. The mixed reality view helps the user to mentally map the behaviour of muscles to his/her own body during different movements. A user study including twenty students was performed using a questionnaire framework. The results indicate that our system is useful for learning the structure and function of muscles and can be a valuable supplement to established muscle learning paradigms.

Meng Ma, Philipp Jutzi, Felix Bork, Ina Seelbach, Anna Maria von der Heide, Nassir Navab, Pascal Fallavollita
Visualization Techniques for Augmented Reality in Endoscopic Surgery

Augmented reality (AR) is widely used in minimally invasive surgery (MIS), since it enhances the surgeon's perception of spatial relationships by overlaying invisible structures on the endoscopic images. Depth perception is the key problem in AR visualization. In this paper, we present a video-based AR system for aiding MIS removal of a tumor inside a kidney. We explore several different AR visualization techniques: transparent overlay, virtual window, random-dot mask and the ghosting method. We also introduce a depth-aware ghosting method to further enhance the depth perception of virtual structures with complex spatial geometry. Both simulated and in vivo experiments were carried out to evaluate these AR visualization techniques. The experimental results demonstrate the feasibility of our AR system and visualization techniques. Finally, we summarize the characteristics of these AR visualization techniques.

Rong Wang, Zheng Geng, Zhaoxing Zhang, Renjing Pei
Augmented Reality Imaging for Robot-Assisted Partial Nephrectomy Surgery

Laparoscopic partial nephrectomy (LPN) is a standard of care for small kidney cancer tumours. A successful LPN is the complete excision of the kidney tumour while preserving as much of the non-cancerous kidney as possible. This is a challenging procedure because the surgeon has a limited field of view and reduced or no haptic feedback while performing delicate excisions as fast as possible. This work introduces and evaluates a novel surgical navigation marker called the Dynamic Augmented Reality Tracker (DART). The DART is used in a novel intra-operative augmented reality ultrasound navigation system (ARUNS) for robot-assisted minimally invasive surgery to overcome some of these challenges. The DART is inserted into a kidney and the DART and pick-up laparoscopic ultrasound transducer are tracked during an intra-operative freehand ultrasound scan of the tumour. After ultrasound, the system continues to track the DART and display the segmented 3D tumour and location of surgical instruments relative to the tumour throughout the surgery. The ultrasound point reconstruction root mean squared error (RMSE) was 0.9 mm, the RMSE of tracking the da Vinci surgical instruments was 1.5 mm and the total system RMSE, which includes ultrasound imaging and da Vinci kinematic instrument tracking, was 5.1 mm. The system was evaluated by an expert surgeon who used the DART and ARUNS to excise a tumour from a kidney phantom. This work serves as a preliminary evaluation in anticipation of further refinement and validation in vivo.

Philip Edgcumbe, Rohit Singla, Philip Pratt, Caitlin Schneider, Christopher Nguan, Robert Rohling
Mobile Laserprojection in Computer Assisted Neurosurgery

Neurosurgery is one of the key areas for computer-assisted navigation. By providing an intuitive visualization and feedback to the operator, augmented reality has the potential to provide improved clinical workflows and patient outcomes. In this work, a mobile augmented reality system for surgical neuronavigation is presented, allowing for a direct projection of anatomical information on both planar and non-planar surfaces. The system consists of a tracked mobile laser projector, integrated within a navigation system, providing registered patient surface and image data. Using this setup, image data can be corrected for distortions and displayed accurately on the body surface in relation to the patient. We present the overall system with an efficient distortion correction, and evaluate the accuracy with a series of experiments for point and surface projection errors, as well as the general clinical applicability. The system is demonstrated for the projection of craniotomy incision lines and general image data directly onto the surface of a human skull model, where an average error of 1.04 mm shows a sufficient accuracy with respect to the clinical application.

Christoph Hennersperger, Johannes Manus, Nassir Navab
Towards Augmented Reality Guided Craniotomy Planning in Tumour Resections

Augmented reality has been proposed as a solution to overcome some of the current shortcomings of image-guided neurosurgery. In particular, it has been used to merge patient images, surgical plans, and the surgical field of view into a comprehensive visualization. In this paper we explore the use of augmented reality for planning craniotomies in image-guided neurosurgery procedures for tumour resections. Our augmented reality image-guided neurosurgery system was brought into the operating room for 8 cases where the surgeon used augmented reality prior to tumour resection. We describe our initial results, which suggest that augmented reality can play an important role in tailoring the size and shape of the craniotomy and in evaluating intra-operative surgical strategies. With continued development and validation, augmented reality guidance has the potential to improve the minimal invasiveness of image-guided neurosurgery through improved intraoperative surgical planning.

Marta Kersten-Oertel, Ian J. Gerard, Simon Drouin, Kevin Petrecca, Jeffery A. Hall, D. Louis Collins
Augmenting Scintigraphy Images with Pinhole Aligned Endoscopic Cameras: A Feasibility Study

Morbidity of cancer is still high, and this is especially true for squamous cell carcinoma in the oral cavity and oropharynx, which is one of the most widespread cancers worldwide. To avoid spreading of the tumor, the lymphatic tissue of the neck is often removed together with the tumor. Such neck dissections are inherently dangerous for the patient and, as studies have shown, only required in roughly 30 % of patients. To prevent overtreatment, sentinel lymph node biopsy is used, in which the first lymph node after the tumor is probed for cancerous cells. The lymphatic tissue is then only completely removed when tumor cells are found. This sentinel node is localized by detecting a radioactive tracer that is injected near the tumor; its uptake is then measured and observed. The state-of-the-art support for the specialist is a 1-dimensional audio-based gamma detection unit, which makes it challenging to detect and excise the true sentinel lymph node for an effective histologic examination and therefore correct staging.

This feasibility study presents the working principles and preliminary results of a scintigraphy device that is supported by augmented reality to aid the surgeon performing sentinel lymph node biopsy. Advances in detector and sensor technology enable this leap forward for this type of intervention. We developed and tested a small-form multi-pinhole collimator with axis-aligned endoscopic cameras. As these cameras and the pinholes share the same projective geometry, the augmentation of the gamma images of tracer-enriched lymph nodes with optical images of the intervention site can easily be done, all without the need for a 3D depth map or synthetic model of the surgical scene.
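The claim that axis-aligned pinholes and cameras permit overlay without a depth map follows from the pinhole model: a point's projection is invariant to scaling along its viewing ray. A toy sketch with hypothetical intrinsics and a hypothetical node position:

```python
import numpy as np

def project(point_3d, focal, principal):
    """Ideal pinhole projection: u = f*X/Z + cx, v = f*Y/Z + cy."""
    X, Y, Z = point_3d
    return np.array([focal * X / Z + principal[0],
                     focal * Y / Z + principal[1]])

# Hypothetical shared intrinsics for the gamma pinhole and the aligned
# optical camera, and a hypothetical tracer-enriched node position (mm).
f, c = 25.0, (64.0, 64.0)
node = np.array([10.0, -5.0, 80.0])

# Scaling the point along its viewing ray (i.e. changing its unknown
# depth) leaves the projection unchanged, so gamma and optical images
# can be overlaid purely in 2D.
p_near = project(node, f, c)
p_far = project(2.0 * node, f, c)
```

Since both modalities share the projection, the overlay reduces to a fixed 2D image-to-image mapping, which is exactly why no 3D reconstruction of the surgical scene is needed.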

Peter A. von Niederhäusern, Ole C. Maas, Michael Rissi, Matthias Schneebeli, Stephan Haerle, Philippe C. Cattin
Tactile Augmented Reality for Arteries Palpation in Open Surgery Training

Palpation is an essential step of several open surgical procedures for locating arteries by arterial pulse detection. In this context, surgical simulation would ideally provide realistic haptic sensations to the operator. This paper presents a proof-of-concept implementation of tactile augmented reality for open-surgery training. The system is based on the integration of a wearable tactile device into an augmented physical simulator, which allows the real-time tracking of artery reproductions and the user's finger and provides pulse feedback during palpation. Preliminary qualitative tests showed a general consensus among surgeons regarding the realism of the arterial pulse feedback and the usefulness of tactile augmented reality in open-surgery simulators.

Sara Condino, Rosanna Maria Viglialoro, Simone Fani, Matteo Bianchi, Luca Morelli, Mauro Ferrari, Antonio Bicchi, Vincenzo Ferrari
Augmented Reality Guidance with Electromagnetic Tracking for Transpyloric Tube Insertion

Transpyloric tube insertion is a common intervention which consists of inserting a tube through the nose, the oesophagus and the stomach down to the pylorus. This procedure may be done blindly if no licensed physician operator is available; in that case, the caregivers have to insert the tube without any visual feedback. The position of the tube also has to be confirmed to avoid any complications.

We proposed a guidance system for the insertion of this tube through an augmented reality application using an electromagnetic tracker. The trajectory and the orientation of the tube were visualized on a screen to monitor the tube insertion and to confirm the position of the tube tip inside the pylorus. This monitoring was improved by adding preoperative landmarks coming from a CT image or intraoperative landmarks obtained with ultrasound. Finally, the display was overlaid on a camera view to match the tube trajectory on the patient during the insertion, so that the clinicians could directly visualize anatomical landmarks to guide themselves during the tube insertion.

Preliminary results on one phantom showed the usefulness of our guidance system for blind tube insertion. Further evaluations have to be carried out on animals and on patients to validate these results.

Jordan Bano, Tomohiko Akahoshi, Ryu Nakadate, Byunghyun Cho, Makoto Hashizume
Exploring Visuo-Haptic Augmented Reality User Interfaces for Stereo-Tactic Neurosurgery Planning

Stereo-tactic neurosurgery planning is a time-consuming and complex task that requires detailed understanding of the patient anatomy and the affected regions in the brain to precisely deliver the treatment and to avoid proximity to any known risk structures. Traditional user interfaces for neurosurgery planning use keyboard and mouse for interaction and visualize the medical data on a screen. Previous research, however, has shown that 3D user interfaces are more intuitive for navigating volumetric data and enable users to understand spatial relations more quickly. Furthermore, new imaging modalities and automated segmentation of relevant structures provide important information to medical experts. However, displaying such information requires frequent context switches or occludes otherwise important information.

In collaboration with medical experts, we analyzed the planning workflow for stereo-tactic neurosurgery interventions and identified two tasks in the process that can be improved: volume exploration and trajectory refinement. In this paper, we present a novel 3D user interface for neurosurgery planning that is implemented using a head-mounted display and a haptic device. The proposed system improves volume exploration with bi-manual interaction to control oblique slicing of volumetric data and reduces visual clutter with the help of haptic guides that enable users to precisely target regions of interest and to avoid proximity to known risk structures.

Ulrich Eck, Philipp Stefan, Hamid Laga, Christian Sandor, Pascal Fallavollita, Nassir Navab
Interactive Depth of Focus for Improved Depth Perception

The need to look into the human body for better diagnosis, improved surgical planning and minimally invasive surgery has led to breakthroughs in medical imaging. However, intra-operatively a surgeon needs to look at multi-modal imaging data on multiple displays and to fuse the multi-modal data in the context of the patient. This adds extra mental effort for the surgeon in an already high-cognitive-load surgery. The obvious solution of augmenting medical objects in the context of the patient suffers from inaccurate depth perception. In the past, some visualizations have addressed the issue of incorrect depth perception, but not without interfering with the natural intuitive view of the surgeon. Therefore, in the current work an interactive depth of focus (DoF) blur method for AR is proposed. It mimics the DoF blur effect naturally present in a microscope. DoF blur forces the cues of accommodation and convergence to come into effect and holds the potential to give near-metric accuracy; its quality decreases with distance. This makes it suitable for microscopic neurosurgical applications with smaller working depth ranges.

Megha Kalia, Christian Schulte zu Berge, Hessam Roodaki, Chandan Chakraborty, Nassir Navab
Augmented Reality for Neurosurgical Guidance: An Objective Comparison of Planning Interface Modalities

Numerous augmented reality image guidance tools have been evaluated under specific clinical criteria, but there is a lack of investigation into their broad effect on targeting ability and perception. In this paper, we evaluated the performance of 18 subjects on a targeting task modeling ventriculostomy trajectory planning. Users targeted ellipsoids within a mannequin head using both an augmented reality interface and a traditional slice-based interface for planning. Users were significantly more accurate by several measures using augmented reality guidance, but were seen to have a significant targeting bias; depth was underestimated by users with low targeting success. Our results further demonstrate the need for superior depth cues in augmented reality implementations while providing a framework for objective evaluation of augmented reality interfaces.

Ryan Armstrong, Trinette Wright, Sandrine de Ribaupierre, Roy Eagleson

Medical Image Analysis

Frontmatter
Adaptive Mean Shift Based Hemodynamic Brain Parcellation in fMRI

One of the remaining challenges in event-related fMRI is to discriminate between the vascular response and the neural activity in the BOLD signal. This discrimination is done by identifying the hemodynamic territories, which differ in their underlying dynamics. In the literature, many approaches have been proposed to estimate these underlying dynamics, also known as the Hemodynamic Response Function (HRF). However, most of the proposed approaches depend on prior information regarding the shape of the parcels (territories) and their number. In this paper, we propose a novel approach which relies on the adaptive mean shift algorithm for the parcellation of the brain. A variational inference is used to estimate the unknown variables, while the mean shift is embedded within a variational expectation maximization (VEM) framework to allow estimating the parcellation and the HRF profiles without any prior information about the number of the parcels or their shape. Results on synthetic data confirm the ability of the proposed approach to recover accurate HRF estimates and the correct number of parcels. It also manages to discriminate between voxels in different parcels, especially at the borders between these parcels. In a real data experiment, the proposed approach manages to recover HRF estimates close to the canonical shape in the bilateral occipital cortex.
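A bare-bones (fixed-bandwidth, flat-kernel) mean shift illustrates the mode-seeking idea underlying the parcellation; the paper's adaptive, VEM-embedded version is considerably more involved:

```python
import numpy as np

def mean_shift_modes(points, bandwidth, n_iter=50):
    """Plain mean shift: each point is iteratively moved to the mean
    of the data within `bandwidth` of it, so converged points pile up
    at density modes (cluster centres)."""
    shifted = points.astype(float).copy()
    for _ in range(n_iter):
        for i, p in enumerate(shifted):
            neighbours = points[np.linalg.norm(points - p, axis=1) < bandwidth]
            shifted[i] = neighbours.mean(axis=0)
    return shifted

# Two synthetic "parcels" of 2-D voxel features (not real HRF features)
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0.0, 0.2, size=(30, 2)),
                 rng.normal(3.0, 0.2, size=(30, 2))])
modes = mean_shift_modes(pts, bandwidth=1.0)
```

Because the number of distinct modes emerges from the data rather than being specified in advance, mean shift sidesteps the need to fix the number of parcels, which is the property the paper relies on.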

Mohanad Albughdadi, Lotfi Chaari, Jean-Yves Tourneret
Quantitative Analysis of 3D T1-Weighted Gadolinium (Gd) DCE-MRI with Different Repetition Times

Dynamic contrast-enhanced MRI (DCE-MRI) acquires T1-weighted MRI scans before and after injection of an MRI contrast agent such as gadolinium (Gd). Gadolinium causes the relaxation time to decrease, resulting in higher MR image intensities after injection, followed by a gradual decrease in image intensities during washout. Gd does not pass the intact blood-brain barrier (BBB), thus its dynamics can be used to quantify pathology associated with BBB leaks. In current clinical practice, it is suggested to use the same pulse sequence for pre-injection T1 calibration and Gd concentration calculation in the DCE image sequence, based on the spoiled gradient recalled echo (SPGR) signal equation. A common method for T1 estimation is to use variable flip angles (VFA). However, when parameters such as the repetition time (TR) for image acquisition are tuned differently for T1 estimation and DCE acquisition, the popular dcemriS4 software package, which handles only a fixed TR, often produces discrepancies in Gd concentration estimation. This paper reports a quick solution for calculating Gd concentrations when different TRs are used. First, the pre-injection T1 map is calculated using the Levenberg-Marquardt algorithm with the VFA acquisition. Then, because the TR used for DCE acquisition differs from the VFA TR, the equilibrium magnetization is updated with the TR for DCE, and the Gd concentration is calculated thereafter. In the experiments, we first simulated Gd concentration curves for different tissue types, generated the corresponding VFA and DCE image sequences, and then used the proposed method to reconstruct the concentration. Comparison with the original simulated data allows us to validate the accuracy of the proposed computation. Further, we tested the performance of the method by simulating different amounts of Ktrans change in a manually selected region of interest (ROI). The results showed that the new method can estimate Gd dynamics more accurately when different TRs are used, and is sensitive enough to detect slight Ktrans changes in DCE-MRI.

Elijah D. Rockers, Maria B. Pascual, Sahil Bajaj, Joseph C. Masdeu, Zhong Xue
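The SPGR signal model and the two-TR correction described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it substitutes a linearized least-squares VFA fit for the paper's Levenberg-Marquardt fit, and the relaxivity value r1 = 4.5 s⁻¹mM⁻¹ is an assumed literature-typical constant, not taken from the paper.

```python
import math

def spgr_signal(m0, t1, tr, alpha):
    # SPGR signal equation: S = M0*sin(a)*(1 - E1)/(1 - cos(a)*E1), E1 = exp(-TR/T1)
    e1 = math.exp(-tr / t1)
    return m0 * math.sin(alpha) * (1 - e1) / (1 - math.cos(alpha) * e1)

def vfa_t1_fit(signals, alphas, tr):
    # Linearized VFA fit: S/sin(a) = E1 * (S/tan(a)) + M0*(1 - E1)
    xs = [s / math.tan(a) for s, a in zip(signals, alphas)]
    ys = [s / math.sin(a) for s, a in zip(signals, alphas)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    t1 = -tr / math.log(slope)          # slope = E1 = exp(-TR/T1)
    m0 = intercept / (1 - slope)        # equilibrium magnetization
    return t1, m0

def gd_concentration(s_dce, m0, t1_pre, tr_dce, alpha, r1=4.5):
    # Invert the SPGR equation using the *DCE* TR (which may differ from
    # the VFA TR), then apply 1/T1_post = 1/T1_pre + r1 * C.
    s = s_dce / (m0 * math.sin(alpha))
    e1 = (1 - s) / (1 - s * math.cos(alpha))
    t1_post = -tr_dce / math.log(e1)
    return (1.0 / t1_post - 1.0 / t1_pre) / r1
```

With noise-free simulated signals, fitting with the VFA TR and inverting with a different DCE TR recovers the simulated concentration, which is the consistency the paper's correction aims at.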
Cascade Registration of Micro CT Volumes Taken in Multiple Resolutions

In this paper, we present a preliminary report of a multi-scale registration method between micro-focus X-ray CT (micro CT) volumes taken at different scales. 3D fine structures of target objects can be observed in micro CT volumes that are difficult to observe in clinical CT volumes. Micro CT scanners can scan specimens at various resolutions. In high resolution volumes, ultra-fine structures of specimens can be observed, but the scanned area is limited to a very small region. In low resolution volumes, on the other hand, large areas can be captured, but fine structures are difficult to observe. A fusion volume of the high and low resolution volumes would have the benefits of both. Because the difference in resolution between the high and low resolution volumes may be large, an intermediate resolution volume is required for successful fusion. To perform such volume fusion, we employ a cascade co-registration technique, in which intermediate resolution volumes are used in the registration of the high and low resolution volumes. To register each pair of volumes, we apply a two-step technique: in the first step, block division is used to register the two volumes; afterward, we estimate the fine spatial positions relating the two registered volumes using the Powell method. The registration result can be used to generate a fusion volume of the high and low resolution volumes.

Kai Nagara, Hirohisa Oda, Shota Nakamura, Masahiro Oda, Hirotoshi Homma, Hirotsugu Takabatake, Masaki Mori, Hiroshi Natori, Daniel Rueckert, Kensaku Mori
3D Vessel Segmentation Using Random Walker with Oriented Flux Analysis and Direction Coherence

Accurate 3D vessel segmentation remains challenging due to varying intensity contrast, high noise levels, topological complexity and large spatial extent. In this paper, we propose an efficient graph-based method for 3D vessel segmentation with the help of oriented flux analysis and direction coherence, which are used both in the graph construction and in the energy function formulation. To address the shrinking problem and seed sensitivity of conventional graph-based methods, new metrics based on hand-crafted features are designed to encode vessel-dedicated information as a prior probability in the optimization framework and to guide the segmentation towards elongated structures. Optimal vessel segmentation results can then be obtained efficiently with the random walker implementation. For evaluation, the proposed method is compared with the classical random walker and region growing. We also compare against a Hessian-enhanced graph-based method using the same graph construction and optimization strategy. The results demonstrate that our method performs better on both synthetic and real images and is more robust as the noise level increases.

Qing Zhang, Albert C. S. Chung
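As background for this class of methods, the random walker can be illustrated on a toy 1-D chain graph with Gaussian intensity edge weights. This sketch omits the paper's oriented flux and direction coherence priors and uses plain Gauss-Seidel sweeps in place of a sparse direct solve; the signal and parameters are made up for illustration.

```python
import math

def edge_weight(a, b, beta):
    # Gaussian similarity: strong edges between similar intensities
    return math.exp(-beta * (a - b) ** 2)

def random_walker_1d(intens, fg_seeds, bg_seeds, beta=1.0, iters=3000):
    """Random walker on a 1-D chain graph: solve the combinatorial
    Laplacian with Dirichlet conditions at the seeds via Gauss-Seidel."""
    n = len(intens)
    w = [edge_weight(intens[i], intens[i + 1], beta) for i in range(n - 1)]
    p = [0.5] * n
    seeded = set(fg_seeds) | set(bg_seeds)
    for s in fg_seeds:
        p[s] = 1.0
    for s in bg_seeds:
        p[s] = 0.0
    for _ in range(iters):
        for i in range(n):
            if i in seeded:
                continue
            num = den = 0.0
            if i > 0:
                num += w[i - 1] * p[i - 1]
                den += w[i - 1]
            if i < n - 1:
                num += w[i] * p[i + 1]
                den += w[i]
            p[i] = num / den
    # p[i] = probability a walker from node i reaches a foreground seed first
    return p
```

Thresholding the probabilities at 0.5 yields the segmentation; on a bright "vessel" flanked by background, the weak edges across the intensity boundary keep the labels from leaking.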
Registration of CT and Ultrasound Images of the Spine with Neural Network and Orientation Code Mutual Information

Pairwise registration of 2D ultrasound (US) and 3D computed tomography (CT) images can improve the efficiency and safety of image-guided anesthesia in spine surgery. However, accurate 2D US to 3D CT registration for multiple vertebrae without an appropriate initial registration position remains a challenge, due to the difference in image modalities and missing bone structures in US images. This paper proposes a novel 2D US to 3D CT registration method in which convolutional neural network (CNN) classification of US images is reported for the first time to achieve rough image registration, and a new orientation code mutual information metric is then applied for local registration refinement. By combining automatic rough registration with fine registration refinement, our algorithm registers 2D US to 3D CT for multiple vertebrae (L2-L4) without requiring an appropriate initial alignment. The accuracy of our algorithm is validated on a dataset of 50 in vivo clinical US images of multiple vertebrae, achieving a mean target registration error of 2.3 mm, which is below the clinically acceptable threshold of 3.5 mm.

Fang Chen, Dan Wu, Hongen Liao
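A minimal sketch of an orientation-code mutual information metric, under the assumption that "orientation codes" are quantized gradient directions; the central-difference gradient and bin count below are illustrative choices, not necessarily the paper's.

```python
import math

def orientation_codes(img, bins=8):
    """Quantize the gradient direction at each interior pixel into `bins` codes."""
    h, w = len(img), len(img[0])
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            ang = math.atan2(gy, gx) % (2 * math.pi)
            codes.append(int(ang / (2 * math.pi) * bins) % bins)
    return codes

def mutual_information(a, b, bins=8):
    """MI between two aligned code sequences, from the joint histogram."""
    n = len(a)
    joint, pa, pb = {}, [0] * bins, [0] * bins
    for ca, cb in zip(a, b):
        joint[(ca, cb)] = joint.get((ca, cb), 0) + 1
        pa[ca] += 1
        pb[cb] += 1
    mi = 0.0
    for (ca, cb), c in joint.items():
        pj = c / n
        mi += pj * math.log(pj * n * n / (pa[ca] * pb[cb]))
    return mi
```

MI of a code image with itself equals its entropy, an upper bound on the MI with any other image, which is the property a registration optimizer exploits.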
A New Statistical Image Analysis Approach and Its Application to Hippocampal Morphometry

In this work, we propose a novel and powerful image analysis framework for hippocampal morphometry in early mild cognitive impairment (EMCI), an early prodromal stage of Alzheimer’s disease (AD). We create a hippocampal surface atlas with subfield information, model each hippocampus using the SPHARM technique, and register it to the atlas to extract surface deformation signals. We propose a new alternative to standard random field theory (RFT) and permutation image analysis methods, Statistical Parametric Mapping (SPM) Distribution Analysis or SPM-DA, to perform statistical shape analysis and compare its performance with that of RFT methods on both simulated and real hippocampal surface data. The major strengths of our framework are twofold: (a) SPM-DA provides potentially more powerful algorithms than standard RFT methods for detecting weak signals, and (b) the framework embraces the important hippocampal subfield information for improved biological interpretation. We demonstrate the effectiveness of our method via an application to an AD cohort, where an SPM-DA method detects meaningful hippocampal shape differences in EMCI that are undetected by standard RFT methods.

Mark Inlow, Shan Cong, Shannon L. Risacher, John West, Maher Rizkalla, Paul Salama, Andrew J. Saykin, Li Shen, for the ADNI
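For readers unfamiliar with the permutation analysis that SPM-DA is compared against, a generic two-sample permutation test on the difference of means looks like this. It is a textbook sketch, not the SPM-DA statistic itself; groups and permutation count are illustrative.

```python
import random

def permutation_pvalue(group_a, group_b, n_perm=2000, seed=0):
    """Two-sample permutation test on |mean(A) - mean(B)|: shuffle the
    pooled samples and count how often the permuted statistic is at
    least as extreme as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    na = len(group_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:na]) / na
                   - sum(pooled[na:]) / (len(pooled) - na))
        if diff >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)   # add-one correction
```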
Clustering of MRI Radiomics Features for Glioblastoma Multiforme: An Initial Study

This paper proposes a radiomics model from magnetic resonance imaging (MRI) for Glioblastoma Multiforme (GBM) patients. One challenge of radiomics studies is to reduce the redundancy of the features. A total of 466 radiomics features were extracted from automatically segmented tumors in T1, T1 contrast, T2, and FLAIR MRIs. The consensus clustering method was used and 10 feature clusters were obtained. All clusters had a prognostic association with survival, and three clusters had a mean C-index ≥ 0.60. The medoid feature with the highest C-index in each cluster was selected as a radiomics signature candidate. The maximum and mean C-indices of the medoids are 0.75 and 0.68. The results demonstrate that the clustering reduced the data redundancy and generated clinically relevant radiomics features.

Zhi-Cheng Li, Qi-Hua Li, Bo-Lin Song, Yin-Sheng Chen, Qiu-Chang Sun, Yao-Qin Xie, Lei Wang
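The two building blocks above, consensus co-occurrence and medoid selection, can be illustrated generically. The toy labelings and the distance function below are hypothetical stand-ins for the paper's feature data.

```python
def consensus_matrix(labelings, n):
    """Fraction of clusterings in which items i and j share a cluster."""
    M = [[0.0] * n for _ in range(n)]
    for labels in labelings:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    M[i][j] += 1.0 / len(labelings)
    return M

def medoid(cluster, dist):
    """Element minimizing total distance to all others in the cluster —
    the representative feature selected from each cluster."""
    return min(cluster, key=lambda a: sum(dist(a, b) for b in cluster))
```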
A Multi-resolution Multi-model Method for Coronary Centerline Extraction Based on Minimal Path

Extracting centerlines of coronary arteries is challenging but important in clinical applications of cardiac computed tomography angiography (CTA). Since manual annotation of coronary arteries is time-consuming, labor-intensive and subject to intra- and inter-observer variation, we propose a new method to fully automatically extract the coronary centerlines. We first develop a new image filter which selects pixels with salient vessel features within a given window. This filter can capture sparsely distributed but important vessel points, enabling the minimal path (MP) process to track the key centerline points at different image resolutions. We then reformulate the filter for multi-resolution fast marching, which not only speeds up the coronary tracking process but also helps the front propagation step over indistinct segments of the coronary artery, such as at locations of stenosis. We embed this scheme into the MP framework to develop a multi-resolution multi-model approach (MMP), where the centerlines extracted by the low-resolution MP serve as priors and constraints for the high-resolution process. We evaluated the performance of this method using the Rotterdam CTA training data and the coronary artery algorithm evaluation framework. The average accuracy inside the vessels of our extraction was 0.51 mm and the overlap was 72.9 %. The mean runtime on the original-resolution CTA images was 3.4 min using the MMP method.

Dengqiang Jia, Wenzhe Shi, Daniel Rueckert, Liu Liu, Sebastien Ourselin, Xiahai Zhuang
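The minimal-path backbone can be illustrated with a plain Dijkstra search on a 2-D cost grid, where low cost marks vessel-like pixels. The paper's multi-resolution fast marching and vesselness filter are replaced here by a hand-made cost image, so this is a sketch of the idea only.

```python
import heapq

def minimal_path(cost, start, end):
    """Dijkstra shortest path on a 2-D cost grid (4-connectivity).
    The minimal-cost path between two seed points follows the low-cost
    (vessel-like) ridge of the image."""
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == end:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        y, x = node
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny][nx]
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    prev[(ny, nx)] = node
                    heapq.heappush(pq, (nd, (ny, nx)))
    # backtrack from the end point to recover the centerline
    path = [end]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```

Fast marching solves the continuous analogue of this discrete search; in the MMP scheme, a path found at low resolution would constrain where the high-resolution search is allowed to propagate.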
Facial Behaviour Analysis in Parkinson’s Disease

We describe a method for evaluating facial expressivity in order to improve related clinical assessments of Parkinson's Disease (PD). There is conflicting evidence in the literature as to whether PD facial impairment can be detected in certain emotional expressions. This study investigated the feasibility of deriving discriminative and quantitative measures of PD from a subject's ability to produce facial expressions. Video clips of 8 subjects (4 healthy controls and 4 patients with PD) were recorded during daily sessions over several weeks. Observations covered emotion variation over one week for control subjects and six weeks for patients with PD. A statistical shape model was used to track facial expressions and to measure the amount of expressivity exhibited by each subject. The study suggests that measures of the amount of movement during happiness, disgust and anger expressions are the most discriminative, with PD patients exhibiting less movement than controls. This work demonstrates that it may be possible to measure day-to-day variations in PD symptoms automatically.

Riyadh Almutiry, Samuel Couth, Ellen Poliakoff, Sonja Kotz, Monty Silverdale, Tim Cootes

Medical Image Computing

Frontmatter
Weighted Robust PCA for Statistical Shape Modeling

Statistical shape models (SSMs) play an important role in medical image analysis. A sufficiently large number of high-quality datasets is needed in order to create an SSM containing all possible shape variations. However, the available datasets may contain corrupted or missing data, since clinical images are often captured incompletely or contain artifacts. In this work, we propose a weighted Robust Principal Component Analysis (WRPCA) method to create SSMs from incomplete or corrupted datasets. In particular, we introduce a weighting scheme into the conventional Robust Principal Component Analysis (RPCA) algorithm in order to discriminate unusable data from meaningful data more accurately in the decomposition of the training data matrix. For evaluation, the proposed WRPCA is compared with conventional RPCA on both corrupted (63 CT datasets of the liver) and incomplete datasets (15 MRI datasets of the human foot). The results show a significant improvement in reconstruction accuracy on both datasets.

Jingting Ma, Feng Lin, Jonas Honsdorf, Katharina Lentzen, Stefan Wesarg, Marius Erdt
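The full WRPCA involves a weighted robust low-rank decomposition; as a minimal illustration of the weighting idea alone, here is a weighted PCA whose first principal component is found by power iteration, with a down-weighted outlier standing in for corrupted data. All numbers are illustrative, not the paper's.

```python
def weighted_pca_first_pc(data, weights, iters=200):
    """First principal component of weighted samples via power iteration
    on the weighted covariance matrix. Near-zero weights suppress
    corrupted samples, the intuition behind WRPCA's weighting scheme."""
    n, d = len(data), len(data[0])
    wsum = sum(weights)
    mean = [sum(w * row[j] for w, row in zip(weights, data)) / wsum
            for j in range(d)]
    X = [[row[j] - mean[j] for j in range(d)] for row in data]
    # weighted covariance matrix
    C = [[sum(weights[i] * X[i][a] * X[i][b] for i in range(n)) / wsum
          for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        u = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in u) ** 0.5
        v = [x / norm for x in u]
    return mean, v
```

With the outlier's weight near zero, the recovered component aligns with the clean data direction; with uniform weights it would be dragged toward the outlier, which is exactly the failure mode the weighting avoids.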
Intra-Operative Modeling of the Left Atrium: A Simulation Approach Using Poisson Surface Reconstruction

Electroanatomic Mapping (EAM) is an important process in radio-frequency catheter ablation. In EAM, sample points are collected from the patient's atrium during the intervention. This process is subject to inaccuracies from different sources (e.g. tissue deformation and tracking errors). Poisson Surface Reconstruction (PSR) has recently been applied to intra-operative modeling of the left atrium using highly dense point clouds extracted from intra-operative and pre-operative imaging. In this work, we study the application of PSR under the low-density sampling conditions that occur in some clinical workflows. For this study we propose a simulation framework that is employed to characterize PSR in terms of reconstruction accuracy. Our results show that a median error as low as 2.28 mm can be obtained with a maximum of 600 sampled points. These results indicate the feasibility of applying PSR to low-density point clouds.

Rafael Palomar, Faouzi A. Cheikh, Azeddine Beghdadi, Ole J. Elle
Atlas-Based Reconstruction of 3D Volumes of a Lower Extremity from 2D Calibrated X-ray Images

In this paper, reconstruction of 3D volumes of a complete lower extremity (including both femur and tibia) from a limited number of calibrated X-ray images is addressed. We present a novel atlas-based method combining 2D-2D image registration-based 3D landmark reconstruction with B-spline interpolation. In our method, an atlas consisting of intensity volumes and a set of predefined, sparse 3D landmarks, derived from the outer surface as well as the intramedullary canal surface of the associated anatomical structures, is used together with the input X-ray images to reconstruct 3D volumes of the complete lower extremity. Robust 2D-2D image registrations are first used to match digitally reconstructed radiographs, generated by simulating X-ray projections of an intensity volume, with the input X-ray images. The obtained 2D-2D non-rigid transformations are used to update the locations of the 2D projections of the 3D landmarks in the associated image views. Combining the updated positions of all sparse landmarks with a grid of B-spline control points, we estimate displacement vectors on all control points and then the spline coefficients, yielding a smooth volumetric deformation field. To this end, we develop a method combining the robustness of landmark-based 2D-3D reconstruction with the global smoothness properties inherent to the B-spline parametrization.

Weimin Yu, Guoyan Zheng
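The B-spline step, turning control-point coefficients into a smooth displacement field, can be sketched in 1-D with the uniform cubic B-spline basis (the paper works in 3-D; the coefficient values below are hypothetical).

```python
def bspline_basis(t):
    # Uniform cubic B-spline basis functions, t in [0, 1); they sum to 1
    return [(1 - t) ** 3 / 6.0,
            (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0,
            (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0,
            t ** 3 / 6.0]

def bspline_displacement(coeffs, x, spacing=1.0):
    """Displacement at position x from uniform cubic B-spline coefficients
    placed every `spacing` units (a 1-D slice of the FFD-style model).
    Needs coefficients at indices i-1 .. i+2 around x."""
    i = int(x / spacing)
    t = x / spacing - i
    b = bspline_basis(t)
    return sum(b[k] * coeffs[i + k - 1] for k in range(4))
```

Because the basis functions form a partition of unity, constant coefficients reproduce a constant displacement, and a single nonzero coefficient spreads smoothly over its 4-knot support, which is what gives the volumetric field its global smoothness.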
3D Fully Convolutional Networks for Intervertebral Disc Localization and Segmentation

Accurate localization and segmentation of intervertebral discs (IVDs) from volumetric data is a prerequisite for clinical diagnosis and treatment planning. With the advance of deep learning, 2D fully convolutional networks (FCN) have achieved state-of-the-art performance on 2D image segmentation tasks. However, how to segment objects such as IVDs from volumetric data has not been well addressed so far. To resolve this problem, we extend the 2D FCN to a 3D variant with end-to-end learning and inference that generates voxel-wise predictions. In order to compare the performance of 2D and 3D deep learning methods on volumetric segmentation, two different frameworks are studied: one is a 2D FCN with deep feature representations that makes use of adjacent slices, the other is a 3D FCN with flexible 3D convolutional kernels. We evaluated our methods on the 3D MRI data of the MICCAI 2015 Challenge on Automatic Intervertebral Disc Localization and Segmentation. Extensive experimental results corroborate that the 3D FCN achieves higher localization and segmentation accuracy than the 2D FCN, which demonstrates the significance of volumetric information in 3D localization and segmentation tasks.

Hao Chen, Qi Dou, Xi Wang, Jing Qin, Jack C. Y. Cheng, Pheng-Ann Heng
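The core operation distinguishing the 3D FCN from its 2D counterpart is the 3D convolutional kernel, which mixes information across slices as well as within them. A naive valid-mode version (cross-correlation, as CNN libraries implement it) in pure Python, purely to show the indexing:

```python
def conv3d(volume, kernel):
    """Valid-mode 3-D convolution (cross-correlation, CNN convention):
    the kernel slides over depth, height and width simultaneously."""
    D, H, W = len(volume), len(volume[0]), len(volume[0][0])
    d, h, w = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for z in range(D - d + 1):
        plane = []
        for y in range(H - h + 1):
            row = []
            for x in range(W - w + 1):
                s = sum(volume[z + i][y + j][x + k] * kernel[i][j][k]
                        for i in range(d) for j in range(h) for k in range(w))
                row.append(s)
            plane.append(row)
        out.append(plane)
    return out
```

A 2D FCN applied slice-wise corresponds to d = 1 here; d > 1 is what lets the 3D variant exploit inter-slice (volumetric) context.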
Temporal Prediction of Respiratory Motion Using a Trained Ensemble of Forecasting Methods

Respiratory motion is a limiting factor during cancer therapy. Although image tracking can facilitate compensation for this motion, system latencies will still reduce the accuracy of tracking-based treatments. We propose a novel approach for temporal prediction of the motion of anatomical targets in the liver, observed from ultrasound sequences. The method is based on an ensemble of six prediction models, including neural networks, which are trained on motion traces and images. Using leave-one-subject-out validation on 24 liver ultrasound 2D sequences from the Challenge on Liver Ultrasound Tracking, the best performance was achieved by the linear regression-based ensemble of all methods with an accuracy of 1.49 (2.39) mm for a latency of 300 (600) ms.

Xiaoran Chen, Christine Tanner, Orçun Göksel, Gábor Székely, Valeria De Luca
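A linear regression-based ensemble of forecasters can be sketched with two toy predictors standing in for the paper's six trained models (persistence and linear extrapolation are illustrative substitutes, and the least-squares combination here is the simplest possible version of the idea).

```python
def last_value(history):
    # naive persistence forecast
    return history[-1]

def linear_extrapolation(history):
    # extend the last linear trend one step ahead
    return 2 * history[-1] - history[-2]

def train_ensemble_weights(series, predictors, start=2):
    """Least-squares weights w minimizing sum_t (y_t - w . preds_t)^2 over
    past one-step-ahead forecasts (2x2 normal equations, two predictors)."""
    X, y = [], []
    for t in range(start, len(series)):
        X.append([p(series[:t]) for p in predictors])
        y.append(series[t])
    a = sum(r[0] * r[0] for r in X)
    b = sum(r[0] * r[1] for r in X)
    c = sum(r[1] * r[1] for r in X)
    d0 = sum(r[0] * yt for r, yt in zip(X, y))
    d1 = sum(r[1] * yt for r, yt in zip(X, y))
    det = a * c - b * b
    return [(c * d0 - b * d1) / det, (a * d1 - b * d0) / det]

def ensemble_predict(history, predictors, weights):
    return sum(w * p(history) for w, p in zip(weights, predictors))
```

On a purely linear motion trace, training drives the weight of the extrapolating predictor to one, so the ensemble automatically favors whichever base forecaster fits the observed motion best.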
Automatic Fast-Registration Surgical Navigation System Using Depth Camera and Integral Videography 3D Image Overlay

We propose an automatic fast-registration augmented reality (AR) surgical navigation system for minimally invasive surgery. The system integrates a fast registration technique with three-dimensional (3D) integral videography (IV) image overlay. The detailed anatomic information generated by the IV technique is superimposed on the patient using 3D autostereoscopic images, which reproduce motion parallax for the naked eye. To reduce the patient-to-overlay-image registration time and achieve automatic execution, we integrate the 3D image overlay system with a real-time patient tracking system based on a particle filter algorithm and a depth camera. Experimental results showed that the system lowers the registration time and that tracking can reach up to 10 frames per second (fps). The average 3D overlay image-to-patient registration error is 1.88 mm, with a standard deviation of 0.72 mm. Further improvement of the depth camera acquisition accuracy and the tracking algorithm will make this system more feasible and practical.

Cong Ma, Guowen Chen, Hongen Liao
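A bootstrap particle filter of the kind used for depth-camera tracking can be sketched in 1-D; the random-walk motion model and Gaussian measurement likelihood below are generic assumptions for illustration, not the paper's models.

```python
import math
import random

def particle_filter_track(observations, n_particles=500,
                          motion_std=1.0, obs_std=1.0, seed=0):
    """Bootstrap particle filter for a 1-D position: diffuse particles,
    weight them by the measurement likelihood, estimate, resample."""
    rng = random.Random(seed)
    particles = [observations[0] + rng.gauss(0, obs_std)
                 for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # predict: random-walk motion model
        particles = [p + rng.gauss(0, motion_std) for p in particles]
        # update: Gaussian measurement likelihood as importance weight
        weights = [math.exp(-0.5 * ((z - p) / obs_std) ** 2)
                   for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # stratified resampling to avoid weight degeneracy
        cdf, c = [], 0.0
        for w in weights:
            c += w
            cdf.append(c)
        new, j = [], 0
        for i in range(n_particles):
            pos = (i + rng.random()) / n_particles
            while j < n_particles - 1 and cdf[j] < pos:
                j += 1
            new.append(particles[j])
        particles = new
    return estimates
```

In the real system the state would be a 6-DOF patient pose and the likelihood would come from matching the depth image, but the predict/weight/resample loop is the same.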
Patient-Specific 3D Reconstruction of a Complete Lower Extremity from 2D X-rays

This paper introduces a solution that can robustly derive 3D models of musculoskeletal structures from 2D X-ray images. The present method, as an integrated solution, consists of three components: (1) a musculoskeletal structure immobilization apparatus; (2) an X-ray image calibration phantom; and (3) a statistical shape model-based 2D-3D reconstruction algorithm. These three components are integrated systematically to derive 3D models of any musculoskeletal structure from 2D X-ray images acquired in a functional position (e.g., the weight-bearing position for the lower limb). More specifically, the immobilization apparatus rigidly fixes the X-ray calibration phantom with respect to the underlying anatomy during image acquisition. The calibration phantom then serves two purposes: on the one hand, it allows one to calibrate the projection parameters of any acquired X-ray image; on the other hand, it allows one to track the positions of multiple X-ray images of the underlying anatomy without any additional positional tracker, which is a prerequisite for the third component to compute patient-specific 3D models from the 2D X-ray images and the associated statistical shape models. Validation studies conducted on both simulated X-ray images and patients' X-ray data demonstrate the efficacy of the present solution.

Guoyan Zheng, Steffen Schumann, Alper Alcoltekin, Branislav Jaramaz, Lutz-P. Nolte
Cross-Manifold Guidance in Deformable Registration of Brain MR Images

A manifold is often used to characterize the high-dimensional distribution of individual brain MR images. The deformation field used to register the subject with the template is perceived as the geodesic pathway between images on the manifold. Generally, it is non-trivial to estimate the deformation pathway directly due to the intrinsic complexity of the manifold. In this work, we break the restriction of the single, complex manifold by short-circuiting the subject-template pathway with routes through multiple simpler manifolds. Specifically, we reduce the anatomical complexity of the subject/template images and project them to virtual, simplified manifolds. The projected simple images then guide the subject image step by step toward the template image space. Finally, the subject-template pathway is computed by traversing multiple manifolds of lower complexity, rather than depending only on the original single complex manifold. We validate the cross-manifold guidance and apply it to brain MR image registration. We conclude that our method leads to superior alignment accuracy compared to state-of-the-art deformable registration techniques.

Jinpeng Zhang, Qian Wang, Guorong Wu, Dinggang Shen
Eidolon: Visualization and Computational Framework for Multi-modal Biomedical Data Analysis

Biomedical research combining multi-modal image and geometry data presents unique challenges for data visualization, processing, and quantitative analysis. Medical imaging provides rich information, from anatomy to deformation, but extracting this into a coherent picture across image modalities with preserved quality is not trivial. Addressing these challenges and integrating visualization with image and quantitative analysis results in Eidolon, a platform which can adapt to rapidly changing research workflows. In this paper we outline Eidolon, a software environment aimed at addressing these challenges, and discuss the novel integration of its visualization and analysis components. These capabilities are demonstrated through the example of cardiac strain analysis, showing that Eidolon supports and enhances the workflow.

Eric Kerfoot, Lauren Fovargue, Simone Rivolo, Wenzhe Shi, Daniel Rueckert, David Nordsletten, Jack Lee, Radomir Chabiniok, Reza Razavi
Backmatter
Metadata
Title: Medical Imaging and Augmented Reality
Edited by: Guoyan Zheng, Hongen Liao, Pierre Jannin, Philippe Cattin, Su-Lin Lee
Copyright year: 2016
Electronic ISBN: 978-3-319-43775-0
Print ISBN: 978-3-319-43774-3
DOI: https://doi.org/10.1007/978-3-319-43775-0