
About this Book

This book constitutes the refereed proceedings of the First International Workshop on Computer Assisted and Robotic Endoscopy, CARE 2014, held in conjunction with MICCAI 2014, in Boston, MA, USA, in September 2014. The 12 papers presented focus on recent technical advances in computer vision, graphics, robotics, medical imaging, external tracking systems, medical device control systems, information processing techniques, endoscopy, and planning and simulation.



Discarding Non-Informative Regions for Efficient Colonoscopy Image Analysis

The diagnostic yield of colon cancer screening using colonoscopy could improve with intelligent systems. The large amount of data provided by high-definition equipment contains frames with large non-informative regions. Non-informative regions have such low visual quality that even physicians cannot properly identify structures, so identifying such regions is an important step toward efficient and accurate processing. We present a strategy for discarding non-informative regions in colonoscopy frames based on a model of the appearance of such regions. Three different methods are proposed to accurately characterize the boundary between informative and non-informative regions. Preliminary results show a statistically significant difference between the methods, as some are stricter when deciding which part of the image is informative while others are stricter about what constitutes the non-informative region.
Jorge Bernal, Debora Gil, Carles Sánchez, F. Javier Sánchez
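The abstract does not spell out the appearance model, so as a hedged illustration only: a non-informative-region detector can be sketched as a block-wise low-contrast test, where blocks whose intensity variation is too small to reveal structure are flagged. The `block` size and `contrast_thresh` values are hypothetical choices, not the authors' model.

```python
import numpy as np

def noninformative_mask(gray, block=8, contrast_thresh=5.0):
    """Flag low-contrast blocks of a grayscale frame as non-informative.

    gray: 2D float array (a grayscale colonoscopy frame).
    Blocks whose intensity standard deviation falls below
    `contrast_thresh` are marked non-informative (True).
    """
    h, w = gray.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for i in range(h // block):
        for j in range(w // block):
            patch = gray[i * block:(i + 1) * block,
                         j * block:(j + 1) * block]
            mask[i, j] = patch.std() < contrast_thresh
    return mask
```

A frame whose flagged fraction exceeds some budget would then be discarded before further analysis.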

Video-Specific SVMs for Colonoscopy Image Classification

We propose a novel classification framework called the video-specific SVM (V-SVM) for normal-vs-abnormal white-light colonoscopy image classification. V-SVM is an ensemble of linear SVMs, each trained to separate the abnormal images in a particular video from all the normal images in all the videos. Since V-SVM is designed to capture lesion-specific properties as well as intra-class variations, it is expected to perform better than a single SVM. Experiments on a colonoscopy image dataset with about 10,000 images show that V-SVM significantly improves performance over SVM and other baseline classifiers.
Siyamalan Manivannan, Ruixuan Wang, Maria P. Trujillo, Jesus Arbey Hoyos, Emanuele Trucco
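The ensemble structure can be sketched compactly. In this hedged sketch, a least-squares linear classifier stands in for each linear SVM (the training objective differs, but the one-model-per-video layout is the same), and the per-video responses are aggregated with a maximum; the aggregation rule is an assumption, not necessarily the paper's.

```python
import numpy as np

def fit_linear(X, y):
    # Least-squares linear classifier: a simple stand-in for a linear SVM.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def train_vsvm(videos_abnormal, all_normal):
    # One linear model per video: its abnormal frames (+1)
    # versus all normal frames pooled from every video (-1).
    models = []
    for Xa in videos_abnormal:
        X = np.vstack([Xa, all_normal])
        y = np.r_[np.ones(len(Xa)), -np.ones(len(all_normal))]
        models.append(fit_linear(X, y))
    return models

def vsvm_score(models, x):
    # Aggregate the per-video responses; the maximum is used here.
    xb = np.append(x, 1.0)
    return max(float(w @ xb) for w in models)
```

Because each model only has to separate one video's lesions from the pooled normals, lesion-specific appearance is captured without forcing a single decision boundary across all videos.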

Impact of Keypoint Detection on Graph-Based Characterization of Blood Vessels in Colonoscopy Videos

We explore the potential of blood vessels as anatomical landmarks for developing image registration methods for colonoscopy images. An unequivocal representation of blood vessels could guide follow-up methods to track lesions across different interventions. We propose a graph-based representation to characterize network structures, such as blood vessels, based on the use of intersections and endpoints. We present a study assessing the minimal performance a keypoint detector should achieve so that the structure can still be recognized. Experimental results show that, even with a loss of 25 % of the keypoints, the descriptive power of the graphs associated with the vessel pattern is still high enough to recognize blood vessels.
Joan M. Núñez, Jorge Bernal, Miquel Ferrer, Fernando Vilariño
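The robustness question can be made concrete with a toy measure. As a hedged sketch: build a graph over detected keypoints, then check what fraction of its edges survives when some keypoints are missed. Here edges simply connect keypoints within a radius, a stand-in for the vessel segments that link intersections and endpoints in the paper's representation.

```python
import numpy as np

def build_graph(points, radius):
    # Connect keypoints closer than `radius` (a proxy for vessel
    # segments linking intersections and endpoints).
    edges = set()
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if np.linalg.norm(points[i] - points[j]) < radius:
                edges.add((i, j))
    return edges

def edge_overlap(points, keep_idx, radius):
    # Fraction of the full graph's edges that survive when only the
    # keypoints in `keep_idx` are detected.
    full = build_graph(points, radius)
    kept = {(i, j) for (i, j) in full if i in keep_idx and j in keep_idx}
    return len(kept) / len(full) if full else 1.0
```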

A Novel Approach on the Colon Wall Segmentation and Its Application

Measuring the thickness of the colon wall is of great significance for colonic polyp detection in computed tomographic colonography (CTC). To achieve this, accurately extracting the boundaries of both the inner and outer colon wall is the prime task. However, the low contrast in CT attenuation values between the colon wall and the surrounding tissues prevents many traditional algorithms from achieving this. Current research employs two steps for segmenting the inner and outer colon wall: (1) finding the inner colon wall; and (2) applying a geodesic active contour (GAC) based level set to extract the outer boundary of the colon wall. However, when two colon walls stick together, the task becomes much more complicated and the threshold level set segmentation method may fail. In view of this, we present a minimum surface overlay model to extract the inner wall. Combined with the superposition model, we are able to depict the outer wall of the colon in a natural way. We validated the proposed algorithm on 60 CTC datasets. Compared with the GAC model, the newly presented method is more reliable for colon wall segmentation. Additionally, the wall thickness measurements also provide useful hints for colonic polyp detection.
Huafeng Wang, Wenfeng Song, Lihong Li, Yuan Cao, Haixia Pan, Ming Ma, Jiang Huang, Guangming Mao, Zhengrong Liang
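Once both boundaries are available, the downstream thickness measurement is straightforward. As a hedged sketch (the paper's exact thickness definition is not given in the abstract): for each inner-wall boundary point, take the distance to the nearest outer-wall point.

```python
import numpy as np

def wall_thickness(inner_pts, outer_pts):
    # For each inner-wall point, the distance to the nearest
    # outer-wall point: a simple proxy for local wall thickness.
    d = np.linalg.norm(inner_pts[:, None, :] - outer_pts[None, :, :],
                       axis=2)
    return d.min(axis=1)
```

Unusually large local values of this measure would then be candidate sites of wall thickening, i.e. possible polyps.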

Cerebral Ventricle Segmentation from 3D Pre-term IVH Neonate MR Images Using Atlas-Based Convex Optimization

Intraventricular hemorrhage (IVH), or brain bleeding, is a common condition among pre-term infants that occurs in 15–30 % of very low birth weight preterm neonates. Infants with IVH are at risk of developing progressive dilatation of the ventricles, a pathology called hydrocephalus. The ventricular size of patients with mild enlargement of the cerebral ventricles is monitored by ultrasound or MR imaging of the brain for 1–2 years, as they are at risk of developing hydrocephalus. This paper proposes an accurate and numerically efficient algorithm for segmenting the cerebral ventricle system of pre-term IVH neonates from 3D T1-weighted MR images. The proposed segmentation algorithm combines convex optimization with learned priors of image intensities and a label probabilistic map built from a multi-atlas registration scheme. Leave-one-out cross validation using 10 IVH patient T1-weighted MR images showed that the proposed method yielded a mean DSC of 83.1 % ± 4.2 %, a MAD of 1.0 ± 0.7 mm, a MAXD of 11.3 ± 7.3 mm, and a VD of 6.5 % ± 6.2 %, suggesting that it can be used in clinical practice for ventricle volume measurements of IVH neonate patients.
Wu Qiu, Jing Yuan, Martin Rajchl, Jessica Kishimoto, Eranga Ukwatta, Sandrine de Ribaupierre, Aaron Fenster
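Of the reported metrics, the Dice similarity coefficient (DSC) is the overlap measure and is simple to state exactly; MAD and MAXD are surface-distance metrics and VD is a volume difference. A minimal DSC sketch for binary masks:

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient between two binary masks:
    # 2 |A ∩ B| / (|A| + |B|).
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```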

Fully Automatic CT Segmentation for Computer-Assisted Pre-operative Planning of Hip Arthroscopy

Extraction of both pelvic and femoral surface models of a hip joint from CT data for computer-assisted pre-operative planning of hip arthroscopy is addressed. We present a method for fully automatic image segmentation of a hip joint. Our method combines fast random forest (RF) regression based landmark detection and atlas-based segmentation with articulated statistical shape model (aSSM) based hip joint reconstruction. The two fundamental contributions of our method are: (1) an improved fast Gauss transform (IFGT) is used within the RF regression framework for fast and accurate landmark detection, which allows a fully automatic initialization of the atlas-based segmentation; and (2) aSSM-based fitting is used to preserve the hip joint structure and to avoid penetration between the pelvic and femoral models. Validation on 30 hip CT images shows that our method achieves high performance in segmenting the pelvis, left proximal femur, and right proximal femur surfaces with average accuracies of 0.59 mm, 0.62 mm, and 0.58 mm, respectively.
Chengwen Chu, Cheng Chen, Guoyan Zheng

A Comparative Study of Ego-Motion Estimation Algorithms for Teleoperated Robotic Endoscopes

Colorectal cancer is one of the leading causes of cancer-related mortality in the world, although it can be efficiently treated if detected early. Colonoscopy is the most commonly adopted visual screening procedure of the colon, performed by means of a tiny flexible endoscopic camera. In an effort to promote early screening and to help physicians master the endoscope motion, teleoperable robotic endoscopes and wireless capsules are being developed. In order to enable precise and fast closed-loop control of these devices, the accurate 3-D position and orientation of the camera must be known. Estimating the camera ego-motion by processing the endoscopic video provides a viable solution, since it does not require external magnetic trackers during the screening procedure, which can occupy the scope’s operation channel. Furthermore, compared to SLAM or registration approaches, ego-motion estimation algorithms do not need to deal with the highly deformable (and thus highly uncertain) global 3D map of the colon.
This paper provides researchers in the medical image computing community with an in-depth comparison of several state-of-the-art ego-motion estimation methods that can localize the camera with respect to an initial video frame. Four optical-flow (OF) algorithms are analyzed and their performance is examined when used in two state-of-the-art (supervised and unsupervised) 6 degrees of freedom (DoF) ego-motion estimation algorithms. The ability of each of these methods to precisely localize the camera after a long trajectory has been examined. To the best of our knowledge, this is the first work that compares vision-based localization algorithms in an endoscopic scenario.
Gustavo A. Puerto-Souza, Aaron N. Staranowicz, Charreau S. Bell, Pietro Valdastri, Gian-Luca Mariottini

Image-Based Navigation for a Robotized Flexible Endoscope

Robotizing flexible endoscopy enables image-based control of endoscopes. Especially during high-throughput procedures, such as colonoscopy, navigation support algorithms could improve procedure turnaround and ergonomics for the endoscopist.
In this study, we have developed and implemented a navigation algorithm that is based on image classification followed by dark region segmentation. Robustness and accuracy were evaluated on real images obtained from human colonoscopy exams. Comparison was done using manual annotation as a reference. Intraclass correlation (ICC) was employed as a measure of similarity between automated and manual results.
The discrimination of the developed classifier was 6.8, making it reliable. In the experiments, the developed algorithm gave an average ICC of 93 % (range 84.7–98.8 %) over the test image sequences. When images were classified as ‘uninformative’, which led to re-initialization of the algorithm, this was predictive of the dark region segmentation accuracy.
In conclusion, the developed target detection algorithm provided accurate results and is thought to provide reliable assistance in the clinic. The clinical relevance of this kind of navigation and control is currently being investigated.
Nanda van der Stap, C. H. Slump, Ivo A. M. J. Broeders, Ferdi van der Heijden
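A dark-region-based navigation target can be sketched in a few lines. This is a hedged illustration, not the authors' segmentation: the darkest fraction of pixels is taken as the lumen candidate (the lumen is typically the darkest area in a colonoscopy frame, being farthest from the light source), and its centroid serves as the steering target. The `frac` quantile is a hypothetical parameter.

```python
import numpy as np

def lumen_target(gray, frac=0.05):
    """Centroid of the darkest pixels: a crude lumen/steering target.

    gray: 2D intensity array. The darkest `frac` quantile of pixels
    is taken as the dark region; returns (row, col) of its centroid.
    """
    thresh = np.quantile(gray, frac)
    rows, cols = np.nonzero(gray <= thresh)
    return rows.mean(), cols.mean()
```

An upstream classifier would first reject uninformative frames (blur, wall contact), re-initializing the tracker instead of steering toward a spurious dark patch.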

Is Multi-model Feature Matching Better for Endoscopic Motion Estimation?

Camera motion estimation is a standard yet critical step in endoscopic visualization. It is affected by the variation in locations and correspondences of features detected in 2D images. Feature detectors and descriptors vary, though one of the most widely used remains SIFT. Practitioners usually also adopt its feature matching strategy, which defines inliers as the feature pairs subject to a global affine transformation. For endoscopic videos, however, we are curious whether it is more suitable to cluster features into multiple groups, still enforcing the same transformation as in SIFT within each group. Such a multi-model idea has recently been examined in the Multi-Affine work, which outperforms Lowe’s SIFT in terms of re-projection error on minimally invasive endoscopic images with manually labelled ground-truth matches of SIFT features. Since their difference lies in matching, the accuracy gain of the estimated motion is attributed to the holistic Multi-Affine feature matching algorithm. More concretely, though, the matching criterion and point searching can be the same as those built into SIFT; we argue that the real variation is only the motion model verification: we either enforce a single global motion model or employ a group of multiple local ones. In this paper, we investigate how sensitive the estimated motion is to the number of motion models assumed in feature matching. While the sensitivity can be analytically evaluated, we present an empirical analysis in a leave-one-out cross validation setting that does not require ground-truth match labels. The sensitivity is then characterized by the variance of a sequence of motion estimates. We present a series of quantitative comparisons, such as accuracy and variance, between Multi-Affine motion models and the global affine model.
Xiang Xiang, Daniel Mirota, Austin Reiter, Gregory D. Hager
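The single-model-vs-multi-model distinction can be demonstrated on synthetic matches. In this hedged sketch the grouping of features is assumed known (the real method must discover it); the point is only that when matches arise from two different local motions, two local affine fits explain them far better than one global affine.

```python
import numpy as np

def fit_affine(src, dst):
    # Least-squares 2D affine transform mapping src points to dst.
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M                       # 3x2 affine matrix

def residual(src, dst, M):
    # Frobenius norm of the fitting error under affine M.
    A = np.hstack([src, np.ones((len(src), 1))])
    return np.linalg.norm(A @ M - dst)

def multi_model_residual(groups):
    # Multi-model verification: one affine per feature group.
    return sum(residual(s, d, fit_affine(s, d)) for s, d in groups)
```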

Algorithms for Automated Pointing of Cardiac Imaging Catheters

This paper presents a modified controller and expanded algorithms for automatically positioning cardiac ultrasound imaging catheters within the heart to improve treatment of cardiac arrhythmias such as atrial fibrillation. Presented here are a new method for controlling the position and orientation of a catheter, smoother and more accurate automated catheter motion, and initial results of image processing into clinically useful displays. Ultrasound imaging (intracardiac echo, or ICE) catheters are steered by four actuated degrees of freedom (DOF) to produce bi-directional bending in combination with handle rotation and translation. Closed form solutions for forward and inverse kinematics enable position control of the catheter tip. Additional kinematic calculations enable 1-DOF angular control of the imaging plane. The combination of positioning with imager rotation enables a wide range of visualization capabilities, such as recording a sequence of ultrasound images and reconstructing them into 3D or 4D volumes for diagnosis and treatment. The algorithms were validated with a robotic test bed and the resulting images were reconstructed into 3D volumes. This capability may improve the efficiency and effectiveness of intracardiac catheter interventions by allowing visualization of soft tissues or working instruments. The methods described here are applicable to any long thin tendon-driven tool (with single or bi-directional bending) requiring accurate tip position and orientation control.
Paul M. Loschak, Laura J. Brattain, Robert D. Howe
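The closed-form kinematics of such a bending catheter are commonly built on the constant-curvature model: the bending section is treated as a circular arc whose plane is rolled by the handle rotation. The sketch below is that standard model, hedged: the paper's actual closed-form solutions (four actuated DOF, bi-directional bending) are more complete than this single-segment illustration.

```python
import numpy as np

def tip_position(length, bend_angle, handle_rotation):
    """Tip position of a single constant-curvature bending segment.

    length: arc length of the bending section.
    bend_angle: total bending angle theta (radians); 0 = straight.
    handle_rotation: roll phi about the catheter axis (radians).
    Returns (x, y, z) with z along the straight catheter axis.
    """
    if abs(bend_angle) < 1e-9:
        return np.array([0.0, 0.0, length])
    r = length / bend_angle             # radius of curvature
    x_plane = r * (1 - np.cos(bend_angle))   # in-plane lateral offset
    z = r * np.sin(bend_angle)
    # Roll the bending plane about the axis by the handle rotation.
    return np.array([x_plane * np.cos(handle_rotation),
                     x_plane * np.sin(handle_rotation),
                     z])
```

Inverting such a model (solving for bend angle and roll from a desired tip position) is what enables the position control and imaging-plane pointing described above.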

Endoscopic Sheffield Index for Unsupervised In Vivo Spectral Band Selection

Endoscopic procedures provide important information about the internal patient anatomy but are currently restricted to a 2D texture analysis of the visible organ surfaces. Spectral imaging has high potential for generating valuable complementary information about the molecular tissue composition but suffers from long image acquisition times. As the technique requires an aligned stack of images, its benefit in endoscopic procedures is still very limited due to continuous motion of the camera and the tissue. In this paper, we present an information theory based approach to band selection for endoscopic spectral imaging. In contrast to previous approaches, our concept requires neither labelled training data nor an elaborate light-tissue interaction model. According to a validation study using phantom data as well as in vivo spectral data obtained from five surgeries in a porcine model, only a few bands selected by our method are sufficient to reconstruct the tissue composition with an accuracy similar to that obtainable with the full spectrum.
Sebastian J. Wirkert, Neil T. Clancy, Danail Stoyanov, Shobhit Arya, George B. Hanna, Heinz-Peter Schlemmer, Peter Sauer, Daniel S. Elson, Lena Maier-Hein
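The Sheffield index scores a band subset by the determinant of its covariance matrix, so redundant (highly correlated) bands contribute little. A common unsupervised selection scheme, sketched here under the assumption that a greedy variant suffices (the paper's exact optimization is not reproduced), adds one band at a time to maximize that determinant:

```python
import numpy as np

def sheffield_select(X, k):
    """Greedily pick k bands maximizing the covariance determinant.

    X: (n_pixels, n_bands) array of spectral samples. At each step
    the band that most increases the determinant of the covariance
    matrix of the selected subset (the Sheffield index) is added.
    """
    selected = []
    for _ in range(k):
        best, best_det = None, -np.inf
        for b in range(X.shape[1]):
            if b in selected:
                continue
            C = np.atleast_2d(np.cov(X[:, selected + [b]], rowvar=False))
            d = np.linalg.det(C)
            if d > best_det:
                best, best_det = b, d
        selected.append(best)
    return selected
```

Because only covariances of the measured spectra are needed, no labels and no light-tissue interaction model are required, matching the unsupervised setting of the paper.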

Towards Personalized Biomechanical Model and MIND-Weighted Point Matching for Robust Deformable MR-TRUS Registration

This paper explores a novel deformable MR-TRUS registration method, which uses a personalized statistical motion model and a robust point matching strategy boosted by the modality independent neighborhood descriptor (MIND). Current deformable MR-TRUS registration methods suffer from inaccurate deformation estimation and unstable point correspondence. To precisely estimate tissue deformation, we construct a personalized statistical motion model (PSMM) on the basis of finite element methods and patient-specific biomechanical properties measured by ultrasound elastography. To accurately obtain the point correspondence between surface point sets segmented from MR and TRUS images, we first adopt the PSMM to provide realistic boundary conditions for point correspondence estimation, and we further introduce MIND to weight a robust point matching procedure. We evaluate our method on five datasets from volunteers. The experimental results demonstrate that our approach provides more accurate and robust MR-TRUS registration than state-of-the-art methods: the target registration error was significantly improved from 2.6 mm to 1.8 mm, which meets the clinical requirement of less than 2.5 mm.
Yi Wang, Dong Ni, Jing Qin, Muqing Lin, Xiaoyan Xie, Ming Xu, Pheng Ann Heng

