
2017 | Book

Intravascular Imaging and Computer Assisted Stenting, and Large-Scale Annotation of Biomedical Data and Expert Label Synthesis

6th Joint International Workshops, CVII-STENT 2017 and Second International Workshop, LABELS 2017, Held in Conjunction with MICCAI 2017, Québec City, QC, Canada, September 10–14, 2017, Proceedings

Edited by: M. Jorge Cardoso, Tal Arbel, Dr. Su-Lin Lee, Veronika Cheplygina, Dr. Simone Balocco, Diana Mateus, Guillaume Zahnd, Lena Maier-Hein, Stefanie Demirci, Prof. Eric Granger, Prof. Luc Duong, Marc-André Carbonneau, Dr. Shadi Albarqouni, Gustavo Carneiro

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed joint proceedings of the 6th Joint International Workshop on Computing and Visualization for Intravascular Imaging and Computer Assisted Stenting, CVII-STENT 2017, and the Second International Workshop on Large-Scale Annotation of Biomedical Data and Expert Label Synthesis, LABELS 2017, held in conjunction with the 20th International Conference on Medical Imaging and Computer-Assisted Intervention, MICCAI 2017, in Québec City, QC, Canada, in September 2017.

The 6 full papers presented at CVII-STENT 2017 and the 11 full papers presented at LABELS 2017 were carefully reviewed and selected. The CVII-STENT papers feature the state of the art in imaging, treatment, and computer-assisted intervention in the field of endovascular interventions. The LABELS papers present a variety of approaches for dealing with few labels, from transfer learning to crowdsourcing.

Table of Contents

Frontmatter

6th Joint International Workshops on Computing and Visualization for Intravascular Imaging and Computer Assisted Stenting, CVII-STENT 2017

Frontmatter
Robust Detection of Circles in the Vessel Contours and Application to Local Probability Density Estimation
Abstract
In this work we propose a technique to automatically estimate circular cross-sections of the vessels in CT scans. First, a circular contour is extracted for each slice of the CT by using the Hough transform. Afterward, the locations of the circles are optimized by means of a parametric snake model, and those circles which best fit the contours of the vessels are selected by applying a robust quality criterion. Finally, this collection of circles is used to estimate the local probability density functions of the image intensity inside and outside the vessels. We present a large variety of experiments on CT scans which show the reliability of the proposed method.
Luis Alvarez, Esther González, Julio Esclarín, Luis Gomez, Miguel Alemán-Flores, Agustín Trujillo, Carmelo Cuenca, Luis Mazorra, Pablo G. Tahoces, José M. Carreira
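As a minimal sketch of the circle-estimation idea above, an algebraic (Kåsa) least-squares fit recovers a circle from contour points. This is a generic illustration, not the authors' Hough-plus-snake pipeline, and `fit_circle` is a hypothetical helper name:

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit to 2D contour points.

    Solves D*x + E*y + F = -(x^2 + y^2) in the least-squares sense;
    the circle is centre (-D/2, -E/2), radius sqrt(D^2/4 + E^2/4 - F).
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return (cx, cy), r
```

On noiseless points the fit is exact; with noisy contours it gives the initial circle that a snake-style refinement could then optimize.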
Intra-coronary Stent Localization in Intravascular Ultrasound Sequences, A Preliminary Study
Abstract
An intraluminal coronary stent is a metal scaffold deployed in a stenotic artery during Percutaneous Coronary Intervention (PCI). Intravascular Ultrasound (IVUS) is a catheter-based imaging technique generally used for assessing the correct placement of the stent. All the approaches proposed so far for stent analysis have focused only on strut detection, whereas this paper proposes a novel approach to detect the boundaries and the position of the stent along the pullback. The pipeline of the method requires the identification of the stable frames of the sequence and the reliable detection of stent struts. Using these data, a measure of likelihood for a frame to contain a stent is computed. Then, a robust binary representation of the presence of the stent in the pullback is obtained by applying an iterative, multi-scale approximation of the signal to symbols using the SAX algorithm. Results obtained by comparing the automatic output with the manual annotations of two observers on 80 in-vivo IVUS sequences show that the method approaches the inter-observer variability scores.
Simone Balocco, Francesco Ciompi, Juan Rigla, Xavier Carrillo, Josepa Mauri, Petia Radeva
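The symbolic approximation step can be illustrated with a plain, single-scale SAX: z-normalise the signal, reduce it with a piecewise aggregate approximation (PAA), then map segment means to letters via Gaussian breakpoints. This is a textbook sketch, not the paper's iterative multi-scale variant; the breakpoints are the standard approximate values for a four-letter alphabet:

```python
import numpy as np

# Approximate Gaussian quantile breakpoints for a 4-letter alphabet
# (the standard SAX lookup-table values).
BREAKPOINTS = np.array([-0.67, 0.0, 0.67])

def sax(signal, word_length, alphabet="abcd"):
    """Symbolic Aggregate approXimation: z-normalise, reduce with PAA,
    then map each segment mean to a symbol via Gaussian breakpoints."""
    s = np.asarray(signal, dtype=float)
    s = (s - s.mean()) / s.std()                       # z-normalisation
    segments = np.array_split(s, word_length)
    paa = np.array([seg.mean() for seg in segments])   # piecewise aggregate approx.
    return "".join(alphabet[i] for i in np.digitize(paa, BREAKPOINTS))
```

For a stent-presence signal, low and high segments map to distinct letters, which is what makes the binary presence profile easy to extract from the symbol string.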
Robust Automatic Graph-Based Skeletonization of Hepatic Vascular Trees
Abstract
The topologies of vascular trees embedded inside soft tissues carry important information which can be successfully exploited in the context of computer-assisted planning and navigation. For example, topological matching of complete and/or partial hepatic trees provides an important source of correspondences that can be employed straightforwardly by image registration algorithms. Therefore, robust and reliable extraction of vascular topologies from both pre- and intra-operative medical images is an important task in surgical planning and navigation. In this paper, we propose an extension of an existing graph-based method in which the vascular topology is constructed by computing shortest paths in a minimum-cost spanning tree obtained from a binary mask of the vascularization. We suppose that the binary mask is extracted from a 3D CT image by automatic segmentation and thus suffers from significant artefacts and noise. Compared to the original algorithm, the proposed method (i) employs a new weighting measure, which results in smoothing of the extracted topology, and (ii) introduces a set of tests based on various geometric criteria which are executed in order to detect and remove spurious branches. The method is evaluated on vascular trees extracted from abdominal contrast-enhanced CT scans and MR images. It is quantitatively compared to the original version of the algorithm, showing the importance of the proposed modifications. Since the branch testing depends on parameters, a parametric study of the proposed method is presented in order to identify the optimal parametrization.
R. Plantefève, S. Kadoury, A. Tang, I. Peterlik
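The minimum-cost spanning tree at the heart of the skeletonization can be sketched with Kruskal's algorithm on a toy graph; in the actual method the nodes would be voxels of the binary mask and the edge weights would come from the proposed weighting measure:

```python
def minimum_spanning_tree(n, edges):
    """Kruskal's algorithm with union-find.

    edges: iterable of (weight, u, v) with node ids 0..n-1.
    Returns the MST as a list of (u, v, weight) edges.
    """
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    tree = []
    for w, u, v in sorted(edges):           # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                        # keep edge only if it joins two components
            parent[ru] = rv
            tree.append((u, v, w))
    return tree
```

Shortest paths in this tree (root to leaf) then yield centerline branches; the spurious-branch tests of the paper would prune the resulting path set.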
DCNN-Based Automatic Segmentation and Quantification of Aortic Thrombus Volume: Influence of the Training Approach
Abstract
Computerized Tomography Angiography (CTA) based assessment of Abdominal Aortic Aneurysms (AAA) treated with Endovascular Aneurysm Repair (EVAR) is essential during follow-up to evaluate the progress of the patient over time, compare it to the pre-operative situation, and detect complications. In this context, accurate assessment of the aneurysm or thrombus volume pre- and post-operatively is required. However, a quantifiable and trustworthy evaluation is hindered by the lack of automatic, robust and reproducible thrombus segmentation algorithms. We propose an automatic pipeline for thrombus volume assessment, starting from its segmentation based on a Deep Convolutional Neural Network (DCNN), both pre-operatively and post-operatively. The aim is to investigate several training approaches and evaluate their influence on the thrombus volume characterization.
Karen López-Linares, Luis Kabongo, Nerea Lete, Gregory Maclair, Mario Ceresa, Ainhoa García-Familiar, Iván Macía, Miguel Ángel González Ballester
Vascular Segmentation in TOF MRA Images of the Brain Using a Deep Convolutional Neural Network
Abstract
Cerebrovascular diseases are one of the main causes of death and disability in the world. Within this context, fast and accurate automatic cerebrovascular segmentation is important for clinicians and researchers to analyze the vessels of the brain, determine criteria of normality, and identify and study cerebrovascular diseases. Nevertheless, automatic segmentation is challenging due to the complex shape, inhomogeneous intensity, and inter-person variability of normal and malformed vessels. In this paper, a deep convolutional neural network (CNN) architecture is used to automatically segment the vessels of the brain in time-of-flight magnetic resonance angiography (TOF MRA) images of healthy subjects. Bi-dimensional manually annotated image patches are extracted in the axial, coronal, and sagittal directions and used as input for training the CNN. For segmentation, each voxel is individually analyzed using the trained CNN by considering the intensity values of neighboring voxels that belong to its patch. Experiments were performed with TOF MRA images of five healthy subjects, using varying numbers of images to train the CNN. Cross validations revealed that the proposed framework is able to segment the vessels with an average Dice coefficient ranging from 0.764 to 0.786 depending on the number of images used for training. In conclusion, the results of this work suggest that CNNs can be used to segment cerebrovascular structures with an accuracy similar to other high-level segmentation methods.
Renzo Phellan, Alan Peixinho, Alexandre Falcão, Nils D. Forkert
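The overlap scores reported above use the standard Dice coefficient, which is straightforward to compute from two binary masks; a minimal sketch:

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0
```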
VOIDD: Automatic Vessel-of-Intervention Dynamic Detection in PCI Procedures
Abstract
In this article, we present work towards improving the overall workflow of Percutaneous Coronary Intervention (PCI) procedures by enabling the imaging instruments to precisely monitor the steps of the procedure. In the long term, such capabilities can be used to optimize the image acquisition and reduce the radiation dose or the amount of contrast medium employed during the procedure. We present the automatic VOIDD algorithm, which detects the vessel of intervention that is going to be treated during the procedure by combining information from the vessel image with contrast agent injection and images acquired during guidewire tip navigation. Thanks to the robust guidewire tip segmentation method, this algorithm is also able to automatically detect the sequence corresponding to guidewire navigation. We present an evaluation methodology which characterizes the correctness of the guidewire tip detection and the correct identification of the vessel navigated during the procedure. On a dataset of 2213 images from 8 sequences of 4 patients, VOIDD identifies the vessel of intervention with an accuracy of 88% or above and detects the absence of the tip with an accuracy of 98% or above, depending on the test case.
Ketan Bacchuwar, Jean Cousty, Régis Vaillant, Laurent Najman

Second International Workshop on Large-Scale Annotation of Biomedical Data and Expert Label Synthesis, LABELS 2017

Frontmatter
Exploring the Similarity of Medical Imaging Classification Problems
Abstract
Supervised learning is ubiquitous in medical image analysis. In this paper we consider the problem of meta-learning – predicting which methods will perform well in an unseen classification problem, given previous experience with other classification problems. We investigate the first step of such an approach: how to quantify the similarity of different classification problems. We characterize datasets sampled from six classification problems by performance ranks of simple classifiers, and define the similarity by the inverse of Euclidean distance in this meta-feature space. We visualize the similarities in a 2D space, where meaningful clusters start to emerge, and show that the proposed representation can be used to classify datasets according to their origin with 89.3% accuracy. These findings, together with the observations of recent trends in machine learning, suggest that meta-learning could be a valuable tool for the medical imaging community.
Veronika Cheplygina, Pim Moeskops, Mitko Veta, Behdad Dashtbozorg, Josien P. W. Pluim
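The dataset representation described above can be sketched directly: rank a fixed set of classifiers on each dataset and compare the rank vectors. The `1/(1 + d)` form below is a slight variant of the paper's inverse Euclidean distance, used here only to avoid division by zero for identical rank vectors:

```python
import numpy as np

def rank_vector(accuracies):
    """Rank the classifiers on one dataset (0 = best performer)."""
    order = np.argsort(-np.asarray(accuracies))
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(order))
    return ranks

def similarity(acc_a, acc_b):
    """Inverse-distance similarity between two datasets' rank vectors."""
    d = np.linalg.norm(rank_vector(acc_a) - rank_vector(acc_b))
    return 1.0 / (1.0 + d)
```

Two datasets on which the same classifiers win and lose in the same order get similarity 1.0, regardless of the absolute accuracy values.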
Real Data Augmentation for Medical Image Classification
Abstract
Many medical image classification tasks share a common unbalanced-data problem: images of the target classes, e.g., certain types of diseases, appear in only a very small portion of the entire dataset. Nowadays, large collections of medical images are readily available. However, it is costly, and may not even be feasible, for medical experts to manually comb through a huge unlabeled dataset to obtain enough representative examples of the rare classes. In this paper, we propose a new method called Unified LF&SM to recommend the most similar images for each class from a large unlabeled dataset for verification by medical experts and inclusion in the seed labeled dataset. Our real data augmentation significantly reduces expensive manual labeling time. In our experiments, Unified LF&SM performed best, selecting a high percentage of relevant images in its recommendations and achieving the best classification accuracy. It is easily extendable to other medical image classification problems.
Chuanhai Zhang, Wallapak Tavanapong, Johnny Wong, Piet C. de Groen, JungHwan Oh
Detecting and Classifying Nuclei on a Budget
Abstract
The benefits of deep neural networks can be hard to realise in medical imaging tasks because training sample sizes are often modest. Pre-training on large data sets and subsequent transfer learning to specific tasks with limited labelled training data has proved a successful strategy in other domains. Here, we implement and test this idea for detecting and classifying nuclei in histology, important tasks that enable quantifiable characterisation of prostate cancer. We pre-train a convolutional neural network for nucleus detection on a large colon histology dataset, and examine the effects of fine-tuning this network with different amounts of prostate histology data. Results show promise for clinical translation. However, we find that transfer learning is not always a viable option when training deep neural networks for nucleus classification. As such, we also demonstrate that semi-supervised ladder networks are a suitable alternative for learning a nucleus classifier with limited data.
Joseph G. Jacobs, Gabriel J. Brostow, Alex Freeman, Daniel C. Alexander, Eleftheria Panagiotaki
Towards an Efficient Way of Building Annotated Medical Image Collections for Big Data Studies
Abstract
Annotating large collections of medical images is essential for building robust image analysis pipelines for different applications, such as disease detection. This process involves expert input, which is costly and time consuming. Semi-automatic labeling and expert sourcing can speed up the process of building such collections. In this work we report innovations in both of these areas. Firstly, we have developed an algorithm, inspired by active learning and self-training, that significantly reduces the number of annotated training images needed to achieve a given level of classifier accuracy. This is an iterative process of labeling, training a classifier, and testing that requires a small set of labeled images at the start, complemented with human labeling of difficult test cases at each iteration. Secondly, we have built a platform for large-scale management and indexing of data and users, as well as for creating and assigning tasks such as labeling and contouring for big data medical imaging studies. This web-based platform provides the tooling for both researchers and annotators within a simple dynamic user interface. Our annotation platform also streamlines the process of iteratively training and labeling in algorithms such as the active learning/self-training described here. In this paper, we demonstrate that the combination of the platform and the proposed algorithm significantly reduces the workload involved in building a large collection of labeled cardiac echo images.
Yaniv Gur, Mehdi Moradi, Hakan Bulu, Yufan Guo, Colin Compas, Tanveer Syeda-Mahmood
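The iterative label-train-test loop can be sketched as a toy self-training routine. A nearest-centroid classifier stands in for the real model, and the distance threshold is a stand-in for routing difficult cases to a human annotator; all names and thresholds here are illustrative:

```python
import numpy as np

def nearest_centroid_predict(centroids, X):
    """Predict the closest centroid per sample; also return the distance."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1), d.min(axis=1)

def self_train(X_lab, y_lab, X_unlab, rounds=3, threshold=1.0):
    """Iteratively label confident unlabeled points and retrain centroids.

    Returns the grown labeled set and the leftover 'difficult' pool,
    which in the paper's workflow would go to a human annotator."""
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    for _ in range(rounds):
        centroids = np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])
        if len(pool) == 0:
            break
        pred, dist = nearest_centroid_predict(centroids, pool)
        confident = dist < threshold        # proxy for "easy" cases
        if not confident.any():
            break                           # only hard cases remain
        X = np.vstack([X, pool[confident]])
        y = np.concatenate([y, pred[confident]])
        pool = pool[~confident]
    return X, y, pool
```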
Crowdsourcing Labels for Pathological Patterns in CT Lung Scans: Can Non-experts Contribute Expert-Quality Ground Truth?
Abstract
This paper investigates what quality of ground truth might be obtained when crowdsourcing specialist medical imaging ground truth from non-experts. Following basic tuition, 34 volunteer participants independently delineated regions belonging to 7 pathological patterns in 20 scans according to expert-provided pattern labels. Participants' annotations were compared to a set of reference annotations using the Dice similarity coefficient (DSC), and were found to range between 0.41 and 0.77; the reference repeatability was 0.81. Analysis of prior imaging experience, annotation behaviour, scan ordering, and time spent showed that only the last was correlated with annotation quality. Multiple observers combined by voxelwise majority vote outperformed a single observer, matching the reference repeatability for 5 of 7 patterns. In conclusion, crowdsourcing from non-experts yields acceptable-quality ground truth, given sufficient expert task supervision and a sufficient number of observers per scan.
Alison Q. O’Neil, John T. Murchison, Edwin J. R. van Beek, Keith A. Goatman
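Combining observers by voxelwise majority vote is a one-liner in NumPy; the strict-majority rule below sends ties (possible with an even number of observers) to background:

```python
import numpy as np

def majority_vote(masks):
    """Voxelwise strict majority vote over binary annotation masks.

    masks: array-like of shape (n_observers, *volume_shape).
    A voxel is foreground only if more than half the observers marked it."""
    masks = np.asarray(masks, dtype=int)
    return (masks.sum(axis=0) * 2 > masks.shape[0]).astype(int)
```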
Expected Exponential Loss for Gaze-Based Video and Volume Ground Truth Annotation
Abstract
Many recent machine learning approaches used in medical imaging are highly reliant on large amounts of image and ground truth data. In the context of object segmentation, pixel-wise annotations are extremely expensive to collect, especially for video and 3D volumes. To reduce this annotation burden, we propose a novel framework that allows annotators to simply observe the object to segment while a $200 eye gaze tracker records where they have looked. Our method then estimates pixel-wise probabilities for the presence of the object throughout the sequence, from which we train a classifier in a semi-supervised setting using a novel Expected Exponential loss function. We show that our framework provides superior performance on a wide range of medical image settings compared to existing strategies, and that our method can be combined with current crowdsourcing paradigms as well.
Laurent Lejeune, Mario Christoudias, Raphael Sznitman
SwifTree: Interactive Extraction of 3D Trees Supporting Gaming and Crowdsourcing
Abstract
Analysis of vascular and airway trees of circulatory and respiratory systems is important for many clinical applications. Automatic segmentation of these tree-like structures from 3D data remains an open problem due to their complex branching patterns, geometrical diversity, and pathology. On the other hand, it is challenging to design intuitive interactive methods that are practical to use in 3D for trees with tens or hundreds of branches. We propose SwifTree, an interactive software for tree extraction that supports crowdsourcing and gamification. Our experiments demonstrate that: (i) aggregating the results of multiple SwifTree crowdsourced sessions achieves more accurate segmentation; (ii) using the proposed game-mode reduces time needed to achieve a pre-set tree segmentation accuracy; and (iii) SwifTree outperforms automatic segmentation methods especially with respect to noise robustness.
Mian Huang, Ghassan Hamarneh
Crowdsourced Emphysema Assessment
Abstract
Classification of emphysema patterns is believed to be useful for improved diagnosis and prognosis of chronic obstructive pulmonary disease. Emphysema patterns can be assessed visually on lung CT scans. Visual assessment is a complex and time-consuming task performed by experts, making it unsuitable for obtaining large amounts of labeled data. We investigate whether visual assessment of emphysema can be framed as an image similarity task that does not require experts. Substituting untrained annotators for experts makes it possible to label datasets much faster and at a lower cost. We use crowd annotators to gather similarity triplets and use t-distributed stochastic triplet embedding to learn an embedding. The quality of the embedding is evaluated by predicting expert-assessed emphysema patterns. We find that although performance varies due to low-quality triplets and randomness in the embedding, we still achieve a median F1 score of 0.58 for prediction of four patterns.
Silas Nyboe Ørting, Veronika Cheplygina, Jens Petersen, Laura H. Thomsen, Mathilde M. W. Wille, Marleen de Bruijne
A Web-Based Platform for Distributed Annotation of Computerized Tomography Scans
Abstract
Computer Aided Diagnosis (CAD) systems are adopting advances at the forefront of computer vision and machine learning to assist medical experts in providing faster diagnoses. The success of CAD systems relies heavily on the availability of high-quality annotated data. To support the annotation process among teams of medical experts, we present a web-based platform developed for distributed annotation of medical images. We capitalize on the HTML5 canvas to allow medical experts to quickly perform segmentation of regions of interest. Experimental evaluation of the proposed platform shows a significant reduction in the time required to perform the annotation of abdominal computerized tomography images. Furthermore, we evaluate the relationship between the size of the harvested regions and the quality of the annotations. Finally, we present additional functionality of the developed platform for the closer examination of 3D point clouds for kidney cancer.
Nicholas Heller, Panagiotis Stanitsas, Vassilios Morellas, Nikolaos Papanikolopoulos
Training Deep Convolutional Neural Networks with Active Learning for Exudate Classification in Eye Fundus Images
Abstract
Training a deep convolutional neural network for classification in medical tasks is often difficult due to the lack of annotated data samples. Deep convolutional neural networks (CNNs) have been successfully used as automatic detection tools to support the grading of diabetic retinopathy (DR) and macular edema. Nevertheless, the manual annotation of exudates in eye fundus images used to classify the grade of DR is very time-consuming and repetitive for clinical personnel. Active learning algorithms seek to reduce the labeling effort in training machine learning models. This work presents a label-efficient CNN model using expected gradient length, an active learning algorithm, to select the most informative patches and images, converging earlier and to a better local optimum than the usual SGD (Stochastic Gradient Descent) strategy. Our method also generates useful masks for prediction and segments regions of interest.
Sebastian Otálora, Oscar Perdomo, Fabio González, Henning Müller
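Expected gradient length has a convenient closed form for binary logistic regression, which makes the selection rule easy to sketch; the paper applies the criterion to CNN patch classification, so this simplified version only illustrates the idea:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def expected_gradient_length(w, X):
    """EGL score per sample for binary logistic regression.

    The gradient of the log-loss wrt w for label y is (sigmoid(w.x) - y) * x,
    so E_y ||grad|| = p*(1-p)*||x|| + (1-p)*p*||x|| = 2*p*(1-p)*||x||.
    Higher score = more informative sample to query for a label."""
    p = sigmoid(X @ w)
    return 2.0 * p * (1.0 - p) * np.linalg.norm(X, axis=1)

def select_batch(w, X, k):
    """Indices of the k samples with the largest expected gradient length."""
    return np.argsort(-expected_gradient_length(w, X))[:k]
```

Samples near the decision boundary (p ≈ 0.5) score highest, which is exactly the behaviour an active learner wants.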
Uncertainty Driven Multi-loss Fully Convolutional Networks for Histopathology
Abstract
Different works have shown that the combination of multiple loss functions is beneficial when training deep neural networks for a variety of prediction tasks. Generally, such multi-loss approaches are implemented via a weighted multi-loss objective function in which each term encodes a different desired inference criterion. The importance of each term is often set using empirically tuned hyper-parameters. In this work, we analyze the importance of the relative weighting between the different terms of a multi-loss function and propose to leverage the model’s uncertainty with respect to each loss as an automatically learned weighting parameter. We consider the application of colon gland analysis from histopathology images for which various multi-loss functions have been proposed. We show improvements in classification and segmentation accuracy when using the proposed uncertainty driven multi-loss function.
Aïcha BenTaieb, Ghassan Hamarneh
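One widely used way to turn per-task uncertainty into loss weights is the homoscedastic-uncertainty objective in the style of Kendall et al.; it is offered here only as an illustration of the idea, not necessarily the paper's exact formulation:

```python
import numpy as np

def uncertainty_weighted_loss(losses, log_vars):
    """Combine task losses with learned uncertainty weights:
    sum_i exp(-s_i) * L_i + s_i, where s_i = log(sigma_i^2).

    A high-uncertainty task gets a small weight exp(-s_i), while the +s_i
    regulariser keeps the model from inflating every uncertainty to zero
    out all the losses. In training, the s_i would be learned parameters."""
    losses = np.asarray(losses, float)
    log_vars = np.asarray(log_vars, float)
    return float(np.sum(np.exp(-log_vars) * losses + log_vars))
```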
Backmatter
Metadata
Copyright year
2017
Electronic ISBN
978-3-319-67534-3
Print ISBN
978-3-319-67533-6
DOI
https://doi.org/10.1007/978-3-319-67534-3