
About this Book

This book constitutes the refereed joint proceedings of the 7th Joint International Workshop on Computing and Visualization for Intravascular Imaging and Computer Assisted Stenting, CVII-STENT 2018, and the Third International Workshop on Large-Scale Annotation of Biomedical Data and Expert Label Synthesis, LABELS 2018, held in conjunction with the 21st International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2018, in Granada, Spain, in September 2018.
The 9 full papers presented at CVII-STENT 2018 and the 12 full papers presented at LABELS 2018 were carefully reviewed and selected. The CVII-STENT papers feature the state of the art in imaging, treatment, and computer-assisted intervention in the field of endovascular interventions. The LABELS papers present a variety of approaches for dealing with few labels, from transfer learning to crowdsourcing.



Correction to: Towards Automatic Measurement of Type B Aortic Dissection Parameters: Methods, Applications and Perspective

The original version of the chapter starting on p. 64 was revised. The author names and their affiliations have been changed.
Jianning Li, Long Cao, W. Cheng, M. Bowen, Wei Guo

Proceedings of the 7th Joint MICCAI Workshop on Computing and Visualization for Intravascular Imaging and Computer Assisted Stenting (CVII-STENT 2018)


Blood-Flow Estimation in the Hepatic Arteries Based on 3D/2D Angiography Registration

Digital subtraction angiography (DSA) images are routinely used to guide endovascular interventions such as embolization or angioplasty/stenting procedures. In clinical practice, flow assessment is evaluated subjectively based on the experience of the operator. Quantitative DSA (qDSA) using optical imaging has been developed to provide quantitative measurements using parameters such as transit time or time to peak. We propose a generalizable method to estimate the actual flow by tracking the contrast agent on the 2D cine images (to evaluate transit time) and using locally rigid registrations of the 2D cine angiograms to the 3D vascular segmentation (to calculate flow rate). An in-vitro endovascular intervention was simulated using a multibranch phantom reproducing a porcine liver arterial geometry. Water was pumped from each exit branch using a syringe pump in pull mode. The main intake was switched from water to the contrast agent under angiographic acquisition using a 3-way valve selector. Knowing the actual flow exiting each branch, the same flow was applied to each output in 3 separate experiments (2, 5, and 10 mL/min). The average estimated blood flow rate was within a 16% \((\pm 11\%)\) error range in all experiments compared to the pump flow settings. This novel flow quantification method is promising for optimizing and improving the effectiveness of embolization and revascularization procedures such as transarterial chemoembolization of the liver.
Simon Lessard, Rosalie Planteféve, François Michaud, Catherine Huet, Gilles Soulez, Samuel Kadoury
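The two quantities the method combines can be illustrated with a minimal sketch: the contrast transit time between two cross-sections is read off the time-intensity curves, and the flow rate then follows from the lumen volume given by the 3D segmentation. This is an illustrative reconstruction with hypothetical function names, not the authors' implementation:

```python
import numpy as np

def transit_time(curve_proximal, curve_distal, frame_interval_s):
    """Transit time between two vessel cross-sections, taken as the
    delay between the frames where each time-intensity curve first
    reaches half of its maximum (a common bolus-tracking heuristic)."""
    def arrival(curve):
        c = np.asarray(curve, dtype=float)
        return int(np.argmax(c >= c.max() / 2.0))
    return (arrival(curve_distal) - arrival(curve_proximal)) * frame_interval_s

def flow_rate_ml_per_min(segment_volume_ml, transit_time_s):
    """Volumetric flow rate Q = V / dt, with V the lumen volume between
    the two cross-sections (from the 3D segmentation), in mL/min."""
    return segment_volume_ml / transit_time_s * 60.0
```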

Automated Quantification of Blood Flow Velocity from Time-Resolved CT Angiography

Contrast-enhanced computed tomography angiography (CE-CTA) provides valuable, non-invasive assessment of lower extremity peripheral arterial disease (PAD). The advent of wide beam CT scanners has enabled multiple CT acquisitions over the same structure at a high frame rate, facilitating time-resolved CTA acquisitions. In this study, we investigate the technical feasibility of automatically quantifying the bolus arrival time and blood velocity in the arteries below the knee from time-resolved CTA. Our approach is based on arterial segmentation and local estimation of the bolus arrival time. The results are compared to values obtained through manual reading of the datasets and show good agreement. Based on a small patient study, we explore the initial utility of these quantitative measures for the diagnosis of lower extremity PAD.
Pieter Thomas Boonen, Nico Buls, Gert Van Gompel, Yannick De Brucker, Dimitri Aerden, Johan De Mey, Jef Vandemeulebroucke
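Given per-point bolus arrival times along the segmented artery, a blood velocity estimate can be sketched as a line fit of arrival time against centerline position; the inverse slope is the propagation velocity. A generic illustration, not the authors' code:

```python
import numpy as np

def blood_velocity_mm_per_s(centerline_pos_mm, arrival_time_s):
    """Fit arrival time as a linear function of position along the
    artery centerline; the inverse slope is the bolus velocity."""
    slope, _ = np.polyfit(centerline_pos_mm, arrival_time_s, 1)
    return 1.0 / slope
```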

Multiple Device Segmentation for Fluoroscopic Imaging Using Multi-task Learning

For endovascular aortic repair (EVAR), integrating preoperative information of the aortic anatomy with intraoperative fluoroscopy can aid in reducing radiation exposure, contrast agent use, and procedure time. However, the quality of this fusion may deteriorate over the course of the intervention due to patient movement or deformation of the vasculature caused by interventional tools. Automatically detecting the instruments present in the X-ray image can help to assess the degree of deterioration, trigger automatic re-registration, or aid in automatic workflow phase detection and process modeling. In this work, we investigate a flexible approach to segment different devices based on fully convolutional neural networks using multi-task learning. We evaluate the proposed approach on a set of 38 X-ray images acquired during EVAR interventions, targeting the segmentation of aortic stents, stiff guidewires and pigtail catheters, and compare the results to the performance of single-task networks. We achieve performance similar to single-task networks, with Dice coefficients between 0.95 and 0.80 depending on the device, while speeding up computation by a factor of two.
Katharina Breininger, Tobias Würfl, Tanja Kurzendorfer, Shadi Albarqouni, Marcus Pfister, Markus Kowarschik, Nassir Navab, Andreas Maier
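The Dice coefficients quoted above measure binary mask overlap; for reference, a minimal implementation of the metric:

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask):
    """Dice overlap between predicted and ground-truth binary masks:
    2|A∩B| / (|A| + |B|); 1.0 for identical non-empty masks."""
    pred, gt = np.asarray(pred_mask, bool), np.asarray(gt_mask, bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0
```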

Segmentation of the Aorta Using Active Contours with Histogram-Based Descriptors

This work presents an automatic method to segment the aortic lumen in computed tomography scans by combining an ellipse-based structure of the artery and an active contour model. The general shape of the aorta is first estimated by adapting the contour of its cross-sections to ellipses oriented in the direction orthogonal to the course of the vessel. From this set of ellipses, an initial segmentation is computed, which is used as starting approximation for the active contour technique. Apart from the traditional attraction and regularization terms of the active contours, an additional term is included to make the contour evolve according to the likelihood of a given intensity to be inside the aorta or in the surrounding tissues. With this technique, it is possible to adapt the boundary of the initial segmentation by considering not only the most significant edges, but also the distribution of the intensities inside and surrounding the aortic lumen.
Miguel Alemán-Flores, Daniel Santana-Cedrés, Luis Alvarez, Agustín Trujillo, Luis Gómez, Pablo G. Tahoces, José M. Carreira
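The additional histogram-based term can be sketched as a per-pixel log-likelihood ratio between lumen and background intensity distributions; this is an illustrative reconstruction (bin edges and normalized histograms are assumed inputs), not the authors' exact formulation:

```python
import numpy as np

def region_log_likelihood(intensity, hist_in, hist_out, bin_edges):
    """Log-likelihood ratio of a pixel intensity belonging to the
    aortic lumen versus the surrounding tissue, computed from
    normalized intensity histograms; positive values favor the lumen
    and thus push the contour outward."""
    idx = np.clip(np.digitize(intensity, bin_edges) - 1, 0, len(hist_in) - 1)
    eps = 1e-8  # avoid log(0) for empty histogram bins
    return np.log((np.asarray(hist_in)[idx] + eps) /
                  (np.asarray(hist_out)[idx] + eps))
```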

Layer Separation in X-ray Angiograms for Vessel Enhancement with Fully Convolutional Network

Percutaneous coronary intervention is a treatment for coronary artery disease, which is performed under image-guidance using X-ray angiography. The intensities in an X-ray image are a superimposition of 2D structures projected from 3D anatomical structures, which makes robust information processing challenging. The purpose of this work is to investigate to what extent vessel layer separation can be achieved with deep learning, especially adversarial networks. To this end, we develop and evaluate a deep learning based method for vessel layer separation. In particular, the method utilizes a fully convolutional network (FCN), which was trained by two different strategies: an \(L_1\) loss and a combination of \(L_1\) and adversarial losses. The experiment results show that the FCN trained with both losses can well enhance vessel structures by separating the vessel layer, while the \(L_1\) loss results in better contrast. In contrast to traditional layer separation methods [1], both our methods can be executed much faster and thus have potential for real-time applications.
Haidong Hao, Hua Ma, Theo van Walsum
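The two training strategies differ only in the generator objective; the combined loss can be sketched in pix2pix style, with an assumed weighting (the exact weighting in the paper may differ):

```python
import numpy as np

def generator_loss(pred, target, disc_score_on_pred, l1_weight=100.0):
    """L1 reconstruction term plus a non-saturating adversarial term
    -log D(G(x)); taking l1_weight very large recovers the pure-L1
    strategy, which gave better contrast in the paper."""
    l1 = np.abs(np.asarray(pred) - np.asarray(target)).mean()
    adv = -np.log(disc_score_on_pred + 1e-8)
    return l1_weight * l1 + adv
```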

Generation of a HER2 Breast Cancer Gold-Standard Using Supervised Learning from Multiple Experts

Breast cancer is one of the most common cancers in women around the world. For diagnosis, pathologists evaluate the expression of biomarkers such as the HER2 protein using immunohistochemistry on tissue extracted by a biopsy. This assessment is performed through microscopic inspection, estimating the intensity and integrity of the staining of the cell membranes and scoring the sample as 0 (negative), 1+, 2+, or 3+ (positive): a subjective decision that depends on the interpretation of the pathologist.
This work aims to achieve consensus among the opinions of pathologists on HER2 breast cancer biopsies, using supervised learning methods based on multiple experts. The main goal is to generate a reliable public breast cancer gold standard, to be used as a training/testing dataset in future developments of machine learning methods for automatic HER2 overexpression assessment.
Thirty breast cancer biopsies with positive and negative diagnoses were collected, and tumor regions were marked as regions of interest (ROIs). A magnification of \(20\times \) was used to crop non-overlapping rectangular sections according to a grid over the ROIs, yielding a dataset of 1,250 images.
To collect the pathologists' opinions, an Android application was developed. The biopsy sections are presented in random order, and for each image the expert must assign a score (0, 1+, 2+, 3+). Currently, six expert Chilean breast cancer pathologists are working on the same set of samples.
Getting the pathologists' acceptance was a hard and time-consuming task. Moreover, obtaining the pathologists' scores requires subtle communication and time to manage their progress in the use of the application.
Violeta Chang

Deep Learning-Based Detection and Segmentation for BVS Struts in IVOCT Images

Bioresorbable Vascular Scaffold (BVS) is the latest stent type for the treatment of coronary artery disease. A major challenge of BVS is that if it is malapposed during implantation, it may potentially increase the risk of late stent thrombosis. Therefore, it is important to analyze strut malapposition during implantation. This paper presents an automatic method for BVS malapposition analysis in intravascular optical coherence tomography images. Struts are first detected by a detector trained through deep learning. Then, strut boundaries are segmented using dynamic programming. Based on the segmentation, apposed and malapposed struts are discriminated automatically. Experimental results show that the proposed method successfully detected 97.7% of 4029 BVS struts with 2.41% false positives. The average Dice coefficient between the segmented struts and ground truth was 0.809. We conclude that the proposed method is accurate and efficient for BVS strut detection and segmentation, and enables automatic malapposition analysis.
Yihui Cao, Yifeng Lu, Qinhua Jin, Jing Jing, Yundai Chen, Jianan Li, Rui Zhu
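The dynamic-programming boundary step can be illustrated on a polar cost image (rows = radius, columns = angle): find the minimal-cost path across all angles while restricting the radius change to one pixel per angular step. This is a generic sketch of the classic technique, not the authors' code:

```python
import numpy as np

def dp_boundary(cost):
    """Minimal-cost boundary through a polar cost image of shape
    (n_radii, n_angles), with radius changes limited to +/-1 pixel
    per angular step. Returns one radius index per angle."""
    n_r, n_a = cost.shape
    acc = cost.copy().astype(float)  # accumulated cost table
    for a in range(1, n_a):
        prev = acc[:, a - 1]
        best = np.minimum(prev, np.minimum(np.roll(prev, 1), np.roll(prev, -1)))
        # np.roll wraps around; forbid wraparound transitions at the edges
        best[0] = min(prev[0], prev[1])
        best[-1] = min(prev[-1], prev[-2])
        acc[:, a] += best
    # backtrack from the cheapest endpoint
    path = [int(np.argmin(acc[:, -1]))]
    for a in range(n_a - 1, 0, -1):
        r = path[-1]
        lo, hi = max(0, r - 1), min(n_r - 1, r + 1)
        path.append(lo + int(np.argmin(acc[lo:hi + 1, a - 1])))
    return path[::-1]
```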

Towards Automatic Measurement of Type B Aortic Dissection Parameters: Methods, Applications and Perspective

Aortic dissection (AD) is caused by blood flowing through an intimal tear into the innermost layer of the aorta, leading to the formation of a true lumen and a false lumen. In type B aortic dissection (TBAD), the tear can appear beyond the left subclavian artery or in the aortic arch according to the Stanford classification. Quantitative and qualitative analysis of the geometrical and biomedical parameters of TBAD, such as the maximum transverse diameter of the thoracic aorta, the maximum diameter of the true and false lumina, and the length of the proximal landing zone, is crucial for treatment planning of thoracic endovascular aortic repair (TEVAR), follow-up, and long-term outcome prediction of TBAD. Accurately measuring the parameters of TBAD is experience-dependent even with the help of computer-aided software. In this paper, we describe our efforts towards the realization of automatic measurement of TBAD parameters, with the hope of helping surgeons better manage the disease and lightening their burden. To achieve this goal, a large standard TBAD database with manual annotation of the entire aorta, true lumen, false lumen and aortic wall was built. A series of deep learning based methods for automatic segmentation of TBAD were developed and evaluated using the database. Finally, automatic measurement techniques were developed based on the output of our automatic segmentation module. Clinical applications of the automatic measurement methods as well as the perspective of deep learning in dealing with TBAD are also discussed.
Jianning Li, Long Cao, W. Cheng, M. Bowen, Wei Guo

Prediction of FFR from IVUS Images Using Machine Learning

We present a machine learning approach for predicting fractional flow reserve (FFR) from intravascular ultrasound (IVUS) images of coronary arteries. IVUS images and FFR measurements were collected from 1744 patients, and 1447 lumen and plaque segmentation masks were generated from 1447 IVUS images using an automatic segmentation model trained on a separate set of 70 IVUS images, with minor manual corrections. Using a total of 114 features from the masks and general patient information, we trained random forest (RF), extreme gradient boosting (XGBoost) and artificial neural network (ANN) models for a binary classification at the FFR 0.8 threshold (FFR < 0.8 vs. FFR \(\ge \) 0.8) for comparison. The ensembled XGBoost models, evaluated on 290 unseen cases, achieved 81% accuracy and 70% recall.
Geena Kim, June-Goo Lee, Soo-Jin Kang, Paul Ngyuen, Do-Yoon Kang, Pil Hyung Lee, Jung-Min Ahn, Duk-Woo Park, Seung-Whan Lee, Young-Hak Kim, Cheol Whan Lee, Seong-Wook Park, Seung-Jung Park
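The reported 81% accuracy and 70% recall are the standard binary-classification metrics, with the positive class encoding hemodynamically significant lesions (FFR < 0.8); for reference:

```python
import numpy as np

def accuracy_and_recall(y_true, y_pred, positive=1):
    """Accuracy and recall for a binary task such as the FFR < 0.8
    classification; `positive` marks the significant-lesion class."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc = float((y_true == y_pred).mean())
    pos = y_true == positive
    rec = float((y_pred[pos] == positive).mean()) if pos.any() else 0.0
    return acc, rec
```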

Deep Learning Retinal Vessel Segmentation from a Single Annotated Example: An Application of Cyclic Generative Adversarial Neural Networks

Supervised deep learning methods such as fully convolutional neural networks have been very effective at medical image segmentation tasks. These approaches are limited, however, by the need for large amounts of labeled training data. The time and labor required for creating human-labeled ground truth segmentations for training examples is often prohibitive. This paper presents a method for the generation of synthetic examples using cyclic generative adversarial neural networks. The paper further shows that a fully convolutional network trained on a dataset of several synthetic examples and a single manually-crafted ground truth segmentation can approach the accuracy of an equivalent network trained on twenty manually segmented examples.
Praneeth Sadda, John A. Onofrey, Xenophon Papademetris
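The core constraint of the cyclic generative adversarial network is the cycle-consistency loss; a schematic with the two generators passed as plain functions (hypothetical names, illustrative only):

```python
import numpy as np

def cycle_consistency_loss(x, y, g_ab, g_ba):
    """L1 cycle-consistency: mapping x through G_AB then G_BA should
    return x, and symmetrically for y; this constraint is what allows
    training from a single annotated example plus synthetic data."""
    loss_a = np.abs(g_ba(g_ab(x)) - x).mean()
    loss_b = np.abs(g_ab(g_ba(y)) - y).mean()
    return float(loss_a + loss_b)
```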

Proceedings of the 3rd International Workshop on Large-Scale Annotation of Biomedical Data and Expert Label Synthesis (LABELS 2018)


An Efficient and Comprehensive Labeling Tool for Large-Scale Annotation of Fundus Images

Computerized labeling tools are often used to systematically record the assessment of fundus images. Carefully designed labeling tools not only save time and enable comprehensive and thorough assessment in clinics, but also economize large-scale data collection for the development of automatic algorithms. To realize efficient and thorough fundus assessment, we developed a new labeling tool with two novel schemes: stepwise labeling and regional encoding. We have used our tool in a large-scale annotation project in which 318,376 annotations for 109,885 fundus images were gathered over a total duration of 421 hours. We believe that the fundamental concepts in our tool can inspire other data collection processes and annotation procedures in different domains.
Jaemin Son, Sangkeun Kim, Sang Jun Park, Kyu-Hwan Jung

Crowd Disagreement About Medical Images Is Informative

Classifiers for medical image analysis are often trained with a single consensus label, based on combining labels given by experts or crowds. However, disagreement between annotators may be informative, and thus removing it may not be the best strategy. As a proof of concept, we predict whether a skin lesion from the ISIC 2017 dataset is a melanoma or not, based on crowd annotations of visual characteristics of that lesion. We compare using the mean annotations, illustrating consensus, to standard deviations and other distribution moments, illustrating disagreement. We show that the mean annotations perform best, but that the disagreement measures are still informative. We also make the crowd annotations used in this paper available at https://figshare.com/s/5cbbce14647b66286544.
Veronika Cheplygina, Josien P. W. Pluim
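The consensus and disagreement features the paper compares are simply low-order moments of each lesion's annotation distribution; a minimal sketch:

```python
import numpy as np

def annotation_moments(scores):
    """Mean (consensus), standard deviation and skewness (disagreement)
    of one lesion's crowd annotations, usable as classifier features."""
    s = np.asarray(scores, dtype=float)
    mean, std = s.mean(), s.std()
    skew = ((s - mean) ** 3).mean() / std ** 3 if std > 0 else 0.0
    return float(mean), float(std), float(skew)
```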

Imperfect Segmentation Labels: How Much Do They Matter?

Labeled datasets for semantic segmentation are imperfect, especially in medical imaging where borders are often subtle or ill-defined. Little work has been done to analyze the effect that label errors have on the performance of segmentation methodologies. Here we present a large-scale study of model performance in the presence of varying types and degrees of error in training data. We trained U-Net, SegNet, and FCN32 several times for liver segmentation with 10 different modes of ground-truth perturbation. Our results show that for each architecture, performance steadily declines with boundary-localized errors; however, U-Net was significantly more robust to jagged boundary errors than the other architectures. We also found that each architecture was very robust to non-boundary-localized errors, suggesting that boundary-localized errors are a fundamentally different and more challenging problem than random label errors in a classification setting.
Nicholas Heller, Joshua Dean, Nikolaos Papanikolopoulos
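One of the simplest boundary-localized perturbations used in such studies shifts every mask boundary by one pixel. A sketch with a 4-connected structuring element; masks are assumed not to touch the image border, since np.roll wraps around:

```python
import numpy as np

def _dilate4(m):
    """One-pixel 4-connected binary dilation via shifted copies."""
    return (m | np.roll(m, 1, 0) | np.roll(m, -1, 0)
              | np.roll(m, 1, 1) | np.roll(m, -1, 1))

def perturb_boundary(mask, grow=True):
    """Grow (dilate) or shrink (erode) a binary label mask by one
    pixel; erosion is implemented as dilation of the complement."""
    m = np.asarray(mask, bool)
    return _dilate4(m) if grow else ~_dilate4(~m)
```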

Crowdsourcing Annotation of Surgical Instruments in Videos of Cataract Surgery

Automating objective assessment of surgical technical skill is necessary to support training and professional certification at scale, even in settings with limited access to an expert surgeon. Likewise, automated surgical activity recognition can improve operating room workflow efficiency, teaching and self-review, and aid clinical decision support systems. However, current supervised learning methods to do so rely on large training datasets. Crowdsourcing has become a standard for curating such large training datasets in a scalable manner. However, the use of crowdsourcing in surgical data annotation and its effectiveness have been studied only in a few settings. In this study, we evaluated the reliability and validity of crowdsourced annotations for information on surgical instruments (names of instruments and pixel locations of key points on instruments). For 200 images sampled from videos of two cataract surgery procedures, we collected 9 independent annotations per image. We observed an inter-rater agreement of 0.63 (Fleiss' kappa), and an accuracy of 0.88 for identification of instruments compared against an expert annotation. We obtained a mean error of 5.77 pixels for annotation of instrument tip key points. Our study shows that crowdsourcing is a reliable and accurate alternative to expert annotations for identifying instruments and instrument tip key points in videos of cataract surgery.
Tae Soo Kim, Anand Malpani, Austin Reiter, Gregory D. Hager, Shameema Sikder, S. Swaroop Vedula
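Fleiss' kappa, the agreement statistic reported above, generalizes Cohen's kappa to many raters; a compact implementation, assuming every image received the same number of annotations:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for inter-rater agreement.
    counts[i, j] = number of raters assigning item i to category j;
    every item must have the same total number of raters."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                    # raters per item
    p_j = counts.sum(axis=0) / counts.sum()      # category proportions
    p_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    p_bar, p_e = p_i.mean(), np.square(p_j).sum()
    return (p_bar - p_e) / (1 - p_e)
```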

Four-Dimensional ASL MR Angiography Phantoms with Noise Learned by Neural Styling

Annotated datasets for evaluation and validation of medical image processing methods can be difficult and expensive to obtain. Alternatively, simulated datasets can be used, but adding realistic noise properties is especially challenging. This paper proposes using neural styling, a deep learning based algorithm, which can automatically learn noise patterns from real medical images and reproduce these patterns in the simulated datasets. In this work, the imaging modality to be simulated is four-dimensional arterial spin labeling magnetic resonance angiography (4D ASL MRA), a modality that includes information of the cerebrovascular geometry and blood flow. The cerebrovascular geometry used to create the simulated phantoms is obtained from segmentations of 3D time-of-flight (TOF) MRA images of healthy volunteers. Dynamic blood flow is simulated according to a mathematical model designed specifically to describe the signal generated by 4D ASL MRA series. Finally, noise is added by using neural styling to learn the noise patterns present in real 4D ASL MRA datasets. Qualitative evaluation of two simulated 4D ASL MRA datasets revealed high similarity of the blood flow dynamics and noise properties as compared to the corresponding real 4D ASL MRA datasets. These simulated phantoms, with realistic noise properties, can be useful for the development, optimization, and evaluation of image processing methods focused on segmentation and blood flow parameters estimation in 4D ASL MRA series.
Renzo Phellan, Thomas Linder, Michael Helle, Thiago V. Spina, Alexandre Falcão, Nils D. Forkert
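Neural styling transfers texture by matching Gram matrices of feature maps; the noise-matching objective can be sketched as follows (an illustrative reconstruction of the standard style loss, not the authors' exact pipeline):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, ...) feature map: the second-order
    channel statistics that encode texture, here the noise pattern."""
    f = np.asarray(features, float).reshape(features.shape[0], -1)
    return f @ f.T / f.shape[1]

def style_loss(feat_simulated, feat_real):
    """Squared Gram-matrix mismatch driving the simulated phantom's
    noise statistics toward those of the real 4D ASL MRA series."""
    return float(np.square(gram_matrix(feat_simulated)
                           - gram_matrix(feat_real)).mean())
```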

Feature Learning Based on Visual Similarity Triplets in Medical Image Analysis: A Case Study of Emphysema in Chest CT Scans

Supervised feature learning using convolutional neural networks (CNNs) can provide concise and disease-relevant representations of medical images. However, training CNNs requires annotated image data. Annotating medical images can be a time-consuming task, and even expert annotations are subject to substantial inter- and intra-rater variability. Assessing visual similarity of images, instead of indicating specific pathologies or estimating disease severity, could allow non-experts to participate, help uncover new patterns, and possibly reduce rater variability. We consider the task of assessing emphysema extent in chest CT scans. We derive visual similarity triplets from visually assessed emphysema extent and learn a low-dimensional embedding using CNNs. We evaluate the networks on 973 images, and show that the CNNs can learn disease-relevant feature representations from the derived similarity triplets. To our knowledge, this is the first medical image application where similarity triplets have been used to learn a feature representation that can embed unseen test images.
Silas Nyboe Ørting, Jens Petersen, Veronika Cheplygina, Laura H. Thomsen, Mathilde M. W. Wille, Marleen de Bruijne
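Embeddings from similarity triplets are commonly trained with a hinge triplet loss; schematically (a generic sketch, not necessarily the authors' exact objective):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge triplet loss: the anchor embedding should be closer to
    the visually more similar scan (positive) than to the less similar
    one (negative) by at least `margin`."""
    d_pos = float(np.sum((anchor - positive) ** 2))
    d_neg = float(np.sum((anchor - negative) ** 2))
    return max(0.0, d_pos - d_neg + margin)
```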

Capsule Networks Against Medical Imaging Data Challenges

A key component of the success of deep learning is the availability of massive amounts of training data. Building and annotating large datasets for solving medical image classification problems is today a bottleneck for many applications. Recently, capsule networks were proposed to deal with shortcomings of Convolutional Neural Networks (ConvNets). In this work, we compare the behavior of capsule networks against ConvNets under typical dataset constraints of medical image analysis, namely small amounts of annotated data and class imbalance. We evaluate our experiments on MNIST, Fashion-MNIST and publicly available medical (histological and retinal image) datasets. Our results suggest that capsule networks can be trained with less data to reach the same or better performance and are more robust to an imbalanced class distribution, which makes our approach very promising for the medical imaging community.
Amelia Jiménez-Sánchez, Shadi Albarqouni, Diana Mateus

Fully Automatic Segmentation of Coronary Arteries Based on Deep Neural Network in Intravascular Ultrasound Images

Accurate segmentation of coronary arteries is important for the diagnosis of cardiovascular diseases. In this paper, we propose a fully convolutional neural network to efficiently delineate the boundaries of the wall and lumen of the coronary arteries using intravascular ultrasound (IVUS) images. Our network addresses multi-label segmentation of the wall and lumen areas at the same time. The main body of the proposed network is U-shaped, containing encoding and decoding paths to learn rich hierarchical representations. A multi-scale input layer is adopted to take a multi-scale input. We deploy a multi-label loss function with weighted pixel-wise cross-entropy to alleviate the imbalance among background, wall, and lumen pixels. The proposed method is compared with three existing methods, and the segmentation results are measured with four metrics: Dice similarity coefficient, Jaccard index, percentage of area difference, and Hausdorff distance, on a total of 38,478 IVUS images from 35 subjects.
Sekeun Kim, Yeonggul Jang, Byunghwan Jeon, Youngtaek Hong, Hackjoon Shim, Hyukjae Chang
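The weighted pixel-wise cross-entropy can be sketched as follows, with per-class weights chosen to up-weight the rare wall and lumen pixels relative to background; shapes and weight values are illustrative assumptions:

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Pixel-wise multi-class cross-entropy with per-class weights.
    probs: (H, W, C) softmax outputs; labels: (H, W) integer classes;
    class_weights: length-C array countering class imbalance."""
    h, w = labels.shape
    p = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    weight = np.asarray(class_weights, float)[labels]
    return float(-(weight * np.log(p + 1e-8)).mean())
```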

Weakly-Supervised Learning for Tool Localization in Laparoscopic Videos

Surgical tool localization is an essential task for the automatic analysis of endoscopic videos. In the literature, existing methods for tool localization, tracking and segmentation require training data that is fully annotated, thereby limiting the size of the datasets that can be used and the generalization of the approaches. In this work, we propose to circumvent the lack of annotated data with weak supervision. We propose a deep architecture, trained solely on image level annotations, that can be used for both tool presence detection and localization in surgical videos. Our architecture relies on a fully convolutional neural network, trained end-to-end, enabling us to localize surgical tools without explicit spatial annotations. We demonstrate the benefits of our approach on a large public dataset, Cholec80, which is fully annotated with binary tool presence information and of which 5 videos have been fully annotated with bounding boxes and tool centers for the evaluation.
Armine Vardazaryan, Didier Mutter, Jacques Marescaux, Nicolas Padoy
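With only image-level labels, localization falls out of the network's spatial class map: global max pooling gives the presence score the network is trained on, and the activation peak gives a tool position. A schematic of this read-out (hypothetical function name):

```python
import numpy as np

def localize_from_heatmap(class_map):
    """Presence score (global max) and tool position (activation peak)
    from an FCN's spatial class activation map, obtained without any
    spatial annotations at training time."""
    y, x = np.unravel_index(int(np.argmax(class_map)), class_map.shape)
    return float(class_map.max()), (int(y), int(x))
```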

Radiology Objects in COntext (ROCO): A Multimodal Image Dataset

This work introduces a new multimodal image dataset, with the aim of detecting the interplay between visual elements and semantic relations present in radiology images. The objective is accomplished by retrieving all image-caption pairs from the open-access biomedical literature database PubMed Central, as these captions describe the visual content in their semantic context. All compound, multi-pane, and non-radiology images were eliminated using an automatic binary classifier fine-tuned with a deep convolutional neural network system. The Radiology Objects in COntext (ROCO) dataset contains over 81k radiology images covering several medical imaging modalities, including Computed Tomography, Ultrasound, X-Ray, Fluoroscopy, Positron Emission Tomography, Mammography, Magnetic Resonance Imaging, and Angiography. All images in ROCO have corresponding captions, keywords, Unified Medical Language System Concept Unique Identifiers and Semantic Types. An out-of-class set of 6k images, ranging from synthetic radiology figures to digital art, is provided to improve prediction and classification performance. Using ROCO, systems for caption and keyword generation can be modeled, which allows multimodal representations for datasets lacking text. Systems with the goal of image structuring and semantic information tagging can be created using ROCO, which is beneficial for image and information retrieval purposes.
Obioma Pelka, Sven Koitka, Johannes Rückert, Felix Nensa, Christoph M. Friedrich

Improving Out-of-Sample Prediction of Quality of MRIQC

MRIQC is a quality control tool that predicts the binary rating (accept/exclude) that human experts would assign to T1-weighted MR images of the human brain. For this prediction, a random forest classifier operates on a vector of image quality metrics (IQMs) extracted from each image. Although MRIQC achieved an out-of-sample accuracy of \(\sim \)76%, we concluded that this performance on new, unseen datasets would likely improve after addressing two problems. First, we found that IQMs show “site effects”, since they are highly correlated with the acquisition center and imaging parameters. Second, the high inter-rater variability suggests the presence of annotation errors in the labels of both training and test datasets. Annotation errors may be accentuated by some preprocessing decisions. Here, we confirm the “site effects” in our IQMs using t-distributed Stochastic Neighbor Embedding (t-SNE). We also improve the out-of-sample prediction accuracy of MRIQC by \(\sim \)10% by revising a label binarization step in MRIQC. Reliable and automated QC of MRI is in high demand for the increasingly large samples currently being acquired. We show here one iteration to improve the performance of MRIQC on this task, by investigating two challenging problems: site effects and noise in the labels assigned by human experts.
Oscar Esteban, Russell A. Poldrack, Krzysztof J. Gorgolewski

