
About this Book

This book constitutes the refereed proceedings of the 23rd Conference on Medical Image Understanding and Analysis, MIUA 2019, held in Liverpool, UK, in July 2019.

The 43 full papers presented were carefully reviewed and selected from 70 submissions. They were organized in topical sections named: oncology and tumour imaging; lesion, wound and ulcer analysis; biostatistics; fetal imaging; enhancement and reconstruction; diagnosis, classification and treatment; vessel and nerve analysis; image registration; image segmentation; ophthalmic imaging; and posters.

Table of Contents

Frontmatter

Oncology and Tumour Imaging

Frontmatter

Tissue Classification to Support Local Active Delineation of Brain Tumors

In this paper, we demonstrate how a semi-automatic algorithm we proposed in previous work may be integrated into a fully automatic protocol for the detection of brain metastases. Such a protocol combines 11C-labeled Methionine PET acquisition with our previous segmentation approach. We show that our algorithm responds especially well to this modality, thereby upgrading its status from semi-automatic to fully automatic for the presented application. In this approach, the active contour method is based on the minimization of an energy functional which integrates the information provided by a machine learning algorithm. The rationale behind such a coupling is to introduce the physician's knowledge into the segmentation through a component capable of influencing the final outcome toward what would be the segmentation performed by a human operator. In particular, we compare the performance of three different classifiers: Naïve Bayes classification, K-Nearest Neighbor classification, and Discriminant Analysis. A database comprising seventeen patients with brain metastases is considered to assess the performance of the proposed method in the clinical environment. Regardless of the classifier used, automatically delineated lesions show high agreement with the gold standard (R2 = 0.98). Experimental results show that the proposed protocol is accurate and meets physicians' requirements for radiotherapy treatment purposes.
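
The agreement figure quoted above is a coefficient of determination between automatic and manual delineations. As an illustration only — the lesion volumes below are hypothetical, not from the study — such agreement can be computed as:

```python
import numpy as np

def r_squared(gold, predicted):
    """Coefficient of determination between gold-standard and automated values."""
    gold = np.asarray(gold, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((gold - predicted) ** 2)    # residual sum of squares
    ss_tot = np.sum((gold - gold.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical lesion volumes (mm^3) for a few patients
manual = [1200.0, 830.0, 2100.0, 450.0]
auto = [1180.0, 860.0, 2050.0, 470.0]
print(round(r_squared(manual, auto), 3))  # close to 1 for high agreement
```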

Albert Comelli, Alessandro Stefano, Samuel Bignardi, Claudia Coronnello, Giorgio Russo, Maria G. Sabini, Massimo Ippolito, Anthony Yezzi

Using a Conditional Generative Adversarial Network (cGAN) for Prostate Segmentation

Prostate cancer is the second most commonly diagnosed cancer among men, and currently multi-parametric MRI is a promising imaging technique used for the clinical workup of prostate cancer. Accurate detection and localisation of the prostate tissue boundary on various MRI scans can be helpful for obtaining a region of interest for Computer Aided Diagnosis systems. In this paper, we present a fully automated detection and segmentation pipeline using a conditional Generative Adversarial Network (cGAN). We investigated the robustness of the cGAN model against adding Gaussian noise to, or removing noise from, the training data. Based on the detection and segmentation metrics, de-noising did not show a significant improvement. However, by including noisy images in the training data, the detection and segmentation performance was improved in each 3D modality, yielding results comparable to the state of the art.

Amelie Grall, Azam Hamidinekoo, Paul Malcolm, Reyer Zwiggelaar

A Novel Application of Multifractal Features for Detection of Microcalcifications in Digital Mammograms

This paper presents a novel image processing algorithm for automated microcalcifications (MCs) detection in digital mammograms. In order to improve the detection accuracy and reduce false positive (FP) numbers, two scales of sub-images are considered for detecting varied sized MCs, and different processing algorithms are used on them. The main contributions of this research work include: use of multifractal analysis based methods to analyze mammograms and describe MCs texture features; development of adaptive α values selection rules for better highlighting MCs patterns in mammograms; application of an effective SVM classifier to predict the existence of tiny MC spots. A full-field digital mammography (FFDM) dataset INbreast is used to test our proposed method, and experimental results demonstrate that our detection algorithm outperforms other reported methods, reaching a higher sensitivity (80.6%) and reducing FP numbers to lower levels.
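
Multifractal texture features of the kind mentioned here are typically built from local singularity (Hölder) exponents, estimated as the slope of the log of a window measure against the log of the window size. A minimal numpy sketch of that primitive — not the authors' implementation, with illustrative window scales:

```python
import numpy as np

def holder_exponent(image, y, x, scales=(1, 2, 3)):
    """Estimate the local Hoelder exponent alpha at pixel (y, x) as the
    slope of log(measure of a (2s+1)-sized window) vs log(window size)."""
    logs_eps, logs_mu = [], []
    for s in scales:
        window = image[max(y - s, 0):y + s + 1, max(x - s, 0):x + s + 1]
        mu = window.sum() + 1e-12  # capacity measure of the window
        logs_eps.append(np.log(2 * s + 1))
        logs_mu.append(np.log(mu))
    slope, _ = np.polyfit(logs_eps, logs_mu, 1)
    return slope

# On a uniform image the window measure grows like eps^2, so alpha ~ 2;
# bright singular spots (e.g. tiny MCs) yield smaller exponents.
flat = np.ones((32, 32))
print(round(holder_exponent(flat, 16, 16), 2))
```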

Haipeng Li, Ramakrishnan Mukundan, Shelley Boyd

Wilms’ Tumor in Childhood: Can Pattern Recognition Help for Classification?

Wilms’ tumor or nephroblastoma is a kidney tumor and the most common renal malignancy in childhood. Clinicians assume that these tumors develop from embryonic renal precursor cells - sometimes via nephrogenic rests or nephroblastomatosis. In Europe, chemotherapy is carried out prior to surgery, which downstages the tumor. This results in various pathological subtypes with differences in their prognosis and treatment. First, we demonstrate that the classical distinction between nephroblastoma and its precursor lesion is error prone, with an accuracy of 0.824. We tackle this issue with appropriate texture features and improve the classification accuracy to 0.932. Second, we are the first to predict the development of nephroblastoma under chemotherapy. We use a bag-of-visual-words model and show that visual cues are present that help to approximate the developing subtype. Last but not least, we provide our data set of 54 kidneys with nephroblastomatosis in conjunction with 148 Wilms’ tumors.

Sabine Müller, Joachim Weickert, Norbert Graf

Multi-scale Tree-Based Topological Modelling and Classification of Micro-calcifications

We propose a novel method for the classification of benign and malignant micro-calcifications using a multi-scale tree-based modelling approach. By taking the connectivity between individual calcifications into account, micro-calcification trees were built at multiple scales, along with the extraction of several tree-related micro-calcification features. We interlinked the distribution aspect of calcifications with the tree structures at each scale. Classification results show an accuracy of 85% for MIAS and 79% for DDSM, which is in line with state-of-the-art methods.
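
Building connectivity structures over calcifications at several distance scales can be sketched with plain Python — a hypothetical illustration of the idea, not the authors' algorithm: link calcifications closer than each scale and record simple structural features per scale.

```python
import math
from itertools import combinations

def build_trees(points, scales):
    """For each distance scale, link micro-calcifications closer than the
    scale and report simple features (edge count, connected components)."""
    features = {}
    for scale in scales:
        edges = [(i, j) for i, j in combinations(range(len(points)), 2)
                 if math.dist(points[i], points[j]) <= scale]
        # union-find to count connected components (each one a cluster/tree)
        parent = list(range(len(points)))
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        for i, j in edges:
            parent[find(i)] = find(j)
        n_components = len({find(i) for i in range(len(points))})
        features[scale] = {"edges": len(edges), "components": n_components}
    return features

# Hypothetical calcification centroids (pixel coordinates)
pts = [(0, 0), (1, 0), (5, 5), (6, 5)]
print(build_trees(pts, scales=[1.5, 10]))
```

At the small scale the two tight pairs form separate components; at the large scale everything merges into one structure, which is the kind of scale-dependent feature the classifier can exploit.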

Zobia Suhail, Reyer Zwiggelaar

Lesion, Wound and Ulcer Analysis

Frontmatter

Automated Mobile Image Acquisition of Skin Wounds Using Real-Time Deep Neural Networks

Periodic image acquisition plays an important role in the monitoring of different skin wounds. With a visual history, health professionals have a clear register of the wound’s state at different evolution stages, which allows a better overview of the healing progress and the efficiency of the therapeutics being applied. However, image quality and adequacy have to be ensured for proper clinical analysis, as an image’s utility is greatly reduced if it is not properly focused or the wound is partially occluded. This paper presents a new methodology for automated image acquisition of skin wounds via mobile devices. The main differentiation factor is the combination of two different approaches to ensure simultaneous image quality and adequacy: real-time image focus validation; and real-time skin wound detection using Deep Neural Networks (DNN). A dataset of 457 images manually validated by a specialist was used, with the best performance achieved by an SSDLite MobileNetV2 model with a mean average precision of 86.46% using 5-fold cross-validation, memory usage of 43 MB, and inference speeds of 23 ms and 119 ms for desktop and smartphone usage, respectively. Additionally, a mobile application was developed and validated through usability tests with eleven nurses, attesting to the potential of using real-time DNN approaches to effectively support skin wound monitoring procedures.

José Faria, João Almeida, Maria João M. Vasconcelos, Luís Rosado

Hyperspectral Imaging Combined with Machine Learning Classifiers for Diabetic Leg Ulcer Assessment – A Case Study

Diabetic ulcers of the foot are a serious complication of diabetes with a huge impact on the patient’s life. Assessing which ulcer will heal spontaneously is of paramount importance. Hyperspectral imaging has lately been used to complete this task, since it has the ability to extract data about the wound itself and the surrounding tissues. The classification of hyperspectral data remains, however, one of the most popular subjects in the hyperspectral imaging field. In the last decades, a large number of unsupervised and supervised methods have been proposed in response to this hyperspectral data classification problem. The aim of this study was to identify a suitable classification method that could differentiate as accurately as possible between normal and pathological biological tissues in a hyperspectral image, with applications to the diabetic foot. The performance of four different machine learning approaches, including the minimum distance technique (MD), spectral angle mapper (SAM), spectral information divergence (SID) and support vector machine (SVM), was investigated and compared by analyzing their confusion matrices. The classification outcome analysis revealed that the SVM approach outperformed the MD, SAM, and SID approaches. The overall accuracy and Kappa coefficient for SVM were 95.54% and 0.9404, whereas for the other three approaches (MD, SAM and SID) these statistical parameters were 69.43%/0.6031, 79.77%/0.7349 and 72.41%/0.6464, respectively. In conclusion, SVM could effectively classify and improve the characterization of diabetic foot ulcers using hyperspectral images by generating the most reliable ratio among the various types of tissue depicted in the final maps, with possible prognostic value.
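
The overall accuracy and Kappa coefficient used to compare the classifiers are both derived from the confusion matrix. A minimal numpy sketch with a hypothetical 2-class matrix (not data from the study):

```python
import numpy as np

def overall_accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows: true class, columns: predicted class)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    observed = np.trace(cm) / n  # overall accuracy
    # chance agreement from the marginal totals
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa

# Hypothetical normal vs pathological tissue confusion matrix
cm = [[90, 10],
      [5, 95]]
acc, kappa = overall_accuracy_and_kappa(cm)
print(round(acc, 3), round(kappa, 3))  # 0.925 0.85
```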

Mihaela A. Calin, Sorin V. Parasca, Dragos Manea, Roxana Savastru

Classification of Ten Skin Lesion Classes: Hierarchical KNN versus Deep Net

This paper investigates the visual classification of the 10 skin lesions most commonly encountered in a clinical setting (including melanoma (MEL) and melanocytic nevi (ML)), unlike the majority of previous research that focuses solely on melanoma versus melanocytic nevi classification. Two families of architectures are explored: (1) semi-learned hierarchical classifiers and (2) deep net classifiers. Although many applications have benefited by switching to a deep net architecture, here there is little accuracy benefit: hierarchical KNN classifier 78.1%, flat deep net 78.7% and refined hierarchical deep net 80.1% (all 5 fold cross-validated). The classifiers have comparable or higher accuracy than the five previous research results that have used the Edinburgh DERMOFIT 10 lesion class dataset. More importantly, from a clinical perspective, the proposed hierarchical KNN approach produces: (1) 99.5% separation of melanoma from melanocytic nevi (76 MEL & 331 ML samples), (2) 100% separation of melanoma from seborrheic keratosis (SK) (76 MEL & 256 SK samples), and (3) 90.6% separation of basal cell carcinoma (BCC) plus squamous cell carcinoma (SCC) from seborrheic keratosis (SK) (327 BCC/SCC & 256 SK samples). Moreover, combining classes BCC/SCC & ML/SK to give a modified 8 class hierarchical KNN classifier gives a considerably improved 87.1% accuracy. On the other hand, the deep net binary cancer/non-cancer classifier had better performance (0.913) than the KNN classifier (0.874). In conclusion, there is not much difference between the two families of approaches, and performance is approaching clinically useful rates.

Robert B. Fisher, Jonathan Rees, Antoine Bertrand

Biostatistics

Frontmatter

Multilevel Models of Age-Related Changes in Facial Shape in Adolescents

Here we study the effects of age on facial shape in adolescents by using a method called multilevel principal components analysis (mPCA). An associated multilevel multivariate probability distribution is derived and expressions for the (conditional) probability of age-group membership are presented. This formalism is explored via Monte Carlo (MC) simulated data in the first dataset, where age is taken to increase the overall scale of a three-dimensional facial shape represented by 21 landmark points and all other “subjective” variations are related to the width of the face. Eigenvalue plots behave as expected and modes of variation correctly identify these two main factors at appropriate levels of the mPCA model. Component scores for both single-level PCA and mPCA show a strong trend with age. Conditional probabilities are shown to predict membership by age group and the Pearson correlation coefficient between actual and predicted group membership is r = 0.99. The effects of outliers added to the MC training data are reduced by the use of robust covariance matrix estimation and robust averaging of matrices. These methods are applied to another dataset containing 12 GPA-scaled (3D) landmark points for 195 shapes from 27 white, male schoolchildren aged 11 to 16 years old. 21% of the variation in the shapes for this dataset was accounted for by age. Mode 1 at level 1 (age) via mPCA appears to capture an increase in face height with age, which is consistent with reported pubertal changes in children. Component scores for both single-level PCA and mPCA again show a distinct trend with age. Conditional probabilities are again shown to reflect membership by age group and the Pearson correlation coefficient is given by r = 0.63 in this case. These analyses are an excellent first test of the ability of multilevel statistical methods to model age-related changes in facial shape in adolescents.
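
The single-level PCA baseline mentioned above — component scores of flattened landmark shapes trending with age — can be sketched in a few lines of numpy. This is a hypothetical illustration with synthetic data, not the mPCA formalism itself:

```python
import numpy as np

def pca_component_scores(shapes, n_modes=2):
    """Single-level PCA of landmark shapes: each row is a flattened shape
    (e.g. 21 3D landmarks -> 63 values). Returns scores on leading modes."""
    X = np.asarray(shapes, dtype=float)
    centred = X - X.mean(axis=0)
    # SVD of the centred data gives the modes of variation
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:n_modes].T  # component scores per shape

rng = np.random.default_rng(0)
# Synthetic data: overall scale of a base shape grows with "age"
base = rng.standard_normal(63)
ages = np.linspace(11, 16, 20)
shapes = [a * base + 0.01 * rng.standard_normal(63) for a in ages]
scores = pca_component_scores(shapes, n_modes=1)
# Mode-1 scores should correlate strongly with age for this construction
r = np.corrcoef(scores[:, 0], ages)[0, 1]
print(round(abs(r), 2))
```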

Damian J. J. Farnell, Jennifer Galloway, Alexei I. Zhurov, Stephen Richmond

Spatial Modelling of Retinal Thickness in Images from Patients with Diabetic Macular Oedema

For the diagnosis and monitoring of retinal diseases, the spatial context of retinal thickness is highly relevant but often under-utilised. Despite the data being spatially collected, current approaches are not spatial: they either analyse each location separately, or they analyse all image sectors together while ignoring possible spatial correlations, as in linear models and multivariate analysis of variance (MANOVA). We propose a spatial statistical inference framework for retinal images, which is based on a linear mixed effects model and which models the spatial topography via fixed effects and spatial error structures. We compare our method with MANOVA in an analysis of spatial retinal thickness data from a prospective observational study, the Early Detection of Diabetic Macular Oedema (EDDMO) study, involving 89 eyes with maculopathy and 168 eyes without maculopathy from 149 diabetic participants. Heidelberg Optical Coherence Tomography (OCT) is used to measure retinal thickness. MANOVA analysis suggests that the overall retinal thickness of eyes with maculopathy is not significantly different from that of eyes with no maculopathy (p = 0.11), while our spatial framework can detect the difference between the two disease groups (p = 0.02). We also evaluated our spatial statistical model framework on simulated data, whereby we illustrate how spatial correlations can affect inferences about fixed effects. Our model addresses the need for correct adjustment for spatial correlations in ophthalmic images and can improve the precision of association estimates in clinical studies. It can potentially be extended to disease monitoring and prognosis in other diseases or imaging technologies.

Wenyue Zhu, Jae Yee Ku, Yalin Zheng, Paul Knox, Simon P. Harding, Ruwanthi Kolamunnage-Dona, Gabriela Czanner

Fetal Imaging

Frontmatter

Incremental Learning of Fetal Heart Anatomies Using Interpretable Saliency Maps

While medical image analysis has seen extensive use of deep neural networks, learning over multiple tasks is a challenge for connectionist networks due to tendencies of performance degradation on old tasks while adapting to novel tasks. It is pertinent that adaptations to new data distributions over time be tractable with automated analysis methods, as medical imaging data acquisition is typically not a static problem; machine learning methods deployed for medical imaging therefore need a continual learning paradigm. To explore interpretable lifelong learning for deep neural networks in medical imaging, we introduce a perspective of understanding forgetting in neural networks used in ultrasound image analysis through the notions of attention and saliency. Concretely, we propose quantifying forgetting as a decline in the quality of class-specific saliency maps after each subsequent task schedule. We also introduce knowledge transfer from past tasks to the present via a saliency-guided retention of past exemplars, which improves the ability to retain past knowledge while optimizing parameters for current tasks. Experiments on a clinical fetal echocardiography dataset demonstrate a state-of-the-art performance for our protocols.

Arijit Patra, J. Alison Noble

Improving Fetal Head Contour Detection by Object Localisation with Deep Learning

Ultrasound-based fetal head biometrics measurement is a key indicator in monitoring the condition of fetuses. Since manual measurement of the relevant anatomical structures of the fetal head is time-consuming and subject to inter-observer variability, there has been strong interest in finding an automated, robust, accurate and reliable method. In this paper, we propose a deep learning-based method to segment the fetal head in ultrasound images. The proposed method formulates the detection of the fetal head boundary as a combined object localisation and segmentation problem based on a deep learning model. Incorporating object localisation in a framework developed for segmentation aims to improve the segmentation accuracy achieved by a fully convolutional network. Finally, an ellipse is fitted to the contour of the segmented fetal head using a least-squares ellipse fitting method. The proposed model is trained on 999 2-dimensional ultrasound images and tested on 335 images, achieving a Dice coefficient of 97.73 ± 1.32. The experimental results demonstrate that the proposed deep learning method is promising for automatic fetal head detection and segmentation.
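
The final least-squares ellipse-fitting step can be illustrated with an algebraic conic fit: solve a·x² + b·xy + c·y² + d·x + e·y = 1 in the least-squares sense over contour points. This is a generic sketch of one such method, not necessarily the exact fit the authors used:

```python
import numpy as np

def fit_ellipse(x, y):
    """Least-squares algebraic ellipse fit: solve
    a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 for the conic coefficients."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.column_stack([x * x, x * y, y * y, x, y])
    coeffs, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    return coeffs  # (a, b, c, d, e)

# Points on a hypothetical head contour: ellipse with semi-axes 3 and 2
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
xs, ys = 3 * np.cos(t), 2 * np.sin(t)
a, b, c, d, e = fit_ellipse(xs, ys)
# For x^2/9 + y^2/4 = 1 we expect a ~ 1/9, c ~ 1/4, b ~ d ~ e ~ 0
print(round(a, 3), round(c, 3))
```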

Baidaa Al-Bander, Theiab Alzahrani, Saeed Alzahrani, Bryan M. Williams, Yalin Zheng

Automated Fetal Brain Extraction from Clinical Ultrasound Volumes Using 3D Convolutional Neural Networks

To improve the performance of most neuroimage analysis pipelines, brain extraction is used as a fundamental first step in the image processing. However, in the case of fetal brain development for routine clinical assessment, there is a need for a reliable Ultrasound (US)-specific tool. In this work we propose a fully automated CNN approach to fetal brain extraction from 3D US clinical volumes with minimal preprocessing. Our method accurately and reliably extracts the brain regardless of the large data variations in acquisition (e.g. shadows, occlusions) inherent in this imaging modality. It also performs consistently throughout a gestational age range between 14 and 31 weeks, regardless of the pose variation of the subject, the scale, and even partial feature-obstruction in the image, outperforming all current alternatives.

Felipe Moser, Ruobing Huang, Aris T. Papageorghiou, Bartłomiej W. Papież, Ana I. L. Namburete

Multi-task CNN for Structural Semantic Segmentation in 3D Fetal Brain Ultrasound

The fetal brain undergoes extensive morphological changes throughout pregnancy, which can be visually seen in ultrasound acquisitions. We explore the use of convolutional neural networks (CNNs) for the segmentation of multiple fetal brain structures in 3D ultrasound images. Accurate automatic segmentation of brain structures in fetal ultrasound images can track brain development through gestation, and can provide useful information that can help predict fetal health outcomes. We propose a multi-task CNN to produce automatic segmentations from atlas-generated labels of the white matter, thalamus, brainstem, and cerebellum. The network, trained on 480 volumes, produced accurate 3D segmentations on 48 test volumes, with a Dice coefficient of 0.93 on the white matter and over 0.77 on segmentations of the thalamus, brainstem and cerebellum.
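
The Dice coefficients quoted per structure follow the standard overlap definition, 2|A∩B| / (|A| + |B|). A minimal numpy sketch on toy masks (illustrative, not study data):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy flattened "volumes" for one structure (e.g. cerebellum label)
pred = np.array([0, 1, 1, 1, 0, 0])
gt   = np.array([0, 1, 1, 0, 0, 0])
print(round(dice(pred, gt), 2))  # 2*2/(3+2) = 0.8
```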

Lorenzo Venturini, Aris T. Papageorghiou, J. Alison Noble, Ana I. L. Namburete

Towards Capturing Sonographic Experience: Cognition-Inspired Ultrasound Video Saliency Prediction

For visual tasks like ultrasound (US) scanning, experts direct their gaze towards regions of task-relevant information. Therefore, learning to predict the gaze of sonographers on US videos captures the spatio-temporal patterns that are important for US scanning. The spatial distribution of gaze points on video frames can be represented through heat maps termed saliency maps. Here, we propose a temporally bidirectional model for video saliency prediction (BDS-Net), drawing inspiration from modern theories of human cognition. The model consists of a convolutional neural network (CNN) encoder followed by a bidirectional gated-recurrent-unit recurrent convolutional network (GRU-RCN) decoder. The temporal bidirectionality mimics human cognition, which simultaneously reacts to past and predicts future sensory inputs. We train the BDS-Net alongside spatial and temporally one-directional comparative models on the task of predicting saliency in videos of US abdominal circumference plane detection. The BDS-Net outperforms the comparative models on four out of five saliency metrics. We present a qualitative analysis on representative examples to explain the model’s superior performance.

Richard Droste, Yifan Cai, Harshita Sharma, Pierre Chatelain, Aris T. Papageorghiou, J. Alison Noble

Enhancement and Reconstruction

Frontmatter

A Novel Deep Learning Based OCTA De-striping Method

Noise in images presents a considerable problem, limiting their readability and hindering the performance of post-processing and analysis tools. In particular, optical coherence tomography angiography (OCTA) suffers from stripe noise. In medical imaging, clinicians rely on high quality images in order to make accurate diagnoses and plan management. Poor quality images can lead to pathology being overlooked or undiagnosed. Image denoising is a fundamental technique that can be developed to tackle this problem and improve performance in many applications, yet there exists no method focused on removing stripe noise in OCTA. Existing OCTA denoising methods do not consider the structure of stripe noise, which severely limits their potential for recovering the image. The development of artificial intelligence (AI) has enabled deep learning approaches to obtain impressive results and play a dominant role in many areas, but these approaches require a ground truth for training, which is difficult to obtain for this problem. In this paper, we propose a revised U-net framework for removing the stripe noise from OCTA images, leaving a clean image. With our proposed method, a ground truth is not required for training, allowing both the stripe noise and the clean image to be estimated, preserving more image detail without compromising image quality. The experimental results show the impressive de-striping performance of our method on OCTA images. We evaluate the effectiveness of our proposed method using the peak-signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM), achieving excellent results as well.
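
PSNR, one of the two evaluation metrics named above, has a closed form from the mean squared error. A minimal numpy sketch on synthetic images (illustrative only; SSIM is more involved and omitted here):

```python
import numpy as np

def psnr(reference, test_image, data_range=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test_image, float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(1)
clean = rng.uniform(0, 255, size=(64, 64))
noisy = clean + rng.normal(0, 5, size=(64, 64))  # synthetic additive noise
print(round(psnr(clean, noisy), 1))  # roughly 34 dB for sigma = 5
```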

Dongxu Gao, Numan Celik, Xiyin Wu, Bryan M. Williams, Amira Stylianides, Yalin Zheng

Edge Enhancement for Image Segmentation Using a RKHS Method

Image segmentation has many important applications, particularly in medical imaging. Medical images such as CT scans often have little contrast, and segmentation in such cases poses a great challenge to existing models without further user interaction. In this paper we propose an edge enhancement method based on the theory of reproducing kernel Hilbert spaces (RKHS) to model the smooth components of an image, while separating the edges using approximated Heaviside functions. By modelling with this decomposition method, the approximated Heaviside function is capable of picking up more detail than the usual approach of using the image gradient. Using this as an edge detector in a segmentation model can then allow us to pick up a region of interest when low contrast between two objects is present and other models fail.

Liam Burrows, Weihong Guo, Ke Chen, Francesco Torella

A Neural Network Approach for Image Reconstruction from a Single X-Ray Projection

Time-resolved imaging has become popular in radiotherapy because it significantly reduces blurring artifacts in volumetric images reconstructed from a set of 2D X-ray projection data. We aim to develop a neural network (NN) based machine learning algorithm that allows for reconstructing an instantaneous image from a single projection. In our approach, each volumetric image is represented as a deformation of a chosen reference image, in which the deformation is modeled as a linear combination of a few basis functions through principal component analysis (PCA). Based on this PCA deformation model, we train an ensemble of neural networks to find a mapping from a projection image to PCA coefficients. For image reconstruction, we apply the learned mapping on an instantaneous projection image to obtain the PCA coefficients, thus getting a deformation. Then, a volumetric image can be reconstructed by applying the deformation on the reference image. Experimentally, we show promising results on a set of simulated data.
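
The PCA deformation model described above — a deformation as the mean field plus a linear combination of a few basis fields, with coefficients predicted from the projection — reduces to a single linear combination at reconstruction time. A hypothetical numpy sketch with made-up dimensions:

```python
import numpy as np

def reconstruct_deformation(mean_dvf, basis, coefficients):
    """Rebuild a deformation vector field (DVF) as the mean field plus a
    linear combination of PCA basis fields: mean + sum_k c_k * b_k."""
    return mean_dvf + np.tensordot(coefficients, basis, axes=1)

# Hypothetical flattened DVFs: 3 PCA modes over a 10-voxel grid
rng = np.random.default_rng(2)
mean_dvf = rng.standard_normal(10)
basis = rng.standard_normal((3, 10))  # rows are PCA modes
coeffs = np.array([0.5, -1.2, 0.3])   # as predicted by the NN from a projection
dvf = reconstruct_deformation(mean_dvf, basis, coeffs)
print(dvf.shape)  # (10,)
```

The reconstructed DVF is then applied to the reference image to obtain the instantaneous volumetric image.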

Samiha Rouf, Chenyang Shen, Yan Cao, Conner Davis, Xun Jia, Yifei Lou

Deep Vectorization Convolutional Neural Networks for Denoising in Mammogram Using Enhanced Image

Mammography is an X-ray image of the breast which has been widely used for the management of breast cancer. However, in many cases it is not easy to identify a sign of cancer, such as a tumour or malignancy, due to the various noise patterns caused by the low-dose radiation from the X-ray machine. Mammogram denoising is an important process for improving the visual quality of mammograms to help the radiologist’s diagnosis during mammogram screening. This paper introduces a denoising deep vectorization convolutional neural network that uses an enhanced image, obtained from direct contrast enhancement in a wavelet domain, for training. The denoised mammogram is then obtained from a mapping between the original and enhanced images. Mammogram images from the mini-MIAS database were used in this experiment. The experimental results demonstrate that the proposed method can effectively suppress various noises in mammograms in both qualitative and subjective tests in comparison to traditional denoising methods.

Varakorn Kidsumran, Yalin Zheng

Diagnosis, Classification and Treatment

Frontmatter

A Hybrid Machine Learning Approach Using LBP Descriptor and PCA for Age-Related Macular Degeneration Classification in OCTA Images

We propose a novel hybrid machine learning approach for age-related macular degeneration (AMD) classification to support the automated analysis of images captured by optical coherence tomography angiography (OCTA). The algorithm uses a Rotation Invariant Uniform Local Binary Patterns (LBP) descriptor to capture local texture patterns associated with AMD and Principal Component Analysis (PCA) to decorrelate texture features. The analysis is performed on the entire image without targeting any particular area. The study focuses on four distinct groups, namely: healthy; neovascular AMD (an advanced stage of AMD associated with choroidal neovascularisation (CNV)); non-neovascular AMD (AMD without the presence of CNV); and secondary CNV (CNV due to retinal pathology other than AMD). Validation sets were created using a Stratified K-Folds Cross-Validation strategy to limit the overfitting problem. The overall performance was estimated based on the area under the Receiver Operating Characteristic (ROC) curve (AUC). The classification was conducted as a binary classification problem. The best performance achieved with the SVM classifier, based on the AUC score, was: (i) 100% for healthy vs neovascular AMD; (ii) 85% for neovascular AMD vs non-neovascular AMD; (iii) 83% for CNV (neovascular AMD plus secondary CNV) vs non-neovascular AMD.

Abdullah Alfahaid, Tim Morris, Tim Cootes, Pearse A. Keane, Hagar Khalid, Nikolas Pontikos, Panagiotis Sergouniotis, Konstantinos Balaskas

Combining Fine- and Coarse-Grained Classifiers for Diabetic Retinopathy Detection

Visual artefacts of early diabetic retinopathy in retinal fundus images are usually small in size, inconspicuous, and scattered all over the retina. Detecting diabetic retinopathy requires physicians to look at the whole image and fixate on specific regions to locate potential biomarkers of the disease. Taking inspiration from ophthalmologists, we therefore propose to combine coarse-grained classifiers, which detect discriminating features from whole images, with a recent breed of fine-grained classifiers that discover and pay particular attention to pathologically significant regions. To evaluate the performance of this proposed ensemble, we used the publicly available EyePACS and Messidor datasets. Extensive experimentation on binary, ternary and quaternary classification shows that this ensemble largely outperforms individual image classifiers, as well as most of the published works, in most training setups for diabetic retinopathy detection. Furthermore, the performance of the fine-grained classifiers is found to be notably superior to that of the coarse-grained image classifiers, encouraging the development of task-oriented fine-grained classifiers modelled after specialist ophthalmologists.

Muhammad Naseer Bajwa, Yoshinobu Taniguchi, Muhammad Imran Malik, Wolfgang Neumeier, Andreas Dengel, Sheraz Ahmed

Vessel and Nerve Analysis

Frontmatter

Automated Quantification of Retinal Microvasculature from OCT Angiography Using Dictionary-Based Vessel Segmentation

Investigations into how the retinal microvasculature correlates with ophthalmological conditions necessitate a method for measuring the microvasculature. Optical coherence tomography angiography (OCTA) depicts the superficial and the deep layer of the retina, but quantification of the microvascular network is still needed. Here, we propose an automatic quantitative analysis of the retinal microvasculature. We use a dictionary-based segmentation to detect larger vessels and capillaries in the retina and we extract features such as densities and vessel radius. The method is validated on repeated OCTA scans from healthy subjects, and we observe high intraclass correlation coefficients and high agreement in a Bland-Altman analysis. The quantification method is also applied to pre- and postoperative scans of cataract patients. Here, we observe a higher variation between the measurements, which can be explained by the greater variation in scan quality. Statistical tests on both the healthy subjects and the cataract patients show that our method is able to differentiate subjects based on the extracted microvascular features.
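
The Bland-Altman analysis used to validate the repeated scans summarises agreement by the mean difference (bias) and its 95% limits of agreement. A minimal numpy sketch with hypothetical vessel-density values (not study data):

```python
import numpy as np

def bland_altman(measure_1, measure_2):
    """Bland-Altman agreement between repeated measurements:
    mean difference (bias) and 95% limits of agreement."""
    a = np.asarray(measure_1, dtype=float)
    b = np.asarray(measure_2, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical vessel-density measurements from repeated OCTA scans
scan1 = [0.42, 0.38, 0.45, 0.40, 0.43]
scan2 = [0.41, 0.39, 0.44, 0.41, 0.42]
bias, (lo, hi) = bland_altman(scan1, scan2)
print(round(bias, 3))
```

A bias near zero with narrow limits of agreement indicates high repeatability, matching the reported agreement in healthy subjects.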

Astrid M. E. Engberg, Jesper H. Erichsen, Birgit Sander, Line Kessel, Anders B. Dahl, Vedrana A. Dahl

Validating Segmentation of the Zebrafish Vasculature

The zebrafish is an established vertebrate model to study development and disease of the cardiovascular system. Using transgenic lines and state-of-the-art microscopy it is possible to visualize the vascular architecture non-invasively, in vivo over several days. Quantification of the 3D vascular architecture would be beneficial to objectively and reliably characterise the vascular anatomy. So far, no method is available to automatically quantify the 3D cardiovascular system of transgenic zebrafish, which would enhance their utility as a pre-clinical model. Vascular segmentation is essential for any subsequent quantification, but due to the lack of a segmentation “gold standard” for the zebrafish vasculature, no in-depth assessment of vascular segmentation methods in zebrafish has been performed. In this study, we examine vascular enhancement using the Sato et al. enhancement filter in the Fiji image analysis framework and optimise the filter scale parameter for typical vessels of interest in the zebrafish cranial vasculature; and present methodological approaches to address the lack of a segmentation gold-standard of the zebrafish vasculature.

Elisabeth Kugler, Timothy Chico, Paul A. Armitage

Analysis of Spatial Spectral Features of Dynamic Contrast-Enhanced Brain Magnetic Resonance Images for Studying Small Vessel Disease

Cerebral small vessel disease (SVD) comprises all the pathological processes affecting small brain vessels and, consequently, damaging white and grey matter. Although the cause of SVD is unknown, there seems to be a dysfunction of the small vessels. In this paper, we propose a framework comprising tissue segmentation, spatial spectral feature extraction, and statistical analysis to study intravenous contrast agent distribution over time in cerebrospinal fluid, normal-appearing and abnormal brain regions in patients with recent mild stroke and SVD features. Our results show the potential of the power spectrum for the analysis of dynamic contrast-enhanced brain MRI acquisitions in SVD since significant variation in the data was related to vascular risk factors and visual clinical variables that characterise the burden of SVD features. Thus, our proposal may increase sensitivity to detect subtle features of small vessel dysfunction. A public version of our framework can be found at https://github.com/joseabernal/DynamicBrainMRIAnalysis.git .

Jose Bernal, Maria del C. Valdés-Hernández, Javier Escudero, Paul A. Armitage, Stephen Makin, Rhian M. Touyz, Joanna M. Wardlaw

Segmentation of Arteriovenous Malformation Based on Weighted Breadth-First Search of Vascular Skeleton

Cerebral arteriovenous malformations (AVM) are prone to rupture, which can lead to life-threatening conditions. Because of their complexity and high mortality and disability rates, AVMs have posed a serious surgical challenge for many years. In this paper, we propose a new method for AVM location and segmentation based on graph theory. A weighted breadth-first search tree is created from the result of vascular skeletonization, and the AVM is automatically detected and extracted. The feeding arteries, draining veins and the AVM nidus are segmented according to the topological structure of the vessels. We evaluate the proposed method on clinical data sets and achieve an average accuracy of 95.14%, sensitivity of 82.28% and specificity of 94.88%. The results show that our method is effective and helpful for vascular interventional surgery.

Zonghan Wu, Baochang Zhang, Jun Yang, Na Li, Shoujun Zhou

Image Registration

Frontmatter

A Variational Joint Segmentation and Registration Framework for Multimodal Images

Image segmentation and registration are closely related image processing techniques that are often required as simultaneous tasks. In this work, we introduce an optimization-based approach to a joint registration and segmentation model for multimodal image deformation. The model combines an active contour variational term with a mutual information smoothing fitting term, thereby resolving the difficulties of simultaneously performing segmentation and registration on multimodal images. This combination takes into account the image structure boundaries and the movement of the objects, leading to a robust dynamic scheme that links object boundary information as it changes over time. Comparison of our model with the state of the art shows that our method leads to more consistent registrations and more accurate results.

Adela Ademaj, Lavdie Rada, Mazlinda Ibrahim, Ke Chen

An Unsupervised Deep Learning Method for Diffeomorphic Mono-and Multi-modal Image Registration

Unlike image segmentation, developing a deep learning network for image registration is less straightforward because training data cannot be prepared or supervised by humans unless they are trivial. In this paper we present an unsupervised deep learning model in which the deformation fields are self-trained via an image similarity metric and a regularization term. The latter consists of a smoothing constraint on the derivatives and a constraint on the determinant of the transformation, in order to obtain a spatially smooth and plausible solution. The proposed algorithm is first trained and tested on synthetic and real mono-modal images. The results show how it deals with large-deformation registration problems and leads to a real-time solution with no folding. It is then generalised to multi-modal images. Although any variational model may be used with our deep learning algorithm, we present a new model based on reproducing kernel Hilbert space theory, in which an initial pair of images, assumed to be non-linearly correlated, is first processed and optimized to serve the purpose of “intensity or edge correction” and to yield intermediate new images which are more strongly correlated and are used for training the model. Experiments and comparisons with learning and non-learning models demonstrate that this approach delivers good performance and simultaneously generates an accurate diffeomorphic transformation.

Anis Theljani, Ke Chen

Automatic Detection and Visualisation of Metastatic Bone Disease

This paper presents a novel method of finding and visualising metastatic bone disease in computed tomography (CT). The approach we suggest locates disease by comparing trabecular bone density between symmetric bony regions. Areas of strong difference could indicate metastatic bone disease, as bone lesions either increase or decrease bone density. Our detection method is completely automatic and only requires raw CT data as input. Results are visualised in an interactive 3-dimensional viewer which displays a polygonal mesh of the bone structure, overlaid with colour, alongside resliced CT data. Diseased regions are clearly highlighted both in the mesh and in the resliced CT data. We test our method on both healthy and diseased CT data to demonstrate the validity of the technique. Experimental results show that our method can detect metastatic bone disease, although further work is needed to improve the robustness of the technique and to decrease false positives.

Nathan Sjoquist, Tristan Barrett, David Thurtle, Vincent J. Gnanapragasam, Ken Poole, Graham Treece

High-Order Markov Random Field Based Image Registration for Pulmonary CT

Deformable image registration is an important tool in medical image analysis. In lung four-dimensional computed tomography (4D CT) registration, traditional registration methods based on continuous optimization easily fall into local optima, leading to serious misregistration. In this study, we propose a novel image registration method based on high-order Markov Random Fields (MRF). By analyzing the effect of the deformation-field constraints imposed by potential functions over cliques of different orders in the MRF model, energy functions with high-order cliques are designed separately for 2D and 3D images to preserve the topology of the deformation field. Because of the complexity of the designed high-order energy function, the Markov Chain Monte Carlo (MCMC) algorithm is used to solve its optimization problem. To address the high computational requirements of lung 4D CT image registration, a multi-level processing strategy is adopted to reduce the space complexity of the proposed method and improve its computational efficiency. Compared with other registration methods, the proposed method achieved the best average target registration error (TRE) of 0.93 mm on the public DIR-lab 4D CT dataset, which indicates its great potential in lung motion modeling and image-guided radiotherapy.

Peng Xue, Enqing Dong, Huizhong Ji
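Target registration error (TRE), the metric reported above, is the mean Euclidean distance between corresponding anatomical landmarks after registration; a minimal numpy sketch with hypothetical landmark coordinates (not the DIR-lab data):

```python
import numpy as np

def target_registration_error(warped, reference):
    """Mean Euclidean distance between corresponding landmarks (N x 3 arrays)."""
    return np.linalg.norm(warped - reference, axis=1).mean()

# Hypothetical landmark coordinates in millimetres
ref = np.array([[10.0, 20.0, 5.0], [12.0, 18.0, 7.0]])
warped = np.array([[10.0, 20.0, 6.0], [12.0, 18.0, 9.0]])
tre = target_registration_error(warped, ref)  # (1.0 + 2.0) / 2 = 1.5 mm
```

On DIR-lab the landmarks are expert-annotated point pairs between the extreme breathing phases, and the TRE is averaged over all pairs and cases.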

Image Segmentation

Frontmatter

A Fully Automated Segmentation System of Positron Emission Tomography Studies

In this paper, we present an automatic system for brain metastasis delineation in Positron Emission Tomography images. The segmentation process is fully automatic, so that user intervention is never required, making the entire process completely repeatable. Contouring is performed using an enhanced local active segmentation. The proposed system is first evaluated on four datasets of phantom experiments to assess its performance under different contrast-ratio scenarios, and then on ten clinical cases in a radiotherapy environment. Phantom studies show excellent performance, with a dice similarity coefficient greater than 92% for larger spheres. In clinical cases, automatically delineated tumors show high agreement with the gold standard, with a dice similarity coefficient of 88.35 ± 2.60%. These results show that the proposed system can be successfully employed on Positron Emission Tomography images, and especially in radiotherapy treatment planning, to produce fully automatic segmentations of brain cancers.

Albert Comelli, Alessandro Stefano
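The dice similarity coefficient used for evaluation above can be computed directly from two binary masks; a minimal numpy sketch (toy masks, not study data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]])  # automated segmentation
gt   = np.array([[1, 0, 0], [0, 1, 1]])  # gold standard
score = dice(pred, gt)  # 2 * 2 / (3 + 3) ≈ 0.667
```

A score of 1.0 means perfect overlap; the >92% figures above correspond to scores above 0.92 on this scale.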

A General Framework for Localizing and Locally Segmenting Correlated Objects: A Case Study on Intervertebral Discs in Multi-modality MR Images

Low back pain is a leading cause of disability that has been associated with intervertebral disc (IVD) degeneration in various clinical studies. With MRI being the imaging technique of choice for IVDs due to its excellent soft tissue contrast, we propose a fully automatic approach for localizing and locally segmenting spatially correlated objects—tailored to cope with a limited set of training data while making very few domain assumptions—and apply it to lumbar IVDs in multi-modality MR images. Regression tree ensembles, spatially regularized by a conditional random field, are used to find the IVD centroids, which allows us to cut fixed-size sections around each IVD and efficiently perform the segmentation on the sub-volumes. Exploiting the similar imaging characteristics of IVD tissue, we build an IVD-agnostic V-Net to perform the segmentation and train it on all IVDs (instead of a specific one). In particular, we compare the use of binary (i.e., pairwise) CRF potentials combined with a latent scaling variable to tackle spine size variability against scaling-invariant ternary potentials. Evaluating our approach on a public challenge data set consisting of 16 cases from 8 subjects with 4 modalities each, we achieve an average Dice coefficient of 0.904, an average absolute surface distance of 0.423 mm and an average center distance of 0.59 mm.

Alexander O. Mader, Cristian Lorenz, Carsten Meyer

Polyp Segmentation with Fully Convolutional Deep Dilation Neural Network

Evaluation Study

Analysis of polyp images plays an important role in the early detection of colorectal cancer. Automated polyp segmentation is seen as one of the methods that could improve the accuracy of colonoscopic examination. The paper describes an evaluation study of a segmentation method developed for the Endoscopic Vision Gastrointestinal Image ANAlysis (GIANA) polyp segmentation sub-challenges. The proposed polyp segmentation algorithm is based on a fully convolutional network (FCN) model. The paper reports cross-validation results on the GIANA training dataset. Various aspects have been evaluated, including network configuration, the effect of data augmentation, and the performance of the method as a function of polyp characteristics. The proposed method delivers state-of-the-art results. It secured first place in the image segmentation tasks at the 2017 GIANA challenge and second place for the SD images at the 2018 GIANA challenge.

Yunbo Guo, Bogdan J. Matuszewski

Ophthalmic Imaging

Frontmatter

Convolutional Attention on Images for Locating Macular Edema

Neural networks have become a standard for classifying images. However, by their very nature, their internal data representation remains opaque. To address this, attention mechanisms have recently been introduced. They help to highlight the regions in the input data that a network used for its classification decision. This article presents two attention architectures for the classification of medical images. First, we explain a simple architecture which creates one attention map that is used for all classes. Second, we introduce an architecture that creates an attention map for each class. This is done by creating two U-Nets, one for attention and one for classification, and then multiplying the two resulting maps together. We show that our architectures match the baseline of standard convolutional classifiers while at the same time increasing their explainability.

Maximilian Bryan, Gerhard Heyer, Nathanael Philipp, Matus Rehak, Peter Wiedemann
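The per-class attention idea above, two parallel maps multiplied together, can be illustrated with a toy numpy sketch; the shapes, the logits, and the pooling step are illustrative assumptions, not the authors' exact U-Net architecture:

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
cls_map = rng.normal(size=(4, 4, 3))   # toy classification logits (H x W x classes)
att_map = rng.normal(size=(4, 4, 3))   # toy attention logits, one map per class

# Normalise each class's attention map over the spatial positions
att = softmax(att_map.reshape(-1, 3), axis=0).reshape(4, 4, 3)

# Multiply the maps and pool spatially -> one attention-weighted score per class
scores = (cls_map * att).sum(axis=(0, 1))
probs = softmax(scores, axis=0)
```

Inspecting `att` afterwards shows, per class, which spatial regions drove the decision, which is the explainability gain the abstract refers to.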

Optic Disc and Fovea Localisation in Ultra-widefield Scanning Laser Ophthalmoscope Images Captured in Multiple Modalities

We propose a convolutional neural network for localising the centres of the optic disc (OD) and fovea in ultra-wide field of view scanning laser ophthalmoscope (UWFoV-SLO) images of the retina. Images captured in both reflectance and autofluorescence (AF) modes, and with both central-pole and eye-steered gazes, were used. The method achieved an OD localisation accuracy of 99.4% within one OD radius, and a fovea localisation accuracy of 99.1% within one OD radius, on a test set comprising 1790 images. The performance of fovea localisation in AF images was comparable to the variation between human annotators at this task. The laterality of the image (whether it shows the left or right eye) was inferred from the OD and fovea coordinates with an accuracy of 99.9%.

Peter R. Wakeford, Enrico Pellegrini, Gavin Robertson, Michael Verhoek, Alan D. Fleming, Jano van Hemert, Ik Siong Heng

Deploying Deep Learning into Practice: A Case Study on Fundus Segmentation

We study the deployment of a deep learning medical image segmentation pipeline on new input data not contained in the training and evaluation database. As in real applications, although this data shows the same properties, it may stem from a slightly different distribution than the training set because of differences in the hardware setup or environmental conditions. We show that, although cross-validation results suggest high generalization, the segmentation score drops significantly on pre-processed data from a new database. We also observe the positive effects of a short fine-tuning phase after deployment, which seems to be necessary under such conditions. To enable this study, we develop a segmentation pipeline comprising pre-processing steps to homogenize the data contained in 4 databases (DRIONS, DRISHTI-GS, RIM-ONE, REFUGE) and an artificial neural network (NN) segmenting the optic disc and cup. This NN can be trained using exactly the same hyperparameters on all 4 databases, while achieving performance close to state-of-the-art methods specifically designed for the individual databases. Furthermore, we deduce a hierarchy of the 4 databases with respect to the complexity and broadness of the contained samples.

Sören Klemm, Robin D. Ortkemper, Xiaoyi Jiang

Joint Destriping and Segmentation of OCTA Images

As an innovative retinal imaging technology, optical coherence tomography angiography (OCTA) can resolve fine retinal vessels and provide important information about them in a non-invasive and non-contact way. Effective analysis of retinal blood vessels is valuable for the investigation and diagnosis of vascular and vascular-related diseases, for which accurate segmentation is a vital first step. OCTA images are often affected by stripe noise artifacts, which impede correct segmentation and should be removed. To address this issue, we present a two-stage strategy of stripe noise removal by image decomposition followed by segmentation with an active contours approach. We then refine this into a new joint model, which improves the speed of the algorithm while retaining the quality of the segmentation and destriping. We present experimental results on both simulated and real retinal imaging data, demonstrating the effective performance of our new joint model for segmenting vessels in OCTA images corrupted by stripe noise.

Xiyin Wu, Dongxu Gao, Bryan M. Williams, Amira Stylianides, Yalin Zheng, Zhong Jin

Posters

Frontmatter

AI-Based Method for Detecting Retinal Haemorrhage in Eyes with Malarial Retinopathy

Cerebral malaria (CM), one of the most common and severe diseases in sub-Saharan Africa, claims the lives of more than 435,000 people each year. Because malarial retinopathy (MR) is one of the best clinical diagnostic indicators of CM, analysing MR in fundus images to assist CM diagnosis may be an essential and applicable solution in developing countries. Image segmentation is an essential topic in medical image analysis and is widely developed and improved for clinical study. In this paper, we aim to develop an automatic and fast approach to detect and segment MR haemorrhages in colour fundus images. We introduce a deep learning-based haemorrhage detection method for MR, inspired by the DenseNet-based One Hundred Layers Tiramisu network for segmentation tasks. We evaluate our approach on an MR dataset of 259 annotated colour fundus images. To preserve the originality of the raw MR colour fundus images, 6,098 sub-images are extracted and split into a training set (70%), a validation set (10%) and a testing set (20%). Our experimental results on 1,669 annotated test sub-images show that the proposed method outperforms the mainstream U-Net architecture.

Xu Chen, Melissa Leak, Simon P. Harding, Yalin Zheng

An Ensemble of Deep Neural Networks for Segmentation of Lung and Clavicle on Chest Radiographs

Accurate segmentation of lungs and clavicles on chest radiographs plays a pivotal role in screening, diagnosis, treatment planning, and prognosis of many chest diseases. Although a number of solutions have been proposed, both segmentation tasks remain challenging. In this paper, we propose an ensemble of deep segmentation models (enDeepSeg) that combines the U-Net and DeepLabv3+ to address this challenge. We first extract image patches to train the U-Net and DeepLabv3+ model, respectively, and then use the weighted sum of the segmentation probability maps produced by both models to determine the label of each pixel. The weight of each model is adaptively estimated according to its error rate on the validation set. We evaluated the proposed enDeepSeg model on the Japanese Society of Radiological Technology (JSRT) database and achieved an average Jaccard similarity coefficient (JSC) of 0.961 and 0.883 in the segmentation of lungs and clavicles, respectively, which are higher than those obtained by ten lung segmentation and six clavicle segmentation algorithms. Our results suggest that the enDeepSeg model is able to segment lungs and clavicles on chest radiographs with the state-of-the-art accuracy.

Jingyi Zhang, Yong Xia, Yanning Zhang
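The adaptive weighting described above, fusing the U-Net and DeepLabv3+ probability maps with weights derived from validation error rates, can be sketched in a few lines of numpy; note that normalising model accuracies into weights is an assumed choice here, since the abstract does not give the exact formula:

```python
import numpy as np

# Toy per-pixel class probability maps from two models (H x W x classes)
p_unet    = np.array([[[0.8, 0.2], [0.4, 0.6]]])
p_deeplab = np.array([[[0.6, 0.4], [0.3, 0.7]]])

# Weights from validation error rates (lower error -> higher weight);
# normalising the accuracies is an assumption, not the authors' stated scheme.
err = np.array([0.05, 0.10])
w = (1.0 - err) / (1.0 - err).sum()

fused = w[0] * p_unet + w[1] * p_deeplab   # weighted sum of probability maps
labels = fused.argmax(axis=-1)             # per-pixel label from the fused maps
```

The fused map favours whichever model performed better on the validation set while still letting the weaker model break near-ties.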

Automated Corneal Nerve Segmentation Using Weighted Local Phase Tensor

There has been increasing interest in the analysis of corneal nerve fibers to support examination and diagnosis of many diseases, and for this purpose, automated nerve fiber segmentation is a fundamental step. Existing methods of automated corneal nerve fiber detection continue to pose difficulties due to multiple factors, such as poor contrast and fragmented fibers caused by inaccurate focus. To address these problems, in this paper we propose a novel weighted local phase tensor-based curvilinear structure filtering method. This method not only takes into account local phase features using a quadrature filter to enhance edges and lines, but also utilizes the weighted geometric mean of the blurred and shifted responses to allow better tolerance of curvilinear structures with irregular appearances. To demonstrate its effectiveness, we apply this framework to 1578 corneal confocal microscopy images. The experimental results show that the proposed method outperforms existing state-of-the-art methods in applicability, effectiveness, and accuracy.

Kun Zhao, Hui Zhang, Yitian Zhao, Jianyang Xie, Yalin Zheng, David Borroni, Hong Qi, Jiang Liu

Comparison of Interactions Between Control and Mutant Macrophages

This paper presents a preliminary study on macrophage migration in Drosophila embryos, comparing two types of cells. The study is carried out with a framework called macrosight, which analyses the movement and interaction of migrating macrophages. The framework incorporates a segmentation and tracking algorithm to analyse the motion characteristics of cells after contact. In this particular study, the interactions between cells are characterised in control embryos and Shot3 mutants, where the cells have been altered to suppress a specific protein, with the aim of understanding what drives the movement. Statistical significance between control and mutant cells was found when comparing the direction of motion after contact under specific conditions. Such discoveries provide insights for future developments combining biological experiments with computational analysis.

José A. Solís-Lemus, Besaid J. Sánchez-Sánchez, Stefania Marcotti, Mubarik Burki, Brian Stramer, Constantino C. Reyes-Aldasoro

Comparison of Multi-atlas Segmentation and U-Net Approaches for Automated 3D Liver Delineation in MRI

Segmentation of medical images is typically one of the first and most critical steps in medical image analysis. Manual segmentation of volumetric images is labour-intensive and prone to error. Automated segmentation of images mitigates such issues. Here, we compare the more conventional registration-based multi-atlas segmentation technique with recent deep-learning approaches. Previously, 2D U-Nets have commonly been thought of as more appealing than their 3D versions; however, recent advances in GPU processing power, memory, and availability have enabled deeper 3D networks with larger input sizes. We evaluate methods by comparing automated liver segmentations with gold standard manual annotations, in volumetric MRI images. Specifically, 20 expert-labelled ground truth liver labels were compared with their automated counterparts. The data used is from a liver cancer study, HepaT1ca, and as such, presents an opportunity to work with a varied and challenging dataset, consisting of subjects with large anatomical variations resulting from different tumours and resections. Deep-learning methods (3D and 2D U-Nets) proved to be significantly more effective at obtaining an accurate delineation of the liver than the multi-atlas implementation. 3D U-Net was the most successful of the methods, achieving a median Dice score of 0.970. 2D U-Net and multi-atlas based segmentation achieved median Dice scores of 0.957 and 0.931, respectively. Multi-atlas segmentation tended to overestimate total liver volume when compared with the ground truth, while U-Net approaches tended to slightly underestimate the liver volume. Both U-Net approaches were also much quicker, taking around one minute, compared with close to one hour for the multi-atlas approach.

James Owler, Ben Irving, Ged Ridgeway, Marta Wojciechowska, John McGonigle, Sir Michael Brady

DAISY Descriptors Combined with Deep Learning to Diagnose Retinal Disease from High Resolution 2D OCT Images

Optical Coherence Tomography (OCT) is commonly used to visualise the tissue composition of the retina. Previously, deep learning has been used to analyse OCT images and automatically classify scans by the disease they display; however, classification often requires downsampling to much lower dimensions, which loses important features that may contain useful information. In this paper, a method is proposed which incorporates DAISY descriptors as ‘intelligent downsampling’. By avoiding random downsampling, we are able to keep more of the useful information and achieve more accurate results. The proposed method is tested on a publicly available dataset of OCT images from patients with diabetic macular edema, drusen, and choroidal neovascularisation, as well as healthy patients. The method achieves an accuracy of 76.6% and an AUC of 0.935, an improvement over a previously used method based on InceptionV3, which achieved an accuracy of 67.8% and an AUC of 0.912. This shows that DAISY descriptors provide good representations of the image and can be used as an alternative to downsampling.

Joshua Bridge, Simon P. Harding, Yalin Zheng

Segmentation of Left Ventricle in 2D Echocardiography Using Deep Learning

Segmentation of the Left Ventricle (LV) is currently carried out manually by experts, and the automation of this process has proved challenging due to the presence of speckle noise and the inherently poor quality of ultrasound images. This study aims to evaluate the performance of different state-of-the-art Convolutional Neural Network (CNN) segmentation models in automatically segmenting the LV endocardium in echocardiography images. The adopted methods include U-Net, SegNet, and fully convolutional DenseNets (FC-DenseNet). The prediction outputs of the models are used to assess the performance of the CNN models by comparing the automated results against the expert annotations (as the gold standard). Results reveal that the U-Net model outperforms the other models, achieving an average Dice coefficient of 0.93 ± 0.04 and a Hausdorff distance of 4.52 ± 0.90.

Neda Azarmehr, Xujiong Ye, Stefania Sacchi, James P. Howard, Darrel P. Francis, Massoud Zolgharni
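The Hausdorff distance quoted above measures the worst-case disagreement between two contours; a minimal numpy sketch over hypothetical 2D point sets (not the study's contours):

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (N x d and M x d)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.array([[0.0, 0.0], [1.0, 0.0]])  # toy contour points
b = np.array([[0.0, 0.0], [3.0, 0.0]])
hd = hausdorff(a, b)  # farthest b point (3, 0) is 2.0 from its nearest a point
```

Unlike the Dice coefficient, which rewards bulk overlap, the Hausdorff distance is driven by the single worst boundary error, so the two metrics together give a fuller picture of segmentation quality.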

Backmatter
