
2018 | Book

Medical Image Understanding and Analysis

22nd Conference, MIUA 2018, Southampton, UK, July 9-11, 2018, Proceedings


About this book

This book constitutes the refereed proceedings of the 22nd Annual Conference on Medical Image Understanding and Analysis, MIUA 2018, held in Southampton, UK, in July 2018. The 34 revised full papers presented were carefully reviewed and selected from 49 submissions. The papers are organized in topical sections on deep learning in medical imaging, liver analysis, medical image analysis, texture and image analysis, MRI: applications and techniques, segmentation in medical images, CT: learning and planning, ocular imaging analysis, and applications of medical image analysis.

Table of Contents

Frontmatter

Deep Learning in Medical Imaging

Frontmatter
A Multi-resolution Deep Learning Framework for Lung Adenocarcinoma Growth Pattern Classification

Identifying lung adenocarcinoma growth patterns is critical for the diagnosis and treatment of lung cancer patients. Growth patterns have variable texture, shape, size and location. They can appear individually or fused together in a way that makes it difficult to avoid inter- and intra-observer variability in pathologists' reports. Thus, employing a machine learning method to learn these patterns and automatically locate them within the tumour is indeed necessary. This will reduce the effort and assessment variability, and provide a second opinion to support pathologists' decisions. To the best of our knowledge, no work has been done to classify growth patterns in lung adenocarcinoma. In this paper, we propose applying a deep learning framework to perform lung adenocarcinoma pattern classification. We investigate what contextual information is adequate for training using patches extracted at several resolutions. We find that both cellular and architectural morphology features are required to achieve the best performance. Therefore, we propose using a multi-resolution deep CNN for growth pattern classification in lung adenocarcinoma. Our preliminary results show an increase in the overall classification accuracy.

Najah Alsubaie, Muhammad Shaban, David Snead, Ali Khurram, Nasir Rajpoot
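As an illustration of the multi-resolution patch extraction described in the abstract above, the sketch below crops co-centred patches at several magnifications from a single slide array; the function name, the downsampling-by-striding shortcut and all parameter values are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def multi_resolution_patches(slide, centre, patch_size=224, levels=(1, 2, 4)):
    """Extract co-centred patches at several magnifications (illustrative sketch).

    `slide` is a 2D numpy array standing in for a whole-slide image region and
    `levels` are downsampling factors approximating different resolutions.
    """
    cy, cx = centre
    patches = []
    for level in levels:
        half = (patch_size * level) // 2
        # Crop a wider field of view at lower magnification ...
        crop = slide[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
        # ... then subsample it back to a common patch size
        patch = crop[::level, ::level][:patch_size, :patch_size]
        patches.append(patch)
    return patches  # one patch per resolution, e.g. one per CNN input branch
```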
Convolutional Neural Networks for Efficient Localization of Interstitial Lung Disease Patterns in HRCT Images

Lung field segmentation is the first step towards the development of any computer aided diagnosis (CAD) system for interstitial lung diseases (ILD) observed in chest high resolution computed tomography (HRCT) images. If the segmentation is not done efficiently, it will compromise the accuracy of the CAD system. In this paper, a deep learning-based method is proposed to localize several ILD patterns in HRCT images without performing lung field segmentation; localization is performed on each image slice. Pretrained ZF and VGG networks were fine-tuned to localize ILD patterns using the Faster R-CNN framework. The three most difficult ILD patterns, consolidation, emphysema, and fibrosis, were used for this study, and the accuracy of the method was evaluated in terms of mean average precision (mAP) and the free-response receiver operating characteristic (FROC) curve. The model achieved mAP values of 75% and 83% with the ZF and VGG networks, respectively. The results obtained show the effectiveness of the method in the localization of different ILD patterns.

Sunita Agarwala, Abhishek Kumar, Debashis Nandi, Ashis Kumar Dhara, Anup Sadhu, Sumitra Basu Thakur, Ashok Kumar Bhadra
Segmenting Nucleus Membranes in SBFSEM Volume Data with Deep Neural Networks

We describe and evaluate a method for segmenting cell nuclei from serial block-face scanning electron microscope volumes. The nucleus is a roughly ellipsoidal structure near the centre of each cell, appearing as an irregular ellipse in each image slice. It is common to segment it manually, which is very time-consuming. We use a Convolutional Neural Network to locate the boundary of the nuclei in each image slice. Geometric constraints are used to discard false matches. The full 3D shape of each nucleus is reconstructed by linking the boundaries in neighbouring slices. We demonstrate and evaluate the system on several large image volumes.

Yassar Almutairi, Tim Cootes, Karl Kadler

Special Session: Liver Analysis

Frontmatter
Novel Quantitative Magnetic Resonance Imaging Features with Liver Function Tests to Distinguish Parenchymal and Biliary Disease

Liver disease affects millions of people worldwide, and autoimmune disease in particular has unmet needs for improved non-invasive methods of risk stratification, especially in cases where clinical markers are inconclusive. In this study we develop novel imaging features for quantitative MRI and show that these features improve the differentiation of autoimmune hepatitis (AIH) from biliary disease in challenging cases: combining imaging features with clinical markers improved the AUROC from 0.76 to 0.85.

Katherine Arndtz, Benjamin Irving, Peter Eddowes, Dan Green, Matt Kelly, Naomi Jayaratne, Rajarshi Banerjee, Sir Michael Brady, Gideon M. Hirschfield
Regional Assessment of Liver Disease Progression and Response to Therapy by Multi-time Point m-SLIC Correspondence

Liver disease has reached worryingly high levels worldwide and there is a need for better analysis to monitor progression of disease and response to therapy. Quantitative imaging such as corrected T1 and PDFF can accurately quantify levels of inflammation/fibrosis and fat. In this study we develop a method to assess regional change throughout the liver to characterise disease change. We show that this method is stable in healthy test-retest cases but is able to characterise change in disease in autoimmune hepatitis cases.

Benjamin Irving, Chloe Hutton, Katherine Arndtz, Naomi Jayaratne, Matt Kelly, Rajarshi Banerjee, Gideon M. Hirschfield, Sir J. Michael Brady
Automated Detection of Cystic Lesions in Quantitative T1 Liver Images

Differentiating between cysts and other liver lesions seen on magnetic resonance images is an important diagnostic problem. Quantitative T1 mapping enables characterisation of liver tissue in vivo, by providing an estimate of the extracellular water content, and can be used in analysis of liquid-filled cysts. This paper presents an image processing method to automatically detect hepatic cysts in such quantitative T1 maps. The method has been tested on a cohort of 926 cases from the UK Biobank study and classified images as containing cysts or cyst-free with accuracy of 83%, sensitivity of 93% and specificity of 82%.

Marta Wojciechowska, Benjamin Irving, Andrea Dennis, Henry R. Wilman, Rajarshi Banerjee, Sir Michael Brady, Matt Kelly
Voxel-Wise Analysis of Paediatric Liver MRI

Paediatric liver disease is a growing problem, which would benefit from non-invasive techniques for early detection and treatment monitoring. Multiparametric quantitative MRI has shown promise for measuring liver steatosis, inflammation and fibrosis in adults, but is likely to need modification for children. The Kids4LIFe project (NCT03198104) aims to adapt and validate LiverMultiScan™ from Perspectum Diagnostics for paediatric applications, characterising healthy liver development and a range of diseases. The analysis of LiverMultiScan™ images usually focuses on a few regions of interest, or on distributional features of the segmented liver parenchyma. The present work is an initial investigation into the use of voxel-wise statistical analysis in atlas space, following nonlinear image registration, with the aim of localising effects (developmental or disease-related), as commonly done in neuroimaging. Preliminary results show statistically significant effects that warrant further characterisation, and suggest atlas-based analysis is a useful complement to current approaches.

Ged Ridgway, Kamil Janowski, Andrea Dennis, Velicia Bachtiar, John McGonigle, Chris Everitt, Angela Darekar, David Breen, Dan Green, Stefan Neubauer, Rajarshi Banerjee, Piotr Socha, Sir Michael Brady

Medical Image Analysis

Frontmatter
Quantitative Evaluation of Correction Methods and Simulation of Motion Artifacts for Rotary Pullback Imaging Catheters

In this work, we present a quantitative study based on a ground truth image with artificial motion artifacts and their correction using an azimuthal en face image registration (AEIR) method. Motion artifacts in in vivo imaging make identification of features and structures such as blood vessels challenging. Correcting the distortion of tissue features caused by motion artifacts may enhance image quality and the interpretation of images. Optical coherence tomography (OCT) and autofluorescence imaging (AFI) have been reported for in vivo endoscopic imaging. Motion artifacts in pulmonary OCT-AFI data sets may be estimated from both AFI and OCT images based on azimuthal registration of slowly varying structures in the 2D en face image. In our previous work, we described a simulation of motion artifacts for 3D or 2D rotational catheter data and the AEIR method for correcting them. Our simulated artifacts may be applied to a ground truth image to create an image with known artifacts for the quantitative evaluation of the performance of correction methods. Since there might be some non-visible motion artifacts in the original ground truth image, we need to apply the correction method before applying the simulated artifacts. However, there is no guarantee that this process converges to a motion-free scan; the pre-corrected ground truth image is therefore subjected to the correction method for further quantitative analysis. Here, we present quantitative evaluations on ground truth images of an in silico phantom, a NURD phantom, and in vivo OCT and AF images.

Elham Abouei, Anthony M. D. Lee, Geoffrey Hohert, Pierre Lane, Stephen Lam, Calum MacAulay
A Voting-Based Encoding Technique for the Classification of Gleason Score for Prostate Cancers

We present a novel approach for classifying the Gleason score of prostate tumours based on MRI data. The proposed approach uses three scores: 2, 3 and 4–5 (representing Gleason scores 4 and 5 as one single class). Patches are extracted from annotated MRI data for each of the classes. Raw image patches are used as features, instead of manually hand-crafted features. Each patch is encoded using a dictionary and the encoded feature vector is then used for classification. A voting-based encoding approach is used to transform data from the image domain to more discriminative class-specific representations. An initial investigation demonstrated excellent results (classification accuracy of 85% and area under the ROC curve (AUC) of 0.932) for 3-class Gleason score classification of prostate tumours.

Zobia Suhail, Arif Mahmood, Liping Wang, Paul N. Malcolm, Reyer Zwiggelaar
Segmentation of Lung Field in HRCT Images Using U-Net Based Fully Convolutional Networks

Segmentation is a preliminary step towards the development of an automated computer aided diagnosis (CAD) system, whose accuracy and efficiency depend primarily on accurate segmentation results. Effective lung field segmentation is a major challenge, especially in the presence of different types of interstitial lung diseases (ILD). At present, high resolution computed tomography (HRCT) is considered to be the best imaging modality for observing ILD patterns. The most common patterns, based on their textural appearance, are consolidation, emphysema, fibrosis, ground glass opacity (GGO), reticulation and micronodules. In this paper, automatic lung field segmentation of pathological lungs is performed using U-Net based deep convolutional networks. Our proposed model has been evaluated on the publicly available MedGIFT database, and the segmentation result was evaluated in terms of the dice similarity coefficient (DSC). The experimental results obtained on 330 testing images of different patterns achieve an average DSC of 94%.

Abhishek Kumar, Sunita Agarwala, Ashis Kumar Dhara, Debashis Nandi, Sumitra Basu Thakur, Ashok Kumar Bhadra, Anup Sadhu
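For reference, the Dice similarity coefficient used for evaluation above can be computed from two binary masks as in the short sketch below; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Average DSC over a set of test images (predictions/ground_truths are lists of masks):
# mean_dsc = np.mean([dice_coefficient(p, g) for p, g in zip(predictions, ground_truths)])
```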
Identifying Shape-Based Biomarkers for Diagnosis of Parkinson’s Disease from Ioflupane (123I) SPECT Data

Parkinson’s disease is a progressive, incurable neurodegenerative condition affecting movement which has been linked to poor quality of life and considerable socio-economic burdens. To date, treatment can at best slow down the degradation process. However, successful disease management is subject to early detection of the disease which, in turn, depends on the diagnostic process. Clinical investigation alone has proven insufficient in discriminating between early Parkinson’s disease and essential tremor. Functional neuroimaging circumvents this problem by visualising dopamine transporter concentrations in the brain, providing a differential even during the early stages of the disease. Yet the traditional visual assessment of SPECT data introduces subjectivity and susceptibility to variation whilst being impractical for monitoring and assessing disease progression. This work presents a machine-learning approach to the assessment of three-dimensional SPECT data. The system extracts intensity and shape information from the data following binarisation, using an experimental approach to identify an optimal threshold. The striatal binding ratio is calculated from the three-dimensional data rather than the two-dimensional clinical standard. The resulting semi-quantitative measure and the extracted intensity and shape information are collectively used as data features and passed to a support vector machine to classify between positive and negative cases of Parkinson’s disease. The classification system is reported to attain an average accuracy of 97%, with 96.6% sensitivity and 97.8% specificity. This shows an improvement over the clinical standard visual assessment, which reportedly attained 94% sensitivity and 92% specificity.

Vanessa Azzopardi, Matthew Guy, Emma Lewis
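A minimal sketch of the semi-quantitative measure mentioned above, assuming the conventional definition of the striatal binding ratio as (mean striatal uptake − mean background uptake) / mean background uptake computed directly on the 3D volume; the mask-based implementation and names are illustrative, not the authors' code.

```python
import numpy as np

def striatal_binding_ratio(volume, striatal_mask, background_mask):
    """Striatal binding ratio from a 3D SPECT volume and binary 3D masks."""
    striatal = volume[striatal_mask > 0].mean()
    background = volume[background_mask > 0].mean()
    return (striatal - background) / background

# Such measures, together with intensity and shape features, could then be fed
# to a support vector machine, e.g. sklearn.svm.SVC, for classification.
```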
Features for the Detection of Flat Polyps in Colonoscopy Video

Colorectal cancer is the second most common cause of cancer death in the United States, with an estimated 140,000 new cases leading to 50,000 deaths this year. The best treatment is to detect and treat the cancer before it becomes invasive and spreads. The most common form of detection is optical colonoscopy, in which the clinician visually inspects the surface of the colon through an endoscope to detect the presence of polyps. Studies have shown that even the best clinicians will sometimes miss polyps, especially the more subtle flat polyps, and that many cancers that develop in the years immediately following a colonoscopy likely originate from missed polyps. In this paper we describe techniques for extracting several medically-driven features from colonoscopy video that can be used to detect the presence of flat polyps. Initial quantitative and qualitative results show that each of these features on its own provides some level of discrimination and that, when combined, they have the potential to support robust detection of flat polyps.

Miao Fan, Jared Vicory, Sarah McGill, Stephen Pizer, Julian Rosenman
Feature Analysis of Biomarker Descriptors for HER2 Classification of Histology Slides

Whole slide images (WSI) of histology slides are increasingly being used for computer assisted evaluations, automated grading and classification. In this rapidly evolving research field, several classification algorithms and feature descriptors have been reported for histopathological analysis. While some algorithms use pixel values of entire images as features, other methods try to use specific biomarker related features. This paper analyses in detail feature descriptors that have been found to be efficient in classifying ImmunoHistoChemistry (IHC) stained slides. These features are directly related to the Human Epidermal Growth Factor Receptor 2 (HER2) biomarkers that are commonly used for grading such slides. Characteristic curves are intensity features that encode information about the variation of the percentage of stained membrane regions with saturation levels. The uniform Local Binary Patterns (ULBP) are texture features extracted from stained regions. ULBP contains several components and generates a high dimensional feature vector that needs to be compressed. Fisher Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA) are used to select feature components important in classification. The paper proposes a method to combine different types of features (e.g., intensity and texture) after dimensionality reduction, and to improve classification accuracy by maximizing inter-class separability. The paper also discusses methods to visualize the class-wise distribution of the computed feature vectors. An experimental analysis performed using a WSI dataset of IHC stained slides and the aforementioned features is also presented.

Ramakrishnan Mukundan
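One possible reading of the dimensionality reduction and feature combination step described above is sketched below, with scikit-learn as an assumed implementation choice and all parameter values illustrative rather than taken from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def combine_features(intensity_feats, texture_feats, labels, n_pca=20):
    """Compress high-dimensional ULBP texture vectors, concatenate them with
    intensity (characteristic-curve) features, and project onto directions
    that maximise inter-class separability."""
    texture_reduced = PCA(n_components=n_pca).fit_transform(texture_feats)
    combined = np.hstack([intensity_feats, texture_reduced])
    lda = LinearDiscriminantAnalysis()   # yields at most n_classes - 1 components
    return lda.fit_transform(combined, labels)
```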
Analysis of the Symmetry of Electrodes for Electropalatography with Cone Beam CT Scanning

The process of air compression and vibration in the larynx through which speech is produced is of great interest in phonetics, phonology and psychology, and is related to various areas of biomedical engineering as it has a strong relationship with cochlear implants, Parkinson’s disease and stroke. One technique by means of which speech production is analysed is electropalatography, in which an artificial palate, moulded to the speaker’s hard palate, is introduced into the mouth. The palate contains a series of electrodes which monitor contact between the tongue and the palate during speech production. There is interest in the symmetry or asymmetry of the movement of the tongue, as this may be related to language or to right- or left-handedness; however, this has never been thoroughly studied. A specific limitation of electropalatography for symmetry studies is that palates are hand-crafted and the positions of the electrodes themselves may be asymmetric. In this work, we analyse the positioning of the electrodes of one electropalatography setting. The symmetry was analysed by locating the electrodes of the palate through observation with computed tomography. An algorithm to segment the electrodes and find the symmetry of the left and right sides of the palate is described. No significant asymmetry was found for one specific palate. The methodology presented should allow the analysis of palates to be used in larger studies of speech production.

Jo Verhoeven, Naomi Rachel Miller, Constantino Carlos Reyes-Aldasoro

Texture and Image Analysis

Frontmatter
Breast Tissue Classification Using Local Binary Pattern Variants: A Comparative Study

Mammographic tissue density is considered to be one of the major risk factors for developing breast cancer. In this paper we use quantitative measurements of Local Binary Patterns and its variants for breast tissue classification. We compare the classification results of LBP, ELBP, Uniform ELBP and M-ELBP for classifying mammograms as fatty, glandular and dense. A Bayesian-Network classifier is used with stratified ten-fold cross-validation. The experimental results indicate that ELBP patterns at different orientations extract more relevant elliptical breast tissue information from the mammograms indicating the importance of directional filters for breast tissue classification.

Minu George, Reyer Zwiggelaar
Volumetric Texture Analysis Based on Three-Dimensional Gaussian Markov Random Fields for COPD Detection

This paper proposes a 3D GMRF-based descriptor for volumetric texture image classification. In our proposed method, the estimated parameters of the GMRF model in volumetric texture images are employed as texture features in addition to the mean of a processed image region. The descriptor of the volumetric texture is then constructed by computing the histograms of each feature element to characterize the local texture. The evaluation of this descriptor achieves a high classification accuracy on a 3D synthetic texture database. Our method is then applied on a clinical dataset to exploit its discriminatory power, achieving a high classification accuracy in COPD detection. To demonstrate the performance of the descriptor, a comparison is carried out against a 2D GMRF-based method using the same dataset, variables, and settings. The descriptor outperforms the 2D GMRF-based method by a significant margin.

Yasseen Almakady, Sasan Mahmoodi, Joy Conway, Michael Bennett
Texture Descriptors for Classifying Sparse, Irregularly Sampled Optical Endomicroscopy Images

Optical endomicroscopy (OEM) is a novel real-time imaging technology that provides endoscopic images at the microscopic level. Clinical OEM procedures generate large datasets making their post procedural analysis a subjective and laborious task. There has been effort to automatically classify OEM frame sequences into relevant classes in aid of a fast and reliable diagnosis. Most existing classification approaches adopt established texture metrics, such as Local Binary Patterns (LBPs) derived from the regularly sampled grid images. However, due to the nature of image transmission through coherent fibre bundles, raw OEM data are sparsely and irregularly sampled, post-processed to a regularly sampled grid image format. This paper adapts Local Binary Patterns, a commonly used image texture descriptor, taking into consideration the sparse, irregular sampling imposed by the imaging fibre bundle on OEM images. The performance of Sparse Irregular Local Binary Patterns (SILBP) is assessed in conjunction with widely used classifiers, including Support Vector Machines, Random Forests and Linear Discriminant Analysis, for the detection of uninformative frames (i.e. noise and motion-artefacts) within pulmonary OEM frame sequences. Uninformative frames can comprise a considerable proportion of a dataset, increasing the resources required to analyse the data and impacting on any automated quantification analysis. SILBPs achieve comparable performance to the optimal LBPs (>92% overall accuracy), while employing <13% of the associated data.

Oleksii Leonovych, Mohammad Rami Koujan, Ahsan Akram, Jody Westerfeld, David Wilson, Kevin Dhaliwal, Stephen McLaughlin, Antonios Perperidis
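For context, the regular-grid uniform LBP descriptor that the paper adapts for sparse, irregular sampling can be computed as in the sketch below, using scikit-image as an assumed implementation; the radius and number of neighbours are illustrative.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, radius=1, n_points=8):
    """Normalised histogram of uniform LBP codes over a regularly sampled image."""
    codes = local_binary_pattern(gray_image, n_points, radius, method="uniform")
    n_bins = n_points + 2  # uniform patterns plus one bin for non-uniform codes
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist  # feature vector for an SVM / random forest / LDA classifier
```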
What’s in a Smile? Initial Results of Multilevel Principal Components Analysis of Facial Shape and Image Texture

Multilevel principal components analysis (mPCA) has previously been shown to provide a simple and straightforward method of forming point distribution models that can be used in (active) shape models. Here we extend the mPCA approach to model image texture as well as shape. As a test case, we consider a set of (2D frontal) facial images from a group of 80 Finnish subjects (34 male; 46 female) with two different facial expressions (smiling and neutral) per subject. Shape (in terms of landmark points) and image texture are considered separately in this initial analysis. Three-level models are constructed that contain levels for biological sex, “within-subject” variation (i.e., facial expression), and “between-subject” variation (i.e., all other sources of variation). By considering eigenvalues, we find that the order of importance as sources of variation for facial shape is: facial expression (47.5%), between-subject variations (45.1%), and then biological sex (7.4%). By contrast, the order for image texture is: between-subject variations (55.5%), facial expression (37.1%), and then biological sex (7.4%). The major modes for the facial expression level of the mPCA models clearly reflect changes in increased mouth size and increased prominence of cheeks during smiling for both shape and texture. Even subtle effects such as changes to eyes and nose shape during smile are seen clearly. The major mode for the biological sex level of the mPCA models similarly relates clearly to changes between male and female. Model fits yield “scores” for each principal component that show strong clustering for both shape and texture by biological sex and facial expression at appropriate levels of the model. We conclude that mPCA correctly decomposes sources of variation due to biological sex and facial expression (etc.) and that it provides a reliable method of forming models of both shape and image texture.

D. J. J. Farnell, J. Galloway, A. Zhurov, S. Richmond, P. Pirttiniemi, Raija Lähdesmäki
An Automated Age-Related Macular Degeneration Classification Based on Local Texture Features in Optical Coherence Tomography Angiography

In this paper, an age-related macular degeneration (AMD) classification algorithm based on local texture features is proposed to support the automated analysis of optical coherence tomography angiography (OCTA) images in wet AMD. The algorithm is based on rotation invariant uniform Local Binary Patterns (LBP^{riu2}) as a texture measurement technique. It was chosen due to its computational simplicity and its invariance against any transformation of the grey level as well as against texture orientation change. The texture features are extracted from the whole image without targeting a particular area. The algorithm was tested on two-dimensional angiogram greyscale images of four different retinal layers acquired via OCTA scan. The evaluation was performed using a ten-fold cross-validation strategy applied to a set of 184 OCTA images consisting of 92 normal control and 92 wet AMD images. The classification was performed on each separate retinal layer, and on all layers together. According to the results, the algorithm was able to achieve a promising performance with mean accuracy of 89% for all layers together and 89%, 94%, 98% and 100% for the superficial, deep, outer and choriocapillaris layers respectively.

Abdullah Alfahaid, Tim Morris

MRI: Applications and Techniques

Frontmatter
Developing a Framework for Studying Brain Networks in Neonatal Hypoxic-Ischemic Encephalopathy

Newborns with hypoxic-ischemic encephalopathy (HIE) are at high risk of brain injury, with subsequent developmental problems including severe neuromotor, cognitive and behavioral impairment. Neural correlates of cognitive and behavioral impairment in neonatal HIE, in particular in infants who survive without severe neuromotor impairment, are poorly understood. It is reasonable to hypothesize that in HIE both structural and functional brain networks are altered, and that this might be the neural correlate of cognitive and/or behavioral impairment in HIE. Here, an analysis pipeline to study the structural and functional brain networks from neonatal MRI in newborns with HIE is presented. The structural connectivity is generated from dense whole-brain tractograms derived from diffusion-weighted MR fibre tractography. The investigation of functional connectivity focuses on the emerging resting state networks (RSNs), which are sensitive to injuries from hypoxic-ischemic insults to the newborn brain. In conjunction with the structural connectivity, alterations to the structuro-functional connectivity of the RSNs can be studied. Preliminary results from a proof-of-concept study in a small cohort of newborns with HIE are promising. The obstacles encountered and improvements to the pipeline are discussed. The framework can be further extended for joint analysis with EEG functional connectivity.

Finn Lennartsson, Angela Darekar, Koushik Maharatna, Daniel Konn, David Allen, J-Donald Tournier, John Broulidakis, Brigitte Vollmer
MRI Super-Resolution Using Multi-channel Total Variation

This paper presents a generative model for super-resolution in routine clinical magnetic resonance images (MRI), of arbitrary orientation and contrast. The model recasts the recovery of high resolution images as an inverse problem, in which a forward model simulates the slice-select profile of the MR scanner. The paper introduces a prior based on multi-channel total variation for MRI super-resolution. Bias-variance trade-off is handled by estimating hyper-parameters from the low resolution input scans. The model was validated on a large database of brain images. The validation showed that the model can improve brain segmentation, that it can recover anatomical information between images of different MR contrasts, and that it generalises well to the large variability present in MR images of different subjects.

Mikael Brudfors, Yaël Balbastre, Parashkev Nachev, John Ashburner
Manifold Modeling of the Beating Heart Motion

Modeling the heart motion has important applications for diagnosis and intervention. We present a new method for modeling the deformation of the myocardium over the cardiac cycle. Our approach is based on manifold learning to build a representation of shape variation across time. We experiment with various manifold types to identify the best manifold method, using real patient data extracted from cine MRIs. We obtain a representation, common to all subjects, that can discriminate cardiac cycle phases and heart function types.

Paul Stroe, Xianghua Xie, Adeline Paiement
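As a hedged sketch of the manifold-learning step, the snippet below embeds per-frame shape vectors with two candidate manifold methods using scikit-learn; the choice of Isomap/LLE, the neighbourhood size and the data layout are assumptions for illustration, not the authors' final configuration.

```python
from sklearn.manifold import Isomap, LocallyLinearEmbedding

def embed_shapes(shape_matrix, n_components=2, method="isomap"):
    """Embed myocardial shape vectors, one row per cardiac phase/frame,
    into a low-dimensional manifold representation."""
    if method == "isomap":
        model = Isomap(n_neighbors=8, n_components=n_components)
    else:
        model = LocallyLinearEmbedding(n_neighbors=8, n_components=n_components)
    return model.fit_transform(shape_matrix)
```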

Segmentation in Medical Images

Frontmatter
Automated Segmentation of HeLa Nuclear Envelope from Electron Microscopy Images

This paper describes an image-processing pipeline for the automatic segmentation of the nuclear envelope of HeLa cells observed through Electron Microscopy. The pipeline was applied to a 3D stack of 300 images. The intermediate results of neighbouring slices are further combined to improve the final results. Comparison with a hand-segmented ground truth reported Jaccard similarity values between 94–98% on the central slices with a decrease towards the edges of the cell where the structure was considerably more complex. The processing is unsupervised and each 2D slice is processed in about 5–10 s running on a MacBook Pro. No systematic attempt to make the code faster was made. These encouraging results could be further used to provide data for more complex segmentation techniques like Deep Learning, which require a considerable amount of data to train architectures like Convolutional Neural Networks. The code is freely available from https://github.com/reyesaldasoro/HeLa-Cell-Segmentation.

Cefa Karabağ, Martin L. Jones, Christopher J. Peddie, Anne E. Weston, Lucy M. Collinson, Constantino Carlos Reyes-Aldasoro
Automatic Segmentation of Microcalcification Clusters

Early detection of microcalcification (MC) clusters plays a crucial role in enhancing breast cancer diagnosis. Two automated MC cluster segmentation techniques are proposed based on morphological operations that incorporate image decomposition and interpolation methods. For both approaches, initially the contrast between the background tissue and MC cluster was increased and subsequently morphological operations were used. Evaluation was based on the Dice similarity scores and the results of MC cluster classification. A total number of 248 (131 benign and 117 malignant) and 24 (12 benign and 12 malignant) biopsy-proven digitized mammograms were considered from the DDSM and MIAS databases, which showed a classification accuracy of 94.48 ± 1.11% and 100.00 ± 0.00% respectively.

Nashid Alam, Arnau Oliver, Erika R. E. Denton, Reyer Zwiggelaar
Analysis of the Interactions of Migrating Macrophages

Understanding the migration patterns of cells in the immune system is of great importance, especially changes of direction and their cause. For macrophages and other immune cells, excessive migration could be related to autoimmune diseases and cancer. In this work, an algorithm to analyse the change in direction of cells before and after they interact with another cell is proposed. The main objective is to provide insights into the notion that interactions between cell structures appear to anticipate migration. Such interactions are determined when the cells overlap and form clumps of two or more cells. The algorithm integrates a segmentation technique capable of detecting overlapping cells and a tracking framework into a tool for the analysis of the trajectories of cells before and after they overlap. The preliminary results show promise for the analysis and the proposed hypothesis, and lay the groundwork for further developments.

José Alonso Solís-Lemus, Brian Stramer, Greg Slabaugh, Constantino Carlos Reyes-Aldasoro

CT: Learning and Planning

Frontmatter
An Adaptive Sampling Scheme to Efficiently Train Fully Convolutional Networks for Semantic Segmentation

Deep convolutional neural networks (CNNs) have shown excellent performance in object recognition tasks and dense classification problems such as semantic segmentation. However, training deep neural networks on large and sparse datasets is still challenging and can require large amounts of computation and memory. In this work, we address the task of performing semantic segmentation on large data sets, such as three-dimensional medical images. We propose an adaptive sampling scheme that uses a posteriori error maps, generated throughout training, to focus sampling on difficult regions, resulting in improved learning. Our contribution is threefold: (1) We give a detailed description of the proposed sampling algorithm to speed up and improve learning performance on large images. (2) We propose a deep dual-path CNN that captures information at fine and coarse scales, resulting in a network with a large field of view and high-resolution outputs. (3) We show that our method is able to attain new state-of-the-art results on the VISCERAL Anatomy benchmark.

Lorenz Berger, Hyde Eoin, M. Jorge Cardoso, Sébastien Ourselin
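A minimal sketch of the error-driven sampling idea described above: patch centres are drawn with probability proportional to a per-voxel error map, so that training concentrates on regions the network still gets wrong. The weighting scheme and names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def sample_patch_centres(error_map, n_samples, rng=None):
    """Draw patch-centre coordinates with probability proportional to the
    current error map (any non-negative per-voxel error measure)."""
    rng = rng or np.random.default_rng()
    weights = np.maximum(error_map, 0).ravel()
    probs = weights / weights.sum()
    flat_idx = rng.choice(error_map.size, size=n_samples, p=probs)
    return np.column_stack(np.unravel_index(flat_idx, error_map.shape))
```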
A Comparison of Unsupervised Abnormality Detection Methods for Interstitial Lung Disease

Abnormality detection, also known as outlier detection or novelty detection, seeks to identify data that do not match an expected distribution. In medical imaging, this could be used to find data samples with possible pathology or, more generally, to exclude samples that are normal. This may be done by learning a model of normality, against which new samples are evaluated. In this paper four methods, each representing a different family of techniques, are compared: one-class support vector machine, isolation forest, local outlier factor, and fast-minimum covariance determinant estimator. Each method is evaluated on patches of CT interstitial lung disease where the patches are encoded with one of four embedding methods: principal component analysis, kernel principal component analysis, a flat autoencoder, and a convolutional autoencoder. The data consist of 5500 healthy patches from one patient cohort defining normality, and 2970 patches from a second patient cohort with emphysema, fibrosis, ground glass opacity, and micronodule pathology representing abnormality. From this second cohort 1030 healthy patches are used as an evaluation dataset. Evaluation considers both accuracy (area under the ROC curve) and runtime efficiency. The fast-minimum covariance determinant estimator is demonstrated to have fair time scaling with dataset dimensionality, while the isolation forest and one-class support vector machine scale well with dimensionality. The one-class support vector machine is the most accurate, closely followed by the isolation forest and fast-minimum covariance determinant estimator. The embeddings from kernel principal component analysis are the most generally useful.

Matt Daykin, Mathini Sellathurai, Ian Poole
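The four families of detectors compared above are all available in scikit-learn; the sketch below shows how one might fit them on embeddings of healthy patches and score new patches. Hyper-parameters and sign conventions are chosen for illustration only, and sklearn's MinCovDet (an implementation of the FastMCD estimator) stands in for the paper's fast-minimum covariance determinant method.

```python
from sklearn.svm import OneClassSVM
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.covariance import MinCovDet

def fit_detectors(healthy_embeddings):
    """Fit each abnormality detector on embeddings of healthy patches only."""
    return {
        "oc-svm": OneClassSVM(kernel="rbf", nu=0.1).fit(healthy_embeddings),
        "iforest": IsolationForest(random_state=0).fit(healthy_embeddings),
        "lof": LocalOutlierFactor(novelty=True).fit(healthy_embeddings),
        "mcd": MinCovDet().fit(healthy_embeddings),
    }

def abnormality_scores(detectors, embeddings):
    """Score new patch embeddings; higher score = more abnormal."""
    return {
        "oc-svm": -detectors["oc-svm"].decision_function(embeddings),
        "iforest": -detectors["iforest"].decision_function(embeddings),
        "lof": -detectors["lof"].decision_function(embeddings),
        "mcd": detectors["mcd"].mahalanobis(embeddings),  # distance to healthy model
    }
```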
SoftCut: A Virtual Planning Tool for Soft Tissue Resection on CT Images

With the increasing use of three-dimensional (3D) models and Computer Aided Design (CAD) in the medical domain, virtual surgical planning is now frequently used. Most current solutions focus on bone surgical operations. However, for head and neck oncologic resection, soft tissue ablation and reconstruction are common operations. In this paper, we propose a method to provide a fast and efficient estimation of the shape and dimensions of soft tissue resections. Our approach takes advantage of a simple sketch-based interface which allows the user to paint the contour of the resection on a patient-specific 3D model reconstructed from a computed tomography (CT) scan. The volume is then virtually cut and carved following this pattern. From the outline of the resection defined on the skin surface as a closed curve, we can identify which areas of the skin are inside or outside this shape. We then use distance transforms to identify the soft tissue voxels which are closer to the inside of this shape. Thus, we can propagate the shape of the resection inside the soft tissue layers of the volume. We demonstrate the usefulness of the method on patient-specific CT data.

Ludovic Blache, Fredrik Nysjö, Filip Malmberg, Andreas Thor, Andrés Rodríguez Lorenzo, Ingela Nyström
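The distance-transform step described above can be sketched as follows, assuming binary volumes marking skin-surface voxels inside and outside the drawn outline plus a soft-tissue mask; SciPy's Euclidean distance transform is an assumed implementation choice, not necessarily the one used in the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def propagate_resection(soft_tissue_mask, inside_skin, outside_skin):
    """Keep the soft-tissue voxels whose nearest skin-surface point lies inside
    the sketched resection outline, propagating the cut into the volume."""
    # Distance of every voxel to the nearest "inside" / "outside" skin voxel
    dist_to_inside = distance_transform_edt(~inside_skin.astype(bool))
    dist_to_outside = distance_transform_edt(~outside_skin.astype(bool))
    return soft_tissue_mask.astype(bool) & (dist_to_inside < dist_to_outside)
```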

Special Session: Ocular Imaging Analysis

Frontmatter
Simultaneous Segmentation of Multiple Retinal Pathologies Using Fully Convolutional Deep Neural Network

The segmentation of retinal pathologies is a primary and essential step in the development of automated diagnostic systems for various systemic, cardiovascular, and ophthalmic diseases. The existing state-of-the-art machine learning based retinal pathology segmentation techniques mainly aim at delineating pathology of one kind only. In this context, we have proposed a novel end-to-end technique for simultaneous segmentation of multiple retinal pathologies (i.e., exudates, hemorrhages, and cotton-wool spots) using an encoder-decoder based fully convolutional neural network architecture. Moreover, the task of retinal pathology extraction has been modeled as a semantic segmentation framework which enables us to obtain pixel-level class labels. The proposed algorithm has been evaluated on the publicly available Messidor dataset and achieved state-of-the-art mean accuracies of 99.24% (exudates), 97.86% (hemorrhages), and 88.65% (cotton-wool spots). The developed approach may aid in further optimization of the pathology quantification module of the QUARTZ software, which was developed earlier by our research group.

Maryam Badar, M. Shahzad, M. M. Fraz
Improving the Resolution of Retinal OCT with Deep Learning

In medical imaging, high-resolution can be crucial for identifying pathologies and subtle changes in tissue structure. However, in many scenarios, achieving high image resolution can be limited by physics or available technology. In this paper, we aim to develop an automatic and fast approach to increasing the resolution of Optical Coherence Tomography (OCT) images using the data available, without any additional information or repeated scans. We adapt a fully connected deep learning network for the super-resolution task, allowing multi-scale similarity to be considered, and create a training and testing set of more than 40,000 sample patches from retinal OCT data. Testing our model, we achieve an impressive root mean squared error of 5.847 and peak signal-to-noise ratio (PSNR) of 33.28 dB averaged over 8282 samples. This represents a mean improvement in PSNR of 3.2 dB over nearest neighbour and 1.4 dB over bilinear interpolation. The results achieved so far improve over commonly used fast techniques for increasing resolution and are very encouraging for further development towards fast OCT super-resolution. The ability to increase quickly the resolution of OCT as well as other medical images has the potential to impact significantly on medical imaging at point of care, allowing significant small details to be revealed efficiently and accurately for inspection by clinicians and graders and facilitating earlier and more accurate diagnosis of disease.

Ying Xu, Bryan M. Williams, Baidaa Al-Bander, Zheping Yan, Yao-chun Shen, Yalin Zheng
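For reference, the PSNR figures quoted above follow the usual definition; a minimal sketch, assuming an 8-bit data range, is given below.

```python
import numpy as np

def psnr(reference, reconstruction, data_range=255.0):
    """Peak signal-to-noise ratio in dB between a reference patch and its
    super-resolved reconstruction."""
    diff = reference.astype(np.float64) - reconstruction.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

# np.sqrt(mse) gives the companion root-mean-squared-error figure reported above.
```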
An Improved U-Net Architecture for Simultaneous Arteriole and Venule Segmentation in Fundus Image

The segmentation and classification of retinal arterioles and venules play an important role in the diagnosis of various eye diseases and systemic diseases. The major challenges include complicated vessel structure, inhomogeneous illumination, and large background variation across subjects. In this study, we propose an improved fully convolutional network that segments arterioles and venules simultaneously and directly from the retinal image. To do so, we configured the fully convolutional network to accept a true color image as input and produce multiple labels as output. A domain-specific loss function is designed to improve performance. The proposed method was assessed extensively on public datasets and compared with state-of-the-art methods in the literature. The sensitivity and specificity of overall vessel segmentation on DRIVE are 0.870 and 0.980, with a misclassification rate of 23.7% and 9.8% for arterioles and venules, respectively. The proposed method outperforms the state-of-the-art methods and avoids the possible error propagation of the segmentation-then-classification strategy. The proposed method holds great potential for the diagnosis and screening of various eye diseases and systemic diseases.

Xiayu Xu, Tao Tan, Feng Xu

Applications of Medical Image Analysis

Frontmatter
Image Analysis in Light Sheet Fluorescence Microscopy Images of Transgenic Zebrafish Vascular Development

The zebrafish has become an established model to study vascular development and disease in vivo. However, despite it now being possible to acquire high-resolution data with state-of-the-art fluorescence microscopy, such as lightsheet microscopy, most data interpretation in pre-clinical neurovascular research relies on visual subjective judgement, rather than objective quantification. Therefore, we describe the development of an image analysis workflow towards the quantification and description of zebrafish neurovascular development. In this paper we focus on data acquisition by lightsheet fluorescence microscopy, data properties, image pre-processing, and vasculature segmentation, and propose future work to derive quantifications of zebrafish neurovasculature development.

Elisabeth Kugler, Timothy Chico, Paul Armitage
Color Correction of Baby Images for Cyanosis Detection

An accurate assessment of the bluish discoloration of cyanosis in a newborn baby’s skin is essential for doctors when making a comprehensive evaluation or a treatment decision. To date, midwives employ the APGAR score to note any occurrence of skin discoloration in newborn babies. However, there is still no known general method to automatically detect and quantify cyanotic skin color in a newborn baby, and a viable yardstick is absent for evaluation purposes in training sessions. Hence, this study proposes cyanosis skin detection in newborn images together with a new color correction algorithm based on the Macbeth ColorChecker. The proposed system has three steps: (i) selecting a cyanosis region of interest from the images, (ii) correcting color via an algorithm to calibrate the images, and (iii) generating a database of cyanosis CIE L*a*b* (CIELAB) values. The proposed method calculates the color error ΔE* by comparison against the actual color values of the Macbeth ColorChecker, before and after applying the color correction. The method allows modification of images with minimal effect on image quality, thus ensuring the viability of detecting and ascertaining CIELAB values for cyanotic skin. In future studies, the CIELAB values of cyanotic skin will be used to develop a high-fidelity baby manikin with cyanosis. This study is not intended for clinical use.

Nur Fatihah Binti Azmi, Frank Delbressine, Loe Feijs, Sidarto Bambang Oetomo
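A hedged sketch of the color-error computation described above: convert a patch to CIELAB and take the CIE76 color difference ΔE* against a known ColorChecker reference value. scikit-image is an assumed implementation choice, and averaging the patch before comparison is illustrative rather than the authors' procedure.

```python
import numpy as np
from skimage import color

def delta_e_cie76(rgb_patch, reference_lab):
    """Mean CIE76 color difference between an image patch and a reference
    CIELAB value (e.g. a Macbeth ColorChecker patch)."""
    lab = color.rgb2lab(rgb_patch)                 # expects RGB floats in [0, 1]
    mean_lab = lab.reshape(-1, 3).mean(axis=0)
    return float(np.linalg.norm(mean_lab - np.asarray(reference_lab)))  # ΔE*
```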
Deep Autoencoder Features for Registration of Histology Images

Registration of histology whole slide images of consecutive sections of a tissue block is mandatory for cross-slide analysis. Due to stain variations, a feature-based method for deriving the transformation maps for these images is considered a reasonable choice compared to methods which work on image intensities. Autoencoders have been employed in a wide variety of applications due to their potential for representation learning and transfer learning for deep architectures. Representations learned by autoencoders have been used for a number of challenging problems including classification and regression. In this study, we analyze deep autoencoder features for the purpose of registering histology images by maximizing the feature similarities between the fixed and moving images, and demonstrate the capability of autoencoder features for this task.

Ruqayya Awan, Nasir Rajpoot
Backmatter
Metadata
Title
Medical Image Understanding and Analysis
Edited by
Prof. Mark Nixon
Sasan Mahmoodi
Prof. Reyer Zwiggelaar
Copyright year
2018
Electronic ISBN
978-3-319-95921-4
Print ISBN
978-3-319-95920-7
DOI
https://doi.org/10.1007/978-3-319-95921-4