
2018 | Book

Computational Pathology and Ophthalmic Medical Image Analysis

First International Workshop, COMPAY 2018, and 5th International Workshop, OMIA 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 16–20, 2018, Proceedings

Editors: Danail Stoyanov, Zeike Taylor, Francesco Ciompi, Yanwu Xu, Anne Martel, Lena Maier-Hein, Nasir Rajpoot, Jeroen van der Laak, Mitko Veta, Stephen McKenna, David Snead, Emanuele Trucco, Mona K. Garvin, Xinjian Chen, Hrvoje Bogunovic

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed joint proceedings of the First International Workshop on Computational Pathology, COMPAY 2018, and the 5th International Workshop on Ophthalmic Medical Image Analysis, OMIA 2018, held in conjunction with the 21st International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2018, in Granada, Spain, in September 2018.

The 19 full papers (out of 25 submissions) presented at COMPAY 2018 and the 21 full papers (out of 31 submissions) presented at OMIA 2018 were carefully reviewed and selected. The COMPAY papers focus on artificial intelligence and deep learning. The OMIA papers cover various topics in the field of ophthalmic image analysis.

Table of Contents

Frontmatter

First International Workshop on Computational Pathology, COMPAY 2018

Frontmatter
Improving Accuracy of Nuclei Segmentation by Reducing Histological Image Variability

Histological analysis of tissue biopsies is an essential component in the diagnosis of several diseases, including cancer. In the past, evaluation of tissue samples was done manually, but to improve efficiency and ensure consistent quality, there has been a push to evaluate them algorithmically. One important task in histological analysis is the segmentation and evaluation of nuclei. Nuclear morphology is important for understanding the grade and progression of disease. However, implementing automated methods at scale across histological datasets is challenging due to differences in stain, slide preparation and slide storage. This paper evaluates the impact of four stain normalization methods on the performance of nuclei segmentation algorithms. The goal is to highlight the critical role of stain normalization in improving the usability of learning-based models (such as convolutional neural networks (CNNs)) for this task. Using stain normalization, the baseline segmentation accuracy across distinct training and test datasets was improved by more than 50% of its base value, as measured by AUC and recall. We believe this is the first study to perform a comparative analysis of four stain normalization approaches (histogram equalization, Reinhard, Macenko, spline mapping) on the segmentation accuracy of CNNs.

Yusuf H. Roohani, Eric G. Kiss
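
Of the four normalization approaches compared above, the Reinhard method is the simplest to sketch: it matches the per-channel color statistics of a source image to those of a reference image. Below is a minimal illustration using scikit-image, working in CIELAB as many re-implementations do (the original formulation uses the lαβ space); `reinhard_normalize` is our illustrative name, not code from the paper.

```python
import numpy as np
from skimage import color

def reinhard_normalize(src_rgb, target_rgb):
    # Map both RGB images (floats in [0, 1]) to LAB, then shift/scale each
    # channel of the source so its mean and std match the target's.
    src_lab = color.rgb2lab(src_rgb)
    tgt_lab = color.rgb2lab(target_rgb)
    out = np.empty_like(src_lab)
    for c in range(3):
        s_mu, s_sd = src_lab[..., c].mean(), src_lab[..., c].std()
        t_mu, t_sd = tgt_lab[..., c].mean(), tgt_lab[..., c].std()
        out[..., c] = (src_lab[..., c] - s_mu) / (s_sd + 1e-8) * t_sd + t_mu
    return np.clip(color.lab2rgb(out), 0.0, 1.0)
```
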
Multi-resolution Networks for Semantic Segmentation in Whole Slide Images

Digital pathology provides an excellent opportunity for applying fully convolutional networks (FCNs) to tasks such as semantic segmentation of whole slide images (WSIs). However, standard FCNs face challenges in handling multiple resolutions, inherited from the pyramidal arrangement of WSIs. As a result, networks specifically designed to learn and aggregate information at different levels are desired. In this paper, we propose two novel multi-resolution networks based on the popular ‘U-Net’ architecture, which are evaluated on a benchmark dataset for binary semantic segmentation in WSIs. The proposed methods outperform the U-Net, demonstrating superior learning and generalization capabilities.

Feng Gu, Nikolay Burlutskiy, Mats Andersson, Lena Kajland Wilén
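
A common way to aggregate information across resolutions, in the spirit of the abstract above, is to encode a high-resolution patch and a wider, downsampled context patch in separate branches and fuse their features before the segmentation head. The PyTorch sketch below illustrates only this fusion idea; it is not the paper's architecture.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class TwoResolutionNet(nn.Module):
    """Toy two-branch segmenter: a high-resolution patch and a wider,
    downsampled context patch are encoded separately and fused."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc_hi = conv_block(3, 32)   # fine-detail branch
        self.enc_lo = conv_block(3, 32)   # context branch
        self.fuse = conv_block(64, 64)
        self.head = nn.Conv2d(64, n_classes, 1)

    def forward(self, patch_hi, patch_lo):
        f_hi = self.enc_hi(patch_hi)
        f_lo = self.enc_lo(patch_lo)
        # Upsample context features to the fine branch's spatial size.
        f_lo = nn.functional.interpolate(f_lo, size=f_hi.shape[-2:],
                                         mode='bilinear', align_corners=False)
        return self.head(self.fuse(torch.cat([f_hi, f_lo], dim=1)))

logits = TwoResolutionNet()(torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128))
```
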
Improving High Resolution Histology Image Classification with Deep Spatial Fusion Network

Histology imaging is an essential diagnostic method for finalizing the grade and stage of cancer in different tissues, especially for breast cancer diagnosis. Specialists often disagree on the final diagnosis of biopsy tissue due to its complex morphological variety. Although convolutional neural networks (CNNs) have advantages in extracting discriminative features for image classification, directly training a CNN on high-resolution histology images is currently computationally infeasible. Moreover, inconsistent discriminative features are often distributed over the whole histology image, which poses challenges for patch-based CNN classification methods. In this paper, we propose a novel architecture for automatic classification of high-resolution histology images. First, an adapted residual network is employed to explore hierarchical features without attenuation. Second, we develop a robust deep fusion network that utilizes the spatial relationship between patches and learns to correct the prediction bias generated by the inconsistent distribution of discriminative features. The proposed method is evaluated using 10-fold cross-validation on 400 high-resolution breast histology images with balanced labels and reports 95% accuracy on 4-class classification and 98.5% accuracy and 99.6% AUC on 2-class classification (carcinoma and non-carcinoma), which substantially outperforms previous methods and is close to pathologist performance.

Yongxiang Huang, Albert Chi-Shing Chung
Construction of a Generative Model of H&E Stained Pathology Images of Pancreas Tumors Conditioned by a Voxel Value of MRI Image

In this paper, we propose a method for constructing a multi-scale model of a pancreas tumor in a KrasLSL.G12D/+; p53R172H/+; PdxCretg/+ (KPC) mouse, a genetically engineered mouse model of pancreas tumors. The model represents the correlation between the value at each voxel in the MRI image of the tumor and the pathology image patches observed at the portion corresponding to the location of that voxel. The model is represented by a cascade of image generators trained as a Laplacian Pyramid of Generative Adversarial Networks (LAPGAN). When a voxel in a pancreas tumor region of an MRI image is selected, the cascade of generators outputs patches of the pathology images that can be observed at the location corresponding to the selected voxel. We trained the generators using an MRI image and a 3D pathology image, the latter of which was first reconstructed from a spatial series of 2D pathology images and then registered to the MRI image.

Tomoshige Shimomura, Mauricio Kugler, Tatsuya Yokota, Chika Iwamoto, Kenoki Ohuchida, Makoto Hashizume, Hidekata Hontani
Accurate 3D Reconstruction of a Whole Pancreatic Cancer Tumor from Pathology Images with Different Stains

When applied to 3D image reconstruction, conventional landmark-based registration methods tend to generate unnatural vertical structures due to inconsistencies between the employed model and the real tissue. This paper demonstrates a fully non-rigid image registration method for 3D image reconstruction which considers the spatial continuity and smoothness of each constituent part of the microstructures in the tissue. Corresponding landmarks are detected along the images, defining a set of trajectories, which are smoothed out in order to define a diffeomorphic mapping. The resulting reconstructed 3D image preserves the original tissue architecture, allowing the observation of fine details and structures.

Mauricio Kugler, Yushi Goto, Naoki Kawamura, Hirokazu Kobayashi, Tatsuya Yokota, Chika Iwamoto, Kenoki Ohuchida, Makoto Hashizume, Hidekata Hontani
Role of Task Complexity and Training in Crowdsourced Image Annotation

Accurate annotation of anatomical structures or pathological changes in microscopic images is an important task in computational pathology. Crowdsourcing holds promise to address this demand, but so far feasibility has only been shown for simple tasks, not for high-quality annotation of complex structures, which is often limited by a shortage of experts. Third-year medical students participated in solving two complex tasks: labeling of images and delineation of relevant image objects in breast cancer and kidney tissue. We evaluated their performance and addressed the requirements of task complexity and training phases. Our results show feasibility and high agreement between students and experts. The training phase improved the accuracy of image labeling.

Nadine S. Schaadt, Anne Grote, Germain Forestier, Cédric Wemmert, Friedrich Feuerhake
Capturing Global Spatial Context for Accurate Cell Classification in Skin Cancer Histology

The spectacular response observed in clinical trials of immunotherapy in patients with previously incurable melanoma, a highly aggressive form of skin cancer, calls for a better understanding of the cancer-immune interface. Computational pathology provides a unique opportunity to spatially dissect this interface on digitised pathological slides. Accurate cellular classification is key to ensure meaningful results, but is often challenging even with state-of-the-art machine learning and deep learning methods. We propose a hierarchical framework, which mirrors the way pathologists perceive tumour architecture and define tumour heterogeneity, to improve cell classification methods that rely solely on cell nuclei morphology. The SLIC superpixel algorithm was used to segment and classify tumour regions in low-resolution H&E-stained histological images of melanoma skin cancer to provide global context. Classification of superpixels into tumour, stroma, epidermis and lumen/white space yielded 97.7% training set accuracy and 95.7% testing set accuracy on 58 whole-tumour images of the TCGA melanoma dataset. The superpixel classification was projected down to high-resolution images to enhance the performance of a single-cell classifier based on cell nuclear morphological features, increasing its accuracy from 86.4% to 91.6%. Furthermore, a voting scheme was proposed to use global context as a priori biological knowledge, pushing the accuracy further to 92.8%. This study demonstrates how using global spatial context can accurately characterise the tumour microenvironment and allow us to extend significantly beyond single-cell morphological classification.

Konstantinos Zormpas-Petridis, Henrik Failmezger, Ioannis Roxanis, Matthew Blackledge, Yann Jamin, Yinyin Yuan
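
The SLIC superpixel step described above is available off the shelf in scikit-image. The sketch below segments a sample RGB image into superpixels and computes a simple per-superpixel feature (mean color) that a downstream region classifier could consume; the sample image and feature choice are ours.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.data import astronaut  # stand-in for an H&E tile

image = astronaut()                       # any RGB tile works here
segments = slic(image, n_segments=400, compactness=10, start_label=1)

# Per-superpixel mean color: simple features for a region classifier.
features = np.stack([image[segments == s].mean(axis=0)
                     for s in np.unique(segments)])
print(features.shape)                     # (n_superpixels, 3)
```
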
Exploiting Multiple Color Representations to Improve Colon Cancer Detection in Whole Slide H&E Stains

Currently, colon cancer diagnosis is based on manual assessment of tissue samples stained with hematoxylin and eosin (H&E). This is a high-volume, time-consuming, and subjective task that could be aided by automatic cancer detection. We propose an algorithm for automatic cancer detection within whole slide image (WSI) H&E stains using a multi-class colon tissue classifier based on features extracted from five different color representations. Approximately 32,000 tissue patches were extracted for the classifier from manual annotations of 9 representative colon tissue types in 74 WSI H&E stains. Colon tissue classifiers based on gray-level or color features were trained using leave-one-out forward selection. The best colon tissue classifier was based on color texture features, obtaining an average tissue precision-recall (PR) area under the curve (AUC) of 0.886 and a cancer PR-AUC of 0.950 on 20 validation WSI H&E stains.

Alex Skovsbo Jørgensen, Jonas Emborg, Rasmus Røge, Lasse Riis Østergaard
Leveraging Unlabeled Whole-Slide-Images for Mitosis Detection

Mitosis count is an important biomarker for prognosis of various cancers. At present, pathologists typically perform manual counting on a few selected regions of interest in breast whole-slide-images (WSIs) of patient biopsies. This task is very time-consuming, tedious and subjective. Automated mitosis detection methods have made great advances in recent years. However, these methods require exhaustive labeling of a large number of selected regions of interest, which is very expensive because expert pathologists are needed for reliable and accurate annotations. In this paper, we present a semi-supervised mitosis detection method designed to leverage a large number of unlabeled breast cancer WSIs. As a result, our method capitalizes on the growing number of digitized histology images, without relying on exhaustive annotations, subsequently improving mitosis detection. Our method first learns a mitosis detector from labeled data, uses this detector to mine additional mitosis samples from unlabeled WSIs, and then trains the final model using this larger and more diverse set of mitosis samples. The use of unlabeled data improves the F1-score by ∼5% compared to our best performing fully-supervised model on the TUPAC validation set. Our submission (single model) to the TUPAC challenge ranks highly on the leaderboard with an F1-score of 0.64.

Saad Ullah Akram, Talha Qaiser, Simon Graham, Juho Kannala, Janne Heikkilä, Nasir Rajpoot
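
The mine-and-retrain procedure described above can be summarized as generic self-training: keep only high-confidence predictions on unlabeled data as pseudo-labels and retrain. The sketch below conveys the pattern; `fit` and `predict_proba` are placeholder callables, not the authors' code.

```python
def self_train(model, labeled, unlabeled, fit, predict_proba,
               threshold=0.9, rounds=2):
    """Generic self-training loop: labeled is a list of (patch, label)
    pairs; unlabeled is a list of patches; fit/predict_proba are
    user-supplied training and scoring functions."""
    data = list(labeled)
    for _ in range(rounds):
        fit(model, data)                       # supervised training step
        mined = []
        for patch in unlabeled:
            p = predict_proba(model, patch)    # P(mitosis) for this patch
            if p > threshold:                  # confident positive
                mined.append((patch, 1))
            elif p < 1.0 - threshold:          # confident negative
                mined.append((patch, 0))
        data = list(labeled) + mined           # pseudo-labels join real ones
    return model
```
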
Evaluating Out-of-the-Box Methods for the Classification of Hematopoietic Cells in Images of Stained Bone Marrow

Compared to the analysis of blood cells in microscope images of peripheral blood, bone marrow images are much more challenging for automated cell classification: not only are the cells more densely distributed, there are also significantly more types of hematopoietic cells. So far, several attempts have been made using custom image features and prior knowledge in the form of cytoplasm and nuclei segmentations or a restricted number of cell types in peripheral blood. Instead of hand-crafting features and classification methods for bone marrow images, we compare several well-known methods on our more challenging dataset and show that while generic classical machine learning approaches cannot compete with specialized algorithms, even out-of-the-box deep learning methods already yield valuable results. Our findings indicate that automated analysis of bone marrow images is becoming possible with the advent of convolutional neural networks.

Philipp Gräbel, Martina Crysandt, Reinhild Herwartz, Melanie Hoffmann, Barbara M. Klinkhammer, Peter Boor, Tim H. Brümmendorf, Dorit Merhof
DeepCerv: Deep Neural Network for Segmentation Free Robust Cervical Cell Classification

Automated classification of cervical cancer cells has the potential to reduce the high mortality rates due to cervical cancer in developing countries. However, traditional algorithms depend on accurate segmentation of cells, which is itself an open problem. Often the algorithms are also not evaluated with consideration of the large inter-observer variability in ground-truth labels. We propose a new deep learning algorithm that does not depend on accurate segmentation, directly classifying image patches containing cells. We evaluate the proposed algorithm on the popular Herlev dataset and show that it achieves state-of-the-art accuracy while being extremely fast. The experimental results are also demonstrated on the AIndra dataset, collected by us, which also captures inter-observer variability.

O. U. Nirmal Jith, K. K. Harinarayanan, Srishti Gautam, Arnav Bhavsar, Anil K. Sao
Whole Slide Image Registration for the Study of Tumor Heterogeneity

Consecutive thin sections of tissue samples make it possible to study local variation in, e.g., protein expression and tumor heterogeneity by staining for a new protein in each section. In order to compare and correlate patterns of different proteins, the images have to be registered with high accuracy. The problem we want to solve is registration of gigapixel whole slide images (WSI). This presents three challenges: (i) images are very large; (ii) thin sections result in artifacts that make global affine registration prone to very large local errors; (iii) local affine registration is required to preserve correct tissue morphology (local size, shape and texture). In our approach we compare WSI registration based on automatic and manual feature selection on either the full image or natural sub-regions (as opposed to square tiles). Working with natural sub-regions in an interactive tool makes it possible to exclude regions containing scientifically irrelevant information. We also present a new way to visualize local registration quality via a Registration Confidence Map (RCM). With this method, intra-tumor heterogeneity and characteristics of the tumor microenvironment can be observed and quantified.

Leslie Solorzano, Gabriela M. Almeida, Bárbara Mesquita, Diana Martins, Carla Oliveira, Carolina Wählby
Modality Conversion from Pathological Image to Ultrasonic Image Using Convolutional Neural Network

Relation analysis between physical properties and the microstructure of human tissue has been widely conducted. In particular, the relationships between acoustic parameters and the microstructure of the human brain fall within the scope of our research. In order to analyze the relationship between physical properties and the microstructure of human tissue, accurate image registration is required. To observe the microstructure of tissue, the pathological (PT) image, an optical image capturing a thinly sliced specimen, has generally been used. However, the spatial resolution and image features of PT images are markedly different from those of other image modalities. This study proposes a modality conversion method from PT images to ultrasonic (US) images, including a downscaling process, using a convolutional neural network (CNN). The constructed conversion model estimates the US image from patches of the PT image. The proposed method was applied to PT images, and we confirmed by visual assessment that the converted PT images were similar to the US images. Image registration was then performed with converted PT images and US images measured from consecutive pathological specimens. Successful registration results were obtained for every pair of images.

Takashi Ohnishi, Shu Kashio, Takuya Ogawa, Kazuyo Ito, Stanislav S. Makhanov, Tadashi Yamaguchi, Yasuo Iwadate, Hideaki Haneishi
Structure Instance Segmentation in Renal Tissue: A Case Study on Tubular Immune Cell Detection

In renal transplantation pathology, the Banff grading system is used for diagnosis. We perform a case study on the detection of immune cells in tubules, with the goal of automating part of this grading. We propose a two-step approach, in which we first perform a structure segmentation and subsequently an immune cell detection. We used a dataset of renal allograft biopsies from the Radboud University Medical Centre, Nijmegen, the Netherlands. Our modified U-net reached a Dice score of 0.85 on the structure segmentation task. The F1-score of the immune cell detection was 0.33.

T. de Bel, M. Hermsen, G. Litjens, J. van der Laak
Cellular Community Detection for Tissue Phenotyping in Histology Images

A primary aim of detailed analysis of multi-gigapixel histology images is assisting pathologists in better cancer grading and prognostication. Several methods have been proposed in the literature for the analysis of histology images. However, these methods are often limited to the classification of two classes, i.e., tumor and stroma. Also, most existing methods are based on fully supervised learning and require a large amount of annotations, which are very difficult to obtain. To alleviate these challenges, we propose a novel community detection algorithm for the classification of tissue in whole-slide images (WSIs). The proposed algorithm uses a novel graph-based approach to the problem of detecting prevalent communities in a collection of histology images in a semi-supervised manner, resulting in the identification of six distinct tissue phenotypes in the multi-gigapixel image data. We formulate the problem of identifying distinct tissue phenotypes as one of finding network communities using the geodesic density gradient in the space of potential interactions between different cellular components. We show that prevalent communities found in this way represent distinct and biologically meaningful tissue phenotypes. Experiments on two independent Colorectal Cancer (CRC) datasets demonstrate that the proposed algorithm outperforms current state-of-the-art methods.

Sajid Javed, Muhammad Moazam Fraz, David Epstein, David Snead, Nasir M. Rajpoot
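
As a rough illustration of community detection over cellular graphs, the sketch below builds a proximity graph on cell centroids with networkx and recovers groups by modularity maximization. The paper's geodesic-density-gradient formulation is more involved; this only conveys the general idea.

```python
import networkx as nx
from networkx.algorithms import community

# Toy cell centroids forming two spatial clusters.
cells = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
G = nx.Graph()
G.add_nodes_from(range(len(cells)))
for i in range(len(cells)):
    for j in range(i + 1, len(cells)):
        dist = ((cells[i][0] - cells[j][0]) ** 2 +
                (cells[i][1] - cells[j][1]) ** 2) ** 0.5
        if dist < 3:                        # connect spatially close cells
            G.add_edge(i, j)

groups = community.greedy_modularity_communities(G)
print([sorted(g) for g in groups])          # two spatial communities
```
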
Automatic Detection of Tumor Budding in Colorectal Carcinoma with Deep Learning

Colorectal cancer patients would benefit from valid, reliable and efficient detection of Tumor Budding (TB), as this is a proven prognostic biomarker. We explored the application of deep learning techniques to detect TB in Hematoxylin and Eosin (H&E) stained slides, and used convolutional neural networks to classify image patches as containing tumor buds, tumor glands or background. As a reference standard for training, we stained slides with both H&E and immunohistochemistry (IHC), where one pathologist first annotated buds in IHC and then transferred the obtained annotations to the corresponding H&E image. We show the effectiveness of the proposed three-class approach, which substantially reduces the number of false positives, especially when combined with a hard-negative mining technique. Finally, we report the results of an observer study aimed at investigating the correlation between pathologists in detecting TB in IHC and H&E.

John-Melle Bokhorst, Lucia Rijstenberg, Danny Goudkade, Iris Nagtegaal, Jeroen van der Laak, Francesco Ciompi
Significance of Hyperparameter Optimization for Metastasis Detection in Breast Histology Images

Breast cancer (BC) is the second leading cause of cancer deaths in women, and BC metastasis accounts for the majority of these deaths. Early detection of breast cancer metastasis in sentinel lymph nodes is of high importance for the prediction and management of breast cancer progression. In this paper, we propose a novel deep learning framework for automatic detection of micro- and macro-metastasis in multi-gigapixel whole-slide images (WSIs) of sentinel lymph nodes. One of our main contributions is to incorporate a Bayesian solution for the optimization of the network’s hyperparameters on one of the largest histology datasets, which leads to a 5% gain in overall patch-based accuracy. Furthermore, we present an ensemble of two multi-resolution deep learning networks, one capturing cell-level information and the other incorporating contextual information to make the final prediction. Finally, we propose a two-step thresholding method to post-process the output of the ensemble network. We evaluate our proposed method on the CAMELYON16 dataset, where we outperformed “human experts” and achieved the second best performance compared to 32 other competing methods.

Navid Alemi Koohbanani, Talha Qaiser, Muhammad Shaban, Jevgenij Gamper, Nasir Rajpoot
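
The Bayesian hyperparameter optimization mentioned above can be illustrated with a Gaussian-process surrogate, e.g. via scikit-optimize. In the sketch below, `train_and_validate` is a toy stand-in for a full training run returning a validation error; the search space is ours, not the paper's.

```python
from skopt import gp_minimize
from skopt.space import Real

def train_and_validate(lr, dropout):
    # Placeholder for a real training run; returns a validation error
    # with a minimum near lr=1e-3, dropout=0.3 for illustration.
    return (lr - 1e-3) ** 2 + (dropout - 0.3) ** 2

result = gp_minimize(
    lambda params: train_and_validate(*params),
    dimensions=[Real(1e-5, 1e-1, prior='log-uniform'),  # learning rate
                Real(0.0, 0.8)],                        # dropout probability
    n_calls=20, random_state=0)
print(result.x, result.fun)     # best hyperparameters and their error
```
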
Image Magnification Regression Using DenseNet for Exploiting Histopathology Open Access Content

Open access medical content databases such as PubMed Central and TCGA offer possibilities to obtain large amounts of images for training deep learning models. Nevertheless, accurate labels are not available for large-scale medical datasets, which makes their use challenging. Predicting unknown magnification levels and standardizing staining procedures are necessary preprocessing steps for using such data in retrieval and classification tasks. In this paper, a CNN-based regression approach to learn the magnification of histopathology images is presented, comparing two deep learning architectures tailored to regress the magnification. The performance of the models is compared on a dataset of 34,441 breast cancer patches at several magnifications. The best model, a fusion of DenseNet-based CNNs, obtained a kappa score of 0.888. The methods are also evaluated qualitatively on a set of images from biomedical journals and TCGA prostate patches.

Sebastian Otálora, Manfredo Atzori, Vincent Andrearczyk, Henning Müller
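
Regressing magnification with a DenseNet amounts to swapping the classification head for a single linear output and training with a regression loss. A minimal PyTorch/torchvision sketch, assuming an MSE loss on patch-level magnification targets (the paper's exact head and loss may differ):

```python
import torch
import torch.nn as nn
from torchvision import models

# Replace DenseNet's classifier with a one-unit regression head.
net = models.densenet121(weights=None)
net.classifier = nn.Linear(net.classifier.in_features, 1)

patches = torch.rand(4, 3, 224, 224)            # a batch of histology patches
target_mag = torch.tensor([[5.], [10.], [20.], [40.]])
loss = nn.functional.mse_loss(net(patches), target_mag)
loss.backward()
```
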
Uncertainty Driven Pooling Network for Microvessel Segmentation in Routine Histology Images

Lymphovascular invasion (LVI) and tumor angiogenesis are correlated with metastasis, cancer recurrence and poor patient survival. In most cases, LVI quantification and angiogenic analysis are based on microvessel segmentation and density estimation in immunohistochemically (IHC) stained tissues. However, in routine H&E-stained images, microvessels display a high level of heterogeneity in size, shape, morphology and texture, which makes microvessel segmentation a non-trivial task. Manual delineation of microvessels for biomarker analysis is labor-intensive, time-consuming, irreproducible and can suffer from subjectivity among pathologists. Moreover, it is often beneficial to account for the uncertainty of a prediction when making a diagnosis. To address these challenges, we propose a framework for microvessel segmentation in H&E-stained histology images. The framework extends DeepLabV3+ with a custom loss function based on an improved Dice coefficient and incorporates an uncertainty prediction mechanism. The proposed method uses an aligned Xception model, followed by atrous spatial pyramid pooling for feature extraction at multiple scales. This architecture counters the challenge of segmenting blood vessels of varying morphological appearance. To incorporate uncertainty, random transformations are introduced at test time for a superior segmentation result and simultaneous generation of an uncertainty map highlighting ambiguous regions. The method is evaluated using 1167 images of size 512 × 512 pixels, extracted from 13 WSIs of oral squamous cell carcinoma (OSCC) tissue at 20x magnification. The proposed network achieves state-of-the-art performance compared to current semantic segmentation deep neural networks (FCN-8, U-Net, SegNet and DeepLabV3+).

M. M. Fraz, M. Shaban, S. Graham, S. A. Khurram, N. M. Rajpoot
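
The improved Dice-based loss is specific to the paper, but the generic soft Dice loss for binary segmentation that it builds on is short enough to show. A minimal PyTorch sketch:

```python
import torch

def soft_dice_loss(logits, target, eps=1.0):
    """Soft Dice loss for binary segmentation; a generic formulation,
    not necessarily the paper's exact variant. logits and target share
    shape (..., H, W); target holds 0/1 ground-truth masks."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(-2, -1))
    union = prob.sum(dim=(-2, -1)) + target.sum(dim=(-2, -1))
    return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()
```
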

5th International Workshop on Ophthalmic Medical Image Analysis, OMIA 2018

Frontmatter
Ocular Structures Segmentation from Multi-sequences MRI Using 3D Unet with Fully Connected CRFs

The use of 3D Magnetic Resonance Imaging (MRI) has attracted growing attention for the diagnosis and treatment planning of intraocular cancers. Precise segmentation of such tumors is highly important to characterize them and their progression, and to define a treatment plan. Along this line, automatic and effective segmentation of tumors and healthy eye anatomy would be of great value. The major challenge, however, lies in the disease variability encountered over different populations, often imaged under different acquisition conditions, and the high heterogeneity of tumors in location, size and appearance. In this work, we consider retinoblastoma, the most common eye cancer in children. To provide automated segmentations of relevant structures, a multi-sequence MRI dataset of 72 subjects is introduced, collected across different clinical sites with different magnetic field strengths (3T and 1.5T), with healthy and pathological subjects (children and adults). Using this data, we present a framework to segment both healthy and pathological eye structures. In particular, we make use of a 3D U-Net CNN with four encoder and decoder layers to produce conditional probabilities of different eye structures. These are further refined using a Conditional Random Field with Gaussian kernels to maximize label agreement between similar voxels in multi-sequence MRIs. We show experimentally that our approach brings state-of-the-art performance for several relevant eye structures and that these results are promising for use in clinical practice.

Huu-Giao Nguyen, Alessia Pica, Philippe Maeder, Ann Schalenbourg, Marta Peroni, Jan Hrbacek, Damien C. Weber, Meritxell Bach Cuadra, Raphael Sznitman
Classification of Findings with Localized Lesions in Fundoscopic Images Using a Regionally Guided CNN

Fundoscopic images are often investigated by ophthalmologists to spot abnormal lesions and make diagnoses. Recent successes of convolutional neural networks have been confined to diagnoses of a few diseases, without proper localization of lesions. In this paper, we propose an efficient annotation method for localizing lesions and a CNN architecture that can classify an individual finding and localize the lesions at the same time. We also introduce a new loss function that guides the network to learn meaningful patterns under the guidance of the regional annotations. In experiments, we demonstrate that our network performed better than a widely used baseline network and that the guidance loss helps achieve an AUROC up to 4.1% higher and superior localization capability.

Jaemin Son, Woong Bae, Sangkeun Kim, Sang Jun Park, Kyu-Hwan Jung
Segmentation of Corneal Nerves Using a U-Net-Based Convolutional Neural Network

In-vivo confocal microscopy provides information on the corneal health state. In particular, images taken at a specific depth allow the visualization of the nerve fibers. The correlation between corneal nerve morphology and pathology has been shown several times. However, the difficulty of obtaining an accurate tracing of the nerves (manually or automatically) and the execution times limit the widespread use of this technique in clinical practice. In this work, we propose a U-Net-based Convolutional Neural Network (CNN) for the fully automatic tracing of corneal nerves. The proposed CNN’s architecture consists of a contracting path, which captures nerve descriptors, and a symmetric expanding path, which enables precise nerve localization. The proposed algorithm provides nerve segmentation with sensitivity higher than 95% with respect to manual tracing, and improves on the results obtained by a previous fully automatic technique. Furthermore, the corneal nerve representation obtained by the proposed CNN improves automatic image classification between healthy subjects and subjects with diabetic neuropathy, demonstrating the potential of CNNs in identifying clinically useful features.

Alessia Colonna, Fabio Scarpa, Alfredo Ruggeri
Automatic Pigmentation Grading of the Trabecular Meshwork in Gonioscopic Images

Gonioscopy is essential for a correct diagnosis of glaucoma. However, it requires a skilled examiner and may provide subjective information. The assessment of the iridocorneal angle by means of currently available modalities (such as ultrasound biomicroscopy or anterior segment OCT) gives anatomical quantification of angle structures without providing any chromatic information. In this study, image analysis was carried out on pictures acquired by the prototype of a recently developed gonioscopy device, capable of automatically acquiring 360° images of the iridocorneal angle, to verify the feasibility of performing automatic Scheie’s pigmentation grading of the trabecular meshwork.

Andrea De Giusti, Simone Pajaro, Masaki Tanito
Large Receptive Field Fully Convolutional Network for Semantic Segmentation of Retinal Vasculature in Fundus Images

Analysis of retinal vasculature morphology from fundus images, using measures such as the arterio-venous ratio, is a promising lead for the early diagnosis of cardiovascular risks. The accuracy of these measures relies on the robustness of vessel segmentation and classification. However, algorithms based on prior topological knowledge have difficulty modelling the abnormal structure of pathological vasculatures, while patch-trained Fully Convolutional Neural Networks (FCNNs) struggle to learn the wide and extensive topology of the vessels because of their narrow receptive fields. This paper proposes a novel Fully Convolutional Neural Network architecture capable of processing high-resolution images through a large receptive field at minimal memory and computational cost. First, a single-branch CNN is trained on whole images at low resolution to learn large-scale features. Then, this branch is incorporated into a standard encoder/decoder FCNN: its large-scale features are concatenated with those computed by the central layer of the FCNN. Finally, the whole network architecture is trained on high-resolution patches. During this last phase, the FCNN benefits from the large-scale features while the low-resolution branch parameters are fine-tuned. This architecture was evaluated on the publicly available retinal fundus database DRIVE. The trained network achieves an accuracy of 96.1% in segmenting the full retinal vessels and improves artery/vein classification by 5% compared to a basic U-Net.

Gabriel Lepetit-Aimon, Renaud Duval, Farida Cheriet
Explaining Convolutional Neural Networks for Area Estimation of Choroidal Neovascularization via Genetic Programming

Choroidal neovascularization (CNV), which causes deterioration of vision, is characterized by the growth of abnormal blood vessels in the choroidal layer. Estimating the area of CNV is important for proper treatment and prognosis of the disease. As a noninvasive imaging modality, optical coherence tomography (OCT) has become an important tool for assisting diagnosis. As the number of acquired OCT volumes increases, automating OCT image analysis is becoming increasingly relevant. In this paper, we train a convolutional neural network (CNN) on the raw images to estimate the area of CNV directly. Experimental results show that the performance of this simple approach is very competitive with segmentation-based methods. To explain why the CNN performs well, we try to find the function being approximated by the CNN. Thus, for each layer in the CNN, we propose using a surrogate model, intended to have the same input and output as the layer while having an explicit mathematical expression, to fit the function approximated by that layer. Genetic programming (GP), which can automatically evolve both the structure and the parameters of a mathematical model from data, is employed to derive the model. Preliminary results show that using GP to derive the surrogate models is a potential way to find the function being approximated by the CNN.

Yibiao Rong, Kai Yu, Dehui Xiang, Weifang Zhu, Zhun Fan, Xinjian Chen
Joint Segmentation and Uncertainty Visualization of Retinal Layers in Optical Coherence Tomography Images Using Bayesian Deep Learning

Optical coherence tomography (OCT) is commonly used to analyze retinal layers for the assessment of ocular diseases. In this paper, we propose a method for retinal layer segmentation and quantification of uncertainty based on Bayesian deep learning. Our method not only performs end-to-end segmentation of retinal layers, but also gives a pixel-wise uncertainty measure of the segmentation output. The generated uncertainty map can be used to identify erroneously segmented image regions, which is useful in downstream analysis. We have validated our method on a dataset of 1487 images obtained from 15 subjects (OCT volumes) and compared it against state-of-the-art segmentation algorithms that do not take uncertainty into account. The proposed uncertainty-based segmentation method results in comparable or improved performance, and most importantly is more robust against noise.

Suman Sedai, Bhavna Antony, Dwarikanath Mahapatra, Rahil Garnavi
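
A common way to obtain such pixel-wise uncertainty in Bayesian deep learning is Monte Carlo dropout: keep dropout stochastic at test time and aggregate several forward passes. The sketch below shows the general recipe, not the authors' implementation:

```python
import torch

def mc_dropout_predict(model, x, n_samples=20):
    """Average several stochastic forward passes; the per-pixel variance
    across passes serves as an uncertainty map. Note: model.train() also
    switches BatchNorm to train mode; production code should toggle only
    the dropout layers."""
    model.train()                          # keeps nn.Dropout stochastic
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1)
                             for _ in range(n_samples)])
    return probs.mean(0), probs.var(0)     # prediction, uncertainty map
```
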
cGAN-Based Lacquer Cracks Segmentation in ICGA Image

The increasing prevalence of high myopia has raised concern worldwide. In high myopia, myopic macular degeneration (MMD) is a major cause of vision impairment, and lacquer cracks (LCs) are one of the main signs of MMD. Since the development of LCs can reflect the severity of MMD, it is important and meaningful to segment them. Indocyanine green angiography (ICGA) has been used for visualizing LCs and is considered superior to fluorescein angiography (FA). However, LC segmentation is difficult due to image blurring and confusion between LCs and the background. In this paper, we propose an automatic LC segmentation method based on an improved conditional generative adversarial network (cGAN). To apply the cGAN to ICGA images, a Dice loss function is added to improve segmentation accuracy. Experiments on ICGA images of high myopia demonstrated that the proposed method can successfully segment LCs with the trained model and achieves better performance than other popular networks.

Hongjiu Jiang, Yuhui Ma, Weifang Zhu, Ying Fan, Yihong Hua, Qiuying Chen, Xinjian Chen
Localizing Optic Disc and Cup for Glaucoma Screening via Deep Object Detection Networks

Segmentation of the optic disc (OD) and optic cup (OC) from a retinal fundus image plays an important role in glaucoma screening and diagnosis. However, most existing methods focus only on pixel-level representations and ignore high-level representations. In this work, we consider a high-level concept, namely an objectness constraint, for fundus structure analysis. Specifically, we introduce a deep object detection network to localize the OD and OC simultaneously. The end-to-end architecture guarantees learning of more discriminative representations. Moreover, data from a similar domain can further contribute to our algorithm through transfer learning techniques. Experimental results show that our method achieves state-of-the-art OD and OC segmentation/localization results on the ORIGA dataset. Moreover, the proposed method also obtains satisfactory glaucoma screening performance with the calculated vertical cup-to-disc ratio (CDR).

Xu Sun, Yanwu Xu, Mingkui Tan, Huazhu Fu, Wei Zhao, Tianyuan You, Jiang Liu
Fundus Image Quality-Guided Diabetic Retinopathy Grading

With the increasing use of fundus cameras, large numbers of retinal images are available. However, many images are of poor quality because of uneven illumination, occlusion, and other factors. Image quality significantly affects the performance of automated diabetic retinopathy (DR) screening systems. Unlike previous methods, which did not address the unbalanced distribution, we propose a weighted softmax with center loss to handle the unbalanced data distribution in medical images. Furthermore, we propose a Fundus Image Quality (FIQ)-guided DR grading method based on multi-task deep learning, which is the first work using fundus image quality to help grade DR. Experimental results on the Kaggle dataset show that fundus image quality greatly impacts DR grading. By considering the influence of quality, the experimental results validate the effectiveness of our proposed method. All code and fundus image quality labels for the Kaggle DR dataset are released at https://github.com/ClancyZhou/kaggle_DR_image_quality_miccai2018_workshop.

Kang Zhou, Zaiwang Gu, Annan Li, Jun Cheng, Shenghua Gao, Jiang Liu
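
The class-weighting half of the proposed loss can be sketched with PyTorch's built-in weighted cross-entropy (the center-loss term is omitted here); the class counts below are made up for illustration:

```python
import torch
import torch.nn as nn

# Inverse-frequency weights for an unbalanced 5-grade DR dataset:
# rarer grades receive larger weights. Counts are illustrative only.
counts = torch.tensor([25000., 2400., 5200., 870., 700.])
weights = counts.sum() / (len(counts) * counts)
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 5)                  # a batch of grade predictions
labels = torch.randint(0, 5, (8,))
loss = criterion(logits, labels)
```
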
DeepDisc: Optic Disc Segmentation Based on Atrous Convolution and Spatial Pyramid Pooling

Optic disc (OD) segmentation is an important step for fundus-image-based disease diagnosis. In this paper, we propose a novel and effective method called DeepDisc to segment the OD. It mainly contains two components: atrous convolution and spatial pyramid pooling. The atrous convolution adjusts the filter's field of view and controls the resolution of features. In addition, the spatial pyramid pooling module probes convolutional features at multiple scales and encodes global context information. Both are used to further boost OD segmentation performance. Finally, we demonstrate that our DeepDisc system achieves state-of-the-art disc segmentation performance on the ORIGA and Messidor datasets without any post-processing strategies, such as a dense conditional random field.

Zaiwang Gu, Peng Liu, Kang Zhou, Yuming Jiang, Haoyu Mao, Jun Cheng, Jiang Liu
Large-Scale Left and Right Eye Classification in Retinal Images

Left and right eye information is an important prior for automatic retinal fundus image analysis. However, such information is often unavailable or even wrongly provided in many datasets. In this work, we spent a considerable amount of effort manually annotating left and right eyes in the large-scale Kaggle Diabetic Retinopathy dataset of 88,702 fundus images, using our own online labeling system. With the newly annotated large-scale dataset, we also train convolutional-neural-network-based classification models to discriminate left and right eyes in fundus images. As experimentally evaluated on the Kaggle and ORIGA datasets, our trained deep learning models achieve classification accuracies of 99.90% and 99.23%, respectively, which can be considered for practical use.

Peng Liu, Zaiwang Gu, Fan Liu, Yuming Jiang, Shanshan Jiang, Haoyu Mao, Jun Cheng, Lixin Duan, Jiang Liu
Automatic Segmentation of Cortex and Nucleus in Anterior Segment OCT Images

We propose a pipeline for automatically segmenting the cortex and nucleus in 360-degree anterior segment optical coherence tomography (AS-OCT) images. The proposed pipeline consists of a U-shaped network followed by a shape template. The U-shaped network predicts a mask for the cortex and nucleus. However, the boundary between cortex and nucleus is weak, so the predicted boundary is irregular and does not satisfy the physiological structure of the nucleus. To address this problem, in a second step we design a shape template according to the physiological structure of the nucleus to refine the boundary. Our method integrates both appearance and structure information. Accuracy is measured by the normalized mean squared error (NMSE) between the ground-truth and predicted lines. We achieve NMSEs of 7.09/7.94 for the nucleus top/bottom boundaries and 2.49/2.43 for the cortex top/bottom boundaries.

Pengshuai Yin, Mingkui Tan, Huaqing Min, Yanwu Xu, Guanghui Xu, Qingyao Wu, Yunfei Tong, Higashita Risa, Jiang Liu
Local Estimation of the Degree of Optic Disc Swelling from Color Fundus Photography

Swelling of the optic nerve head (ONH) is most accurately assessed quantitatively via volumetric measures using 3D spectral-domain optical coherence tomography (SD-OCT). However, SD-OCT is not always available, as its use is primarily limited to specialized eye clinics rather than primary care or telemedical settings. Thus, there is still a need for severity assessment using more widely available 2D fundus photographs. In this work, we propose a machine-learning approach to locally estimate the degree of optic disc swelling at each pixel location from only a 2D fundus photograph as the input. For training purposes, a thickness map of the swelling (reflecting the distance between the top and bottom surfaces of the ONH and surrounding retina) as measured from SD-OCT at each pixel location was used as the ground truth. A random forest was trained to output each thickness value from local fundus features pertaining to textural and color information. Eighty-eight image pairs of ONH-centered SD-OCT and registered fundus photographs from different subjects with optic disc swelling were used for training and evaluating the model in a leave-one-subject-out fashion. Comparing the thickness map from the proposed method to the ground truth via SD-OCT, a root-mean-square (RMS) error of 1.66 mm³ for the entire ONH region was achieved, and Spearman’s correlation coefficient was R = 0.73. Regional volumes for the nasal, temporal, inferior, superior, and peripapillary regions had RMS errors of 0.64 mm³, 0.61 mm³, 0.74 mm³, 0.71 mm³, and 1.30 mm³, respectively, suggesting that there is enough evidence in a single color fundus photograph to estimate local swelling information.

Samuel S. Johnson, Jui-Kai Wang, Mohammad Shafkat Islam, Matthew J. Thurtell, Randy H. Kardon, Mona K. Garvin
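
Pixel-wise thickness estimation with a random forest reduces to tabular regression: one row of local features per pixel, one thickness target. A scikit-learn sketch with random stand-in data (the paper's actual features are textural and color descriptors around each fundus pixel):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.random((5000, 12))          # 12 local features per pixel
y_train = rng.random(5000)                # ground-truth thickness per pixel

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
thickness_pred = model.predict(rng.random((100, 12)))
```
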
Visual Field Based Automatic Diagnosis of Glaucoma Using Deep Convolutional Neural Network

In order to develop a deep neural network able to differentiate glaucoma from non-glaucoma patients based on visual field (VF) test results, we collected VF tests from 3 different ophthalmic centers in mainland China. Visual fields (VFs) obtained by both Humphrey 30-2 and 24-2 tests were collected. Reliability criteria were established as fixation losses of less than 2/13 and false positive and false negative rates of less than 15%. All VFs from both eyes of a single patient were assigned to either the training or the validation set to avoid data leakage. We split a total of 4012 PD images from 1352 patients into two sets: 3712 for training and another 300 for validation. On the validation set of 300 VFs, the CNN achieves an accuracy of 0.876, with specificity and sensitivity of 0.826 and 0.932, respectively. For ophthalmologists, the average accuracies are 0.607, 0.585 and 0.626 for resident ophthalmologists, attending ophthalmologists and glaucoma experts, respectively. AGIS and GSS2 achieved accuracies of 0.459 and 0.523, respectively. Three traditional machine learning algorithms, namely support vector machine (SVM), random forest (RF), and k-nearest neighbor (k-NN), were also implemented and evaluated, achieving accuracies of 0.670, 0.644, and 0.591, respectively. In glaucoma diagnosis based on VFs, our CNN-based algorithm achieved higher accuracy than human ophthalmologists and traditional rules (AGIS and GSS2). It will be a powerful tool to distinguish glaucoma from non-glaucoma VFs, and may help screening and diagnosis of glaucoma in the future.

Fei Li, Zhe Wang, Guoxiang Qu, Yu Qiao, Xiulan Zhang
Towards Standardization of Retinal Vascular Measurements: On the Effect of Image Centering

Within the general framework of consistent and reproducible morphometric measurements of the retinal vasculature in fundus images, we present a quantitative pilot study of the changes in measurements commonly used in retinal biomarker studies (e.g., caliber-related measures, tortuosity and the fractal dimension of the vascular network) induced by centering fundus image acquisition on either the optic disc or the macula. To the best of our knowledge, no such study has been reported so far. Analyzing 149 parameters computed from 80 retinal images (20 subjects, right and left eye, optic-disc and macula centered), we find strong variations and limited concordance between images of the two types. Although analysis of larger cohorts is obviously necessary, our results strengthen the need for a structured investigation into the uncertainty of retinal vasculature measurements, ideally in the framework of an international debate on standardization.

Muthu Rama Krishnan Mookiah, Sarah McGrory, Stephen Hogg, Jackie Price, Rachel Forster, Thomas J. MacGillivray, Emanuele Trucco
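
Of the measures listed above, the fractal dimension of a binary vessel mask is easy to make concrete via box counting: count occupied boxes at several scales and fit the log-log slope. A generic numpy sketch, not the study's exact estimator:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate fractal dimension of a binary mask: N(s) ~ s^(-D), so D
    is the slope of log N(s) versus log(1/s)."""
    counts = []
    for s in sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())  # occupied boxes
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

mask = np.zeros((64, 64), bool)
mask[32, :] = True                          # a straight line: dimension ~1
print(round(box_counting_dimension(mask), 2))
```
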
Feasibility Study of Subfoveal Choroidal Thickness Changes in Spectral-Domain Optical Coherence Tomography Measurements of Macular Telangiectasia Type 2

Macular Telangiectasia Type 2 (MacTel2) is a disease of the retina leading to a gradual deterioration of central vision. At the onset of the disease, good visual acuity is present, which declines as the disease progresses, causing reading difficulties. In this paper, we present new insights into the vascular changes in MacTel2. We investigated whether MacTel2 progression correlates with changes in the thickness of the choroid. For this purpose, we apply a recently published registration-based approach to detect deviations in the choroid on a dataset of 45 MacTel2 patients. Between 2012 and 2016, these subjects and a control group were measured twice, at variable intervals, at Moorfields Eye Hospital as part of the MacTel Natural History Observation and Registry Study. Our results show that in the MacTel2 group the thickness of the choroid increased, while in the control group a decrease was noted. Manual expert segmentation and an automated state-of-the-art method were used to validate the results.

Tiziano Ronchetti, Peter Maloca, Emanuel Ramos de Carvalho, Tjebo F. C. Heeren, Konstantinos Balaskas, Adnan Tufail, Catherine Egan, Mali Okada, Selim Orgül, Christoph Jud, Philippe C. Cattin
Segmentation of Retinal Layers in OCT Images of the Mouse Eye Utilizing Polarization Contrast

Retinal layer segmentation is crucial for the interpretation and visualization of optical coherence tomography (OCT) image data. In this work, we utilized a polarization-sensitive OCT system to enhance the segmentation of the retinal pigment epithelium in the mouse retina, together with the segmentation of five additional retinal surfaces. Retinal layers are segmented on a per-tomogram basis using a graph-based approach in the reflectivity images as well as the cross-polarization images. Thickness changes in the superoxide dismutase 1 (SOD1) knock-out mouse model were assessed and compared to a control group, revealing a thinning of the total and outer retina. Pathological drusen-like lesions were identified in the outer retina. Incorporating additional image contrast offered by the functional extensions of OCT into traditional layer segmentation approaches proved to be valuable. The proposed approach might be extended with other contrast channels, such as OCT angiography.

Marco Augustin, Danielle J. Harper, Conrad W. Merkle, Christoph K. Hitzenberger, Bernhard Baumann
Glaucoma Diagnosis from Eye Fundus Images Based on Deep Morphometric Feature Estimation

Glaucoma is an ophthalmic disease related to damage in the optic nerve that is asymptomatic in its early stages. Left untreated, it can lead to vision limitation and blindness. Eye fundus images have been widely accepted by medical personnel for examining the morphology and texture of the optic nerve head and the physiologic cup, but glaucoma diagnosis is still subjective and without clear consensus among experts. This paper presents a multi-stage deep learning model for glaucoma diagnosis based on a curriculum learning strategy, in which a model is sequentially trained to solve incrementally difficult tasks. Our proposed model includes the following stages: segmentation of the optic disc and physiological cup, prediction of morphometric features from the segmentations, and prediction of disease level (healthy, suspicious and glaucoma). The experimental evaluation shows that our proposed method outperforms conventional convolutional deep learning models from the state of the art on the RIM-ONE-v1 and DRISHTI-GS1 datasets, with an accuracy of 89.4% and an AUC of 0.82, respectively.

Oscar Perdomo, Vincent Andrearczyk, Fabrice Meriaudeau, Henning Müller, Fabio A. González
2D Modeling and Correction of Fan-Beam Scan Geometry in OCT

A-scans in OCT are acquired in a fan-beam pattern, but saved and displayed in a rectangular space. This results in an inaccurate representation of the scan geometry of OCT images, which introduces systematic distortions that can greatly impact shape- and morphology-based analysis of the retina. Correction of OCT scan geometry has proven to be a challenging task due to a lack of information regarding the true angle of entry of each A-scan through the pupil and the location of the A-scan nodal points. In this work, we present a preliminary model that solves for the OCT scan geometry in a restricted 2D setting. Our approach uses two repeat scans with corresponding landmarks to estimate the parameters necessary to correctly restore the fan-beam geometry of the input B-scans. Our results show accurate estimation of the ground-truth geometry from simulated B-scans, and we found qualitatively promising results when the correction was applied to longitudinal B-scans of the same subject. We establish a robust 2D framework that can potentially be expanded to full 3D estimation and correction of OCT scan geometries.

Min Chen, James C. Gee, Jerry L. Prince, Geoffrey K. Aguirre
A Bottom-Up Saliency Estimation Approach for Neonatal Retinal Images

Retinopathy of Prematurity (ROP) is a potentially blinding disease occurring primarily in prematurely born neonates. Staging or classification of ROP is mainly dependent on the presence of a ridge or demarcation line and its distance from the optic disc. Thus, computer-aided diagnosis of ROP requires a method to automatically detect the ridge. To this end, a new bottom-up saliency estimation method for neonatal retinal images is proposed. The method first obtains a depth map of the neonatal retinal image via an image restoration scheme based on a physical model. The obtained depth map is then converted into a saliency map. The image is further processed to even out illumination and contrast variations and border artifacts. Next, two additional saliency maps are estimated from the processed image using gradient and appearance cues. The saliency maps are then fused using pixel-wise multiplication and addition operators. The final saliency map facilitates the detection of the demarcation line and is qualitatively shown to be more suitable for neonatal retinal images than state-of-the-art saliency estimation techniques. The method could thus serve as a tool for improved and faster diagnosis. Additionally, we explore the usefulness of the saliency maps for classifying ROP into four stages.

Sharath M. Shankaranarayana, Keerthi Ram, Anand Vinekar, Kaushik Mitra, Mohanasankar Sivaprakasam
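
The fusion step described above combines the three cue maps with pixel-wise multiplication and addition. A small numpy sketch; the normalization and exact combination rule are our guesses at a reasonable instantiation:

```python
import numpy as np

def fuse_saliency(depth_map, gradient_map, appearance_map):
    """Fuse three saliency cues: multiplicative agreement between depth
    and gradient cues, plus the appearance cue additively."""
    def norm(m):
        return (m - m.min()) / (m.max() - m.min() + 1e-8)
    d, g, a = norm(depth_map), norm(gradient_map), norm(appearance_map)
    return norm(d * g + a)

fused = fuse_saliency(*np.random.default_rng(0).random((3, 64, 64)))
```
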
Backmatter
Metadata
Title
Computational Pathology and Ophthalmic Medical Image Analysis
Editors
Danail Stoyanov
Zeike Taylor
Francesco Ciompi
Yanwu Xu
Anne Martel
Lena Maier-Hein
Nasir Rajpoot
Jeroen van der Laak
Mitko Veta
Stephen McKenna
David Snead
Emanuele Trucco
Mona K. Garvin
Xinjian Chen
Hrvoje Bogunovic
Copyright Year
2018
Electronic ISBN
978-3-030-00949-6
Print ISBN
978-3-030-00948-9
DOI
https://doi.org/10.1007/978-3-030-00949-6
