
About this Book

This book constitutes the refereed proceedings of the 6th International Workshop on Ophthalmic Medical Image Analysis, OMIA 2020, held in conjunction with the 23rd International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2020, in Lima, Peru, in October 2020. The workshop was held virtually due to the COVID-19 pandemic.

The 21 papers presented at OMIA 2020 were carefully reviewed and selected from 34 submissions. The papers cover various topics in the field of ophthalmic medical image analysis and challenges in terms of reliability and validation, number and type of conditions considered, multi-modal analysis (e.g., fundus, optical coherence tomography, scanning laser ophthalmoscopy), novel imaging technologies, and the effective transfer of advanced computer vision and machine learning technologies.



Bio-inspired Attentive Segmentation of Retinal OCT Imaging

Although optical coherence tomography (OCT) is widely used to assess ophthalmic pathologies, localization of intra-retinal boundaries suffers from erroneous segmentations due to image artifacts or topological abnormalities. While deep learning-based methods have been effectively applied in OCT imaging, accurate automated layer segmentation remains a challenging task, with the flexibility and precision of most methods being highly constrained. In this paper, we propose a novel method to segment all retinal layers, tailored to the bio-topological OCT geometry. In addition to traditional learning of shift-invariant features, our method learns in selected pixels horizontally and vertically, exploiting the orientation of the extracted features. In this way, the most discriminative retinal features are generated in a robust manner, while long-range pixel dependencies across spatial locations are efficiently captured. To validate the effectiveness and generalisation of our method, we implement three sets of networks based on different backbone models. Results on three independent studies show that our methodology consistently produces more accurate segmentations than state-of-the-art networks, and shows better precision and agreement with ground truth. Thus, our method not only improves segmentation, but also enhances the statistical power of clinical trials with layer thickness change outcomes.
Georgios Lazaridis, Moucheng Xu, Saman Sadeghi Afgeh, Giovanni Montesano, David Garway-Heath

DR Detection Using Optical Coherence Tomography Angiography (OCTA): A Transfer Learning Approach with Robustness Analysis

OCTA imaging is an emerging modality for the discovery of retinal biomarkers in systemic disease. Several studies have already shown the potential of deep learning algorithms in the medical domain. However, they generally require large amounts of manually graded images, which may not always be available. In our study, we aim to investigate whether transfer learning can help in identifying patient status from a relatively small dataset. Additionally, we explore whether data augmentation may help in improving our classification accuracy. Finally, for the first time, we propose a validation of our model on OCTA images acquired with a different device. OCTA scans from three different groups of participants were analysed: diabetic with and without retinopathy (DR and NoDR, respectively) and healthy subjects. We used the convolutional neural network architecture VGG16 and achieved \(83.29\%\) accuracy when classifying DR, NoDR and Controls. Our results demonstrate that transfer learning enables fairly accurate OCTA scan classification and that augmentation based on geometric transformations improves the classification accuracy further. Finally, we show that our model maintains consistent performance across OCTA imaging devices without any re-training.
Rayna Andreeva, Alessandro Fontanella, Ylenia Giarratano, Miguel O. Bernabeu
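The geometric augmentations mentioned in the abstract above can be sketched generically in NumPy. This is an illustrative example of flip/rotation augmentation, not the authors' implementation; the `augment` helper and the toy 4x4 "scan" are assumptions for demonstration:

```python
import numpy as np

def augment(img):
    """Yield simple geometric variants of a 2D scan: flips and 90-degree rotations."""
    yield img                      # original
    yield np.fliplr(img)           # horizontal flip
    yield np.flipud(img)           # vertical flip
    for k in (1, 2, 3):
        yield np.rot90(img, k)     # 90-, 180-, 270-degree rotations

scan = np.arange(16).reshape(4, 4)  # stand-in for an OCTA en-face image
variants = list(augment(scan))      # one original plus five transforms
```

In practice such variants would be generated on the fly during training to enlarge a small labeled dataset.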

What is the Optimal Attribution Method for Explainable Ophthalmic Disease Classification?

Deep learning methods for ophthalmic diagnosis have shown success for tasks like segmentation and classification but their implementation in the clinical setting is limited by the black-box nature of the algorithms. Very few studies have explored the explainability of deep learning in this domain. Attribution methods explain the decisions by assigning a relevance score to each input feature. Here, we present a comparative analysis of multiple attribution methods to explain the decisions of a convolutional neural network (CNN) in retinal disease classification from OCT images. This is the first such study to perform both quantitative and qualitative analyses. The former was performed using robustness, runtime, and sensitivity while the latter was done by a panel of eye care clinicians who rated the methods based on their correlation with diagnostic features. The study emphasizes the need for developing explainable models that address the end-user requirements, hence increasing the clinical acceptance of deep learning.
Amitojdeep Singh, Sourya Sengupta, Jothi Balaji J., Abdul Rasheed Mohammed, Ibrahim Faruq, Varadharajan Jayakumar, John Zelek, Vasudevan Lakshminarayanan
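Occlusion sensitivity is among the simplest attribution methods of the kind such a comparison covers: each input region is masked and the resulting drop in the model's score is recorded as that region's relevance. The sketch below is generic and not from the paper; the toy `model` scoring function is a stand-in for a trained CNN's class score:

```python
import numpy as np

def occlusion_map(model, img, patch=2):
    """Attribution by occlusion: relevance = score drop when a patch is zeroed out."""
    base = model(img)
    heat = np.zeros_like(img, dtype=float)
    for i in range(0, img.shape[0], patch):
        for j in range(0, img.shape[1], patch):
            occluded = img.copy()
            occluded[i:i+patch, j:j+patch] = 0
            heat[i:i+patch, j:j+patch] = base - model(occluded)
    return heat

# toy "model" whose score depends only on the top-left pixel
model = lambda x: float(x[0, 0])
img = np.full((4, 4), 5.0)
heat = occlusion_map(model, img)  # only the patch containing (0, 0) is relevant
```

The heatmap correctly assigns all relevance to the patch the toy model actually uses, which is the sanity check quantitative comparisons of attribution methods build on.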

DeSupGAN: Multi-scale Feature Averaging Generative Adversarial Network for Simultaneous De-blurring and Super-Resolution of Retinal Fundus Images

Image quality is of utmost importance for image-based clinical diagnosis. With the advent of cheaper, more affordable and lighter point-of-care imaging and telemedicine devices, the chances of building a better and more accessible healthcare system in developing countries become higher. However, these devices often produce images of lower quality. In this paper, a generative adversarial network-based retinal fundus quality enhancement network is proposed. This single network simultaneously takes into account two common image degradation problems, i.e., blurring and low spatial resolution. A novel convolutional multi-scale feature averaging block (MFAB) is proposed, which can extract feature maps with different kernel sizes and fuse them together. Both local and global feature fusion are used to stabilize the training of the wide network and to learn hierarchical global features. The results show that this network achieves better results in terms of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) metrics compared with other super-resolution and de-blurring methods. To the best of our knowledge, this is the first work that combines multiple degradation models simultaneously for retinal fundus image analysis.
Sourya Sengupta, Alexander Wong, Amitojdeep Singh, John Zelek, Vasudevan Lakshminarayanan
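The PSNR and SSIM metrics reported above have standard closed-form definitions. The NumPy sketch below is a generic implementation, not the authors' code; for brevity `ssim_global` computes SSIM over the whole image in a single window rather than the usual sliding-window average:

```python
import numpy as np

def psnr(ref, est, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and an estimate."""
    mse = np.mean((ref.astype(float) - est.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=255.0):
    """Single-window SSIM (no sliding window), per the standard formula."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2  # stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

ref = np.full((8, 8), 100.0)
est = ref + 1.0   # uniform error of 1 gray level -> MSE = 1
```

A uniform one-level error against a 255 peak gives PSNR of about 48.13 dB, and SSIM of an image with itself is exactly 1, which are the sanity checks for any such implementation.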

Encoder-Decoder Networks for Retinal Vessel Segmentation Using Large Multi-scale Patches

We propose an encoder-decoder framework for the segmentation of blood vessels in retinal images that relies on the extraction of large-scale patches at multiple image-scales during training. Experiments on three fundus image datasets demonstrate that this approach achieves state-of-the-art results and can be implemented using a simple and efficient fully-convolutional network with a parameter count of less than 0.8M. Furthermore, we show that this framework - called VLight - avoids overfitting to specific training images and generalizes well across different datasets, which makes it highly suitable for real-world applications where robustness, accuracy, and low inference time on high-resolution fundus images are required.
Björn Browatzki, Jörn-Philipp Lies, Christian Wallraven

Retinal Image Quality Assessment via Specific Structures Segmentation

The quality of a retinal image plays an essential role in ophthalmic disease diagnosis. However, most existing models neglect the potential correlation between retinal structure segmentation and retinal image quality assessment (RIQA), even though the segmentation result can provide regions of interest (ROIs) from which the RIQA model can extract more discriminative features. Therefore, in this paper, we incorporate the retinal structure segmentation process into the RIQA task and propose a structure-guided deep neural network (SG-Net) for better image quality assessment. The SG-Net consists of a vessel segmentation module, an optic disc segmentation module, and a quality assessment module. The vessel segmentation module and optic disc segmentation module generate segmentation results of important retinal structures (i.e., vessel and optic disc) that provide supplementary knowledge to support the quality assessment module. The quality assessment module is a three-branch classification network that extracts and fuses features to estimate image quality. We evaluated the proposed SG-Net on the Eye-Quality (EyeQ) database, and the experimental results demonstrate that it outperforms existing state-of-the-art methods. Our ablation studies also indicate that each structure segmentation module achieves an impressive performance gain on the EyeQ database.
Xinqiang Zhou, Yicheng Wu, Yong Xia

Cascaded Attention Guided Network for Retinal Vessel Segmentation

Segmentation of retinal vessels is of great importance in the diagnosis of eye-related diseases. Many learning-based methods have been proposed for this task and have achieved encouraging results. In this paper, we propose a novel end-to-end Cascaded Attention Guided Network (CAG-Net) that generates more accurate results for retinal vessel segmentation. CAG-Net is a two-step deep neural network containing two modules: a prediction module, which is responsible for generating an initial segmentation map, and a refinement module, which aims at improving that initial map. The final segmentation result is obtained by integrating the outputs of the two modules. Both modules adopt an Attention UNet++ (AU-Net++), which employs Attention guided Convolutional blocks (AC blocks) on the decoder, to boost performance. The experimental results show that the proposed network achieves state-of-the-art performance on three public retinal datasets: DRIVE, CHASE_DB1 and STARE.
Mingxing Li, Yueyi Zhang, Zhiwei Xiong, Dong Liu

Self-supervised Denoising via Diffeomorphic Template Estimation: Application to Optical Coherence Tomography

Optical Coherence Tomography (OCT) is pervasive in both the research and clinical practice of Ophthalmology. However, OCT images are strongly corrupted by noise, limiting their interpretation. Current OCT denoisers leverage assumptions on noise distributions or generate targets for training deep supervised denoisers via averaging of repeat acquisitions. However, recent self-supervised advances allow the training of deep denoising networks using only repeat acquisitions without clean targets as ground truth, reducing the burden of supervised learning. Despite the clear advantages of self-supervised methods, their use is precluded as OCT shows strong structural deformations even between sequential scans of the same subject due to involuntary eye motion. Further, direct nonlinear alignment of repeats induces correlation of the noise between images. In this paper, we propose a joint diffeomorphic template estimation and denoising framework which enables the use of self-supervised denoising for motion deformed repeat acquisitions, without empirically registering their noise realizations. Strong qualitative and quantitative improvements are achieved in denoising OCT images, with generic utility in any imaging modality amenable to multiple exposures.
Guillaume Gisbert, Neel Dey, Hiroshi Ishikawa, Joel Schuman, James Fishbaugh, Guido Gerig

Automated Detection of Diabetic Retinopathy from Smartphone Fundus Videos

Even though it is important to screen patients with diabetes for signs of diabetic retinopathy (DR), doing so comprehensively remains a practical challenge in low- and middle-income countries due to limited resources and financial constraints. Supervised machine learning has shown strong potential for automated DR detection, but has so far relied on photographs that show all relevant parts of the fundus, which require relatively costly imaging systems. We present the first approach that automatically detects DR from fundus videos that show different parts of the fundus at different times and that can be acquired with a low-cost smartphone-based fundus imaging system. Our novel image analysis pipeline consists of three main steps: detecting the lens with a circle Hough Transform, detecting informative frames using a Support Vector Machine, and detecting the disease itself with an attention-based multiple instance learning (MIL) CNN architecture. Our results support the feasibility of a smartphone-video-based approach.
Simon Mueller, Snezhana Karpova, Maximilian W. M. Wintergerst, Kaushik Murali, Mahesh P. Shanmugam, Robert P. Finger, Thomas Schultz
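The circle Hough Transform used for lens detection in the pipeline above votes for candidate circle centres from edge pixels. The fixed-radius NumPy sketch below is a generic illustration, not the authors' code; the synthetic circle and image size are assumptions:

```python
import numpy as np

def hough_circle(edges, radius, n_theta=100):
    """Vote for circle centres at a fixed radius; return the accumulator peak."""
    h, w = edges.shape
    acc = np.zeros((h, w))
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        # each edge pixel votes for all centres exactly `radius` away
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(np.argmax(acc), acc.shape)

# synthetic edge map: a circle of radius 10 centred at (30, 40)
edges = np.zeros((64, 64))
t = np.linspace(0, 2 * np.pi, 200)
edges[np.round(30 + 10 * np.sin(t)).astype(int),
      np.round(40 + 10 * np.cos(t)).astype(int)] = 1
cy, cx = hough_circle(edges, radius=10)
```

In a real pipeline the radius would also be searched over a range, and the edge map would come from an edge detector rather than a synthetic circle.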

Optic Disc, Cup and Fovea Detection from Retinal Images Using U-Net++ with EfficientNet Encoder

The accurate detection of retinal structures such as the optic disc (OD), cup, and fovea is crucial for the analysis of Age-related Macular Degeneration (AMD), Glaucoma, and other retinal conditions. Most segmentation methods rely on separate detection of these retinal structures, which makes a combined analysis for computer-aided ophthalmic diagnosis and screening challenging. To address this issue, the paper introduces an approach incorporating OD, cup, and fovea analysis together. The paper presents a novel method for the detection of the OD, cup, and fovea using a modified U-Net++ architecture with the EfficientNet-B4 model as a backbone. The features extracted by the EfficientNet are utilized via skip connections in U-Net++ for precise segmentation. Datasets from the ADAM and REFUGE challenges are used for evaluating the performance. The proposed method achieved Dice values of 94.74% and 95.73% for OD segmentation on the ADAM and REFUGE data, respectively. For fovea detection, an average Euclidean distance of 26.17 pixels is achieved on the ADAM dataset. The proposed method stood first in the OD detection and segmentation tasks of the ISBI ADAM 2020 challenge.
Ravi Kamble, Pranab Samanta, Nitin Singhal

Multi-level Light U-Net and Atrous Spatial Pyramid Pooling for Optic Disc Segmentation on Fundus Image

The optic disc (OD) is one of the main anatomical structures in retinal images, and reliable OD segmentation is very important for the automatic diagnosis of many fundus diseases. Previous OD segmentation studies based on stacked convolutional layers and pooling operations often neglect detailed spatial information. However, this information is vital for distinguishing the diverse profiles of the OD and the spatial distribution of vessels. In this paper, we propose a novel OD segmentation network built from two modules, namely, a light U-Net module and an atrous spatial pyramid pooling module. We first extract hierarchical features using ResNet-101 as a base network. The light U-Net module learns the intrinsic spatial information effectively and enhances the feature representation of low-level feature maps. The atrous convolution and spatial pyramid pooling module incorporates global spatial information into high-level semantic features. Finally, we integrate the spatial information by feature fusion to obtain the segmentation results. We evaluate the proposed method on two public retinal fundus image datasets (REFUGE and Drishti-GS). On the REFUGE dataset, our model achieves about a 2% improvement in mIoU and Dice over the next best method. On Drishti-GS, our method also outperforms the other state-of-the-art methods with 99.74% Dice and 93.26% mIoU.
Weixin Liu, Haijun Lei, Hai Xie, Benjian Zhao, Guanghui Yue, Baiying Lei
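Atrous (dilated) convolution, the core operation behind the spatial pyramid pooling module above, enlarges the receptive field by sampling the input with gaps rather than by adding parameters. The direct NumPy sketch below is illustrative only (real networks use framework primitives, not explicit loops):

```python
import numpy as np

def atrous_conv2d(x, k, rate):
    """Valid 2D convolution (correlation form) with dilation `rate`."""
    kh, kw = k.shape
    eh, ew = rate * (kh - 1) + 1, rate * (kw - 1) + 1  # effective kernel size
    H, W = x.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # sample the input with stride `rate` inside the effective window
            out[i, j] = (x[i:i + eh:rate, j:j + ew:rate] * k).sum()
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
out = atrous_conv2d(x, np.ones((2, 2)), rate=2)  # 2x2 kernel covers a 3x3 area
```

With `rate=1` this reduces to an ordinary convolution; larger rates capture wider context at the same parameter count, which is what the pyramid stacks at several rates.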

An Interactive Approach to Region of Interest Selection in Cytologic Analysis of Uveal Melanoma Based on Unsupervised Clustering

Facilitating quantitative analysis of cytology images of fine needle aspirates of uveal melanoma is important to confirm diagnosis and inform management decisions. Extracting high-quality regions of interest (ROIs) from cytology whole slide images is a critical first step. To the best of our knowledge, we describe the first unsupervised clustering-based method for fine needle aspiration cytology (FNAC) that automatically suggests high-quality ROIs. Our method is integrated in a graphical user interface that allows for interactive refinement of ROI suggestions to tailor analysis to any specific specimen. We show that the proposed approach suggests ROIs that are in very good agreement with expert-extracted regions and demonstrate that interactive refinement results in the extraction of more high-quality regions compared to purely algorithmic extraction alone.
Haomin Chen, T. Y. Alvin Liu, Zelia Correa, Mathias Unberath

Retinal OCT Denoising with Pseudo-Multimodal Fusion Network

Optical coherence tomography (OCT) is a prevalent imaging technique for the retina. However, it is affected by multiplicative speckle noise that can degrade the visibility of essential anatomical structures, including blood vessels and tissue layers. Although averaging repeated B-scan frames can significantly improve the signal-to-noise ratio (SNR), this requires a longer acquisition time, which can introduce motion artifacts and cause discomfort to patients. In this study, we propose a learning-based method that exploits information from the single-frame noisy B-scan and a pseudo-modality that is created with the aid of the self-fusion method. The pseudo-modality provides good SNR for layers that are barely perceptible in the noisy B-scan but can over-smooth fine features such as small vessels. By using a fusion network, desired features from each modality can be combined, and the weight of their contribution is adjustable. Evaluated by intensity-based and structural metrics, the results show that our method can effectively suppress speckle noise and enhance the contrast between retinal layers while the overall structure and small blood vessels are preserved. Compared to the single-modality network, our method improves the structural similarity with the low-noise B-scan from \(0.559\pm 0.033\) to \(0.576\pm 0.031\).
Dewei Hu, Joseph D. Malone, Yigit Atay, Yuankai K. Tao, Ipek Oguz
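The frame-averaging baseline that both OCT denoising abstracts refer to works because uncorrelated noise shrinks by roughly a factor of sqrt(N) when N repeats are averaged (which is also why it costs N times the acquisition time). A small NumPy demonstration, using a synthetic 1-D "A-line" profile rather than real OCT data:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, np.pi, 256))            # clean synthetic profile
frames = [signal + rng.normal(0.0, 1.0, 256)           # 100 noisy repeats,
          for _ in range(100)]                          # unit-variance noise

single_noise = np.std(frames[0] - signal)              # residual noise, ~1.0
avg = np.mean(frames, axis=0)                          # averaged "B-scan"
avg_noise = np.std(avg - signal)                       # ~1/sqrt(100) = 0.1
```

Note that real OCT speckle is multiplicative and repeats are motion-deformed, which is exactly why the papers above go beyond plain averaging.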

Deep-Learning-Based Estimation of 3D Optic-Nerve-Head Shape from 2D Color Fundus Photographs in Cases of Optic Disc Swelling

In cases of optic disc swelling, volumetric measurements and shape features are promising to evaluate the severity of the swelling and to differentiate the cause. However, previous studies have mostly focused on the use of volumetric spectral-domain optical coherence tomography (OCT), which is not always available in non-ophthalmic clinics and telemedical settings. In this work, we propose the use of a deep-learning-based approach (more specifically, an adaptation of a feature pyramid network, FPN) to obtain total-retinal-thickness (TRT) maps (as would normally be obtained from OCT) from more readily available 2D color fundus photographs. From only these thickness maps, we are able to compute both volumetric measures of swelling for quantification of the location/degree of swelling and 3D statistical shape measures for quantification of optic-nerve-head morphology. Evaluating our proposed approach (using nine-fold cross validation) on 102 paired color fundus photographs and OCT images (with the OCT acting as the ground truth) from subjects with various levels of optic disc swelling, we achieved significantly smaller errors and significantly larger linear correlations of both the volumetric measures and shape measures than that which would be obtained using a U-Net approach. The proposed method has great potential to make 3D ONH shape analysis possible even in situations where only color fundus photographs are available; these 3D shape measures can also be beneficial to help differentiate causes of optic disc swelling.
Mohammad Shafkat Islam, Jui-Kai Wang, Wenxiang Deng, Matthew J. Thurtell, Randy H. Kardon, Mona K. Garvin

Weakly Supervised Retinal Detachment Segmentation Using Deep Feature Propagation Learning in SD-OCT Images

Most automated segmentation approaches for quantitative assessment of sub-retinal fluid regions rely heavily on retinal anatomy knowledge (e.g. layer segmentation) and pixel-level annotation, which require excessive manual intervention and high labeling costs. In this paper, we propose a weakly supervised learning method for the quantitative analysis of lesion regions in spectral domain optical coherence tomography (SD-OCT) images. Specifically, we first obtain more accurate positioning through improved class activation mapping; second, in the feature propagation learning network, the multi-scale features learned by the slice-level classification are employed to expand the activation area and generate soft labels; finally, we use the generated soft labels to train a fully supervised network for more robust results. The proposed method is evaluated on subjects from a dataset with 23 volumes in cross-validation experiments. The experimental results demonstrate that the proposed method, utilizing only image-level labels, achieves encouraging segmentation accuracy comparable to fully supervised methods.
Tieqiao Wang, Sijie Niu, Jiwen Dong, Yuehui Chen
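Class activation mapping, the starting point of the pipeline above, localizes a class by taking the classifier-weighted sum of the final convolutional feature maps. The NumPy sketch below is the generic CAM formula, not the paper's improved variant; the toy feature tensor and weights are assumptions:

```python
import numpy as np

def class_activation_map(features, weights):
    """CAM: class-specific weighted sum of final conv feature maps.

    features: (K, H, W) activations; weights: (K,) classifier weights
    for the target class. Returns a map normalized to [0, 1]."""
    m = np.tensordot(weights, features, axes=1)  # (H, W) weighted sum
    m = np.maximum(m, 0)                         # keep positive evidence only
    return m / m.max() if m.max() > 0 else m

feats = np.zeros((3, 4, 4))
feats[1, 2, 2] = 7.0                  # one channel fires at location (2, 2)
w = np.array([0.0, 1.0, 0.0])         # the class weight selects that channel
cam = class_activation_map(feats, w)  # peak at (2, 2)
```

The resulting map is coarse (feature-map resolution), which is why weakly supervised methods like the one above refine and expand it before using it as a soft segmentation label.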

A Framework for the Discovery of Retinal Biomarkers in Optical Coherence Tomography Angiography (OCTA)

Recent studies have demonstrated the potential of OCTA retinal imaging for the discovery of biomarkers of vascular disease of the eye and other organs. Furthermore, advances in deep learning have made it possible to train algorithms for the automated detection of such biomarkers. However, two key limitations of this approach are the need for large numbers of labeled images to train the algorithms, which are often not met by the typical single-centre prospective studies in the literature, and the lack of interpretability of the features learned during training. In the current study, we developed a network analysis framework to characterise retinal vasculature where geometric and topological information are exploited to increase the performance of classifiers trained on tens of OCTA images. We demonstrate our approach in two different diseases with a retinal vascular footprint: diabetic retinopathy (DR) and chronic kidney disease (CKD). Our approach enables the discovery of previously unreported retinal vascular morphological differences in DR and CKD, and demonstrate the potential of OCTA for automated disease assessment.
Ylenia Giarratano, Alisa Pavel, Jie Lian, Rayna Andreeva, Alessandro Fontanella, Rik Sarkar, Laura J. Reid, Shareen Forbes, Dan Pugh, Tariq E. Farrah, Neeraj Dhaun, Baljean Dhillon, Tom MacGillivray, Miguel O. Bernabeu

An Automated Aggressive Posterior Retinopathy of Prematurity Diagnosis System by Squeeze and Excitation Hierarchical Bilinear Pooling Network

Aggressive Posterior Retinopathy of Prematurity (AP-ROP) is a special type of Retinopathy of Prematurity (ROP), one of the most common causes of childhood blindness in premature infants. AP-ROP is uncommon and atypical, progresses rapidly, and is prone to misdiagnosis. If it is not detected and treated in time, it rapidly progresses to the fifth stage of ROP, which easily causes retinal detachment and blindness. Early diagnosis of AP-ROP is therefore the key to reducing the blindness rate of the disease. In this paper, we apply computer-aided methods for early AP-ROP diagnosis. The proposed method utilizes a Squeeze and Excitation Hierarchical Bilinear Pooling (SE-HBP) network. Specifically, the SE module automatically weighs the importance of each channel, suppressing useless features and emphasizing useful ones to enhance the feature extraction capability of the network. The HBP module captures the relationships between feature layers so that the representation ability of the model is enhanced. Finally, to address the imbalance of the AP-ROP fundus image data, we use a focal loss function, which effectively alleviates the accuracy reduction caused by data imbalance. The experimental results show that our system can effectively identify AP-ROP from fundus images and has a potential application in assisting ophthalmologists in determining AP-ROP.
Rugang Zhang, Jinfeng Zhao, Guozhen Chen, Hai Xie, Guanghui Yue, Tianfu Wang, Guoming Zhang, Baiying Lei
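The focal loss mentioned above down-weights well-classified examples so that rare classes (here, AP-ROP cases) dominate the gradient. A NumPy sketch of the standard binary form; the probability values used below are illustrative, not from the paper:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss on predicted probability p for label y in {0, 1}.

    gamma > 0 suppresses easy examples; alpha balances the two classes.
    With gamma=0 and alpha=1 this reduces to plain cross-entropy."""
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    a = np.where(y == 1, alpha, 1 - alpha)   # class-balancing weight
    return -a * (1 - pt) ** gamma * np.log(pt)

# a confidently correct prediction is strongly down-weighted vs cross-entropy
ce = -np.log(0.9)
fl = focal_loss(np.array(0.9), np.array(1))
```

The `(1 - pt) ** gamma` factor is the key: for `pt = 0.9` it multiplies the loss by 0.01, so the abundant, easy negatives contribute little compared with hard minority-class examples.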

Weakly-Supervised Lesion-Aware and Consistency Regularization for Retinitis Pigmentosa Detection from Ultra-Widefield Images

Retinitis pigmentosa (RP) is one of the most common retinal diseases caused by gene defects and can lead to night blindness or complete blindness. Accurate diagnosis and lesion identification are important tasks for clinicians assessing fundus images. However, it remains challenging to design a method that performs diagnosis and lesion identification simultaneously, such that accurate lesion identification promotes diagnostic accuracy. In this paper, we propose a method based on weakly-supervised lesion-awareness and consistency regularization to detect RP and generate a lesion attention map (LAM). Specifically, we extend global average pooling to multiple scales and use multi-scale features to bridge the gap between semantic and spatial information, generating a more refined LAM. At the same time, we regularize the LAMs produced under different affine transforms of the same sample, forcing them toward more accurate predictions and reducing the overconfidence of the network, which encourages the LAM to cover the lesions. We use datasets from two centres to verify the effectiveness of the proposed model, training on one dataset and testing on the other to verify generalization performance. Experimental results show that our method achieves promising performance.
Benjian Zhao, Haijun Lei, Xianlu Zeng, Jiuwen Cao, Hai Xie, Guanghui Yue, Jiantao Wang, Guoming Zhang, Baiying Lei

A Conditional Generative Adversarial Network-Based Method for Eye Fundus Image Quality Enhancement

Eye fundus image quality represents a significant factor in ophthalmic screening. Usually, eye fundus image quality is affected by artefacts, brightness, and contrast, hindering ophthalmic diagnosis. This paper presents a conditional generative adversarial network-based method to enhance eye fundus image quality, trained using automatically generated synthetic bad-quality/good-quality image pairs. The method was evaluated on a public eye fundus dataset with three classes, good, usable and bad quality, annotated by specialists with a Kappa of 0.64. The proposed method enhanced image quality from the usable to the good class in 72.33% of images. Likewise, image quality was improved from the bad to the usable class in 56.21% of images, and from the bad to the good class in 29.49%.
Andrés D. Pérez, Oscar Perdomo, Hernán Rios, Francisco Rodríguez, Fabio A. González

Construction of Quantitative Indexes for Cataract Surgery Evaluation Based on Deep Learning

Objective and accurate evaluation of cataract surgery is necessary to improve the surgical skills of residents and shorten the learning curve. Our objective in this study is to construct quantifiable evaluation indicators using deep learning techniques to assist experts in performing the evaluation, and to verify the reliability of these indicators. We use a dataset of 98 videos of the incision, a critical step in cataract surgery. Following the visual characteristics of the incision evaluation indicators specified in the International Council of Ophthalmology's Ophthalmology Surgical Competency Assessment Rubric: phacoemulsification (ICO-OSCAR: phaco), we propose using ResNet and ResUnet to obtain the keratome tip position and the pupil shape, from which we construct quantifiable evaluation indexes such as the tool trajectory, the size and shape of the incision, and the scaling of the pupil. To account for microscope motion and eye movement caused by keratome pushing during video recording, we use the center of the pupil as a reference point to calculate the exact relative motion trajectory of the surgical instrument and the incision size, which can be used to directly evaluate surgical skill. The experiments show that the evaluation indexes we constructed have high accuracy and are highly consistent with the evaluations of an expert surgeon group.
Yuanyuan Gu, Yan Hu, Lei Mou, HuaYing Hao, Yitian Zhao, Ce Zheng, Jiang Liu

Hybrid Deep Learning Gaussian Process for Diabetic Retinopathy Diagnosis and Uncertainty Quantification

Diabetic Retinopathy (DR) is one of the microvascular complications of Diabetes Mellitus, which remains one of the leading causes of blindness worldwide. Computational models based on Convolutional Neural Networks represent the state of the art for the automatic detection of DR using eye fundus images. Most current work addresses this problem as a binary classification task. However, including grade estimation and quantification of prediction uncertainty can potentially increase the robustness of the model. In this paper, a hybrid Deep Learning-Gaussian process method for DR diagnosis and uncertainty quantification is presented. This method combines the representational power of deep learning with the ability of Gaussian process models to generalize from small datasets. The results show that uncertainty quantification in the predictions improves the interpretability of the method as a diagnostic support tool. The source code to replicate the experiments is publicly available at https://github.com/stoledoc/DLGP-DR-Diagnosis.
Santiago Toledo-Cortés, Melissa de la Pava, Oscar Perdomo, Fabio A. González
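The Gaussian process component above supplies the uncertainty estimates: the GP posterior gives both a predictive mean and a variance that grows away from the training data. The NumPy sketch below is textbook exact GP regression with an RBF kernel, not the paper's hybrid model; in the paper the inputs would be deep features rather than the 1-D toy points used here:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_predict(x_tr, y_tr, x_te, noise=1e-6):
    """Exact GP regression: posterior mean and variance at test points."""
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    Ks = rbf(x_te, x_tr)
    K_inv = np.linalg.inv(K)                     # fine for tiny datasets;
    mean = Ks @ K_inv @ y_tr                     # use Cholesky in practice
    var = np.diag(rbf(x_te, x_te) - Ks @ K_inv @ Ks.T)
    return mean, var

x_tr = np.array([0.0, 1.0, 2.0])
y_tr = np.sin(x_tr)
mean, var = gp_predict(x_tr, y_tr, np.array([1.0, 10.0]))
# variance is near zero at the training point and near the prior far away
```

It is this per-prediction variance that a diagnostic support tool can surface, flagging low-confidence cases for human review instead of returning only a grade.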

