
2017 | Book

Fetal, Infant and Ophthalmic Medical Image Analysis

International Workshop, FIFI 2017, and 4th International Workshop, OMIA 2017, Held in Conjunction with MICCAI 2017, Québec City, QC, Canada, September 14, Proceedings

Edited by: M. Jorge Cardoso, Tal Arbel, Andrew Melbourne, Hrvoje Bogunovic, Pim Moeskops, Xinjian Chen, Ernst Schwartz, Mona Garvin, Emma Robinson, Emanuele Trucco, Michael Ebner, Yanwu Xu, Antonios Makropoulos, Adrien Desjardins, Tom Vercauteren

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed joint proceedings of the International Workshop on Fetal and Infant Image Analysis, FIFI 2017, and the 4th International Workshop on Ophthalmic Medical Image Analysis, OMIA 2017, held in conjunction with the 20th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2017, in Québec City, QC, Canada, in September 2017.

The 8 full papers presented at FIFI 2017 and the 20 full papers presented at OMIA 2017 were carefully reviewed and selected. The FIFI papers feature research on advanced image analysis approaches focused on the analysis of growth and development in the fetal, infant and paediatric period. The OMIA papers cover various topics in the field of ophthalmic image analysis.

Table of Contents

Frontmatter

International Workshop on Fetal and Infant Image Analysis, FIFI 2017

Frontmatter
Template-Free Estimation of Intracranial Volume: A Preterm Birth Animal Model Study
Abstract
Accurate estimation of intracranial volume (ICV) is key in neuro-imaging-based volumetric studies, since estimation errors directly propagate to the ICV-corrected volumes used in subsequent analyses. ICV estimation through registration to a reference atlas has the advantage of not requiring manually delineated data, and can thus be applied to populations for which labeled data might be nonexistent or scarce, e.g., preterm born animal models. However, such a method is not robust, since the estimation depends on a single registration. Here we present a groupwise, template-free ICV estimation method that overcomes this limitation. The method quickly aligns pairs of images using linear registration at low resolution, and then computes the most likely ICV values using a Bayesian framework. The algorithm is robust against single registration errors, which are corrected by registrations to other subjects. The algorithm was evaluated on a pilot dataset of rabbit brain MRI (\(N=7\)), in which the estimated ICV was highly correlated (\(\rho =0.99\)) with ground truth values derived from manual delineations. Additional regression and discrimination experiments with human hippocampal volume on a subset of ADNI (\(N=150\)) yielded reduced sample sizes and increased classification accuracy, compared with using a reference atlas.
Juan Eugenio Iglesias, Sebastiano Ferraris, Marc Modat, Willy Gsell, Jan Deprest, Johannes L. van der Merwe, Tom Vercauteren
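The groupwise idea above — many pairwise registrations outvoting any single failure — can be illustrated with a least-squares sketch. This is a simplification of the paper's Bayesian framework; the function name, the pairwise log-ratio input and the single-anchor scaling are assumptions of this example:

```python
import numpy as np

def groupwise_icv(log_scale, anchor_idx, anchor_icv):
    """Estimate ICVs from pairwise log volume-scale estimates.

    log_scale[i, j] ~ log(icv_i) - log(icv_j), e.g. from pairwise affine
    registrations.  Solving a least-squares problem over all ordered pairs
    lets the other registrations outvote a single bad one; the global
    scale is then fixed using one known ICV (the 'anchor').
    """
    n = log_scale.shape[0]
    rows, rhs = [], []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = np.zeros(n)
            r[i], r[j] = 1.0, -1.0
            rows.append(r)
            rhs.append(log_scale[i, j])
    A, b = np.array(rows), np.array(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)   # log-ICVs up to a constant
    x += np.log(anchor_icv) - x[anchor_idx]     # fix the global scale
    return np.exp(x)
```

With consistent ratios the estimates are exact; with a corrupted entry, the redundancy across pairs pulls the solution back toward the truth.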
Assessing Reorganisation of Functional Connectivity in the Infant Brain
Abstract
As maturation of neural networks continues throughout childhood, brain lesions insulting immature networks have different impact on brain function than lesions obtained after full network maturation. Thus, longitudinal studies and analysis of spatial and temporal brain signal correlations are a key component to get a deeper understanding of individual maturation processes, their interaction and their link to cognition. Here, we assess the connectivity pattern deviation of developing resting state networks after ischaemic stroke of children between 7 and 17 years. We propose a method to derive a reorganisational score to detect target regions for overtaking affected functional regions within a stroke location. The evaluation is performed using rs-fMRI data of 16 control subjects and 16 stroke patients. The developing functional connectivity affected by ischaemic stroke exhibits significant differences to the control cohort. This suggests an influence of stroke location and developmental stage on regenerating processes and the reorganisational patterns.
Roxane Licandro, Karl-Heinz Nenning, Ernst Schwartz, Kathrin Kollndorfer, Lisa Bartha-Doering, Hesheng Liu, Georg Langs
Fetal Skull Segmentation in 3D Ultrasound via Structured Geodesic Random Forest
Abstract
Ultrasound is the primary imaging method for prenatal screening and diagnosis of fetal anomalies. Thanks to its non-invasive and non-ionizing properties, ultrasound allows quick, safe and detailed evaluation of the unborn baby, including estimation of the gestational age and assessment of brain and cranium development. However, the accuracy of traditional 2D fetal biometrics depends on operator expertise and subjectivity in 2D plane finding and manual marking. 3D ultrasound has the potential to reduce this operator dependence. In this paper, we propose a new random forest-based segmentation framework for fetal 3D ultrasound volumes, able to efficiently integrate semantic and structural information in the classification process. We introduce a new semantic feature space that encodes spatial context via the generalized geodesic distance transform. Unlike alternative auto-context approaches, this new set of features is efficiently integrated into the same forest using contextual trees. Finally, we use a new structured label space, as an alternative to traditional atomic class labels, able to capture the morphological variability of the target organ. Here, we show the potential of this new general framework by segmenting the skull in 3D fetal ultrasound volumes, significantly outperforming alternative random forest-based approaches.
Juan J. Cerrolaza, Ozan Oktay, Alberto Gomez, Jacqueline Matthew, Caroline Knight, Bernhard Kainz, Daniel Rueckert
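A generalized geodesic distance transform of the kind the semantic features build on can be sketched with a raster-scan approximation (in the spirit of Toivanen's two-pass scheme); the function and parameter names here are illustrative, not the paper's implementation:

```python
import numpy as np

def geodesic_distance(img, seeds, lam=1.0, n_iter=2):
    """Raster-scan approximation of a generalized geodesic distance
    transform on a 2-D image.  `seeds` is a boolean mask of zero-distance
    pixels; each step costs its spatial length plus `lam` times the
    intensity difference, so distances grow faster across edges."""
    d = np.where(seeds, 0.0, np.inf)
    h, w = img.shape
    offsets_fwd = [(-1, 0), (0, -1), (-1, -1), (-1, 1)]
    for _ in range(n_iter):
        # one forward and one backward raster pass per iteration
        for offs, rng_y, rng_x in (
            (offsets_fwd, range(h), range(w)),
            ([(-dy, -dx) for dy, dx in offsets_fwd],
             range(h - 1, -1, -1), range(w - 1, -1, -1)),
        ):
            for y in rng_y:
                for x in rng_x:
                    for dy, dx in offs:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            step = np.hypot(dy, dx) + lam * abs(img[y, x] - img[ny, nx])
                            if d[ny, nx] + step < d[y, x]:
                                d[y, x] = d[ny, nx] + step
    return d
```

On a uniform image this reduces to a plain spatial distance; with `lam > 0` the map becomes image-adaptive, which is what lets such features encode spatial context.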
Fast Registration of 3D Fetal Ultrasound Images Using Learned Corresponding Salient Points
Abstract
We propose a fast feature-based rigid registration framework with a novel feature saliency detection technique. The method works by automatically classifying candidate image points as salient or non-salient using a support vector machine trained on points which have previously driven successful registrations. Resulting candidate salient points are used for symmetric matching based on local descriptor similarity and followed by RANSAC outlier rejection to obtain the final transform. The proposed registration framework was applied to 3D real-time fetal ultrasound images, thus covering the entire fetal anatomy for extended FoV imaging. Our method was applied to data from 5 patients, and compared to a conventional saliency point detection method (SIFT) in terms of computational time, quality of the point detection and registration accuracy. Our method achieved similar accuracy and similar saliency detection quality in \(<5\%\) of the detection time, showing promising capabilities towards real-time whole-body fetal ultrasound imaging.
Alberto Gomez, Kanwal Bhatia, Sarjana Tharin, James Housden, Nicolas Toussaint, Julia A. Schnabel
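The matching-plus-RANSAC stage can be sketched as a Kabsch least-squares rigid fit inside a RANSAC loop over point correspondences. This is a generic sketch — the names, iteration count and inlier threshold are assumptions, not the paper's code:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src -> dst."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: force det(R) = +1
    D = np.diag([1.0] * (len(U) - 1) + [np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

def ransac_rigid(src, dst, n_iter=200, thresh=0.5, seed=0):
    """RANSAC over point matches: fit minimal samples, keep the transform
    with the most inliers, then refit on all inliers."""
    rng = np.random.default_rng(seed)
    best_inl = None
    for _ in range(n_iter):
        idx = rng.choice(len(src), 3, replace=False)
        R, t = rigid_fit(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inl = err < thresh
        if best_inl is None or inl.sum() > best_inl.sum():
            best_inl = inl
    return rigid_fit(src[best_inl], dst[best_inl])
```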
Automatic Segmentation of the Intracranial Volume in Fetal MR Images
Abstract
MR images of the fetus allow non-invasive analysis of the fetal brain. Quantitative analysis of fetal brain development requires automatic brain tissue segmentation that is typically preceded by segmentation of the intracranial volume (ICV). This is challenging because fetal MR images visualize the whole moving fetus and in addition partially visualize the maternal body. This paper presents an automatic method for segmentation of the ICV in fetal MR images. The method employs a multi-scale convolutional neural network in 2D slices to enable learning spatial information from larger context as well as detailed local information. The method is developed and evaluated with 30 fetal T2-weighted MRI scans (average age \(33.2\pm 1.2\) weeks postmenstrual age). The set contains 10 scans acquired in axial, 10 in coronal and 10 in sagittal imaging planes. A reference standard was defined in all images by manual annotation of the intracranial volume in 10 equidistantly distributed slices. The automatic analysis was performed by training and testing the network using scans acquired in the representative imaging plane as well as combining the training data from all imaging planes. On average, the automatic method achieved Dice coefficients of 0.90 for the axial images, 0.90 for the coronal images and 0.92 for the sagittal images. Combining the training sets resulted in average Dice coefficients of 0.91 for the axial images, 0.95 for the coronal images, and 0.92 for the sagittal images. The results demonstrate that the evaluated method achieved good performance in extracting ICV in fetal MR scans regardless of the imaging plane.
N. Khalili, P. Moeskops, N. H. P. Claessens, S. Scherpenzeel, E. Turk, R. de Heus, M. J. N. L. Benders, M. A. Viergever, J. P. W. Pluim, I. Išgum
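The Dice coefficient used for evaluation here (and in several other papers in this volume) is simply twice the overlap divided by the total foreground; a minimal version:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```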
Abdomen Segmentation in 3D Fetal Ultrasound Using CNN-powered Deformable Models
Abstract
In this paper, voxel probability maps generated by a novel fovea fully convolutional network architecture (FovFCN) are used as additional feature images in the context of a segmentation approach based on deformable shape models. The method is applied to fetal 3D ultrasound image data aiming at a segmentation of the abdominal outline of the fetal torso. This is of interest, e.g., for measuring the fetal abdominal circumference, a standard biometric measure in prenatal screening. The method is trained on 126 3D ultrasound images and tested on 30 additional scans. The results show that the approach can successfully combine the advantages of FovFCNs and deformable shape models in the context of challenging image data, such as given by fetal ultrasound. With a mean error of 2.24 mm, the combination of model-based segmentation and neural networks outperforms the separate approaches.
Alexander Schmidt-Richberg, Tom Brosch, Nicole Schadewaldt, Tobias Klinder, Angelo Cavallaro, Ibtisam Salim, David Roundhill, Aris Papageorghiou, Cristian Lorenz
Multi-organ Detection in 3D Fetal Ultrasound with Machine Learning
Abstract
3D ultrasound (US) is a promising technique to perform automatic extraction of standard planes for fetal anatomy assessment. This requires prior organ localization, which is difficult to obtain with direct learning approaches because of the high variability in fetus size and orientation in US volumes. In this paper, we propose a methodology to overcome this spatial variability issue by scaling and automatically aligning volumes in a common 3D reference coordinate system. This preprocessing allows the organ detection algorithm to learn features that encode only the anatomical variability while discarding the fetal pose. All steps of the approach are evaluated on 126 manually annotated volumes, with an overall mean localization error of 11.9 mm, showing the feasibility of multi-organ detection in 3D fetal US with machine learning.
Caroline Raynaud, Cybèle Ciofolo-Veit, Thierry Lefèvre, Roberto Ardon, Angelo Cavallaro, Ibtisam Salim, Aris Papageorghiou, Laurence Rouet
Robust Regression of Brain Maturation from 3D Fetal Neurosonography Using CRNs
Abstract
We propose a fully three-dimensional Convolutional Regression Network (CRN) for the task of predicting fetal brain maturation from 3D ultrasound (US) data. Anatomical development is modelled as the sonographic patterns visible in the brain at a given gestational age, which are aggregated by the model into a single value: the brain maturation (BM) score. These patterns are learned from 589 3D fetal volumes, and the model is applied to 3D US images of 146 fetal subjects acquired at multiple, ethnically diverse sites, spanning an age range of 18 to 36 gestational weeks. Achieving a mean error of 7.7 days between ground-truth and estimated maturational scores, our method outperforms the current state-of-the-art for automated BM estimation from 3D US images.
Ana I. L. Namburete, Weidi Xie, J. Alison Noble

4th International Workshop on Ophthalmic Medical Image Analysis, OMIA 2017

Frontmatter
Segmentation of Retinal Blood Vessels Using Dictionary Learning Techniques
Abstract
In this paper, we aim to demonstrate the effectiveness of dictionary learning techniques on the task of retinal blood vessel segmentation. We present three different methods based on dictionary learning and sparse coding that reach state-of-the-art results. Our methods are tested on two well-known, publicly available datasets: DRIVE and STARE. The methods are compared to many state-of-the-art approaches and turn out to be very promising.
Taibou Birgui Sekou, Moncef Hidane, Julien Olivier, Hubert Cardot
Detecting Early Choroidal Changes Using Piecewise Rigid Image Registration and Eye-Shape Adherent Regularization
Abstract
Recognizing significant temporal changes in the thickness of the choroid and retina at an early stage is a crucial factor in the prevention and treatment of ocular diseases such as myopia or glaucoma. Such changes are expected to be among the first indicators of pathological manifestations and are commonly dealt with using segmentation-based approaches. However, segmenting the choroid is challenging due to low contrast, loss of signal and the presence of artifacts in optical coherence tomography (OCT) images. In this paper, we present a novel method for early detection of choroidal changes based on piecewise rigid image registration. In order to adhere to the eye’s natural shape, the regularization enforces the local homogeneity of the transformations in the nasal-temporal (x-) and superior-inferior (y-) direction by penalizing their radial differences. We restrict our transformation model to the anterior-posterior (z-) direction, as we focus on juvenile myopia, which correlates with thickness changes in the choroid rather than with structural alterations. First, the precision of the method was tested on an OCT scan-rescan data set of 62 healthy Asian children, ages 7 to 13, from a population with a high prevalence of myopia. Furthermore, the accuracy of the method in recognizing synthetically induced changes in the data set was evaluated. Finally, the results were compared to those of manually annotated scans.
Tiziano Ronchetti, Peter Maloca, Christoph Jud, Christoph Meier, Selim Orgül, Hendrik P. N. Scholl, Boris Považay, Philippe C. Cattin
Patch-Based Deep Convolutional Neural Network for Corneal Ulcer Area Segmentation
Abstract
We present a novel approach to automatically identify corneal ulcer areas in fluorescein staining images. The proposed method is based on a deep convolutional neural network that labels each pixel in the corneal image as either ulcer or non-ulcer area, which is essentially a two-class classification problem. A patch-based approach was employed: for every image pixel, a surrounding patch of size 19 × 19 was used to extract the RGB intensities used as features for training and testing. The architecture of our deep network consists of four convolutional layers followed by three fully connected layers with dropout. The final classification was inferred from the probabilistic output of the network. The proposed approach has been validated on a total of 48 images using 5-fold cross-validation, establishing high segmentation accuracy; the proposed method was found to be superior to both a baseline method (active contour) and a representative network method (VGG net). Our automated segmentation method had a mean Dice overlap of 0.86 when compared to the manually delineated gold standard, as well as a strong and significant manual-vs-automatic correlation in terms of ulcer area size (correlation coefficient = 0.9934, p-value = 6.3e-45). To the best of our knowledge, this is one of the first works to accurately tackle the corneal ulcer area segmentation challenge using deep neural network techniques.
Qichao Sun, Lijie Deng, Jianwei Liu, Haixiang Huang, Jin Yuan, Xiaoying Tang
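The patch extraction step — a 19 × 19 RGB neighbourhood per pixel — can be sketched as below; the reflect padding for border pixels is an assumption of this example:

```python
import numpy as np

def extract_patches(img, coords, size=19):
    """Extract size×size patches of an H×W×3 image, centred on the given
    (y, x) pixel coordinates.  The image is reflect-padded so border
    pixels also receive full patches, as needed for per-pixel
    patch-based classification."""
    r = size // 2
    padded = np.pad(img, ((r, r), (r, r), (0, 0)), mode="reflect")
    # after padding by r, the patch starting at (y, x) is centred on img[y, x]
    return np.stack([padded[y:y + size, x:x + size] for y, x in coords])
```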
Model-Driven 3-D Regularisation for Robust Segmentation of the Refractive Corneal Surfaces in Spiral OCT Scans
Abstract
Measuring the cornea’s anterior and posterior refractive surface is essential for corneal topography, used for diagnostics and the planning of surgeries. Corneal topography by Optical Coherence Tomography (OCT) relies on proper segmentation. Common segmentation methods are limited to specific, B-scan-based scan patterns and fail when applied to data acquired by recently proposed spiral scan trajectories. We propose a novel method for the segmentation of the anterior and posterior refractive surface in scans acquired by 2-D scan trajectories – including but not limited to spirals. Key feature is a model-driven, three-dimensional regularisation of the region of interest, slope and curvature. The regularisation is integrated into a graph-based segmentation with feature-directed smoothing and incremental segmentation. We parameterise the segmentation based on test surface measurements and evaluate its performance by means of 18 in vivo measurements acquired by spiral and radial scanning. The comparison with expert segmentations shows successful segmentation of the refractive corneal surfaces.
Joerg Wagner, Simon Pezold, Philippe C. Cattin
Automatic Retinal Layer Segmentation Based on Live Wire for Central Serous Retinopathy
Abstract
Central serous retinopathy is a serious retinal disease. Retinal layer segmentation for this disease can help ophthalmologists provide accurate diagnosis and proper treatment for patients. In order to detect surfaces in optical coherence tomography (OCT) images with pathological changes, an automatic method is reported that combines random forests and a live wire algorithm. First, twenty-four features are designed for the random forest classifiers to find initial surfaces. Then, a live wire algorithm is proposed to accurately detect surfaces between retinal layers even though OCT images with fluids are of low contrast and layer boundaries are blurred. The proposed method was evaluated on 24 spectral domain OCT images with central serous retinopathy. The experimental results showed that the proposed method outperformed state-of-the-art methods.
Dehui Xiang, Geng Chen, Fei Shi, Weifang Zhu, Xinjian Chen
Retinal Image Quality Classification Using Fine-Tuned CNN
Abstract
Retinal image quality classification plays an important role in automated diabetic retinopathy (DR) screening systems. With the increasing use of portable fundus cameras, large numbers of retinal images can be acquired, but many are of poor quality because of uneven illumination, occlusion and patient movement. Training DR screening networks on poor-quality images reduces their accuracy. In this paper, we first adapt four CNN architectures (AlexNet, GoogLeNet, VGG-16, and ResNet-50) from the ImageNet image classification task to retinal fundus image quality classification, then pick the top two networks and jointly fine-tune them. The total loss of the proposed network is the sum of the losses of all channels. We demonstrate the strong performance of our proposed algorithm on a large retinal fundus image dataset, achieving an optimal accuracy of 97.12% and outperforming the current methods in this area.
Jing Sun, Cheng Wan, Jun Cheng, Fengli Yu, Jiang Liu
Optic Disc Detection via Deep Learning in Fundus Images
Abstract
To localize the optic disc (OD) effectively, a new end-to-end CNN-based approach is proposed in this paper. CNNs are a revolutionary network structure that has shown its power in computer vision tasks such as classification, object detection and segmentation, and we apply them to the study of fundus images. First, we use a basic CNN with specialized trained layers to find the pixels likely to lie in the OD region. We then filter the candidate pixels with a threshold and determine the OD location as the center of gravity of the remaining pixels. The method has been tested on three databases: ORIGA, MESSIDOR and STARE. Of the 1240 test images, the OD was successfully located in 1193, a rate of 96.2%. Beyond accuracy, runtime is another advantage: testing one image takes on average only 0.93 s on STARE and 0.51 s on MESSIDOR.
Peiyuan Xu, Cheng Wan, Jun Cheng, Di Niu, Jiang Liu
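The post-processing described — threshold the CNN's probability map, then take the centre of gravity of the surviving pixels — can be sketched as:

```python
import numpy as np

def locate_od(prob_map, thresh=0.5):
    """Localise the optic disc as the probability-weighted centre of
    gravity of pixels whose predicted OD probability exceeds `thresh`.
    Returns (row, col) in image coordinates."""
    ys, xs = np.nonzero(prob_map > thresh)
    w = prob_map[ys, xs]
    return float((ys * w).sum() / w.sum()), float((xs * w).sum() / w.sum())
```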
3D Choroid Neovascularization Growth Prediction with Combined Hyperelastic Biomechanical Model and Reaction-Diffusion Model
Abstract
Choroid neovascularization (CNV) usually causes varying degrees of irreversible retinal degradation, central scotoma, metamorphopsia or permanent visual loss. If early prediction can be achieved, timely clinical treatment can be applied to prevent further deterioration. In this paper, a CNV growth prediction framework based on the physiological structure revealed in noninvasive optical coherence tomography (OCT) images is proposed. The method consists of three steps: pre-processing, CNV growth modeling and prediction. For growth modeling, a new combined model is proposed: a hyperelastic biomechanical model and a reaction-diffusion model with a treatment factor are coupled through mass effect. For parameter optimization, a genetic algorithm is applied. The proposed method was tested on a data set of 6 subjects, each with 12 longitudinal 3-D images. The experimental results showed that an average TPVF, FPVF and Dice coefficient of 80.0 ± 7.62%, 23.4 ± 8.36% and 78.9 ± 7.54%, respectively, could be achieved.
Chang Zuo, Fei Shi, Weifang Zhu, Haoyu Chen, Xinjian Chen
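The reaction-diffusion component of such a growth model can be sketched as one explicit time step. The logistic reaction term, the treatment coefficient and the periodic Laplacian are assumptions of this illustration, and the biomechanical coupling through mass effect is omitted:

```python
import numpy as np

def reaction_diffusion_step(c, D, rho, treat, dt=0.1):
    """One explicit update of a reaction-diffusion growth model:
    dc/dt = D * laplacian(c) + rho * c * (1 - c) - treat * c,
    i.e. diffusion plus logistic proliferation minus a treatment term.
    `c` is the (here 2-D, periodic) cell-density field."""
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c)
    return c + dt * (D * lap + rho * c * (1 - c) - treat * c)
```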
Retinal Biomarker Discovery for Dementia in an Elderly Diabetic Population
Abstract
Dementia is a devastating disease, and has severe implications on affected individuals, their family and wider society. A growing body of literature is studying the association of retinal microvasculature measurement with dementia. We present a pilot study testing the strength of groups of conventional (semantic) and texture-based (non-semantic) measurements extracted from retinal fundus camera images to classify patients with and without dementia. We performed a 500-trial bootstrap analysis with regularized logistic regression on a cohort of 1,742 elderly diabetic individuals (median age 72.2). Age was the strongest predictor for this elderly cohort. Semantic retinal measurements featured in up to 81% of the bootstrap trials, with arterial caliber and optic disk size chosen most often, suggesting that they do complement age when selected together in a classifier. Textural features were able to train classifiers that match the performance of age, suggesting they are potentially a rich source of information for dementia outcome classification.
Ahmed E. Fetit, Siyamalan Manivannan, Sarah McGrory, Lucia Ballerini, Alexander Doney, Thomas J. MacGillivray, Ian J. Deary, Joanna M. Wardlaw, Fergus Doubal, Gareth J. McKay, Stephen J. McKenna, Emanuele Trucco
Non-rigid Registration of Retinal OCT Images Using Conditional Correlation Ratio
Abstract
In this work, we propose a novel similarity measure for non-rigid retinal optical coherence tomography image registration called the conditional correlation ratio (CCR). CCR calculates the correlation ratio (CR) between the moving and fixed image intensities, given a certain spatial distribution. The proposed CCR-based registration is robust to noise and less sensitive to the number of samples used to estimate the density function. Compared to mutual information (MI) and CR, both the quantitative indicators using the Hausdorff distance (HD) and M-Hausdorff distance (MHD) and the qualitative indicator using checkerboard images show that CCR is better suited to align retinal OCT images.
Xueying Du, Lun Gong, Fei Shi, Xinjian Chen, Xiaodong Yang, Jian Zheng
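The correlation ratio underlying CCR can be sketched directly from its definition: one minus the expected within-bin variance of the moving image over its total variance, with the fixed image binned. The binning scheme here is an assumption of the example:

```python
import numpy as np

def correlation_ratio(fixed, moving, bins=32):
    """Correlation ratio eta^2 of `moving` given binned `fixed`
    intensities: 1 - E[Var(moving | bin)] / Var(moving).  Close to 1
    when `moving` is a function of `fixed`, close to 0 when independent."""
    f = np.digitize(fixed.ravel(), np.linspace(fixed.min(), fixed.max(), bins))
    m = moving.ravel().astype(float)
    total_var = m.var()
    if total_var == 0:
        return 1.0
    within = 0.0
    for b in np.unique(f):
        sel = m[f == b]
        within += sel.size * sel.var()
    return 1.0 - within / (m.size * total_var)
```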
Joint Optic Disc and Cup Segmentation Using Fully Convolutional and Adversarial Networks
Abstract
Glaucoma is a highly threatening and widespread ocular disease which may lead to permanent loss of vision. One of the important parameters used for glaucoma screening is the cup-to-disc ratio (CDR), which requires accurate segmentation of the optic cup and disc. We explore fully convolutional networks (FCNs) for the task of joint segmentation of the optic cup and disc. We propose a novel improved architecture building upon FCNs by using the concept of residual learning. Additionally, we explore whether adversarial training helps improve the segmentation results. The method does not require any complicated preprocessing techniques for feature enhancement. We learn a mapping between retinal images and the corresponding segmentation maps using fully convolutional and adversarial networks. We perform extensive experiments with various models on a set of 159 images from the RIM-ONE database, together with extensive comparisons. The proposed method outperforms state-of-the-art methods on various evaluation metrics for both disc and cup segmentation.
Sharath M. Shankaranarayana, Keerthi Ram, Kaushik Mitra, Mohanasankar Sivaprakasam
Automated Segmentation of the Choroid in EDI-OCT Images with Retinal Pathology Using Convolution Neural Networks
Abstract
The choroid plays a critical role in maintaining the portions of the eye responsible for vision. Specific alterations in the choroid have been associated with several disease states, including age-related macular degeneration (AMD), central serous chorioretinopathy, retinitis pigmentosa and diabetes. In addition, choroid thickness measures have been shown to be a predictive biomarker for treatment response and visual function. While several approaches currently exist for segmenting the choroid in optical coherence tomography (OCT) images of the healthy retina, very few are capable of addressing images with retinal pathology. The difficulty is due to existing methods relying on first detecting the retinal boundaries before performing the choroidal segmentation. Performance suffers when these boundaries are disrupted or undergo large morphological changes due to disease and cannot be found accurately. In this work, we show that a learning-based approach using convolutional neural networks allows for the detection and segmentation of the choroid without the prerequisite delineation of the retinal layers. This avoids the need to model and delineate unpredictable pathological changes in the retina due to disease. Experimental validation was performed using 62 manually delineated choroid segmentations of retinal enhanced depth OCT images from patients with AMD. Our results show segmentation accuracy that surpasses that reported by state-of-the-art approaches on healthy retinal images, and overall high values in images with pathology, which are difficult to address by existing methods without pathology-specific heuristics.
Min Chen, Jiancong Wang, Ipek Oguz, Brian L. VanderBeek, James C. Gee
Spatiotemporal Analysis of Structural Changes of the Lamina Cribrosa
Abstract
Glaucoma, a progressive and degenerative disease of the optic nerve, is the second leading cause of blindness worldwide. Mechanical deformation of the lamina cribrosa (LC) under high intraocular pressure (IOP) can lead to axonal death of optic nerve fibers. To explore the effect of pressure on the LC, we utilize an experimental setup where longitudinal 3D optical coherence tomography (OCT) images are acquired at different levels of IOP administered via a well-controlled external force. Structural changes are measured via image deformations which map all observed images simultaneously into a common coordinate space. These deformations encode local patterns of structural and volume change across the image sequence, resulting in quantification of the spatiotemporal deformation pattern of the LC due to variation of pressure. We also describe a 3D segmentation algorithm to restrict our deformation analysis separately to the beams or pores of the LC. A single case study demonstrates the potential of the proposed methodology for non-invasive in-vivo analysis of LC dynamics in individual subjects.
Charly Girot, Hiroshi Ishikawa, James Fishbaugh, Gadi Wollstein, Joel Schuman, Guido Gerig
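Local volume change encoded by such deformations is commonly quantified by the Jacobian determinant of the mapping; a 2-D sketch (the study itself works on 3-D OCT volumes):

```python
import numpy as np

def volume_change(disp):
    """Local volume-change map of a 2-D displacement field
    disp[y, x] = (uy, ux): det(I + ∇u) > 1 means local expansion,
    < 1 local compression, as used to quantify structural deformation."""
    uy, ux = disp[..., 0], disp[..., 1]
    duy_dy, duy_dx = np.gradient(uy)
    dux_dy, dux_dx = np.gradient(ux)
    return (1 + duy_dy) * (1 + dux_dx) - duy_dx * dux_dy
```

For a uniform 10% expansion in both axes the map is 1.1² = 1.21 everywhere, matching the expected area change.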
Fast Blur Detection and Parametric Deconvolution of Retinal Fundus Images
Abstract
Blur is a significant problem in medical imaging which can hinder diagnosis and prevent further automated or manual processing. The problem of restoring an image from blur degradation remains a challenging task in image processing. Semi-blind deblurring is a useful technique which may be developed to restore the underlying sharp image given some assumed or known information about the cause of degradation. Existing models assume that the blur is of a particular type, such as Gaussian, and do not allow for the approximation of images corrupted by other blur types which are not easily incorporated into deblurring frameworks. We present an automated approach to image deconvolution which assumes that the cause of blur belongs to a set of common types. We develop a hierarchical approach with convolutional neural networks (CNNs) to distinguish between blur types, achieving an accuracy of 0.96 across a test set of 900 images, and to determine the blur strength, achieving an accuracy of 0.77 across 1500 test images. Given this, we are able to reconstruct the underlying image to a mean ISNR of 7.53.
Bryan M. Williams, Baidaa Al-Bander, Harry Pratt, Samuel Lawman, Yitian Zhao, Yalin Zheng, Yaochun Shen
Towards Topological Correct Segmentation of Macular OCT from Cascaded FCNs
Abstract
Optical coherence tomography (OCT) is used to produce high resolution depth images of the retina and is now the standard of care for in-vivo ophthalmological assessment. In particular, OCT is used to study the changes in layer thickness across various pathologies. The automated image analysis of these OCT images has primarily been performed with graph based methods. Despite the preeminence of graph based methods, deep learning based approaches have begun to appear within the literature. Unfortunately, they cannot currently guarantee the strict biological tissue order found in human retinas. We propose a cascaded fully convolutional network (FCN) framework to segment eight retina layers and preserve the topological relationships between the layers. The first FCN serves as a segmentation network which takes retina images as input and outputs the segmentation probability maps of the layers. We next perform a topology check on the segmentation and those patches that do not satisfy the topology criterion are passed to a second FCN for topology correction. The FCNs have been trained on Heidelberg Spectralis images and validated on both Heidelberg Spectralis and Zeiss Cirrus images.
Yufan He, Aaron Carass, Yeyi Yun, Can Zhao, Bruno M. Jedynak, Sharon D. Solomon, Shiv Saidha, Peter A. Calabresi, Jerry L. Prince
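The topology check between the two FCNs can be sketched as an ordering test along each A-scan: layer labels must never decrease from the inner to the outer retina. The exact criterion and patch routing are assumptions of this illustration:

```python
import numpy as np

def topology_violations(labels):
    """Check the anatomical layer order in a segmentation labelmap
    (rows = depth, cols = A-scans): along every column, layer indices
    must be non-decreasing.  Returns the columns that violate this
    order and would be routed to a correction network."""
    diffs = np.diff(labels.astype(int), axis=0)
    return np.nonzero((diffs < 0).any(axis=0))[0]
```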
Boosted Exudate Segmentation in Retinal Images Using Residual Nets
Abstract
Exudates in retinal images are one of the early signs of vision-threatening diabetic retinopathy and diabetic macular edema. Early diagnosis is very helpful in preventing progression of the disease. In this work, we propose a fully automatic exudate segmentation method based on the state-of-the-art residual learning framework. With our proposed end-to-end architecture, training is done on small patches, but at test time the full-sized segmentation is obtained at once. The small number of exudates in the training set and the presence of other bright regions are the limiting factors, which are tackled by our proposed importance sampling approach. This technique selects misleading normal patches with a higher priority, while at the same time preventing the network from overfitting to those samples. Thus, no additional post-processing is needed. The method was evaluated on three public datasets for both detecting and segmenting exudates and outperformed state-of-the-art techniques.
Samaneh Abbasi-Sureshjani, Behdad Dashtbozorg, Bart M. ter Haar Romeny, François Fleuret
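The importance sampling idea above can be sketched as loss-proportional patch selection with a temperature that keeps easy patches in the mix. This is an illustrative stand-in, not the paper's exact scheme; the temperature parameter and the loss-based weighting are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def importance_sample(losses, n, temperature=2.0):
    """Pick `n` distinct patch indices with probability proportional to
    a tempered version of each patch's current loss: hard ('misleading')
    patches are drawn more often, while temperature > 1 flattens the
    distribution so the network is not shown only the hardest samples,
    which would encourage overfitting to them."""
    p = (losses + 1e-8) ** (1.0 / temperature)
    p = p / p.sum()
    return rng.choice(len(losses), size=n, replace=False, p=p)

# Three easy patches and two hard (e.g. bright-region) patches.
losses = np.array([0.1, 0.1, 0.1, 2.0, 3.0])
picked = importance_sample(losses, 2)
print(sorted(picked.tolist()))
```

With `temperature=1` this reduces to plain loss-proportional sampling; larger values approach uniform sampling.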
Development of Clinically Based Corneal Nerves Tortuosity Indexes
Abstract
In-vivo specular microscopy provides information on the corneal health state. The correlation between corneal nerve tortuosity and pathology has been shown several times. However, because there is no unique formal definition of tortuosity, reproducibility is poor. Recently, two distinct forms of corneal nerve tortuosity have been identified, describing either short-range or long-range directional changes. Using 30 images and their manual grading provided by 7 experts, we automatically traced corneal nerves with a custom computerized procedure and identified the combination of geometrical measurements that best represents each tortuosity definition (Spearman rank correlation equal to 0.94 and 0.88, respectively). We then evaluated both tortuosity indexes on 100 images from 10 healthy and 10 diabetic subjects (5 images per subject). A linear discriminant analysis using both indexes together showed a very good capability (85% accuracy) of differentiating healthy subjects from pathological ones.
Fabio Scarpa, Alfredo Ruggeri
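The short-range/long-range distinction above can be illustrated with two classic geometrical measurements on a traced nerve centerline: the arc-to-chord length ratio (sensitive to long-range directional change) and the density of curvature sign changes (sensitive to short-range oscillation). These are generic measures in the spirit of the paper, not its exact combination.

```python
import numpy as np

def tortuosity_indexes(points):
    """Two simple tortuosity measures for a sampled centerline
    (N x 2 array of points). Long-range: arc length over chord length.
    Short-range: curvature sign changes per unit arc length, using the
    z-component of cross products of consecutive segment vectors as a
    signed-curvature proxy."""
    d = np.diff(points, axis=0)
    arc = np.sum(np.linalg.norm(d, axis=1))
    chord = np.linalg.norm(points[-1] - points[0])
    long_range = arc / chord
    cross = d[:-1, 0] * d[1:, 1] - d[:-1, 1] * d[1:, 0]
    sign_changes = np.sum(np.diff(np.sign(cross)) != 0)
    short_range = sign_changes / arc
    return long_range, short_range

# A wavy (sinusoidal) curve scores above a straight line on both indexes.
t = np.linspace(0, 2 * np.pi, 200)
wavy = np.column_stack([t, 0.3 * np.sin(5 * t)])
lr, sr = tortuosity_indexes(wavy)
print(lr > 1.0, sr > 0)
```

A weighted combination of several such measures, fitted to expert gradings, is what yields the two clinically based indexes described in the abstract.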
A Comparative Study Towards the Establishment of an Automatic Retinal Vessel Width Measurement Technique
Abstract
In this paper, we propose a fully automatic technique for the assessment of retinal vessel caliber in fundus images based on a multi-scale active contour model. The proposed method is compared with the well-known semi-automated IVAN software and the Vampire width annotation tool. Experimental results show that our approach provides fast and fully automatic caliber measurements, with measurements similar to and system error comparable with those of the IVAN software. It will benefit the analysis of quantitative retinal vessel caliber measurements in large-scale screening programs.
Fan Huang, Behdad Dashtbozorg, Alexander Ka Shing Yeung, Jiong Zhang, Tos T. J. M. Berendschot, Bart M. ter Haar Romeny
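Caliber measurement ultimately reduces to estimating the vessel width perpendicular to its local direction. The sketch below is a crude binary-mask stand-in for that idea, not the paper's multi-scale active-contour method: it steps along the unit normal in both directions until it leaves the vessel.

```python
import numpy as np

def vessel_width(mask, point, direction):
    """Estimate vessel caliber (in pixels) at `point` (y, x) by
    stepping along the unit normal to the local vessel `direction`
    in both senses until leaving the binary vessel `mask`."""
    normal = np.array([-direction[1], direction[0]], dtype=float)
    normal /= np.linalg.norm(normal)
    width = 1  # count the point itself
    for s in (1, -1):
        step = 1
        while True:
            y, x = np.round(point + s * step * normal).astype(int)
            inside = 0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
            if not inside or not mask[y, x]:
                break
            width += 1
            step += 1
    return width

# A horizontal vessel, 3 pixels thick, in a 7x7 mask.
mask = np.zeros((7, 7), dtype=bool)
mask[2:5, :] = True
print(vessel_width(mask, np.array([3, 3]), np.array([0, 1])))  # -> 3
```

An active-contour formulation instead fits sub-pixel vessel edges directly to the image intensities, which is what makes sub-pixel, multi-scale caliber measurement possible.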
Automatic Detection of Folds and Wrinkles Due to Swelling of the Optic Disc
Abstract
We propose a method for detecting mechanically induced wrinkles that present themselves around the optic disc as a result of swelling. Folds and wrinkles have recently been found to be useful features for diagnosing optic disc swelling and for differentiating papilledema (optic nerve swelling due to raised intracranial pressure) from pseudopapilledema. 3D spectral-domain optical coherence tomography (SD-OCT) images were obtained from a total of 22 patients diagnosed with varying degrees and causes of optic disc swelling, and were used to create fold-enhanced 2D images. Features pertaining to orientation, Gabor responses, Fourier responses, and coherence were extracted to train a pixel-level classifier to distinguish between folds, vessels, image artifacts, and background. An area under the curve of 0.804 was achieved for the classification.
Jason Agne, Jui-Kai Wang, Randy H. Kardon, Mona K. Garvin
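Folds appear as oriented ridges, so the Gabor responses mentioned above come from filtering each pixel's neighborhood with a bank of oriented Gabor kernels; the orientation of the strongest response is a local orientation estimate. A minimal sketch follows; the frequency, sigma, and bank size are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def gabor_kernel(theta, freq=0.2, sigma=3.0, size=15):
    """Real-valued Gabor kernel oscillating along orientation `theta`,
    made zero-mean so that flat image regions respond with zero."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)
    return g - g.mean()

def gabor_features(patch, n_orientations=8):
    """Responses of a square patch to the oriented filter bank;
    the argmax is a crude local orientation estimate."""
    thetas = np.linspace(0, np.pi, n_orientations, endpoint=False)
    return np.array([np.sum(patch * gabor_kernel(t, size=patch.shape[0]))
                     for t in thetas])

# A vertical ridge responds most strongly to the matching orientation.
patch = np.zeros((15, 15))
patch[:, 7] = 1.0
resp = gabor_features(patch)
print(np.argmax(resp))
```

In the full pipeline such responses are combined with Fourier and coherence features and fed to the pixel-level classifier.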
Representation Learning for Retinal Vasculature Embeddings
Abstract
The retinal vasculature imaged with fundus photography has the potential to encode valuable information for image-based retinal biomarkers. However, progress in their development is slow due to the need to define vasculature morphology variables a priori and to develop algorithms specific to those variables. In this paper, we introduce a novel approach to learn a general descriptor (or embedding) that captures the vasculature morphology in a numerically compact vector with minimal feature engineering. The vasculature embedding is computed by leveraging the internal representation of a new encoder-enhanced fully convolutional neural network, trained end-to-end on the raw pixels and manually segmented vessels. This approach effectively transfers the vasculature patterns learned by the network into a general-purpose vasculature embedding vector. Using Messidor and Messidor-2, two publicly available datasets, we test the vasculature embeddings on two tasks: (1) an image retrieval task, which retrieves similar images according to their vasculature; and (2) a diabetic retinopathy classification task, where we show that the vasculature embeddings improve the classification of an algorithm based on microaneurysm detection by 0.04 AUC on average.
Luca Giancardo, Kirk Roberts, Zhongming Zhao
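Once each image is reduced to a compact embedding vector, the retrieval task above amounts to nearest-neighbor search in embedding space. A minimal sketch using cosine similarity, with random vectors standing in for the embeddings a trained FCN encoder would produce:

```python
import numpy as np

def retrieve(query, embeddings, k=3):
    """Rank a database of embedding vectors by cosine similarity to a
    query embedding and return the top-k indices with their scores."""
    q = query / np.linalg.norm(query)
    db = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = db @ q
    order = np.argsort(-sims)[:k]
    return order, sims[order]

rng = np.random.default_rng(42)
db = rng.normal(size=(10, 64))              # 10 images, 64-dim embeddings
query = db[4] + 0.01 * rng.normal(size=64)  # near-duplicate of image 4
idx, sims = retrieve(query, db)
print(idx[0])  # -> 4
```

For the classification task, the same embedding vector can simply be concatenated with other features (e.g. microaneurysm counts) before the final classifier.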
Backmatter
Metadata
Title
Fetal, Infant and Ophthalmic Medical Image Analysis
edited by
M. Jorge Cardoso
Tal Arbel
Dr. Andrew Melbourne
Dr. Hrvoje Bogunovic
Pim Moeskops
Prof. Xinjian Chen
Ernst Schwartz
Mona Garvin
Emma Robinson
Emanuele Trucco
Michael Ebner
Dr. Yanwu Xu
Antonios Makropoulos
Adrien Desjardin
Tom Vercauteren
Copyright Year
2017
Electronic ISBN
978-3-319-67561-9
Print ISBN
978-3-319-67560-2
DOI
https://doi.org/10.1007/978-3-319-67561-9
