
Open Access 28.07.2023 | Regular Paper

An automated classification framework for glaucoma detection in fundus images using ensemble of dynamic selection methods

Authors: Sumaiya Pathan, Preetham Kumar, Radhika M. Pai, Sulatha V. Bhandary

Published in: Progress in Artificial Intelligence | Issue 3/2023


Abstract

Glaucoma is an optic neuropathy which leads to irreversible vision loss due to damage of the optic nerve head, mainly caused by increased intra-ocular pressure. Retinal fundus photography helps ophthalmologists detect glaucoma, but manual assessment is subjective and time-consuming. Computational methods such as image processing and machine learning classifiers can aid computer-based glaucoma detection, which supports mass screening. In this context, the proposed method develops an automated glaucoma detection system in the following steps: (i) pre-processing by segmenting the blood vessels using a directional filter; (ii) segmenting the region of interest using statistical features; (iii) extracting clinical and texture-based features; and (iv) developing ensembles of classifier models using dynamic selection techniques. The proposed method is evaluated on two publicly available datasets and on 300 fundus images collected from a hospital. The best results are obtained using an ensemble of random forests with the META-DES dynamic ensemble selection technique: on the hospital dataset, the average specificity, sensitivity and accuracy for glaucoma detection are each 100%. For the RIM-ONE dataset, the average specificity, sensitivity and accuracy are 100%, 93.85% and 97.86%, respectively. For the Drishti dataset, they are 90%, 100% and 97%, respectively. The quantitative results and comparative study indicate the ability of the developed method; thus, it can be deployed in mass screening and as a second opinion for the ophthalmologist in glaucoma detection.

1 Introduction

Glaucoma is a leading cause of irreversible blindness, and its manifestation is unknown until it reaches an advanced stage. Hence, periodic eye checkups are the sole way of detecting the disease and preventing further blindness. Glaucoma is defined as a progressive optic neuropathy that damages the structural appearance of the optic nerve head, also known as the optic disk (OD). The major cause of glaucoma is a decrease in the outflow of the intra-ocular fluid, called aqueous humor, in the eye [1–3]. Glaucoma prevalence is projected to increase to 112 million worldwide by 2040 [3]. Fundus photography of the optic nerve head is a non-invasive way for ophthalmologists to observe changes caused by glaucoma, such as the cup-to-disk ratio (CDR) and loss in the neuro-retinal rim (NRR) area [3, 4]. The ratio of the area of the optic cup (OC) to the area of the OD is known as the CDR [1]. The NRR area is the region between the OD and OC boundaries, and its thinning is also observed as glaucoma advances [1]. Figure 1 shows the fundus image of a normal eye with parts labeled. Manual assessment of fundus images is subject to inter- and intra-observer variation. Hence, the development of an automated detection system which processes fundus images to quantify glaucoma is of great advantage for mass screening and for providing the ophthalmologist with a second opinion during diagnosis.

2 Literature review

The recent methods described in the literature for development of a glaucoma detection system can be divided into segmentation- and non-segmentation-based methods.
Segmentation-based methods involve extraction of clinical features such as CDR and NRR area. Dela et al. [5] applied active contours and the Hough transform in the red channel to segment the OD. The displacement of vessels in the OD is estimated using the chessboard metric to detect glaucoma. The method obtained an accuracy of 92% on a hospital dataset of 67 images. Although the method introduced a unique feature, vessel displacement, it lacked the main clinical feature, CDR. Ashish et al. [6] used an adaptive threshold obtained from pixel intensities for segmentation of OD and OC. Further, CDR and NRR features are calculated, and the accuracy obtained is 94% on a hospital dataset of 67 images. The segmentation method is not designed to exclude artifacts, which in some images are segmented as part of the OD. Soorya et al. [7] developed a method which tracks the bends of the vessels inside the OD in order to obtain the OC contours. The OD contour is obtained using thresholding and contour-point joining. The method used CDR as a feature for glaucoma detection and obtained an average accuracy of 97% on 225 images collected from a local hospital. Although the method achieved a good accuracy, the OC contours can be detected only in images having high vessel contrast. Pardha et al. [8] performed OD segmentation using region-based active contours and OC segmentation using clustering. The method is tested on 59 images obtained from a hospital and obtained an average dice coefficient of 97% and 87% for OD and OC, respectively; glaucoma detection accuracy is not reported. Kausu et al. [9] segmented OD and OC using fuzzy C-means (FCM) clustering and the Otsu thresholding method. CDR and energy-based wavelet features were further estimated for glaucoma detection. The obtained accuracy for glaucoma detection is 97% using an artificial neural network (ANN). Although the method achieved a good detection result on 86 images obtained from a hospital, its OD and OC segmentation algorithm still needs to be evaluated for clinical implementation. Soltani et al. [10] used Canny edge detection for obtaining OD and OC contours. A fuzzy engine is developed which considers CDR along with the patient's health data for classifying fundus images as normal or glaucoma; the obtained classification accuracy is 96%. Chai et al. [11] developed a fully convolutional network to obtain the OD and OC contours. Along with the CDR feature, the patients' health data were considered for glaucoma detection. The method is implemented on a hospital dataset of 2554 fundus images, and the accuracy of correct classification is 91%. The main limitations of this method are the need for a large training dataset and for resizing the images to reduce computation time. Julian et al. [12] segmented OD and OC by developing a framework based on a convolutional neural network (CNN). The filter outputs are trained using a softmax logistic model and subjected to convex hull and graph cut for the final segmentation, and CDR is estimated for glaucoma detection. The method achieved a dice coefficient of 97% and 87% for OD and OC, respectively, on the publicly available Drishti dataset. Although the segmentation results are good, parameter optimization remains an issue for reducing computational complexity. Perdomo et al. [13] developed a three-step CNN model. In the first step, a CNN with 15 layers is developed for OD and OC segmentation.
In the second step, a CNN with 12 layers is designed for extracting morphometric features from the segmented OD and OC. In the third step, a CNN is trained to classify the fundus images based on the extracted features. The method obtained a classification accuracy of 89% on the Drishti dataset. Sevastopolsky et al. [14] developed an OD and OC segmentation method based on U-Net, which consists of a contracting and an expansive path. The CNN architecture is used to build the contracting path, and the image information is merged in the expansive path. The dice coefficient for OD and OC is 94% and 85%, respectively. The segmented OD and OC are used to compute CDR; glaucoma detection accuracy is not reported. Thakur et al. [15] developed a hybrid model consisting of adaptive FCM and level sets for segmenting OD and OC. The accuracy of OD and OC segmentation on the Drishti dataset is 93% and 92%, respectively. The main limitation of this method is over- and under-segmentation for low-contrast fundus images; accuracy of glaucoma detection is not reported. Cheng et al. [16] used superpixel-based classification of OD and OC. The OD boundary is initialized using features based on histograms and statistics; for the OC boundary, local information is used in addition to histograms and statistics. The area under the curve achieved for glaucoma detection is 0.80 on 650 images collected in a hospital dataset. Civit et al. [53] selected U-Net as the segmentation network and developed a generalized U-Net implementation adapted to execution on tensor processing units for cloud-based services. Tulsani et al. [54] segmented OD and OC using a custom U-Net++ architecture which minimizes the loss of context and local image information by employing encoder, decoder and skip connections. The study shows that U-Net is effective for segmentation even with small datasets. For glaucoma detection, the method achieved an accuracy of 94% and 91% on the Drishti and RIM datasets, respectively. However, U-Net models execute repeated convolutions for feature extraction and restoration and therefore require a large number of trainable parameters.
Non-segmentation-based methods involve extraction and classification of features based on the spatial distribution of color or intensity in the image. Singh et al. [17] localized the OD using bit-plane analysis and extracted wavelet features. Evolutionary attributes and principal component analysis (PCA) are used as feature selection methods. The obtained accuracy is 94% using a support vector machine (SVM) classifier on 63 images collected from a local hospital; the accuracy could be further improved by adding clinical features. Maheshwari et al. [18] extracted features such as Kapur, Rényi and Yager entropies and fractal dimension. A least-squares SVM is used for classifying the extracted features, and the accuracy obtained is 95% on a hospital dataset of 488 images. Kevin et al. [19] developed a glaucoma detection model using higher-order spectra (HOS) cumulant features. Linear discriminant analysis is used as a feature reduction method, and the features are used to train naïve Bayes (NB) and SVM classifiers. The method was tested on 272 images collected from a hospital and achieved an accuracy of 92%; the detection accuracy could be further improved by using clinical features such as CDR. Rajendra et al. [20] extracted features such as kurtosis, Kapur entropies, energy, mean, Rényi, Shannon and variance from Gabor transform coefficients. These features are ranked using the t-test. The method obtained a classification accuracy of 93% on 510 images collected from a hospital. Haleem et al. [21] developed an image feature model which uses vascular convergence to locate the OD. The localized OD is used to extract Gaussian, wavelet, gradient and Gabor features, which are used to train an SVM classifier. The method obtained an accuracy of 94% on the publicly available RIM dataset. The redundant features are reduced since they are extracted from the OD. Dua et al. [22] trained LibSVM and sequential minimal optimization classifiers with wavelet-based features. SVM achieved the higher accuracy of 93% on 63 images collected from a hospital; the method needs to be tested on a larger dataset for clinical implementation. Raghavendra et al. [23] proposed an 18-layer CNN model which maps the pixels in a hierarchical form to classify the fundus images. The model is trained on a hospital dataset of 1426 images and achieved an accuracy of 98%. Although the method achieved a good classification accuracy, it cannot be generalized from a small number of images, as it requires a large number of images for training. Gour et al. [24] proposed a glaucoma detection model which uses histogram-based gradient features and gradient information scale for capturing the shape features. A total of 1448 features are extracted, and the prominent features are selected using PCA. The method obtained an accuracy of 79% on the Drishti dataset; the accuracy could be further improved by using clinical features. Akram et al. [25] localized the OD in the red channel using a Laplacian-of-Gaussian filter. The vascular density information is considered, and a multivariate m-mediods classifier is used for classification. An accuracy of 91% is obtained on 462 images. Mookiah et al. [26] proposed a model which uses the Radon transform and histogram equalization as pre-processing. Discrete wavelet transform and HOS features are extracted and used to train an SVM classifier. The method obtained an accuracy of 95% on 60 images. The method developed a glaucoma risk index, which aided good classification accuracy.
However, the method needs to be tested on a large dataset for clinical implementation. Raja et al. [27] developed a hybrid swarm optimization method. The features are extracted from the hyper-analytic wavelet transform to preserve phase information. Classification is performed using an SVM classifier with a radial basis function kernel. A group search optimizer with ranging and area-scanning features is embedded in the particle swarm framework for better detection. The method obtained an accuracy of 95% on the RIM dataset.
The above-discussed methods report good results for glaucoma detection, but some limitations still exist: (i) over- and under-segmentation can cause significant differences in clinical features such as CDR; (ii) there is a lack of methods which consider both features based on clinical evaluation (CDR and NRR) and features based on texture (color and intensity) for classifying the fundus images; and (iii) different classifiers make different errors, so ensembles of classifiers need to be explored for glaucoma detection. The proposed methodology aims to overcome these limitations. Hence, the main contributions of the proposed method are: (i) robust automated OD and OC segmentation methods; (ii) feature extraction that includes clinical evaluation and texture arrangement; and (iii) a robust classifier model created as an ensemble of classifiers using dynamic selection techniques.

3 Proposed method

The proposed method for glaucoma detection is evaluated on three datasets. In all the datasets used, the term "annotations" refers to the class (normal or glaucoma) and to the disk and cup boundary masks. (1) Images collected from Kasturba Medical College (KMC), Manipal, Karnataka, India. The annotations for all 300 fundus images (glaucoma—205, normal—95) are given by the ophthalmologist. The images are captured using a Zeiss FF450 plus fundus camera with a resolution of 2588 × 1958 pixels. The data collection has been approved by the KMC ethics committee. (2) Drishti [28], a publicly available online dataset. Annotations are provided for all 101 fundus images (glaucoma—70, normal—31), which have a resolution of 2896 × 1944 pixels. (3) RIM version 3 [29], a publicly available online dataset. Annotations are provided for all 124 fundus images (glaucoma—39, normal—85), which have a resolution of 1072 × 1424 pixels.
The proposed methodology for automated glaucoma detection consists of the following steps: pre-processing, segmentation methods for OD and OC, extraction of features and classification. Figure 2 shows the flow design of the proposed framework for detection of glaucoma.

3.1 Pre-processing

Segmentation is often hindered by the presence of blood vessels; hence, the blood vessels are detected and excluded from the fundus image. In the RGB fundus image, the blood vessels are most evident in the green channel, which is therefore selected based on the literature [55]. The noise in the green channel is removed by applying a 2D Gaussian smoothing kernel with \(\sigma = 4\), giving a Gaussian-filtered image. On a set of 20 images, different values of the standard deviation were evaluated, and \(\sigma = 4\) was found to give the most appropriate blood vessel detection. In order to enhance the contrast of the blood vessels, a linear structuring element of size 150 is considered for orientations varying in steps of 45° from 0° to 360°. The responses are summed, and dilation and erosion operations are performed. This response image is subtracted from the Gaussian-filtered image in order to enhance the irregularly distributed blood vessels. The image is then subjected to Otsu thresholding, and in-painting is performed using the Mumford–Shah method [30]. This gives a fundus image with the blood vessels excluded. Figure 3 illustrates the blood vessel extraction and exclusion.
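To make this step concrete, the following is a minimal Python sketch of the vessel-removal pipeline, assuming OpenCV. The oriented line-kernel helper, the use of morphological closing as the directional response, the sign of the subtraction and Telea in-painting (a stand-in for the Mumford–Shah method [30]) are illustrative assumptions, not the exact implementation.

```python
import cv2
import numpy as np

def line_kernel(length, angle_deg):
    """Linear structuring element at a given orientation (hypothetical helper;
    the paper uses a linear SE of size 150 at 45-degree steps)."""
    size = length + 1 if length % 2 == 0 else length   # odd size keeps the line centred
    k, c = np.zeros((size, size), np.uint8), (length + 1 if length % 2 == 0 else length) // 2
    t = np.deg2rad(angle_deg)
    for r in range(-c, c + 1):
        k[int(round(c + r * np.sin(t))), int(round(c + r * np.cos(t)))] = 1
    return k

def remove_vessels(rgb_u8):
    """Vessel detection and exclusion on the green channel of an 8-bit RGB image."""
    green = rgb_u8[:, :, 1].astype(np.float32)
    smooth = cv2.GaussianBlur(green, (0, 0), sigmaX=4)            # sigma = 4, as in the paper
    resp = np.zeros_like(smooth)
    for ang in range(0, 360, 45):                                 # summed oriented responses
        resp += cv2.morphologyEx(smooth, cv2.MORPH_CLOSE, line_kernel(150, ang))
    resp = cv2.erode(cv2.dilate(resp, None), None)                # dilation, then erosion
    vessels = resp / 8.0 - smooth          # closing minus image highlights the dark vessels
    vessels = cv2.normalize(vessels, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(vessels, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Telea in-painting as a stand-in for the Mumford-Shah method [30]
    return cv2.inpaint(rgb_u8, mask, 5, cv2.INPAINT_TELEA)
```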

3.2 OD segmentation

The pre-processed image is used for segmentation of OD and OC. The OD is most prominent in the red channel of the RGB image [5, 6, 8, 9, 15]. The red channel is considered, and a statistical feature, the absolute mean, is computed from the red channel and subtracted in an iterative manner. The number of iterations considered is three; the procedure was repeated for different iteration numbers, and the best results are retained and reported. On the third iterative image, the Prewitt operator is used to compute the high-intensity edges. A circle-finder operation is performed to determine all possible circles. For this purpose, the minimum radius of the OD considered is 2.5 mm [31], which corresponds to approximately 9.5 pixels. The minimum radius is taken as 10 pixels, and the maximum radius as twice the minimum, i.e., 20 pixels. A circle is defined as in (1)
$$ (x - a)^{2 } + (y - b)^{2 } = r^{2} , $$
(1)
where r is the radius and the circle center is defined by a and b. For r value in the range (10, 20), the optimal centers of the circles are obtained by using (2) and (3):
$$ a = x - r \;\cos \theta , $$
(2)
$$ b = y - r\sin \theta , $$
(3)
where x and y are the edge coordinates defined by the Prewitt operator, and the angle \(\theta\) varies in the range \( \left[ {\frac{2}{{r_{{{\text{minimum}}}} }} , \;\;\frac{2}{{r_{{{\text{maximum}}}} }}} \right]\). This gives all the possible circles. The circumference points are obtained by varying the angle in steps of 45° from 0° to 360°. The mean of the circumference points is subtracted from the iterative image, which further reduces the background variability and enhances the OD region. This image is then subjected to a threshold operation.
To determine the threshold value, a decision tree classifier using the Iterative Dichotomiser 3 (ID3) algorithm [32] is employed. A set of 20 images from the RIM dataset, obtained after the third iteration of absolute-mean subtraction, is considered. The images are overlapped with their corresponding binary masks, and the mean intensity values of the background and the region of interest (OD) are obtained. These intensity values are given to a decision tree with a single split which uses the ID3 algorithm. ID3 gives the boundary value which minimizes the entropy over all possible boundaries; this is used as the threshold value to obtain the binary mask of the OD. The threshold value obtained for OD segmentation is not specific to a particular dataset and hence can be used for other fundus datasets. An eccentricity threshold of 0.6 helps to eliminate any noisy pixels present in the binary mask of the OD; this value was obtained by examining different values on 20 images [33]. The final resulting binary mask is the region of the OD. Figure 4 explains the segmentation of the OD.
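The ID3-based threshold can be reproduced with a single-split decision tree, and the circle finder with a circular Hough transform; the sketch below, assuming scikit-image and scikit-learn, illustrates both. The interpretation of the iterative absolute-mean subtraction is one plausible reading of the text.

```python
import numpy as np
from skimage.filters import prewitt
from skimage.transform import hough_circle, hough_circle_peaks
from sklearn.tree import DecisionTreeClassifier

def iterative_mean_subtraction(channel, iterations=3):
    """Successive subtraction of the absolute mean (one reading of the paper)."""
    img = channel.astype(np.float64)
    for _ in range(iterations):
        img = img - np.abs(img).mean()
    return img

def od_threshold(od_means, background_means):
    """ID3-style single split: the entropy-minimising boundary between the mean
    intensities of OD and background samples from 20 annotated images."""
    X = np.r_[od_means, background_means].reshape(-1, 1)
    y = np.r_[np.ones(len(od_means)), np.zeros(len(background_means))]
    stump = DecisionTreeClassifier(criterion="entropy", max_depth=1).fit(X, y)
    return stump.tree_.threshold[0]

def locate_od(red_channel):
    """Circle finder on Prewitt edges for radii between 10 and 20 pixels."""
    enhanced = iterative_mean_subtraction(red_channel, 3)
    edges = prewitt(enhanced)
    radii = np.arange(10, 21)
    accumulator = hough_circle(edges > edges.mean(), radii)
    _, cx, cy, r = hough_circle_peaks(accumulator, radii, total_num_peaks=1)
    return cx[0], cy[0], r[0]
```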

3.3 OC segmentation

For OC segmentation, the green channel is considered because the OC is most prominent in the green channel [8, 9, 15]. In order to reduce background variation, successive computation and subtraction of the absolute mean are performed on the green channel in an iterative manner. The number of iterations considered is three, and the resulting image shows the high-intensity pixels belonging to the OD and OC regions. In order to retain only the pixels belonging to the OC region, all the absolute means computed in the three iterations are added, and the result is subtracted from the absolute mean of the last iteration. In order to sharpen the edges belonging to the OC region, the resulting image is subjected to successive computation and subtraction of the standard deviation for two iterations. This gives a new channel with increased contrast of the pixels belonging to the OC region, which is then subjected to K-means clustering. Clustering helps in grouping high-intensity pixels into a single group, and K-means converges quickly. Clustering is performed as follows (a minimal sketch is given after this description): (1) The number of initial centroids k is 4, chosen using the silhouette criterion [34] to avoid randomization in selecting the initial number of clusters; in the silhouette method, the silhouette coefficient of each point is computed, which measures the similarity of a point to its own cluster compared with the other clusters [34]. (2) The distance from each pixel \(P(x,\;y)\) to the centroids \(\{ C_{1} , \;C_{2 } , \;C_{3} , \; C_{4} \}\) is computed as the squared Euclidean distance \(d = (P_{i} - C_{n})^{2}\), where n is the cluster number and \(i = 1, 2, \ldots, N\), N being the number of pixels; \(P_{i}\) is assigned to the cluster with the smallest distance. (3) After assigning all the pixels, the centroids are recalculated. (4) Steps 2 and 3 are repeated until the centroids no longer change. (5) The cluster with an average pixel intensity value greater than 3 is segmented as the region belonging to the OC. Figure 5 explains the segmentation of the OC.
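A minimal sketch of this clustering step, assuming scikit-learn; the subsampling in pick_k is an added practicality (silhouette scoring on every pixel is slow) and not part of the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def pick_k(pixels, candidates=(2, 3, 4, 5), sample=2000):
    """Silhouette-based choice of k (the paper settles on k = 4);
    scored on a random subsample for speed."""
    rng = np.random.default_rng(0)
    sub = pixels[rng.choice(len(pixels), size=min(sample, len(pixels)), replace=False)]
    scores = {k: silhouette_score(sub, KMeans(k, n_init=10, random_state=0).fit_predict(sub))
              for k in candidates}
    return max(scores, key=scores.get)

def segment_oc(new_channel, k=4, intensity_cutoff=3):
    """K-means on the contrast-enhanced channel; clusters whose average
    intensity exceeds the cutoff ("greater than 3") form the OC mask."""
    pixels = new_channel.reshape(-1, 1).astype(np.float64)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    oc = np.zeros(len(pixels), dtype=bool)
    for c in range(k):
        if pixels[labels == c].mean() > intensity_cutoff:
            oc |= labels == c
    return oc.reshape(new_channel.shape)
```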

3.4 Feature extraction

Using the segmented binary mask of OD and OC, clinical features, namely CDR and NRR area, are obtained. From the fundus image, features, namely: gray-level co-occurrence matrix (GLCM)-based features, texture directionality feature extracted from N + 1 directional difference of Gaussian filters, Gabor features, Hu-invariant moments and color features, are extracted. Each of these features is explained in the following.

3.4.1 CDR

CDR is defined as given in (4); a CDR value of up to 0.3 is considered normal. Using the binary masks of the segmented OD and OC, the areas of white pixels are estimated and the CDR is obtained. The CDR value is used as a feature for glaucoma classification.
$$ {\text{CDR }} = \frac{{{\text{Area}}\;{\text{ of }}\;{\text{segmented }}\;{\text{ OC}}}}{{{\text{Area}}\;{\text{ of }}\;{\text{segmented }}\;{\text{OD}}}}. $$
(4)

3.4.2 NRR area

The area present between the OC and OD boundaries is the NRR region. The changes in the NRR area are estimated using the inferior, superior, nasal and temporal (ISNT) rule. According to the ISNT rule, the NRR thickness decreases in the order inferior > superior > nasal > temporal \((I > S > N > T)\). Figure 6 shows the ISNT quadrants in the NRR area for a normal eye. The NRR area ratio defined in (5) is used to verify the ISNT rule: a normal eye has an NRR ratio greater than 1, while for a glaucomatous eye the ratio is close to 1 or less than 1 [8, 9, 21].
$$ {\text{NRR}} = \frac{{{\text{sum }}\;{\text{of}}\;{\text{ area}}\;{\text{ in}}\;{\text{ inferior }}\;{\text{and}}\;{\text{ superior }}\;{\text{quadrant}}}}{{{\text{sum }}\;{\text{of}}\;{\text{ area}}\;{\text{ in}}\;{\text{ nasal }}\;{\text{and }}\;{\text{temporal }}\;{\text{quadrant}}}}. $$
(5)
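The two clinical features follow directly from the binary masks. The sketch below assumes the disk centre (cx, cy) is known and a standard fundus orientation (superior at the top); the nasal/temporal assignment additionally depends on whether the image is of a left or right eye, which the paper does not detail.

```python
import numpy as np

def cdr(od_mask, oc_mask):
    """Cup-to-disk ratio, Eq. (4), from boolean OD/OC masks."""
    return oc_mask.sum() / od_mask.sum()

def nrr_ratio(od_mask, oc_mask, cx, cy):
    """ISNT-based NRR area ratio, Eq. (5). Quadrants are carved around the
    disk centre; superior is assumed at the top of the image."""
    ys, xs = np.nonzero(od_mask & ~oc_mask)          # neuro-retinal rim pixels
    ang = np.degrees(np.arctan2(ys - cy, xs - cx)) % 360
    inferior = ((ang >= 45) & (ang < 135)).sum()     # +y points downward in images
    superior = ((ang >= 225) & (ang < 315)).sum()
    nasal = ((ang >= 135) & (ang < 225)).sum()       # swaps with temporal for the other eye
    temporal = ((ang < 45) | (ang >= 315)).sum()
    return (inferior + superior) / (nasal + temporal)
```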

3.4.3 GLCM

The GLCM, defined by Haralick, is a statistical approach which examines the image texture by considering the spatial relationship of pixels [35]. In order to achieve rotation invariance, the 13 GLCM features [35] are calculated at \(\theta = 0^{\circ}, 45^{\circ}, 90^{\circ}, 135^{\circ}\) [35, 45]. For better characterization of the fundus image, the best four subwindows [45], defined by the coordinates 1: (0, 0) to (127, 127), 2: (128, 128) to (255, 255), 3: (0, 128) to (255, 128) and 4: (0, 128) to (128, 255), are also used for calculating the GLCM features. This results in 104 features.
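A sketch of the GLCM computation using scikit-image, which exposes six Haralick-style properties; the remainder of the 13 features [35] would need an additional library (e.g., mahotas), and the subwindow list shown here is abbreviated to the two unambiguous coordinates.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

ANGLES = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]   # theta = 0, 45, 90, 135 degrees

def glcm_features(gray_u8):
    """GLCM descriptors at four orientations; gray_u8 must be uint8 (levels = 256)."""
    glcm = graycomatrix(gray_u8, distances=[1], angles=ANGLES,
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity",
             "energy", "correlation", "ASM"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

def windowed_glcm_features(gray_u8):
    """Whole-image features plus subwindows (two of the paper's four shown)."""
    windows = [((0, 0), (128, 128)), ((128, 128), (256, 256))]
    feats = [glcm_features(gray_u8)]
    for (r0, c0), (r1, c1) in windows:
        feats.append(glcm_features(gray_u8[r0:r1, c0:c1]))
    return np.concatenate(feats)
```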

3.4.4 Invariant moments feature

Invariant moments help in describing image texture and shape, and they make a significant contribution to image analysis and pattern recognition. Hu constructed seven invariant moments from the second- and third-order central moments according to the algebraic invariants [36]. In constructing the seven Hu invariant moments, the central moment eliminates the effect of image translation, normalization removes the influence of image scaling, and rotation invariance is achieved by polynomial construction. Hence, to obtain translation, rotation and scaling invariance, we extract the seven Hu invariant moments as defined in [36–38]. These seven Hu invariant moments are used as features for glaucoma classification.
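The seven Hu moments are available directly in OpenCV; in the sketch below, the signed-log rescaling is a common convention added here, not something the paper specifies.

```python
import cv2
import numpy as np

def hu_features(gray_u8):
    """Seven Hu invariant moments [36-38] of a uint8 grayscale image;
    the signed-log rescaling tames their very large dynamic range."""
    hu = cv2.HuMoments(cv2.moments(gray_u8)).ravel()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```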

3.4.5 Gabor features

In a 2D plane, the impulse response of a Gabor function [39] is given as (6)
$$ f(x,\;y) = \frac{1}{{2\pi \sigma_{x} \sigma_{y} }}\exp \left( { - \frac{1}{2} \left( {\frac{{x^{2} }}{{\sigma_{x}^{2} }} + \frac{{y^{2} }}{{\sigma_{y}^{2} }}} \right)} \right)\exp (2\pi j \mu x), $$
(6)
where \(\mu\) is the radial frequency of the Gabor function, and \(\sigma_{x}\) and \(\sigma_{y}\) are the standard deviations of the Gaussian envelope along the x- and y-axes, respectively. Filter banks \(f_{pq} (x,\;y)\) are created by rotating (orientation) and dilating (scale) Eq. (6). Each Gabor filter has a real part and an imaginary part, which are stored in masks of size \( M \times M\); M is usually chosen odd in order to have a symmetric region. Scales are denoted \(p = 1, \ldots, S\) and orientations \(q = 1, \ldots, L\). In the proposed method, \(S = 8\) scales, \(L = 8\) orientations and \(M = 27\) are considered. For a given ROI \(I (x, \;y)\), the filtered image \(I_{pq} (x,\;y)\) is obtained by applying the filter bank \(f_{pq} (x,\;y)\) to each segmented window of the ROI as shown in (7):
$$\begin{aligned} I_{pq} (x,\;y) & = \left\{ \left[ {f_{pq} (x,\;y)_{{{\text{real}}}} * I (x, \;y)} \right]^{2}\right. \\ & \quad \left. + \left[ {f_{pq} (x,\;y)_{{{\text{imaginary}}}} * I (x, \;y)} \right]^{2} \right\}^{1/2},\end{aligned} $$
(7)
where * denotes the 2D convolution operation.
The Gabor features \({\text{Gabor}}\;{\text{Features}}_{pq}\) are obtained as the average output of \(I_{pq} (x,\;y)\) over the \(S \times L\) (8 × 8 = 64) filters for each segmented window. From these, three features, namely (i) the maximum of the Gabor features (\(G_{\max}\)), (ii) the minimum of the Gabor features (\(G_{\min}\)) and (iii) the range \((G_{\max } - G_{\min })\), are used to represent the texture, since they are rotation invariant.
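A sketch of the Gabor feature triple using scikit-image's gabor filter; the radial-frequency spacing across scales is an assumption, since the paper defines its bank by dilating and rotating Eq. (6) with M = 27 masks rather than by explicit frequencies.

```python
import numpy as np
from skimage.filters import gabor

def gabor_triple(roi, n_scales=8, n_orients=8):
    """Mean Gabor magnitude over an 8 x 8 scale/orientation bank (64 values),
    reduced to the rotation-invariant triple (Gmax, Gmin, Gmax - Gmin)."""
    means = []
    for p in range(n_scales):
        freq = 0.05 * 2 ** (p / 2)             # hypothetical radial frequencies
        for q in range(n_orients):
            real, imag = gabor(roi, frequency=freq, theta=q * np.pi / n_orients)
            means.append(np.hypot(real, imag).mean())   # magnitude, as in Eq. (7)
    g = np.asarray(means)
    return g.max(), g.min(), g.max() - g.min()
```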

3.4.6 Texture directionality feature extracted from N + 1 directional difference of Gaussian filters

The Tamura directionality equation is used to compute the texture directionality [41]. Unlike the existing Tamura directionality feature which uses the Prewitt operator for edge detection, the proposed method uses difference of Gaussian for edge detection. The use of difference of Gaussian produces a sharpened image with edges having increased contrast when compared with the Prewitt operator.
Directional filters are used as a prominent descriptor of texture in image analysis, and they are used to smooth the images and retain the edge information [40]. The impulse response of a directional filter is given by (8):
$$ h_{{\theta_{i} }} (x,\;y) = G_{1} (x,\; y) - G_{2} (x, \;y), $$
(8)
where \(G_{k} (x,\;y)\) is a Gaussian filter given by (9):
$$ G_{k} (x,\;y) = C_{k} \exp \left\{ { - \frac{{x'^{2} }}{{2\sigma_{{x_{k} }}^{2} }} - \frac{{y'^{2} }}{{2\sigma_{{y_{k} }}^{2} }}} \right\}, $$
(9)
where \(C_{k}\) is a normalizing constant, and \((x', \;y')\) are related to \((x, \;y)\) by a rotation of angle \(\theta_{i}\) as in (10) and (11):
$$ x' = x \cos \theta_{i} + y \sin \theta_{i} , $$
(10)
$$ y' = y \cos \theta_{i} - x \sin \theta_{i} . $$
(11)
The values of the parameters \(\sigma_{{x_{k} }}\) and \(\sigma_{{y_{k} }}\) are chosen such that the second filter is directional and the first filter is isotropic or less directional. In order to capture a better enhancement of directional patterns in fundus images, a difference of Gaussians is chosen. The rotation parameter \(\theta\) determines the number of directional filters N: \(\theta\) varies from 0 to \(\pi\) in steps of \(\frac{\pi }{N}\), i.e., 12° for N = 15. It is observed that a step size of less than 12° increases the processing time without significant changes in the response image, while a step size greater than 12° gives incorrect values. The output of the filter bank is expressed as N + 1 filtered images \( I_{i}\); since \(\theta\) varies between 0 and \( \pi\) with a step size of 12°, there are 16 directional Gaussian filters. In order to synthesize the final image (F), a maximization is performed for each pixel as given in (12):
$$ F (x,\;y) = \max_{{\theta_{i} }} \left\{ {I_{i} (x,\;y)} \right\}. $$
(12)
The outputs after applying the difference of Gaussian filter are \(\Delta H \) and \(\Delta V\), which are further used in Tamura directionality equation given in (15).
Edge of a pixel is a vector, and it has a magnitude \((\Delta G)\) given in (13) and direction \((\theta )\) given in (14):
$$ \Delta G = \frac{{\left| {\Delta H} \right| + \left| {\Delta V} \right|}}{2} , $$
(13)
where (\(\left| {\Delta H} \right|\)) and (\(\left| {\Delta V} \right|\)) indicate the horizontal and vertical change in direction, respectively.
$$ \theta = \tan^{ - 1} \frac{{\left| {\Delta H} \right|}}{{\left| {\Delta V} \right|}} + \frac{\pi }{2}. $$
(14)
The histogram of directionality, \(H_D\), is obtained by quantizing \(\theta\) \((0 \le \theta < \pi )\) and counting the pixels whose magnitude is greater than a given threshold. If the histogram has n peaks, then \(w_i\) is the window of bins extending from the valley before the i-th peak to the valley after it, and \(\varphi_{i}\) is the angular position of the peak in \(w_i\). Letting \(H_{D} (\varphi )\) be the bin height at angular position \( \varphi\), the texture directionality \(D\), based on the sharpness of \(H_{D}\) [41], is calculated as given in (15):
$$ D = 1 - r \times n \times \mathop \sum \limits_{i = 1}^{n} \mathop \sum \limits_{{\varphi \in w_{i} }} (\varphi - \varphi_{i} )^{2} \times H_{D} (\varphi ). $$
(15)
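A compact sketch of the directionality feature, assuming SciPy; it uses a simple isotropic difference of Gaussians and handles only the single-peak case of Eq. (15) (n = 1), whereas the paper uses an oriented DoG bank and a valley search around each peak. The sigmas, magnitude threshold and normalizing factor r are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, prewitt

def tamura_directionality(gray, bins=16, mag_thresh=12.0, r=0.02):
    """Directionality from a difference-of-Gaussians edge image (Eqs. 13-15)."""
    img = gray.astype(float)
    dog = gaussian_filter(img, 1.0) - gaussian_filter(img, 3.0)   # illustrative sigmas
    d_h = prewitt(dog, axis=1)                 # horizontal change, Delta-H
    d_v = prewitt(dog, axis=0)                 # vertical change,   Delta-V
    mag = (np.abs(d_h) + np.abs(d_v)) / 2      # Eq. (13)
    theta = np.arctan2(d_v, d_h) % np.pi       # edge direction, folded to [0, pi)
    hist, edges = np.histogram(theta[mag > mag_thresh], bins=bins, range=(0, np.pi))
    h_d = hist / max(hist.sum(), 1)            # normalised directionality histogram
    centers = (edges[:-1] + edges[1:]) / 2
    peak = h_d.argmax()                        # single-peak case (n = 1) of Eq. (15)
    return 1 - r * 1 * np.sum((centers - centers[peak]) ** 2 * h_d)
```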

3.4.7 Color features

The change in color is significant for pattern analysis. Three color models, namely the RGB, CIE L*a*b* and HSV color spaces, are used to capture the color information present in an image, as they maintain color-difference ratios. There are no prerequisite criteria for choosing a color model; hence, the color features are extracted from all three. From the nine channels of the three color models, six statistical features, namely skewness, variance, standard deviation, mean, entropy and energy, are extracted per channel. This leads to 6 × 9 = 54 color features.
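The 54 color features follow mechanically from the three color spaces; in the sketch below, the histogram-based definitions of entropy and energy are assumptions, as the paper does not give exact formulas.

```python
import numpy as np
from scipy.stats import entropy, skew
from skimage.color import rgb2hsv, rgb2lab

def channel_stats(channel):
    """Skewness, variance, standard deviation, mean, entropy and energy."""
    flat = channel.ravel().astype(np.float64)
    hist, _ = np.histogram(flat, bins=256)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return [skew(flat), flat.var(), flat.std(), flat.mean(),
            entropy(p), np.sum(p ** 2)]        # histogram-based entropy and energy

def color_features(rgb_u8):
    """6 statistics x 9 channels (RGB, CIE L*a*b*, HSV) = 54 features."""
    rgb01 = rgb_u8 / 255.0
    channels = [rgb_u8[..., i] for i in range(3)]
    channels += [rgb2lab(rgb01)[..., i] for i in range(3)]
    channels += [rgb2hsv(rgb01)[..., i] for i in range(3)]
    return np.concatenate([channel_stats(c) for c in channels])
```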

3.5 Classification

The extracted features are used by the classifier to predict the glaucomatous versus normal class. Different classifiers make different errors; to overcome this, an ensemble of classifiers can be created to give a more accurate decision. The proposed method overcomes these errors by creating ensembles of classifiers using dynamic selection techniques [43]. Dynamic selection methods either select an ensemble of competent classifiers, termed dynamic ensemble selection (DES), or a single classifier, termed dynamic classifier selection (DCS) [42, 44].
In the proposed method, instead of selecting one single classifier for the whole dataset, dynamic selection is preferred because it dynamically selects the most suitable classifier from a pool of classifiers for every test sample. This makes the classification more flexible and efficient, since each test sample has a different pattern. Hence, dynamic selection is well suited to handling imbalanced data and finding patterns in biomedical images. Dynamic selection-based classification also reduces the risk of overfitting and improves generalization.
The main consideration in dynamic selection is the hyperparameters, which must be chosen appropriately, as they play a crucial role in the classification accuracy.
For classification, two models are created: the first by a homogeneous ensemble of random forest classifiers and the second by an ensemble of heterogeneous classifiers. In the first classification model, a pool of 100 classifiers is considered, forming a homogeneous ensemble of random forest classifiers. Random forest classifiers are chosen since the use of random samples makes them more diverse and improves predictive performance [46, 47]. In order to train a random forest to be used as the pool of classifiers, the maximum depth of the trees is set to 5 by experimenting with different values, so that probabilities can be estimated. In the second classification model, the system generates a pool of heterogeneous classifiers built by bagging and composed of different classification models, namely perceptron, Gaussian naïve Bayes, k-NN, Gaussian SVM and decision trees. Diversity is achieved through the intrinsic properties of each classifier [42, 48]. For these two classification models, four dynamic selection techniques, namely overall local accuracy (OLA), multiple classifier behavior (MCB), meta-learning for dynamic ensemble selection (META-DES) and dynamic ensemble selection performance (DES-P) [42, 48, 49], are applied for the selection of the classifiers.
The OLA and MCB techniques belong to DCS method. The META-DES and DES-P techniques belong to DES method [42, 48, 49]. For application of the dynamic selection techniques, the region of competence is determined as the set of nearest neighbors of the test sample in the training samples [42, 4850]. The appropriate size of neighborhood is decided by experimenting on one dataset by considering several dynamic selection techniques, and the best value is reported [42, 4850]. Hence, the proposed method uses five nearest neighbors of the test sample from the training set to precisely define the region of competence for the test sample.
In OLA, for each base classifier, the level of competence is computed as its accuracy of classification obtained in its region of competence and the classifier having the highest level of competence is selected in order to classify the test sample.
In MCB, the behavior knowledge space is used to filter and preselect from the region of competence. Then, the competence of the base classifier in the resulting region of competence is computed as its accuracy of classification. A single classifier is used for classification of the test sample, if its competence level is higher than all the base classifiers present in the pool. Otherwise, majority voting is used for determining the class of the test sample.
In META-DES, five sets of meta-features are extracted for each base classifier, namely neighbor classification, posterior probability, overall local accuracy, output profile classification and classifier confidence [49]. The major advantage of meta-learning is that the meta-features encode multiple criteria, which helps in estimating the level of competence of the base classifier. The meta-classifier predicts whether a base classifier is competent enough to classify the input test sample. A multilayer perceptron with ten hidden neurons is used as the meta-classifier; it is trained on the meta-feature vectors in the training phase, and training is stopped if there is no improvement in performance for five consecutive epochs. The base classifiers considered competent by the meta-classifier are retained, and their outputs are aggregated by majority voting to estimate the class of the test sample. To handle ties, the class with the highest a posteriori probability is chosen.
In DES-P, the competence of each base classifier is estimated as the difference between its accuracy in the region of competence and the performance of a random classifier, i.e., a classification model which chooses each class with equal probability. The base classifiers with a higher level of competence are chosen, and their outputs are aggregated by majority voting to estimate the class of the test sample. A more detailed explanation of these selection techniques is found in [42, 48–50]. The two classification models are tested on the 171-dimensional feature vectors obtained from the KMC, RIM and Drishti databases. A minimal sketch of this classification setup is given below.
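All four dynamic selection techniques are implemented in the open-source DESlib library, so the first classification model can be sketched as follows; whether the authors used DESlib, and the exact split for the dynamic selection (DSEL) set, are assumptions here. Synthetic data stands in for the 171-dimensional feature vectors.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from deslib.dcs import OLA, MCB
from deslib.des import METADES, DESP

# Stand-in for the 171-dimensional feature vectors (1 = glaucoma, 0 = normal)
X, y = make_classification(n_samples=400, n_features=171, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
# Hold out part of the training data as the DSEL set (an assumed 50/50 split)
X_fit, X_dsel, y_fit, y_dsel = train_test_split(
    X_train, y_train, test_size=0.5, random_state=0)

# First model: homogeneous pool of 100 depth-5 trees (a random forest)
pool = RandomForestClassifier(n_estimators=100, max_depth=5,
                              random_state=0).fit(X_fit, y_fit)

# k = 5 neighbours define the region of competence, as in the paper
for name, ds in [("OLA", OLA(pool, k=5)), ("MCB", MCB(pool, k=5)),
                 ("META-DES", METADES(pool, k=5)), ("DES-P", DESP(pool, k=5))]:
    ds.fit(X_dsel, y_dsel)          # competence is estimated on the DSEL set
    print(name, "accuracy:", ds.score(X_test, y_test))
```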

4 Results

Seventy percent of the data is used for training and 30 percent for testing the classification models. The results reported are the average performance values of the classifiers over three iterations, using the classification metrics sensitivity, specificity and accuracy [17–27]. The definitions of these classification metrics are given in Eqs. (16)–(18).
$$ {\text{Sensitivity:}}\quad {\text{SE}} = \frac{{{\text{TP}}}}{{({\text{FN}} + {\text{TP}})}}, $$
(16)
$$ {\text{Specificity:}}\quad {\text{SP}} = \frac{{{\text{TN}}}}{{({\text{FP}} + {\text{TN}})}}, $$
(17)
$$ {\text{Accuracy:}}\quad {\text{ACC}} = \frac{{{\text{TN}} + {\text{TP}}}}{{\left( {{\text{FN}} + {\text{FP}} + {\text{TN}} + {\text{TP}}} \right)}}, $$
(18)
where TP = true positive, TN = true negative, FP = false positive and FN = false negative. For the classifiers, TP indicates that the classifier correctly predicts that the image is glaucoma, and TN indicates that the classifier correctly predicts that the image is normal. FP indicates that the classifier predicts glaucoma for an image whose correct label is normal, and FN indicates that the classifier predicts normal for an image whose correct label is glaucoma.
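These metrics follow directly from the confusion matrix; a small helper, assuming scikit-learn and the label convention 1 = glaucoma:

```python
from sklearn.metrics import confusion_matrix

def se_sp_acc(y_true, y_pred):
    """Sensitivity, specificity and accuracy per Eqs. (16)-(18);
    labels: 1 = glaucoma (positive), 0 = normal (negative)."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    acc = (tp + tn) / (tp + tn + fp + fn)
    return se, sp, acc
```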
The training sets for KMC, RIM and Drishti consist of 143, 27 and 49 glaucoma samples and 66, 59 and 22 normal samples, respectively. The test sets for KMC, RIM and Drishti consist of 62, 12 and 21 glaucoma samples and 29, 26 and 9 normal samples, respectively.
Results of classification for KMC dataset
Table 1 gives the results of classification using homogeneous ensemble of random forest using dynamic selection methods for KMC dataset. Table 2 gives the results of classification using the heterogeneous ensemble of classifiers using dynamic selection methods for KMC dataset.
Table 1
Performance parameters obtained using ensemble of random forest on KMC dataset

Dynamic selection method   SP (%)   SE (%)   ACC (%)
OLA                        96.15    96.87    96.66
MCB                        100      98.43    98.88
META-DES                   100      100      100
DES-P                      100      100      100

Bold values indicate the best result
Table 2
Performance parameters obtained using ensemble of heterogeneous classifiers on KMC dataset

Dynamic selection method   SP (%)   SE (%)   ACC (%)
OLA                        88.46    96.87    94.44
MCB                        85       95.31    92.22
META-DES                   92       100      98
DES-P                      92.30    100      97.77

Bold values indicate the best result
Figure 7 illustrates the best receiver operating characteristics (ROC) curve for the best result obtained by ensemble of random forest using META-DES and DES-P dynamic selection method for KMC dataset.
Results of classification for RIM dataset
Table 3 gives the results of classification using homogeneous ensemble of random forest using dynamic selection methods for RIM dataset. Table 4 gives the results of classification using the heterogeneous ensemble of classifiers using dynamic selection methods for RIM dataset.
Table 3
Performance parameters obtained using ensemble of random forest on RIM dataset

Dynamic selection method   SP (%)   SE (%)   ACC (%)
OLA                        100      92.85    97.36
MCB                        96       91.66    95
META-DES                   100      92.85    97.86
DES-P                      100      85.77    92.10

Bold values indicate the best result
Table 4
Performance parameters obtained using ensemble of heterogeneous classifiers on RIM dataset

Dynamic selection method   SP (%)   SE (%)   ACC (%)
OLA                        100      60.12    84
MCB                        100      64.28    86.84
META-DES                   100      60.22    84
DES-P                      100      80.22    92

Bold values indicate the best result
Figure 8 illustrates the ROC curve for the best result obtained by ensemble of random forest using META-DES and OLA dynamic selection method for RIM dataset.
Results of classification for Drishti dataset
Table 5 gives the results of classification using homogeneous ensemble of random forest using dynamic selection methods for Drishti dataset. Table 6 gives the results of classification using the heterogeneous ensemble of classifiers using dynamic selection methods for Drishti dataset.
Table 5
Performance parameters obtained using ensemble of random forest on Drishti dataset

Dynamic selection method   SP (%)   SE (%)   ACC (%)
OLA                        100      95.23    97
MCB                        90       95.23    93.54
META-DES                   90       100      97
DES-P                      90       95.23    94

Bold values indicate the best result
Table 6
Performance parameters obtained using ensemble of heterogeneous classifiers on Drishti dataset

Dynamic selection method   SP (%)   SE (%)   ACC (%)
OLA                        93.33    94.41    94.11
MCB                        90       100      97
META-DES                   82       100      94
DES-P                      80       100      90.32

Bold values indicate the best result
Figure 9 illustrates the ROC curve for the best result obtained by ensemble of random forest using META-DES and DES-P dynamic selection method for Drishti dataset.
From Tables 1, 2, 3, 4, 5 and 6, it can be observed that the two classification models developed with the four dynamic selection techniques produce good classification results. Figure 10 illustrates the best-performance frequency of the dynamic selection methods for the two classification models. As shown in Fig. 10, the classifier model found to be most suitable for classifying the normal and glaucoma classes is the ensemble of random forest with META-DES. This is because META-DES uses several information sources (meta-features) while performing the dynamic selection, and it therefore performs the best classification among all the dynamic selection methods. A stratified k-fold cross-validation with k = 10 is performed for the best result obtained with the META-DES technique using the ensemble of random forest; the average 10-fold cross-validation accuracy for the KMC, Drishti and RIM datasets is 93%, 90% and 91%, respectively. The area under the curve (AUC) gives the degree of separability between the normal and glaucoma classes; a good classification model has an AUC close to 1. The AUC for the best results obtained with META-DES using the ensemble of random forest for the KMC, Drishti and RIM datasets is 0.99, 0.99 and 0.94, respectively.
Generalization ability of the two classifier models
The generalization ability of the two classifier models is measured by concatenating the three datasets (KMC, RIM-ONE and Drishti) into a single dataset. Tables 7 and 8 present the results of classification using the ensemble of random forest and the ensemble of heterogeneous classifiers with the dynamic selection methods, respectively.
Table 7
Generalization ability of the classifier using ensemble of random forest classifiers with dynamic selection methods

Dynamic selection method   SP (%)   SE (%)   ACC (%)
OLA                        95.22    96.84    96.2
MCB                        92.06    96.88    95
META-DES                   95.23    97       96.77
DES-P                      92.06    96       94.3

Bold values indicate the best result
Table 8
Generalization ability of the classifier using ensemble of heterogeneous classifiers with dynamic selection methods

Dynamic selection method   SP (%)   SE (%)   ACC (%)
OLA                        87.77    92.63    87
MCB                        83.01    88.42    82.27
META-DES                   85.12    93       90
DES-P                      87.3     89.47    89

Bold values indicate the best result

4.1 Comparative analysis

A comparative analysis of glaucoma detection with the existing methods on the RIM and Drishti datasets is given in Tables 9 and 10, respectively. The best results reported for the proposed method on the RIM and Drishti datasets belong to the ensemble of random forest using the META-DES dynamic ensemble selection technique. Most of the methods reported in Tables 9 and 10 extract spatial information of the pixels based on intensity or texture; including clinical features based on the OD can further improve the detection accuracy. Also, under- and over-segmentation can greatly affect the clinical features. Extracting both domain (clinical) features and features based on the textural arrangement of pixels may give a more promising outcome than considering a single set. CNN models are useful in computer vision for solving problems such as image segmentation and classification, but they come with the disadvantage of requiring large training data, since training a CNN with only 50 images will not ensure that the model captures all the variability present in retinal fundus images; moreover, optimizing the network parameters to reduce computational complexity is an issue. A single classifier model attempts to generalize the complete test data with that particular model. By using an ensemble of classifiers with dynamic ensemble selection approaches, flexibility is achieved by dynamically redistributing multiple sets of classifiers for each test sample.
Table 9
Comparative analysis for glaucoma classification for RIM dataset

References   Method                                                    SP (%)   SE (%)   ACC (%)
[21]         Regional image feature model with SVM classifier            –        –       94
[27]         Hybrid method that uses swarm algorithm with SVM            –        –       95
[12]         Entropy sampling and CNN architecture                      95       92       94
[13]         Morphometric features for multistage CNN model              –        –       89.4
Proposed     Clinical, color and texture-based features with            100      93.85    97.86
             ensemble of classifier models using dynamic
             selection methods
Table 10
Comparative analysis for glaucoma classification for Drishti dataset

References   Method                                                    SP (%)   SE (%)   ACC (%)   AUC
[51]         Eight-layer CNN with overfeat and VGG-S architecture        –        –        –       0.763
[12]         Entropy sampling and CNN architecture                      95.60    92.30    94.10     –
[24]         Texture and shape features using SVM linear kernel          –        –       79.2     0.86
[13]         Morphometric features for multistage CNN model             89.4     89.5     88.9     0.82
[52]         Statistical ACM with structure priori                       –        –       89.01     –
Proposed     Clinical, color and texture-based features with            90       100      97       0.99
             ensemble of classifier models using dynamic
             selection methods

5 Conclusion

A framework for computer-based automated detection of glaucoma is developed. As a pre-processing step, the blood vessels are detected and segmented for accurately determining the region of interest. The use of statistical features for segmentation enhances the OD and OC regions, thereby reducing the background variation. The threshold for OD segmentation is obtained using a decision tree classifier, which increases the accuracy of OD segmentation even for images where the OD is surrounded by exudates. The determined threshold value is not specific to a particular dataset and results in efficient segmentation. Feature extraction includes both clinical and image texture features. For classification, two robust ensembles of classifiers using dynamic selection approaches (DCS and DES) are developed, and the classifiers with the highest competence level are used to detect the class of a test sample.
The proposed framework is developed on three glaucoma datasets, and its performance is illustrated by the evaluation parameters. The best results are obtained using the ensemble of random forest with the META-DES dynamic ensemble selection technique: on the hospital dataset, the average specificity, sensitivity and accuracy for glaucoma detection are each 100%. For the RIM-ONE dataset, the average specificity, sensitivity and accuracy are 100%, 93.85% and 97.86%, respectively. For the Drishti dataset, they are 90%, 100% and 97%, respectively.
The main reason META-DES performs better than the other dynamic selection methods is that the meta-classifier uses meta-features to determine the capability of the classifiers used in creating the ensemble.
A typical single-classifier strategy uses one classifier to generalize the full test dataset, and a general ensemble of classifiers without dynamic selection chooses one suitable classifier for the entire test data. In dynamic ensemble models, flexibility is provided by dynamically redistributing a collection of classifiers to each test sample: a suitable ensemble is selected for each test sample based on performance in its region of competence. This strategy accomplishes two goals: (i) it redistributes the ensemble of classifiers for each test sample, preventing a whole test set from being over-generalized by a single classifier; and (ii) the ensemble of classifiers is assigned to each test sample based on performance in its neighborhood, which aids the selection of a suitable ensemble for the test data. As a result, dynamic ensemble selection algorithms can improve the performance of classifiers.
The comparative analysis indicates that the developed method performs better than the methods reported in the literature for the respective datasets. Hence, the method can be deployed as a second opinion during glaucoma screening and in mass glaucoma detection.
The proposed classification method can be further implemented by using different prominent feature selection methods. Also, for comprehensive glaucoma analysis, automated glaucoma detection algorithms can be developed by using optical coherence tomography images. Several clinical features can also be explored for grading different stages of glaucoma.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Khurana, A.K.: Ophthalmology. New Age International, New Delhi (2007)
2. Ernest, P.J., Schouten, J.S., Beckers, H.J., Hendrikse, F., Prins, M.H., Webers, C.A.: Prediction of glaucomatous visual field progression using baseline clinical data. J. Glaucoma 25(2), 228–235 (2016)
3. Quigley, H.A.: The number of people with glaucoma worldwide in 2010 and 2020. Br. J. Ophthalmol. 90(3), 262–267 (2006)
5. De La Fuente-Arriaga, J.A., Felipe-Riverón, E.M., Garduño-Calderón, E.: Application of vascular bundle displacement in the optic disc for glaucoma detection using fundus images. Comput. Biol. Med. 47, 27–35 (2014)
6. Issac, A., Partha Sarathi, M., Dutta, M.K.: An adaptive threshold based image processing technique for improved glaucoma detection and classification. Comput. Methods Programs Biomed. 122, 229–244 (2015)
7. Soorya, M., Issac, A., Dutta, M.K.: An automated and robust image processing algorithm for glaucoma diagnosis from fundus images using novel blood vessel tracking and bend point detection. Int. J. Med. Inform. 110, 2–70 (2018)
8. Mittapalli, P.S., Kande, G.B.: Segmentation of optic disk and optic cup from digital fundus images for the assessment of glaucoma. Biomed. Signal Process. Control 24, 34–46 (2016)
9. Kausu, T., Gopi, V.P., Wahid, K.A., Doma, W., Niwas, S.I.: Combination of clinical and multiresolution features for glaucoma detection and its classification using fundus images. Biocybern. Biomed. Eng. 38(2), 329–341 (2018)
10. Soltani, A., Battikh, T., Jabri, I., Lakhoua, N.: A new expert system based on fuzzy logic and image processing algorithms for early glaucoma diagnosis. Biomed. Signal Process. Control 40, 366–377 (2018)
11. Chai, Y., Liu, H., Xu, J.: Glaucoma diagnosis based on both hidden features and domain knowledge through deep learning models. Knowl. Based Syst. 161, 147–156 (2018)
12. Zilly, J., Buhmann, J.M., Mahapatra, D.: Glaucoma detection using entropy sampling and ensemble learning for automatic optic cup and disc segmentation. Comput. Med. Imaging Graph. 55, 28–41 (2017)
13. Perdomo, O., Andrearczyk, V., Meriaudeau, F., Müller, H., González, F.A.: Glaucoma diagnosis from eye fundus images based on deep morphometric feature estimation. In: Computational Pathology and Ophthalmic Medical Image Analysis, Lecture Notes in Computer Science, pp. 319–327 (2018)
14. Sevastopolsky, A.: Optic disc and cup segmentation methods for glaucoma detection with modification of U-Net convolutional neural network. Pattern Recognit. Image Anal. 27(3), 618–624 (2017)
15. Thakur, N., Juneja, M.: Optic disc and optic cup segmentation from retinal images using hybrid approach. Expert Syst. Appl. 127(2), 308–322 (2019)
16. Cheng, J., Liu, J., Xu, Y., Yin, F., Wong, D.W.K., Tan, N.-M., Tao, D., Cheng, C.-Y., Aung, T., Wong, T.Y.: Superpixel classification based optic disc and optic cup segmentation for glaucoma screening. IEEE Trans. Med. Imaging 32(6), 1019–1032 (2013)
17. Singh, A., Dutta, M.K., Parthasarathi, M., Uher, V., Burget, R.: Image processing based automatic diagnosis of glaucoma using wavelet features of segmented optic disc from fundus image. Comput. Methods Programs Biomed. 124, 108–120 (2016)
18. Maheshwari, S., Pachori, R.B., Kanhangad, V., Bhandary, S.V., Acharya, U.R.: Iterative variational mode decomposition based automated detection of glaucoma using fundus images. Comput. Biol. Med. 88, 142–149 (2017)
19. Noronha, K.P., Rajendra Acharya, U., Prabhakar Nayak, K., Martis, R.J., Bhandary, S.V.: Automated classification of glaucoma stages using higher order cumulant features. Biomed. Signal Process. Control 10, 174–183 (2014)
20. Rajendra Acharya, U., Ng, E.Y.K., Eugene, L.W.J., Noronha, K.P., Min, L.C., Prabhakar Nayak, K., Bhandary, S.V.: Decision support system for the glaucoma using Gabor transformation. Biomed. Signal Process. Control 15, 18–26 (2015)
21. Haleem, M.S., Han, L., Hemert, J.V., Fleming, A., Pasquale, L.R., Silva, P.S., Song, B.J., Aiello, L.P.: Regional image features model for automatic classification between normal and glaucoma in fundus and scanning laser ophthalmoscopy (SLO) images. J. Med. Syst. 40(6), 132 (2016)
22. Dua, S., Acharya, U.R., Chowriappa, P., Sree, S.V.: Wavelet based energy features for glaucomatous image classification. IEEE Trans. Inf. Technol. Biomed. 16, 80–87 (2012)
23.
Zurück zum Zitat Raghavendra, U., Fujita, H., Bhandary, S.V., Gudigar, A., Tan, J.H., Acharya, U.R.: Deep convolution neural network for accurate diagnosis of glaucoma using digital fundus images. Inf. Sci. 441, 41–49 (2018)MathSciNetCrossRef Raghavendra, U., Fujita, H., Bhandary, S.V., Gudigar, A., Tan, J.H., Acharya, U.R.: Deep convolution neural network for accurate diagnosis of glaucoma using digital fundus images. Inf. Sci. 441, 41–49 (2018)MathSciNetCrossRef
24.
Zurück zum Zitat Gour, N., Khanna, P.: Automated glaucoma detection using GIST and pyramid histogram of oriented gradients (PHOG) descriptors. Pattern Recognit. Lett. 137, 3–11 (2020)CrossRef Gour, N., Khanna, P.: Automated glaucoma detection using GIST and pyramid histogram of oriented gradients (PHOG) descriptors. Pattern Recognit. Lett. 137, 3–11 (2020)CrossRef
25.
Zurück zum Zitat Akram, M.U., Tariq, A., Khalid, S., Javed, M.Y., Abbas, S., Yasin, U.U.: Glaucoma detection using novel optic disc localization, hybrid feature set and classification techniques. Australas. Phys. Eng. Sci. Med. 38(4), 643–655 (2015)CrossRef Akram, M.U., Tariq, A., Khalid, S., Javed, M.Y., Abbas, S., Yasin, U.U.: Glaucoma detection using novel optic disc localization, hybrid feature set and classification techniques. Australas. Phys. Eng. Sci. Med. 38(4), 643–655 (2015)CrossRef
26.
Zurück zum Zitat Mookiah, M.R.K., Acharya, U.R., Lim, C.M., Petznick, A., Suri, J.S.: Data mining technique for automated diagnosis of glaucoma using higher order spectra and wavelet energy features. Knowl. Based Syst. 33, 73–82 (2012)CrossRef Mookiah, M.R.K., Acharya, U.R., Lim, C.M., Petznick, A., Suri, J.S.: Data mining technique for automated diagnosis of glaucoma using higher order spectra and wavelet energy features. Knowl. Based Syst. 33, 73–82 (2012)CrossRef
27.
Zurück zum Zitat Raja, C., Gangatharan, N.: A Hybrid Swarm Algorithm for optimizing glaucoma diagnosis. Comput. Biol. Med. 63, 196–207 (2015)CrossRef Raja, C., Gangatharan, N.: A Hybrid Swarm Algorithm for optimizing glaucoma diagnosis. Comput. Biol. Med. 63, 196–207 (2015)CrossRef
28.
Zurück zum Zitat Sivaswamy, J., Krishnadas, S., Chakravarty, A., Joshi, G., Tabish, A.: A comprehensive retinal image dataset for the assessment of glaucoma for the optic nerve head analysis. JSM Biomed. Imaging Data Pap. 2(1), 1004 (2015) Sivaswamy, J., Krishnadas, S., Chakravarty, A., Joshi, G., Tabish, A.: A comprehensive retinal image dataset for the assessment of glaucoma for the optic nerve head analysis. JSM Biomed. Imaging Data Pap. 2(1), 1004 (2015)
29.
Zurück zum Zitat Fumero, F., Sigut, J., Alayon, S., Gonzalez-Hernandez, M., Gonzalez, M.: Interactive tool and database for optic disc and cup segmentation of stereo and monocular retinal fundus images. In: Short Papers Proceedings—WSCG, pp. 91–97 (2015) Fumero, F., Sigut, J., Alayon, S., Gonzalez-Hernandez, M., Gonzalez, M.: Interactive tool and database for optic disc and cup segmentation of stereo and monocular retinal fundus images. In: Short Papers Proceedings—WSCG, pp. 91–97 (2015)
30.
Zurück zum Zitat Esedoglu, S., Shen, J.: Digital inpainting based on the Mumford–Shah–Euler image model. Eur. J. Appl. Math. 13(04), 353–370 (2002)MathSciNetMATHCrossRef Esedoglu, S., Shen, J.: Digital inpainting based on the Mumford–Shah–Euler image model. Eur. J. Appl. Math. 13(04), 353–370 (2002)MathSciNetMATHCrossRef
31.
Zurück zum Zitat Quigley, H.A., Brown, A.E., Morrison, J.D., Drance, S.M.: The size and shape of the optic disc in normal human eyes. Arch. Ophthalmol. 108(1), 51–57 (1990)CrossRef Quigley, H.A., Brown, A.E., Morrison, J.D., Drance, S.M.: The size and shape of the optic disc in normal human eyes. Arch. Ophthalmol. 108(1), 51–57 (1990)CrossRef
32.
Zurück zum Zitat Mienye, I., Sun, Y., Wang, Z.: Prediction performance of improved decision tree-based algorithms: a review. Procedia Manuf. 35, 698–703 (2019)CrossRef Mienye, I., Sun, Y., Wang, Z.: Prediction performance of improved decision tree-based algorithms: a review. Procedia Manuf. 35, 698–703 (2019)CrossRef
33.
Zurück zum Zitat Pathan, S., Kumar, P., Pai, R., Bhandary, S.V.: Automated detection of optic disc contours in fundus images using decision tree classifier. Biocybern. Biomed. Eng. 40(1), 52–64 (2020)CrossRef Pathan, S., Kumar, P., Pai, R., Bhandary, S.V.: Automated detection of optic disc contours in fundus images using decision tree classifier. Biocybern. Biomed. Eng. 40(1), 52–64 (2020)CrossRef
34.
Zurück zum Zitat Celebi, M.E., Kingravi, H.A., Vela, P.A.: A comparative study of efficient initialization methods for the k-means clustering algorithm. Expert Syst. Appl. 40(1), 200–210 (2013)CrossRef Celebi, M.E., Kingravi, H.A., Vela, P.A.: A comparative study of efficient initialization methods for the k-means clustering algorithm. Expert Syst. Appl. 40(1), 200–210 (2013)CrossRef
35.
Zurück zum Zitat Haralick, R.M., Shanmugam, K., Dinstein, I.: Textural features for image classification. IEEE Trans. Syst. Man Cybern. SMC-3(6), 610–621 (1973)CrossRef Haralick, R.M., Shanmugam, K., Dinstein, I.: Textural features for image classification. IEEE Trans. Syst. Man Cybern. SMC-3(6), 610–621 (1973)CrossRef
36.
Zurück zum Zitat Hu, M.K.: Visual pattern recognition by moment invariants. IRE Trans. Inf. Theory 8(2), 179–187 (1962)MATHCrossRef Hu, M.K.: Visual pattern recognition by moment invariants. IRE Trans. Inf. Theory 8(2), 179–187 (1962)MATHCrossRef
37.
Zurück zum Zitat García-Floriano, A., Ferreira-Santiago, A., Camacho-Nieto, O., Yáñez-Márquez, C.: A machine learning approach to medical image classification: detecting age-related macular degeneration in fundus images. Comput. Electr. Eng. 75, 218–229 (2019)CrossRef García-Floriano, A., Ferreira-Santiago, A., Camacho-Nieto, O., Yáñez-Márquez, C.: A machine learning approach to medical image classification: detecting age-related macular degeneration in fundus images. Comput. Electr. Eng. 75, 218–229 (2019)CrossRef
38.
Zurück zum Zitat Wu, Z., Jiang, S., Zhou, X., Wang, Y., Zuo, Y., Wu, Z., Liang, L., Liu, Q.: Application of image retrieval based on convolutional neural networks and Hu invariant moment algorithm in computer telecommunications. Comput. Commun. 150, 729–738 (2020)CrossRef Wu, Z., Jiang, S., Zhou, X., Wang, Y., Zuo, Y., Wu, Z., Liang, L., Liu, Q.: Application of image retrieval based on convolutional neural networks and Hu invariant moment algorithm in computer telecommunications. Comput. Commun. 150, 729–738 (2020)CrossRef
39.
Zurück zum Zitat Kumar, A., Pang, G.: Defect detection in textured materials using Gabor filters. IEEE Trans. Ind. Appl. 38(2), 425–440 (2002)CrossRef Kumar, A., Pang, G.: Defect detection in textured materials using Gabor filters. IEEE Trans. Ind. Appl. 38(2), 425–440 (2002)CrossRef
40.
Zurück zum Zitat Sun, X., Wang, J., Chen, R., Kong, L., She, M.F.: Directional Gaussian filter-based LBP descriptor for textural image classification. Procedia Eng. 15, 1771–1779 (2011)CrossRef Sun, X., Wang, J., Chen, R., Kong, L., She, M.F.: Directional Gaussian filter-based LBP descriptor for textural image classification. Procedia Eng. 15, 1771–1779 (2011)CrossRef
41.
Zurück zum Zitat Tamura, H., Mori, S., Yamawaki, T.: Textural features corresponding to visual perception. IEEE Trans. Syst. Man Cybern. 8(6), 460–473 (1978)CrossRef Tamura, H., Mori, S., Yamawaki, T.: Textural features corresponding to visual perception. IEEE Trans. Syst. Man Cybern. 8(6), 460–473 (1978)CrossRef
42.
Zurück zum Zitat Britto, A.S., Sabourin, R., Oliveira, L.E.: Dynamic selection of classifiers—a comprehensive review. Pattern Recognit. 47(11), 3665–3680 (2014)CrossRef Britto, A.S., Sabourin, R., Oliveira, L.E.: Dynamic selection of classifiers—a comprehensive review. Pattern Recognit. 47(11), 3665–3680 (2014)CrossRef
43.
Zurück zum Zitat Duin, R.P.W., Tax, D.M.J.: Experiments with classifier combining rules. Multiple Classifier Systems Lecture Notes in Computer Science, pp. 16–29 (2000) Duin, R.P.W., Tax, D.M.J.: Experiments with classifier combining rules. Multiple Classifier Systems Lecture Notes in Computer Science, pp. 16–29 (2000)
44.
Zurück zum Zitat Ko, A.H., Sabourin, R., Britto, J.A.S.: From dynamic classifier selection to dynamic ensemble selection. Pattern Recognit. 41(5), 1718–1731 (2008)MATHCrossRef Ko, A.H., Sabourin, R., Britto, J.A.S.: From dynamic classifier selection to dynamic ensemble selection. Pattern Recognit. 41(5), 1718–1731 (2008)MATHCrossRef
45.
Zurück zum Zitat Nanni, L., Brahnam, S., Ghidoni, S., Menegatti, E., Barrier, T.: Different approaches for extracting information from the co-occurrence matrix. PLoS One 8(12), 83554 (2013)CrossRef Nanni, L., Brahnam, S., Ghidoni, S., Menegatti, E., Barrier, T.: Different approaches for extracting information from the co-occurrence matrix. PLoS One 8(12), 83554 (2013)CrossRef
46.
Zurück zum Zitat Rodriguez, J., Kuncheva, L., Alonso, C.: Rotation forest: a new classifier ensemble method. IEEE Trans. Pattern Anal. Mach. Intell. 28(10), 1619–1630 (2006)CrossRef Rodriguez, J., Kuncheva, L., Alonso, C.: Rotation forest: a new classifier ensemble method. IEEE Trans. Pattern Anal. Mach. Intell. 28(10), 1619–1630 (2006)CrossRef
47.
Zurück zum Zitat Ozçift, A.: Random forests ensemble classifier trained with data resampling strategy to improve cardiac arrhythmia diagnosis. Comput. Biol. Med. 41(5), 265–271 (2011)CrossRef Ozçift, A.: Random forests ensemble classifier trained with data resampling strategy to improve cardiac arrhythmia diagnosis. Comput. Biol. Med. 41(5), 265–271 (2011)CrossRef
48.
Zurück zum Zitat Cruz, R.M., Sabourin, R., Cavalcanti, G.D.: Dynamic classifier selection: recent advances and perspectives. Inf. Fusion 41, 195–216 (2018)CrossRef Cruz, R.M., Sabourin, R., Cavalcanti, G.D.: Dynamic classifier selection: recent advances and perspectives. Inf. Fusion 41, 195–216 (2018)CrossRef
49.
Zurück zum Zitat Cruz, R.M., Sabourin, R., Cavalcanti, G.D.: META-DES. Oracle: meta-learning and feature selection for dynamic ensemble selection. Inf. Fusion 38, 84–103 (2017)CrossRef Cruz, R.M., Sabourin, R., Cavalcanti, G.D.: META-DES. Oracle: meta-learning and feature selection for dynamic ensemble selection. Inf. Fusion 38, 84–103 (2017)CrossRef
50.
Zurück zum Zitat Oliveira, D.V.R., Cavalcanti, G.D.C., Sabourin, R.: Online pruning of base classifiers for dynamic ensemble selection. Pattern Recognit 72, 44–58 (2017)CrossRef Oliveira, D.V.R., Cavalcanti, G.D.C., Sabourin, R.: Online pruning of base classifiers for dynamic ensemble selection. Pattern Recognit 72, 44–58 (2017)CrossRef
51.
Zurück zum Zitat Orlando, J.I., Prokofyeva, E., Fresno, M.D., Blaschko, M.B.: Convolutional neural network transfer for automated glaucoma identification. 12th International Symposium on Medical Information Processing and Analysis (2017) Orlando, J.I., Prokofyeva, E., Fresno, M.D., Blaschko, M.B.: Convolutional neural network transfer for automated glaucoma identification. 12th International Symposium on Medical Information Processing and Analysis (2017)
52.
Zurück zum Zitat Zhou, W., Yi, Y., Gao, Y., Dai, J.: Optic disc and cup segmentation in retinal images for glaucoma diagnosis by locally statistical active contour model with structure prior. Comput. Math. Methods Med. 2019, 1–16 (2019)MathSciNetMATHCrossRef Zhou, W., Yi, Y., Gao, Y., Dai, J.: Optic disc and cup segmentation in retinal images for glaucoma diagnosis by locally statistical active contour model with structure prior. Comput. Math. Methods Med. 2019, 1–16 (2019)MathSciNetMATHCrossRef
53.
Zurück zum Zitat Civit-Masot, J., Luna-Perejon, F., Vicente-Diaz, S., Rodriguez Corral, J.M., Civit, A.: TPU cloud-based generalized U-Net for Eye Fundus Image segmentation. IEEE Access 7, 142379–142387 (2019)CrossRef Civit-Masot, J., Luna-Perejon, F., Vicente-Diaz, S., Rodriguez Corral, J.M., Civit, A.: TPU cloud-based generalized U-Net for Eye Fundus Image segmentation. IEEE Access 7, 142379–142387 (2019)CrossRef
54.
Zurück zum Zitat Tulsani, A., Kumar, P., Pathan, S.: Automated segmentation of optic disc and optic cup for glaucoma assessment using improved UNET++ architecture. Biocybern. Biomed. Eng. 41(2), 819–832 (2021)CrossRef Tulsani, A., Kumar, P., Pathan, S.: Automated segmentation of optic disc and optic cup for glaucoma assessment using improved UNET++ architecture. Biocybern. Biomed. Eng. 41(2), 819–832 (2021)CrossRef
55.
Zurück zum Zitat Vostatek, P., Claridge, E., Uusitalo, H., Hauta-Kasari, M., Fält, P., Lensu, L.: Performance comparison of publicly available retinal blood vessel segmentation methods. Comput. Med. Imaging Graph. 55, 2–12 (2017)CrossRef Vostatek, P., Claridge, E., Uusitalo, H., Hauta-Kasari, M., Fält, P., Lensu, L.: Performance comparison of publicly available retinal blood vessel segmentation methods. Comput. Med. Imaging Graph. 55, 2–12 (2017)CrossRef
Metadata
Title: An automated classification framework for glaucoma detection in fundus images using ensemble of dynamic selection methods
Authors: Sumaiya Pathan, Preetham Kumar, Radhika M. Pai, Sulatha V. Bhandary
Publication date: 28.07.2023
Publisher: Springer Berlin Heidelberg
Published in: Progress in Artificial Intelligence, Issue 3/2023
Print ISSN: 2192-6352
Electronic ISSN: 2192-6360
DOI: https://doi.org/10.1007/s13748-023-00304-x
