Published in: Complex & Intelligent Systems 2/2018

Open Access 10.10.2017 | Original Article

EFS-MI: an ensemble feature selection method for classification

An ensemble feature selection method

Authors: Nazrul Hoque, Mihir Singh, Dhruba K. Bhattacharyya


Abstract

Feature selection methods have been used in various applications of machine learning, bioinformatics, pattern recognition and network traffic analysis. In high dimensional datasets, due to redundant features and the curse of dimensionality, a learning method takes a significant amount of time and the performance of the model decreases. To overcome these problems, feature selection techniques are used to select a subset of relevant and non-redundant features. However, most feature selection methods are unstable in nature, i.e., for different training datasets, a feature selection method selects different subsets of features, which yield different classification accuracies. In this paper, we provide an ensemble feature selection method that uses feature–class and feature-feature mutual information to select an optimal subset of features by combining multiple subsets of features. The method is validated using four classifiers, viz., decision trees, random forests, KNN and SVM, on fourteen UCI, five gene expression and two network datasets.

Introduction

Feature selection is used to select a subset of relevant and non-redundant features from a large feature space. In many applications of machine learning and pattern recognition, feature selection is used to select an optimal feature subset to train the learning model. While dealing with large datasets, it is often the case that some of the available information is redundant for the learning process. The process of identifying and removing the irrelevant features from the original feature space, so that the learning algorithm can focus on the relevant data that are useful for analysis and future prediction, is called feature (or variable, or attribute) selection. The main objectives of feature selection are: (i) to improve predictive accuracy, (ii) to remove redundant features and (iii) to reduce the time consumed during analysis. Rodriguez et al. [23] state that the performance of classification models improves when irrelevant and redundant features are eliminated from the original dataset. Moreover, a single feature selection method may generate a locally optimal or sub-optimal feature subset, for which a learning method compromises its performance. In an ensemble-based feature selection method, multiple feature subsets are combined, using a combination of feature rankings, to select an optimal subset of features that improves classification accuracy. In the first step of the ensemble method, a set of different feature selectors is chosen and each selector provides a sorted order of features. The second step aggregates the selected subsets of features using different aggregation techniques [28].
In the last two decades, a significant number of feature selection methods have been proposed that work differently using various metrics (such as probability distribution, entropy and correlation) [1, 3, 11, 12, 15, 22, 26, 29]. Feature selection methods are used to reduce the dimensionality of data for big data analytics [8, 16], gene expression data analysis and network traffic analysis [7, 13, 19]. For a given dataset, different feature selection algorithms may select different subsets of features, and hence the results obtained may have different accuracies. Therefore, ensemble-based feature selection methods are used to select a stable feature set that improves classification accuracy. However, the main issue that needs to be considered in designing an ensemble-based feature selection method is diversity [27]. Diversity may be achieved by using different datasets, feature subsets, or classifiers.

Types of feature selection

The filter approach [9] is used to select a subset of features from high dimensional datasets without using a learning algorithm. Filter-based feature selection methods are typically faster, but classifier accuracy is not ensured. In contrast, the wrapper approach [4] uses a learning algorithm to evaluate the accuracy of a selected subset of features during classification. Wrapper methods can give higher classification accuracy than filter methods for particular classifiers, but they are less cost effective. The embedded approach [9] performs feature selection during the process of training and is specific to the applied learning algorithm. The hybrid approach [14] is basically a combination of the filter and wrapper-based methods. Here, a filter approach selects a candidate feature set from the original feature set and the candidate feature set is refined by the wrapper approach. It exploits the advantages of both these approaches.

Discussion

As the dimensionality of data increases day by day, the difficulty of analyzing the data increases at the same pace. In that context, feature selection is becoming an essential requirement in many data mining applications. From our empirical study, it has been observed that the use of a single filter often fails to provide consistent performance on multiple datasets. Therefore, different filters can be ensembled to overcome the bias or limitations of the individual filters and to provide consistent performance over a wide range of applications.

Motivation and problem definition

In the literature, we found a large number of filter-based feature selection methods such as cfs, gain ratio, info gain and reliefF. The main problems of filter-based feature selection are that (i) most methods do not consider the redundancy among the selected features, (ii) a single filter-based method may be biased towards its selected feature subset and (iii) the prediction accuracy during classification is inconsistent. To overcome these problems, an ensemble of feature selection methods is introduced to select an optimal subset of non-redundant and relevant features, which improves prediction accuracy during classification. However, most ensemble-based feature selection methods do not consider redundancy among the selected features during feature selection. For example, Canedo et al. [5] tested an ensemble-based feature selection method where the combiner simply takes the union of different subsets of features generated by multiple filter methods. After the experimental analysis on 5–15 datasets, they claim that the accuracy of their ensemble method was degraded. We are motivated to improve the classification accuracies of classifiers using an ensemble method that uses feature–class and feature-feature mutual information to combine different subsets of features. Feature–class mutual information is used to select relevant features, whereas feature-feature mutual information is used to select non-redundant features. The union operation simply selects unique features from different subsets of features, but it does not consider features that are redundant in terms of prediction information, or irrelevant features. Hence, we propose an ensemble feature-selection method that selects only a subset of relevant and non-redundant features. We formulate our problem as follows:
For a given dataset D, the aim is first to select different subsets of features, say \(S_1, S_2, \ldots , S_n\), using n filter-based feature-selection methods. Next, these feature subsets are combined using feature–class and feature-feature mutual information to generate an optimal feature subset, say F, which contains only non-redundant, yet relevant features. The appropriateness of the feature subset is then validated using unbiased classifiers on benchmark datasets.

Paper organization

The rest of the paper is organized as follows. In the section "Related work", we briefly report related work. In the section "EFS-MI: the proposed MI-based ensemble feature selection", we explain our proposed method and related concepts. The experimental results are analyzed in the section "Experimental results", followed by conclusions and future work in the section "Conclusion and future work".

Related work

In the past two decades, a good number of ensemble feature selection methods have been proposed. Bagging [6] and boosting [25] are two of the most popular examples, which work on bootstrap samples of the training set. A bootstrap sample is a replica of the dataset created by randomly selecting k instances, with replacement, from the training set. Each replica is fed to a filter, and the predictions of the resulting classifiers are combined using simple voting. The boosting approach, on the other hand, samples the instances in proportion to their weights; an instance is weighed heavily if the previous model misclassified it. Olsson et al. [20] combine three commonly used rankers, namely document frequency threshold, information gain and chi-square, for text classification problems. Wang et al. [28] present an ensemble of six commonly used filter-based rankers, whereas Opitz [21] studies ensemble feature selection for neural networks, called genetic ensemble feature selection. Lee [18] and Rokach et al. [24] combine the outcomes of various non-ranker filter-based feature subset selection techniques.
Moreover, feature selection methods have been applied to classification problems in areas such as bioinformatics and signal processing [33]. Generally, a cost-based feature selection method is used to maximize the classification performance and minimize the classification cost associated with the features, which is a multi-objective optimization problem. Zhang et al. [32] propose a cost-based feature selection method using multi-objective particle swarm optimization (PSO). The method generates a Pareto front of nondominated solutions, that is, feature subsets, to meet different requirements of decision-makers in real-world applications. In order to enhance the search capability of the algorithm, a probability-based encoding technology and an effective hybrid operator, together with the ideas of the crowding distance, the external archive and the Pareto domination relationship, are applied to PSO. A binary bare bones particle swarm optimization (BPSO) method has also been proposed to select an optimal subset of features [31]. In this method, a reinforced memory strategy is designed to update the local leaders of particles to avoid the degradation of outstanding genes in the particles, and a uniform combination is proposed to balance the local exploitation and the global exploration of the algorithm. The experiments show that the algorithm is competitive in terms of both classification accuracy and computational performance. Canedo et al. [5] propose an ensemble of filters and classifiers for microarray data classification. Yu et al. [30] propose an ensemble based on GA wrapper feature selection, wherein they use three real-world datasets to show that the proposed method outperforms a single classifier employing all the features and a best individual classifier obtained from GA-based feature subset selection. Bay [2] claims that an ensemble of multiple classifiers is an effective technique for improving classification accuracy and proposes an algorithm that combines nearest neighbor classifiers using multiple feature subsets. Our study reveals that none of the existing ensemble methods is totally free from the limitations reported in the section "Motivation and problem definition".
Table 1
Dataset result

| DN | Dataset name | NoI | NoF | NoSF | SF (DRW) |
|----|---------------|------|------|------|--------------------------|
| 1 | Accute1 | 120 | 7 | 5 | 5,6,4,1,3 |
| 2 | Accute2 | 120 | 5 | 5 | 1,4,7,5,2 |
| 3 | Abalone | 4177 | 7 | 4 | 3,5,6,1 |
| 4 | Glass | 214 | 9 | 5 | 3,8,7,5,6 |
| 5 | Wine | 178 | 13 | 5 | 7,10,1,2,4 |
| 6 | Iris | 150 | 4 | 3 | 3,1,2 |
| 7 | Monk1 | 432 | 6 | 4 | 5,2,6,4 |
| 8 | Monk2 | 432 | 6 | 3 | 5,2,1 |
| 9 | Monk3 | 432 | 6 | 4 | 5,4,6,3 |
| 10 | Tic-Tac-Toe | 958 | 9 | 5 | 5,3,9,1,7 |
| 11 | Zoo | 101 | 16 | 5 | 13,4,8,3,1 |
| 12 | Diabetes | 768 | 9 | 4 | 2,8,5,4 |
| 13 | Car | 1728 | 7 | 5 | 6,4,1,2,5 |
| 14 | Heart-statlog | 270 | 14 | 5 | 13,12,8,11,10 |
| 15 | Breast-cancerW | 699 | 10 | 5 | 2,3,7,1,5 |
| 16 | Colon-cancer | 62 | 2000 | 5 | 765,1771,1972,1672,737 |
| 17 | Lung-cancer | 73 | 325 | 5 | 23,243,27,205,12 |
| 18 | Lymphoma | 45 | 4026 | 5 | 734,760,37,853,2659 |
| 19 | SRBCT | 83 | 2308 | 5 | 509,107,1105,1915,1916 |
| 20 | TUIDS | – | 20 | 5 | 17,16,19,18,6 |
| 21 | NSL-KDD | – | 43 | 5 | 5,3,4,6,42 |

NoI number of instances, NoF number of features, NoSF number of selected features, SF (DRW) selected features (decreasing rank wise)

EFS-MI: the proposed MI-based ensemble feature selection

To overcome these limitations, we introduce an ensemble feature selection method that uses mutual information to select an optimal subset of features. In information theory, the mutual information I(X;Y) is the reduction in uncertainty about X due to the knowledge of Y [17]. Mathematically, mutual information is defined as
$$\begin{aligned} I(X;Y)=\sum _{x,y} p(x,y)\log _2 \frac{p(x,y)}{p(x)\,p(y)} \end{aligned}$$
(1)
where p(x, y) is the joint probability distribution function of X and Y, and p(x) and p(y) are the marginal probability distribution functions of X and Y. We can also write
$$\begin{aligned} I(X;Y) = H(X) - H(X|Y) \end{aligned}$$
(2)
where H(X) is the marginal entropy, H(X|Y) is the conditional entropy, and H(X, Y) is the joint entropy of X and Y. Here, if H(X) represents the measure of uncertainty about a random variable, then H(X|Y) measures what Y does not say about X, i.e., the amount of uncertainty remaining about X after knowing Y. This substantiates the intuitive meaning of mutual information as the amount of information that knowing either variable provides about the other. In our method, the mutual information measure is used to calculate the information gain among features as well as between features and the class attribute. We define the marginal entropy and conditional entropy as follows.
Definition 1
Marginal entropy: For a random variable X with marginal distribution P(X), the associated marginal entropy is defined as
$$\begin{aligned} H(X)=\sum \limits _{i}P(x_i)\log _2\frac{1}{P(x_i)}. \end{aligned}$$
Definition 2
Conditional entropy: If X and Y are two discrete random variables, and P(x, y) and P(y|x) are the joint and conditional probability distributions respectively, then the conditional entropy associated with these distributions is defined as
$$\begin{aligned} H(Y|X)=-\sum \limits _{x\in X}\sum \limits _{y\in Y} P(x,y)\log _2 P(y|x). \end{aligned}$$
For better understanding, let us take an example: assume a language with the following letters and associated probabilities:
| X | p | t | k | a | i | u |
|---|---|---|---|---|---|---|
| p(X) | \(\frac{1}{8}\) | \(\frac{1}{4}\) | \(\frac{1}{8}\) | \(\frac{1}{4}\) | \(\frac{1}{8}\) | \(\frac{1}{8}\) |
Here, marginal entropy of X is:
$$\begin{aligned} H(X)&= -\sum \limits _{x\in \{p,t,k,a,i,u\}}P(x) \log _2 P(x)\\ &= -\left( 4\times \tfrac{1}{8}\times \log _2 \tfrac{1}{8} + 2\times \tfrac{1}{4}\times \log _2 \tfrac{1}{4}\right) =2.5 \end{aligned}$$
The joint probability of a vowel and a consonant occurring together is given in the following table:
| P(x,y) | p | t | k | P(y) |
|--------|---|---|---|------|
| a | \(\frac{1}{16}\) | \(\frac{3}{8}\) | \(\frac{1}{16}\) | \(\frac{1}{2}\) |
| i | \(\frac{1}{16}\) | \(\frac{3}{16}\) | 0 | \(\frac{1}{4}\) |
| u | 0 | \(\frac{3}{16}\) | \(\frac{1}{16}\) | \(\frac{1}{4}\) |
| P(x) | \(\frac{1}{8}\) | \(\frac{3}{4}\) | \(\frac{1}{8}\) |  |
Now, the conditional entropy of the vowels given the consonants can be computed as follows:
$$\begin{aligned} H(V|C)&= -\sum \limits _{x\in X}\sum \limits _{y\in Y} P(x,y)\log _2 P(y|x)\\ &= -\big (P(a,p)\log _2 P(a|p)+P(a,t)\log _2 P(a|t)\\ &\quad +P(a,k)\log _2 P(a|k)+P(i,p)\log _2 P(i|p)\\ &\quad +P(i,t)\log _2 P(i|t)+P(i,k)\log _2 P(i|k)\\ &\quad +P(u,p)\log _2 P(u|p)+P(u,t)\log _2 P(u|t)\\ &\quad +P(u,k)\log _2 P(u|k)\big )\\ &= -\left( \tfrac{1}{16}\log _2\tfrac{1/16}{1/8}+\tfrac{3}{8}\log _2\tfrac{3/8}{3/4}+\tfrac{1}{16}\log _2\tfrac{1/16}{1/8}+\tfrac{1}{16}\log _2\tfrac{1/16}{1/8}\right. \\ &\quad \left. +\tfrac{3}{16}\log _2\tfrac{3/16}{3/4}+0+0+\tfrac{3}{16}\log _2\tfrac{3/16}{3/4}+\tfrac{1}{16}\log _2\tfrac{1/16}{1/8}\right) \\ &= \tfrac{11}{8}=1.375 \end{aligned}$$
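The same numbers can be checked with a few lines of code. The Python sketch below is only an illustrative aid (it is not part of the original paper): it recomputes the marginal entropy of the letter distribution and the conditional entropy H(V|C) from the two tables above, reproducing the values 2.5 and 1.375.

```python
import math

# Marginal distribution of the six letters (first table above)
p_x = {'p': 1/8, 't': 1/4, 'k': 1/8, 'a': 1/4, 'i': 1/8, 'u': 1/8}
H_X = -sum(p * math.log2(p) for p in p_x.values())
print(H_X)  # 2.5

# Joint distribution P(vowel, consonant) (second table above)
p_vc = {('a', 'p'): 1/16, ('a', 't'): 3/8,  ('a', 'k'): 1/16,
        ('i', 'p'): 1/16, ('i', 't'): 3/16, ('i', 'k'): 0.0,
        ('u', 'p'): 0.0,  ('u', 't'): 3/16, ('u', 'k'): 1/16}
# Marginal P(consonant), i.e. the column sums of the joint table
p_c = {'p': 1/8, 't': 3/4, 'k': 1/8}

# H(V|C) = -sum_{v,c} P(v,c) * log2 P(v|c); zero-probability cells contribute 0
H_V_given_C = -sum(p * math.log2(p / p_c[c])
                   for (v, c), p in p_vc.items() if p > 0)
print(H_V_given_C)  # 1.375
```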
Table 2
Classification accuracies and average classification accuracies (ACA) for network intrusion datasets

Column groups: DT decision tree, RnF random forest, KNN; within each group SU symmetric uncertainty, GR gain ratio, IG info gain, RF reliefF, CS chi-square, EFS the proposed EFS-MI.

| D | Rank | DT:SU | DT:GR | DT:IG | DT:RF | DT:CS | DT:EFS | RnF:SU | RnF:GR | RnF:IG | RnF:RF | RnF:CS | RnF:EFS | KNN:SU | KNN:GR | KNN:IG | KNN:RF | KNN:CS | KNN:EFS |
|---|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|
| 20 | f=5 | 0.95 | 0.90 | 0.95 | 0.93 | 0.95 | 0.95 | 0.96 | 0.89 | 0.96 | 0.88 | 0.96 | 0.92 | 0.96 | 0.87 | 0.95 | 0.9 | 0.96 | 0.97 |
| 20 | f=3 | 0.96 | 0.95 | 0.95 | 0.9 | 0.95 | 0.96 | 0.97 | 0.95 | 0.97 | 0.95 | 0.97 | 0.93 | 0.93 | 0.96 | 0.92 | 0.87 | 0.96 | 0.96 |
| 20 | f=4 | 0.96 | 0.97 | 0.95 | 0.94 | 0.96 | 0.96 | 0.98 | 0.97 | 0.98 | 0.97 | 0.97 | 0.96 | 0.92 | 0.96 | 0.93 | 0.88 | 0.93 | 0.96 |
| 20 | f=6 | 0.97 | 0.97 | 0.97 | 0.95 | 0.97 | 0.97 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 | 0.97 | 0.92 | 0.94 | 0.93 | 0.88 | 0.92 | 0.95 |
| 20 | f=42 | 0.97 | 0.97 | 0.97 | 0.95 | 0.97 | 0.97 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 | 0.98 | 0.92 | 0.93 | 0.93 | 0.88 | 0.93 | 0.94 |
| 20 | f=33 | 0.97 | 0.97 | 0.97 | 0.95 | 0.97 | 0.97 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 | 0.98 | 0.92 | 0.92 | 0.92 | 0.90 | 0.92 | 0.95 |
| 20 | f=23 | 0.97 | 0.97 | 0.97 | 0.96 | 0.97 | 0.97 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 | 0.98 | 0.92 | 0.92 | 0.92 | 0.91 | 0.92 | 0.96 |
| 20 | f=34 | 0.97 | 0.97 | 0.97 | 0.97 | 0.97 | 0.98 | 0.98 | 0.98 | 0.98 | 0.98 | 0.98 | 0.98 | 0.92 | 0.92 | 0.92 | 0.92 | 0.92 | 0.96 |
| 20 | ACA | 0.95 | 0.92 | 0.95 | 0.94 | 0.97 | 0.96 | 0.97 | 0.94 | 0.96 | 0.91 | 0.96 | 0.96 | 0.92 | 0.91 | 0.95 | 0.88 | 0.96 | 0.95 |
| 21 | f=1 | 0.79 | 0.98 | 0.98 | 0.98 | 0.98 | 0.99 | 0.79 | 0.98 | 0.98 | 0.98 | 0.98 | 0.98 | 0.79 | 0.98 | 0.98 | 0.98 | 0.98 | 0.98 |
| 21 | f=17 | 0.99 | 0.99 | 0.98 | 0.98 | 0.98 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.98 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 |
| 21 | f=16 | 0.99 | 0.98 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.98 | 0.98 | 0.98 | 0.98 | 0.98 | 0.99 |
| 21 | f=19 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.98 | 0.98 | 0.98 | 0.98 | 0.98 | 0.99 |
| 21 | f=18 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.98 | 0.98 | 0.98 | 0.98 | 0.98 | 0.99 |
| 21 | ACA | 0.95 | 0.98 | 0.98 | 0.98 | 0.98 | 0.99 | 0.95 | 0.98 | 0.98 | 0.98 | 0.98 | 0.98 | 0.94 | 0.98 | 0.98 | 0.98 | 0.98 | 0.98 |

EFS-MI: Algorithm

Our proposal is an ensemble approach that combines the subsets obtained from various filters using feature–class and feature-feature mutual information, as shown in Fig. 1. The method combines the subsets of features selected by different feature selection methods using a greedy search approach. For a particular rank, if a common feature is chosen by all the selectors, then that feature is selected without using the greedy search technique and put into the optimal subset. Otherwise, the method computes feature–class and feature-feature mutual information and selects a feature that has maximum feature–class mutual information but minimum feature-feature mutual information. It removes redundancy among the selected features using feature-feature mutual information and selects relevant features using feature–class mutual information, and hence removes the bias induced by the individual feature-selection methods. The steps of the algorithm are shown in Algorithm 1.
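Algorithm 1 is not reproduced in this version of the text, so the following Python sketch is only our own illustration of the combination step just described (and of the combiner detailed in the next subsection). The helper functions feature_class_MI and feature_feature_MI, the variable names and the default threshold alpha = 0.75 are assumptions made for the example, not the authors' code.

```python
def efs_mi_combine(ranked_subsets, feature_class_MI, feature_feature_MI,
                   K, alpha=0.75):
    """Combine the ranked feature subsets produced by the individual filters.

    ranked_subsets    : list of lists, one ranked feature list per filter
    feature_class_MI  : dict feature -> MI(feature; class)
    feature_feature_MI: callable (f_i, f_j) -> MI(f_i; f_j)
    K                 : number of features wanted in the optimal subset
    alpha             : redundancy threshold on feature-feature MI (assumed 0.75)
    """
    optimal = []
    n_ranks = max(len(s) for s in ranked_subsets)
    for rank in range(n_ranks):
        if len(optimal) >= K:
            break
        # Features proposed by the filters at this rank
        candidates = {s[rank] for s in ranked_subsets if rank < len(s)}
        if len(candidates) == 1:
            # All filters agree: accept the common feature directly
            f = candidates.pop()
            if f not in optimal:
                optimal.append(f)
            continue
        # Otherwise pick the most class-relevant candidate that is not
        # redundant with the features already selected
        for f in sorted(candidates, key=lambda x: feature_class_MI[x],
                        reverse=True):
            if f in optimal:
                continue
            redundant = any(feature_feature_MI(f, g) >= alpha for g in optimal)
            if not redundant:
                optimal.append(f)
                break
    return optimal[:K]
```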

Function of a combiner

The combiner combines the selected subsets of features based on feature–class and feature-feature mutual information. First, the combiner considers the first-ranked features from all the selected subsets; if all the first-ranked features are the same, that common feature is picked as an optimal feature without computing feature–class and feature-feature mutual information. If the features are different, we first compute the feature–class mutual information for every feature and consider the feature that has the highest feature–class mutual information. For this feature we then compute the feature-feature mutual information with all the features that have already been selected as optimal features, and if the feature-feature mutual information of that feature with every already selected feature is less than a user-defined threshold, say \(\alpha \), then the feature is selected. The feature-feature mutual information is thus used to measure the redundancy of a non-selected feature with respect to the already selected features. We introduce an effective value for \(\alpha \) (\(\alpha =0.75\)) based on our exhaustive experimental study. In an ensemble approach, a 'combiner' plays an important role in ensembling various feature selection methods. Different methods such as majority voting, weighted voting, sum rule, mean rule, product rule, maximum rule, minimum rule, correlation and mutual information are used to build combiners. However, the combiner of the proposed ensemble feature selection method emphasizes reducing the redundancy among the selected subset of features by incorporating feature–class as well as feature-feature mutual information. To explain our method, the following definitions are useful.
Definition 3
Feature–class relevance: It is defined as the degree of feature–class mutual information between a feature f and a class label C.
Definition 4
Feature-feature relevance: It is defined as the degree of feature-feature mutual information between any two features, say \(f_i\) and \(f_j\) where \(f_i, f_j \in F\).
Definition 5
Redundant feature: Two non-selected features, say \(f_i\) and \(f_j\), are defined as redundant with respect to a given selected feature, say \(f_k\), if \(feature\_feature\_MI(f_i,f_k)=feature\_feature\_MI(f_j,f_k)\).
Lemma The subset of features identified by EFS-MI is non-redundant and relevant.
Proof
The proposed EFS-MI selects a feature \(f_i\) as relevant only when the feature–class relevance between \(f_i\) and a given class, say C, is high (as per Definition 3). Again, for a given selected feature \(f_k\), EFS-MI considers any two features, say \(f_i\) and \(f_j\), as redundant if \(feature\_feature\_MI(f_i,f_k)=feature\_feature\_MI(f_j,f_k)\) and excludes them from further consideration. Hence the proof. \(\square \)
A feature \(f_i \in F\) is identified by EFS-MI if \(f_i\) has a high feature–class relevance, based on mutual information with the class label C (as per Definition 3). Further, for any two given features \(f_j\) and \(f_k\), EFS-MI shall include only \(f_i\) in the final subset of features if the feature-feature mutual information of \(f_i\) is smaller than that of both \(f_j\) and \(f_k\), i.e., it shall discard both the redundant features \(f_j\) and \(f_k\) (as per Definition 5).
Lemma EFS-MI removes the bias of the individual feature selection methods during the ensemble.
Proof
The proposed method considers the following two situations to remove the bias of each individual feature selection method.
(i)
if a feature \(f_i\) is selected by all the individual feature selection methods, say \(M_j\), for rank \(R_k\), the feature is directly placed in the optimal subset at rank \(R_k\).
 
(ii)
if the feature \(f_i\) is not the same across all the individual feature selection methods \(M_j\) for a particular rank \(R_k\), the bias of the feature selection methods is removed using \(feature\_feature\_MI()\) and \(feature\_class\_MI()\). EFS-MI selects a feature \(f_i\) if its feature–class mutual information is comparatively high but its feature-feature mutual information with all the already selected features is comparatively low. \(\square \)
 
In Algorithm 1, the value of K is a user input that represents the number of selected features in the optimal set. The value of K is decided heuristically based on an empirical study. The proposed method first considers the highest ranked feature and computes the classification accuracy using a classifier. In the next iteration, it considers a subset of features that includes both the first-ranked and the second-ranked feature, then the first-, second- and third-ranked features, and so on. For each subset of features, the method computes the classification accuracy. If the subset that includes the K highest ranked features gives higher classification accuracy than the subset that includes the K + 1 highest ranked features, then the subset of K highest ranked features is considered as the optimal subset of features. From the empirical analysis, we found that the number of selected features is in the range 3–5.
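A minimal sketch of this selection of K, under our own assumptions: evaluate_accuracy is a placeholder for whatever classifier-plus-validation routine is used to score a feature subset, and here the shortest prefix of the ranking that reaches the best observed accuracy is kept.

```python
def choose_k(ranked_features, evaluate_accuracy, max_k=None):
    """Pick the prefix of the ranked feature list with the best accuracy.

    ranked_features   : features sorted in decreasing rank order
    evaluate_accuracy : callable taking a feature subset and returning
                        the classification accuracy obtained with it
    """
    max_k = max_k or len(ranked_features)
    best_k, best_acc = 1, -1.0
    for k in range(1, max_k + 1):
        acc = evaluate_accuracy(ranked_features[:k])
        if acc > best_acc:          # keep the smallest k reaching the peak
            best_k, best_acc = k, acc
    return ranked_features[:best_k], best_acc
```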

Complexity analysis

Let the size of each feature subset be K and the number of subsets obtained be n; then the complexity of the outer loop is \(O(K \times n)\). If at any time the total number of features included in F is m, the loop inside the function \(feature\_feature\_MI()\) iterates O(m) times. Thus, the overall complexity of the above algorithm is approximately \(O(K\times n)\) + O(m). If K is very large, the complexity is almost linear, because the number of filters employed is small compared to the value of K. If the values of K and m are both very large, the overall complexity is again linear.

Experimental results

The proposed EFS-MI feature selection method is implemented in MATLAB 2008. We carried out the experiments on a workstation with 12 GB main memory, a 2.26 GHz Intel(R) Xeon processor and a 64-bit Windows 7 operating system. We also use the freely available Weka toolbox [10], in which many feature-selection algorithms are available.

Result analysis

The proposed EFS-MI feature selection method is evaluated on three different types of application data, viz., network security, UCI and gene expression datasets. A brief description of the datasets and the selected features is given in Table 1. We consider five different rank-based feature-selection methods, viz., Symmetric Uncertainty (SU), Gain Ratio (GR), Info Gain (IG), ReliefF (RF) and Chi Square (CS), and four different classifiers to find the classification accuracy. We compare the classification accuracy obtained on the subset of features selected by the proposed EFS-MI with the accuracy obtained on the subset selected by each individual feature selection method, using decision trees, random forest, KNN and SVM. To establish the effectiveness of our method, we also consider the average classification accuracy (ACA) found for a given dataset for each of the classifiers. The value of \(\alpha \) is set to 0.75 and the value of K depends on the accuracy obtained from the feature set.
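The experiments themselves were run with MATLAB and Weka; the sketch below is only a hypothetical scikit-learn rendition of this comparison protocol, assuming 10-fold cross-validation (the paper does not state the exact validation scheme) and default classifier settings. The ACA values in the tables are then obtained by averaging a selector's accuracies over the different numbers of selected features.

```python
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def compare_selectors(X, y, subsets, cv=10):
    """Accuracy of each classifier on each selected feature subset.

    subsets : dict mapping a selector name (e.g. 'SU', 'GR', 'EFS-MI')
              to the list of selected column indices of X (a numpy array).
    Returns a dict {(selector, classifier): mean cross-validated accuracy}.
    """
    classifiers = {
        'decision tree': DecisionTreeClassifier(),
        'random forest': RandomForestClassifier(),
        'KNN': KNeighborsClassifier(),
        'SVM': SVC(),
    }
    scores = {}
    for sel_name, cols in subsets.items():
        for clf_name, clf in classifiers.items():
            acc = cross_val_score(clf, X[:, cols], y, cv=cv,
                                  scoring='accuracy').mean()
            scores[(sel_name, clf_name)] = acc
    return scores
```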

Analysis on network intrusion datasets

We use the NSL-KDD and TUIDS datasets to validate the classification accuracy of our method using three classifiers, namely decision trees, random forest and KNN. We have chosen these three classifiers because of their (i) performance and (ii) diverse nature. On NSL-KDD, the proposed ensemble feature selection method gives 92–98% classification accuracy. As shown in Table 2, EFS-MI gives higher classification accuracy than all the other competing feature selection methods with the KNN and decision tree classifiers. However, the classification accuracy given by the proposed method with the random forest classifier is almost the same as that of the other compared feature selection methods. Similarly, on the TUIDS dataset, the classification accuracy of our method is higher than that of all the competing feature selection methods with the decision tree, random forest and KNN classifiers.

Analysis on gene expression datasets

As shown in Table 3, on the breast cancer and colon cancer datasets the proposed method yields higher classification accuracy with the decision tree, random forest, KNN and SVM classifiers than all the compared feature-selection methods. On the colon cancer dataset, all the classifiers give 65–88% classification accuracy; EFS-MI yields higher classification accuracy with the decision tree classifier than all the compared individual feature selection methods, and in case of random forests and KNN, EFS-MI also gives high classification accuracy. However, the method gives slightly poorer classification accuracy with the SVM classifier.
On the lung cancer dataset, our method gives better classification accuracy with decision trees than the other competing methods, except reliefF. With random forest, the method yields almost equivalent classification accuracy, whereas with KNN it yields better accuracy than all the other competing methods. However, we found slightly lower classification accuracy with the SVM classifier for our method.
We observed a wide range of classification accuracies, i.e., 49–97%, for the different classifiers using the various feature selection methods on the Lymphoma dataset. From the average classification accuracies found for the different feature selection methods using different classifiers, we observed that EFS-MI yields higher classification accuracy than the other feature selection methods with decision tree, random forest, KNN and SVM.
On the SRBCT dataset, all the classifiers give excellent classification accuracy, in the range between 76 and 100%. The accuracy with decision trees is better for the proposed algorithm than for the other feature selection algorithms, except reliefF. With random forest, the proposed method gives the highest classification accuracy among all the compared feature selection methods. Similarly, KNN and SVM give higher accuracy than all the competing feature selection methods except chi-square.
The proposed EFS-MI gives high classification accuracy on high dimensional datasets such as the TUIDS, NSL-KDD, SRBCT, Lymphoma, Lung Cancer and Colon Cancer datasets. We plot the maximum accuracy found against the number of features for the various datasets, as shown in Fig. 2. From the plotted graph, we observe that the maximum accuracy is obtained for a subset that contains 3–7 features.
Table 3
Classification accuracies and average classification accuracies (ACA) for gene expression datasets

Column groups: DT decision tree, RnF random forest, KNN, SVM; within each group SU symmetric uncertainty, GR gain ratio, IG info gain, RF reliefF, CS chi-square, EFS the proposed EFS-MI.

| D | Rank | DT:SU | DT:GR | DT:IG | DT:RF | DT:CS | DT:EFS | RnF:SU | RnF:GR | RnF:IG | RnF:RF | RnF:CS | RnF:EFS | KNN:SU | KNN:GR | KNN:IG | KNN:RF | KNN:CS | KNN:EFS | SVM:SU | SVM:GR | SVM:IG | SVM:RF | SVM:CS | SVM:EFS |
|---|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|
| 15 | f=2 | 0.94 | 0.93 | 0.94 | 0.94 | 0.93 | 0.95 | 0.97 | 0.96 | 0.96 | 0.97 | 0.96 | 0.97 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.97 |
| 15 | f=3 | 0.94 | 0.93 | 0.94 | 0.94 | 0.94 | 0.94 | 0.97 | 0.96 | 0.97 | 0.96 | 0.97 | 0.98 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.97 |
| 15 | f=7 | 0.94 | 0.93 | 0.94 | 0.94 | 0.94 | 0.95 | 0.97 | 0.96 | 0.96 | 0.97 | 0.96 | 0.97 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.97 |
| 15 | f=1 | 0.94 | 0.94 | 0.94 | 0.94 | 0.94 | 0.94 | 0.96 | 0.97 | 0.96 | 0.97 | 0.97 | 0.97 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.97 |
| 15 | f=5 | 0.94 | 0.94 | 0.94 | 0.94 | 0.94 | 0.94 | 0.96 | 0.97 | 0.96 | 0.97 | 0.97 | 0.97 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.97 |
| 15 | ACA | 0.94 | 0.93 | 0.94 | 0.94 | 0.93 | 0.94 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.97 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.97 |
| 16 | f=1 | 0.84 | 0.83 | 0.84 | 0.83 | 0.84 | 0.84 | 0.84 | 0.84 | 0.83 | 0.83 | 0.83 | 0.84 | 0.76 | 0.66 | 0.64 | 0.64 | 0.68 | 0.77 | 0.65 | 0.64 | 0.65 | 0.64 | 0.73 | 0.83 |
| 16 | f=11 | 0.79 | 0.79 | 0.76 | 0.75 | 0.76 | 0.78 | 0.82 | 0.81 | 0.82 | 0.83 | 0.84 | 0.85 | 0.79 | 0.80 | 0.75 | 0.77 | 0.79 | 0.80 | 0.85 | 0.84 | 0.87 | 0.87 | 0.84 | 0.85 |
| 16 | f=21 | 0.73 | 0.79 | 0.76 | 0.75 | 0.74 | 0.76 | 0.83 | 0.85 | 0.85 | 0.83 | 0.84 | 0.86 | 0.82 | 0.81 | 0.79 | 0.81 | 0.79 | 0.78 | 0.86 | 0.83 | 0.87 | 0.86 | 0.88 | 0.86 |
| 16 | f=51 | 0.74 | 0.76 | 0.79 | 0.80 | 0.80 | 0.77 | 0.84 | 0.83 | 0.85 | 0.86 | 0.86 | 0.86 | 0.84 | 0.85 | 0.82 | 0.83 | 0.80 | 0.85 | 0.84 | 0.83 | 0.86 | 0.87 | 0.85 | 0.86 |
| 16 | f=81 | 0.80 | 0.76 | 0.78 | 0.77 | 0.78 | 0.76 | 0.85 | 0.86 | 0.86 | 0.85 | 0.87 | 0.87 | 0.83 | 0.84 | 0.83 | 0.82 | 0.80 | 0.85 | 0.86 | 0.84 | 0.86 | 0.88 | 0.86 | 0.86 |
| 16 | ACA | 0.78 | 0.78 | 0.78 | 0.78 | 0.78 | 0.78 | 0.83 | 0.83 | 0.84 | 0.84 | 0.84 | 0.85 | 0.80 | 0.79 | 0.76 | 0.77 | 0.77 | 0.81 | 0.81 | 0.79 | 0.82 | 0.82 | 0.83 | 0.85 |
| 17 | f=1 | 0.48 | 0.42 | 0.49 | 0.47 | 0.38 | 0.48 | 0.47 | 0.38 | 0.47 | 0.45 | 0.37 | 0.45 | 0.34 | 0.32 | 0.34 | 0.35 | 0.30 | 0.40 | 0.45 | 0.33 | 0.41 | 0.45 | 0.45 | 0.44 |
| 17 | f=11 | 0.57 | 0.59 | 0.59 | 0.63 | 0.59 | 0.60 | 0.85 | 0.87 | 0.82 | 0.77 | 0.75 | 0.79 | 0.81 | 0.84 | 0.80 | 0.75 | 0.70 | 0.75 | 0.89 | 0.85 | 0.84 | 0.83 | 0.74 | 0.70 |
| 17 | f=31 | 0.57 | 0.54 | 0.58 | 0.57 | 0.57 | 0.60 | 0.84 | 0.85 | 0.84 | 0.86 | 0.87 | 0.87 | 0.81 | 0.82 | 0.82 | 0.81 | 0.83 | 0.87 | 0.87 | 0.86 | 0.88 | 0.87 | 0.86 | 0.73 |
| 17 | f=51 | 0.57 | 0.53 | 0.57 | 0.54 | 0.59 | 0.55 | 0.84 | 0.85 | 0.85 | 0.87 | 0.84 | 0.87 | 0.82 | 0.84 | 0.82 | 0.85 | 0.83 | 0.84 | 0.87 | 0.87 | 0.87 | 0.87 | 0.86 | 0.80 |
| 17 | f=91 | 0.60 | 0.57 | 0.55 | 0.60 | 0.58 | 0.58 | 0.86 | 0.87 | 0.85 | 0.86 | 0.85 | 0.87 | 0.84 | 0.84 | 0.86 | 0.87 | 0.84 | 0.86 | 0.87 | 0.88 | 0.86 | 0.85 | 0.86 | 0.86 |
| 17 | ACA | 0.55 | 0.53 | 0.55 | 0.56 | 0.54 | 0.56 | 0.77 | 0.76 | 0.76 | 0.76 | 0.73 | 0.77 | 0.72 | 0.73 | 0.72 | 0.72 | 0.7 | 0.74 | 0.79 | 0.75 | 0.77 | 0.77 | 0.75 | 0.70 |
| 18 | f=1 | 0.49 | 0.52 | 0.60 | 0.64 | 0.48 | 0.64 | 0.47 | 0.49 | 0.59 | 0.57 | 0.51 | 0.61 | 0.44 | 0.47 | 0.57 | 0.59 | 0.52 | 0.56 | 0.34 | 0.36 | 0.58 | 0.42 | 0.57 | 0.38 |
| 18 | f=101 | 0.64 | 0.57 | 0.61 | 0.61 | 0.66 | 0.69 | 0.69 | 0.67 | 0.63 | 0.70 | 0.67 | 0.81 | 0.65 | 0.65 | 0.58 | 0.56 | 0.67 | 0.70 | 0.76 | 0.78 | 0.75 | 0.79 | 0.76 | 0.83 |
| 18 | f=301 | 0.64 | 0.57 | 0.64 | 0.83 | 0.70 | 0.75 | 0.76 | 0.71 | 0.71 | 0.76 | 0.71 | 0.87 | 0.69 | 0.63 | 0.63 | 0.64 | 0.63 | 0.80 | 0.76 | 0.78 | 0.78 | 0.79 | 0.84 | 0.90 |
| 18 | f=401 | 0.74 | 0.71 | 0.84 | 0.80 | 0.67 | 0.70 | 0.79 | 0.78 | 0.79 | 0.77 | 0.76 | 0.91 | 0.63 | 0.67 | 0.66 | 0.66 | 0.64 | 0.87 | 0.84 | 0.81 | 0.81 | 0.76 | 0.86 | 0.94 |
| 18 | f=1201 | 0.82 | 0.63 | 0.77 | 0.68 | 0.71 | 0.78 | 0.88 | 0.88 | 0.86 | 0.87 | 0.88 | 0.95 | 0.72 | 0.73 | 0.74 | 0.70 | 0.70 | 0.85 | 0.84 | 0.88 | 0.88 | 0.88 | 0.86 | 0.97 |
| 18 | ACA | 0.66 | 0.6 | 0.69 | 0.71 | 0.64 | 0.71 | 0.71 | 0.70 | 0.71 | 0.73 | 0.70 | 0.83 | 0.62 | 0.63 | 0.63 | 0.63 | 0.63 | 0.75 | 0.70 | 0.72 | 0.76 | 0.72 | 0.77 | 0.80 |
| 19 | f=1 | 0.55 | 0.45 | 0.48 | 0.55 | 0.63 | 0.58 | 0.52 | 0.45 | 0.51 | 0.51 | 0.60 | 0.62 | 0.49 | 0.43 | 0.52 | 0.51 | 0.61 | 0.52 | 0.51 | 0.44 | 0.47 | 0.49 | 0.68 | 0.62 |
| 19 | f=21 | 0.85 | 0.83 | 0.84 | 0.89 | 0.85 | 0.84 | 0.99 | 0.99 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 | 0.99 | 0.97 | 0.98 | 0.96 | 0.98 | 0.99 | 0.96 | 0.97 | 0.97 | 0.98 | 0.98 |
| 19 | f=51 | 0.86 | 0.86 | 0.83 | 0.85 | 0.84 | 0.84 | 0.99 | 1.0 | 0.99 | 0.97 | 0.99 | 0.99 | 0.99 | 0.97 | 0.99 | 0.99 | 0.98 | 0.99 | 1.0 | 0.98 | 1.0 | 0.98 | 1.0 | 1.0 |
| 19 | f=71 | 0.86 | 0.84 | 0.84 | 0.84 | 0.83 | 0.84 | 1.0 | 0.99 | 0.99 | 0.99 | 0.99 | 1.0 | 0.99 | 0.98 | 0.99 | 0.99 | 1.0 | 1.0 | 1.0 | 0.98 | 1.0 | 0.99 | 1.0 | 1.0 |
| 19 | f=91 | 0.82 | 0.82 | 0.81 | 0.85 | 0.83 | 0.83 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 1.0 | 0.99 | 0.99 | 0.99 | 0.98 | 1.0 | 1.0 | 1.0 | 0.99 | 1.0 | 1.0 | 1.0 | 1.0 |
| 19 | ACA | 0.78 | 0.76 | 0.76 | 0.79 | 0.79 | 0.78 | 0.89 | 0.88 | 0.89 | 0.88 | 0.91 | 0.91 | 0.88 | 0.87 | 0.89 | 0.89 | 0.91 | 0.89 | 0.9 | 0.87 | 0.88 | 0.88 | 0.93 | 0.92 |

Analysis on UCI datasets

The proposed EFS-MI is compared with five filter-based feature selection methods, as shown in Table 4. In case of the Accute1, Accute2 and Abalone datasets, the classification accuracy of EFS-MI is 100% for features numbered 4, 4 and 5, respectively, for the classifiers viz. decision trees, random forests, KNN and SVM. For Accute1 and Accute2, we get similar classification accuracy in comparison with all the feature selection methods for decision trees, random forest, KNN and SVM, and SVM gives 100% classification accuracy for all the feature selection methods. In case of the Glass dataset, our method yields around 60–70% classification accuracy for the decision tree, random forest and KNN classifiers, whereas the classification accuracy with SVM is only 22–24%.
On the Wine dataset, EFS-MI gives better classification accuracy with decision trees, KNN and SVM. However, random forest yields a slightly lower classification accuracy for the proposed method. On the Iris dataset, the classification accuracy of the proposed method is slightly low with the decision tree, random forest, KNN and SVM classifiers. Similarly, on the Monk1 dataset, we observe that the proposed method yields higher accuracy than info gain and reliefF only with decision trees and random forest, whereas KNN gives a slightly lower classification accuracy for the proposed method. However, the SVM classifier gives better accuracy for the proposed method than for all the other competing feature-selection methods. Moving on to the Monk2 dataset, the proposed method achieves 67% accuracy with the decision tree, random forest and SVM classifiers, which is higher than the accuracy produced by the other feature-selection methods; KNN, however, yields a slightly lower classification accuracy for the proposed method. In case of the Monk3 dataset, the classification accuracy is around 78% for decision trees and random forests, whereas the accuracy found with KNN and SVM is 66–72%, which is lower than the accuracy achieved using the other employed methods. EFS-MI gives lower classification accuracy on the Iris, Monk1, Monk2 and Monk3 datasets because the dimensionality of these datasets is relatively low, which leaves less scope to establish the effectiveness of an ensemble approach.
Table 4
Classification accuracies and average classification accuracies (ACA) for UCI datasets

Column groups: DT decision tree, RnF random forest, KNN, SVM; within each group SU symmetric uncertainty, GR gain ratio, IG info gain, RF reliefF, CS chi-square, EFS the proposed EFS-MI.

| D | Rank | DT:SU | DT:GR | DT:IG | DT:RF | DT:CS | DT:EFS | RnF:SU | RnF:GR | RnF:IG | RnF:RF | RnF:CS | RnF:EFS | KNN:SU | KNN:GR | KNN:IG | KNN:RF | KNN:CS | KNN:EFS | SVM:SU | SVM:GR | SVM:IG | SVM:RF | SVM:CS | SVM:EFS |
|---|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|
| 1 | f=5 | 0.82 | 0.82 | 0.82 | 0.82 | 0.82 | 0.82 | 0.82 | 0.82 | 0.82 | 0.82 | 0.82 | 0.82 | 0.69 | 0.63 | 0.81 | 0.71 | 0.73 | 0.73 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 1 | f=6 | 0.91 | 0.91 | 0.91 | 0.78 | 0.91 | 0.91 | 0.91 | 0.91 | 0.91 | 0.80 | 0.91 | 0.91 | 0.87 | 0.87 | 0.86 | 0.83 | 0.89 | 0.89 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 1 | f=4 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 1 | f=1 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 1 | f=3 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 1 | ACA | 0.94 | 0.94 | 0.94 | 0.92 | 0.94 | 0.94 | 0.94 | 0.94 | 0.94 | 0.92 | 0.94 | 0.94 | 0.91 | 0.9 | 0.93 | 0.90 | 0.92 | 0.92 | 1 | 1 | 1 | 1 | 1 | 1 |
| 2 | f=1 | 0.91 | 0.91 | 0.91 | 0.83 | 0.91 | 0.91 | 0.91 | 0.91 | 0.91 | 0.83 | 0.91 | 0.91 | 0.82 | 0.86 | 0.87 | 0.74 | 0.86 | 0.86 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 2 | f=4 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 2 | f=7 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 2 | f=5 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 2 | f=2 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 2 | ACA | 0.97 | 0.97 | 0.97 | 0.95 | 0.97 | 0.97 | 0.97 | 0.97 | 0.97 | 0.95 | 0.97 | 0.97 | 0.95 | 0.96 | 0.96 | 0.93 | 0.96 | 0.96 | 1 | 1 | 1 | 1 | 1 | 1 |
| 3 | f=3 | 0.21 | 0.21 | 0.21 | 0.21 | 0.21 | 0.24 | 0.25 | 0.25 | 0.25 | 0.24 | 0.24 | 0.25 | 0.20 | 0.20 | 0.20 | 0.20 | 0.20 | 0.13 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.15 |
| 3 | f=5 | 0.21 | 0.21 | 0.21 | 0.21 | 0.21 | 0.20 | 0.24 | 0.24 | 0.25 | 0.25 | 0.24 | 0.24 | 0.20 | 0.20 | 0.20 | 0.20 | 0.20 | 0.19 | 0.14 | 0.14 | 0.15 | 0.15 | 0.14 | 0.15 |
| 3 | f=6 | 0.21 | 0.21 | 0.21 | 0.21 | 0.21 | 0.23 | 0.24 | 0.24 | 0.24 | 0.24 | 0.25 | 0.25 | 0.20 | 0.20 | 0.20 | 0.20 | 0.20 | 0.21 | 0.14 | 0.15 | 0.15 | 0.14 | 0.14 | 0.15 |
| 3 | f=1 | 0.21 | 0.21 | 0.21 | 0.21 | 0.21 | 0.23 | 0.24 | 0.24 | 0.24 | 0.24 | 0.25 | 0.25 | 0.20 | 0.20 | 0.20 | 0.20 | 0.20 | 0.20 | 0.15 | 0.15 | 0.15 | 0.14 | 0.15 | 0.15 |
| 3 | f=4 | 0.21 | 0.21 | 0.21 | 0.21 | 0.21 | 0.21 | 0.25 | 0.24 | 0.24 | 0.24 | 0.24 | 0.24 | 0.20 | 0.20 | 0.20 | 0.20 | 0.20 | 0.20 | 0.14 | 0.15 | 0.14 | 0.14 | 0.14 | 0.15 |
| 3 | ACA | 0.21 | 0.21 | 0.21 | 0.21 | 0.21 | 0.22 | 0.24 | 0.24 | 0.24 | 0.24 | 0.24 | 0.24 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.18 | 0.14 | 0.14 | 0.14 | 0.14 | 0.14 | 0.15 |
| 4 | f=3 | 0.42 | 0.42 | 0.55 | 0.42 | 0.68 | 0.43 | 0.42 | 0.38 | 0.51 | 0.42 | 0.78 | 0.41 | 0.36 | 0.43 | 0.57 | 0.37 | 0.72 | 0.34 | 0.33 | 0.15 | 0.35 | 0.34 | 0.37 | 0.24 |
| 4 | f=8 | 0.64 | 0.46 | 0.62 | 0.63 | 0.67 | 0.45 | 0.65 | 0.49 | 0.65 | 0.64 | 0.78 | 0.49 | 0.65 | 0.46 | 0.65 | 0.65 | 0.72 | 0.42 | 0.46 | 0.35 | 0.45 | 0.45 | 0.36 | 0.23 |
| 4 | f=7 | 0.65 | 0.54 | 0.63 | 0.65 | 0.68 | 0.59 | 0.69 | 0.54 | 0.67 | 0.69 | 0.79 | 0.65 | 0.66 | 0.53 | 0.68 | 0.66 | 0.73 | 0.61 | 0.49 | 0.46 | 0.50 | 0.49 | 0.36 | 0.24 |
| 4 | f=5 | 0.65 | 0.63 | 0.62 | 0.64 | 0.67 | 0.63 | 0.70 | 0.71 | 0.71 | 0.68 | 0.79 | 0.67 | 0.69 | 0.67 | 0.68 | 0.64 | 0.72 | 0.65 | 0.51 | 0.54 | 0.45 | 0.44 | 0.38 | 0.23 |
| 4 | f=6 | 0.65 | 0.64 | 0.65 | 0.66 | 0.68 | 0.64 | 0.73 | 0.72 | 0.72 | 0.71 | 0.79 | 0.71 | 0.67 | 0.66 | 0.67 | 0.69 | 0.72 | 0.66 | 0.48 | 0.53 | 0.48 | 0.52 | 0.36 | 0.23 |
| 4 | ACA | 0.60 | 0.53 | 0.61 | 0.6 | 0.67 | 0.59 | 0.63 | 0.56 | 0.65 | 0.62 | 0.78 | 0.58 | 0.60 | 0.55 | 0.65 | 0.60 | 0.72 | 0.53 | 0.45 | 0.40 | 0.44 | 0.44 | 0.36 | 0.23 |
| 5 | f=7 | 0.74 | 0.61 | 0.72 | 0.62 | 0.90 | 0.72 | 0.71 | 0.60 | 0.72 | 0.59 | 0.98 | 0.71 | 0.75 | 0.66 | 0.76 | 0.67 | 0.74 | 0.69 | 0.79 | 0.63 | 0.79 | 0.63 | 0.97 | 0.91 |
| 5 | f=10 | 0.77 | 0.78 | 0.86 | 0.77 | 0.90 | 0.89 | 0.81 | 0.80 | 0.88 | 0.80 | 0.98 | 0.94 | 0.82 | 0.82 | 0.69 | 0.82 | 0.76 | 0.92 | 0.83 | 0.83 | 0.89 | 0.83 | 0.97 | 0.90 |
| 5 | f=1 | 0.90 | 0.89 | 0.92 | 0.89 | 0.89 | 0.92 | 0.92 | 0.92 | 0.96 | 0.92 | 0.98 | 0.96 | 0.92 | 0.92 | 0.70 | 0.69 | 0.75 | 0.93 | 0.88 | 0.88 | 0.94 | 0.90 | 0.97 | 0.89 |
| 5 | f=2 | 0.92 | 0.92 | 0.92 | 0.89 | 0.89 | 0.92 | 0.96 | 0.96 | 0.96 | 0.92 | 0.97 | 0.92 | 0.71 | 0.71 | 0.71 | 0.69 | 0.74 | 0.92 | 0.94 | 0.94 | 0.94 | 0.93 | 0.97 | 0.89 |
| 5 | f=4 | 0.92 | 0.91 | 0.92 | 0.92 | 0.90 | 0.92 | 0.97 | 0.97 | 0.97 | 0.97 | 0.97 | 0.96 | 0.70 | 0.71 | 0.71 | 0.70 | 0.74 | 0.91 | 0.94 | 0.94 | 0.94 | 0.95 | 0.97 | 0.90 |
| 5 | f=5 | 0.91 | 0.91 | 0.91 | 0.91 | 0.90 | 0.92 | 0.97 | 0.97 | 0.97 | 0.97 | 0.98 | 0.97 | 0.71 | 0.71 | 0.71 | 0.71 | 0.74 | 0.82 | 0.95 | 0.94 | 0.95 | 0.94 | 0.97 | 0.90 |
| 5 | ACA | 0.86 | 0.83 | 0.87 | 0.83 | 0.89 | 0.88 | 0.89 | 0.87 | 0.91 | 0.86 | 0.97 | 0.91 | 0.76 | 0.75 | 0.71 | 0.71 | 0.74 | 0.86 | 0.88 | 0.86 | 0.90 | 0.86 | 0.97 | 0.89 |
| 6 | f=3 | 0.95 | 0.95 | 0.93 | 0.95 | 0.93 | 0.93 | 0.95 | 0.95 | 0.92 | 0.95 | 0.92 | 0.91 | 0.95 | 0.95 | 0.94 | 0.95 | 0.94 | 0.92 | 0.96 | 0.96 | 0.94 | 0.96 | 0.94 | 0.72 |
| 6 | f=1 | 0.95 | 0.95 | 0.95 | 0.94 | 0.95 | 0.93 | 0.95 | 0.96 | 0.96 | 0.96 | 0.96 | 0.93 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.91 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.72 |
| 6 | f=2 | 0.94 | 0.94 | 0.93 | 0.94 | 0.93 | 0.94 | 0.95 | 0.95 | 0.95 | 0.94 | 0.95 | 0.92 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.93 | 0.95 | 0.95 | 0.96 | 0.95 | 0.96 | 0.72 |
| 6 | ACA | 0.94 | 0.94 | 0.93 | 0.94 | 0.93 | 0.93 | 0.93 | 0.95 | 0.94 | 0.95 | 0.94 | 0.92 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.92 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.72 |
| 7 | f=5 | 0.75 | 0.75 | 0.75 | 0.40 | 0.85 | 0.75 | 0.75 | 0.75 | 0.75 | 0.40 | 0.95 | 0.75 | 0.69 | 0.66 | 0.69 | 0.49 | 0.78 | 0.68 | 0.66 | 0.66 | 0.66 | 0.45 | 0.66 | 0.67 |
| 7 | f=2 | 0.75 | 0.75 | 0.75 | 0.83 | 0.84 | 0.75 | 0.75 | 0.74 | 0.75 | 0.83 | 0.95 | 0.75 | 0.69 | 0.68 | 0.69 | 0.78 | 0.78 | 0.65 | 0.66 | 0.66 | 0.66 | 0.41 | 0.67 | 0.67 |
| 7 | f=6 | 1.0 | 1.0 | 0.75 | 1.0 | 0.84 | 0.75 | 0.98 | 0.98 | 0.75 | 0.98 | 0.96 | 0.75 | 1.0 | 1.0 | 0.67 | 1.0 | 0.78 | 0.69 | 0.66 | 0.66 | 0.66 | 0.66 | 0.67 | 0.67 |
| 7 | ACA | 0.83 | 0.83 | 0.75 | 0.74 | 0.84 | 0.75 | 0.82 | 0.82 | 0.75 | 0.73 | 0.95 | 0.75 | 0.79 | 0.78 | 0.68 | 0.75 | 0.78 | 0.67 | 0.66 | 0.66 | 0.66 | 0.50 | 0.66 | 0.67 |
| 8 | f=5 | 0.67 | 0.67 | 0.67 | 0.67 | 0.90 | 0.67 | 0.67 | 0.67 | 0.67 | 0.67 | 0.75 | 0.67 | 0.59 | 0.61 | 0.62 | 0.59 | 0.61 | 0.56 | 0.52 | 0.56 | 0.52 | 0.50 | 0.67 | 0.67 |
| 8 | f=2 | 0.67 | 0.67 | 0.67 | 0.67 | 0.91 | 0.67 | 0.67 | 0.67 | 0.67 | 0.67 | 0.74 | 0.67 | 0.59 | 0.61 | 0.59 | 0.61 | 0.62 | 0.58 | 0.51 | 0.52 | 0.51 | 0.48 | 0.67 | 0.67 |
| 8 | f=1 | 0.65 | 0.63 | 0.64 | 0.66 | 0.91 | 0.65 | 0.67 | 0.67 | 0.67 | 0.67 | 0.75 | 0.67 | 0.56 | 0.58 | 0.56 | 0.61 | 0.62 | 0.54 | 0.52 | 0.52 | 0.52 | 0.48 | 0.67 | 0.67 |
| 8 | ACA | 0.66 | 0.65 | 0.66 | 0.66 | 0.90 | 0.66 | 0.67 | 0.67 | 0.67 | 0.67 | 0.74 | 0.67 | 0.58 | 0.6 | 0.59 | 0.60 | 0.6 | 0.56 | 0.51 | 0.53 | 0.51 | 0.48 | 0.67 | 0.67 |
| 9 | f=5 | 0.80 | 0.80 | 0.77 | 0.77 | 1.0 | 0.77 | 0.80 | 0.80 | 0.77 | 0.77 | 0.98 | 0.77 | 0.78 | 0.80 | 0.72 | 0.71 | 0.87 | 0.68 | 0.79 | 0.80 | 0.63 | 0.63 | 0.80 | 0.71 |
| 9 | f=4 | 0.97 | 0.97 | 0.97 | 0.97 | 1.0 | 0.77 | 0.97 | 0.97 | 0.97 | 0.97 | 0.98 | 0.78 | 0.96 | 0.96 | 0.96 | 0.96 | 0.87 | 0.72 | 0.80 | 0.80 | 0.80 | 0.80 | 0.80 | 0.72 |
| 9 | f=6 | 1.0 | 1.0 | 1.0 | 0.97 | 1.0 | 0.78 | 0.97 | 0.97 | 0.97 | 0.97 | 0.98 | 0.78 | 1.0 | 1.0 | 1.0 | 0.96 | 0.87 | 0.68 | 0.80 | 0.80 | 0.80 | 0.80 | 0.80 | 0.73 |
| 9 | f=3 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.78 | 1.0 | 1.0 | 1.0 | 1.0 | 0.98 | 0.78 | 0.99 | 0.99 | 0.99 | 1.0 | 0.87 | 0.66 | 0.80 | 0.80 | 0.80 | 0.80 | 0.80 | 0.72 |
| 9 | ACA | 0.94 | 0.94 | 0.93 | 0.92 | 1 | 0.77 | 0.93 | 0.93 | 0.92 | 0.92 | 0.98 | 0.77 | 0.93 | 0.93 | 0.91 | 0.90 | 0.87 | 0.68 | 0.79 | 0.8 | 0.75 | 0.75 | 0.8 | 0.72 |
| 10 | f=5 | 0.85 | 0.85 | 0.84 | 0.85 | 0.85 | 0.70 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.70 | 0.74 | 0.75 | 0.75 | 0.75 | 0.75 | 0.53 | 0.65 | 0.65 | 0.65 | 0.65 | 0.65 | 0.65 |
| 10 | f=3 | 0.84 | 0.85 | 0.85 | 0.85 | 0.85 | 0.68 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.71 | 0.74 | 0.75 | 0.76 | 0.75 | 0.75 | 0.56 | 0.65 | 0.65 | 0.65 | 0.65 | 0.65 | 0.65 |
| 10 | f=9 | 0.85 | 0.85 | 0.85 | 0.84 | 0.85 | 0.72 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.72 | 0.75 | 0.76 | 0.75 | 0.75 | 0.76 | 0.64 | 0.65 | 0.65 | 0.65 | 0.65 | 0.65 | 0.65 |
| 10 | f=1 | 0.85 | 0.84 | 0.85 | 0.84 | 0.84 | 0.79 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.76 | 0.74 | 0.74 | 0.75 | 0.75 | 0.74 | 0.72 | 0.65 | 0.55 | 0.64 | 0.55 | 0.55 | 0.64 |
| 10 | f=7 | 0.85 | 0.85 | 0.85 | 0.85 | 0.85 | 0.81 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.82 | 0.75 | 0.75 | 0.74 | 0.75 | 0.75 | 0.81 | 0.65 | 0.65 | 0.65 | 0.65 | 0.65 | 0.65 |
| 10 | ACA | 0.84 | 0.84 | 0.84 | 0.84 | 0.84 | 0.74 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.74 | 0.74 | 0.75 | 0.75 | 0.75 | 0.75 | 0.65 | 0.65 | 0.63 | 0.64 | 0.63 | 0.63 | 0.64 |
| 11 | f=13 | 0.87 | 0.88 | 0.87 | 0.88 | 0.88 | 0.73 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.73 | 0.96 | 0.96 | 0.97 | 0.97 | 0.97 | 0.52 | 0.95 | 0.94 | 0.94 | 0.95 | 0.94 | 0.96 |
| 11 | f=4 | 0.87 | 0.87 | 0.88 | 0.87 | 0.88 | 0.87 | 0.95 | 0.94 | 0.95 | 0.95 | 0.95 | 0.82 | 0.97 | 0.97 | 0.97 | 0.97 | 0.96 | 0.82 | 0.94 | 0.94 | 0.94 | 0.94 | 0.93 | 0.96 |
| 11 | f=8 | 0.87 | 0.87 | 0.87 | 0.88 | 0.88 | 0.91 | 0.95 | 0.95 | 0.96 | 0.95 | 0.95 | 0.80 | 0.97 | 0.97 | 0.97 | 0.97 | 0.97 | 0.87 | 0.94 | 0.94 | 0.94 | 0.94 | 0.93 | 0.95 |
| 11 | f=3 | 0.88 | 0.88 | 0.87 | 0.88 | 0.87 | 0.91 | 0.95 | 0.95 | 0.96 | 0.95 | 0.95 | 0.85 | 0.97 | 0.96 | 0.97 | 0.97 | 0.96 | 0.87 | 0.93 | 0.94 | 0.95 | 0.94 | 0.94 | 0.95 |
| 11 | f=1 | 0.88 | 0.87 | 0.88 | 0.88 | 0.87 | 0.88 | 0.95 | 0.95 | 0.96 | 0.96 | 0.94 | 0.82 | 0.96 | 0.96 | 0.97 | 0.97 | 0.97 | 0.88 | 0.95 | 0.95 | 0.94 | 0.94 | 0.94 | 0.95 |
| 11 | ACA | 0.87 | 0.87 | 0.87 | 0.87 | 0.87 | 0.86 | 0.95 | 0.94 | 0.95 | 0.95 | 0.94 | 0.80 | 0.96 | 0.96 | 0.97 | 0.97 | 0.96 | 0.79 | 0.94 | 0.94 | 0.94 | 0.94 | 0.93 | 0.95 |
| 12 | f=2 | 0.70 | 0.70 | 0.70 | 0.70 | 0.70 | 0.71 | 0.76 | 0.76 | 0.76 | 0.76 | 0.76 | 0.71 | 0.67 | 0.67 | 0.67 | 0.67 | 0.67 | 0.64 | 0.77 | 0.77 | 0.77 | 0.77 | 0.77 | 0.75 |
| 12 | f=8 | 0.71 | 0.70 | 0.71 | 0.70 | 0.71 | 0.71 | 0.76 | 0.76 | 0.76 | 0.76 | 0.76 | 0.72 | 0.67 | 0.67 | 0.67 | 0.67 | 0.67 | 0.68 | 0.76 | 0.77 | 0.76 | 0.76 | 0.76 | 0.74 |
| 12 | f=5 | 0.70 | 0.70 | 0.70 | 0.70 | 0.70 | 0.71 | 0.76 | 0.76 | 0.76 | 0.76 | 0.76 | 0.75 | 0.68 | 0.67 | 0.68 | 0.68 | 0.68 | 0.68 | 0.76 | 0.76 | 0.76 | 0.76 | 0.76 | 0.74 |
| 12 | f=4 | 0.70 | 0.70 | 0.70 | 0.70 | 0.70 | 0.71 | 0.76 | 0.76 | 0.76 | 0.76 | 0.76 | 0.73 | 0.67 | 0.67 | 0.67 | 0.67 | 0.67 | 0.69 | 0.76 | 0.77 | 0.76 | 0.77 | 0.76 | 0.74 |
| 12 | ACA | 0.70 | 0.7 | 0.70 | 0.7 | 0.70 | 0.71 | 0.76 | 0.76 | 0.76 | 0.76 | 0.76 | 0.72 | 0.67 | 0.67 | 0.67 | 0.67 | 0.67 | 0.67 | 0.76 | 0.76 | 0.76 | 0.76 | 0.76 | 0.74 |
| 13 | f=6 | 0.95 | 0.951 | 0.95 | 0.95 | 0.95 | 0.95 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.97 | 0.78 | 0.78 | 0.78 | 0.78 | 0.78 | 0.79 | 0.65 | 0.65 | 0.65 | 0.65 | 0.65 | 0.70 |
| 13 | f=4 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.78 | 0.78 | 0.78 | 0.78 | 0.78 | 0.77 | 0.65 | 0.65 | 0.65 | 0.65 | 0.65 | 0.64 |
| 13 | f=1 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.78 | 0.78 | 0.78 | 0.78 | 0.78 | 0.79 | 0.65 | 0.65 | 0.65 | 0.65 | 0.65 | 0.65 |
| 13 | f=2 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.78 | 0.78 | 0.78 | 0.78 | 0.78 | 0.88 | 0.65 | 0.65 | 0.65 | 0.65 | 0.65 | 0.65 |
| 13 | f=5 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.78 | 0.78 | 0.78 | 0.78 | 0.78 | 0.90 | 0.65 | 0.65 | 0.65 | 0.65 | 0.65 | 0.93 |
| 13 | ACA | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.95 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.96 | 0.78 | 0.78 | 0.78 | 0.78 | 0.78 | 0.82 | 0.65 | 0.65 | 0.65 | 0.65 | 0.65 | 0.71 |
| 14 | f=13 | 0.76 | 0.76 | 0.75 | 0.75 | 0.76 | 0.77 | 0.75 | 0.75 | 0.75 | 0.75 | 0.82 | 0.75 | 0.75 | 0.68 | 0.69 | 0.74 | 0.58 | 0.61 | 0.76 | 0.76 | 0.76 | 0.75 | 0.83 | 0.76 |
| 14 | f=12 | 0.78 | 0.79 | 0.74 | 0.74 | 0.74 | 0.79 | 0.79 | 0.79 | 0.73 | 0.73 | 0.83 | 0.79 | 0.77 | 0.75 | 0.72 | 0.74 | 0.58 | 0.69 | 0.79 | 0.79 | 0.77 | 0.77 | 0.82 | 0.76 |
| 14 | f=8 | 0.85 | 0.77 | 0.85 | 0.75 | 0.76 | 0.77 | 0.85 | 0.80 | 0.85 | 0.75 | 0.82 | 0.72 | 0.84 | 0.77 | 0.83 | 0.73 | 0.57 | 0.69 | 0.80 | 0.80 | 0.80 | 0.77 | 0.83 | 0.76 |
| 14 | f=11 | 0.84 | 0.78 | 0.85 | 0.85 | 0.74 | 0.76 | 0.83 | 0.79 | 0.83 | 0.84 | 0.83 | 0.78 | 0.83 | 0.79 | 0.83 | 0.82 | 0.58 | 0.71 | 0.83 | 0.83 | 0.82 | 0.83 | 0.83 | 0.81 |
| 14 | f=10 | 0.80 | 0.73 | 0.80 | 0.83 | 0.76 | 0.77 | 0.81 | 0.79 | 0.81 | 0.82 | 0.83 | 0.81 | 0.79 | 0.75 | 0.77 | 0.80 | 0.58 | 0.72 | 0.84 | 0.83 | 0.84 | 0.84 | 0.83 | 0.81 |
| 14 | ACA | 0.80 | 0.76 | 0.79 | 0.78 | 0.75 | 0.77 | 0.80 | 0.78 | 0.79 | 0.77 | 0.82 | 0.77 | 0.79 | 0.74 | 0.76 | 0.76 | 0.57 | 0.68 | 0.80 | 0.80 | 0.79 | 0.79 | 0.82 | 0.78 |
As shown in Table 4, on the Tic-Tac-Toe dataset the experimental results establish that the performance of the proposed method is almost the same as that of the other feature selection methods when using the SVM classifier. However, decision trees, random forest and KNN show low classification accuracy for the proposed method. In a similar fashion, on the Zoo dataset, decision trees yield around 87% accuracy, which is almost the same as the accuracy found with the other feature selection methods. Similarly, SVM yields approximately 95% accuracy, which is better than the results obtained by the other feature selection methods employed in the ensemble. Random forests and KNN yield a slightly lower classification accuracy for our proposed method.
Moving on to the Diabetes dataset, we see that decision trees and KNN yield almost equivalent classification accuracy for the proposed method compared with the other feature selection methods. However, the proposed method shows a slightly lower classification accuracy with the random forest and SVM classifiers. With the Car dataset, the decision tree achieves 95% classification accuracy, which is equivalent to the results obtained for the other feature selection methods. Similarly, with random forest our method gives around 96% accuracy, which is almost the same as the accuracy found for the other feature selection methods. Moreover, KNN and SVM give higher classification accuracy for the proposed method than for the other feature selection methods. On the Heart dataset, the classification accuracy found for the proposed method is comparatively low with the random forest, KNN and SVM classifiers; the decision tree gives a better result for the proposed method only in comparison with the chi-square feature selection method.

Analysis based on average accuracy

From the analysis of the average accuracy plotted in Figs. 3 and 4, we found that on Accute1 and Accute2 our method gives classification accuracy similar to that of the competing feature selection methods with all four applied classifiers. On the Abalone dataset, EFS-MI yields better average classification accuracy than the other feature selection methods with decision trees, random forest and SVM; however, KNN gives lower average accuracy than the other methods. On the Wine dataset, the average classification accuracy found for EFS-MI with decision trees, random forest and SVM is better than that of all the compared feature selection methods except chi-square, and the KNN classifier shows higher accuracy than the other methods. Similarly, in case of the Iris, Monk1, Monk2, Monk3, Tic-Tac-Toe and Zoo datasets, the average accuracy found for the proposed method with the different classifiers is lower than that of the other compared feature selection methods; however, on Monk1, Monk2, Tic-Tac-Toe and Zoo, only the SVM classifier gives comparatively better accuracy for EFS-MI.
On the gene expression and network datasets, all four classifiers show better average classification accuracy for EFS-MI than for the competing feature selection methods. However, the random forest classifier shows a slightly lower classification accuracy on the NSL-KDD dataset, and on the Lung Cancer dataset the SVM classifier yields lower classification accuracy than the other feature selection methods.
Although the average classification accuracy of EFS-MI is better than that of the other feature selection methods, as shown in Figs. 3 and 4 the advantage of EFS-MI is not always obvious. This is because if the selected subset of features contains even a single redundant feature, the accuracy of the classifier may decrease, whereas the average classification accuracy is computed from the accuracies obtained for each individual feature. Therefore, there is a gap between the average classification accuracy obtained from each individual feature and the accuracy obtained from the selected subset of features taken together.

Conclusion and future work

In this paper, we have introduced an ensemble of feature selection methods called Ensemble Feature Selection using Mutual Information (EFS-MI), which combines subsets of features selected by different filters, viz., Info Gain, Gain Ratio, ReliefF, Chi-square and Symmetric Uncertainty, and yields an optimal subset of features. To evaluate the performance of the ensemble method, we have used different classifiers, viz., decision trees, random forests, KNN and SVM, on the UCI, network and gene expression datasets. The overall performance has been found to be excellent for all these datasets. From our average classification accuracy (ACA) analysis, it can be observed that the proposed EFS-MI mostly overcomes the local optimality problem of the individual filters, especially for high dimensional datasets. The quality of the features identified by EFS-MI in terms of relevance and non-redundancy has also been established theoretically. To remove the bias induced by a single classifier, an ensemble of classifiers using a soft computing approach is underway.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Abdullah S, Sabar NR, Nazri MZA, Ayob M (2014) An exponential monte-carlo algorithm for feature selection problems. Comput Ind Eng 67:160–167
2. Bay SD (1998) Combining nearest neighbor classifiers through multiple feature subsets. In: ICML, vol 98, pp 37–45. Citeseer
3. Bhattacharyya DK, Kalita JK (2013) Network anomaly detection: a machine learning perspective. CRC Press, Boca Raton
4.
5. Bolón-Canedo V, Sánchez-Maroño N, Alonso-Betanzos A (2012) An ensemble of filters and classifiers for microarray data classification. Pattern Recogn 45(1):531–539
6. Breiman L (1996) Bagging predictors. Mach Learn 24(2):123–140
7. Das D, Bhattacharyya DK (2012) Decomposition+: Improving-diversity for multiple sensitive attributes. In: Advances in computer science and information technology. Computer science and engineering, vol 131, pp 403–412
8. Fernández A, del Río S, Chawla NV, Herrera F (2017) An insight into imbalanced big data classification: outcomes and challenges. Complex Intell Syst 3(2):105–120
9. Guyon I, Elisseeff A (2003) An introduction to variable and feature selection. J Mach Learn Res 3:1157–1182
10. Hall M, Frank E, Holmes G, Pfahringer B, Reutemann P, Witten IH (2009) The weka data mining software: an update. ACM SIGKDD Explor Newslett 11(1):10–18
11. Hoque N, Bhattacharyya D, Kalita J (2014) Mifs-nd: a mutual information-based feature selection method. Expert Syst Appl 41(14):6371–6385
12. Hoque N, Ahmed H, Bhattacharyya D, Kalita J (2016) A fuzzy mutual information-based feature selection method for classification. Fuzzy Inf Eng 8(3):355–384
13. Hoque N, Bhattacharyya DK, Kalita JK (2016) Ffsc: a novel measure for low-rate and high-rate ddos attack detection using multivariate data analysis. Secur Commun Netw 9(13):2032–2041
14. Hsu HH, Hsieh CW, Lu MD (2011) Hybrid feature selection by combining filters and wrappers. Expert Syst Appl 38(7):8144–8150
15. Hu W, Choi KS, Gu Y, Wang S (2013) Minimum–maximum local structure information for feature selection. Pattern Recogn Lett 34(5):527–535
16. Kashyap H, Ahmed HA, Hoque N, Roy S, Bhattacharyya DK (2015) Big data analytics in bioinformatics: a machine learning perspective. arXiv preprint arXiv:1506.05101
17.
18. Lee K (2002) Combining multiple feature selection methods. In: Proceedings of MASPLAS'02, The Mid-Atlantic Student Workshop on Programming Languages and Systems, Pace University, April 19
19. Mira A, Bhattacharyya DK, Saharia S (2012) Rodha: robust outlier detection using hybrid approach. Am J Intell Syst 2(5):129–140
20. Olsson J, Oard DW (2006) Combining feature selectors for text classification. In: Proceedings of the 15th ACM international conference on information and knowledge management, pp 798–799
21. Opitz DW (1999) Feature selection for ensembles. In: AAAI/IAAI, pp 379–384
22. Pudil P, Novovičová J, Kittler J (1994) Floating search methods in feature selection. Pattern Recogn Lett 15(11):1119–1125
23. Rodríguez D, Ruiz R, Cuadrado-Gallego J, Aguilar-Ruiz J (2007) Detecting fault modules applying feature selection to classifiers. In: IEEE international conference on information reuse and integration, 2007, pp 667–672
24. Rokach L, Chizi B, Maimon O (2006) Feature selection by combining multiple methods. Springer, New York
25. Schapire RE (1999) A brief introduction to boosting. IJCAI 99:1401–1406
26. Swiniarski RW, Skowron A (2003) Rough set methods in feature selection and recognition. Pattern Recogn Lett 24(6):833–849
27. Wang H, Khoshgoftaar TM, Napolitano A (2012) Software measurement data reduction using ensemble techniques. Neurocomputing 92:124–132
28. Wang H, Khoshgoftaar TM, Napolitano A (2010) A comparative study of ensemble feature selection techniques for software defect prediction. In: 2010 Ninth international conference on machine learning and applications (ICMLA), pp 135–140
29. Xuhua Y, Furuhashi T, Obata K, Uchikawa Y (1996) Selection of features for signature verification using the genetic algorithm. Comput Ind Eng 30(4):1037–1045
30. Yu E, Cho S (2006) Ensemble based on ga wrapper feature selection. Comput Ind Eng 51(1):111–116
31. Zhang Y, Gong D, Hu Y, Zhang W (2015) Feature selection algorithm based on bare bones particle swarm optimization. Neurocomputing 148:150–157
32. Zhang Y, Gong DW, Cheng J (2017) Multi-objective particle swarm optimization approach for cost-based feature selection in classification. IEEE/ACM Trans Comput Biol Bioinf 14(1):64–75
33. Zhang L, Shan L, Wang J Optimal feature selection using distance-based discrete firefly algorithm with mutual information criterion. Neural Comput Appl 28(9):2795–2808
Metadata
Title: EFS-MI: an ensemble feature selection method for classification
Subtitle: An ensemble feature selection method
Authors: Nazrul Hoque, Mihir Singh, Dhruba K. Bhattacharyya
Publication date: 10.10.2017
Publisher: Springer Berlin Heidelberg
Published in: Complex & Intelligent Systems / Issue 2/2018
Print ISSN: 2199-4536
Electronic ISSN: 2198-6053
DOI: https://doi.org/10.1007/s40747-017-0060-x
