Abstract

Deep autoencoder neural networks have been widely used in image classification and recognition problems, including handwriting recognition, medical imaging, and face recognition. The overall performance of a deep autoencoder neural network mainly depends on the number of parameters used, the structure of the network, and the compatibility of its transfer functions. An inappropriate structural design, however, can degrade the performance of deep autoencoder neural networks. This paper presents a novel framework that integrates the Taguchi Method into a deep autoencoder based system without modifying the overall structure of the network. Several experiments are performed using data sets from different fields, namely network security and medicine. The results show that the proposed method is more robust than several well-known methods in the literature, outperforming them in most of the experiments. These results are therefore encouraging and verify the overall performance of the proposed framework.

1. Introduction

Machine learning (ML) is a popular branch of artificial intelligence (AI) that allows machines to acquire new skills and predict outcomes with high accuracy without being explicitly programmed. Deep learning (DL) is a newer form of ML that has recently been applied in many fields, from computer vision to high-dimensional data processing, and has achieved state-of-the-art results [1, 2]. Essentially, DL has made great progress on problems that resisted the efforts of the AI community for more than three decades. Notably, DL can produce accurate predictions with little manual engineering, something that conventional AI-based approaches cannot match. Owing to its flexible and generic structure, DL is expected to be applied in many more fields in the near future, and the development of innovative learning algorithms and new deep neural network structures will only accelerate this progress [3]. Recently, deep autoencoders, which rely on unsupervised learning algorithms, have shown state-of-the-art performance on various machine learning tasks [4]. They have been widely used in fields ranging from image recognition to computer networks. Lore et al. (2017) proposed a deep autoencoder based method to identify signal features in low-light images and to brighten such images without over-saturating the lighter regions in images with a high dynamic range [5]. K. Sun et al. proposed a variant of the stacked sparse denoising autoencoder trained on synthetic data, the generalized extreme learning machine autoencoder (GELM-AE), which adds an additional regularization term to the objective of the extreme learning machine autoencoder (ELM-AE) [6]. In [7], Yihui Xiong et al. trained an autoencoder network to encode and reconstruct a geochemical sample population with an unknown, complex multivariate probability distribution. In [8], Lyle D. Burgoon et al. trained autoencoders to predict estrogenic chemical substances (APECS). APECS consists of two deep autoencoder models that are less complex than the US EPA's method while performing at least as well, achieving accuracies of 91% versus 86% and 93% versus 93% on the in vitro and in vivo datasets used to validate the US EPA method. Chaoqun Hong et al. proposed a new pose recovery technique that focuses on multimodal feature extraction and a back-propagation deep neural network, using a multilayered deep neural network with nonlinear mapping [9]. In [10], Tzu-Hsi Song et al. focused on bone marrow trephine biopsy images and proposed a hybrid deep autoencoder (HDA) network with a Curvature Gaussian method for effective and accurate detection of bone marrow hematopoietic stem cells via related high-level feature correspondence. In [11], Yosuke Suzuki et al. proposed a collaborative filtering based recommendation algorithm that exploits the variation of user similarities derived from different layers of stacked denoising autoencoders. Yu-Dong Zhang et al. presented a novel computer-aided detection system based on susceptibility-weighted imaging, an application area that has grown in recent years; unsupervised feature learning was performed with a sparse autoencoder (SAE), and a deep autoencoder neural network was then formed from the learned features by stacking the autoencoders and training them together in a supervised manner. The proposed approach achieved a sensitivity of 93.20±1.37%, a specificity of 93.25±1.38%, and an accuracy of 93.22±1.37% over 10x10-fold cross validation [12]. As presented above, deep autoencoders have recently attracted considerable attention from researchers.

The Taguchi Method is a statistical technique proposed by Taguchi and Konishi, originally developed to optimize quality in manufacturing process development [13]. Especially in recent years, the method has been used in a number of studies across disciplines such as engineering, biotechnology, and computer science to design experiments that achieve the best performance. For instance, Mei-Ling Huang et al. (2014) combined a feature selection technique with an SVM recursive feature elimination approach to validate the classification accuracy on the Dermatology and Zoo databases [14]. In that study, the Taguchi Method was adapted and combined with an SVM classifier so as to increase the overall classification accuracy by optimizing the SVM parameters. The authors report that the proposed method achieves more than 95% accuracy on both databases. Another study considers a multistage metal forming process with respect to workability and employs the Taguchi Method for optimization [15]. There, the Taguchi Method is combined with an artificial neural network to minimize the objective functions of the forming process: the parameter combinations used in the finite element simulations are determined by an orthogonal array from the statistical design of experiments, and the training data for the artificial neural network are obtained from the orthogonal array and the simulation results. Huimin Wang et al. (2014) adopted the Taguchi Method to analyze the effect of inertia weight, acceleration coefficients, population size, fitness evaluations, and population topology on the particle swarm optimization (PSO) algorithm and to determine the best combination of them for various optimization problems. The experimental results show that all the benchmark functions reach their optimum solutions after the tuning process. The article also reports acceptable results for the optimization design of a Halbach permanent magnet motor and concludes that the Taguchi-based PSO is well suited to such engineering problems [16]. A more recent study (2016) proposed a predictive model of material removal rate (MRR) by employing Taguchi-entropy weight based GRA to optimize an artificial neural network [17]. Further recent studies using the Taguchi Method can be found in [18–22].

This paper introduces a novel deep autoencoder based architecture optimized by the Taguchi Method (see Section 2). The proposed architecture was applied in four different fields to demonstrate its performance; the satisfactory results obtained encouraged the authors to employ the framework in further fields. The remainder of the paper presents the proposed framework, the experimental results, and the conclusion.

2. The Proposed Framework

This study proposes a new method for optimizing the structure of deep autoencoders for data processing. The proposed deep learning architecture employs stacked autoencoders supported by the Taguchi Method, which optimizes the parameters in a reasonable amount of time. First, brief explanations of the stacked sparse autoencoder and the Taguchi Method are presented. Afterwards, the proposed architecture, shown in Figure 2, is discussed.

2.1. Stacked Sparse Autoencoder

Although supervised learning is one of the most powerful tools of AI, it requires labelled data. The stacked sparse autoencoder (SSAE) is essentially a neural network consisting of multiple layers of sparse autoencoders; it is mainly used as an unsupervised feature extraction method that learns automatically from unlabelled data. The output of each layer is wired to the inputs of the succeeding layer. Training an autoencoder essentially means estimating the optimal parameters by reducing the divergence between input and output. An example autoencoder is illustrated in Figure 1. The mapping between the input $x$ and the output $\hat{x}$ is given by the following equations:
$$z = M\!\left(W^{(1)} x + b^{(1)}\right), \qquad \hat{x} = M\!\left(W^{(2)} z + b^{(2)}\right),$$
where $M(\cdot)$ is the sigmoid logistic activation function, $M(s) = 1/(1 + e^{-s})$, $W^{(1)}, W^{(2)}$ are the weight matrices, and $b^{(1)}, b^{(2)}$ are the bias vectors.

The final expression can therefore be written as
$$\hat{x} = M\!\left(W^{(2)}\, M\!\left(W^{(1)} x + b^{(1)}\right) + b^{(2)}\right).$$
The discrepancy between the input and the output is measured by a cost function whose first term is the mean squared error (MSE) and whose second term is a regularization term. Different algorithms can be used to solve for the optimal parameters of the network; the details can be found in [45].
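As a rough illustration of the mapping above, the following minimal numpy sketch encodes an input with the sigmoid activation $M(\cdot)$ and decodes it back; the variable names (W1, b1, W2, b2) and sizes are ours, and the weights are randomly initialized rather than learned.

```python
import numpy as np

def sigmoid(s):
    # M(s) = 1 / (1 + e^(-s)), applied element-wise
    return 1.0 / (1.0 + np.exp(-s))

def autoencoder_map(x, W1, b1, W2, b2):
    """Encode the input into the hidden representation z, then decode it to x_hat."""
    z = sigmoid(W1 @ x + b1)      # encoder: hidden features
    x_hat = sigmoid(W2 @ z + b2)  # decoder: reconstruction of the input
    return z, x_hat

# Toy usage with random, untrained weights (27 inputs, 20 hidden units).
rng = np.random.default_rng(0)
x = rng.random(27)
W1, b1 = rng.normal(size=(20, 27)), np.zeros(20)
W2, b2 = rng.normal(size=(27, 20)), np.zeros(27)
z, x_hat = autoencoder_map(x, W1, b1, W2, b2)
```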

2.2. Taguchi Method

The Taguchi Method is a statistical robust design method that was first proposed to improve the quality of manufactured products and has more recently been applied to a variety of fields, from engineering to marketing [13, 22, 46]. Three concepts underpin the Taguchi approach, namely, the Taguchi loss function, offline quality control, and orthogonal arrays for experimental design. The Taguchi Method thus offers a methodology for the design of experiments (DOE). For instance, if an experiment aims to heat a wire by passing electricity through it, several control parameters, from the material type to the wire diameter, must be considered, and each parameter may take several values. DOE makes it possible to select the parameters and their values in an efficient manner. An example orthogonal array selection table is shown in Table 1.

Essentially, these arrays provide a systematic way to permute and combine the interactions among different parameters. Moreover, unlike a full factorial experiment, there is no need to carry out every possible experiment. To reach the objective value or best accuracy, the Taguchi Method reduces the number of necessary experiments by using orthogonal arrays (OA), which lowers both the number of experiments to be performed and the overall cost. These arrays are essentially predefined matrices encoding the control parameters and the number of experiments. The purpose of the Taguchi Method is to design an experiment that reduces the effect of factors that cannot be controlled, using the least possible number of experiments [46, 47]. The selection of an appropriate orthogonal array is based mainly on the number of control parameters and their levels. Orthogonal arrays range from L4 to L50 (see Table 1); the more control parameters there are, the higher the number after "L". The design of experiments is performed using the selected orthogonal array [47], and the experiments can be run once the OA has been carefully chosen; the number of repetitions depends on the complexity of the experiments. The Taguchi Method is a powerful technique for finding the best setting among different levels of various parameters. The measure used in the Taguchi Method is the signal-to-noise (S/N) ratio, the ratio of the signal (S) to the noise factor (N), which is used to assess the quality characteristics. Several S/N ratios have been proposed, but three of them are considered standard [48]. The first is "smaller-is-better", used when the target value of the quality variable is zero. In this case, the S/N ratio is defined as
$$S/N = -10 \log_{10}\!\left(\frac{1}{k}\sum_{i=1}^{k} x_i^{2}\right), \tag{1}$$
where $x_i$ is the value of the experimental observation and $k$ is the number of experiments. The second is "larger-is-better", used when the target value of the quality variable is infinite; in this case, the S/N ratio is
$$S/N = -10 \log_{10}\!\left(\frac{1}{k}\sum_{i=1}^{k} \frac{1}{x_i^{2}}\right), \tag{2}$$
where $x_i$ is again the experimental observation and $k$ is the number of experiments. The last is "nominal-is-best", used for problems in which the target value of the quality variable is a specific value. In this case, the S/N ratio is
$$S/N = 10 \log_{10}\!\left(\frac{\bar{x}^{2}}{\sigma^{2}}\right), \tag{3}$$
where $\bar{x}$ is the mean of the experimental observations and $\sigma$ is their standard deviation [49, 50].

Overall, the average S/N ratio for each level of each parameter is calculated, and the differences between the maximum and minimum values are examined; the appropriate S/N ratio is chosen according to the experimental strategy. This has a major influence on the assessment of the experiments.
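For illustration, the following Python sketch computes the three standard S/N ratios defined above and the mean S/N ratio per level of one factor of an orthogonal array; the function and variable names are ours, not taken from the paper.

```python
import numpy as np

def sn_smaller_is_better(x):
    """Eq. (1): S/N = -10*log10(mean(x_i^2)); used when the target value is zero (e.g., an error)."""
    x = np.asarray(x, dtype=float)
    return -10.0 * np.log10(np.mean(x ** 2))

def sn_larger_is_better(x):
    """Eq. (2): S/N = -10*log10(mean(1/x_i^2)); used when larger responses are preferred."""
    x = np.asarray(x, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / x ** 2))

def sn_nominal_is_best(x):
    """Eq. (3): S/N = 10*log10(mean(x)^2 / var(x)); used when a specific target value is desired."""
    x = np.asarray(x, dtype=float)
    return 10.0 * np.log10(x.mean() ** 2 / x.var(ddof=1))

def mean_sn_per_level(oa, sn, factor):
    """Average S/N ratio of each level of one factor (column) of the orthogonal array `oa`.
    `oa` has one row per trial, `sn` holds the S/N ratio of each trial."""
    oa, sn = np.asarray(oa), np.asarray(sn, dtype=float)
    return {lvl: sn[oa[:, factor] == lvl].mean() for lvl in np.unique(oa[:, factor])}
```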

2.3. Deep Learning Framework Combining Sparse Autoencoder and Taguchi Method

As illustrated in Figure 2, the deep neural network is built from two autoencoders and a SoftMax layer. Each of the first two layers is trained on its own in an unsupervised fashion, without labelled data; their purpose is essentially to extract appropriate features, and automatic feature extraction is one of the most powerful characteristics of deep learning based architectures. The third layer is the SoftMax layer, one of the leading feature classifiers, which is responsible for classifying the features extracted by the previous layers. In the final step, all layers are stacked and trained together with labelled data in a supervised fashion; this essentially converts the unsupervised learning architecture into a supervised one. To obtain the best performance from the first autoencoder, the Taguchi Method is integrated into the model in order to estimate the optimal combination of five parameters of the first autoencoder, namely, L2 Weight Regularization, Sparsity Regularization, Sparsity Proportion, Hidden Size, and Max Epochs. The L2 Weight Regularization parameter controls the effect of an L2 regularizer on the weights of the network, but not on the biases.
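A rough PyTorch sketch of this pipeline is given below: each autoencoder is pre-trained without labels, the two encoders are then stacked with a SoftMax (linear layer + cross-entropy) classifier, and the whole stack is fine-tuned with the labels. The layer sizes, learning rate, and helper names are illustrative, and the sparsity and L2 penalties of the cost function are omitted here for brevity.

```python
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    """One autoencoder layer: sigmoid encoder followed by a sigmoid decoder."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(hidden_dim, in_dim), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def train_unsupervised(ae, X, epochs=100, lr=1e-2):
    """Pre-train one autoencoder to reconstruct its own input (no labels used)."""
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        _, x_hat = ae(X)
        nn.functional.mse_loss(x_hat, X).backward()
        opt.step()

def build_and_finetune(X, y, h1=80, h2=40, n_classes=5, epochs=100, lr=1e-2):
    """X: float tensor of features; y: long tensor of class labels."""
    ae1 = SparseAE(X.shape[1], h1)
    train_unsupervised(ae1, X, epochs)
    z1 = ae1.encoder(X).detach()                  # features from the first layer
    ae2 = SparseAE(h1, h2)
    train_unsupervised(ae2, z1, epochs)
    # Stack both encoders with a SoftMax classifier and fine-tune with the labels.
    stacked = nn.Sequential(ae1.encoder, ae2.encoder, nn.Linear(h2, n_classes))
    opt = torch.optim.Adam(stacked.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(stacked(X), y).backward()
        opt.step()
    return stacked
```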

The L2 Weight Regularization parameter should be very small; the corresponding regularization term is
$$\Omega_{\mathrm{weights}} = \frac{1}{2}\sum_{l=1}^{L}\sum_{j=1}^{n}\sum_{i=1}^{k}\left(w_{ji}^{(l)}\right)^{2},$$
where $L$ is the number of hidden layers, $n$ is the number of observations, and $k$ is the number of variables in the training data.

The Sparsity Regularization parameter controls the effect of the sparsity regularizer, which attempts to enforce a constraint on the sparsity of the output of the hidden layers. This is different from applying a sparsity regularizer to the weights. The sparsity regularization term can be the Kullback-Leibler (KL) divergence, as shown in the following:
$$\Omega_{\mathrm{sparsity}} = \sum_{i=1}^{D} \mathrm{KL}\!\left(\rho \,\|\, \hat{\rho}_i\right) = \sum_{i=1}^{D}\left[\rho \log\!\frac{\rho}{\hat{\rho}_i} + (1-\rho)\log\!\frac{1-\rho}{1-\hat{\rho}_i}\right],$$
where $\rho$ is the desired (target) value, $\hat{\rho}_i$ is the average output activation of neuron $i$, and KL is the function that measures the difference between two probability distributions over the same data. As can be inferred from the equation, its value approaches zero when $\rho$ and $\hat{\rho}_i$ are close to each other, i.e., when the input and output data resemble each other; otherwise, the sparsity term takes a larger value [20].

Alternatively, the sparsity regularizer is controlled by the Sparsity Proportion (SP) parameter, which controls the sparsity of the output of each hidden layer. A low value for SP typically makes each neuron in the hidden layer specialize, producing a high output only for a small number of training examples. For instance, if SP is set to 0.2, the average output of each neuron in the hidden layer over the training examples is driven towards 0.2. The optimal value of SP lies between 0 and 1 and varies with the nature of the problem; the technique used for selecting it is therefore very important for the overall performance of the sparse autoencoder [21]. In addition, Hidden Size (HS) is the parameter that controls the number of features extracted at each layer, so it also affects the performance of the autoencoder. The last parameter is Max Epochs; one epoch represents one complete training cycle through the training data, in which every sample is seen once. If Max Epochs equals 10, for example, the training data is passed through the network at most 10 times.

All of the previously defined parameters are employed in the training phase and directly influence the success of the training process. The cost function for training the sparse autoencoder is given in (10); the training algorithm attempts to minimize it by finding the optimal parameters:
$$E = \frac{1}{N}\sum_{n=1}^{N}\sum_{k=1}^{K}\left(x_{kn} - \hat{x}_{kn}\right)^{2} + \lambda\,\Omega_{\mathrm{weights}} + \beta\,\Omega_{\mathrm{sparsity}}. \tag{10}$$
Here, $E$ is the loss (error) value, $x$ denotes the input features, $\hat{x}$ denotes the reconstructed features, $\lambda$ is the coefficient for the L2 Weight Regularization, and $\beta$ is the coefficient for the Sparsity Regularization.
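The following numpy sketch evaluates a cost of this form, combining the MSE with the L2 and KL-divergence terms defined above; the function signature and variable names are ours, not the paper's.

```python
import numpy as np

def kl_divergence(rho, rho_hat, eps=1e-8):
    """KL(rho || rho_hat) for Bernoulli distributions, summed over the hidden neurons."""
    rho_hat = np.clip(rho_hat, eps, 1.0 - eps)
    return np.sum(rho * np.log(rho / rho_hat) +
                  (1.0 - rho) * np.log((1.0 - rho) / (1.0 - rho_hat)))

def sparse_ae_cost(X, X_hat, Z, weights, lam, beta, rho):
    """E = MSE + lambda * Omega_weights + beta * Omega_sparsity, in the spirit of Eq. (10).
    X, X_hat: (N, K) input and reconstruction; Z: (N, D) hidden activations;
    weights: list of weight matrices; lam, beta, rho: regularization coefficients."""
    mse = np.mean(np.sum((X - X_hat) ** 2, axis=1))        # reconstruction error
    omega_w = 0.5 * sum(np.sum(W ** 2) for W in weights)   # L2 weight regularization
    rho_hat = Z.mean(axis=0)                               # average activation per hidden neuron
    omega_s = kl_divergence(rho, rho_hat)                  # sparsity regularization
    return mse + lam * omega_w + beta * omega_s
```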

The given problem involves two autoencoders, each with 5 parameters, and each parameter can take 5 different levels. Consequently, a full factorial design for finding the best parameter combination of the two autoencoders would require $5^5 + 5^5 = 3125 + 3125 = 6250$ trials, i.e., $5^5$ trials per autoencoder to test all parameter combinations. Hence, a more efficient approach is proposed in this study: the Taguchi Method is used to find the optimal parameters by performing only 25 experiments per autoencoder, using the L25 orthogonal array (5 parameters with 5 levels each); see Section 2.2. After the 25 experiments for the first autoencoder, the Taguchi Method determines its optimal parameters, and the same procedure yields the best-performing parameters for the second autoencoder. The total number of experiments for the first two layers of the system is therefore 25 + 25 = 50. As mentioned above, in the last step all three components are stacked and trained in a supervised fashion using backpropagation on the multilayer network to improve its performance. A series of experiments was conducted to validate the performance of the proposed system.
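The sketch below illustrates how such an L25-based search could be wired up in Python: each row of the orthogonal array selects one level per parameter, the resulting autoencoder is trained and scored by RMSE, the score is converted to a smaller-is-better S/N ratio, and the level with the highest mean S/N is kept for each parameter. The level values and helper names are illustrative, not the ones reported in the paper's tables.

```python
import numpy as np

# Candidate levels for the five autoencoder hyper-parameters (illustrative values only).
LEVELS = {
    "l2_weight_reg": [1e-4, 5e-4, 1e-3, 5e-3, 1e-2],
    "sparsity_reg":  [1, 2, 3, 4, 5],
    "sparsity_prop": [0.05, 0.1, 0.2, 0.3, 0.4],
    "hidden_size":   [10, 15, 20, 25, 30],
    "max_epochs":    [100, 200, 300, 400, 500],
}

def taguchi_search(oa_l25, train_and_score):
    """Run the 25 trials of an L25(5^5) orthogonal array instead of all 5**5 = 3125
    combinations. `oa_l25` holds 25 rows of level indices (0-4) for the 5 parameters;
    `train_and_score` trains one autoencoder with the given parameters and returns its RMSE."""
    oa = np.asarray(oa_l25)
    sn = np.empty(len(oa))
    for r, row in enumerate(oa):                            # one trial per OA row
        params = {name: LEVELS[name][lvl] for name, lvl in zip(LEVELS, row)}
        rmse = train_and_score(**params)
        sn[r] = -10.0 * np.log10(rmse ** 2)                 # smaller-is-better S/N (single run)
    best = {}
    for j, name in enumerate(LEVELS):                       # keep the level with the highest mean S/N
        means = [sn[oa[:, j] == lvl].mean() for lvl in range(5)]
        best[name] = LEVELS[name][int(np.argmax(means))]
    return best
```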

3. Experimental Results

A computer with an Intel Core i7-6700 CPU @ 2.60 GHz and 8 GB of RAM was used to run the proposed framework, which was applied to several problems: detection of computer network attacks (DDoS and IDS attacks), epileptic seizure recognition, and handwritten digit classification. The results obtained with the proposed method are compared with a number of studies in each respective field. In addition, several techniques were implemented in this paper to compare their results with the proposed method: SVM, a neural network, SoftMax, and a stacked sparse autoencoder based support vector machine (SSAE-SVM). Each dataset and the corresponding results are detailed in the following subsections.

3.1. DDoS Detection Using the Proposed Framework

A Distributed Denial of Service (DDoS) attack is an aggressive and threatening intrusion against online servers, websites, networks, and clouds. The purpose of a DDoS attack is to exhaust the resources and consume the bandwidth of a network system. Due to the coordinated nature of a DDoS attack, an attacker can generate a massive amount of attack traffic using a huge number of compromised machines to bring down a system or website [51, 52]. Many organizations, such as Amazon, eBay, CNN, and Yahoo, have been victims of DDoS attacks in the recent past. In this paper, the new framework is used to detect the DDoS attacks proposed in [23], whose dataset contains four attack types (Smurf, UDP Flood, SIDDOS, and HTTP Flood) in addition to normal traffic. The dataset consists of 27 features (SRC ADD, DES ADD, PKT ID, FROM NODE, TO NODE, PKT TYPE, PKT SIZE, FLAGS, FID, SEQ NUMBER, NUMBER OF PKT, NUMBER OF BYTE, NODE NAME FROM, NODE NAME TO, PKT IN, PKTOUT, PKTR, PKT DELAY NODE, PKTRATE, BYTE RATE, PKT AVG SIZE, UTILIZATION, PKT DELAY, PKT SEND TIME, PKT RESEVED TIME, FIRST PKT SENT, LAST PKT RESEVED). In Table 2, the autoencoder parameters are assigned five levels, and the upper and lower boundaries of the parameters are identified. These boundaries were determined by a trial and error approach, taking into account the results of preliminary experiments and previous studies.

As mentioned above, the dataset consists of five classes, each containing 800 samples; 50% of them were used for training and the other 50% for testing. Consequently, the proposed framework was trained with 2000 samples and tested with another 2000 samples. Table 3 presents the level values of the factors, and the results of the Minitab program experiments are given in Table 4. The error values obtained by applying the parameters to autoencoder 1 are reported in Table 5. The root mean square error (RMSE) is used to measure the performance of autoencoder 1; the smaller the value (the closer to zero), the better the performance:
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^{2}},$$
where $y_i$ is the observed value and $\hat{y}_i$ is the modelled value at time/place $i$. The results acquired with the Taguchi experimental design were then evaluated by transforming them into signal-to-noise (S/N) ratios (see Table 5).
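As a small illustration, the sketch below performs the 50/50 per-class split described above and computes the RMSE used to score autoencoder 1 in each trial; the helper names are ours.

```python
import numpy as np

def split_per_class(X, y, train_frac=0.5, seed=0):
    """Split each class 50/50 into training and testing index sets."""
    rng = np.random.default_rng(seed)
    tr, te = [], []
    for c in np.unique(y):
        idx = rng.permutation(np.where(y == c)[0])
        cut = int(train_frac * len(idx))
        tr.extend(idx[:cut])
        te.extend(idx[cut:])
    return np.array(tr), np.array(te)

def rmse(y_true, y_pred):
    """Root mean square error used to score autoencoder 1 in each Taguchi trial."""
    diff = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))
```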

Table 6 presents the best parameters for autoencoder 1, which forms the first layer of the deep autoencoder neural network. The parameters of autoencoder 2, which forms the second layer, can be obtained by following the same steps with different ranges for each parameter (Table 7).

By following Tables 8, 9, and 10, the best parameters are obtained, as shown in Figure 4 and Table 11; the same procedures as in Tables 3, 4, and 5, respectively, were used to find the best parameters represented in Figure 3 and Table 5. This means that the best parameters of each autoencoder were determined with a minimum number of tests.

Finding the best parameters for each autoencoder leads to the best performance when each autoencoder is trained with them. The results obtained from the system are presented as a confusion matrix in Figure 5, giving a detailed analysis for each type of DDoS attack. The experimental results show that the proposed method achieves satisfactory results compared to other methods.

With a detection accuracy of 99.6%, the proposed method is slightly better than the other methods, as shown in Table 12. Another feature of the proposed method is that the system can learn effectively using only 2000 samples, which is very few compared to previous methods. Data collection is a difficult and expensive procedure, so a system that learns from fewer data samples is more practical than others. The confusion matrix notation is used to present the results in a more detailed and understandable fashion. The results of the proposed framework are compared with the methods proposed in [23] and with several methods implemented by us to detect DDoS attacks, such as SSAE-SVM [24], SVM, and SoftMax classifiers. Table 12 shows that the proposed framework produces the best results compared with the state-of-the-art methods for this problem.

3.2. IDS Attack

In computer security, Intrusion Detection Systems (IDS) have become a necessity because of the growing number of unlawful accesses and attacks. An IDS is a key component of computer security systems and can be classified as a Host-based Intrusion Detection System (HIDS), which monitors a specific host or system, or a Network-based Intrusion Detection System (NIDS), which monitors a network of hosts and systems. In this paper, the framework is used to detect IDS attacks using a new dataset [53], which consists of 47 features and 10 attack types. We examine the UNSW-NB15 intrusion dataset, which also contains real-time captured data. This dataset is a hybrid of intrusion data collected from real, modern normal and abnormal network traffic activities. It is newer and more effective than the common, older datasets KDD98, KDDCUP99, and NSL-KDD, which were generated two decades ago. Following the same procedure as in Figure 1 and the same steps as in the DDoS detection tables, the best parameters were determined for autoencoders 1 and 2, as shown in Tables 13 and 14, to achieve the best IDS attack detection performance. 10000 data points were used to train and test the system (5000 for training and 5000 for testing). Using half of the data for testing is a challenging setting, since previous studies employ more than 50% of the data for training; however, the training percentage was reduced in order to shorten the overall training time. The experimental results for this dataset and configuration are illustrated in Figure 5. According to these results, the framework reaches a detection rate of 99.70%, which is satisfactory compared with previous studies, as shown in Table 15; this proves that satisfactory results can be obtained even with such a small training percentage for this problem. Figure 6 also presents the corresponding confusion matrix of the output results.

3.3. Epileptic Seizure Recognition

According to recent figures, 1-2% of the world's population suffers from epilepsy, a neurological disorder [54]. It is characterized by sudden, recurrent, and transient disturbances of perception or behaviour resulting from excessive synchronization of cortical neural networks. An epileptic seizure is a neurological condition caused by a burst of electrical discharges in the brain, and recurrent seizures are the main characteristic of epilepsy. Monitoring brain activity through the EEG has become an important tool in the detection of epilepsy [55]. There are two kinds of abnormal activity: interictal, abnormal EEG recorded between epileptic seizures, and ictal, which occurs during a seizure in the patient's EEG records. The EEG signature of interictal activity consists of occasional transient waveforms, such as isolated spikes, sharp waves, or spike-wave complexes [56]. Commonly, experienced physicians can detect epileptic seizures by visually scanning EEG records for interictal and ictal activity. However, visual inspection of large volumes of EEG data has practical disadvantages and weaknesses: it is very time-consuming and inefficient, especially for long recordings [57]. In addition, disagreement among physicians over the same EEG record can occur, sometimes leading to subjective decisions, owing to the variety of interictal spike morphologies. Therefore, computer-aided systems have been developed for blood disease detection [58], heart disease recognition [59], and epilepsy detection, as listed in Table 15. The epileptic dataset [60] is used to train and test the proposed method. Two vector matrices of size (100 × 4096) are generated from the dataset: A represents the healthy condition and E represents the epileptic activity condition. A and E are each divided into two halves, producing two (50 × 4096) vector matrices, one for training and one for testing. The Epileptic Seizure dataset consists of 4096 features; using two autoencoders, the first reduces the number of features to 2004 and the second to 103, which reduces the time consumption. The best parameters for autoencoder 1 and autoencoder 2 obtained from our system are listed in Tables 16 and 17, which leads to the best results for Epileptic Seizure Recognition, represented in Figure 7. The results of the proposed method compared with previous results in Epileptic Seizure Recognition are presented in Table 18. SVM, MLP, and SoftMax classifiers were implemented by us to obtain results for comparison with the proposed method.
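A minimal numpy sketch of this data preparation is given below, assuming the sets A and E are already loaded as (100 × 4096) arrays; the function name and the 0/1 labels are ours.

```python
import numpy as np

def build_epilepsy_sets(A, E, seed=0):
    """A and E are (100, 4096) matrices (healthy and epileptic EEG segments).
    Each set is split 50/50 into training and testing matrices, as described above."""
    rng = np.random.default_rng(seed)

    def halves(M):
        idx = rng.permutation(M.shape[0])
        return M[idx[:50]], M[idx[50:]]

    A_tr, A_te = halves(np.asarray(A))
    E_tr, E_te = halves(np.asarray(E))
    X_train = np.vstack([A_tr, E_tr])            # (100, 4096) training matrix
    y_train = np.array([0] * 50 + [1] * 50)      # 0 = healthy, 1 = epileptic
    X_test = np.vstack([A_te, E_te])             # (100, 4096) testing matrix
    y_test = np.array([0] * 50 + [1] * 50)
    return X_train, y_train, X_test, y_test
```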

The comparison in Table 18 shows that some methods, such as Tzallas et al. [28] and Srinivasan [30], achieve the same accuracy as the proposed method. However, the proposed method has the advantage of using deep learning techniques, which pays off when there are huge numbers of epilepsy data instances to classify, and it uses only 50% of the data for training whereas the other methods use 60%.

3.4. Handwritten Digit Classification

The proposed framework is finally tested on the MNIST dataset, which was proposed for the handwritten digit classification problem [40]. The framework is trained using 5000 images, i.e., 500 examples per digit. Each image consists of 28x28 pixels, so each image yields 784 values when converted into a vector to build the matrix of vectors. In the second stage, this matrix becomes the input of the first autoencoder, whose parameters are again optimized using the Taguchi Method, as shown in Table 19; Table 20 shows the optimized parameters of the second autoencoder. Following the structure of the proposed framework, the features extracted by the second autoencoder are passed to the SoftMax layer, which classifies them into ten separate classes. Finally, the two autoencoders and the SoftMax layer are stacked and trained in a supervised manner. The confusion matrix obtained from the experimental results is shown in Figure 8. These results are compared with state-of-the-art studies on this problem, and satisfactory results are obtained, as shown in Table 21.
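For reference, the following small sketch shows the flattening step described above, assuming the 5000 selected MNIST images are already loaded as a (5000, 28, 28) uint8 array; the function name and the [0, 1] scaling are ours.

```python
import numpy as np

def images_to_vectors(images):
    """Flatten 28x28 grayscale digit images into 784-dimensional row vectors
    and scale pixel values to [0, 1] before feeding them to the first autoencoder."""
    images = np.asarray(images, dtype=np.float64)
    n = images.shape[0]
    return images.reshape(n, 28 * 28) / 255.0

# Example: 5000 images (500 per digit class) become a (5000, 784) matrix.
# X = images_to_vectors(mnist_images[:5000])
```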

4. Conclusion

This paper proposes a new deep learning framework that combines sparse autoencoders with the Taguchi Method, an organized approach that optimizes parameters in a reasonable amount of time. Experimental results reveal that applying this method allows the proposed framework to optimize numerous factors simultaneously and to extract more quantitative information from fewer experimental trials. The framework was tested on different experimental data sets and compared with state-of-the-art methods and studies in terms of overall accuracy. The proposed framework achieves satisfactory results: 99.6% accuracy in DDoS detection, 99.7% for IDS attack detection, 100% in Epileptic Seizure Recognition, and 99.8% in handwritten digit classification. These results verify the validity of the proposed framework. The authors are also encouraged to improve the overall performance of this architecture for more complex problems, such as 3D image processing and real-time robotic systems. Accordingly, different heuristic optimization algorithms, including genetic algorithms, particle swarm optimization, and ant colony optimization, will be used in future work to estimate the autoencoder parameters and will be compared with the Taguchi Method. The proposed architecture can also be employed for broader recognition and estimation problems, including gesture recognition, URL reputation, and SMS spam collection.

Data Availability

The IDS attack data that support the findings of this study are available under the reference name “UNSW-NB15” at “https://www.unsw.adfa.edu.au/australian-centre-for-cybersecurity/cybersecurity/ADFA-NB15-Datasets/”. The epilepsy recognition dataset that also supports the findings of this study, referenced as “SETS A and B”, is available at “http://epileptologiebonn.de/cms/front_content.php?idcat=193&lang=3”. The digit classification dataset, referenced as “MNIST”, is available at “http://yann.lecun.com/exdb/mnist/”. The DDoS detection dataset that supports this study is available at “https://www.researchgate.net/publication/292967044_Dataset-_Detecting_Distributed_Denial_of_Service_Attacks_Using_Data_Mining_Technique”.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors gratefully acknowledge the support to this work by Ankara Yıldırım Beyazıt University.