
2006 | Book

Multi-Objective Machine Learning


About this book

Recently, increasing interest has been shown in applying the concept of Pareto-optimality to machine learning, particularly inspired by the successful developments in evolutionary multi-objective optimization. It has been shown that the multi-objective approach to machine learning is particularly successful in improving the performance of traditional single-objective machine learning methods, in generating multiple, highly diverse Pareto-optimal models for constructing model ensembles, and in achieving a desired trade-off between accuracy and interpretability of neural networks or fuzzy systems. This monograph presents a selected collection of research work on the multi-objective approach to machine learning, including multi-objective feature selection, multi-objective model selection in training multi-layer perceptrons, radial-basis-function networks, support vector machines, decision trees, and intelligent systems.
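Throughout the book, models are compared by Pareto dominance rather than by a single scalar score. As a minimal illustration of that notion (a generic sketch, not code from the book), the following Python snippet filters a set of candidate models, each scored on two objectives to be minimized (e.g. error and complexity), down to its non-dominated, i.e. Pareto-optimal, subset:

    def dominates(a, b):
        # a dominates b if it is no worse in every objective and strictly better in at least one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(scores):
        # keep only the objective vectors that no other vector dominates
        return [s for s in scores if not any(dominates(t, s) for t in scores if t is not s)]

    # hypothetical (error, complexity) pairs for five candidate models
    models = [(0.10, 30), (0.08, 55), (0.12, 20), (0.09, 60), (0.15, 15)]
    print(pareto_front(models))  # (0.09, 60) is dominated by (0.08, 55) and drops out

Every chapter below optimizes some instance of this kind of trade-off, differing mainly in the objectives used and in the learner being evolved.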

Table of Contents

Frontmatter

Multi-Objective Clustering, Feature Extraction and Feature Selection

Frontmatter
Feature Selection Using Rough Sets
Abstract
Feature selection refers to the selection of input attributes that are most predictive of a given outcome. This is a problem encountered in many areas such as machine learning, signal processing, and recently bioinformatics/computational biology. Feature selection is one of the most important and challenging tasks when it comes to dealing with large datasets with tens or hundreds of thousands of variables. Web mining and gene expression array analysis provide examples where the selection of interesting and useful features determines the performance of subsequent analysis. The intrinsic noise, uncertainty, and incompleteness of data make the extraction of hidden and useful information very difficult. The capability of handling imprecision, inexactness and noise has attracted researchers to use rough sets for feature selection. This article provides an overview of recent literature in this direction.
Mohua Banerjee, Sushmita Mitra, Ashish Anand
Multi-Objective Clustering and Cluster Validation
Abstract
This chapter is concerned with unsupervised classification, that is, the analysis of data sets for which no (or very little) training data is available. The main goals in this data-driven type of analysis are the discovery of a data set’s underlying structure, and the identification of groups (or clusters) of homogeneous data items — a process commonly referred to as cluster analysis.
Julia Handl, Joshua Knowles
Feature Selection for Ensembles Using the Multi-Objective Optimization Approach
Abstract
Feature selection for ensembles has been shown to be an effective strategy for ensemble creation due to its ability to produce good subsets of features, which make the classifiers of the ensemble disagree on difficult cases. In this paper we present an ensemble feature selection approach based on a hierarchical multi-objective genetic algorithm. The underpinning paradigm is "overproduce and choose". The algorithm operates on two levels: first, it performs feature selection in order to generate a set of classifiers; then it chooses the best team of classifiers. In order to show its robustness, the method is evaluated in two different contexts: supervised and unsupervised feature selection. In the former, we considered the problem of handwritten digit recognition and used three different feature sets and multi-layer perceptron neural networks as classifiers. In the latter, we considered the problem of handwritten month-word recognition and used three different feature sets and hidden Markov models as classifiers. Experiments and comparisons with classical methods, such as Bagging and Boosting, demonstrate that the proposed methodology brings compelling improvements when classifiers have to work with very low error rates.
Luiz S. Oliveira, Marisa Morita, Robert Sabourin
Feature Extraction Using Multi-Objective Genetic Programming
Abstract
A generic, optimal feature extraction method using multi-objective genetic programming (MOGP) is presented. This methodology has been applied to the well-known edge detection problem in image processing, and detailed comparisons are made with the Canny edge detector. We show that the superior performance of MOGP in terms of minimizing misclassification is due to its effective optimal feature extraction. Furthermore, to compare different evolutionary approaches, two popular techniques - PCGA and SPGA - have been extended to genetic programming as PCGP and SPGP, and applied to five datasets from the UCI database. Both evolutionary approaches provide comparable misclassification errors within the present framework, but PCGP produces more compact transformations.
Yang Zhang, Peter I Rockett

Multi-Objective Learning for Accuracy Improvement

Frontmatter
Regression Error Characteristic Optimisation of Non-Linear Models
Abstract
In this chapter recent research in the area of multi-objective optimisation of regression models is presented and combined. Evolutionary multi-objective optimisation techniques are described for training a population of regression models to optimise the recently defined Regression Error Characteristic (REC) curves. This yields a method which meaningfully compares across regressors and against benchmark models (i.e. 'random walk' and maximum a posteriori approaches) for varying error rates. Through bootstrapping of the training data, degrees of confident out-performance are also highlighted.
Jonathan E. Fieldsend
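For orientation (a generic illustration of REC curves as commonly defined, not code from the chapter): an REC curve plots an error tolerance against the fraction of samples whose prediction error falls within that tolerance, so curves closer to the top-left indicate better regressors. A minimal Python sketch:

    import numpy as np

    def rec_curve(y_true, y_pred, tolerances):
        # fraction of samples whose absolute error is within each tolerance
        errors = np.abs(np.asarray(y_true) - np.asarray(y_pred))
        return [(t, float(np.mean(errors <= t))) for t in tolerances]

    # hypothetical targets and predictions
    print(rec_curve([1.0, 2.0, 3.0, 4.0], [1.1, 2.4, 2.9, 3.0], tolerances=[0.1, 0.5, 1.0]))

One natural multi-objective treatment is then to regard the accuracy at several tolerance levels as separate objectives, so that an evolutionary algorithm can search for regressors whose whole REC curves are non-dominated.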
Regularization for Parameter Identification Using Multi-Objective Optimization
Abstract
Regularization is a technique used to find a stable solution when a parameter identification problem is exposed to considerable errors. However, a significant difficulty associated with it is that the solution depends upon the choice of the value assigned to the weighting regularization parameter in the corresponding formulation. This chapter first briefly describes the weighted regularization method. It then introduces a weightless regularization approach that reduces the parameter identification problem to multi-objective optimization. Subsequently, a gradient-based multi-objective optimization method with Lagrange multipliers is presented. Comparative numerical results with explicitly defined objective functions demonstrate that the technique can search for appropriate solutions more efficiently than other existing techniques. Finally, the technique was successfully applied to the parameter identification of a material model.
Tomonari Furukawa, Chen Jian Ken Lee, John G. Michopoulos
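For orientation (standard Tikhonov-style notation assumed here, not necessarily the chapter's own symbols), the two formulations contrasted above can be written as

    \min_x \; J_d(x) + \lambda\, J_r(x)        (weighted regularization, \lambda fixed a priori)

versus the weightless, multi-objective form

    \min_x \; \bigl( J_d(x),\; J_r(x) \bigr),

where J_d measures the misfit to the observed data and J_r penalizes implausible parameter values. The second form replaces the delicate choice of \lambda by a search for Pareto-optimal trade-offs between the two terms.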
Multi-Objective Algorithms for Neural Networks Learning
Abstract
Most supervised learning algorithms for Artificial Neural Networks (ANNs) aim at minimizing the sum of the squared errors over the training data [12, 11, 5, 10]. It is well known that learning algorithms based only on error minimization do not guarantee models with good generalization performance. In addition to the training set error, some other network-related parameters should be adapted in the learning phase in order to control generalization performance. The need for more than a single objective function paves the way for treating the supervised learning problem with multi-objective optimization techniques. Although the learning problem is multi-objective by nature, only recently has it been given a formal multi-objective optimization treatment [16]. The problem has been treated from different points of view over the last two decades.
Antônio Pádua Braga, Ricardo H. C. Takahashi, Marcelo Azevedo Costa, Roselito de Albuquerque Teixeira
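A common two-objective formulation in this line of work (a generic sketch, not the specific algorithms of the chapter) scores each candidate network by its training error and by a complexity measure such as the squared norm of its weights; the predict function below is a hypothetical stand-in for the network's forward pass:

    import numpy as np

    def objectives(weights, predict, X, y):
        # weights: flat parameter vector of the network
        # objective 1: sum of squared errors over the training set
        sse = float(np.sum((predict(weights, X) - y) ** 2))
        # objective 2: a complexity measure, here the squared weight norm
        complexity = float(np.sum(np.asarray(weights) ** 2))
        return sse, complexity

A multi-objective optimizer then looks for networks that are Pareto-optimal with respect to (sse, complexity) instead of minimizing a fixed weighted sum of the two.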
Generating Support Vector Machines Using Multi-Objective Optimization and Goal Programming
Abstract
The Support Vector Machine (SVM) has gained much popularity in recent years as an effective method for machine learning. In pattern classification problems with two class sets, it generalizes linear classifiers into high-dimensional feature spaces through nonlinear mappings defined implicitly by kernels in a Hilbert space, so that it may produce nonlinear classifiers in the original data space. The linear classifiers are then optimized to give the maximal margin of separation between the classes. This task is performed by solving some type of mathematical program, such as a quadratic program (QP) or a linear program (LP). On the other hand, from the viewpoint of mathematical programming for machine learning, the idea of maximal margin separation was employed in the multi-surface method (MSM) suggested by Mangasarian in the 1960s. Also, linear classifiers using goal programming were developed extensively in the 1980s. This chapter introduces a new family of SVMs using multi-objective programming and goal programming (MOP/GP) techniques, and discusses its effectiveness through several numerical experiments.
Hirotaka Nakayama, Yeboon Yun
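For reference (the standard soft-margin primal in common textbook notation, not the chapter's MOP/GP formulations), the usual C-SVM scalarizes two competing goals, a wide margin (small \|w\|) and few margin violations (small total slack):

    \min_{w,\,b,\,\xi} \; \tfrac{1}{2}\|w\|^2 + C \sum_i \xi_i
    \quad \text{s.t.} \quad y_i\bigl(w^\top \phi(x_i) + b\bigr) \ge 1 - \xi_i, \quad \xi_i \ge 0.

The MOP/GP family introduced in the chapter instead treats such goals as separate objectives or as goal-programming targets, rather than fixing the trade-off weight C in advance.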
Multi-Objective Optimization of Support Vector Machines
Abstract
Designing supervised learning systems is in general a multi-objective optimization problem. It requires finding appropriate trade-offs between several objectives, for example between model complexity and accuracy or sensitivity and specificity. We consider the adaptation of kernel and regularization parameters of support vector machines (SVMs) by means of multi-objective evolutionary optimization. Support vector machines are reviewed from the multi-objective perspective, and different encodings and model selection criteria are described. The optimization of split modified radius-margin model selection criteria is demonstrated on benchmark problems. The MOO approach to SVM design is evaluated on a real-world pattern recognition task, namely the real-time detection of pedestrians in infrared images for driver assistance systems. Here the three objectives are the minimization of the false positive rate, the false negative rate, and the number of support vectors to reduce the computational complexity.
Thorsten Suttorp, Christian Igel
Multi-Objective Evolutionary Algorithm for Radial Basis Function Neural Network Design
Abstract
In this chapter, we present a multiobjective evolutionary algorithm based design procedure for radial-basis-function neural networks. A Hierarchical Rank Density Genetic Algorithm (HRDGA) is proposed to evolve the neural network's topology and parameters simultaneously. Compared with traditional genetic algorithm based designs for neural networks, the hierarchical approach addresses several deficiencies highlighted in the literature. In addition, the rank-density based fitness assignment technique is used to optimize the performance and topology of the evolved neural network and to trade off between training performance and network complexity. Instead of producing a single optimal solution, HRDGA provides a set of near-optimal neural networks to the designers so that they have more flexibility in the final decision-making based on certain preferences. In terms of searching for a near-complete set of candidate networks with high performance, the networks designed by the proposed algorithm prove to be competitive with, or even superior to, three state-of-the-art radial-basis-function network designs for predicting the Mackey-Glass chaotic time series.
Gary G. Yen
Minimizing Structural Risk on Decision Tree Classification
Abstract
Tree induction algorithms use heuristic information to obtain decision tree classifiers. However, there has been little research on how many rules are appropriate for a given set of data, that is, on how to find the best structure leading to desirable generalization performance. In this chapter, an evolutionary multi-objective optimization approach with genetic programming is applied to the data classification problem in order to find the minimum error rate, or the best pattern classifier, for each decision tree size. As a result, we can evaluate the classification performance under various structural complexities of decision trees. Following the structural risk minimization suggested by Vapnik, we can determine a desirable number of rules with the best generalization performance. The suggested method is compared with the application of C4.5 to machine learning data.
DaeEun Kim
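The chapter evolves trees with genetic programming; as a rough, hedged analogy only, a similar error-versus-size trade-off can be traced with an off-the-shelf tree learner such as scikit-learn's CART by capping the number of leaves, and the resulting (size, error) pairs can then be screened for Pareto-optimality:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    for leaves in (2, 4, 8, 16, 32):
        tree = DecisionTreeClassifier(max_leaf_nodes=leaves, random_state=0)
        error = 1.0 - cross_val_score(tree, X, y, cv=5).mean()   # estimated error rate
        print(leaves, round(error, 3))

Loosely speaking, structural risk minimization then picks the tree size that minimizes a bound combining the empirical error with a penalty for structural complexity.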
Multi-objective Learning Classifier Systems
Abstract
Learning concept descriptions from data is a complex multiobjective task. The model induced by the learner should be accurate, so that it represents the data instances precisely; complete, meaning that it generalizes to new instances; and minimal, or easily readable. Learning Classifier Systems (LCSs) are a family of learners whose primary search mechanism is a genetic algorithm. Throughout the intense history of the field, the efforts of the community have centered on the design of LCSs that meet these goals efficiently, resulting in the proposal of multiple systems. This paper reviews the main LCS approaches and focuses on the analysis of the different mechanisms designed to fulfill the learning goals. Some of these mechanisms are implicit multiobjective learning mechanisms, while others use explicit multiobjective evolutionary algorithms. The paper analyses the advantages of using multiobjective evolutionary algorithms, especially in Pittsburgh LCSs, such as controlling the so-called bloat effect and offering the human expert a set of concept description alternatives.
Ester Bernadó-Mansilla, Xavier Llorà, Ivan Traus

Multi-Objective Learning for Interpretability Improvement

Frontmatter
Simultaneous Generation of Accurate and Interpretable Neural Network Classifiers
Abstract
Generating machine learning models is inherently a multi-objective optimization problem. The two most common objectives are accuracy and interpretability, which very likely conflict with each other. While in most cases we are interested only in model accuracy, the interpretability of the model becomes the major concern if the model is used for data mining or is applied to critical applications. In this chapter, we present a method for simultaneously generating accurate and interpretable neural network models for classification using an evolutionary multi-objective optimization algorithm. Lifetime learning is embedded to fine-tune the weights within an evolution that mutates the structure and weights of the neural networks. The efficiencies of the Baldwin effect and Lamarckian evolution are compared. It is found that Lamarckian evolution outperforms the Baldwin effect in evolutionary multi-objective optimization of neural networks. Simulation results on two benchmark problems demonstrate that the evolutionary multi-objective approach is able to generate both accurate and understandable neural network models, which can be used for different purposes.
Yaochu Jin, Bernhard Sendhoff, Edgar Körner
GA-Based Pareto Optimization for Rule Extraction from Neural Networks
Abstract
The chapter presents a new method of rule extraction from trained neural networks, based on a hierarchical multiobjective genetic algorithm. The problems associated with rule extraction, especially its multiobjective nature, are described in detail, and techniques used when approaching them with genetic algorithms are presented. The main part of the chapter contains a thorough description of the proposed method. It is followed by a discussion of the results of experimental study performed on popular benchmark datasets that confirm the method’s effectiveness.
Urszula Markowska-Kaczmar, Krystyna Mularczyk
Agent Based Multi-Objective Approach to Generating Interpretable Fuzzy Systems
Abstract
Interpretable fuzzy systems are very desirable for human users studying complex systems. To this end, an agent-based multi-objective approach is proposed to generate interpretable fuzzy systems from experimental data. The proposed approach can not only generate interpretable fuzzy rule bases, but also optimize the number and distribution of fuzzy sets. The trade-off between accuracy and interpretability of the fuzzy systems derived from our agent-based approach is studied on several benchmark classification problems from the literature.
Hanli Wang, Sam Kwong, Yaochu Jin, Chi-Ho Tsang
Multi-objective Evolutionary Algorithm for Temporal Linguistic Rule Extraction
Abstract
Autonomous temporal linguistic rule extraction is an application of growing interest due to its relevance to both decision support systems and fuzzy controllers. In the presented work, rules are evaluated using three qualitative metrics based on their representation on the truth space diagram. The performance metrics are then treated as competing objectives, and a multiple-objective evolutionary algorithm is used to search for an optimal set of non-dominated rules. Novel techniques for data pre-processing and rule set post-processing are developed that deal directly with the delays involved in dynamic systems. Data collected from a simulated hot and cold water mixer and a two-phase vertical column is used to validate the proposed procedure.
Gary G. Yen
Multiple Objective Learning for Constructing Interpretable Takagi-Sugeno Fuzzy Model
Abstract
This chapter discusses the interpretability of Takagi-Sugeno (TS) fuzzy systems. A new TS fuzzy model, whose membership functions are characterized by linguistic modifiers, is presented. The tradeoff between global approximation and local model interpretation has been achieved by minimizing a multiple objective performance measure. In the proposed model, the local models match the global model well and the erratic behaviors of local models are remedied effectively. Furthermore, the transparency of partitioning of input space has been improved during parameter adaptation.
Shang-Ming Zhou, John Q. Gan

Multi-Objective Ensemble Generation

Frontmatter
Pareto-Optimal Approaches to Neuro-Ensemble Learning
Abstract
The whole is greater than the sum of the parts; this is the essence of using a mixture of classifiers instead of a single classifier. In particular, an ensemble of neural networks (which we call a neuro-ensemble) has attracted special attention in the machine learning literature. A set of trained neural networks is combined using a post-gate to form a single super-network. The three main challenges facing researchers in neuro-ensembles are: (1) which networks to include in, or exclude from, the ensemble; (2) how to define the size of the ensemble; and (3) how to define diversity within the ensemble.
Hussein Abbass
Trade-Off Between Diversity and Accuracy in Ensemble Generation
Abstract
Ensembles of learning machines have been formally and empirically shown to outperform (generalise better than) single learners in many cases. Evidence suggests that ensembles generalise better when their members form a diverse and accurate set. Diversity and accuracy are hence two factors that should be taken care of while designing ensembles in order for them to generalise better. There exists a trade-off between diversity and accuracy, and multi-objective evolutionary algorithms can be employed to tackle this issue to good effect. This chapter includes a brief overview of ensemble learning in general and presents a critique of the utility of multi-objective evolutionary algorithms for ensemble design. Theoretical aspects of a committee of learners, viz. the bias-variance-covariance decomposition and the ambiguity decomposition, are further discussed in order to support the importance of having both diversity and accuracy in ensembles. Some recent work and experimental results based on multi-objective learning of ensembles, considering classification tasks in particular, are then presented as we examine ensemble formation using neural networks and kernel machines.
Arjun Chandra, Huanhuan Chen, Xin Yao
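To make the ambiguity decomposition mentioned above concrete (a small numerical check under the standard Krogh-Vedelsby setting of a convex combination of regressors, not code from the chapter): the ensemble's squared error equals the weighted average member error minus the weighted average ambiguity, so more disagreement among members lowers the ensemble error for a fixed average member error.

    import numpy as np

    # hypothetical member predictions, combination weights, and target for one input
    preds = np.array([2.0, 2.5, 3.5])
    weights = np.array([1/3, 1/3, 1/3])
    target = 3.0

    ensemble = float(np.dot(weights, preds))
    ensemble_error = (ensemble - target) ** 2
    avg_member_error = float(np.dot(weights, (preds - target) ** 2))
    avg_ambiguity = float(np.dot(weights, (preds - ensemble) ** 2))

    # ambiguity decomposition: E = E_bar - A_bar (holds exactly)
    assert abs(ensemble_error - (avg_member_error - avg_ambiguity)) < 1e-12

This identity is what motivates treating accuracy and diversity as two objectives rather than folding them into a single score.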
Cooperative Coevolution of Neural Networks and Ensembles of Neural Networks
Abstract
Cooperative coevolution is a recent paradigm in the area of evolutionary computation focused on the evolution of coadapted subcomponents without external interaction. In cooperative coevolution a number of species are evolved together. The cooperation among the individuals is encouraged by rewarding the individuals according to their degree of cooperation in solving a target problem. The work on this paradigm has shown that cooperative coevolutionary models present many interesting features, such as specialization through genetic isolation, generalization and efficiency. Cooperative coevolution approaches the design of modular systems in a natural way, as the modularity is part of the model. Other models need some a priori knowledge to decompose the problem by hand. In most cases, either this knowledge is not available or it is not clear how to decompose the problem.
Nicolás García-Pedrajas
Multi-Objective Structure Selection for RBF Networks and Its Application to Nonlinear System Identification
Abstract
An evolutionary multiobjective optimization approach to RBF network structure determination is discussed in this chapter. Candidate RBF network structures are encoded into chromosomes in GAs, and they evolve toward the Pareto-optimal front defined by several objective functions concerning model accuracy and model complexity. Then, an ensemble of networks is constructed from the Pareto-optimal networks. We discuss its application to nonlinear system identification. Numerical simulation results indicate that the ensemble network is much more robust to outliers or a lack of data than a single network selected based on information criteria.
Toshiharu Hatanaka, Nobuhiko Kondo, Katsuji Uosaki
Fuzzy Ensemble Design through Multi-Objective Fuzzy Rule Selection
Abstract
The main advantage of evolutionary multi-objective optimization (EMO) over classical approaches is that a variety of non-dominated solutions with a wide range of objective values can be obtained simultaneously by a single run of an EMO algorithm. In this chapter, we show how this advantage can be utilized in the design of fuzzy ensemble classifiers. First we explain the three objectives in multi-objective formulations of fuzzy rule selection: one is accuracy maximization, and the other two are complexity minimization. Next we demonstrate that a number of non-dominated rule sets (i.e., fuzzy classifiers) are obtained along the accuracy-complexity tradeoff surface from multi-objective fuzzy rule selection problems. Then we examine the effect of combining multiple non-dominated fuzzy classifiers into a single ensemble classifier. Experimental results clearly show that the combination into ensemble classifiers improves the classification ability of the individual fuzzy classifiers for some data sets.
Hisao Ishibuchi, Yusuke Nojima
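A typical three-objective evaluation of a candidate rule set in this style of fuzzy rule selection (a hedged sketch; classify and the antecedents attribute are hypothetical stand-ins, not the chapter's code) maximizes the number of correctly classified training patterns while minimizing the number of rules and their total length:

    def rule_set_objectives(rules, classify, patterns, labels):
        # objective 1 (maximize): correctly classified training patterns
        correct = sum(1 for x, y in zip(patterns, labels) if classify(rules, x) == y)
        # objective 2 (minimize): number of fuzzy rules in the rule set
        n_rules = len(rules)
        # objective 3 (minimize): total rule length, i.e. total number of antecedent conditions
        total_length = sum(len(rule.antecedents) for rule in rules)
        return correct, n_rules, total_length

An EMO algorithm run on these three objectives returns a whole family of non-dominated rule sets, which is the pool from which the ensemble classifiers discussed above are built.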

Applications of Multi-Objective Machine Learning

Frontmatter
Multi-Objective Optimisation for Receiver Operating Characteristic Analysis
Abstract
Receiver operating characteristic (ROC) analysis is now a standard tool for the comparison of binary classifiers and the selection of operating parameters when the costs of misclassification are unknown.
Richard M. Everson, Jonathan E. Fieldsend
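As a brief refresher (generic ROC computation, not the chapter's multi-objective formulation): each decision threshold of a scoring classifier yields one (false positive rate, true positive rate) point, and sweeping the threshold traces out the ROC curve.

    import numpy as np

    def roc_points(scores, labels, thresholds):
        # one (false positive rate, true positive rate) point per decision threshold
        scores, labels = np.asarray(scores), np.asarray(labels)
        points = []
        for t in thresholds:
            pred = scores >= t
            tpr = float(np.mean(pred[labels == 1]))   # true positive rate
            fpr = float(np.mean(pred[labels == 0]))   # false positive rate
            points.append((fpr, tpr))
        return points

    # hypothetical classifier scores and binary ground-truth labels
    print(roc_points([0.9, 0.8, 0.4, 0.3], [1, 0, 1, 0], thresholds=[0.5, 0.35]))

Treating the false positive rate and true positive rate as competing objectives is what connects ROC analysis naturally to the multi-objective optimisation techniques discussed in this chapter.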
Multi-Objective Design of Neuro-Fuzzy Controllers for Robot Behavior Coordination
Abstract
This chapter discusses the behavioral learning of robots from the viewpoint of multiobjective design. Various coordination methods for multiple behaviors have been proposed to improve control performance and to manage conflicting objectives. We have proposed various learning methods for neuro-fuzzy controllers based on evolutionary computation and reinforcement learning. First, we introduce the supervised learning method and the evolutionary learning method for the multiobjective design of robot behaviors. Then, the multiobjective design of fuzzy spiking neural networks for robot behaviors is presented. The key point behind these methods is to realize the adaptability and reusability of behaviors through interactions with the environment.
Naoyuki Kubota
Fuzzy Tuning for the Docking Maneuver Controller of an Automated Guided Vehicle
Abstract
In some environments, mobile robots need to perform tasks in a precise manner, and good controllers are therefore required for these control tasks. In this work, we present a real-world application in the domain of multi-objective machine learning involving an Automated Guided Vehicle (AGV): a fork-lift truck that must often perform docking maneuvers to load pallets from conveyor belts. The main purpose is to improve features of the docking task such as its duration, accuracy and stability, while satisfying given constraints. We propose a machine learning technique based on a multi-objective evolutionary algorithm to find multiple fuzzy logic controllers that optimize specific objectives and satisfy the imposed constraints for the docking task, which consists of following an online-generated trajectory.
J.M. Lucas, H. Martinez, F. Jimenez
A Multi-Objective Genetic Algorithm for Learning Linguistic Persistent Queries in Text Retrieval Environments
Abstract
Persistent queries are a specific kind of query used in information retrieval systems to represent a user's long-term standing information need. These queries can take many different structures, the "bag of words" being the most commonly used. They can sometimes be formulated by the user, although this task is usually difficult, so the persistent query is often automatically derived from a set of sample documents the user provides.
María Luque, Oscar Cordón, Enrique Herrera-Viedma
Multi-Objective Neural Network Optimization for Visual Object Detection
Abstract
In real-time computer vision, there is a need for classifiers that detect patterns fast and reliably. We apply multi-objective optimization (MOO) to the design of feed-forward neural networks for real-world object recognition tasks, where computational complexity and accuracy define partially conflicting objectives. Evolutionary structure optimization and pruning are compared for the adaptation of the network topology. In addition, the results of MOO are contrasted to those of a single-objective evolutionary algorithm. As a part of the evolutionary algorithm, the automatic adaptation of operator probabilities in MOO is described.
Stefan Roth, Alexander Gepperth, Christian Igel
Backmatter
Metadata
Title
Multi-Objective Machine Learning
edited by
Yaochu Jin, Dr.
Copyright Year
2006
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-33019-6
Print ISBN
978-3-540-30676-4
DOI
https://doi.org/10.1007/3-540-33019-4
