
2013 | Book

Artificial Intelligence, Evolutionary Computing and Metaheuristics

In the Footsteps of Alan Turing


About this book

Alan Turing pioneered many research areas, such as artificial intelligence, computability, heuristics and pattern formation. Nowadays, in the information age, it is hard to imagine how the world would be without computers and the Internet. Without Turing's work, especially the core concept of the Turing machine at the heart of every computer, mobile phone and microchip today, so many things on which we are so dependent would be impossible.

2012 was the Alan Turing Year, a centenary celebration of the life and work of Alan Turing. To celebrate Turing's legacy and follow in the footsteps of this brilliant mind, we take this golden opportunity to review the latest developments in artificial intelligence, evolutionary computation and metaheuristics, all areas that can be traced back to Turing's pioneering work. Topics include the Turing test, Turing machines, artificial intelligence, cryptography, software testing, image processing, neural networks, nature-inspired algorithms such as the bat algorithm and cuckoo search, and multi-objective optimization, along with many applications. These reviews and chapters not only provide a timely snapshot of state-of-the-art developments, but also offer inspiration for young researchers to carry out potentially ground-breaking research in the active, diverse research areas of artificial intelligence, cryptography, machine learning, evolutionary computation, and nature-inspired metaheuristics.

This edited book can serve as a timely reference for graduates, researchers and engineers in artificial intelligence, computer science, computational intelligence, soft computing, optimization, and the applied sciences.

Table of contents

Frontmatter

Artificial Intelligence and Cryptography

Frontmatter
Turing Test as a Defining Feature of AI-Completeness
Abstract
The paper contributes to the development of the theory of AI-Completeness by formalizing the notions of AI-Complete, C-Complete and AI-Hard problems. The intended goal is to provide a classification of problems in the field of Artificial General Intelligence. We prove the Turing Test to be an instance of an AI-Complete problem, and further show certain AI problems to be AI-Complete or AI-Hard via polynomial-time reductions. Finally, the paper suggests some directions for future work on the theory of AI-Completeness.
Roman V. Yampolskiy
Artificial Intelligence Evolved from Random Behaviour: Departure from the State of the Art
Abstract
Since John McCarthy coined the term artificial intelligence in 1956, aiming to make a machine exhibit human-like intelligence in a visible future, there has been much discussion of whether this is possible in a true sense, and many intelligent machines have been reported. Nowadays, the term is ubiquitous in our community. In this chapter we discuss to what extent those proposed machine intelligences are actually intelligent. Starting with how we define intelligence, how we can measure it, and how well those measurements really represent intelligence, we survey Legg and Hutter's seminal paper on the formal definition of machine intelligence, name a few other definitions, and take a brief look at our own. We also consider a modern interpretation of the Turing test originally proposed in 1950. We then propose a benchmark to test how intelligent an application is, using an algorithm for stock market investment as an example. Finally, we consider how we might achieve human intelligence in a real sense in the foreseeable future, including an analysis of the IT changes stimulating artificial intelligence development.
Wiesław Pietruszkiewicz, Akira Imada
Turing: Then, Now and Still Key
Abstract
This paper looks at Turing's postulations about Artificial Intelligence in his paper 'Computing Machinery and Intelligence', published in 1950. It notes how accurate they were and how relevant they still are today. This paper notes the arguments and mechanisms that he suggested and tries to expand on them further. The paper, however, is mostly about describing the essential ingredients for building an intelligent model and the problems related to that. The discussion includes recent work by the author himself, who adds his own thoughts on the matter arising from a purely technical investigation into the problem. These are personal and quite speculative, but provide an interesting insight into the mechanisms that might be used for building an intelligent system.
Kieran Greer
Imitation Programming Unorganised Machines
Abstract
In 1948 Alan Turing presented a general representation scheme by which to achieve artificial intelligence: his unorganised machines. Further, at the same time as suggesting that natural evolution may provide inspiration for search, he noted that mechanisms inspired by the cultural aspects of learning may prove useful. This chapter presents results from an investigation into designing Turing's dynamical network representation with a new imitation-based, i.e., cultural, approach. Moreover, both the original synchronous form and an asynchronous form of unorganised machines are considered, along with their implementation in memristive hardware.
Larry Bull
Towards Machine Equivalent Consciousness
Abstract
Alan Turing's fundamental inquiry, "Can machines think?", has given rise to a wide variety of contrasting approaches to building intelligent machines. Thinking requires that a computer know what it processes and form consciousness of meaningful concepts, on the basis of which subjective mental activities (e.g. seeing, knowing, learning, judging, evaluating, deciding, reasoning, etc.) can be carried out. A modern computer runs trillions of operations per second and is capable of performing complex computation, but it still lacks self-awareness, a basic element of thinking. So, how can a machine gain conscious awareness from the bits of electronic signals it processes? This article explores whether generating self-awareness is possible through a mechanical procedure. Specifically, we examine patterns of human perception to identify a) what happens in the course of receiving external information and what outputs each sense produces; b) how such outputs are bound into a meaningful concept; and c) the nature of self-awareness. Our research suggests that conscious awareness is a perceived pattern of physical energy. We show that the process of gaining awareness can be simulated and mechanized.
Amy Wenxuan Ding
Multicriteria Models for Learning Ordinal Data: A Literature Review
Abstract
The Operations Research (OR) and Artificial Intelligence (AI) disciplines have been playing major roles in the design of new intelligent systems. Recently, different contributions from both fields have been made to the design of models for multi-criteria problems. The credit scoring problem is an example. In this problem, one evaluates how unlikely a client is to default on payments. Client profiles are evaluated, with the results expressed on an ordinal score scale (Excellent, Good, Fair, Poor). Intelligent systems then have to take into consideration different criteria, such as payment history, mortgages and wages, in order to produce their outcome. To achieve this goal, researchers have been developing models capable of handling these multiple criteria encompassed in ordinal data.
The literature presents a myriad of different methods in both the OR and AI fields for multi-criteria models. However, a description of ordinal data methods in these two major disciplines and of their relations has not yet been thoroughly conducted. It is key for further research to identify the developments made and the present state of the existing methods. It is also important to ascertain current achievements and what is required to attain intelligent systems capable of capturing relationships from data. In this chapter we describe techniques presented over more than five decades in the OR and AI disciplines applied to multi-criteria ordinal problems.
Ricardo Sousa, Iryna Yevseyeva, Joaquim F. Pinto da Costa, Jaime S. Cardoso
Diophantine and Lattice Cryptanalysis of the RSA Cryptosystem
Abstract
The RSA cryptosystem, invented in 1977, is the most popular public-key cryptosystem for electronic commerce. Its three inventors, Rivest, Shamir and Adleman, received the 2002 Turing Award, the equivalent of a Nobel Prize in computer science. RSA offers both encryption and digital signatures and is deployed in many commercial systems. The security of RSA is based on the assumption that factoring large integers is difficult. However, most successful attacks on RSA are not based on factoring. Rather, they exploit additional information that may be encoded in the parameters of RSA and in the particular way in which RSA is used. In this chapter, we give a survey of the mathematics of the RSA cryptosystem, focusing on the cryptanalysis of RSA using a variety of Diophantine methods and lattice-reduction-based techniques.
Abderrahmane Nitaj
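As a concrete illustration of the textbook RSA scheme surveyed above (not drawn from the chapter itself), here is a minimal round trip with tiny primes; real deployments use primes of a thousand bits or more, generated by a cryptographic library.

```python
# Toy RSA round trip with classic textbook parameters.
# Tiny primes for illustration only; never use such sizes in practice.
p, q = 61, 53                 # secret primes
n = p * q                     # public modulus: 3233
phi = (p - 1) * (q - 1)       # Euler's totient: 3120
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent (Python 3.8+): 2753

m = 65                        # plaintext encoded as an integer < n
c = pow(m, e, n)              # encryption: c = m^e mod n
assert pow(c, d, n) == m      # decryption: m = c^d mod n
```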
Artificial Intelligence Methods in Early Childhood Education
Abstract
Educational technology constitutes an important aspect of modern education, providing unique learning experiences to students and improving their learning. Technological resources (especially computers) have been integrated in education for decades. However, the integration of educational technology in early childhood education is a more recent trend compared to other levels of education. This fact creates the need to develop, apply and study resources and methodologies specifically addressed to young children. Artificial Intelligence approaches have been incorporated into educational technology resources, providing improved interaction to learners. In this paper, Artificial Intelligence methods exploited in the context of early childhood educational technology are surveyed. The discussion mainly concerns computer-based learning systems incorporating intelligent methods (e.g., Intelligent Tutoring and Adaptive Educational Hypermedia Systems) and educational robots addressed to early childhood. To the best of the author's knowledge, such issues have not been thoroughly discussed in the literature until now.
Jim Prentzas
Recursively Generated Evolutionary Turing Machines and Evolutionary Automata
Abstract
One of the roots of evolutionary computation was Turing's idea of unorganized machines. The goal of this paper is the development of foundations for evolutionary computation, connecting Turing's ideas and the contemporary state of the art in evolutionary computation. The theory of computation is based on mathematical models of computing automata, such as Turing machines or finite automata. In a similar way, the theory of evolutionary computation is based on mathematical models of evolutionary computing automata, such as evolutionary Turing machines or evolutionary finite automata. The goal of the chapter is to study computability in the context of the theory of evolutionary computation and genetic algorithms. We use basic models of evolutionary computation, such as different types of evolutionary machines, evolutionary automata and evolutionary algorithms, to explore the computing and accepting power of various kinds of evolutionary automata. However, we consider not only how evolutionary automata compute but also how they are generated, because a rigorous study of construction techniques for computational systems is an urgent demand of information processing technology. Generation schemas for evolutionary automata are studied and applied to computability problems.
Mark Burgin, Eugene Eberbach
On Dynamical Systems of Large Girth or Cycle Indicator and Their Applications to Multivariate Cryptography
Abstract
We survey special algebraic Turing machines designed for various cryptographic tasks, such as classical symmetric encryption, public-key algorithms, secure key exchange, and the development of hash functions. The security level of the related algorithms is based on the discrete logarithm problem (DLP) in the Cremona group of a free module over a finite commutative ring. In the case of symbolic computations with a "sufficiently large number of variables", the order of the generator (the base of the DLP) is impossible to evaluate, and we obtain a "hidden discrete logarithm problem". In the case of subgroups of the Cremona group, the DLP is closely connected with the following classical difficult mathematical problems:
(1) solving systems of nonlinear polynomial equations over finite fields and rings,
(2) finding the inverse of a bijective multivariate polynomial map.
The complexity of the Discrete Logarithm Problem depends heavily on the choice of base. Generation of a good "pseudorandom" base guarantees the high complexity of (1) and (2) and the security of algorithms based on the corresponding DLP. We use methods from the theory of special combinatorial time-dependent dynamical systems to construct special Turing machines that generate nonlinear DLP bases of large (or hidden) order and small degree.
Vasyl Ustimenko, Urszula Romańczuk
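To make the role of the base's order concrete, here is a toy brute-force discrete logarithm in the multiplicative group mod a small prime; this illustrates the underlying hardness assumption only, not the chapter's Cremona-group construction.

```python
def discrete_log(g, h, p):
    """Brute-force DLP: find x with g**x % p == h. Feasible only in tiny
    groups; security rests on this search being infeasible when the order
    of the base g is large (or hidden)."""
    x, cur = 0, 1
    while cur != h:
        cur = cur * g % p
        x += 1
        if x > p:
            raise ValueError("no solution")
    return x

print(discrete_log(3, 13, 17))  # 3^4 = 81 = 13 (mod 17), so x = 4
```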
On Extremal Graph Theory, Explicit Algebraic Constructions of Extremal Graphs and Corresponding Turing Encryption Machines
Abstract
We survey recent results on the applications of extremal graph theory to cryptography. Classical extremal graph theory contains the Erdős Even Circuit Theorem and other remarkable results on the maximal size of graphs without certain cycles. A finite automaton is roughly a directed graph with labels on its directed arrows. The most important advantage of a Turing machine over a finite automaton is the existence of "potentially infinite memory". In terms of finite automata theory, a Turing machine is an infinite sequence of directed graphs with colours on the arrows. This motivates the study of infinite families of extremal directed graphs without certain commutative diagrams. The explicit construction of simple and directed graphs of large girth (or large cycle indicator) corresponds to efficient Turing encryption machines.
Vasyl Ustimenko, Urszula Romańczuk
AIML Knowledge Base Construction from Text Corpora
Abstract
Text mining (TM) and computational linguistics (CL) are computationally intensive fields in which many tools are becoming available to study large text corpora and exploit the use of corpora for various purposes. In this chapter we address the problem of building conversational agents, or chatbots, from corpora for domain-specific educational purposes. After addressing some linguistic issues relevant to the development of chatbot tools from corpora, a methodology to systematically analyze large text corpora about a limited knowledge domain is presented. Taking the Artificial Intelligence Markup Language (AIML) as the "assembly language" for artificial intelligence conversational agents, we present a way of using text corpora as a seed from which a set of "source files" can be derived. More specifically, we illustrate how to use corpus data to extract relevant keywords, multiword expressions, glossaries and text patterns in order to build an AIML knowledge base that can later be used to build interactive conversational systems. The approach we propose does not require deep understanding techniques for the analysis of text.
As a case study, we show how to build the knowledge base of an English conversational agent for educational purposes from a children's story, an agent that can answer questions about the characters, facts and episodes of the story. A discussion of the main linguistic and methodological issues and further improvements is offered in the final part of the chapter.
Giovanni De Gasperis, Isabella Chiari, Niva Florio
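As a rough illustration of the kind of "source file" such a pipeline emits, a mined question pattern and its answer can be rendered as one AIML category; the pattern/answer pair and the helper below are invented for this sketch, not taken from the chapter.

```python
def aiml_category(pattern, template):
    """Render one AIML <category> from a (pattern, answer) pair mined
    from a corpus; AIML patterns are conventionally upper-case."""
    return (f"<category><pattern>{pattern.upper()}</pattern>"
            f"<template>{template}</template></category>")

# Hypothetical pair extracted from a children's story corpus.
print(aiml_category("who is the wolf",
                    "The wolf is the villain of the story."))
```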
Multidisciplinary Trends in Modern Artificial Intelligence: Turing’s Way
Abstract
The paper takes on the challenge of generalizing existing trends and approaches in the field of artificial intelligence. Under consideration are expert systems, dynamic neural networks, probabilistic reasoning, fuzzy logic, genetic algorithms, multi-agent systems, bio-inspired algorithms, distributed nonlinear computing, and chaos-driven pattern recognition. Each approach's strengths and limitations are stated without exhaustive treatment, so as to involve specialists from adjacent fields in the discussion. The most promising research directions are revealed and analyzed in reference to Turing's way in artificial intelligence and beyond.
Elena N. Benderskaya, Sofya V. Zhukova
An Overview of Computational Sparse Models and Their Applications in Artificial Intelligence
Abstract
Computational sparse models are drawing more and more attention in a wide range of scientific communities, including statistical signal processing and machine learning. Their prominent goal is to reveal the sparse structure or correlation among redundant data by computational approaches, e.g. convex optimization and probabilistic inference. The main scope of this chapter is to review the state-of-the-art sparse models and discuss their applications in the field of artificial intelligence. After a brief introduction to the general idea of sparse computation, the bulk of the chapter is split into three core sections on sparse signal optimization, low-rank matrix completion and low-rank structure learning. These three parts respectively correspond to sparse models for the vector case, the matrix case and the combination of both. In order to effectively solve the sparse models reviewed in this chapter, we unify their solutions in the general framework of the proximal gradient algorithm, a benchmark method for convex optimization with a quadratic term. Besides, in each section, after the theoretical discussion, some interesting applications of the model are presented. Some of these applications are from other researchers' and our own previous publications, and some are newly proposed in this book chapter.
Yue Deng, Qionghai Dai, Zengke Zhang
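For the vector case, a minimal sketch of the proximal gradient idea is the classic ISTA iteration for l1-regularized least squares, where the proximal operator of the l1 norm is elementwise soft-thresholding; this is an illustrative baseline, not the chapter's exact algorithms.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1: elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)         # gradient of the smooth quadratic term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Recover a 5-sparse vector from 50 random measurements of dimension 100.
A = np.random.default_rng(0).standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = 3.0
print(ista(A, A @ x_true, lam=0.1)[:8].round(2))  # support should emerge
```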
MiTS in Depth: An Analysis of Distinct Tabu Search Configurations for Constructing Mixed Covering Arrays
Abstract
Alan Turing's work is related to the first use of heuristic algorithms. His work on breaking the Nazi Enigma cipher relied on a guided search whose expected result, most of the time, was the deciphering of the codes, even though it might sometimes fail. This idea reflects the modern meaning of a heuristic, and it represents the main connection with this chapter, which involves the use of metaheuristics to guide the search toward a solution faster, or toward a better solution of a problem. The metaheuristic is Tabu Search (TS), and it is used to solve the Mixed Covering Array Problem (MCAP). This problem focuses on the construction of optimal test sets for software testing. The metaheuristic is designed through a fine-tuning process that involves the parameters: initialization function, tabu list size, stop criterion, and neighborhood functions. The contributions are: a) a more robust fine-tuning process to design a new TS approach; b) the analysis of the parameter values of the TS; and c) new bounds over a benchmark reported in the literature.
Loreto Gonzalez-Hernandez, Jose Torres-Jimenez, Nelson Rangel-Valdez
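A minimal tabu search skeleton over bit vectors, sketched here only to show where the four tuned parameters enter; the cost function and all values are placeholders, and the chapter's MCAP-specific design is richer.

```python
import random

def tabu_search(cost, n_bits, tabu_size=10, n_iter=1000):
    """Generic tabu search with a bit-flip neighborhood."""
    x = [random.randint(0, 1) for _ in range(n_bits)]  # initialization function
    best, best_cost = x[:], cost(x)
    tabu = []                                          # recently flipped bits
    for _ in range(n_iter):                            # stop criterion
        moves = [i for i in range(n_bits) if i not in tabu]
        # best non-tabu bit flip (neighborhood function)
        i = min(moves, key=lambda j: cost(x[:j] + [1 - x[j]] + x[j+1:]))
        x[i] = 1 - x[i]
        tabu.append(i)
        if len(tabu) > tabu_size:                      # tabu list size
            tabu.pop(0)
        if cost(x) < best_cost:
            best, best_cost = x[:], cost(x)
    return best, best_cost

# Toy cost: find a 16-bit vector whose bits sum to 5.
print(tabu_search(lambda x: abs(sum(x) - 5), n_bits=16))
```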

Evolutionary Computation and Metaheuristics

Frontmatter
Metaheuristic Optimization: Nature-Inspired Algorithms and Applications
Abstract
Turing's pioneering work in heuristic search has inspired many generations of research in heuristic algorithms. In the last two decades, metaheuristic algorithms have attracted strong attention in scientific communities, with significant developments especially in swarm intelligence based algorithms. In this work, we briefly review some of the important achievements in metaheuristics, and we also discuss key implications for applications and topics for further research.
Xin-She Yang
Bat Algorithm and Cuckoo Search: A Tutorial
Abstract
Nature-inspired metaheuristic algorithms have attracted much attention in the last decade, and new algorithms have emerged almost every year with a vast, ever-expanding literature. In this chapter, we briefly review two recent metaheuristics for global optimization: the bat algorithm and cuckoo search. The bat algorithm was proposed by Xin-She Yang in 2010, inspired by the echolocation of microbats, while cuckoo search was developed by Xin-She Yang and Suash Deb in 2009, inspired by the brood parasitism of some cuckoo species. Both algorithms have shown superiority over many other metaheuristics across a wide range of applications.
Xin-She Yang
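A heavily simplified sketch of the bat algorithm's core loop (frequency-tuned velocity update plus a loudness-gated acceptance), with loudness and pulse rate held fixed rather than annealed as in the full published algorithm:

```python
import numpy as np

def bat_algorithm(f, dim, n_bats=20, n_iter=200, fmin=0.0, fmax=2.0):
    """Simplified bat algorithm minimizing f over [-5, 5]^dim."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n_bats, dim))    # bat positions
    v = np.zeros((n_bats, dim))              # bat velocities
    fit = np.array([f(xi) for xi in x])
    best = x[fit.argmin()].copy()
    loudness, pulse_rate = 0.9, 0.5          # fixed here; annealed in the original
    for _ in range(n_iter):
        for i in range(n_bats):
            freq = fmin + (fmax - fmin) * rng.random()   # echolocation frequency
            v[i] += (x[i] - best) * freq
            cand = x[i] + v[i]
            if rng.random() > pulse_rate:                # local walk around best
                cand = best + 0.01 * rng.standard_normal(dim)
            if f(cand) < fit[i] and rng.random() < loudness:
                x[i], fit[i] = cand, f(cand)
                if fit[i] < f(best):
                    best = x[i].copy()
    return best

print(bat_algorithm(lambda z: float(np.sum(z**2)), dim=2))  # should approach [0, 0]
```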
Memory and Learning in Metaheuristics
Abstract
The rapid increase in the dimensions and complexity of real-life problems makes it more difficult to find optimal solutions by traditional optimization methods. This challenge requires intelligent and sophisticated algorithms that make the right decisions given a set of inputs and a variety of possible actions. In the problem-solving arena, this capability goes by the name of artificial intelligence. Artificial intelligence emerges in metaheuristics via memory and learning in the algorithms. Metaheuristics are promising approaches that can find near-optimal solutions in an acceptable amount of time. Many successful metaheuristics employ "intelligent" procedures to obtain high-quality solutions for discrete optimization problems. To demonstrate the contribution of memory and learning to metaheuristics, Estimation of Distribution Algorithms are incorporated as a memory and learning mechanism into Meta-RaPS (Meta-heuristic for Randomized Priority Search), which is classified as a memoryless metaheuristic. The 0-1 multidimensional knapsack problem is used to evaluate the "intelligence" of the new algorithm.
Arif Arin, Ghaith Rabadi
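A minimal univariate Estimation of Distribution Algorithm (PBIL-style) for a single-constraint 0-1 knapsack illustrates the memory idea: the probability vector is the learned model, nudged toward elite samples each generation. This sketches the mechanism only, not the chapter's Meta-RaPS hybrid or the multidimensional problem.

```python
import numpy as np

def eda_knapsack(values, weights, capacity, pop=100, n_gen=200, lr=0.1):
    """PBIL-style EDA for the 0-1 knapsack problem."""
    rng = np.random.default_rng(1)
    n = len(values)
    p = np.full(n, 0.5)                   # learned item-inclusion probabilities
    def fitness(x):
        return values @ x if weights @ x <= capacity else 0.0
    best, best_fit = None, -1.0
    for _ in range(n_gen):
        sample = (rng.random((pop, n)) < p).astype(float)
        fits = np.array([fitness(s) for s in sample])
        elite = sample[np.argsort(fits)[-pop // 10:]]  # top 10% of samples
        p = (1 - lr) * p + lr * elite.mean(axis=0)     # memory: shift model to elites
        if fits.max() > best_fit:
            best_fit, best = fits.max(), sample[fits.argmax()]
    return best, best_fit

values = np.array([10.0, 7.0, 4.0, 9.0])
weights = np.array([5.0, 4.0, 2.0, 6.0])
print(eda_knapsack(values, weights, capacity=10.0))  # optimum picks items 0 and 1
```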
On Some Aspects of Nature-Based Algorithms to Solve Multi-Objective Problems
Abstract
This chapter presents an overview of various nature-based algorithms to solve multi-objective problems, with particular emphasis on Multi-Objective Evolutionary Algorithms based on the Genetic Algorithm. Significant hybridizations and modifications of the benchmark algorithms are also discussed. Complexity issues are outlined, and various test problems used to show the effectiveness of such algorithms are summarized. At the end, a brief discussion of the software packages used to model these types of algorithms is presented.
Susmita Bandyopadhyay, Ranjan Bhattacharya
Image Processing with Spiking Neuron Networks
Abstract
Artificial neural networks have been well developed so far. The first two generations of neural networks have had a lot of successful applications. Spiking Neuron Networks (SNNs) are often referred to as the third generation of neural networks, which have the potential to solve problems related to biological stimuli. They derive their strength and interest from an accurate modeling of synaptic interactions between neurons, taking into account the time of spike emission.
SNNs exceed the computational power of neural networks made of threshold or sigmoidal units. Based on dynamic event-driven processing, they open up new horizons for developing models with an exponential capacity for memorizing and a strong ability for fast adaptation. Moreover, SNNs add a new dimension, the temporal axis, to the representation capacity and the processing abilities of neural networks. In this chapter, we present how SNNs can be applied with efficacy to image clustering, segmentation and edge detection. The results obtained confirm the validity of the approach.
Boudjelal Meftah, Olivier Lézoray, Soni Chaturvedi, Aleefia A. Khurshid, Abdelkader Benyettou
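The basic unit behind such networks can be sketched as a leaky integrate-and-fire neuron, whose output is a train of spike times rather than a real-valued activation; this is the textbook model, not the chapter's specific network.

```python
def lif_spike_train(current, dt=1e-3, tau=0.02, v_rest=0.0, v_th=1.0):
    """Leaky integrate-and-fire neuron: spike times for an input current."""
    v, spikes = v_rest, []
    for step, i_in in enumerate(current):
        v += dt / tau * (v_rest - v + i_in)  # leaky membrane integration
        if v >= v_th:                        # threshold crossing emits a spike
            spikes.append(step * dt)
            v = v_rest                       # reset after firing
    return spikes

# A constant supra-threshold current yields a regular spike train; the
# information is carried by the spike times, not by one output value.
print(lif_spike_train([1.5] * 200))
```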
Circle Detection on Images Using Learning Automata
Abstract
Turing's seminal work, originally proposed as a simple operational definition of intelligence, has led to several computer applications for solving complex engineering problems such as object detection and pattern recognition. Among such problems, circle detection over digital images has received considerable attention from the computer vision community in the last few years. This chapter presents an algorithm for the automatic detection of circular shapes in complicated and noisy images without relying on conventional Hough transform principles. The proposed algorithm is based on Learning Automata (LA), a probabilistic optimization method that explores an unknown random environment by progressively improving performance via a reinforcement signal. The approach encodes three non-collinear points as a candidate circle over the edge image. A reinforcement signal indicates whether such candidate circles are actually present in the edge map. Guided by the values of this reinforcement signal, the probability set of the encoded candidate circles is modified through the LA algorithm so that they fit the actual circles on the edge map. Experimental results over several complex synthetic and natural images validate the efficiency of the proposed technique regarding accuracy, speed and robustness.
Erik Cuevas, Fernando Wario, Daniel Zaldivar, Marco Pérez-Cisneros
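The geometric core of that encoding, recovering the unique circle through three non-collinear edge points, can be sketched directly; this is standard circumcenter algebra, independent of the LA machinery.

```python
def circle_from_points(p1, p2, p3):
    """Center and radius of the circle through three non-collinear points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        raise ValueError("points are collinear")
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    r = ((x1 - ux)**2 + (y1 - uy)**2) ** 0.5
    return (ux, uy), r

# The unit circle through (1,0), (0,1), (-1,0): center (0,0), radius 1.
print(circle_from_points((1, 0), (0, 1), (-1, 0)))
```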
Decision Incorporation in Meta-heuristics to Cope with Decision Scheduling Problems
Abstract
The halting problem is one of Turing's most important discoveries. It is a decision problem, and it consists of deciding whether a given program P with some input data will halt or run forever. Turing proved this problem to be undecidable, which means that no general algorithm to solve it exists. In this paper, we show an application of this problem where the program P is a meta-heuristic technique and the input data is a decision scheduling problem. Further, we describe an efficient technique to solve the halting problem in this restricted application case.
Yacine Laalaoui, R. B. Ahmad
Evolutionary Models for Agent-Based Complex Behavior Modeling
Abstract
In this chapter, the essentials of the genetic algorithm (GA) are introduced, following in the footsteps of Turing. We introduce the connection between Turing's early ideas of organized machines and modern evolutionary computation. We mainly discuss GA applications to adaptive complex system modeling. We study the agent-based market, where collective behaviors are regarded as aggregations of individual behaviors. A complex collective behavior can be decomposed into aggregations of several groups of agents following different game-theoretic strategies. Complexity emerges from the collaboration and competition of these agents. The parameters governing agent behaviors can be optimized by a GA to predict future collective behaviors based on historical data. A GA can also be used in designing market mechanisms, by optimizing agent behavior parameters to obtain the most efficient market. Experimental results show the effectiveness of both models. Using evolutionary models may help us gain more insight into understanding complex adaptive systems.
Zengchang Qin, Yingsai Dong, Tao Wan
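A minimal real-coded GA of the kind that could tune such agent parameters, using truncation selection, uniform crossover and Gaussian mutation; every choice here is a placeholder, not the chapter's configuration.

```python
import random

def genetic_algorithm(fitness, n_params, pop=50, gens=100, mut=0.1):
    """Minimal real-coded GA maximizing `fitness` over parameter vectors."""
    P = [[random.uniform(-1, 1) for _ in range(n_params)] for _ in range(pop)]
    for _ in range(gens):
        elite = sorted(P, key=fitness, reverse=True)[: pop // 5]  # top 20%
        children = []
        while len(elite) + len(children) < pop:
            a, b = random.sample(elite, 2)
            child = [x if random.random() < 0.5 else y     # uniform crossover
                     for x, y in zip(a, b)]
            child = [x + random.gauss(0, 0.1) if random.random() < mut else x
                     for x in child]                        # Gaussian mutation
            children.append(child)
        P = elite + children
    return max(P, key=fitness)

# Toy check: recover the parameter vector closest to (0.5, -0.2).
print(genetic_algorithm(lambda v: -((v[0] - 0.5)**2 + (v[1] + 0.2)**2), 2))
```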
Bankruptcy Prediction for Banks: An Artificial Intelligence Approach to Improve Understandability
Abstract
Artificial Intelligence (AI) is a prominent field within Computer Science whose main goal is automatic problem solving. Some of the foundations of this area were established by Alan M. Turing in his two seminal papers on machine intelligence [39] and [40]. Machine Learning (ML) is an important branch of AI which is currently in an intensive stage of development due to its wide range of applications. In particular, ML techniques have recently gained recognition in finance, since they are capable of producing useful models. However, the difficulty, and even the impossibility, of interpreting these models has limited the use of ML techniques in problems where interpretability is an important issue. Bankruptcy prediction for banks is a task which demands understandability of the solution. Furthermore, the analysis of the features (input variables) used to create prediction models provides better knowledge about the conditions which may trigger bank defaults. The selection of meaningful features before executing the learning process is beneficial, since it reduces the dimensionality of the data by decreasing the size of the hypothesis space; as a result, a compact representation is obtained which is easier to interpret. The main contributions of this work are: first, the use of the evolutionary technique called Multi-Population Evolving Decision Rules (MP-EDR) to determine the relevance of features from Federal Deposit Insurance Corporation (FDIC) data for predicting bank bankruptcy; second, the representation of the features' relevance by means of a network built from the rules and conditions produced by MP-EDR. Such a representation is useful for disentangling the relationships between features in the model, and it is aided by metrics which measure the relevance of those features.
Alma Lilia Garcia-Almanza, Biliana Alexandrova-Kabadjova, Serafin Martinez-Jaramillo
Neural Network Based Approaches for Network Traffic Prediction
Abstract
In this chapter, we review learning strategies for neural networks such as the MLP (Multilayer Perceptron), RBF (Radial Basis Function) and recurrent networks applied to computer network traffic prediction. That is, the considered neural networks and training algorithms are used to predict the traffic volume of a computer network. Some methods of improving the prediction performance of neural networks are also considered, such as the application of the Wavelet Transform. We discuss using the Wavelet Transform in the supervised training of neural networks by decomposing the traffic process into approximation and detail processes. We present results involving the application of the Orthogonal Least Squares (OLS) algorithm in RBF networks for traffic prediction. Regarding recurrent neural networks, we verify their traffic prediction performance when trained with the Extended Kalman Filter (EKF) and RTRL (Real Time Recurrent Learning). Real network traffic traces are used in the simulations in order to verify the prediction performance of the neural network algorithms.
Flávio Henrique Teles Vieira, Victor Hugo Teles Costa, Bruno Henrique Pereira Gonçalves
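The common setup behind all of these predictors is one-step-ahead regression on sliding windows of the trace. Here is a minimal sketch with a small MLP; the synthetic trace and scikit-learn's MLPRegressor are stand-ins, since the chapter uses real measured traces and its own training algorithms.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def sliding_windows(series, lag):
    """Turn a 1-D trace into (last `lag` samples -> next sample) pairs."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    return X, np.array(series[lag:])

# Synthetic "traffic" trace standing in for a measured one.
t = np.arange(2000)
trace = 10 + np.sin(t / 50) + 0.1 * np.random.default_rng(0).standard_normal(2000)

X, y = sliding_windows(trace, lag=8)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:1500], y[:1500])           # train on the first 1500 windows
print(model.score(X[1500:], y[1500:]))  # R^2 on held-out windows
```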
Application of Bat Algorithm and Fuzzy Systems to Model Exergy Changes in a Gas Turbine
Abstract
Exergy analysis plays a major role in thermal systems. Using exergy, apart from identifying components with potential for further improvement, fault detection and diagnosis, performance optimization, and environmental impact assessment can be conducted. This chapter addresses the use of fuzzy systems for modeling exergy destruction in the main components of an industrial gas turbine. The details include: (i) the system description and the challenges in developing first-principle models, (ii) thermodynamic models for part-load and full-load operating conditions, (iii) a model identification technique that uses fuzzy systems and a meta-heuristic nature-inspired algorithm called the Bat Algorithm, (iv) validation graphs for the semi-empirical models, and (v) a validation test for the fuzzy models. In validating the fuzzy model, the inputs to the model are taken to be the same as the inputs experienced by the gas turbine generator. Comparison tests between actual data and predictions demonstrate how promising the combined method is compared to the separate use of fuzzy systems trained by a heuristic approach.
A. L. Tamiru, F. M. Hashim
A KBRL Inference Metaheuristic with Applications
Abstract
In this chapter we propose an inference metaheuristic for Kernel-Based Reinforcement Learning (KBRL) agents, i.e. agents that operate in a continuous-state MDP. The metaheuristic is proposed in the simplified case of greedy-policy RL agents with no receding horizon, which perform online learning in an environment where feedback is generated by an ergodic and stationary source. We propose two inference strategies: isotropic discrete choice and anisotropic optimization, the former focused on speed and the latter on generalization capability. We cast the problem of classification as an RL problem and test the proposed metaheuristic in two experiments: an image recognition experiment on the Yale Faces database and a synthetic data set experiment. We propose a set of inference filters which increase the vigilance of the agent, and we show that they can prevent the agent from taking erroneous actions in an unknown environment. Two parallel inference algorithms are tested and illustrated in cluster and GPU implementations.
Laurentiu Bucur, Adina Florea, Catalin Chera
Multi-objective Simulated Annealing Algorithm for Partner Selection in Virtual Enterprises
Abstract
A Virtual Enterprise (VE) is a temporary alliance of autonomous enterprises formed to act together to share skills or core competencies and resources in order to respond to a market opportunity. The success of a VE strongly depends on its composition, so partner selection can be considered the most important problem in VEs. This paper presents and solves a model for the partner selection problem in VEs that considers two main evaluation criteria: project completion time and total cost. To do so, the paper uses a multi-objective algorithm, namely Pareto Simulated Annealing (PSA). Results show improved performance of PSA compared to the tabu search algorithm used in a recent study.
Hisham M. Abdelsalam, Amany M. Mohamed
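Two ingredients of Pareto Simulated Annealing can be sketched compactly: the dominance test between objective vectors (here time and cost, both minimized) and a temperature-controlled acceptance rule for non-dominating moves. This is a simplification under an assumed weighted-scalarization acceptance; the published PSA also adapts the weights and maintains an archive of non-dominated solutions.

```python
import math, random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def accept(f_old, f_new, temp, weights):
    """Always accept dominating moves; otherwise accept with a Boltzmann
    probability on the weighted change in (time, cost)."""
    if dominates(f_new, f_old):
        return True
    delta = sum(w * (n - o) for w, n, o in zip(weights, f_new, f_old))
    return random.random() < math.exp(-max(delta, 0.0) / temp)

print(accept((12.0, 300.0), (10.0, 280.0), temp=1.0, weights=(0.5, 0.5)))  # True
```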
Metaheuristic Approaches for the Winner Determination Problem in Combinatorial Auction
Abstract
Many problems in combinatorial optimization are NP-hard. This has forced researchers to explore meta-heuristic techniques for dealing with this class of complex problems and finding an acceptable solution in reasonable time.
In this chapter, we are interested in the winner determination problem (WDP) in combinatorial auctions (CAs). A CA is an auction that allocates a set of many goods to bidders in the presence of substitutes and complements. The winner determination problem is a complex problem: it is the problem of finding winning bids that maximize the auctioneer's revenue under the constraint that each good can be allocated to at most one bidder.
This chapter presents a computational study of four well-known meta-heuristics (stochastic local search, tabu search, genetic algorithms and memetic algorithms) for solving the WDP. The purpose of this study is to evaluate the performance of each technique in solving the WDP in combinatorial auctions. The different methods are evaluated on various benchmark problems and compared with hybrid simulated annealing (SAGII) and Casanova. The computational experiments show that, in general, the metaheuristic approaches provide competitive results and find good-quality solutions.
Dalila Boughaci
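To fix ideas, the WDP's constraint structure can be seen in a tiny greedy baseline that sorts bids by price per good and accepts any bid whose goods are all still free. This is a starting-point heuristic only; the chapter's metaheuristics search far beyond such greedy allocations.

```python
def greedy_wdp(bids):
    """Winner determination baseline: bids are (price, set_of_goods) pairs."""
    taken, revenue, winners = set(), 0.0, []
    order = sorted(range(len(bids)),
                   key=lambda i: bids[i][0] / len(bids[i][1]), reverse=True)
    for i in order:
        price, goods = bids[i]
        if taken.isdisjoint(goods):   # each good goes to at most one bidder
            taken |= goods
            revenue += price
            winners.append(i)
    return winners, revenue

# Three bids over goods {a, b, c}: greedy takes bids 0 and 2, revenue 18.
bids = [(10.0, {"a"}), (12.0, {"a", "b"}), (8.0, {"b", "c"})]
print(greedy_wdp(bids))
```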
Backmatter
Metadata
Title
Artificial Intelligence, Evolutionary Computing and Metaheuristics
Edited by
Xin-She Yang
Copyright year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-29694-9
Print ISBN
978-3-642-29693-2
DOI
https://doi.org/10.1007/978-3-642-29694-9