A sociologically inspired heuristic for optimization algorithms: A case study on ant systems
Highlights
► Social network theory can provide optimization algorithms with social heuristics.
► Social Ant-Q: combine theory from different fields to build social structures.
► Coordination mechanism through autonomous behavior, without a need for central coordination.
► Social interaction used to improve coordination between agents.
► Social interaction is fundamental for the development of collective behavior.
Introduction
The collective behavior of social groups has inspired the development of computational models for generating solutions to optimization problems. This behavior results from patterns of interaction between individuals in a population, and is not merely a property of a single control system but is a consequence of individuals’ behavior. In such a system, each individual’s structure is relatively simple, but complex social structures emerge from their collective behavior.
A social structure is built from social behavior and interactions, and changes in this structure produce effects on all individuals, eliminating their idiosyncratic behavior. Social interaction is the main driver in the construction of a social structure, which shows how individuals are related. The social structure reflects the characteristics and behavior of individuals, which in turn can influence or modify the structure. Interaction is necessary because most social systems contain processes by which individuals adapt, and such processes are needed if the collective behavior of the group as a whole is to evolve.
Social systems are mainly characterized by the relationships between a population’s individuals who interact, giving rise to patterns of behavior that improve the coordination of the group when taking collective decisions (Ribeiro, 2010). Individuals in a multi-agent system, denoted agents, usually learn from their own behavior and from the influence of others, improving their utility whilst interacting with the stronger agents in the group. The social structure can be determined by the superposition of the ‘best’ agents in the group, where the stronger agents influence the others through individual or collective reinforcement, following the principles of the theory of social impact (Latané, 1981) and learning by experience (Sutton & Barto, 1998).
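The theory of social impact cited above is often summarized by a multiplicative rule: the impact felt by a target grows with the strength and immediacy of the influencing sources, and as a power t < 1 of their number, so each additional source adds less than the previous one. A toy sketch of this rule (the exponent and parameterization are illustrative, not taken from the paper):

```python
def social_impact(strength, immediacy, n_sources, t=0.5):
    """Latane-style social impact: multiplicative in source strength and
    immediacy, growing sublinearly (t < 1) with the number of sources."""
    return strength * immediacy * (n_sources ** t)

# Quadrupling the number of sources only doubles the impact when t = 0.5.
assert social_impact(1.0, 1.0, 4) == 2.0
assert social_impact(1.0, 1.0, 16) == 4.0
```

This sublinear growth is what lets a few strong agents dominate the influence on the group without the impact diverging as the population grows.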
From the behavior of agents and the application of this theory, it is possible to identify coherent social structures and patterns of behavior in a complex system, describing who interacts with whom, and the frequency and intensity of their interactions (Ribeiro, 2010). The social structure emerges without a central system of coordination but through the sociability of agents in their own local behavior patterns.
In a multi-agent system, agents need to interact and coordinate with one another to carry out tasks. Coordination between agents helps to avoid redundant solutions, inconsistent execution, wasted resources and deadlock. In this context, models of coordination based on the collective behavior of social groups, e.g., swarm intelligence (Beni and Wang, 1989, Eberhart and Kennedy, 1995), have been shown to be capable of solving complex problems involving social and individual behavior.
The paradigm of coordination based on swarm intelligence has been studied intensively in recent decades (Annaluru et al., 2004, Bastos Filho et al., 2008, Bean and Costa, 2005, Dorigo and Stützle, 2010, Engelbrecht, 2010, Gambardella and Dorigo, 1995, Guntsch and Middendorf, 2001, Kennedy and Eberhart, 1995, Lim et al., 2006, Shyu et al., 2006, Su et al., 2005). This paradigm was suggested by colonies of social insects, leading to computational systems which reproduce the behavior used to solve collective problems in colonies of ants, bees, fish or wasps.
Social insects in a swarm act locally but must satisfy the global objective of the system, forming a community of agents. The community can be formed by a set of agents whose connectivities show their social relationships. This description is also shown to be the underlying concept of social networks (Wasserman & Faust, 1994). Social networks may consist of a set of individuals, computers, organizations or computational elements connected by some kind of relationship. A group of persons, for example, may be linked together through friendship, acquaintance, parentage or work, just as insects in a swarm can be related by spatial proximity, type of specialization or mutual ability.
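Such a community can be represented directly as a weighted graph whose nodes are agents and whose edge weights record the intensity of their relationship. A minimal sketch of this representation (the agent names and weights below are hypothetical):

```python
# A social structure as a weighted graph: nodes are agents, edge weights
# record the intensity of their interactions (all values hypothetical).
network = {
    "a1": {"a2": 3, "a3": 1},
    "a2": {"a1": 3},
    "a3": {"a1": 1},
}

def strongest_partner(network, agent):
    """Return the neighbour this agent interacts with most intensely."""
    neighbours = network[agent]
    return max(neighbours, key=neighbours.get)

assert strongest_partner(network, "a1") == "a2"
```

The same structure describes a friendship network or a swarm related by spatial proximity; only the meaning of the edge weight changes.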
Thus, methods of swarm intelligence and reinforcement learning, together with models of social structure, are based on principles that are often complementary and that allow the impact of established relationships to be observed.
Each agent in a multi-agent system is influenced by the others; so far, however, there has been only limited research on how this process can be formalized, or on the construction of dynamic social structures for decision-making that improve the coordination and distributed-learning methods available in the literature. Another important issue concerns the use of social-network principles to improve the adaptation of agents who become coordinated through reinforcement. The purpose of the research described here is to answer the following questions, which at present remain open: (i) how can a social structure be constructed from knowledge acquired by agents through their interactions? (ii) how can principles of social interaction be used to improve coordination between agents within the social structure? (iii) how can these influences be formalized? and (iv) how can the relevant agents be identified?
These issues are approached in this paper by developing a method using such principles to establish a social structure from interactions between agents in a multi-agent system, leading to improvement in algorithms based on swarms or reinforcement learning. The method is termed Social Ant-Q (SAnt-Q) (Ribeiro, 2010) and is one of the main contributions of this paper.
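SAnt-Q builds on the Ant-Q value update of Gambardella and Dorigo (1995), in which the value of an edge is blended with a reinforcement plus the discounted best value reachable from the next state. A minimal sketch of that update (the variable names, parameter values and tiny example are illustrative, not the authors' implementation):

```python
def ant_q_update(AQ, r, s, delta, alpha=0.1, gamma=0.3):
    """One Ant-Q update for edge (r, s): blend the old value with the
    reinforcement delta plus the discounted best value from state s."""
    best_next = max(AQ[s].values()) if AQ[s] else 0.0
    AQ[r][s] = (1 - alpha) * AQ[r][s] + alpha * (delta + gamma * best_next)
    return AQ[r][s]

# Tiny 3-state example with uniform initial values; reinforcing edge
# (0, 1) raises its value toward the reward.
AQ = {0: {1: 1.0, 2: 1.0}, 1: {2: 1.0}, 2: {}}
ant_q_update(AQ, 0, 1, delta=5.0)
assert AQ[0][1] > AQ[0][2]
```

In SAnt-Q the same kind of reinforcement is mediated by the social structure, so that the update reflects interaction with the stronger agents rather than pheromone alone.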
The paper is organized as follows: Section 2 describes related work using concepts of social networks to improve methods of coordinating multi-agent systems. Section 3 sets out a method using the theory of social networks to improve the convergence of algorithms based on reinforcement, which integrates and adapts procedures for multi-agent coordination, ant colonies, reinforcement learning and social networks. Section 4 presents and discusses results showing the gain achieved by the approach relative to the Ant-Q algorithm. The paper concludes with Section 5, which lists important aspects requiring further study as well as perspectives for future research.
Section snippets
Related work
A system such as a society of agents can be defined as a cognitive and social entity, having identifiable relationships and links between its agents (Panzarasa & Jennings, 2001). In general, societies are organized according to some structure, such as networks or hierarchies. Unlike hierarchies, networks have been identified as social structures. Social networks are made up of a set of states that can be connected by links representing relationships. These relationships can be defined by
SAnt-Q
Coordinating agents who have different objectives and behaviors is an important aspect of multi-agent research (Bastos Filho et al., 2008, Comanici and Precup, 2010, Dorigo et al., 1991, Gambardella and Dorigo, 1995, Knox and Stone, 2010). Coordination mechanisms derived from reinforcement learning have been studied by a number of researchers and described in different applications as a form of self-adaptive behavior brought about by the agent’s own experience. Such agents can acquire new behavior which improves
Experiments (evaluation of SAnt-Q)
Experiments were used to compare the utilities of policies generated by agents interacting in the Ant-Q pheromone network and in the SAnt-Q social structure. The experiments were run on the benchmark instances eil51 and eil76, found in the online library TSPLIB (Reinelt, 1991).
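For context, eil51 and eil76 are Euclidean traveling-salesman instances, and a candidate policy (a tour) is scored by its closed-tour length, which TSPLIB defines with each edge distance rounded to the nearest integer. A minimal sketch of that evaluation (the coordinates below are illustrative, not the eil data):

```python
import math

def tour_length(coords, tour):
    """Length of a closed tour over TSPLIB-style EUC_2D coordinates,
    with each edge rounded to the nearest integer as TSPLIB specifies."""
    total = 0
    for a, b in zip(tour, tour[1:] + tour[:1]):
        (x1, y1), (x2, y2) = coords[a], coords[b]
        total += int(round(math.hypot(x1 - x2, y1 - y2)))
    return total

# Four cities on a unit square: the perimeter tour has length 4.
square = [(0, 0), (0, 1), (1, 1), (1, 0)]
assert tour_length(square, [0, 1, 2, 3]) == 4
```

Comparing Ant-Q and SAnt-Q then amounts to comparing the lengths of the best tours their policies produce on the same instances.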
The datasets eil51 and eil76 have 51 and 76 states, respectively, and were constructed by Christofides and Eilon (1969). Such sets have important characteristics for
Conclusions and future work
This paper presented SAnt-Q, an algorithm that uses social network theory to improve the convergence of the Ant-Q algorithm. SAnt-Q is implemented by integrating and adapting methods of multi-agent coordination, ant colonies, reinforcement learning and social networks, so that individuals interact and communicate information within a socially dynamic structure.
Collective behavior, when coordinated, gives to individuals of a system abilities (reinforcements through attitudes) and
Acknowledgements
We thank anonymous reviewers for their comments, together with members of the Network Laboratory of the Federal Technological University of Paraná (UTFPR) Campus Pato Branco-Pr. This research is supported by the Program for Research Support of UTFPR – Campus Pato Branco, DIRPPG (Directorate of Research and Post-Graduation) and by the Brazilian Council of Research (CNPq) under Grants 470325/2008-9 and 305847/2008-2.
References (57)
- Scale-free characteristics of random networks: The topology of the World Wide Web. Physica A (2000).
- An analytic modeling approach for network routing algorithms that use ant-like mobile agents. Computer Networks: The International Journal of Computer and Telecommunications Networking (2005).
- Getting to know each other – Artificial social intelligence for autonomous robots. Robotics and Autonomous Systems (1995).
- Information processing and organizational structure. Journal of Economic Behavior and Organization (1998).
- Ant colony optimization with hill climbing for the bandwidth minimization problem. Applied Soft Computing (2006).
- Ant colony optimization for the cell assignment problem in PCS networks. Computers & Operations Research (2006).
- Distribution network reconfiguration for loss reduction by ant colony search algorithm. Electric Power Systems Research (2005).
- Abdallah, S., & Lesser, V. R. (2004). Organization-based cooperative coalition formation. In Proceedings of the...
- Multi-level ant colony algorithm for optimal placement of capacitors in distribution systems. IEEE Congress on Evolutionary Computation (2004).
- Araújo, R. M., & Lamb, L. C. (2008). Memetic networks: Analyzing the effects of network properties in multiagent...
- Scale-free networks. Scientific American.
- An ant colony optimization approach to the probabilistic traveling salesman problem.
- Agent collaboration and social networks. Integration of Knowledge Intensive Multi-Agent Systems.
- Local strategy learning in networked multi-agent team formation. Autonomous Agents and Multi-Agent Systems (AAMAS-06).
- An algorithm for the vehicle-dispatching problem. Operations Research Quarterly.
- Socially intelligent robots: Dimensions of human-robot interaction. Philosophical Transactions of the Royal Society B: Biological Sciences.
- Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research.
- A class of Hamiltonian regular graphs. Journal of Graph Theory.
Cited by (10)
- Reinforcement learning with multiple shared rewards. Procedia Computer Science (2016).
- Ant system and weighted voting method for multiple classifier systems. International Journal of Electrical and Computer Engineering (2018).
- Using particle swarm optimization to enhance PI controller performances for active and reactive power control in wind energy conversion systems. IOP Conference Series: Earth and Environmental Science (2018).
- Optimal control of active and reactive powers in wind energy conversion systems using particle swarm optimization and adaptive sliding mode control. International Review of Automatic Control (2018).
- Combination of interaction models for multi-agents systems. Lecture Notes in Business Information Processing (2017).
- Ant system-based feature set partitioning algorithm for classifier ensemble construction. International Journal of Soft Computing (2016).
1 This research was developed during the author's doctoral studies.