Elsevier

Parallel Computing

Volume 26, Issue 1, January 2000, Pages 91-112
Global optimization properties of parallel cooperative search algorithms: A simulation study

https://doi.org/10.1016/S0167-8191(99)00097-6

Abstract

Cooperative search is a parallelization strategy where parallelism is obtained by concurrently executing several search programs for the same optimization problem instance. The programs cooperate by exchanging information on previously explored regions of the solution space. When the sharing of information overlaps among several programs, changes in the search behavior of one program can propagate over time to several other programs; this is a process called diffusion in physical systems. The optimization properties of diffusion dynamics in cooperative algorithms have not been formally established. However, it is generally believed that when the selection of shared information is biased by the cost (objective) function, diffusion dynamics help to improve the search of cooperating programs. In this study, we simulate this aspect of cooperative algorithms using cellular automata (CAs) (these are artificial dynamical systems often used to simulate the dynamics of complex systems). Our results show that the sharing of information based on the cost function does not affect the diffusion dynamics and therefore does not seem to help the optimization strategy of cooperating programs. However, this study increases our understanding of the role played by diffusion processes in cooperative algorithms. We suggest new approaches that can help to subordinate diffusion dynamics to the optimization goals of the search programs.

Introduction

We consider combinatorial optimization problems of the form min(max) {c(x) | x ∈ X}, where X is a set of feasible solutions (configurations) and c is a cost function. Local search algorithms are commonly used to find near-optimal solutions to this kind of optimization problem. These are simple iterative improvement methods seeking a transition from a current configuration x in the problem's solution space to a configuration y in the neighborhood of x such that c(y) < c(x). The choice of y in the neighborhood of x can be based on selection strategies such as: y is the first improving configuration, or y is the best improving configuration. More complex searches, such as tabu search (TS), simulated annealing (SA), genetic algorithms (GAs) and others, use more sophisticated selection strategies that often accept non-improving transitions; they are therefore capable of a more extensive exploration.
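To make the transition rule concrete, here is a minimal sketch of first-improvement local search; the toy bit-flip instance and all function names are illustrative and not taken from the paper.

```python
def local_search(x0, neighbors, cost, max_iters=10_000):
    """First-improvement local search: repeatedly move to the first
    neighboring configuration y with cost(y) < cost(x)."""
    x = x0
    for _ in range(max_iters):
        improved = False
        for y in neighbors(x):
            if cost(y) < cost(x):
                x = y          # accept the first improving transition
                improved = True
                break
        if not improved:       # no improving neighbor: x is a local optimum
            return x
    return x

# Toy instance: minimize c(x) = number of 1-bits over binary strings;
# neighbors differ from x in exactly one bit.
def flip_neighbors(x):
    for i in range(len(x)):
        yield x[:i] + (1 - x[i],) + x[i + 1:]

print(local_search((1, 0, 1, 1), flip_neighbors, sum))  # (0, 0, 0, 0)
```

A best-improvement variant would scan the whole neighborhood before moving; TS, SA and GAs replace this greedy acceptance rule with strategies that can accept non-improving transitions.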

Cooperative search is a parallelization strategy increasingly used for search methods. Procedures such as coarse-grained parallel genetic algorithms (PGAs) [11], fine-grained PGAs [9], memetic algorithms [10], parallel Branch-and-Bound [6], parallel TS algorithms [5], parallel simulated annealing procedures [8], and special-purpose artificial intelligence cooperative searches [3] are examples of the application of this parallelization strategy to well-known search heuristics. Contrary to conventional parallelization strategies, the parallel tasks of cooperative procedures are not obtained from the decomposition of the data or the search program. Most often, these tasks are obtained by executing several search programs concurrently on the same data set. These parallel tasks are said to cooperate with each other because, at run time, the search programs exchange information on previously explored regions of the solution space.

Each sequential program explores the solution space differently and can hold unique data about the configurations visited. By sharing information among search programs, cooperative design aims to improve the performance of the individual search programs through the reuse of information collected by other programs. However, to be effective, shared information has to interfere somehow with the control structure (the transitions) of the sequential programs. It is often unclear how these interferences affect the optimization strategy of the search method executed by each sequential program (they could mislead the optimization strategy). Furthermore, these interferences have a cumulative impact on the search programs. The situation is similar to rule-based search systems in artificial intelligence. Once a rule is fired (information is shared), it changes the working memory (it causes a different configuration to be visited), which then affects the choice of the next rule to be fired (which information is going to be shared next), and so on. This propagation pattern is a well-known process in physical systems and is referred to as diffusion. In most cooperative algorithms, the reused information is at the origin of sequences of correlated interactions among programs corresponding to diffusion processes. For example, the stepping-stone architecture of coarse-grained PGAs is an explicit example where the overlap of the neighborhoods gives rise to a diffusion of shared information among the sub-populations.

Currently, diffusion dynamics are not specifically addressed in the design of cooperative algorithms. Rather, the main concern is about finding cooperation (or competition) schemes, and some criteria to select which information should be reused. Selection criteria are defined considering the application of the cooperation scheme at a particular point in time, without considering the cumulative impact that the application of a given selection criterion can have on subsequent applications of a cooperation scheme.

Let us identify transitions which are based on shared information as meta-transitions, to differentiate them from the search-method transitions originating uniquely from the control structure of the sequential search methods. In most cases, the criteria used to select the meta-transitions are based on the cost function of the optimization problem, e.g. the selected meta-transition is the one which minimizes (or maximizes) the cost c(y) of the next configuration visited. It is generally assumed (most often implicitly) that this type of selection criterion takes care of the diffusion dynamics. Each search program executes its local optimization strategy on the same problem, but it is assumed that, as a result of cooperation (which includes the cumulative impact of diffusion dynamics), the search programs develop global optimization properties, meaning that at least some of the local optimization strategies will be improved by cooperation (i.e. the exchange of information will not completely mislead cooperating programs by driving them to worse solutions than if there were no cooperation).

In the present paper we study the diffusion dynamics of information in cooperative algorithms. Because of the diffusion processes, cooperative algorithms are a special case of space-extended dynamical systems, similar to connectionist models of computation like cellular automata (CAs) and some artificial neural networks (ANNs). We consider the possibility that the symbolic evaluation of meta-transitions based on the cost function might yield diffusion dynamics which are irrelevant from an optimization point of view. (As for CAs and ANNs, if the interactions are not well designed, local rules can induce dynamics which do not have any optimization properties.) Therefore, we compare the diversity of the configurations visited by a set of search programs which use different cooperation schemes: some based on the cost function, others on the diffusion dynamics of CAs, and some others having no diffusion dynamics. We use two measurements to estimate the diversity of the configurations: the Shannon entropy and the Global Hamming Distance (GHD) between configurations. The Shannon entropy measures diversity in terms of the number of different configurations versus the number of configurations visited, while the GHD measures diversity in terms of different sub-configurations among the configurations visited.
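The two diversity measurements can be sketched as follows. This is one plausible reading of them (the paper's exact definitions, e.g. any normalization applied to the GHD, may differ), and the sample trace is purely illustrative.

```python
from collections import Counter
from itertools import combinations
from math import log2

def shannon_entropy(visited):
    """Entropy of the empirical distribution of visited configurations:
    high when many distinct configurations occur with similar frequency."""
    counts = Counter(visited)
    n = len(visited)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def global_hamming_distance(visited):
    """Mean pairwise Hamming distance between visited configurations:
    low when the same sub-configurations recur across configurations."""
    pairs = list(combinations(visited, 2))
    if not pairs:
        return 0.0
    return sum(sum(a != b for a, b in zip(x, y))
               for x, y in pairs) / len(pairs)

# Illustrative search trace: one configuration visited twice.
trace = [(0, 1, 1), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(shannon_entropy(trace))          # 1.5 bits
print(global_hamming_distance(trace))  # 10/6 ≈ 1.667
```

Note how the two measures capture different effects: revisiting the same configuration lowers the entropy, while sharing sub-configurations across distinct configurations lowers the GHD.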

Our results show that cooperation schemes have no significant impact on the diversity of the configurations in terms of the Shannon entropy. However the GHD values are significantly lower when the cooperation schemes involve diffusion processes. Diffusion dynamics change the search behavior of cooperating programs in terms of the frequency of identical sub-configurations appearing in different configurations (or the same configuration). Furthermore, when there are diffusion processes, the mode of evaluation of the meta-transitions has no impact on the diversity of the configurations according to both measurement methods. This observation establishes that the selection mode of the meta-transitions is not a relevant parameter affecting the diffusion dynamics of the cooperative procedures tested. In that case, if any global optimization properties exist for those procedures, they are not triggered by the type of selection criteria applied to the meta-transitions. The cumulative impact of sharing information among search programs does not seem to be very helpful to the optimization goals of cooperating programs when meta-transitions are selected based on the cost function.

Our claim is not that cost-function-based selection criteria fail to improve the performance of cooperative algorithms. Rather, the assumption that selecting meta-transitions based on the cost function takes care of the diffusion dynamics is not supported by our results. Assuming that diffusion dynamics have a strong impact on the search behavior of cooperative procedures, our claim is that the lack of influence of the cost function on these diffusion dynamics may explain the rather erratic behavior of some cooperative algorithms (compared with independent search programs, or even with sequential search algorithms, the quality of the solutions of cooperative algorithms sometimes increases and sometimes decreases). In the conclusion, we give an example of a cooperative algorithm design where this problem has been addressed in order to make that specific cooperative algorithm a more efficient optimization method. This example can be generalized as a strategy for designing other cooperative algorithms for different optimization problems.

The paper is organized as follows. In Section 2 we describe the methodology used in the present study. In Section 3 we provide some background on CA theory. In Section 4 we introduce the implementation details of the cooperation schemes based on CAs and on randomly generated meta-transitions. In Section 5 we describe the experiments and in Section 6 we conclude.

Section snippets

Methodology

This study is based on experiments involving the same set of sequential TS programs, which cooperate according to different cooperation schemes. We model cooperative algorithms based on TS as discrete-time dynamical systems (see the details of this model in [13]). Assume ψ is a p-dimensional state vector and f some iterative operator. A discrete-time dynamical system has the structure ψ(k+1) = f(ψ(k)), k = 0, 1, 2, …, where the state vector ψ(k+1) depends only on the vector ψ(k) and the operator f (we assume

Cellular automata theory

Cellular automata were introduced by Ulam and von Neumann in the 1940s [2] to provide a formal framework for the investigation of the behavior of complex, space-extended systems. Since then, they have been increasingly used as a tool to study the behavior of dynamical systems [1], [7], [15]. In the present research, CA theory is used to study the diffusion dynamics of asynchronous cooperative algorithms where neighborhoods overlap. More specifically, CAs are used to simulate the impact

Cooperation schemes

We have implemented three different cooperative procedures based on cooperation schemes inspired by CA theory, and one cooperative procedure which uses randomly generated configurations as its cooperation scheme. In this section we detail the implementation of these procedures. We identify the cooperative procedures based on CAs by SIMφ, where φ is the cellular automaton transition rule, and those based on randomly generated transitions by SIMr, where r is the seed of the pseudo-random generator.
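Assuming the rule numbers in SIM12, SIM18 and SIM22 refer to Wolfram's elementary (one-dimensional, binary, radius-1) CA rules — an assumption suggested by the names but not confirmed in this excerpt — a single synchronous CA update step can be sketched as:

```python
def step(cells, rule):
    """One synchronous update of a 1-D binary cellular automaton with
    periodic boundaries; `rule` is the Wolfram rule number (0..255)."""
    n = len(cells)
    out = []
    for i in range(n):
        # Read the (left, self, right) neighborhood as a 3-bit index,
        # then look up the corresponding bit of the rule number.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

# Rule 18 evolved from a single live cell spreads symmetrically outward.
cells = [0] * 7
cells[3] = 1
for _ in range(3):
    cells = step(cells, 18)
print(cells)
```

Under rule 18, a cell becomes 1 exactly when it is 0 and precisely one of its two neighbors is 1, which is what produces its characteristic fractal-like pattern.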

Experiments

Tests have been executed for cooperative algorithms using six different cooperation schemes: SIM12, SIM18, SIM22, SIMr, Independent Searches (IS), and Asynchronous Cooperative Search (ACS). The IS scheme does not involve any exchange of information. For ACS procedures, the cooperation scheme consists of taking one configuration from a search program pi and using it as the new current configuration of

Conclusion

For cooperative algorithms, the sharing of information among the search programs creates an inextricable network of dependencies which affects the search behavior of those programs in a very complex manner. The design of most cooperative algorithms does not directly address this issue. Rather, it is assumed that by selecting information for sharing according to the cost function of the optimization problem, cooperation not only will not mislead the optimization strategies, but will help them to

References (16)

  • F. Bonfatti et al., Simulation of dynamic phenomena by cellular automata, Computers & Graphics (1994)
  • C.G. Langton, Studying artificial life with cellular automata, Physica D (1986)
  • G.Y. Vichniac, Simulating physics with cellular automata, Physica D (1984)
  • A.W. Burks, Essays on Cellular Automata (1970)
  • S.H. Clearwater et al., Cooperative solution of constraint satisfaction problems, Science (1991)
  • T.G. Crainic et al., A tabu search procedure for multicommodity location/allocation with balancing requirements, Annals of Operations Research (1993)
  • T.G. Crainic et al., Parallel asynchronous tabu search for multicommodity location–allocation with balancing requirements, Annals of Operations Research (1995)
  • B. Gendron et al., Parallel branch-and-bound algorithms: survey and synthesis, Operations Research (1994)
There are more references available in the full text version of this article.
