
Table of Contents

Frontmatter

Introduction

Introduction

Abstract
For economists, the concept of a network refers both to the structure of agents’ interaction and to the economic property of positive externalities. Networks can thus be viewed both as a set of links that build the interaction among agents and as a set of agents who adopt similar behaviour for different economic purposes.
Gisèle Umbhauer

Interaction and Macro-Structure: an Overview

Frontmatter

1. Economies with Interacting Agents

Abstract
This paper discusses economic models in which agents interact directly with each other rather than through the price system as in the standard general equilibrium model. It is suggested that the relationship between micro and macro behaviour is very different from that in the standard model and that the aggregate phenomena that can arise are rich. The models considered include ones with global interaction in which all agents can interact with each other and one in which agents can only interact with their immediate neighbours. Both static and dynamic models are considered and the latter includes the class of evolutionary economic models. Finally, models in which communication networks evolve are discussed.
Alan P. Kirman

2. Spatial Interactions in Dynamic Decentralised Economies: a Review

Abstract
The paper reviews dynamic models of decentralised economies assuming spatially distributed agents who interact directly and locally. Basically, this means that: (i) agents are located in a space such as an integer lattice or a graph; (ii) the current choice of each agent is influenced by past choices of his neighbours, that is agents who are spatially closest to him. It is argued that a key feature concerns the properties displayed by the underlying ‘interaction structure’, that is assumptions about space locations and neighbourhood sets. In particular, different sub-classes of models are singled out according to their relative ability to depict evolving, time-dependent patterns of interactions (‘flexibility’). Markov random fields, cellular automata, stochastic graphs, ‘artificial’ economies and many other classes of spatial models are discussed in detail and their main drawbacks are put forth. Finally, it is pointed out that a stimulating path to follow in the future could be that of modelling spatial ‘open-ended’ economies characterised by an endogenous emergence of novelty.
Giorgio Fagiolo

3. Network, Interactions between Economic Agents and Irreversibilities: the Case of the Choice among Competing Technologies

Abstract
The paper analyses the conditions of emergence and the properties of irreversibility on a network of economic agents facing choices among competing technologies. Irreversibility is thus related to the emergence of a single standard. Two broad classes of models are analysed. On the one side, one considers models of “non-localised” agents making their technological choices on networks, that is, specific markets where the acquisition of goods by an agent cannot be considered independently of the size of the network; Arthur’s model is the reference model of adoption in such a framework. On the other side, one considers models of “localised” agents, where the neighbourhood structure of a given agent actually influences his technological choice; the reference model of this class is the percolation model.
Patrick Cohendet
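The Arthur-type adoption dynamics mentioned in this abstract can be illustrated with a minimal simulation sketch; the payoff form, parameter values, and function name below are illustrative assumptions, not the chapter's own specification. Each arriving agent adds a network benefit proportional to a technology's installed base to an idiosyncratic random taste, so small early leads become self-reinforcing.

```python
import random

def arthur_adoption(steps=2000, r=0.01, seed=0):
    """Toy adoption process with increasing returns (illustrative
    sketch): each arriving agent compares a random idiosyncratic
    taste plus a network benefit r * installed base for each of two
    technologies, and adopts the one with the higher total payoff."""
    rng = random.Random(seed)
    installed = {"A": 0, "B": 0}
    for _ in range(steps):
        u_a = rng.random() + r * installed["A"]
        u_b = rng.random() + r * installed["B"]
        installed["A" if u_a >= u_b else "B"] += 1
    return installed

base = arthur_adoption()
```

Because the bounded taste term is eventually dominated by the growing network term, the process locks in: after an initial random phase almost every later agent adopts the leading technology, which is the irreversibility the chapter relates to the emergence of a single standard.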

4. Rationality and Heterogeneity in Stochastic Aggregation Models

Abstract
Non-price interactions have recently been shown to be relevant, and sometimes crucial, for economic analysis. These problems have given birth to a new approach to economic modeling, namely the one which makes use of stochastic aggregation models. The core idea is Schelling’s, according to which order in collective phenomena is to be interpreted as the consequence of many individual actions. We believe that an important question is then to develop models compatible with the rationality hypothesis: if it were not so, the robustness of the results would be very poor, since they would apply only to ants or automata and not to human beings. Many models seem in fact to have fallen into this trap, notably because of approximations in the way they were presented and because of their proximity to the hard sciences from which they actually come. But the question of rationality is a real one and goes beyond presentation errors: one should not be satisfied with derivations obtained with non-human agents, since these very derivations depend very strongly on the possibility that at least some individuals are rational. Would the results remain the same if one or two rational individuals were added?
As for the analysis of the question of rationality, it is first of all paradoxically shown that the problem does not come from taking interactions into account: on the very contrary, the existence of cumulative interactions appears perfectly normal for rational agents. It has much more to do with heterogeneity, first because heterogeneity has often been neglected and all agents have been given overly simple decision rules in stochastic aggregation models, and second because rational agents are precisely led to heterogeneous behaviors by their heterogeneous, historically acquired situations. Agents therefore have different and interacting optimization programs. We suggest an alternative methodology for stochastic aggregation models, associated with statistical behavioral functions, and we show that such models are then well adapted to the analysis of collective phenomena. Since agents are now rational, and sometimes perhaps very rational, analytical derivations and simulation results also gain in robustness. As a conclusion, a few key examples of such results are summarized, all of which seem to open up new areas for further research, notably the core question of when and where individual rational actions truly influence collective outcomes: indeed, according to stochastic aggregation models with statistical behavioral functions, they may at least sometimes have none.
Jean-Michel Dalle

Local Interaction, Learning and Diversity

Frontmatter

5. Networks Competition under Local Interaction and Behavioral Learning

Abstract
Diffusion is modelled as a repeated coordination game between a large number of locally interacting heterogeneous agents. Agents are represented with stochastic learning algorithms that generate robust path-dependent patterns of behavior. Formal analyses of such locally interacting systems encounter many technical difficulties, hence we run numerical simulations. We find that lock-in is positively correlated to the interaction distance. Diversity, i.e. simultaneous coexistence of networks, appears for small interaction distances but vanishes as the size of neighborhoods increases. We also find an inverse relationship between the interaction distance and the speed of standardization.
Nicolas Jonard, Patrick Llerena, Babak Mehmanpazir

6. Can Neighborhood Protect Diversity?

Abstract
The paper deals with network diversity within a local and global neighborhood framework. By means of evolutionary game theory processes, one studies the dynamic behavior of a population of agents who repeatedly choose to join one among N competing networks. Within this context, diversity means the survival of several different networks. The aim is to establish that the results on diversity depend on the size of the direct neighborhood of the agents. Small neighborhoods favor diversity, but the relationship between diversity and size of neighborhood is not obvious.
Gisèle Umbhauer

7. Interaction of Local Interactions: Localized Learning and Network Externalities

Abstract
We extend the model of Nelson and Winter with a lattice-based spatial structure in order to study the effects of localized learning and increasing returns to adoption on the emergence and persistence of technological diversity. We break down the global effects analyzed in Jonard and Yildizoglu (1998) in order to understand the mechanism that is behind the complementarity between localized learning and externalities in the creation and persistence of diversity. We show that two configurations, which are commonly studied in the literature, with very localized learning on the one hand, and global externalities on the other hand, are actually very peculiar cases. They correspond to very stylized dynamics which cannot be obtained as limits of more general cases.
Nicolas Jonard, Murat Yıldızoğlu

8. Evolution of Cooperation with Local Interactions and Imitation

Abstract
This article investigates the evolution of cooperation in a simulation model in which agents play a one-shot Prisoner’s Dilemma game against their neighbours. We consider agents located on a two-dimensional neighbourhood structure, and we introduce two types of imitative behaviours. We show that if agents choose their strategies according to imitation rules, and if the neighbourhood structures overlap, then cooperation can persist and diffuse in the long run. The survival of cooperation depends on the hostility of the environment, which is linked to the payoff structure and to the initial proportion of C-players.
Vanessa Oltra, Eric Schenk
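A minimal sketch of such a spatial one-shot Prisoner's Dilemma with imitation can look as follows; the grid size, payoff values, von Neumann neighbourhood, and "imitate the best-scoring neighbour" rule are assumptions for illustration, since the chapter studies two imitation rules of its own.

```python
import random

def spatial_pd(size=20, steps=30, p_coop=0.5, seed=1,
               R=3, S=0, T=5, P=1):
    """Agents on a torus play a one-shot Prisoner's Dilemma against
    their four von Neumann neighbours, then copy the strategy of the
    best-scoring agent in the neighbourhood (themselves included).
    Returns the final fraction of cooperators (illustrative sketch)."""
    rng = random.Random(seed)
    coop = [[rng.random() < p_coop for _ in range(size)]
            for _ in range(size)]

    def neigh(i, j):
        # the cell itself plus its four von Neumann neighbours
        return [(i, j), ((i - 1) % size, j), ((i + 1) % size, j),
                (i, (j - 1) % size), (i, (j + 1) % size)]

    def payoff(i, j):
        total = 0
        for x, y in neigh(i, j)[1:]:
            if coop[i][j]:
                total += R if coop[x][y] else S
            else:
                total += T if coop[x][y] else P
        return total

    for _ in range(steps):
        score = {(i, j): payoff(i, j)
                 for i in range(size) for j in range(size)}
        new = [[None] * size for _ in range(size)]
        for i in range(size):
            for j in range(size):
                bi, bj = max(neigh(i, j), key=score.get)
                new[i][j] = coop[bi][bj]
        coop = new
    return sum(sum(row) for row in coop) / size ** 2

frac = spatial_pd()
```

Because neighbourhoods overlap, clusters of cooperators can protect each other: a C-player surrounded by C-players outscores nearby defectors, so cooperation can persist or diffuse depending on the payoff structure and the initial proportion of C-players, as the abstract states.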

9. On the Frontier: Structural Effects in a Diffusion Model based on Influence Matrixes

Abstract
The aim of this paper is to lay the foundations of an approach to the diffusion-adoption problem of an innovation or a technological standard, based on the building of influence matrices. This means that agents are considered as participating in social networks that provide the support and the framework of the adoption-diffusion process.
Alexandre Steyer, Jean-Benoît Zimmermann

Behaviors, Externalities and the Emergence of Networks

Frontmatter

10. Networks, Specialization and Trust

Abstract
This paper addresses the issue of economic viability of Information Intensive Production Systems from a long-run perspective. The relationship between capital, division of labor, and economic development is analyzed in the context of new opportunities for product variety which are created by wide dissemination of information technologies on the supply side, and by demand for such variety in countries with high per capita income. It is argued that a new form of industrial organization, network structures, emerges to deal with problems of resource allocation in a “monopolistic-competition” form of economic coordination, just as oligopolistic structures coordinated the capital-intensive industries of the post World War II era. Because of their ability to internalize externalities, networks offer a flexible organizational solution for joint specialization, provide some protection for property rights of intangible assets, and more importantly, alleviate the capital accumulation constraint on the division of labor and growth. As in other types of imperfect competition, the social loss of welfare from coalition behavior may be offset by networks’ increased ability to generate and diffuse innovative activities.
Ehud Zuscovitch

11. Network Externalities, Cost Functions and Standardization

Abstract
In market structures with network externalities, it is often asserted that there is a natural tendency toward standardization: several incompatible systems survive only if the decision to produce these goods is part of an intertemporal strategy. In this paper it is argued that incompatible products may survive in static models. The model developed in this paper is close to the one used by Katz and Shapiro (1985). I develop a simple multi-product oligopoly in which the demand for one of these commodities increases with the number of agents consuming this good, and in which cost functions are explicitly introduced. Apart from the issues of standardization, I also explicitly address the problems related to the existence and the uniqueness of a rational-expectations Cournot equilibrium.
Hubert Stahn
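The fulfilled-expectations logic behind such a model can be sketched with a textbook-style example; the linear demand, the parameter names, and the closed form below are illustrative assumptions, not the chapter's specification. With inverse demand p = a − Q + vS, where S is the expected network size, rational expectations impose S = Q, so the market behaves like a standard symmetric Cournot market with demand slope (1 − v).

```python
def cournot_network(n=2, a=10.0, c=1.0, v=0.5):
    """Symmetric rational-expectations Cournot with a network
    externality (illustrative sketch): inverse demand p = a - Q + v*S,
    fulfilled expectations S = Q, hence p = a - (1 - v) * Q and the
    symmetric equilibrium output is q* = (a - c) / ((n + 1) * (1 - v)).
    Requires v < 1 so the externality does not swamp the demand slope."""
    assert 0 <= v < 1
    q = (a - c) / ((n + 1) * (1 - v))
    Q = n * q
    p = a - (1 - v) * Q
    return q, p

q, p = cournot_network()
```

Raising v flattens the effective demand curve and raises equilibrium output; the existence and uniqueness questions the abstract mentions arise precisely because the expected network size S must be consistent with the quantities it induces.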

12. The Emergence of Network Organizations in Processes of Technological Choice: a Viability Approach

Abstract
Traditionally, the analysis of emerging structures (technologies, conventions, etc.) starts from the specification of micro-behaviors through local rules (such as “agent i chooses a technology x if the majority of agents has chosen this technology”) in order to study the macroscopic evolution of the system. In this context, the network organization is given. The general purpose of this approach is to derive the collective consequences which cannot be extrapolated from any kind of representative individual behavior. See for instance [12, David, Foray & Dalle] and its bibliography. In this paper, we follow the opposite track: instead of deriving some indeterminism from deterministic systems, we rather detect some regularities from indeterministic micro-mechanisms.
Jean-Pierre Aubin, Dominique Foray

13. Are more Informed Agents able to shatter Information Cascades in the Lab?

Abstract
An information cascade occurs when agents ignore their own information completely and simply take the same action as predecessor agents have taken. This rational imitation has been empirically revealed by the Anderson and Holt experiment. We replicate this experiment, with different parameter values, and we test the possibility for more informed agents to shatter potential information cascades. Our results corroborate those observed by Anderson and Holt and the theoretical framework on which the experimental design is based.
Marc Willinger, Anthony Ziegelmeyer
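The cascade mechanism tested in this experiment can be sketched as follows; the signal accuracy, the tie-breaking toward one's own signal, and the counting shortcut are modelling assumptions in the spirit of Anderson and Holt, not their experimental parameters. An agent's action reveals her signal as long as the public history is roughly balanced, but once inferred signals lead by two, every later agent rationally ignores her own signal.

```python
import random

def cascade(n_agents=20, q=2/3, seed=3):
    """Sequential binary choice with private signals of accuracy q
    (illustrative sketch). d counts inferred a-signals minus b-signals
    from public actions; with symmetric priors and ties broken toward
    one's own signal, actions reveal signals while |d| < 2, and an
    information cascade starts once |d| reaches 2."""
    rng = random.Random(seed)
    state_a = rng.random() < 0.5              # true state of the world
    d = 0
    actions = []
    for _ in range(n_agents):
        signal_a = rng.random() < (q if state_a else 1 - q)
        if d >= 2:
            act_a = True                      # up cascade: signal ignored
        elif d <= -2:
            act_a = False                     # down cascade: signal ignored
        else:
            act_a = signal_a                  # action reveals the signal
            d += 1 if signal_a else -1
        actions.append(act_a)
    return actions, d

actions, d = cascade()
```

A "more informed" agent, modelled as one whose signal is accurate enough to outweigh a two-action lead, would act on her own signal inside a cascade and thereby shatter it, which is the treatment the experiment examines.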

14. Information Externalities and Learning with Sequential Interactions

Abstract
We develop a model of technological choice with social learning in which the timing of adoption of new technologies is endogenous. Social learning occurs through information externalities, because the decision of an early adopter reveals his private information to the other agent. In our model, not only is the decision to adopt a new technology informative for the agent in a waiting position, but so is the decision to wait. We show that more informed agents act as “leaders” by adopting a new technology earlier than less informed agents. The latter act as “followers” by rationally imitating the adoption choices they observed previously. Although pure imitation is usually a source of inefficiency, in our context imitation of early technological choices is socially efficient, because useful information is completely revealed despite imitation. As a consequence, there is a very low probability of making errors. We compare our context of social learning to other contexts in which the revelation of useful information is incomplete and the probability of errors is larger. Although our context is informationally more efficient, the social surplus is not necessarily higher than in other contexts. Indeed, the process of information revelation can be highly time consuming, leading to large costs of delay, which may offset the information advantage.
Kene Boumny, Jean-Christophe Vergnaud, Marc Willinger, Anthony Ziegelmeyer

15. The Evolution of Imitation

Abstract
The model developed in this paper considers a population in which a proportion m is constituted of pure imitators and a proportion (1 − m) is constituted of informed agents who take their decisions on the sole basis of their private but noisy signal. We study the evolution of this population. We show that m follows a cyclical dynamics. In a first phase, imitative agents obtain better payoffs than informed agents and m increases. But when m reaches a certain threshold, imitation gives rise to a “bubble”: the collective opinion no longer reflects the economic fundamentals; it reflects others’ opinions. When this situation is revealed, m rapidly decreases: the bubble bursts and a new cycle begins. In this model herd behavior is analyzed as the consequence of informational influences.
André Orléan