
2009 | Book

The Mathematics of Preference, Choice and Order

Essays in Honor of Peter C. Fishburn

Edited by: Professor Steven J. Brams, Professor William V. Gehrlein, Professor Fred S. Roberts

Publisher: Springer Berlin Heidelberg

Book series: Studies in Choice and Welfare


About this book

Peter Fishburn has had a splendidly productive career that led to path-breaking contributions in a remarkable variety of areas of research. His contributions have been published in a vast literature, ranging through journals of social choice and welfare, decision theory, operations research, economic theory, political science, mathematical psychology, and discrete mathematics. This work was done both on an individual basis and with a very long list of coauthors. The contributions that Fishburn made can roughly be divided into three major topical areas, and contributions to each of these areas are identified by sections of this monograph. Section 1 deals with topics that are included in the general areas of utility, preference, individual choice, subjective probability, and measurement theory. Section 2 covers social choice theory, voting models, and social welfare. Section 3 deals with more purely mathematical topics that are related to combinatorics, graph theory, and ordered sets. The common theme of Fishburn's contributions to all of these areas is his ability to bring rigorous mathematical analysis to bear on a wide range of difficult problems.

Table of Contents

Frontmatter

Utility, Preference, Individual Choice, Subjective Probability, and Measurement

Utility Theory

Entropy-Related Measures of the Utility of Gambling
The first author has known Peter for a very long time, dating back some 45 years to when we met at a colloquium he gave at the University of Pennsylvania. After that our paths crossed fairly often. For example, in the early 1970s, he spent a year at the Institute for Advanced Study where Luce spent three years until the attempt to establish a program in scientific social science was abandoned for a more literary approach favored by the humanists and, surprisingly, the mathematicians then at the Institute. The second author has learnt a tremendous amount about both substantive and technical issues from Peter's work, beginning with Peter's book “Utility Theory for Decision Making” (Fishburn, 1970), which he reviewed for Contemporary Psychology (see Marley, 1972).
Peter's volume on interval orders (Fishburn, 1985) was a marvelous development of various ideas related to the algebra of imperfect discrimination that elaborated the first author's initial work on semiorders (Luce, 1956).
R. Duncan Luce, Anthony J. Marley, Che Tat Ng
Altruistic Utility Functions for Joint Decisions
All of us make decisions that are not entirely self-centered; we voluntarily anticipate what we think to be the preferences of others and incorporate them into our decision making. We do this, not because of legal requirements or social norms, but because we are altruistic; we care intrinsically about the welfare of others. In this paper, we illustrate how confusion may arise in these types of decisions when the distinction between our personal (egotistical) preferences and our altruistic concerns is not carefully drawn. We first define the distinction between personal and altruistic preferences, and then show how to use both kinds of preferences in prescriptive decision making methodologies.
We confine ourselves to the class of problems where two or more people must select a common course of action. The following story illustrates a simple example. Joan and Dan have decided to have dinner and must choose a restaurant. They quickly specify three possibilities: a Brazilian restaurant, a French restaurant, and a Thai restaurant. Joan is thoughtful and wishes to choose a restaurant that Dan will really like. Similarly, Dan wants to choose a restaurant that pleases Joan. So what happens? Joan, thinking about what might be Dan's preferences, decides that Dan would like the French restaurant, followed by the Brazilian restaurant, followed by the Thai restaurant. Dan, thinking about what Joan would like, also decides that the French restaurant would be best, followed by the Brazilian restaurant, and then the Thai restaurant. Joan speaks first and suggests the French restaurant. Dan, thinking that this is what Joan wants, agrees and off they go. During dinner discussion, Joan mentions that she would have preferred the Thai restaurant to the French restaurant. Somewhat surprised, Dan then says that he also would have preferred the Thai restaurant. They wonder how this state of affairs came about.
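The mix-up in Joan and Dan's story can be sketched in a few lines. The numeric rankings below are hypothetical (1 = best), chosen only to match the narrative:

```python
# Hypothetical rankings (1 = best). Own tastes vs. each person's guess
# about the other's tastes, as in the restaurant story.
joan_own = {"Thai": 1, "French": 2, "Brazilian": 3}
dan_own = {"Thai": 1, "French": 2, "Brazilian": 3}
joan_guess_about_dan = {"French": 1, "Brazilian": 2, "Thai": 3}
dan_guess_about_joan = {"French": 1, "Brazilian": 2, "Thai": 3}

# Each altruistically proposes what they believe the other prefers.
joan_proposal = min(joan_guess_about_dan, key=joan_guess_about_dan.get)
dan_proposal = min(dan_guess_about_joan, key=dan_guess_about_joan.get)

# Both propose French, yet both personally rank Thai above French:
# the mistaken altruistic guesses, not anyone's actual preferences,
# drove the joint choice.
assert joan_proposal == dan_proposal == "French"
assert joan_own["Thai"] < joan_own["French"]
assert dan_own["Thai"] < dan_own["French"]
```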
David E. Bell, Ralph L. Keeney
SSB Preferences: Nonseparable Utilities or Nonseparable Beliefs
It was around 1980 that a new era for decision making under risk/uncertainty began, uncovering numerous alternative representations that generalize the traditional (subjective) expected utility maximization. The initial major contributors include Chew and MacCrimmon (1979), Chew (1983), Fishburn (1982), Kahneman and Tversky (1979), Machina (1982), Quiggin (1981), and Schmeidler (1988) (which first appeared in 1981 as a discussion paper). One of Fishburn's works in this area is the discovery of an axiomatic structure for SSB (skew-symmetric bilinear) preferences in decision making under risk, and of its numerical representation, dubbed SSB utility (see Fishburn, 1982). Since then, he has published a series of papers that study SSB preferences and their numerical representations in various contexts in decision making under risk/uncertainty (for a survey, see Fishburn, 1988b).
This paper further explores representational aspects of SSB preferences, particularly in decision making under uncertainty, and discusses their necessary and sufficient axiomatizations. Three representational forms will be examined. One of them is known as the SSA (skew-symmetric additive) representation, first explored by Fishburn (1984a). The other two are new in the literature, one of which seems a more natural application of SSB utility to decision making under uncertainty than the SSA representation. A characteristic feature of the first two representations is nonseparability of utilities for decision outcomes. The last one is a generalization of subjective expected utility (SEU) that replaces subjective probabilities with the nonseparable representation of comparative beliefs first discovered by Fishburn (1983a, b).
Yutaka Nakamura

Decision Theory

Decision Making Based on Risk-Value Tradeoffs
This essay reviews the measures of risk and risk-value models that we have developed over the past ten years. Risk-value models are a new class of decision making models based on the idea of risk-value tradeoffs. Intuitively, individuals may consider their choices over risky alternatives by trading off between risk and return, where return is typically measured as the mean (or expected return) and risk is measured by some indicator of dispersion or possible losses. This notion is prevalent in the literature of finance, marketing, and other areas.
Markowitz (1959, 1987, 1991) proposed variance as a measure of risk, and a mean-variance model for portfolio selection based on minimizing variance subject to a given level of mean return. But arguments have been made that mean-variance models are appropriate only if the investor's utility function is quadratic or the joint distribution of returns is normal. However, these conditions are rarely satisfied in practice.
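As a rough illustration of the mean-variance idea (a sketch only, not the authors' risk-value model, and with made-up numbers): among the portfolios of two assets that meet a target mean return, pick the one with the smallest variance.

```python
# Two-asset sketch of the mean-variance idea (all numbers hypothetical):
# among portfolios meeting a target mean return, take the least variance.
mu = [0.08, 0.12]        # expected returns
sigma = [0.10, 0.20]     # standard deviations
rho = 0.3                # correlation between the two returns
target = 0.095           # required mean return

best = None
for i in range(101):
    w = i / 100.0        # weight on asset 1; 1 - w goes to asset 2
    mean = w * mu[0] + (1 - w) * mu[1]
    var = (w * sigma[0]) ** 2 + ((1 - w) * sigma[1]) ** 2 \
        + 2 * w * (1 - w) * rho * sigma[0] * sigma[1]
    if mean >= target and (best is None or var < best[1]):
        best = (w, var)

w_star, var_star = best   # w_star = 0.62 on this grid
```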
Jianmin Jia, James S. Dyer
Normally Distributed Admissible Choices are Optimal
Generally accepted observable behavior has led to the following classes of continuously differentiable utility functions, u(·):
I.
Nonsatiation axiom: u′ > 0
 
II.
Risk aversion: u′ > 0, u″ < 0
 
Adopting the notation of Bawa (1975), let the uncertain prospects be characterized by random variables x_i, i = 1, 2, …, n+1, with known continuous probability distribution functions defined over an open interval (a, b) of the real line, a < b.
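The two axioms can be checked numerically for a concrete utility function; here is a small sketch (the choice u(x) = √x is ours, not from the paper) using central differences on an interval (a, b) = (1, 9):

```python
import math

# Numerical sketch (u(x) = sqrt(x) is our choice, not the paper's) of
# the two axioms on (a, b) = (1, 9): nonsatiation u' > 0 and risk
# aversion u'' < 0, estimated by central differences.
u = math.sqrt
h = 1e-4

def d1(x):  # central-difference estimate of u'(x)
    return (u(x + h) - u(x - h)) / (2 * h)

def d2(x):  # central-difference estimate of u''(x)
    return (u(x + h) - 2 * u(x) + u(x - h)) / (h * h)

xs = [1 + 8 * i / 20 for i in range(1, 20)]  # interior grid of (1, 9)
assert all(d1(x) > 0 for x in xs)   # u' > 0: nonsatiation
assert all(d2(x) < 0 for x in xs)   # u'' < 0: risk aversion
```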
James N. Bodurtha Jr, Qi Shen
A Conjoint Measurement Approach to the Discrete Sugeno Integral
In the area of decision-making under uncertainty, the use of fuzzy integrals, most notably the Choquet integral and its variants, has attracted much attention in recent years. It is a powerful and elegant way to extend the traditional model of (subjective) expected utility (on this model, see Fishburn, 1970, 1982). Indeed, integrating with respect to a measure that is not necessarily additive allows one to weaken the independence hypotheses embodied in the additive representation of preferences underlying the expected utility model, hypotheses that have often been shown to be violated in experiments (see the pioneering experimental findings of Allais, 1953; Ellsberg, 1961). Models based on Choquet integrals have been axiomatized in a variety of ways (see Gilboa, 1987; Schmeidler, 1989; or Wakker, 1989, Chap. 6; for related work in the area of decision-making under risk, see Quiggin, 1982; and Yaari, 1987). Recent reviews of this research trend can be found in Chateauneuf and Cohen (2000), Schmidt (2004), Starmer (2000), and Sugden (2004).
More recently, still in the area of decision-making under uncertainty, Dubois, Prade, and Sabbadin (2000b) suggested replacing the Choquet integral by a Sugeno integral (see Sugeno, 1974, 1977), the latter being a kind of “ordinal counterpart” of the former, and provided an axiomatic analysis of this model (special cases of the Sugeno integral are analyzed in Dubois, Prade, & Sabbadin, 2001b; for a related analysis in the area of decision-making under risk, see Hougaard & Keiding, 1996). Dubois, Marichal, Prade, Roubens, and Sabbadin (2001a) offer a lucid survey of these developments.
Denis Bouyssou, Thierry Marchant, Marc Pirlot

Measurement Theory

Additive Representability of Finite Measurement Structures
The theory of additive conjoint measurement has its roots in the papers of Debreu (1960) and Luce and Tukey (1964). It is presented in books (Pfanzagl, 1968; Fishburn, 1970; Krantz, Luce, Suppes, & Tversky, 1971; Luce, Krantz, Suppes, & Tversky, 1998; Suppes, Krantz, Luce, & Tversky, 1988; Roberts, 1979; Narens, 1985) and excellent surveys, of which Fishburn's survey (1999) is the most recent. The goal of the present paper is twofold: we would like to describe some recent developments that took place after Fishburn's survey was published, and to attract attention to several questions posed by Fishburn that remain unanswered.
Arkadii Slinko

Social Choice, Voting, and Social Welfare

Condorcet Domains and Probabilities

Acyclic Domains of Linear Orders: A Survey
A = {1, 2, …, i, j, k, …, n} is a finite set of n elements that I will generally call alternatives (but which could also be called issues, decisions, outcomes, candidates, objects, etc.). The elements of A will also be denoted by letters like x, y, z, etc. A subset of A of cardinality p will be called a p-set.
A_2 (respectively, A_3) denotes the set of all ordered pairs (x, y) (respectively, ordered triples (x, y, z), written for convenience as xyz) of A. When the elements of A are denoted by the first n integers, P_2(n) denotes the set of the n(n−1)/2 ordered pairs (i, j) with i < j.
Bernard Monjardet
Condorcet Domains: A Geometric Perspective
One of the several topics in which Fishburn (1997, 2002) has made basic contributions involves finding maximal Condorcet Domains. In this current paper, I introduce a geometric approach that identifies all such domains and, at least for four and five alternatives, captures Fishburn's clever alternating scheme (described below), which has advanced our understanding of the area.
To explain “Condorcet Domains” and why they are of interest, start with the fact that when making decisions by comparing pairs of alternatives with majority votes, the hope is to have decisive outcomes where one candidate always is victorious when compared with any other candidate. Such a candidate is called the Condorcet winner. The attractiveness of this notion, where someone beats everyone else in head-to-head comparisons, is why the Condorcet winner remains a central concept in voting theory. For a comprehensive, modern description of the Condorcet solution concept, see Gehrlein's recent book (2006).
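The pairwise-majority test described above is easy to state in code. A minimal sketch (the three-ballot profile is hypothetical):

```python
# Sketch of the Condorcet winner test: a candidate who beats every other
# candidate in pairwise majority comparisons. The profile is hypothetical.
profile = [          # each ballot ranks the candidates, best first
    ["A", "B", "C"],
    ["A", "C", "B"],
    ["B", "A", "C"],
]

def beats(x, y, ballots):
    """True if a strict majority of ballots rank x above y."""
    return sum(b.index(x) < b.index(y) for b in ballots) > len(ballots) / 2

def condorcet_winner(ballots):
    cands = ballots[0]
    for c in cands:
        if all(beats(c, d, ballots) for d in cands if d != c):
            return c
    return None          # no Condorcet winner (a cycle)

assert condorcet_winner(profile) == "A"
```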
Donald G. Saari
Condorcet's Paradox with Three Candidates
Condorcet formally developed the notion of cyclical majorities over two centuries ago (Condorcet, 1785), and Peter Fishburn introduced me to that phenomenon in 1971. When Peter first described the idea behind Condorcet's Paradox during a course in Social Choice Theory at Pennsylvania State University, my response was that the phenomenon simply could not happen. When he reproduced the classic example of its existence with three voters and three candidates, my immediate response was that this phenomenon certainly could not be very likely to ever be observed in realistic situations. Peter quickly suggested that I should work on developing some estimates of the probability that the paradox might occur, and very soon afterward that pursuit began. We completed many co-authored papers on related topics over the following years, but it is only after more than 30 years of effort that I feel a good answer can be given to the challenge that Peter presented in that classroom in 1971. The following essay can be viewed as a long overdue course project report, and we can finally see a theoretical model that clearly explains why observations of Condorcet's Paradox are so rare in elections on a small number of candidates.
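The smallest case can be checked by brute force: enumerating all 6³ = 216 profiles of three voters over three candidates, with each profile equally likely, shows that exactly 12 of them (a probability of 1/18 ≈ 0.056) produce a majority cycle. A minimal sketch:

```python
from itertools import permutations, product

# Brute-force count of Condorcet's Paradox for three voters and three
# candidates when every ordered profile is equally likely (the Impartial
# Culture assumption): 12 of the 6^3 = 216 profiles yield a cycle.
cands = "ABC"
rankings = list(permutations(cands))

def has_cycle(profile):
    def maj(x, y):
        return sum(r.index(x) < r.index(y) for r in profile) > len(profile) / 2
    # with 3 candidates and an odd number of strict ballots, "no candidate
    # beats both others" is exactly a majority cycle
    return not any(all(maj(c, d) for d in cands if d != c) for c in cands)

profiles = list(product(rankings, repeat=3))
assert len(profiles) == 216
assert sum(has_cycle(p) for p in profiles) == 12   # probability 1/18
```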
William V. Gehrlein
On the Probability to Act in the European Union
Since its foundation by Arrow in his seminal contribution (Arrow, 1963), one of the main merits of social choice theory has been to provide a coherent framework for the analysis and comparison of different voting rules. First, many normative requirements about voting rules can be expressed precisely in this framework. Then it is possible to check whether a given voting rule satisfies a given property. Ideally, this type of analysis may lead to the axiomatic characterization of a voting rule. Finally, the propensity of situations in which a voting rule fails to satisfy a condition can be evaluated.
Peter Fishburn's contributions to this research program have been extremely important. For example, he proposed many new normative conditions for the analysis of voting rules (see in particular Fishburn, 1974, 1977; Fishburn & Brams, 1983), and developed axiomatic analyses for binary voting (Fishburn, 1973) and approval voting (Fishburn, 1978). Together with Gehrlein, he launched an important research program on the probabilistic analysis of voting rules. After Guilbaud's paper (Guilbaud, 1952), the use of probability models in voting was limited to the evaluation of the majority voting paradox under the assumption that each voter would pick his preference independently of the others from a uniform distribution. This assumption, today called the Impartial Culture assumption, puts an equal weight on each profile. Fishburn and Gehrlein developed the use of probabilistic models in two directions. First, to analyze the occurrence of Condorcet cycles, they proposed in Gehrlein and Fishburn (1976) a new probability assumption, the Impartial Anonymous Culture assumption, which assumes that each anonymous profile is equally likely to appear. Secondly, they applied these two probability models to a wider range of problems, the relationship between the scoring rules and the Condorcet principle being their favorite issue (see Fishburn & Gehrlein, 1976; Gehrlein & Fishburn, 1978a, 1978b). The results we present in this paper are clearly a continuation of this research program, as we will compare voting rules suggested for the European Union on their propensity to fulfill a given property under different probability assumptions.
Marc R. Feix, Dominique Lepelley, Vincent Merlin, Jean-Louis Rouet

Voting Rules

Voting Systems that Combine Approval and Preference
Social choice theory, while postulating that voters have preferences over candidates, does not ask them to stipulate where, in their preference rankings, they would draw the line between acceptable and unacceptable candidates. Approval voting (AV) does ask voters to draw such a line, but it ignores rankings above and below this line.
Rankings and approval, though related, are fundamentally different kinds of information. They cannot necessarily be derived from one another. Both kinds of information are important in the determination of social choices. We propose a way of combining them in two hybrid voting systems, preference approval voting (PAV) and fallback voting (FV), that have several desirable properties.
Steven J. Brams, M. Remzi Sanver
Anonymous Voting Rules with Abstention: Weighted Voting
We consider legislative voting rules that govern collective approval or disapproval of a bill or a motion, and that allow abstention (or absence) as a “middle option” distinct from a yes or no vote. In contrast with Peter Fishburn's work on representative systems, or RSs, we do not treat collective approval and disapproval symmetrically; a voting rule may have a built-in bias against passing motions, for example. In this asymmetric case, the additional assumption that a rule is anonymous (all votes count equally) still allows for a significant variety of rules, a number of which are used by real voting bodies (see Freixas & Zwicker, 2003). We provide three characterizations of weighted voting in this context, and discuss potential applications.
In real legislative voting bodies an abstention or absence often does have an effect different from a voter's yes or no vote. Yet since the publication of Theory of Games and Economic Behavior (von Neumann & Morgenstern, 1949), the standard mathematical model for a legislative voting system has been the simple game, which by virtue of its structure treats any non-yes vote as a no. Peter Fishburn's 1973 work seems to be the earliest to have taken abstention seriously, but others followed: Rubenstein (1980); Bolger (1986, 1993a, b); Felsenthal & Machover (1997, 1998); Amer et al. (1998); Freixas & Zwicker (2003); Côrte-Real & Pereira (2004); Dougherty & Edward (2004); Bilbao, Fernández, Jiménez, & López (2005a, b).
William S. Zwicker

Social Choice

Pareto, Anonymity or Neutrality, but Not IIA: Countably Many Alternatives
We would like to express our indebtedness to Peter for his pioneering work in social choice theory and our pleasure in co-authoring with him. More specific to this paper, Peter had early doubts about rules that required preference information on all alternatives in order to socially rank two alternatives. Addressing independence, he writes (see Fishburn, 1973):
If in fact the social choice can depend on infeasibles, which infeasibles should be used? For with one set of infeasibles, feasible x might be the social choice, whereas feasible y ≠ x might be the social choice if some other infeasible set were adjoined to [feasible set] Y. Hence, the idea of allowing infeasible alternatives to influence the social choice introduces a potential ambiguity into the choice process that can be at least alleviated by insisting on the independence condition.
Donald E. Campbell, Jerry S. Kelly

Fair Division

Bruhat Orders and the Sequential Selection of Indivisible Items
For two players with identical preferences, cake-cutting procedures, such as Cut-and-Choose (Brams & Taylor, 1996) and the Surplus Procedure (Brams, Jones, & Klamler, 2006), guarantee that each player receives exactly half of the cake, according to their preferences. In essence, receiving exactly half is a worst-case scenario because when their preferences are not identical, the opportunity often exists for both players to receive more than half of the cake, measured by their preferences. This potential reward is balanced by risk, as these differences in preferences provide an incentive for players to misrepresent their preferences in an effort to gain a more valuable piece. In contrast, players may not be able to exploit information about an opponent's preferences when indivisible objects are allocated to two players, even when the players' preferences are different. Our purpose is to determine the structure of, relationship between, and frequency of two players' preferences for which players receive their worst or best possible outcomes when dividing a finite set of indivisible goods, independent of strategic behavior.
Kohler and Chandrasekaharan (1971) pose and solve three optimization problems in which a finite set of players, with linear preference orders over the items, alternate taking turns selecting a number of items from a set of indivisible items. We adapt their framework, as Brams and Straffin (1979) do, to the case where two players alternate selecting a single item from a set of indivisible items. Although Kohler and Chandrasekaharan (1971) assume that players have values associated with each item and that subsets are valued according to the sum of the values of their objects, like Brams and Straffin (1979) we assume that the players' preferences for subsets of items are partially ordered, induced by the linear orders.
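The alternating framework can be sketched in a few lines: on each turn a player takes the best remaining item under their own linear order. The preference orders below are hypothetical:

```python
# Sketch of the alternating-selection framework: two players with linear
# preference orders over the same items take turns, each picking the best
# remaining item by their own order. The orders below are hypothetical.
def alternate(pref1, pref2):
    remaining = set(pref1)          # both orders rank the same item set
    take, prefs = [[], []], [pref1, pref2]
    turn = 0
    while remaining:
        pick = next(x for x in prefs[turn] if x in remaining)
        take[turn].append(pick)
        remaining.remove(pick)
        turn = 1 - turn             # alternate turns
    return take

p1 = ["a", "b", "c", "d"]
p2 = ["b", "a", "d", "c"]
assert alternate(p1, p2) == [["a", "c"], ["b", "d"]]

# With identical preferences, the first player simply takes the
# odd-ranked items, the "exactly half" worst case discussed above.
assert alternate(p1, p1) == [["a", "c"], ["b", "d"]]
```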
Brian Hopkins, Michael A. Jones

Posets, Graphs, Combinatorics, and Related Applied and Mathematical Topics

Partial Orders and Interval Orders

Fractional Weak Discrepancy of Posets and Certain Forbidden Configurations
A weak order is a poset P = (V, ≺) that can be assigned a real-valued function f : V → R so that a ≺ b in P if and only if f(a) < f(b) (Bogart, 1990). Thus, the elements of a weak order can be ranked by a function that respects the ordering ≺ and issues a tie in ranking between incomparable elements (a ∥ b). When P is not a weak order, it is not possible to resolve ties as fairly. The weak discrepancy of a poset, introduced in Trenk (1998) as the weakness of a poset, is a measure of how far a poset is from being a weak order (Gimbel & Trenk, 1998; Tanenbaum, Trenk, & Fishburn, 2001). In Shuchat, Shull, and Trenk (2007), the problem of determining the weak discrepancy of a poset was formulated as an integer program whose linear relaxation yields a fractional version of weak discrepancy, given in Definition 1 below.
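For very small posets, the defining condition can be checked by brute-force search over rank assignments. A sketch (not the authors' integer-programming formulation):

```python
from itertools import product

# Brute-force sketch (not the authors' integer program) of the definition
# above: P = (V, <) is a weak order iff some f : V -> R satisfies
# a < b in P if and only if f(a) < f(b).
def is_weak_order(V, less):
    """less: set of ordered pairs (a, b) meaning a is strictly below b."""
    n = len(V)
    for ranks in product(range(n), repeat=n):   # candidate values of f
        f = dict(zip(V, ranks))
        if all(((a, b) in less) == (f[a] < f[b])
               for a in V for b in V if a != b):
            return True
    return False

# A 3-chain is a weak order; the poset 2 + 1 (a < b, with c incomparable
# to both) is not, since c cannot be tied with both a and b.
assert is_weak_order(["a", "b", "c"], {("a", "b"), ("b", "c"), ("a", "c")})
assert not is_weak_order(["a", "b", "c"], {("a", "b")})
```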
Alan Shuchat, Randy Shull, Ann N. Trenk
Interval Order Representation via Shortest Paths
Our goal in this paper is to illustrate how the representation theorems for finite interval orders and semiorders can be seen as special instances of existence results for potentials in digraphs. This viewpoint yields short proofs of the representation theorems and provides a framework for certain types of additional constraints on the intervals. We also use it to obtain a minimax theorem for the minimum number of endpoints in a representation. The techniques are based on techniques used by Peter Fishburn in proving results about bounded representations of interval orders.
Interval orders represent the order structure of a collection of intervals. For example, this can be used to model the relations between a set of events each of which occurs over some time interval. Semiorders are a special case where the intervals have the same length. These can be viewed as representing comparisons of values where a relation is noted only if the difference of values is above a certain threshold. We will not go into more detail here as there are many good references describing the various applications of interval orders and semiorders. See for example Fishburn (1985); Luce, Krantz, and Suppes (1971, 1989, 1990); Pirlot and Vincke (1997). See Fishburn (1997) for a good description of some more general models based on intervals.
Garth Isaak
Probe Interval Orders
A graph G is a probe interval graph if there is a partition of V(G) into P and N and a collection {I_v : v ∈ V(G)} of intervals of R in one-to-one correspondence with V(G) such that uv ∈ E(G) if and only if I_u ∩ I_v ≠ ∅ and at least one of u, v belongs to P. The sets P and N are called the probes and nonprobes, respectively, and {I_v = [l(v), r(v)] : v ∈ V(G)} together with the partition will be referred to as a representation. An interval graph is a probe interval graph with N = ∅, and this class of graphs has been studied extensively; see the texts Fishburn (1985), Golumbic (1980), and Roberts (1976) for introductions and other references.
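Computing the edge set of a probe interval graph from a representation follows directly from the definition; the intervals and probe set below are made up:

```python
# Edge set of a probe interval graph from a representation, following the
# definition above: uv is an edge iff I_u and I_v intersect and at least
# one of u, v is a probe. Intervals and partition are made up.
intervals = {"a": (0, 2), "b": (1, 4), "c": (3, 6), "d": (5, 8)}
probes = {"a", "c"}                 # "b" and "d" are nonprobes

def overlap(I, J):
    return max(I[0], J[0]) <= min(I[1], J[1])

edges = {frozenset((u, v))
         for u in intervals for v in intervals
         if u < v and overlap(intervals[u], intervals[v])
         and (u in probes or v in probes)}

# "b" and "d" are both nonprobes, so even if their intervals met,
# that pair could not be an edge.
assert edges == {frozenset("ab"), frozenset("bc"), frozenset("cd")}
```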
The probe interval graph model was invented in connection with the task called physical mapping faced in connection with the human genome project (Zhang, 1994, 1997; Zhang et al., 1994). In DNA sequencing projects, a contig is a set of overlapping DNA segments derived from a single genetic source. In order for DNA to be more easily studied, small fragments of it, called clones, are taken from multiple copies of the same genome. Physical mapping is the process of determining how DNA contained in a group of clones overlap without having to sequence all the DNA in the clones. Once the map is determined, the clones can be used as a resource to efficiently contain stretches of genome. If we are interested in overlap information between each pair of clones, we can use an interval graph to model this problem: vertices are clones and adjacency represents overlap. Using the probe interval graph model, we can use any subset of clones, label them as probes, and test for overlap between a pair of clones if and only if at least one of them is a probe. This way there is flexibility, in contrast to the interval graph model, since all DNA fragments need not be known at time of construction of the probe interval graph model. Consequently, the size of the data set, which by nature can be quite large, is reduced.
David E. Brown, Larry J. Langley

Graphs

Mediatic Graphs
The core concept of this paper can occur in the guise of various representations. Four of them are relevant here, the last one being new:
1.
A MEDIUM, that is, a semigroup of transformations on a set of states, constrained by strong axioms (see Eppstein, Falmagne, & Ovchinnikov, 2008; Falmagne, 1997; Falmagne & Ovchinnikov, 2002).
 
2.
An ISOMETRIC SUBGRAPH OF THE HYPERCUBE, OR “PARTIAL CUBE.” By “isometric”, we mean that the distance between any two vertices of the subgraph is identical to the distance between the same two vertices in the hypercube (Djoković, 1973; Graham & Pollak, 1971). Each state of the medium is mapped to a vertex of the graph, and each transformation corresponds to an equivalence class of its arcs. Note that, as will become clear later on, no assumption of finiteness is made in this or in any of the other representations.
 
3.
An ISOMETRIC SUBGRAPH OF THE INTEGER LATTICE. This representation is not exactly interchangeable with the preceding one. While it is true that any isometric subgraph of the hypercube is representable as an isometric subgraph of the integer lattice and vice versa, the latter representation lands in a space equipped with a considerable amount of structure. Notions of “lines”, “hyperplanes”, or “parallelism” can be legitimately defined if one wishes. Moreover, the dimension of the lattice representation is typically much smaller than that of the partial cube representing the same medium and so can be valuable in the representation of large media (see, in particular, Eppstein, 2005, in which an algorithm is described for finding the minimum dimension of a lattice representation of a partial cube).
 
4.
A MEDIATIC GRAPH. Axiomatic definitions are usually regarded as preferable whenever feasible, and that is what is given here.
 
Jean-Claude Falmagne, Sergei Ovchinnikov
An Application of Stahl's Conjecture About the k-Tuple Chromatic Numbers of Kneser Graphs
Graph coloring is an old subject with many important applications. Variants of graph coloring are not only important in their various applications, but they have given rise to some very interesting mathematical challenges and open questions. Our purpose in this mostly expository paper is to draw attention to a conjecture of Saul Stahl's about one variant of graph coloring, k-tuple coloring. Stahl's Conjecture remains one of the long-standing, though not very widely known, conjectures in graph theory. We also apply a special case of the conjecture to answer two questions about k-tuple coloring due to N.V.R. Mahadev.
An interesting and important variant of ordinary graph coloring involves assigning a set of k colors to each vertex of a graph so that the sets of colors assigned to adjacent vertices are disjoint. Such an assignment is called a k-tuple coloring of the graph. k-tuple colorings were introduced by Gilbert (1972) in connection with the mobile radio frequency assignment problem (see Opsut & Roberts, 1981; Roberts, 1978, 1979; Roberts & Tesman, 2005). Other applications of multicolorings include fleet maintenance, task assignment, and traffic phasing. These are discussed in Opsut and Roberts (1981); Roberts (1979); Roberts and Tesman (2005) and elsewhere. Among the early publications on this topic are Chvátal, Garey, and Johnson (1978); Clarke and Jamison (1976); Garey and Johnson (1976); Scott (1975); Stahl (1976). Given a graph G and positive integer k, we seek the smallest number t so that there is a k-tuple coloring of G using colors from the set {1, 2, ..., t}. This t is called the k-th multichromatic number or k-tuple chromatic number of G and is denoted by χ_k(G). Of course, if k = 1, χ_k(G) is just the ordinary chromatic number χ(G).
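For very small graphs the k-tuple chromatic number can be found by exhaustive search. The sketch below checks the 5-cycle, whose ordinary chromatic number is 3 and whose 2-tuple chromatic number is known to be 5:

```python
from itertools import combinations, product

# Exhaustive-search sketch for small graphs: the k-tuple chromatic number
# is the least t such that each vertex gets a k-subset of {1,...,t} with
# adjacent subsets disjoint.
def k_tuple_chromatic(edges, n_vertices, k):
    t = k
    while True:
        ksets = list(combinations(range(t), k))
        for assign in product(ksets, repeat=n_vertices):
            if all(set(assign[u]).isdisjoint(assign[v]) for u, v in edges):
                return t
        t += 1

c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # the 5-cycle
assert k_tuple_chromatic(c5, 5, 1) == 3   # ordinary chromatic number
assert k_tuple_chromatic(c5, 5, 2) == 5   # 2-tuple chromatic number
```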
Svata Poljak, Fred S. Roberts

Various Applied Mathematics Topics

Optimal Reservation Scheme Routing for Two-Rate Wide-Sense Nonblocking Three-Stage Clos Networks
The well-known Clos network has been widely employed for data communications and parallel computing systems, and the symmetric three-stage Clos network C(n, m, r) is considered the most basic and popular multistage interconnection network. Much effort has been devoted to research on the three-stage Clos network. Let us first introduce some related concepts.
The three-stage Clos network C(n, m, r) is a three-stage interconnection network symmetric with respect to the center stage. The network consists of r (n × m)-crossbars (switches) in the first stage (or input stage), m (r × r)-crossbars in the second stage (or central stage), and r (m × n)-crossbars in the third stage (or output stage). The n inlets (outlets) on each input (output) crossbar are the inputs (outputs) of the network. Thus the total number of inputs (outputs) of C(n, m, r) is rn. There is exactly one link between every center crossbar and every input (output) crossbar. These links are the internal links, while the inputs and outputs are the external links of the network.
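The component counts follow directly from the description above; a small sketch:

```python
# Counting the components of the symmetric three-stage Clos network
# C(n, m, r) directly from the description above.
def clos_counts(n, m, r):
    inputs = r * n                  # n inlets on each of r input crossbars
    outputs = r * n                 # n outlets on each of r output crossbars
    crossbars = r + m + r           # input, center, and output stages
    internal_links = 2 * m * r      # each center crossbar links once to
                                    # every input and every output crossbar
    return inputs, outputs, crossbars, internal_links

assert clos_counts(2, 3, 4) == (8, 8, 11, 24)
```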
Wenqing Dou, Frank K. Hwang
Correlation Inequalities for Partially Ordered Algebras
The proof of many an inequality in real analysis reduces to the observation that the square of any real number is nonnegative. For example, the AM-GM inequality \(\frac{1}{2}(a + b) \ge \sqrt{ab}\) is a restatement of the fact that \((\sqrt{a} - \sqrt{b})^2 \ge 0.\)
On the other hand, there exist useful notions of positivity in rings and algebras for which this ‘positive squares’ property does not hold, viz. the square of an element is not necessarily positive. An interesting example is provided by the polynomial algebra R[x], where one decrees a polynomial to be positive if all its coefficients are positive. A noncommutative example is furnished by the algebra of n × n matrices, where one declares a matrix to be positive if all its entries are positive. Neither example satisfies the positive squares property; however, in each case the product of two positive elements is positive.
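Both halves of the R[x] remark are easy to verify with coefficient lists: products of positive polynomials are positive, while a square such as (x − 1)² = x² − 2x + 1 is not. A sketch:

```python
# Verifying the R[x] remark with coefficient lists (lowest degree first):
# products of coefficientwise-positive polynomials are positive, but the
# square of (x - 1) is not.
def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def positive(p):
    return all(c > 0 for c in p)

p, q = [1, 2], [3, 1]                    # 1 + 2x and 3 + x
assert positive(poly_mul(p, q))          # (1 + 2x)(3 + x) = 3 + 7x + 2x^2

r = [-1, 1]                              # x - 1
assert poly_mul(r, r) == [1, -2, 1]      # (x - 1)^2 = 1 - 2x + x^2
assert not positive(poly_mul(r, r))      # a square need not be positive
```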
Siddhartha Sahi
The Kruskal Count
The Kruskal Count is a card trick invented by Martin D. Kruskal (who is well known for his work on solitons) and described in Fulves and Gardner (1975) and Gardner (1978, 1988). In this card trick a magician “guesses” one card in a deck of cards, which is determined by a subject using a special counting procedure that we call Kruskal's counting procedure. The magician has a strategy, explained below, which with high probability will identify the correct card.
Kruskal's counting procedure goes as follows. The subject shuffles a deck of cards as many times as he likes. He mentally chooses a (secret) number between one and ten. The subject turns the cards of the deck face up one at a time, slowly, and places them in a pile. As he turns up each card he decreases his secret number by one and he continues to count this way till he reaches zero. The card just turned up at the point when the count reaches zero is called the first key card and its value is called the first key number. Here the value of an Ace is one, face cards are assigned the value five, and all other cards take their numerical value. The subject now starts the count over, using the first key number to determine where to stop the count at the second key card. He continues in this fashion, obtaining successive key cards until the deck is exhausted. The last key card encountered, which we call the tapped card, is the card to be “guessed” by the magician.
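Kruskal's counting procedure amounts to repeated jumps through the deck by the value of the current key card. A simulation sketch (the card values follow the conventions above; the magician's matching strategy is only noted in a comment):

```python
import random

# Simulation sketch of Kruskal's counting procedure: aces count 1, face
# cards count 5, other cards their face value; jump through the deck by
# the value of the current key card until no further jump fits.
def tapped_card(deck, secret):
    """deck: list of card values; secret: starting count in 1..10.
    Returns the index of the last key card (the tapped card)."""
    pos = secret - 1          # the secret-th card is the first key card
    while pos + deck[pos] < len(deck):
        pos += deck[pos]      # jump ahead by the key number
    return pos

values = {"A": 1, "J": 5, "Q": 5, "K": 5}
ranks = ["A"] + [str(i) for i in range(2, 11)] + ["J", "Q", "K"]
deck = [values[r] if r in values else int(r) for r in ranks] * 4

random.seed(1)
random.shuffle(deck)

# The magician follows the same procedure from his own secret number;
# chains that ever land on a common key card coincide from then on, so
# with high probability every starting point taps the same final card.
taps = {tapped_card(deck, s) for s in range(1, 11)}
```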
Jeffrey C. Lagarias, Eric Rains, Robert J. Vanderbei
Descending Dungeons and Iterated Base-Changing
David Applegate, Marc Le Brun, N. J. A. Sloane
Updating Hardy, Littlewood and Pólya with Linear Programming
Some of the standard inequalities that mathematicians use can be proven with convexity arguments or linear programming. Perhaps others cannot, so we might say that an inequality is “simple” if there is a convexity-based proof. The Cauchy-Schwarz inequality, which may be the most famous and useful inequality ever found, is simple in this sense (Steele, 2004), but there are so many proofs of it that it seems that almost any method will give one, so it may be that it is simple in any sense. The Schwarz inequality can be stated for a general measure space, but it easily reduces to the statement that
$$EX^2 EY^2 \ge (EXY)^2 $$
where X and Y are any r.v.'s on a common probability space, Ω. Equality holds if and only if X and Y are proportional.
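The inequality is easy to sanity-check numerically on a finite sample space with equal weights, where E is just an average. A sketch with simulated data (the sample distributions are arbitrary choices):

```python
import random

# Numerical sanity check of E[X^2] E[Y^2] >= (E[XY])^2 on a finite
# sample space with equal weights, so that E is an average.
random.seed(0)
n = 1000
X = [random.gauss(0, 1) for _ in range(n)]
Y = [random.gauss(1, 2) for _ in range(n)]

def E(Z):
    return sum(Z) / len(Z)

lhs = E([x * x for x in X]) * E([y * y for y in Y])
rhs = E([x * y for x, y in zip(X, Y)]) ** 2
assert lhs >= rhs

# Equality holds (up to rounding) when Y is proportional to X.
Y2 = [3 * x for x in X]
lhs2 = E([x * x for x in X]) * E([y * y for y in Y2])
rhs2 = E([x * y for x, y in zip(X, Y2)]) ** 2
assert abs(lhs2 - rhs2) < 1e-9 * lhs2
```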
Larry Shepp
Metadata
Title
The Mathematics of Preference, Choice and Order
Edited by
Professor Steven J. Brams
Professor William V. Gehrlein
Professor Fred S. Roberts
Copyright year
2009
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-79128-7
Print ISBN
978-3-540-79127-0
DOI
https://doi.org/10.1007/978-3-540-79128-7