People employ both intuitive and deliberate thinking when making critical decisions. Chapter 4 described how our intuitions give rise to cognitive biases that impair critical decision-making and open the door to the Law of Unintended Consequences (LUC). Robert K. Merton also identified constraints on deliberate thinking as equally important contributors to LUC. This chapter examines Herbert Simon’s pivotal cognitive scientific research on these bounds on human rationality. Section 5.1 provides a brief sketch of Simon’s accomplishments. Section 5.2 introduces Simon’s concept of bounded rationality and contrasts it with Tversky and Kahneman’s research on cognitive biases. Sections 5.3 through 5.5 unpack the key elements of Simon’s research: his critique of rational choice theory, his cognitive model for deliberate (System 2) thinking, and his cognitive model of how people reason in order to make critical decisions. Section 5.6 recaps the significance of Simon’s research.
Feigenbaum [5].
Simon and his co-workers developed the first symbolic programming language, the Information Processing Language (IPL), the precursor to John McCarthy’s List Processing Language (LISP) that dominated AI for many decades. He used it to create a program that automatically derived theorems for set theory and number theory. Simon [19].
An example cryptarithmetic puzzle is DONALD + GERALD = ROBERT, where the goal is to assign the digits 0–9 to the letters uniquely, decrypting the code into a correct arithmetic addition problem.
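To make the size of the search space concrete, here is a minimal brute-force sketch (an illustration, not from the text) that solves such puzzles by trying digit assignments until the addition checks out; function and variable names are invented:

```python
from itertools import permutations

def solve_cryptarithm(addends, total):
    """Find a unique digit for each letter so that sum(addends) == total."""
    letters = sorted(set("".join(addends) + total))
    # Net coefficient of each letter in sum(addends) - total == 0
    coef = dict.fromkeys(letters, 0)
    for word in addends:
        for power, ch in enumerate(reversed(word)):
            coef[ch] += 10 ** power
    for power, ch in enumerate(reversed(total)):
        coef[ch] -= 10 ** power
    coefs = [coef[ch] for ch in letters]
    leading = [letters.index(w[0]) for w in addends + [total]]
    for digits in permutations(range(10), len(letters)):
        if any(digits[i] == 0 for i in leading):
            continue  # numbers may not start with a leading zero
        if sum(c * d for c, d in zip(coefs, digits)) == 0:
            return dict(zip(letters, digits))
    return None

solution = solve_cryptarithm(["DONALD", "GERALD"], "ROBERT")
```

With all ten letters distinct, this scans up to 10! = 3,628,800 assignments, which is exactly the kind of exhaustive search bounded human cognition cannot perform.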
Simon [21] viewed intuitions (and cognitive biases) as mental models that people inevitably fall back on as a lens for viewing, understanding, and navigating reality, given our cognitive limits in coping with the world. As Heuer [7] notes, these intuitions are “not always well adapted to the requirements of the real world.”
Their research methodologies differed as well as their agendas. Tversky and Kahneman used questionnaires that elicited judgments and choices. These tests were tailored to highlight and isolate where and how our System 1 intuitions fail, in the sense of deviating from what a rational actor would judge or decide. In contrast, Simon investigated what people do to get things right. In addition to watching and interviewing subjects as they performed tasks in the lab, Simon developed computer-based simulations of problem solving and decision-making to emulate human cognitive structures and processes. In doing so, he helped conceive many of the key ideas and tools that catalyzed AI.
Although Kahneman oriented his research towards understanding System 1, he readily acknowledged the second horn of Merton’s LUC dilemma: “System 2 is not a paragon of rationality. Its abilities are limited as is the knowledge to which it has access. We do not always think straight when we reason, and its errors are not always due to intrusive and incorrect intuitions. Often we make mistakes because we (our System 2) does not know any better.” Kahneman [10].
Simon [18], pp. xxviii–xxx.
Simon [18] offers another rationale for economists’ attachment to rational agent theory: its premises of utility maximization and perfect knowledge “freed economists from any dependence on psychology”. In contrast, bounded rationality entails a deep study of cognitive and behavioral psychology. Bookstaber [2] makes a similar point: “Modern neoclassical economics sweeps humanity off the stage. It prefers to use mathematical models of a representative agent with stable preferences …”.
Kahneman [10] writes: “…The definition of rationality as coherence is impossibly restrictive; it demands adherence to rules of logic that a finite mind is not able to implement…our research only showed that Humans are not well described by the rational-agent model.” Klein’s studies of first responders (1998), Simon’s studies of chess grandmasters (2011), and other cognitive experiments show that people generally consider only one or at most a few decision options. Finally, economists criticize the procedural feasibility of defining, much less computing, a coherent and defensible metric for the utility of outcomes that suitably reflects non-financial benefits of decision options. Levin and Milgrom [13] and Gollier [6].
For example, options to increase growth include: acquiring other companies; licensing products or technologies (from or to others); developing new products or new marketing and sales strategies; and combinations thereof.
Simon [21].
In fact, Simon’s argument holds even for decisions where our System 1 intuitions don’t distort our judgments or choices: whether or not our judgments are correct, we cannot identify optimal choices.
Simon’s argument is epistemological, grounded in the finite nature of human knowledge and analytical capacity, whereas Tversky and Kahneman’s argument depends on behavioral dispositions (System 1) that can sometimes be overridden. Bookstaber [2] offers a more recent critique of rational choice theory in line with Simon’s bounded rationality, focusing on arguments relating to complexity such as computational intractability, emergent behaviors, and history dependence (i.e., non-ergodicity).
The executive control function is an active area of cognitive research. Newell and Bröder [16].
Sudoku is a popular logic puzzle that consists of a 9 × 9 grid of cells, divided into 3 × 3 blocks, with a few cells filled in with numbers from 1 to 9. The objective is to fill in all of the remaining cells, subject to the constraint that no number can occur more than once in any row, column, or block. Sudoku puzzles are solved by applying deductive logic to fill in cell values. For example, if 8 values are already filled in a row, you can deduce the ninth value by elimination. Another useful pattern is to identify two cells in a row or column that can only assume the same two values, which allows you to eliminate those values as candidates for the other empty cells in that row or column.
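The elimination rule just described can be sketched in a few lines (a hypothetical illustration; 0 marks an empty cell):

```python
def fill_by_elimination(unit):
    """Fill the last empty cell (0) of a 9-cell row, column, or block by elimination."""
    if unit.count(0) == 1:
        missing = (set(range(1, 10)) - set(unit)).pop()  # the one absent value
        unit[unit.index(0)] = missing
    return unit

row = [5, 3, 0, 6, 7, 8, 9, 1, 2]
fill_by_elimination(row)  # the empty cell can only be 4
```

Applying such local deductions repeatedly, rather than enumerating all grids, is the human-style shortcut through an otherwise vast search space.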
Simon’s architecture for AI systems employed heuristic rules called productions. Here is an example expert production rule for automatically diagnosing infectious blood diseases. (Italicized terms are properties, underlined terms are symbolic values; the IF clauses represent tests, the THEN clause an action, namely a diagnostic conclusion.) Buchanan and Shortliffe [3]:
IF: (1) The identity of the organism is not known with certainty and (2) The stain of the organism is gramneg and (3) The morphology of the organism is rod, and (4) The aerobicity of the organism is aerobic
THEN: There is strongly suggestive evidence (.8) that the class of the organism is enterobacteriaceae.
Each such individual “chunk” can be quite complex and informationally dense. For example, chess grandmasters employ “condensed” representations of chess board positions as single (complexly encoded) patterns. Mnemonic methods such as “memory palaces” also extend this limit. Miller [15].
We commonly use external resources to cope with large-scale problems or decisions as solutions are developed, refined, and shared (e.g., blueprints, schematics, word processing documents). These tools allow people to swap chunks of information into and out of short-term memory while preserving an overall context. They scale our capacity for attention and long-term recall somewhat (i.e., by degree, but not by kind).
Simon [18] writes: “Administrative theory must be concerned with the limits of rationality and the manner in which organizations affect those limits.” Organizations manage complex tasks by leveraging their hierarchical structures to direct work by individuals.
Even when we are forced to make guesses, the key to a reliable process for making judgments is to estimate the level of uncertainty quantitatively, for example using confidence intervals. Hubbard [9].
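For instance, a two-sided 90% confidence interval for an estimated quantity can be computed from sample data with the normal approximation (a generic sketch; the sample values are made up):

```python
import math
import statistics

def ci90(sample):
    """Two-sided 90% confidence interval for the mean (normal approximation)."""
    mean = statistics.mean(sample)
    sem = statistics.stdev(sample) / math.sqrt(len(sample))  # standard error
    z = 1.645  # z-score bounding the central 90% of a normal distribution
    return mean - z * sem, mean + z * sem

low, high = ci90([10, 12, 11, 13, 9, 12, 10, 11, 12, 10])
```

Reporting the interval (low, high) rather than a point guess makes the uncertainty of the judgment explicit.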
Simon models this process using “productions”—conditional (“if-then”) rules that associate particular conditions and changes in states of the world with actions that will bring such changes about. Conditions can be either goal-driven (e.g., “I am trying to achieve X”) or data-driven (e.g., “Y is already done” or “I know that Y”).
These include variations across current and potential product and service lines, marketing and sales programs and channels, and relevant competitors in all of your markets. Not all of these options are equally plausible, but for Simon, this is a search problem, not one of formulating alternatives.
For critical decisions, a goal such as increasing annual growth by 5% decomposes into sub-goals. The actions required to achieve a strategic goal generally must be decomposed similarly into specific, often interdependent sub-actions. For example, expanding production capacity often resolves into an enormous number of subordinate decisions about plants, equipment, and vendors, involving tradeoffs of quality, speed, cost, financing, taxes, resources, plans, and schedules (Sterman [22]). The resulting hierarchies allow fine-grained projections of how sub-actions close gaps between the current state and the goal; they also spawn fine-grained assumptions about contingencies that expand the space of possible action-outcome projections that must be searched.
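Such a hierarchy can be modeled as a tree that expands a strategic goal into the primitive sub-actions that realize it (a toy sketch; the goals and actions are invented):

```python
def leaf_actions(goal, tree):
    """Recursively expand a goal into the primitive sub-actions that realize it."""
    subgoals = tree.get(goal, [])
    if not subgoals:
        return [goal]  # a leaf: a concrete action rather than a further goal
    actions = []
    for sub in subgoals:
        actions.extend(leaf_actions(sub, tree))
    return actions

goal_tree = {
    "increase annual growth 5%": ["expand production capacity", "new marketing"],
    "expand production capacity": ["select plant sites", "buy equipment",
                                   "choose vendors"],
}
plan = leaf_actions("increase annual growth 5%", goal_tree)
```

Even this tiny tree shows how quickly a single strategic goal fans out into subordinate decisions, each carrying its own assumptions and contingencies.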
Situational complexity would be even worse were it not for the temporal discounting aspect of bounded rationality: “…the events and prospective events that enter into our value systems are all dated, and the importance we attach to them generally drops off sharply with the passage of time. By applying a heavy discount factor to events, attenuating them with their remoteness in time and space, we reduce our problems of choice to a size commensurate with our limited computing capabilities.” Simon [20].
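Simon’s “heavy discount factor” can be illustrated with a standard discounting formula (a generic sketch; the rate is arbitrary and chosen only to show how fast remote events shrink):

```python
def discounted_importance(value, years, rate=0.30):
    """Attenuate an event's importance by its remoteness in time.

    A heavy rate (here a hypothetical 30% per year) makes distant events
    nearly negligible, shrinking the space of choices worth analyzing.
    """
    return value / (1 + rate) ** years
```

At this rate an event five years out retains only about a quarter of its present importance, which is exactly how discounting cuts the choice problem down to a tractable size.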
Simon [20] argues that natural and artificial systems cope with unpredictable environments by being adaptive and maintaining robust feedback mechanisms. For example, we cannot predict blizzards with great precision, but we can recover from them in cities by having snowplows ready.
Simon [20], p. 30. Simon also asserts that aspirations are incommensurable: “There is no simple mechanism for comparison between dimensions. This means that a large gain along one dimension may be needed to compensate for a small loss along another – hence the system’s net satisfactions are history-dependent, and it is difficult for people to balance compensatory offsets.”
Lo [14] notes that economists criticized Simon’s concept of satisficing because the model gives no clear criterion for when to stop analyzing options, whereas optimality is an unambiguous, mathematically precise notion. He offers a defense of Simon in terms of heuristic rules derived by trial and error.
First responders studied by Klein [11] apparently consider one or two options at most, a draconian type of satisficing. These decisions are often urgent and life-threatening, but not critical in our sense.
Even powerful AI systems such as IBM’s Deep Blue chess program don’t search this enormous space exhaustively; instead, they “prune” their searches for the next best move by restricting how deep into the game they look and by scoring candidate moves with heuristics. Human grandmasters employ very different pattern-based heuristics to compensate for their limited speed and focus of attention.
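Depth-limited search with pruning can be sketched generically as minimax with alpha-beta cutoffs (a standard technique of this family, shown here over a toy game tree with no chess specifics):

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing, moves, score):
    """Depth-limited minimax with alpha-beta pruning."""
    children = moves(state)
    if depth == 0 or not children:
        return score(state)  # stop looking deeper; evaluate the position
    if maximizing:
        best = -math.inf
        for child in children:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, moves, score))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # prune: the minimizer will never allow this line
    else:
        best = math.inf
        for child in children:
            best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                       True, moves, score))
            beta = min(beta, best)
            if alpha >= beta:
                break  # prune: the maximizer has a better line elsewhere
    return best

# Toy tree: internal nodes are lists of children, leaves are position scores.
tree = [[3, 5], [2, 9]]
value = alphabeta(tree, 10, -math.inf, math.inf, True,
                  moves=lambda s: s if isinstance(s, list) else [],
                  score=lambda s: s)
```

The depth limit and the cutoffs together are the machine analogue of bounded rationality: the program evaluates a tractable fraction of the tree rather than the whole game.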
Simon [20], p. 29.
This is not to assert that human cognition is identical with or reducible to Simon’s cognitive simulations and AI rule-based reasoning systems (i.e., similar to some neuroscientists’ reductionist claims that consciousness is nothing more than neurological events and processes). Rather, the fact that his models and intelligent systems perform in ways that compare qualitatively to human problem solvers and decision-makers warrants a claim that human cognition likely involves similar kinds of structures and processes.
Augier, Mie, and James G. March. 2001. Remembering Herbert A. Simon. Public Administration Review 61(4): 396–402.
Bookstaber, Richard. 2007. A Demon of Our Own Design: Markets, Hedge Funds, and the Perils of Financial Innovation. Hoboken, NJ: John Wiley & Sons.
Buchanan, Bruce G., and Edward H. Shortliffe. 1985. Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Reading, MA: Addison-Wesley. Also available at www.aaaipress.org/Classic/Buchanan/Buchanan07.pdf. Accessed 5 Jul 2019.
Dennett, Daniel C. 2017. From Bacteria to Bach and Back: The Evolution of Minds. New York: W. W. Norton & Company.
Feigenbaum, Edward A. 2001. Herbert A. Simon, 1916–2001. Science 291(5511): 2107.
Gollier, Christian. 2001. The Economics of Risk and Time. Cambridge, MA: MIT Press.
Heuer, Richards J., Jr. 1999. The Psychology of Intelligence Analysis. Langley, VA: Central Intelligence Agency. https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/psychology-of-intelligence-analysis/PsychofIntelNew.pdf. Accessed 05 Jul 2019.
Hogarth, Robin M. 1980. Judgement and Choice: The Psychology of Decision. Hoboken, NJ: John Wiley & Sons.
Hubbard, Douglas W. 2007. How to Measure Anything: Finding the Value of Intangibles in Business. Hoboken, NJ: John Wiley & Sons.
Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Klein, Gary A. 1998. Sources of Power: How People Make Decisions. Cambridge, MA: MIT Press.
Krippendorff, Kaihan. 2011. Outthink the Competition: How a New Generation of Strategists Sees Options Others Ignore. Hoboken, NJ: John Wiley and Sons.
Levin, Jonathan and Paul Milgrom. 2004. Introduction to Choice Theory. Available at https://web.stanford.edu/~jdlevin/Econ%20202/Choice%20Theory.pdf. Accessed 05 Jul 2019.
Lo, Andrew W. 2017. Adaptive Markets. Financial Evolution at the Speed of Thought. Princeton: Princeton University Press.
Miller, George A. 1956. The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information. Psychological Review 63(2): 81–97.
Newell, Ben R., and Arndt Bröder. 2008. Cognitive processes, models and metaphors in decision research. Judgment and Decision-making, 3(3): 195–204.
Resnick, Mitchell. 1994. Turtles, Termites, and Traffic Jams, Cambridge, MA: MIT Press.
Simon, Herbert A. 1976. Administrative Behavior, 3rd ed. New York: Free Press.
___. 1991. Models of My Life. New York: Basic Books.
___. 1998. The Sciences of the Artificial. Cambridge, MA: MIT Press.
___. 2002. From Substantive to Procedural Rationality. In Models of Man: Social and Rational - Mathematical Essays on Rational Human Behavior in a Social Setting, ed. Herbert Simon. New York: John Wiley and Sons.
Sterman, John D. 1989. Misperceptions of feedback in dynamic decision-making. Organizational Behavior and Human Decision Processes 43(3): 301–335.
Zsambok, Caroline E., and Gary Klein (Eds.). 1997. Naturalistic Decision Making. New York: Lawrence Erlbaum Associates, Inc.
Rational Decision-Making
Richard M. Adler
Chapter 5