Merton’s analysis of the Law of Unintended Consequences (LUC) criticizes rational choice theory, a cornerstone of modern economics. Section 4.1 takes a brief excursion into rational choice theory and will illuminate what Merton (and his successors) were reacting to and why. Following that, we will review the prevailing cognitive explanation for psychological triggers of LUC, in four parts. Section 4.2 provides a brief history of the research collaboration by Amos Tversky and Daniel Kahneman, who pioneered the study of distortions in judgments and choices from a cognitive perspective. Section 4.3 surveys the current “catalog” of cognitive factors that provoke LUC. Section 4.4 provides an overview of parallel research on judgment errors that arise when people make predictive judgments about the dynamics of changing situations. These cognitive factors are underappreciated triggers of LUC. Section 4.5 sketches a dynamic model that explains how these cognitive factors act to distort critical decision-making. Section 4.6 closes by identifying the cognitive biases that most strongly influence the phases of the critical decision-making process presented in Chap. 2.
Ericsson and Simon [10] describe techniques for interviewing experts adapted from field sociologists. Secondary sources include internal corporate documents, regulatory filings and annual reports, articles, interviews, case studies and books.
We ignore neuroscientific accounts that go deeper still, trying to identify the brain structures and biochemical processes that ground cognitive activity. In 1936, cognitive psychology, economics, and decision theory had not advanced to the point where Merton could have drawn on them to produce a rigorous scientific explanation à la Mendelian heredity.
Coherence also requires that an individual holds a definite preference regarding each outcome (although there can be ties). Completeness means that for any two outcomes, the individual prefers the first, prefers the second, or is indifferent between them. Consistency hinges on transitivity: if a person prefers A to B and prefers B to C, then they prefer A to C. Finally, every outcome is judged at least as good as itself (i.e., the preference relation is reflexive). Hargreaves Heap and Varoufakis [16].
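These coherence conditions can be stated formally. The following minimal sketch (our illustration, not drawn from Hargreaves Heap and Varoufakis; the outcomes and preference pairs are hypothetical) encodes a weak preference relation (“at least as good as”) as a set of ordered pairs and checks completeness, transitivity, and reflexivity.

    from itertools import product

    def is_complete(relation, outcomes):
        # Completeness: any two outcomes are comparable (ties appear as both pairs).
        return all((a, b) in relation or (b, a) in relation
                   for a, b in product(outcomes, repeat=2))

    def is_transitive(relation):
        # Transitivity: if A >= B and B >= C, then A >= C.
        return all((a, c) in relation
                   for (a, b) in relation
                   for (b2, c) in relation if b == b2)

    def is_reflexive(relation, outcomes):
        # Reflexivity: every outcome is at least as good as itself.
        return all((a, a) in relation for a in outcomes)

    outcomes = {"A", "B", "C"}
    prefs = {("A", "A"), ("B", "B"), ("C", "C"),   # reflexive pairs
             ("A", "B"), ("B", "C"), ("A", "C")}   # A over B over C
    print(is_complete(prefs, outcomes),
          is_transitive(prefs),
          is_reflexive(prefs, outcomes))           # True True True

Dropping the pair ("A", "C") would make the relation fail both completeness and transitivity, which is exactly the incoherence rational choice theory rules out.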
The critique presented here is based on observations of human behavior and on cognitive scientific theories. Other objections to rational choice theory have been raised of a more practical, methodological nature, such as determining exactly which factors contribute to utility, how they should be estimated, and how their values should be confirmed. Levin and Milgrom [26].
Taleb [45].
As Merton [31] notes, rationality does not ensure success: actions that represent the most probable means of attaining desired ends may fail, while irrational actions such as hunches may succeed in achieving desired outcomes.
For example, in criminal law, punishment amounts to a cost to be traded off against ill-gotten gains. Rational choice theory underpins game theory (cf. Sect. 9.3) and is commonly used to analyze problems in ethics and in social and political philosophy.
Thaler [48] offers a historical account of behavioral economics from a founder’s perspective.
Stanovich and West [40].
Lewis [28].
Tversky and Kahneman [52] draw an analogy between judgments involving similarity and judgments about distance. We intuitively assess distance by the clarity of the objects we see, which is often, but not always, a reliable heuristic. For example, the moon or the setting sun on the horizon can appear very sharp, but this does not mean they are closer. Inferences based on similarity are equally fallible: besides ignoring base rates, they are insensitive to sample size (which bears on variability and confidence levels) and to probabilistic judgments about patterns in sequences. See also Tversky [50] and Tversky and Kahneman [51].
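A worked example (ours, not Tversky and Kahneman’s; the numbers are illustrative) shows what ignoring base rates costs. Suppose a screening test detects 90% of true cases and falsely flags 10% of healthy people, and the condition affects 1% of the population. Similarity-based intuition fixates on the 90% figure, but Bayes’ rule gives a much smaller posterior.

    def posterior(base_rate, sensitivity, false_positive_rate):
        # Bayes' rule: P(condition | positive test result)
        true_pos = base_rate * sensitivity
        false_pos = (1 - base_rate) * false_positive_rate
        return true_pos / (true_pos + false_pos)

    print(round(posterior(0.01, 0.90, 0.10), 3))  # 0.083, nowhere near the intuitive 0.90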
Kahneman and Tversky [24].
Kahneman [21] observes that human intuitions are also lazy and expedient. They are apt to surreptitiously substitute answers to questions that are simpler than the judgment actually needed. For example, we unwittingly apply a heuristic judgment about the president’s current popularity to answer the question of the president’s projected popularity in 6 months, which requires a more involved predictive estimate.
This fallacy is ubiquitous and well-documented. See, for example, Snowden [58], Watts [53], and Taleb [45]. Even expert statisticians and data scientists fall prey to this fallacy when they “over-fit” a model (i.e., by creating too many parameters relative to the number of observations). Analysts then “chase their own tails” because the model captures random noise rather than genuine relationships in the data. Silver [37]. Many neuroscientists believe that our brains are “wired” to look for and match patterns at multiple perceptual and cognitive levels. Hawkins [15].
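A minimal sketch of over-fitting (our illustration, with made-up data): a high-degree polynomial drives in-sample error toward zero by fitting the noise around a genuinely linear relationship, yet its wiggles generalize poorly to new observations.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 20)
    y = 2 * x + rng.normal(0, 0.3, size=x.size)   # true relationship is linear

    for degree in (1, 10):
        coeffs = np.polyfit(x, y, degree)          # more parameters, smaller residuals
        fit = np.polyval(coeffs, x)
        print(degree, round(float(np.mean((y - fit) ** 2)), 4))
    # The degree-10 fit reports far lower in-sample error, but only because
    # it is chasing random noise rather than the underlying relationship.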
Kahneman [22].
Christensen [5].
The term “framing” here refers to alternative descriptions of the same choice. It differs from “framing” in the sense used in Chap. 2, namely specifying a set of goals, objectives, values, and metrics that provides a shared context for a team to define decision options and compare projected outcomes and risks.
Schrage [36].
Kahneman’s account is often called a “Dual Process Theory of Cognition” or DPT. A third system, Perception, is ignored here for simplicity. Note that “system” does not refer to specific neurophysiological structures in the brain but to an abstract model for sets of cognitive processes and behaviors. Stanovich [38].
The ability to respond rapidly to threats is clearly an advantage for survival, suggesting that some of our generalized intuitions are at least partially “hardwired” into our neuroanatomy and cognitive processes through evolution, and then combined with knowledge acquired from experience, interacting with our physical environments and other people (e.g., recognizing faces and social cues). Dennett [8].
Early artificial intelligence (AI) researchers theorized that these experiences are gradually condensed into patterns such as decision rules (e.g., if you see symptoms X, Y, and Z, infer disease Q) that the mind can recall and apply rapidly. Once knowledge is distilled in this manner, human experts are often unable to recall the formative individual experiences. Researchers adapted special task-oriented elicitation techniques to “trick” experts into describing their intuitive reasoning. Ericsson and Simon [10].
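The following sketch shows the flavor of such condensed decision rules; the symptoms, diagnoses, and rule format are hypothetical, not taken from any expert system cited here.

    # Each rule pairs a set of required symptoms with a conclusion.
    RULES = [
        ({"fever", "stiff neck", "headache"}, "suspect meningitis"),
        ({"fever", "cough", "fatigue"}, "suspect influenza"),
    ]

    def infer(observed):
        # Fire the first rule whose conditions are all present: a distilled
        # pattern applied rapidly, without revisiting the individual cases
        # from which it was learned.
        for conditions, conclusion in RULES:
            if conditions <= observed:
                return conclusion
        return "no rule matches; escalate to deliberate analysis"

    print(infer({"fever", "cough", "fatigue", "headache"}))  # suspect influenza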
Schrage [36] and Hogarth [19]. The last two conditions entail that the judgments or choices display insufficient uniformity to “build a sense of typicality,” precluding the possibility for effective learning and adjustment. Hogarth cites recruiting as a case where effective learning is not possible, because employers can’t access performance outcomes for candidates who are rejected or turn down offers.
Kahneman [21] recounts experiments showing that we consume more energy, specifically glucose, when we engage in reflective thought than when we engage in intuitive (System 1) thought. System 2 not only consumes energy, it draws on a limited budget of attention and effort (which must be shared with our efforts at self-control). And, contrary to claims by many millennials, people generally do not multi-task effectively, particularly when it comes to tasks that require System 2 thinking. Ophir et al. [34].
Kahneman et al. [22].
The dynamic relationship between Systems 1 and 2 is probably the least clearly articulated aspect of this cognitive model. System 2 is posited to have two main functions, monitoring and response. Monitoring consists of tracking System 1 activities in preparation for responding, suggesting a sequential and hierarchical relationship. Does System 1 raise alerts? But System 1 doesn’t necessarily know what it doesn’t know. Or does System 2 somehow determine the need to intervene, despite the fact that its monitoring is kept “low-level” to conserve energy? Margolis [30] proposes an alternative, less asymmetrical and more cooperative pattern of interaction between System 1 and System 2. See also Stanovich [39].
In contrast, System 2 reasoning is conscious and deliberate, so when it engages to answer a question or solve a problem and is unable to do so, you are immediately aware of that failure.
Stanovich [39] partitions System 2 into two largely independent pieces—algorithmic, which encompasses reasoning skills and coordination of thinking tasks, and reflective, which relates to cognitive styles and thinking “dispositions” such as superstition and dogmatism vs. open-mindedness. The reflective side also determines a person’s ability and willingness to keep System 2 engaged and suppress System 1’s shallow thinking. Roughly, the algorithmic piece equates to intelligence, while Stanovich equates the reflective piece with rationality. He attributes the errors caused by cognitive biases to “lazy thinking”—flaws in the reflective component of System 2, rather than to the intelligence piece.
Kahneman [21] characterizes System 1 as the “experiential self” and System 2 as the “remembering self”. And System 1’s memories are often unreliable. This gives rise to conflicts that compromise System 2’s efforts to reason rigorously. For example, the peak-end rule rates experiences based on their peak and final instants, whereas System 2 tries to sum up the impacts from all instants.
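A small worked example (our numbers, loosely modeled on the kind of experiment Kahneman describes) makes the conflict concrete: adding a milder tail to an unpleasant episode raises its total discomfort but lowers its peak-end score, so the two accounting schemes rank the episodes in opposite orders.

    episode_a = [2, 3, 8, 7]          # shorter episode, ends on a bad note
    episode_b = [2, 3, 8, 7, 4, 2]    # same start, plus a milder tail

    def peak_end(ratings):
        # Memory-based rating: average of the worst and final instants.
        return (max(ratings) + ratings[-1]) / 2

    for name, ep in (("A", episode_a), ("B", episode_b)):
        print(name, "peak-end:", peak_end(ep), "total:", sum(ep))
    # A: peak-end 7.5, total 20;  B: peak-end 5.0, total 26.
    # B contains strictly more discomfort, yet peak-end remembers it as better.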
Hogarth [20].
See, for example, Hogarth [20], Kleinmuntz [25], Sterman [41, 42], and Cronin et al. [6]. Many of these experiments study individuals making judgments about a continuous process, where initial actions can be adjusted based on observations of interim outcomes: this feedback enables learning and correction of decisions. Of course, critical decisions rarely allow continuous feedback and adjustment. That said, the primary findings concerning judgments about dynamics apply directly to analyzing options for critical decisions.
Cronin et al. [6]. The authors call the problem of judging accumulations the “stock-flow failure.”
Solving such problems analytically normally requires formal mathematical tools such as calculus or difference equations. The correlation heuristic is the dynamic counterpart of the representativeness heuristic identified by Tversky and Kahneman, which substitutes probabilities drawn from the limited samples we observe for more difficult statistical judgments about the general population of interest.
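A minimal simulation (our illustration, with made-up flows) shows why the correlation heuristic fails on accumulation problems: a stock keeps rising as long as inflow exceeds outflow, so it peaks when the flows cross, not when the inflow peaks.

    inflow = [8, 10, 12, 10, 8, 6, 4]   # rises, then falls
    outflow = [7] * 7                   # constant drain

    stock, history = 0, []
    for i, o in zip(inflow, outflow):
        stock += i - o                  # the stock accumulates the net flow
        history.append(stock)

    print(history)                      # [1, 4, 9, 12, 13, 12, 9]
    # The correlation heuristic predicts the stock mirrors the inflow and
    # peaks with it (step 3); in fact it keeps rising until the inflow drops
    # below the outflow (step 5), which is what judging accumulation requires.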
Cronin et al. [6].
Sterman [43], p. 686.
Sterman [41] reports similar misperceptions of dynamics involving feedback loops in an experiment where subjects try to manage an entire model economy (viz., balancing aggregate capital investment with demand) rather than a single supply chain.
Other examples of latency effects due to reservoirs include home heating systems that use thermal masses such as steam or hot water radiators to accumulate and transfer heat by conduction. Thus, much of the heat transfer occurs after the furnace, the primary heat source, shuts down. Dorner [9]. The heat and CO2 absorbed by the ocean and vegetation play a similar role in global warming. Even if we were to reduce CO2 emissions dramatically, the reservoirs of CO2 and heat would continue to produce further climate change. Cronin et al. [6].
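The lag is easy to reproduce in a toy simulation (ours; the coefficients are illustrative, not a calibrated thermal model): the furnace charges a thermal mass, and the room keeps warming after the furnace shuts off because the reservoir continues to discharge its stored heat.

    mass_temp, room_temp = 20.0, 15.0
    for step in range(12):
        furnace_on = step < 6                     # furnace runs 6 steps, then stops
        if furnace_on:
            mass_temp += 3.0                      # furnace charges the thermal mass
        transfer = 0.2 * (mass_temp - room_temp)  # conduction from mass to room
        mass_temp -= 0.5 * transfer
        room_temp += 0.5 * transfer
        print(step, furnace_on, round(mass_temp, 1), round(room_temp, 1))
    # Room temperature continues to climb after step 5, well after the
    # furnace is off, because heat already stored in the mass keeps flowing.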
Dorner [9], pp. 30–33.
Descriptive accounts can go deeper still. Mendel described how heredity worked at a behavioral level; he did not provide a causal explanation at the cellular or biochemical level. Other scientists worked out cell theory, identifying chromosomes as the physical carriers of genetic information and deciphering the process of cell division (mitosis). Much later, Watson and Crick’s discovery of the double helix structure of DNA completed the causal puzzle at the biochemical level, decoding the DNA molecule and the process that replicates it. An analogously deep account of cognitive biases would identify relevant areas of the brain (e.g., using neuroimaging methods such as fMRI) and describe neurochemical processes in brain cells. This level of causal explanation is overkill for our purposes, for which Tversky and Kahneman’s account (at a Mendelian behavioral level) suffices.
Hogarth [20] makes a similar point in functional cognitive terms: different biases influence how we acquire information from our environment, process it, devise responses, interpret results, and learn from them.
All URLs Accessed 05 Jul 2019.
Benson, Buster. 2016. Cognitive bias cheat sheet. https://betterhumans.coach.me/cognitive-bias-cheat-sheet-55a472476b18.
Betsch, Tilmann. 2008. The Nature of Intuition and Its Neglect in Research on Judgment and Decision-making. In: Henning Plessner, Cornelia Betsch, and Tilmann Betsch (Eds), Intuition in Judgment and Decision-Making (pp. 3–22). New York: Lawrence Erlbaum Associates.
Bookstaber, Richard. 2017. The End of Theory: Financial Crises, the Failure of Economics, and the Sweep of Human Interaction. Princeton: Princeton University Press.
Brunswik, Egon. 1956. Perception and the Representative Design of Experiments. 2nd edition. Berkeley, CA: University of California Press.
Christensen, Clayton M. 1997. The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail. Boston, MA: Harvard Business School Press.
Cronin, Matthew A., Cleotilde Gonzalez, and John D. Sterman. 2009. Why don’t well-educated adults understand accumulation? A challenge to researchers, educators, and citizens. Organizational Behavior and Human Decision Processes. 108(1): 116–130. CrossRef
Dawes, Robyn M. 1979. The Robust Beauty of Improper Linear Models in Decision Making. American Psychologist. 34(7): 571–582. Available at https://statmodeling.stat.columbia.edu/2013/08/14/the-robust-beauty-of-improper-linear-models-in-decision-making/. CrossRef
Dennett, Daniel C. 2017. From Bacteria to Bach and Back: The Evolution of Minds. New York: W. W. Norton & Company.
Dorner, Dietrich. 1996. The Logic of Failure: Recognizing and Avoiding Error in Complex Situations. New York: Basic Books.
Ericsson, K. Anders, and Herbert A. Simon. 1993. Protocol Analysis: Verbal Reports as Data. Revised edition. Cambridge, MA: MIT Press.
Finkelstein, Sydney, Jo Whitehead, and Andrew Campbell. 2008. Think Again: Why Good Leaders Make Bad Decisions and How to Keep It from Happening to You. Boston, MA: Harvard Business School Press.
Gerstein, Marc S. 2008. Flirting with Disaster: Why Accidents are Rarely Accidental. New York: Union Square Press.
Gladwell, Malcolm. 2005. Blink: The Power of Thinking without Thinking. New York: Little, Brown, and Company.
Halvorson, Heidi Grant, and David Rock. 2015. Beyond Bias. Strategy + Business. (80). Available at https://www.strategy-business.com/article/00345?gko=d11ee.
Hammond, John S., Ralph L. Keeney, and Howard Raiffa. 1998. The hidden traps in decision-making. Harvard Business Review. 76(5): 47–58.
Hargreaves Heap, Shaun, and Yanis Varoufakis. 2004. Game Theory: A Critical Introduction. 2nd edition. New York: Routledge.
Hawkins, Jeff. 2004. On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines. New York: St. Martin’s Griffin.
Heath, Chip, and Dan Heath. 2013. Decisive: How to Make Better Choices in Life and Work. New York: Crown Business.
Hodgson, Geoffrey M. 2012. On the Limits of Rational Choice Theory. Economic Thought. 1(1): 94–108.
Hogarth, Robin M. 1980. Judgement and Choice: The Psychology of Decision. Hoboken, NJ: John Wiley & Sons.
Hogarth, Robin M. 1981. Beyond discrete biases: Functional and dysfunctional aspects of judgmental heuristics. Psychological Bulletin. (90): 187–217. CrossRef
Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Kahneman, Daniel, Dan Lovallo, and Olivier Sibony. 2011. Before You Make That Big Decision. Harvard Business Review 89(6):50–60.
Kahneman, Daniel, Paul Slovic, and Amos Tversky (Eds). 1982. Judgment under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.
Kahneman, Daniel, and Amos Tversky. 1979. Prospect Theory: An Analysis of Decision under Risk. Econometrica. 47(2): 263–91. CrossRef
Kleinmuntz, Don N., John W. Mamer, and Stephen A. Smith. 1985. Cognitive Heuristics and Feedback in a Dynamic Decision Environment. Management Science 31(6): 680–702. CrossRef
Levin, Jonathan and Paul Milgrom. 2004. Introduction to Choice Theory. Available at https://web.stanford.edu/~jdlevin/Econ%20202/Choice%20Theory.pdf.
Levitt, Steven D., and Stephen J. Dubner. 2005. Freakonomics: A rogue economist explores the hidden side of everything. New York: William Morrow.
Lewis, Michael. 2017. The Undoing Project: A Friendship That Changed Our Minds. New York: W.W. Norton & Co.
Lovallo, Dan P., and Olivier Sibony. 2006. Distortions and deceptions in strategic decisions. The McKinsey Quarterly. (1): 18–29.
Margolis, Howard. 1987. Patterns, Thinking, and Cognition: A Theory of Judgment. Chicago: University of Chicago Press.
Merton, Robert K. 1936. The Unanticipated Consequences of Purposive Social Action. American Sociological Review. 1(6): 894–904. CrossRef
Meissner, Philip, Olivier Sibony, and Torsten Wulf. 2015. Are you ready to decide? McKinsey Quarterly. Available at https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/are-you-ready-to-decide.
Michel-Kerjan, Erwann, and Paul Slovic (Eds). 2010. The Irrational Economist: Making Decisions in a Dangerous World. New York: Perseus Books Group.
Ophir, Eyal, Clifford Nass, and Anthony D. Wagner. 2009. Cognitive control in media multitaskers. Proceedings of the National Academy of Sciences. 106(37): 15583–15587. CrossRef
Redelmeier, Donald A., and Eldar Shafir. 1995. Medical decision making in situations that offer multiple alternatives. Journal of the American Medical Association. 273: 302–305.
Schrage, Michael. 2003. Daniel Kahneman: The Thought Leader Interview. Strategy + Business. (33). Available at https://www.strategy-business.com/article/03409?gko=7a903.
Sibony, Olivier, and Dan Lovallo. 2010. Strategic Decisions: When Can You Trust Your Gut? McKinsey Quarterly. Available at https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/strategic-decisions-when-can-you-trust-your-gut.
Silver, Nate. 2012. The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t. New York: Penguin Books.
Simon, Herbert A. 1998. The Sciences of the Artificial. Cambridge, MA: MIT Press.
Stanovich, Keith E. 1999. Who is rational? Studies in individual differences in reasoning. Mahwah NJ: Lawrence Erlbaum Associates.
Stanovich, Keith E. 2011. Rationality and the reflective mind. New York: Oxford University Press.
Stanovich, Keith E., and Richard F. West. 2000. Individual differences in reasoning: Implications for the Rationality Debate? Behavioral and Brain Sciences. 23: 645–726. CrossRef
Sterman, John D. 1989. Misperceptions of feedback in dynamic decision-making. Organizational Behavior and Human Decision Processes. 43(3): 301–335. CrossRef
Sterman, John D. 1989a. Modeling Managerial Behavior: Misperceptions of Feedback in a Dynamic Decision-making Experiment. Management Science. 35(3): 321–339. CrossRef
Sterman, John D. 2000. Business Dynamics: Systems Thinking and Modeling for a Complex World. Boston: Irwin/McGraw-Hill.
Sturm, Thomas. 2014. Intuition in Kahneman and Tversky’s Psychology of Rationality. In: Lisa M. Osbeck, and Barbara S. Held (Eds). Rational Intuition: Philosophical Roots, Scientific Investigations (pp. 257–286). Cambridge: Cambridge University Press. doi:10.1017/CBO9781139136419.015.
Taleb, Nassim Nicholas. 2007. The Black Swan: The Impact of the Highly Improbable. New York: Random House.
Tenner, Edward. 1996. Why Things Bite Back: Technology and the Revenge of Unintended Consequences. New York: Random House.
Tetlock, Philip E. 2005. Expert Political Judgment: How Good Is It? How Can We Know? Princeton, NJ: Princeton University Press.
Thaler, Richard H. 2015. Misbehaving: The Making of Behavioral Economics. New York: W. W. Norton & Company.
Thaler, Richard H., and Cass R. Sunstein. 2008. Nudge: Improving Decisions about Health, Wealth, and Happiness. New York: Penguin Books.
Tversky, Amos. 1977. Features of Similarity. Psychological Review. 84(4): 327–52. CrossRef
Tversky, Amos, and Daniel Kahneman. 1971. Belief in the Law of Small Numbers. Psychological Bulletin. 76(2): 105–110. CrossRef
Tversky, Amos, and Daniel Kahneman. 1974. Judgment under Uncertainty: Heuristics and Biases. Science. (185): 1124–31. CrossRef
Watts, Duncan J. 2011. Everything Is Obvious: Once You Know the Answer. How Common Sense Fails Us. New York: Random House.
Weintraub, E. Roy. 2008. Neoclassical Economics. In: David R. Henderson (Ed). The Concise Encyclopedia of Economics. Available at http://www.econlib.org/library/Enc1/NeoclassicalEconomics.html.