Chapter 8 described modeling and simulation tools that project decision consequences with rigor, consistency, and scale. Unfortunately, these tools are too narrowly focused to bend LUC significantly on their own. This chapter describes a broader approach that effectively eliminates this obstacle. Sections 9.1 and 9.2 present the motivation for our method of “test driving” critical decisions. Sections 9.3, 9.4, 9.5, and 9.6 describe the three core components of the test drive method and how they work together leading up to the point of decision. Section 9.7 explains how the method is easily adapted to help businesses execute decisions once they commit to them. Part III illustrates the test drive method in practice, applying it to four types of critical decisions.
As argued in Chap. 8, procedural errors in the decision-making process can be reduced through process improvement methods. Flawed intuitions about probabilities, other magnitudes, and dynamic flows can be replaced by appropriate System 2 measurement, estimation, and calculation methods. And economic constraints can be relaxed by deploying technology to improve decision-making productivity.
Simon [10] observes: “because the consequences of many actions extend well into the future, correct prediction is essential for objectively rational choice.”
This is a contrarian (if not heretical) view. Many simulationists subscribe to Abraham Maslow’s theory: “if all you have is a hammer, everything looks like a nail.” They reject other techniques and attempt to apply their tool of choice (and fluency) to every decision problem. Most techniques can be adapted—or more typically, contorted—beyond their intended purpose. But this raises the question of whether that tool is the easiest or the best approach to analyzing a given problem. We reject this dogmatic attitude and instead embrace a “polytheist” stance, treating modeling and simulation techniques as complementary rather than as competitive and exclusionary.
Economists classify goods such as cars, trucks, and houses as “considered” purchases. This means people are assumed to make these decisions as rational agents who calculate purchase utilities, rather than acting on impulse as they do when buying clothes, food, and other lower-cost goods based on immediate wants and needs.
Schoemaker [7].
This terminology is specific to scenario planners. We will offer slightly different definitions for forces and trends to describe decision test drive examples in Part III of this book.
In fact, many scenario planners (and business war gamers) adamantly refuse to use modeling and simulation tools, arguing that they are “too constraining.” They do accept software tools such as word processors and graphics packages to store, manage, and display scenarios in digital formats.
Scenario planning consultants may help strategists to explore scenarios and identify how the organization’s strategic competencies and weaknesses relate to driving forces and uncertainties. However, these are preparatory tasks that don’t contribute to formulating candidate decisions, much less actively assessing their likely impact on the organization’s positioning within and across scenarios.
For example, suppose a small chain of hardware stores needs to define a strategy. They would want to consider Home Depot and Lowe’s, the dominant “big box” home improvement companies, as well as major chains such as True Value and Ace Hardware. Thousands of independent hardware stores and home centers also operate in the US market. Modeling all of these businesses individually seems neither necessary nor practical. A better approach is to model the major actors explicitly by name, then define a population of other competitors and populate it using statistical data. Chapter 11 adopts this approach for modeling decisions about business-to-business (B2B) marketplaces.
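This hybrid modeling pattern can be sketched in a few lines of code. The sketch below is purely illustrative, assuming hypothetical actor names and made-up revenue statistics rather than any real market data: a handful of major competitors are modeled explicitly, while the long tail of independents is generated as a statistical population from summary parameters.

```python
import random

# Hypothetical sketch: major competitors modeled explicitly by name,
# thousands of independents generated from summary statistics.
# All names and figures are illustrative, not real market data.

major_actors = [
    {"name": "BigBoxA", "stores": 2300, "avg_revenue_m": 4.1},
    {"name": "BigBoxB", "stores": 1700, "avg_revenue_m": 3.8},
]

def make_independents(count, mean_revenue_m=1.2, sd=0.4, seed=42):
    """Generate a synthetic population of minor competitors from
    summary statistics (e.g., census or trade-association data)."""
    rng = random.Random(seed)  # seeded for reproducible runs
    return [
        {"name": f"independent_{i}",
         "stores": 1,
         "avg_revenue_m": max(0.1, rng.gauss(mean_revenue_m, sd))}
        for i in range(count)
    ]

market = major_actors + make_independents(5000)
total_revenue = sum(a["stores"] * a["avg_revenue_m"] for a in market)
```

Aggregate metrics such as total market revenue can then be computed over the combined list without simulating each independent store as a distinct intentional actor.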
For example, customer satisfaction might be estimated on a numeric scale, based on a calculation involving number and size of purchases, number of visits within a time period, social media comments, and so on. Government policy decisions would track performance in terms of cost, benefit, and sentiment metrics such as public acceptance or degree of compliance, rather than profit and loss.
Recall that traditional scenario planners use different terminology, treating trends and uncertainties as disjoint subsets of forces. We use the term “uncertainty” to describe any facet of a situation whose future state is unknown (i.e., matching popular usage of the term). We then treat “trends” as slowly varying changes in metrics of interest and “forces” as broader influences that can change abruptly or non-linearly.
Recall from Chap. 8 that entities such as cities, along with situational trends and forces, are dynamic but not intentional. That said, they may reflect the collective behaviors of intentional entities. For example, increasing prices generate social pressures for increased wages from employees (and unions), intentional actors who respond to the perceived shrinking value of their incomes.
Critical business decisions must often model internal stakeholders explicitly as distinct parties of interest. This is essential, for example, for test driving strategies to enable organizational change (cf. Chap. 13). For simplicity, Fig. 9.5 omits internal stakeholders as they tend to be decision-specific.
Related metrics can be combined to form a composite “dimension” using vectors (cf. Sect. 13.3). For example, the value for a dimension might be computed as (w1 ∗ P1 + w2 ∗ P2 + w3 ∗ P3)/3, where P1 through P3 are metrics measured on a scale from 1 to 100 (e.g., profitability or customer satisfaction), and w1 through w3 represent the relative importance, or weights, of those metrics to decision-makers.
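The composite calculation can be written directly as code. This minimal sketch implements the form (w1 ∗ P1 + w2 ∗ P2 + w3 ∗ P3)/3 given in the text; the specific metrics and weight values are illustrative assumptions, not prescriptions.

```python
def composite_dimension(metrics, weights):
    """Combine related metrics (each on a 1-100 scale) into one
    composite "dimension" score, following the form
    (w1*P1 + w2*P2 + w3*P3) / n described in the text."""
    assert len(metrics) == len(weights)
    weighted_sum = sum(w * p for w, p in zip(weights, metrics))
    return weighted_sum / len(metrics)

# Illustrative values only: P1 = profitability, P2 = customer
# satisfaction, P3 = growth, with weights chosen by decision-makers.
score = composite_dimension([80, 65, 70], [0.5, 0.3, 0.2])
```

Note that the division by the number of metrics follows the formula as stated; other schemes instead divide by the sum of the weights, which yields a conventional weighted average on the original 1 to 100 scale.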
This approach borrows from classical statistical quality control theory. Deming argued that if you focus on an overly precise target, you will tend to overreact to minor variances from that goal, including natural statistical fluctuations, and that overreaction itself tends to move your process out of the target control range.
This pragmatic heuristic is far from unique. Epidemiologists share the same approach to predicting the severity of flu pandemics in a given year: “It’s stupid to predict [which strains of influenza will dominate] based on three data points (… the flu pandemics in 1918, 1957, and 1968). All you can do is plan for different scenarios.” Silver [ 9], p. 229.
This is a combinatorially complete comparison of the decision options and possible futures that one manages to define in her satisficing process. It isn’t exhaustive in the sense of Simon’s idealized rational actor (cf. Sect. 5.2).
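A combinatorially complete comparison of this kind can be sketched as a Cartesian product of options and futures. The payoff numbers below are hypothetical stand-ins for simulation-derived projections, and the maximin selection rule is just one plausible way to pick among options; the text does not mandate it.

```python
from itertools import product

# Every candidate option is scored against every defined future.
options = ["expand", "hold", "divest"]
futures = ["boom", "stagnation", "recession"]

# Illustrative payoffs; in practice these come from simulation runs.
payoffs = {
    ("expand", "boom"): 90, ("expand", "stagnation"): 40,
    ("expand", "recession"): 10,
    ("hold", "boom"): 60, ("hold", "stagnation"): 55,
    ("hold", "recession"): 45,
    ("divest", "boom"): 30, ("divest", "stagnation"): 50,
    ("divest", "recession"): 70,
}

# Combinatorially complete: all option x future pairs are evaluated.
comparison = {pair: payoffs[pair] for pair in product(options, futures)}

# One simple selection rule: best worst case across futures (maximin).
best = max(options, key=lambda o: min(payoffs[(o, f)] for f in futures))
```

With these illustrative numbers, the comparison covers all nine pairs, and the maximin rule favors the option whose worst-case outcome is least bad.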
Of course, many executives and managers prefer, if not insist on, simple decisions. For example, one of our clients wanted us to draw a line across a bar chart that displayed the total projected risk reduced by a set of candidate investments. He wanted to fund programs that reduced risk more than that threshold and walk away from programs that fell short. Unfortunately, risk management is not that straightforward: total risk reduced must be balanced against ROI (how much “bang for the buck” was generated), how quickly risk was reduced, political concerns (whose oxen would be gored, and how seriously), and how program assets contributed to other organizational missions.
Such assignments are subjective and dependent on the specific decision at hand. Our method is agnostic as to how weights are determined. The two leading approaches to assigning weights are utility theory and the analytic hierarchy process (AHP). See Keeney [4] or Keeney et al. [5] for utility theory, and Brunelli [2] for AHP.
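To make the AHP route concrete, here is a minimal sketch of deriving weights from a reciprocal pairwise-comparison matrix. It uses the row geometric mean, a standard approximation to AHP's principal-eigenvector method; the judgment values in the matrix are invented for illustration.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights via row geometric means of a
    reciprocal pairwise-comparison matrix, normalized to sum to 1."""
    n = len(pairwise)
    geo_means = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Illustrative judgments on Saaty's 1-9 scale: profitability judged
# 3x as important as satisfaction and 5x as important as growth.
matrix = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]
weights = ahp_weights(matrix)
```

The resulting weights preserve the ordering of the pairwise judgments (profitability highest, growth lowest) and can feed directly into a composite-dimension calculation.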
The first modern EWS were ground-based radar systems developed early in the Cold War in the 1950s to detect launches of enemy missiles and give us time to make decisions about deploying our strategic bomber fleet, and launching our own missiles before they could be destroyed on the ground. More recent EWS enhance public health and public safety (e.g., by detecting threats such as viral pandemics, radioactive substances, or bioweapons and raising alarms).
This method amounts to counteracting the cognitive bias of conservatism, or failure to adjust for changes (cf. Sect. 4.3).
Abandonment is the natural conclusion to a decision’s lifecycle. Given continual change, decisions are certain to become obsolete; the only question is how long particular decisions will remain effective.
Our software framework allows “what-if” scenarios to be copied in their entirety, and then adapted selectively. This capability promotes rapid generation of new scenarios based on different assumptions about future situational dynamics. It also generates reports that identify differences across scenarios, facilitating analysis of which differences (in assumptions) produce particular differences in outcomes. This expands the decision-making “search space” and allows it to be explored more effectively (cf. Sect. 5.3).
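The copy-then-adapt workflow can be sketched generically. This is not the book's actual software framework; it is a minimal illustration, with invented assumption names, of cloning a baseline scenario, overriding selected assumptions, and reporting the differences so outcome changes can be traced back to assumption changes.

```python
from copy import deepcopy

# Hypothetical baseline assumptions for a "what-if" scenario.
baseline = {
    "demand_growth": 0.03,
    "competitor_entry": False,
    "interest_rate": 0.05,
}

def derive_scenario(base, overrides):
    """Copy a scenario in its entirety, then adapt it selectively."""
    scenario = deepcopy(base)
    scenario.update(overrides)
    return scenario

def diff_scenarios(a, b):
    """Report which assumptions differ between two scenarios."""
    return {k: (a[k], b[k]) for k in a if a[k] != b[k]}

downturn = derive_scenario(baseline, {"demand_growth": -0.01,
                                      "competitor_entry": True})
changed = diff_scenarios(baseline, downturn)
```

The diff output pairs each changed assumption with its before/after values, which is the raw material for the cross-scenario difference reports described above.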
Adler [1].
Adler, Richard M. 2008. Knowledge Engines for Critical Decision Support. In Miltiadis D. Lytras, Meir Russ, Ronald Maier, and Ambjorn Naeve (Eds.), Knowledge Management Strategies: A Handbook of Applied Technologies (pp. 143–169). New York: Idea Publishing Group.
Brunelli, Matteo. 2015. Introduction to the Analytic Hierarchy Process. Available at http://core.ac.uk/download/pdf/80714029.pdf. Accessed 5 Jul 2019.
Day, George S., and Paul J. H. Schoemaker. 2006. Peripheral Vision: Detecting the Weak Signals that Will Make or Break Your Company. Boston, MA: Harvard Business School Press.
Keeney, Ralph L. 1992. Value-Focused Thinking: A Path to Creative Decisionmaking. Cambridge, MA: Harvard University Press.
Keeney, Ralph L., Howard Raiffa, and Richard F. Meyer. 1976. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. New York: John Wiley & Sons.
Merton, Robert K. 1936. The Unanticipated Consequences of Purposive Social Action. American Sociological Review 1(6): 894–904.
Schoemaker, Paul J. H. 2002. Profiting from Uncertainty: Strategies for Succeeding No Matter What the Future Brings. New York: Free Press.
Schwartz, Peter. 1991. The Art of the Long View: Planning for the Future in an Uncertain World. New York: Doubleday Currency.
Silver, Nate. 2012. The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t. New York: Penguin Books.
Simon, Herbert A. 1998. The Sciences of the Artificial. Cambridge, MA: MIT Press.
Taleb, Nassim Nicholas. 2007. The Black Swan: The Impact of the Highly Improbable. New York: Random House.
van der Heijden, Kees. 1996. Scenarios: The Art of Strategic Conversation. New York: John Wiley and Sons.
Test Drive Your Critical Decisions
Richard M. Adler
Chapter 9