
Shaping the Future

Scientific uncertainty often becomes an excuse to ignore long-term problems, such as climate change. It doesn't have to be so.

Last year a high-profile panel of experts known as the Copenhagen Consensus ranked the world's most pressing environmental, health and social problems in a prioritized list. Assembled by the Danish Environmental Assessment Institute under its then director, Bjørn Lomborg, the panel used cost-benefit analysis to evaluate where a limited amount of money would do the most good. It concluded that the highest priority should go to immediate concerns with relatively well understood cures, such as control of malaria. Long-term challenges such as climate change, where the path forward and even the scope of the threat remain unclear, ranked lower.

Usually each of these problems is treated in isolation, as though humanity had the luxury of dealing with its problems one by one. The Copenhagen Consensus used state-of-the-art techniques to try to bring a broader perspective. In so doing, however, it revealed how the state of the art fails to grapple with a simple fact: the future is uncertain. Attempts to predict it have a checkered history--from declarations that humans would never fly, to the doom-and-gloom economic and environmental forecasts of the 1970s, to claims that the "New Economy" would do away with economic ups and downs. Not surprisingly, those who make decisions tend to stay focused on the next fiscal quarter, the next year, the next election. Feeling unsure of their compass, they hug the familiar shore.

This understandable response to an uncertain future means, however, that the nation's and the world's long-term threats often get ignored altogether or are even made worse by shortsighted decisions. In everyday life, responsible people look out for the long term despite the needs of the here and now: we do homework, we save for retirement, we take out insurance. The same principles should surely apply to society as a whole. But how can leaders weigh the present against the future? How can they avoid being paralyzed by scientific uncertainty?







In well-understood situations, science can reliably predict the implications of alternative policy choices. These predictions, combined with formal methods of decision analysis that use mathematical models and statistical methods to determine optimal courses of action, can specify the trade-offs that society must inevitably make. Corporate executives and elected officials may not always heed this advice, but they do so more often than a cynic might suppose. Analysis has done much to improve the quality of lawmaking, regulation and investment. National economic policy is one example. Concepts introduced by analysts in the 1930s and 1940s--unemployment rate, current-account deficit and gross national product--are now commonplace. For the most part, governments have learned to avoid the radical boom-and-bust cycles that were common in the 19th and early 20th centuries.

The trouble now is that the world faces a number of challenges, both long- and short-term, that are far from well understood: how to preserve the environment, ensure the future of Social Security, guard against terrorism and manage the effects of novel technology. These problems are simply too complex and contingent for scientists to make definitive predictions. In the presence of such deep uncertainty, the machinery of prediction and decision making seizes up. Traditional analytical approaches gravitate to the well-understood parts of the challenge and shy away from the rest. Hence, even sophisticated analyses such as the one by the Copenhagen Consensus have trouble assessing the value of near-term steps that might shape our long-term future.

The three of us--an economist, a physicist and a computer scientist all working in RAND's Pardee Center--have been fundamentally rethinking the role of analysis. We have constructed rigorous, systematic methods for dealing with deep uncertainty. The basic idea is to liberate ourselves from the need for precise prediction by using the computer to help frame strategies that work well over a very wide range of plausible futures. Rather than seeking to eliminate uncertainty, we highlight it and then find ways to manage it. Already companies such as Volvo have used our techniques to plan corporate strategy.

The methods offer a way to break the ideological logjam that too often arises in Washington, D.C. By allowing decision makers to explore a rich variety of what-if scenarios, the new approach reframes the age-old but unanswerable question--What will the long-term future bring?--as one that better reflects our real concern: What actions today will best shape the future to our liking?

The Perils of Prediction
Striking a balance between the economy and the environment is one leading example of the difficulty in using science to inform long-term decisions. In his 2002 book The Future of Life, Edward O. Wilson described the debate between economists and environmental scientists [see "The Bottleneck," by Edward O. Wilson; Scientific American, February 2002]. The former group frequently argues that present policies will guide society successfully through the coming century. Technological innovation will reduce pollution and improve energy efficiency, and changes in commodity prices will ensure timely switching from scarce to more plentiful resources. The latter group argues that society's present course will prove unsustainable. By the time the signs of environmental stress become unambiguous, society may have passed the point of easy recovery. Better to apply the brakes now rather than jam them on later when it may be too late.

No matter how compelling their arguments, both sides' detailed predictions are surely wrong. Decisions made today will affect the world 50 to 100 years hence, but no one can credibly predict what life will be like then, regardless of the quality of the science. Interested parties view the same incomplete data, apply different values and assumptions, and arrive at different conclusions. The result can be static and acrimonious debate: "Tree hugger!" "Eco-criminal!"

The (in)famous report The Limits to Growth from the early 1970s is the perfect example of how the standard tools of analysis often fail to mediate such debates. A group of scientists and opinion leaders called the Club of Rome predicted that the world would soon exhaust its natural resources unless it took immediate action to slow their use. This conclusion flowed from a then state-of-the-art computer model of the dynamics of resource use. The report met with great skepticism. Since the days of Thomas Malthus, impending resource shortages have melted away as new technologies have made production more efficient and provided alternatives to dwindling resources.

But the model was not wrong; it was just used incorrectly. Any computer model is, by definition, a simplified mirror of the real world, its predictions vulnerable to some neglected factor. The model developed for The Limits to Growth revealed some important aspects of the challenges faced by society. In presenting the analysis as a forecast, the authors stretched the model beyond its limits and reduced the credibility of their entire research program.

Grappling with the Future
Conscious of this failing, analysts have turned to techniques such as scenario planning that involve exploring different possible futures rather than gambling on a single prediction. As an example, in 1995 the Global Scenario Group, convened by the Stockholm Environment Institute, developed three scenario families. The "Conventional Worlds" family described a future in which technological innovation, driven by markets and lightly steered by government policy, produces economic growth without undermining environmental quality. In the "Barbarization" set of scenarios, the same factors--innovation, markets and policy--prove inadequate to the challenge, leading to social collapse and the spread of violence and misery. The third set, "Great Transitions," portrayed the widespread adoption of eco-friendly social values. The Global Scenario Group argued that the Conventional Worlds scenarios are plausible but not guaranteed; to avoid the risk of Barbarization, society should follow the Great Transitions paths.

Although scenario analysis avoids making definite predictions, it has its own shortcomings. It addresses no more than a handful of the many plausible futures, so skeptics can always question the choice of the highlighted few. More fundamentally, scenario families do not translate easily into plans for action. How should decision makers use the scenarios? Should they focus on the most threatening case or the one regarded by experts as most likely? Each approach has faults.

The European Union often favors the "precautionary principle"--in essence, basing policy on the most hazardous plausible scenarios. The Kyoto treaty on climate change, for example, requires reductions of greenhouse gas emissions even though their long-term effects are far from understood. On one level, the precautionary principle makes perfect sense. It is better to be safe than sorry. The long-term future will always be cloudy; some dangers may become certain only when it is too late to prevent them. Yet the principle is an imperfect guide. The future presents many potential harms. Should we worry about them all equally? Few choices are risk-free, and the precautionary principle can lead to contradictory conclusions. For instance, both the harm from greenhouse gas emissions and the cost of reducing them are uncertain. To safeguard the environment, we should reduce the emissions now. To safeguard the economy, we should postpone reductions. So what do we do?

In contrast, many in the U.S. favor cost-benefit analysis, which balances the benefits of eliminating each potential harm against the costs of doing so. When outcomes are uncertain, cost-benefit analysis weights them with odds. We should be willing to pay up to $500 to eliminate a $1,000 harm whose chance of occurring is 50-50. Cost-benefit analysis provides unambiguous answers in many instances. Lead in gasoline enters the environment and affects the developing brains of children. Even though scientists do not know precisely how many children are affected, the benefit of removing lead from gasoline far exceeds the cost. But the long-term future rarely offers such clear choices. Often both the costs and benefits are sufficiently unclear that small disagreements over assigning odds can make a huge difference in the recommended policy.
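The arithmetic behind that weighting is simple expected value, and it shows why small disagreements over the odds can flip a recommendation. A minimal sketch, with purely illustrative numbers:

```python
# Odds-weighted cost-benefit analysis: a minimal sketch with
# illustrative numbers (not from the article).

def expected_benefit(harm: float, probability: float) -> float:
    """Expected value of eliminating a harm with the given odds."""
    return harm * probability

HARM = 1000.0         # hypothetical size of the harm
COST_OF_FIX = 450.0   # hypothetical cost of eliminating it

for p in (0.40, 0.50, 0.60):  # small disagreements over the odds...
    benefit = expected_benefit(HARM, p)
    verdict = "act" if benefit > COST_OF_FIX else "wait"
    print(f"P(harm)={p:.2f}: benefit ${benefit:.0f} vs cost ${COST_OF_FIX:.0f} -> {verdict}")
```

Shifting the assumed odds from 40 percent to 50 percent reverses the verdict, even though nothing about the underlying science has changed.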

Making Policies Robust
Traditional tools such as cost-benefit analysis rely on a "predict then act" paradigm. They require a prediction of the future before they can determine the policy that will work best under the expected circumstances. Because these analyses demand that everyone agree on the models and assumptions, they cannot resolve many of the most crucial debates that our society faces. They force people to select one among many plausible, competing views of the future. Whichever choice emerges is vulnerable to blunders and surprises.

Our approach is to look not for optimal strategies but for robust ones. A robust strategy performs well when compared with the alternatives across a wide range of plausible futures. It need not be the optimal strategy in any future; it will, however, yield satisfactory outcomes in both easy-to-envision futures and hard-to-anticipate contingencies.

This approach replicates the way people often reason about complicated and uncertain decisions in everyday life. The late Herbert A. Simon, a cognitive scientist and Nobel laureate who pioneered in the 1950s the study of how people make real-world decisions, observed that they seldom optimize. Rather they seek strategies that will work well enough, that include hedges against various potential outcomes and that are adaptive. Tomorrow will bring information unavailable today; therefore, people plan on revising their plans.

Incorporating robustness and adaptability into formal decision analysis used to be impossible because of the complexity and vast number of required calculations. Technology has overcome these hurdles. Confronting deep uncertainty requires more than raw computational power, though. The computers have to be used differently. Traditional predict-then-act methods treat the computer as a glorified calculator. Analysts select the model and specify the assumptions; the computer then calculates the optimal strategy implied by these inputs.

In contrast, for robust decision making the computer is integral to the reasoning process. It stress-tests candidate strategies, searching for plausible scenarios that could defeat them. Robust decision making interactively combines the complementary abilities of humans and machines. People excel at seeking patterns, drawing inferences and framing new questions. But they can fail to recognize inconvenient facts and can lose track of long chains of cause and effect. The machine ensures that all claims about strategies are consistent with the data and can reveal scenarios that challenge people's cherished assumptions. No strategy is completely immune to uncertainty, but the computer helps decision makers exploit whatever information they do have to make choices that can endure a wide range of trends and surprises.
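In outline, that stress test is just a search loop. The sketch below samples plausible futures and collects those in which a candidate strategy performs poorly; the model, parameter ranges and threshold are illustrative stand-ins, not the actual exploratory-modeling software:

```python
# Toy version of the stress-testing loop: sample plausible futures and
# collect those in which the candidate strategy performs poorly. The
# model, parameter ranges and threshold are invented for illustration.

import random

def simulate(strategy, scenario):
    # Placeholder model: performance suffers when the scenario's growth
    # rate outruns decoupling plus the strategy's extra decoupling effort.
    growth, decoupling = scenario
    shortfall = max(0.0, growth - (decoupling + strategy["extra_decoupling"]))
    return 1.0 - shortfall

def stress_test(strategy, n=10_000, threshold=0.99):
    """Search sampled futures for scenarios that defeat the strategy."""
    failures = []
    for _ in range(n):
        scenario = (random.uniform(0.00, 0.04),   # annual economic growth rate
                    random.uniform(-0.01, 0.04))  # business-as-usual decoupling rate
        if simulate(strategy, scenario) < threshold:
            failures.append(scenario)
    return failures

candidate = {"extra_decoupling": 0.005}
print(f"{len(stress_test(candidate))} defeating scenarios found")
```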

Sustainable Development
To see how this approach works in practice, return to the dilemma of sustainable development. The first step is to figure out what exactly the computer should calculate. Robust decision making requires the machine to generate multiple paths into the future, spanning the full diversity of those that might occur. We may not know the exact future that will transpire, but any strategy that performs well across a sufficiently diverse set of computer-generated scenarios is likely to meet the challenges presented by what actually comes to pass.

In our analysis of sustainable development, we used a revised version of the Wonderland model originally created by economist Warren C. Sanderson of Stony Brook University and the International Institute for Applied Systems Analysis in Laxenburg, Austria. The Wonderland simulation incorporates, in a very simple manner, scientific understanding of the dynamics of the global economy, demographics and environment. Growing population and wealth will increase pollution, whereas technological innovation may reduce it. The pollution, in turn, hurts the economy when it taxes the environment beyond its absorptive capacity.

Our version of Wonderland is similar to--but with only 41 uncertain parameters, much simpler than--the simulation used for The Limits to Growth. This simplicity can be a virtue: experience demonstrates that additional detail alone does not make predictions more accurate if the model's structure or inputs remain uncertain. For robust planning, models should be used not to predict but to produce a diversity of scenarios, all consistent with the knowledge we do possess.

Running models within special "exploratory modeling" software, analysts can test various strategies and see how they perform. The human user suggests a strategy; for each scenario in the ensemble, the computer compares this approach to the optimal strategy (the one that would have been chosen with perfect predictive foresight) according to such measures as income or life expectancy. A systematic process reveals futures in which the proposed strategies could perform poorly. It also highlights ways each strategy could be adjusted to handle those stressful futures better.
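One concrete way to score that comparison is a regret measure: in each scenario, the candidate's shortfall relative to the strategy that would have been optimal with perfect foresight. A minimal sketch, with toy strategies and payoffs standing in for the real model:

```python
# Regret of a candidate strategy: its worst shortfall, across scenarios,
# relative to the strategy that is optimal in each scenario. Strategies,
# scenarios and the performance function are toy stand-ins.

def regret(candidate, strategies, scenarios, performance):
    """Return the candidate's worst shortfall and where it occurs."""
    worst, worst_scenario = 0.0, None
    for s in scenarios:
        best = max(performance(strat, s) for strat in strategies)
        shortfall = best - performance(candidate, s)
        if shortfall > worst:
            worst, worst_scenario = shortfall, s
    return worst, worst_scenario

def performance(strategy, scenario):
    growth, decoupling = scenario          # (growth rate, decoupling rate)
    if strategy == "crash_program":
        decoupling += 0.02                 # faster decoupling, but...
        growth -= 0.005                    # ...at a cost to growth
    return growth - max(0.0, growth - decoupling)  # toy welfare score

strategies = ["stay_the_course", "crash_program"]
scenarios = [(0.02, 0.03), (0.02, 0.01)]
for cand in strategies:
    print(cand, regret(cand, strategies, scenarios, performance))
```

In this toy ensemble each strategy regrets a different future, which is exactly the kind of trade-off the method surfaces.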

In the sustainability example, we run the model through the year 2100. Two key uncertainties are the average global economic growth rate during this period and the business-as-usual "decoupling rate" (that is, the reduction in pollution per unit of economic output that would occur in the absence of new environmental policies). The decoupling rate will be positive if existing regulations, productivity increases and the shift to a service economy lessen pollution without lessening growth. It can go negative if growth requires an increase in pollution.
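A few lines of code can caricature these dynamics. The sketch below is not the Wonderland model, just a toy with the same qualitative logic: pollution rises with growth and falls with decoupling, and pollution beyond the environment's absorptive capacity drags down the economy.

```python
# A deliberately tiny stand-in for a Wonderland-style run (not the
# actual 41-parameter model): pollution rises with growth and falls
# with decoupling; once it exceeds the environment's absorptive
# capacity, the excess taxes economic growth.

def run_scenario(growth, decoupling, capacity=2.0, years=100):
    output, pollution = 1.0, 1.0
    for _ in range(years):
        overload = max(0.0, pollution - capacity)  # stress beyond capacity
        output *= 1.0 + growth - 0.05 * overload   # damage drags on growth
        pollution *= 1.0 + growth - decoupling     # net pollution trend
    return round(output, 2), round(pollution, 2)

print(run_scenario(growth=0.02, decoupling=0.03))  # decoupling outpaces growth
print(run_scenario(growth=0.02, decoupling=0.01))  # growth outpaces decoupling
```

When the two rates are nearly equal, tiny changes flip the outcome, which is the knife-edge described next.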

Depending on the values of these quantities, different strategies perform differently. One strategy, "Stay the Course," simply continues present policy. It performs well in futures where the decoupling rate exceeds the growth rate, but if the reverse is true, pollution eventually becomes so serious that policymakers are forced to abandon the strategy and try to reverse the damage. During the 20th century, the growth and decoupling rates were nearly equal. If the same proves to be true for the 21st, the world will totter on a knife-edge between success and failure.

The more aggressive "Crash Program" pours money into technological development and environmental regulations that speed decoupling beyond its business-as-usual rate. Although this strategy eliminates the risk of catastrophe, it can impose unnecessarily high costs, inhibiting economic growth.

Becoming Flexible
Both these strategies involve policies that are fixed in advance. An adaptive strategy bests them both. Inspired by the complementary strengths and weaknesses of "Stay the Course" and "Crash Program," we considered a flexible alternative that imposes rigorous emissions limits but relaxes them if they cost too much. Such a strategy can be robust. If the technological optimists are right (the decoupling rate turns out to be high), the cost threshold is never breached and industry meets the aggressive environmental goals. If technological pessimists prove correct (the decoupling rate is low), then tight pollution restrictions will exceed the agreed-on cost limits, in which case the strategy gives industry more time to meet the goals.
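Written out, the safety valve is a one-line conditional. In the sketch below, the tightening schedule, cost ceiling and relaxation factor are invented numbers, not the article's actual policy parameters:

```python
# Sketch of an adaptive "safety valve" rule: tighten the emissions limit
# each period, but relax it whenever compliance cost breaches an
# agreed-on ceiling. All numbers are illustrative assumptions.

def next_limit(limit, compliance_cost, ceiling, tighten=0.95, relax=1.25):
    if compliance_cost(limit) > ceiling:
        return limit * relax      # too expensive: give industry more time
    return limit * tighten        # affordable: keep tightening

cost = lambda limit: 1.0 / limit  # toy cost curve: tighter limits cost more
limit, trajectory = 1.0, []
for _ in range(12):
    limit = next_limit(limit, cost, ceiling=1.5)
    trajectory.append(round(limit, 3))
print(trajectory)                 # tightens until cost bites, then relaxes
```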




Such strategies can help cut through contentious debates by providing plans of action that all can agree will play out no matter whose view of the future proves correct. Our adaptive strategy is similar to the "safety valve" strategies that some economists have proposed as alternatives to the immutable emissions targets in the Kyoto treaty. Our new analytical machinery enables decision makers both to design such strategies and to demonstrate their effectiveness to the various interest groups involved.

Of course, even adaptive strategies have their Achilles' heel. In the case of the safety valve, the combination of environmental goals and cost constraints that works best in most futures performs poorly when technological innovation proves to be extremely expensive. To get around this problem, the user can repeat the analysis to come up with a variety of robust strategies, each of which breaks down under different conditions. One strategy may work well when another fails, and vice versa, so the choice between them involves an unavoidable trade-off. The computer calculates how likely each set of circumstances would have to be to justify picking one strategy over the other. Our method thus reduces a complex problem to a small number of simple choices. Decision makers make the final call. Instead of fruitlessly debating models and other assumptions, they can focus on the fundamental trade-offs, fully aware of the surprises that the future may bring.
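That final computation, how likely the adverse conditions must be before the hedge is worth its cost, reduces to a breakeven-odds calculation. A sketch with toy payoffs:

```python
# Breakeven odds between two strategies: the probability of the adverse
# case at which their expected payoffs are equal. Payoffs are toy
# numbers, purely for illustration.

def breakeven_probability(a_normal, a_adverse, b_normal, b_adverse):
    """Solve (1-p)*a_normal + p*a_adverse == (1-p)*b_normal + p*b_adverse."""
    advantage_normal = a_normal - b_normal     # A's edge in normal times
    advantage_adverse = b_adverse - a_adverse  # B's edge in adversity
    return advantage_normal / (advantage_normal + advantage_adverse)

# Strategy A wins in normal times; B is the hedge against adversity.
p_star = breakeven_probability(a_normal=10, a_adverse=2, b_normal=8, b_adverse=7)
print(f"prefer the hedge if P(adverse) > {p_star:.2f}")  # -> 0.29
```

Decision makers need not agree on the exact odds; they need only decide which side of the breakeven point they believe the world is on.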

Clearly, this approach is applicable not only to sustainable development but also to a wide range of other challenges: bringing new products to market, managing the nation's entitlement programs, even defeating terrorism. Science and technology cannot change the future's fundamental unpredictability. Instead they offer an answer to a different question: Which actions today can best usher in a desirable future? Humans and computers search for plausible futures in which a proposed strategy could fail and then identify means to avoid these potential adverse outcomes.

Past failures of prediction should humble anyone who claims to see a clear course into the decades ahead. Paradoxically, though, our greatest possible influence in shaping the future may extend precisely over those timescales where our gaze becomes most dim. We often have little effect on a predictable, near-term future subject to well-understood forces. Where the future is ill defined, unpredictable and hardest to see, our actions today may well have their most profound effects. New tools can help us chart the right course.

More to Explore

Exploratory Modeling for Policy Analysis. Steven C. Bankes in Operations Research, Vol. 41, No. 3, pages 435-449; May-June 1993.

Assumption-Based Planning. James A. Dewar. Cambridge University Press, 2002.

High-Performance Government in an Uncertain World. Robert J. Lempert and Steven W. Popper in High Performance Government: Structure, Leadership, Incentives. Edited by Robert Klitgaard and Paul C. Light. RAND-MG-256; 2005.

[Box: Balancing the Economy and the Environment]

This article was originally published with the title "Shaping the Future" in Scientific American Vol. 292 No. 4 (April 2005).