5.3.1 Five Conceptual Proxies for Robustness
Like all words, “robustness” has many connotations, and its meanings overlap with the meanings of other words.
We discuss five concepts that overlap significantly with the concept of robustness against uncertainty and that are useful in the qualitative assessment of decisions under uncertainty. Each of these five concepts emphasizes a different aspect of robustness, though they also overlap with one another. The
five proxies for robustness are resilience, redundancy, flexibility, adaptiveness, and comprehensiveness. A decision, policy, action, or system is highly robust against uncertainty if it is strong in some or all of these attributes; it has low robustness if it is weak in all of them. We will subsequently use the term “system” quite broadly, to refer to a physical or organizational system, a policy for formulating or implementing decisions, or a procedure for political foresight or clinical diagnosis, etc.
Resilience of a system is the attribute of rapid recovery of critical functions. Adverse surprise is likely when facing deep uncertainty. A system is robust against uncertainty if it can rapidly recover from adverse surprise and achieve critical outcomes.
Redundancy of a system is the attribute of providing multiple alternative solutions. Robustness to surprise can be achieved by having alternative responses available.
Flexibility (sometimes called agility) of a system is the ability to rapidly modify tools and methods. Flexibility or agility, as opposed to stodginess, is often useful in recovering from surprise. A physical or organizational system, a policy, or a decision procedure is robust to surprise if it can be modified in real time.
Adaptiveness of a system is the ability to adjust goals and methods in the mid- to long-term. A system is robust if it can be adjusted as information and understanding change. Managing Knightian uncertainty is rarely a once-through procedure. We often must re-evaluate and revise assessments and decisions. The emphasis is on the longer time range, as distinct from on-the-spot flexibility.
Comprehensiveness of a system is its interdisciplinary, system-wide coherence. A system is robust if it integrates relevant considerations from technology, organizational structure and capabilities, cultural attitudes and beliefs, historical context, economic mechanisms and forces, and other factors. A robust system addresses the multi-faceted nature of the problem.
5.3.2 Simple Qualitative Example: Nuclear Weapon Safety
Nuclear weapons play a role in the national security strategy of several countries today. As with all munitions, the designer must assure effectiveness (devastating explosion during wartime use) together with safety (no explosion during storage, transport, or abnormal accident conditions). The “always/never” dilemma is “the need for a nuclear weapon to be safe and the need for it to be reliable …. A safety mechanism that made a bomb less likely to explode during an accident could also, during wartime, render it more likely to be a dud. … Ideally, a nuclear weapon would always detonate when it was supposed to—and never detonate when it wasn’t supposed to” (Schlosser
2013, pp. 173–174). There are many quantitative methods for assessing effectiveness, safety, and the balance between them, but there remains a great need for human judgment based on experience. We briefly illustrate the relevance of the five qualitative proxies for assessing and achieving robustness to uncertainty.
Nuclear weapon safety is assured, in part, by the requirement for numerous independent actions to arm and detonate the weapon. Safety pins must be removed, secret codes must be entered, multiple activation keys controlled by different individuals must be inserted and turned, etc. This redundancy of safety features is a powerful concept for assuring the safety of weapon systems. On the other hand, the wartime detonation of the weapon is prevented if any of these numerous redundant safety features gets stuck and fails to activate the device. Redundancy for safety is a primary source of the “always/never” dilemma.
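The dilemma created by redundant safety features can be made concrete with a toy quantitative model. The independent-failure probabilities below are hypothetical assumptions for illustration only, not figures from the text: accidental detonation requires every interlock to release spuriously, while a wartime dud occurs if any single interlock sticks closed.

```python
def safety_tradeoff(n, p_release=1e-3, p_stuck=1e-3):
    """Always/never trade-off for n independent interlocks in series.

    Toy model with assumed, independent failure probabilities:
    - accidental detonation needs ALL n interlocks to release spuriously;
    - a wartime dud occurs if ANY interlock sticks when commanded open.
    """
    p_accident = p_release ** n          # "never" risk: shrinks geometrically in n
    p_dud = 1.0 - (1.0 - p_stuck) ** n   # "always" risk: grows roughly linearly in n
    return p_accident, p_dud

for n in (1, 2, 3):
    acc, dud = safety_tradeoff(n)
    print(f"n={n}: P(accident)={acc:.0e}, P(dud)={dud:.1e}")
```

Each added interlock reduces the accidental-detonation risk by orders of magnitude but steadily raises the chance of a dud, which is the always/never dilemma in miniature.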
Resilience of the weapon system is the ability to recover critical functions—detonation during wartime in the present example—when failure occurs. For example, resilience could entail the ability to override safety features that fail in the locked state in certain well-defined circumstances. This override capability may be based on a voting system of redundant safety features, or on human intervention, or on other functions. The robustness to uncertainty is augmented by redundant safety features together with a resilient ability to countervail those safety features in well-defined situations where safety features have failed in the locked mode.
Sometimes, the critical function of a system is not a physical act, like detonation, but rather the act of deciding. A command and control hierarchy, like those controlling nuclear weapon use, needs to respond effectively to adverse surprise. The decision to initiate the use of nuclear weapons in democratic countries is usually vested exclusively in the highest civilian executive authority. A concern here is that a surprise “decapitation” strike against that civilian authority could leave the country without a nuclear-response capability. The decisionmaking hierarchy needs flexibility against such a surprise: the ability to exercise the critical function of deciding to use (or not to use) nuclear weapons after a decapitating first strike by an adversary. Flexibility could be attained by a clearly defined line of succession after incapacitation of the chief executive, together with both physical separation between the successors and reliable communication among them. It is no simple matter to achieve this finely balanced combination of succession, separation, and communication. The concept of flexibility assists in assessing alternative implementations in terms of the resulting robustness against uncertainty in hierarchical decisionmaking.
Hierarchical decisionmaking needs to be adaptive in response to changing circumstances in the mid- to long-term. For example, the US line of Presidential succession is grounded in the US Constitution, and it has been altered by amendment and elaborated by legislation repeatedly over time to reflect new capabilities and challenges.
The comprehensiveness of a decision is the interdisciplinary scope of the mechanisms and interactions that it accounts for and the implications it identifies. The uncertainties regarding nuclear weapons are huge, because many mechanisms, interactions, and implications are unknown or poorly understood. This means that the potential for adverse surprise is quite large. Comprehensiveness of the decision analysis is essential in establishing robustness against uncertainty. Thinking “outside the box” is a quintessential component in achieving comprehensiveness, and human qualitative judgment is of foremost importance here.
We now use these ideas to schematically prioritize two alternative hypothetical strategies for supervising nuclear weapons in a liberal democracy such as the USA, based on the proxies for robustness.
The first strategy is based on current state-of-the-art (SotA) technologies, and authority is vested in the President as commander in chief. Diverse mechanisms assure redundancy of safety features, as well as resilience and flexibility of the systems to assure detonation only in operational use, and adaptability in response to changes over longer times. The system of controls is comprehensive, but primarily based on human observation, communication, and decision.
The second strategy is new and innovative (NaI) and extensively exploits automated sensor- and computer-based access, authentication, communication, control, and decision. The strategy employs “big data” and artificial intelligence in assessing threats and evaluating risks. Humans are still in the loop, but their involvement is supported to a far greater extent by new and innovative technologies.
Our best understanding of these strategies—SotA and NaI—predicts that the second strategy would provide better safety and operability. However, deep uncertainties surround both strategies, and more so for the innovative second strategy because of its newness. The innovation dilemma is that the putatively preferable innovative alternative is more uncertain, and hence potentially worse, than the standard alternative. Two properties of robustness assist in resolving this dilemma: zeroing and trade-off.
The zeroing property of an info-gap (IG) robustness assessment states that the predicted performances have no robustness to uncertainty. This is because even small errors or lacunae in the knowledge or understanding upon which predictions are based could result in outcomes that are worse than predicted. Hence, prioritizing the strategies based on the predictions is unreliable and irresponsible. We must instead ask what degrees of safety and operability are essential for acceptable performance. That is, we satisfice the performance rather than trying to optimize it.
We then note that more demanding performance requirements can fail in more ways and thus are more vulnerable to uncertainty. This implies a trade-off between performance and robustness to uncertainty: greater robustness is obtained only by accepting more modest performance. In high-consequence systems, such as nuclear weapons, the performance requirements are very demanding. Nonetheless, the trade-off is inescapable, and it is wishful thinking to ignore it. The robustness of each strategy is assessed by its strength in the conceptual proxies. The robust-satisficing preference is for the strategy that satisfies the performance requirements at greater robustness.
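The zeroing and trade-off properties can be seen in a minimal sketch. Assume (this one-parameter degradation model and its numbers are ours, not the chapter's) that worst-case performance at uncertainty horizon h is the prediction r_pred minus h times a degradation rate s; robustness is then the largest h at which the requirement r_crit is still met.

```python
def robustness(r_pred, s, r_crit):
    """Info-gap robustness of a single alternative.

    Assumed illustrative model: worst-case performance at uncertainty
    horizon h >= 0 is R(h) = r_pred - h * s, with degradation rate s > 0.
    The robustness h_hat is the largest h at which R(h) >= r_crit.
    """
    return max(0.0, (r_pred - r_crit) / s)

r_pred = 0.95  # predicted (nominal) performance, hypothetical value

# Zeroing: demanding the predicted performance itself leaves zero robustness.
assert robustness(r_pred, s=0.1, r_crit=r_pred) == 0.0

# Trade-off: relaxing the requirement monotonically buys robustness.
for r_crit in (0.95, 0.90, 0.85, 0.80):
    print(f"r_crit={r_crit:.2f} -> h_hat={robustness(r_pred, 0.1, r_crit):.2f}")
# prints h_hat = 0.00, 0.50, 1.00, 1.50 as the requirement is relaxed
```

The monotonically rising h_hat as r_crit falls is the performance-robustness trade-off; the zero at r_crit = r_pred is the zeroing property.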
Suppose that the predicted performance of the SotA strategy only barely satisfies the performance requirements. The proxies for robustness of the SotA will have low strength, because small errors can jeopardize the adequacy of the performance. This may in fact have motivated the search for the NaI strategy whose predicted performance exceeds the requirements. In this case, the robust preference will be for NaI, although consideration must be given to the strength of its proxies for robustness. If the proxies for robustness of NaI are also weak, then neither alternative may be acceptable.
Alternatively, suppose that the SotA satisfies the performance requirements by a wide margin. Its proxies for robustness will be strong and probably stronger than for NaI. In this case, the robust preference is for SotA, illustrating the potential for a reversal of preference between the strategies: NaI is putatively preferred (based on predicted outcomes), but SotA is more robust, and hence SotA is preferred (based on robustness) over NaI. This emphasizes the difference between robustly satisficing the performance requirements (which leads to either SotA or NaI, depending on the requirements) as distinct from prioritizing based on predicted outcomes (which leads to the putatively better alternative, NaI).
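The preference reversal can be illustrated numerically. The model and numbers below are hypothetical assumptions chosen only to exhibit the crossing: NaI predicts better performance than SotA but, being less well understood, degrades faster as the uncertainty horizon grows.

```python
def robustness(r_pred, s, r_crit):
    # Largest uncertainty horizon h at which worst-case performance
    # r_pred - h * s still meets the requirement r_crit (0 if already violated).
    return max(0.0, (r_pred - r_crit) / s)

# Hypothetical parameters: NaI predicts better performance (higher r_pred)
# but is more uncertain (larger degradation rate s).
sota = dict(r_pred=0.90, s=0.05)   # state of the art: modest, well understood
nai  = dict(r_pred=0.95, s=0.20)   # new and innovative: better, more uncertain

for r_crit in (0.92, 0.80):
    h_sota = robustness(r_crit=r_crit, **sota)
    h_nai  = robustness(r_crit=r_crit, **nai)
    winner = "NaI" if h_nai > h_sota else "SotA"
    print(f"r_crit={r_crit:.2f}: h_sota={h_sota:.2f}, h_nai={h_nai:.2f} -> prefer {winner}")
```

With the demanding requirement (0.92), SotA has zero robustness and NaI is robust-preferred; with the modest requirement (0.80), SotA's slower degradation gives it greater robustness and the preference reverses, exactly the reversal described above.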