
1980 | Book

Societal Risk Assessment

How Safe is Safe Enough?

Edited by: Richard C. Schwing, Walter A. Albers Jr.

Publisher: Springer US

Book series: General Motors Research Laboratories


About this book

This volume constitutes the papers and discussions from a symposium on "Societal Risk Assessment: How Safe is Safe Enough?" held at the General Motors Research Laboratories on October 8-9, 1979. This symposium was the twenty-fourth in an annual series sponsored by the Research Laboratories. Initiated in 1957, these symposia have as their objective the promotion of the interchange of knowledge among specialists from many allied disciplines in rapidly developing or changing areas of science or technology. Attendees characteristically represent the academic, government, and industrial institutions that are noted for their ongoing activities in the particular area of interest. The objective of this symposium was to develop a balanced view of the current status of societal risk assessment's role in the public policy process and then to establish, if possible, future directions of research. Accordingly, the symposium was structured along two dimensions: certainty versus uncertainty, and the subjective versus the objective. Furthermore, people representing extremely diverse disciplines concerned with the perception, quantification, and abatement of risks were brought together to provide an environment that stimulated the exchange of ideas and experiences. The keys to this exchange were the invited papers, arranged into four symposium sessions. These papers appear in this volume in the order of their presentation. The discussions that in turn followed from the papers are also included.

Table of Contents

Frontmatter

The Risks We Run and the Risks We “Accept”

Frontmatter
The Nature of Risk
Abstract
“Risk” may be defined as a compound measure of the probability and magnitude of adverse effect. Important distinctions can be made among six major classes of hazard (infectious and degenerative diseases; natural catastrophes; failure of large technological systems; discrete, small-scale accidents; low-level, delayed-effect hazards; and sociopolitical disruptions). Decisions about risks meet with four kinds of limitations: of empirical analysis of “the facts”, of social value appraisal, of “risk management”, and of the assignment of rights and responsibilities.
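One common formalization consistent with this definition, offered here only as an illustrative sketch and not as notation from Lowrance's paper, treats risk as the probability-weighted magnitude of adverse effect:

R = \sum_{i} p_i \, m_i

where p_i is the probability of the i-th adverse outcome and m_i its magnitude; for a single outcome this reduces to R = p \cdot m.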
William W. Lowrance
The Uncertain Risks We Run: Hazardous Materials
Abstract
The problems of risk assessment are discussed from the dual point of view of:
1. Who is exposed — and to how much, and
2. Given the estimated levels of exposures, what estimates of risk can be developed.
The major issue of concern is that in general there is inadequate data on who is exposed, and almost no data on how much exposure there is. Individual exposure monitoring is rarely done (except for such things as radiation, through radiation badges) making it impossible to associate individual exposure with subsequent disease — if any. It is often difficult to know how many employees are exposed to a given agent. Before monitoring is undertaken an agent usually has to come under suspicion, thereby making inevitable unmonitored exposures to materials later found to be hazardous.
The mathematical models for extrapolation from species to species vary enormously in the predictions they yield, even when one accepts all the assumptions about species-to-species jumping. Recent work derived from Whittemore and Keller would seem to imply that in a multi-stage cancer process, the initiator stage dose-effect may pile up early in the life history of exposure, implying that several short term exposures to different carcinogens may create more risk than one long-term exposure. Promoting agents, according to this model, on the other hand, seem to have a different mode of action — with far more rapidly diminishing residual effects — so that the promoter may “have to be there” for its effect to be manifest. These different modes of action have implications for research and applications for prevention.
Marvin A. Schneiderman
The Known Risks We Run: The Highway
Abstract
The paper reviews known risks on the highway in perspective with the options available for reduction of risk, illustrating principles from experience in Great Britain.
Firstly, the overall risks of accident and injury on the highway are enumerated statistically in relation to the individual and to the community. The subjective view of these risks is discussed in terms of public response to different situations and monetary values ascribed.
The relatively objective assessment of risks associated with different factors has been obtained through a multi-disciplinary study of the contribution of aspects of highway design, vehicle condition, and road user behaviour to accident occurrence. Stress is laid on interactions between the road, the vehicle, and the road user. Other studies give quantification of risks for specific road engineering and environmental factors, and show how probability of injury may be related to vehicle design.
The potential for reduction in accident or injury risk is examined in the light of known benefits from well tried countermeasures, and quantified in relation to road engineering and traffic management, vehicle design and use, road user behaviour and road usage.
Finally, risk factors are contrasted with remedial potential. Future directions for application of countermeasures to reduce risk and for research to enhance quantification of risk and identify new effective countermeasures are enumerated.
Barbara E. Sabey, Harold Taylor
Perceptions of Risk and Their Effects on Decision Making
Abstract
The evaluation of risks is a part of any rational decision-making process. But what risks? And whose evaluation? Measures of risk tend to fall into two broad categories: those that purport to observe or calculate the actual risk of a process or project and those that rely upon the judgments of those assessing the risk. A nomenclature problem arises here. Some characterize the categories as “objective” and “subjective”; others, perhaps somewhat arrogantly, call them “real” and “imagined.” Technical experts tend to consider the use of measures of the first sort the only legitimate way of describing risk, yet measures of the second sort tend to dominate the thinking and actions of most individuals. Whatever they are called, the two measures of risk seem seldom, if ever, to agree. The social and psychological reasons (or explanations) for the disparity are interesting but, beyond them, there are important implications of the disparity for decision making in an increasingly technological society. The distinction between the two measures of risk creates difficulties for decision makers and regulators, among which are: an increase in the use of propaganda and indoctrination by government, industry, and technical experts in attempts to convince the public that technical estimates of risk are valid; a continued erosion of trust and understanding between experts and the rest of the public; further complication in the already involved process of setting priorities for government or corporate programs; and a challenge to decision makers to explain uncertainties about the effects of their actions.
Raphael G. Kasper

“Acceptability” with Fixed Resources

Frontmatter
On Making Life and Death Decisions
Abstract
Recent research has provided us with methods by which an individual can make decisions that involve risk to his life in a way that is consistent with his total preferences and with his current risk environment. These methods may ethically be used only by the individual himself or by an agent designated by the individual. In the absence of such delegation, anyone who imposes a risk on another is guilty of assault if the risk is large enough. Just as society has found ways to distinguish a “pat on the back” from physical battery, so must it now determine what risk may be placed upon another without his consent.
The research on hazardous decision making creates a framework for this exploration. The basic concept of this approach is that no one may impose on another a risk-of-death loss greater than a specified criterion value established by the experience of society. If anyone attempted to do so, he could be forbidden by injunction. The only way that an injunction could be avoided would be by showing evidence of insurance that would cover the damages to be paid by the imposer of the risk if the unfortunate outcome should occur. The methodological framework is used both to estimate the risk-of-death loss and the amount to be paid if death occurs, an amount that is likely to be much larger than present “economic” values of life. Evidence would be required both on the preferences of the individual-at-risk as revealed and corroborated by his behavior and on the magnitude of the risk as assessed by experts.
Such a system is likely to require revisions in the present legal codes. It is to be expected that when a logically and ethically based risk system is functioning, there will be an increased interest in purchasing the consent of people to imposed risk. Problems of securing the consent of contiguous property owners, for example, could be handled by interlocking options. People will also be more likely to be informed of the risk implied by using products or services. Thus risk would become an explicit part of purchasing decisions. The joining of logic and ethics in these new procedures offers hope for a more effective and humane treatment of risk issues in society.
Ronald A. Howard
Economic Tools for Risk Reduction
Abstract
Risks are inherent in human activity. Society has devised a series of tools (or institutions or modes of behavior) to cope with risk; some of the coping takes the form of risk reduction behavior and some takes the form of rationalizing the acceptance of risk. Much of the nonrational behavior can be seen as suboptimizing, given the difficulty and expense of obtaining data, the difficulty and expense of analysis, and the cost of changing behavior. Three contradictions exemplify the difficulty of regulating risk: we are safer than ever before but more worried about risk, consumerists have achieved virtually all of their objectives but there is general dissatisfaction with risk regulation, and regulations never seem to achieve their objective. These contradictions show the need for explicit estimation of risks, costs, and benefits and recognition of the difficulty of changing individuals’ behavior. Only by recognizing the difficulties of regulating risks and developing thoughtful approaches can we hope to achieve progress.
Lester B. Lave
Trade-Offs
Abstract
Finite resources meeting competing desires require efficient expenditures in all areas of life, even those that are life extending. Efficiency thus becomes synonymous with trade-offs. A survey of longevity for longevity trades introduces the concept of choosing among alternative life extending programs.
The 55 mph speed limit introduces more dimensions into the problem and the trade-offs become more difficult. Finally, the risks inherent in different energy futures present a very complex trade-off problem. The situation is complicated by individual and group perspectives and illustrates why a political consensus is difficult to achieve. The paper graphically illustrates why our institutions are virtually paralyzed by attempting to gain a mutually acceptable policy.
Richard C. Schwing
Risk-Spreading through Underwriting and the Insurance Institution
Abstract
The insurance underwriting decision process is often viewed as scientific, precise, and capable of accurate risk assessment. In fact, it is frequently influenced by competitive forces, the availability of reinsurance, and the judgment of underwriters. Inflation and changing social values also add to underwriting uncertainty by causing claims distributions to be unstable over time.
The financial ability of insurers to take risks is affected by the level of surplus and the stability of the underwriting portfolio. The latter is affected by the mix of insurance lines underwritten. Portfolio theory provides a helpful frame of reference for the evaluation of underwriting portfolios.
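As an illustrative sketch of that frame of reference (standard portfolio theory, not notation taken from Hammond's paper), underwriting results for each line can be treated like asset returns: with premium weights w_i, line-result standard deviations \sigma_i, and correlations \rho_{ij}, the variance of the overall underwriting portfolio is

\sigma_P^2 = \sum_{i} \sum_{j} w_i \, w_j \, \rho_{ij} \, \sigma_i \, \sigma_j

so an insurer can, in principle, stabilize aggregate results by mixing lines whose outcomes are weakly or negatively correlated.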
J. D. Hammond

“Acceptability” in a Democracy — Who Shall Decide?

Frontmatter
Facts and Fears: Understanding Perceived Risk
Abstract
Subjective judgments, whether by experts or lay people, are a major component in any risk assessment. If such judgments are faulty, efforts at public and environmental protection are likely to be misdirected. The present paper begins with an analysis of biases exhibited by lay people and experts when they make judgments about risk. Next, the similarities and differences between lay and expert evaluations are examined in the context of a specific set of activities and technologies. Finally, some special issues are discussed, including the difficulty of reconciling divergent opinions about risk, the possible irrelevance of voluntariness as a determinant of acceptable risk, the importance of catastrophic potential in determining perceptions and triggering social conflict, and the need to facilitate public participation in the management of hazards.
Paul Slovic, Baruch Fischhoff, Sarah Lichtenstein
Ethics, Economics and the Value of Safety
Abstract
Economists view private risk as a commodity — riskier jobs receive higher wages to compensate workers for voluntarily accepting job-related risks of death. In the economist’s view, it is then a simple task to determine a money-risk trade-off derived from the job market and apply this trade-off in making decisions on social or public risks (as opposed to private risks). However, whereas private risks are typically compensated (payment is made a priori to accept risk), social or public risks are typically uncompensated. This distinction, compensated versus uncompensated risks, has long been observed, but in terms of a parallel terminology, voluntary versus involuntary risks. Society seems much more averse to the latter than to the former. Economics as a behavioral science does not provide any clues as to this distinction. However, ethics, or rather the study of ethical systems, does provide a rationale for distinguishing between compensated and uncompensated risks. In this paper, four ethical systems — Utilitarian, Nietzschean, Rawlsian, and Libertarian — are examined as they apply to the question of the right or wrong of imposing or accepting risks. The conclusion of this analysis is that all of the four ethical systems examined accept compensated risks but reject uncompensated risks under at least some circumstances. The technical implication for economic analysis is that if any of these four ethical systems is used to construct a social welfare function, that function may imply lexicographic social preferences between public and private safety.
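As an illustration of the term (a standard definition, not notation from Schulze's paper), a social ordering over two attributes x and y is lexicographic, with x given absolute priority, when

(x_1, y_1) \succ (x_2, y_2) \iff x_1 > x_2, \ \text{or} \ \big(x_1 = x_2 \ \text{and} \ y_1 > y_2\big),

so that no amount of the lower-priority attribute can compensate for a shortfall in the higher-priority one, and the smooth trade-offs assumed in conventional benefit-cost analysis break down.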
William D. Schulze
Problems and Procedures in the Regulation of Technological Risk
Abstract
Until recently, evaluating the risks of technology has been considered a technical problem, not a political issue; a problem relegated to expertise, not to public debate. But disputes have politicized the issue of risk and led to the development of procedures to enhance public acceptability of controversial projects. Our paper reviews various types of hearings, public inquiries, advisory councils, study groups and information forums that have been formed in the United States and Western Europe to resolve disputes over science and technology. We analyze the assumptions about the sources of conflict and the appropriate modes of decision making that underlie these procedures. And we suggest some reasons for their rather limited success in reducing political conflict and achieving public consensus.
Dorothy Nelkin, Michael Pollak
The Role of Law in Determining Acceptability of Risk
Abstract
Legal criteria and procedures for determining societal acceptability of risk are established by governmental institutions performing legislative (i.e., deliberate formulation of rules with prospective application) and/or judicial (i.e., resolution of conflict on a case-by-case basis) functions. In both cases the law-making process reflects both the subjective preferences of the human beings who administer these institutions and their sense of societal attitudes.
Laws, once formulated, are in no sense immutable. The words used in formulation of the laws are inexorably subject to interpretation, and interpretation necessarily reflects ethical, cultural, and political considerations. Moreover, the laws — whether made by legislature, court, or administrative agency — are always subject to override by political effort culminating in legislative action. Thus courts and administrative bodies are constrained in deciding “how safe is safe enough” by their sense of what the body politic desires or will tolerate.
It is futile to expect that societal determinations as to acceptability of risk can be made in an entirely rational, objective manner that will reflect a “correct” balancing of benefits and risks. Indeed, societal “acceptability” is an ephemeral thing that frequently changes from day to day. The most for which we can hope is that law provides procedures through which decision-makers will have access to all relevant data — hard as well as soft — so that they can perform their assigned function of making determinations that reflect their sense of what their constituency wants.
Harold P. Green

Directions and Perspectives of Societal Risk Assessment

Frontmatter
Aesthetics of Risk: Culture or Context
Abstract
High standard Himalayan climbing is, quite probably, the riskiest business there is: the chances of being killed are around 1 in 8 or 1 in 10 per expedition. So Himalayan mountaineering provides an ideal (if dangerous) laboratory for the investigation of how and why people come to accept a very high level of risk. By a fortunate coincidence, such discussion of risk as exists in the anthropological literature is centered on the phenomenon of Himalayan trade, and some of these traders (the Sherpas) are now heavily involved in mountaineering.
The conventional anthropological theory is that individuals are guided in their choice between risk-avoiding and risk-accepting strategies by their world views - their culture. A more radical hypothesis accepts this but goes on to suggest that both chosen strategy and culture are, in turn, closely related to the social context that an individual finds himself in. Since an individual’s social context can be changed, either by his own efforts or by the actions of his fellows, it follows that his culture and his chosen strategy may also change. On this hypothesis, it is a waste of time trying to discover which is the correct (or best) strategy. Strategies are not right or wrong; they are appropriate or inappropriate. At this point, the anthropology of risk begins to acquire practical implications. But just what these implications are is not (as yet) too clear.
1. Complex industrial societies are likely to generate a wide variety of social contexts and, at the same time, the social policies that such societies adopt are likely to alter the distribution of those contexts - increasing the number of individuals in some and decreasing the numbers in others.
2. Each context will generate its own strategy, its own appropriate pattern of behavior... its own rationality. The interesting question then becomes: how do these different rationalities impinge upon one another — what does the risk avoider stand to gain, as well as lose, from the activities of the risk accepter, and vice versa?
Could this mean that there is some optimum configuration of social contexts — some particular mix of contradictory rationalities — at which the welfare of the totality will reach a maximum? If the answer is “yes” then public policy can take an oblique approach to risk; advocated policies can be assessed simply according to whether they are likely to bring the mix of rationalities nearer to, or further from, this optimum. There will be no need for anyone to say how much a human life is worth.
Michael Thompson
Witches, Floods, and Wonder Drugs: Historical Perspectives on Risk Management
Abstract
Risk is a people problem, and people have been contending with it for a very long time indeed. I extract some lessons from this historical record and explore their implications for current and future practice of risk management.
Socially relevant risk is not uncertainty of outcome, or violence of event, or toxicity of substance, or anything of the sort. Rather, it is a perceived inability to cope satisfactorily with the world around us. Improving our ability to cope is essentially a management problem: a problem of identifying and carrying out the actions which will change the rules of the game so that the game becomes more to our liking.
To cope better is to better understand the nature of risks and how they develop. It is naive and destructive to pretend that such understanding can carry with it the certainties and completeness of traditional science. Risk management lies in the realm of trans-science, of ill-structured problems, of messes. In analyzing risk messes, the central need is to evaluate, order, and structure inevitably incomplete and conflicting knowledge so that the management acts can be chosen with the best possible understanding of current knowledge, its limitations, and its implications. This requires an undertaking in policy analysis, rather than science.
One product of such analyses is a better conceptualization of “feasibility” in risk management. Past and present efforts have too often and too uncritically equated the feasible with the desirable. Results have been both frustrating and wasteful.
Another is an emphasis on the design of resilient or “soft-fail” coping strategies. The essential issue is not optimality or efficiency, but robustness to the unknowns on which actual coping performance is contingent.
The most important lesson of both experience and analysis is that societies’ abilities to cope with the unknown depend on the flexibility of their institutions and individuals, and on their capability to experiment freely with alternative forms of adaptation to the risks which threaten them.
Neither the witch hunting hysterics nor the mindlessly rigid regulations characterizing so much of our present chapter in the history of risk management say much for our ability to learn from the past.
William C. Clark
Prospects for Change
Abstract
There will be change — no question about that. But there are questions about the direction and rate of change, whether or not the change can be controlled, and if so, who will direct it and how.
The problem of controlling changes raises the issue of technological determinism. Is technology autonomous, as Langdon Winner claims? Is it not also possible that there is autonomous social change?
In modern industrial society, neither technological nor social change is autonomous. They are interconnected and interactive. And the directions of both social and technical changes will be determined in large measure by our assessment of societal risks and benefits, which, in the last analysis, will be both cause and effect of changes in our values.
Some technological changes are already under way. Certain of these are the result of advances in our scientific technology, such as developments in computers, communications, and information science. Others derive from a changing public attitude toward the environment, while still others result from our profligate use of energy and materials in the past, and are heightened by international political developments.
How we view those changes — how we choose among the available technologies or which ones we will endeavor to make available — will depend on how they interact with a series of sociopolitical-cultural changes which are also occurring or are about to make themselves felt.
These social changes include a burgeoning world population (affecting the population/resource ratio); transformation in the basic institution of the American family; revolutionary changes in learning, living, and leisure patterns; the growing older of the American population; the emergence of participatory democracy; alterations in the city-suburb interface; and the like. Perceptions of risk will undoubtedly change as new sociocultural and demographic patterns emerge.
Inasmuch as our technology has not yet fully responded to these massive social changes which are already underway, we might be faced with a “technological lag.” Despite the present crisis of confidence, there is ample evidence to indicate that American science and technology can respond effectively to these changes while providing new and changing answers to the question of “How safe is safe enough?”
Melvin Kranzberg
Miscellaneous Discussion
Richard C. Schwing, Walter A. Albers Jr.
Concluding Remarks
Abstract
First, if you’ll pardon me, I’ll start with a facetious remark. I thought a few minutes about the subtitle of this conference: “How Safe is Safe Enough?” My judgmental, but professional, assessment is .3624 units. Really, all the members in this conference realize that this question, in the abstract, does not make sense. It could make sense, of course, if suitably amplified. There is no absolute standard for safety — nor should there be. In any particular policy or decision choice where safety is a concern, there is undoubtedly a myriad of other concerns. What is the full panoply of costs, of benefits, and of risks, and what are their distributional impacts? If an action is contemplated, what are other possible contending action alternatives? What do we know about present uncertainties, the disputes about these uncertainties, and how might the assessments of these uncertainties change over time? What about the precedent that will be established if such and such an action is taken? “How Safe is Safe Enough?” is a short-hand, catchy-sounding phrase that is merely a pitifully weak and misleading simplification of a very complex problem.
Howard Raiffa
Symposium Summary: The Safety Profession’s Image of Humanity
Abstract
In the 1780’s, Immanuel Kant was struggling to understand the basic principle underlying morality. First he stated it as a “categorical” imperative, meaning that it holds unconditionally: “you ought to do X,” and no if’s, and’s, or but’s. The X you ought to do is to act so that you can will the principle of your action to hold universally, i.e., for everyone in every situation. In order to clarify the meaning of this categorical imperative, Kant gave us an alternative version: “so act as to treat humanity, either in yourself or in another, never as means only but as an end withal.”
C. West Churchman
Backmatter
Metadata
Title
Societal Risk Assessment
Edited by
Richard C. Schwing
Walter A. Albers Jr.
Copyright year
1980
Publisher
Springer US
Electronic ISBN
978-1-4899-0445-4
Print ISBN
978-1-4899-0447-8
DOI
https://doi.org/10.1007/978-1-4899-0445-4