Open Access 2018 | Original Paper | Book Chapter

7. Situated Practice and Safety as Objects of Management

Author: Petter G. Almklov

Published in: Beyond Safety Training

Publisher: Springer International Publishing


Abstract

This chapter focuses on the relationship between representations of work (rules, procedures, models, specifications, plans) and work as a situated practice, performed by real people in always unique contexts. Empirically, it is organized around two main examples, the first one being a discussion of the compartmentalization of safety seen in shipping and the railway sector. It shows how safety, as an object of management, has become decoupled from practice, and how current discourses about safety disempower practitioners and subordinate their perspectives to more “theoretical” positions. The second is based on a study of control room operators in a space research operations setting. Here, safety in the sense of avoiding harm to people is not the main concern; rather, it is the reliability and robustness of an experiment on the International Space Station that is at stake. This example serves as a starting point for discussing how research and theory on industrial safety should address the different temporalities of different work situations. It also helps to discuss, within the field of safety science, the role of rules and procedures in supporting safety, reliability and resilience. Finally, some propositions about the relationship between situated practice and the management of safety are provided: how invisible aspects of situated work might be important for safety yet hard to manage, how procedures and rules might be integrated parts of situated work as much as representations of it, and how different temporalities of work situations should be included in the theorizing of safety and resilience.

7.1 Introduction

Safety is a word we use to refer to a state or a condition, not an event in itself. This doesn’t mean that nothing happens in a safe condition. On the contrary, safety more often than not depends on practice, on continuous actions and situational adjustments. But these are not in themselves safety. Thus, regulating, managing and controlling safety is always a matter of indirect measures, directed at other things that might influence safety. This book discusses how the professionalization of safety, coupled with the increasing interest in managing safety and in training professionals to do so, can influence industrial safety. Together with my colleagues1 I have studied situated practice in a variety of industrial contexts, and based on this I will reflect on the relationship between representations of work, that is, descriptions and prescriptions, and the practice of the professionals involved. Organizational studies in general, and to some extent safety studies, have a tendency to stereotype work (see Suchman 1995; Barley and Kunda 2001). We often fail to capture the nuances in how work is actually performed, and we draw boxes, arrows and superficial models of “workflow” to represent it. The starting point and analytical lens of my discussion is this relationship between representations of work (rules, procedures, models, specifications, plans) and work as a situated practice, something that is performed by real people in always unique contexts.
Empirically, this chapter is organized around two main examples. These are not intended as comparative cases, but as two examples that allow us to develop some ideas about the relationship between professionalization and safety and reliability. The first example is a discussion of the compartmentalization of safety seen in shipping and the railway sector. The key motivation for this part is to show how safety, as an object of management, has become decoupled from practice, and how current discourses about safety disempower practitioners and subordinate their perspectives to more “theoretical” positions. The second is based on a study of control room operators in a space research operations setting. Here, safety in the sense of avoiding harm to people is not the main concern; rather, it is the reliability and robustness of an experiment on the International Space Station that is at stake. This example serves, first, as a starting point for discussing how research and theory on industrial safety should address the different temporalities of different work situations. Second, it invites us to recognize that procedures are (in some cases) an integrated part of situated work (and part of the “distributed cognition” of the control room operators). This serves to elaborate the discussions within safety science of the role of rules and procedures in supporting safety, reliability and resilience. These examples form the backbone of the chapter but are supplemented with observations from other settings, such as infrastructure and petroleum processing. I conclude by providing some propositions about the relationship between situated practice and the management of safety.
Based on both my theoretical interests and the empirical data, the present discussion is more relevant for some contexts and topics than for others. Industrial safety is a matter of avoiding small accidents and incidents as well as larger events. This chapter is mainly, but not exclusively, about situations in which the work itself is critical for safety and reliability. The quality of the work of ship captains, infrastructure technicians and control room operators is in itself relevant for safety and/or reliability. A typical setting for this discussion is an information-dense control setting with some catastrophic potential: the bridge of a ship, the cockpit of a plane, or a control room. In other work situations, safety may be more loosely associated with the quality of the work itself. A related delimitation of this discussion is that we are mostly concerned with safety and reliability with regard to major accidents and incidents of a more systemic nature. The questions inspiring this book concern how one can train employees to be safer and implement policies to improve safety. In this respect there is a difference between simple injuries (a worker falling down the stairs or bumping his head) and more systemic and complex system breakdowns. This chapter, and the findings reported here, are skewed towards the latter type of incidents. Lastly, there is an implicit assumption of good intentions in my argument. In the cases I have studied, both management and workers have a great interest in prioritizing safety and some leverage to achieve it. Sometimes that is just not the case.2

7.2 Briefly on the Theoretical Background

Suchman’s (1987) book Plans and Situated Actions is a cornerstone of ethnographically oriented studies of work, and a central reference point for my discussion of the relationships between situated work and representations of it. Her book, and related theory based on detailed studies of work, tends to highlight the uniqueness of situations, and thus provides a necessary counterweight to organizational theory and management perspectives. While studies of “situated practice” can be seen as an insistence that procedures and plans do not represent action, this is only half the story. They should also be read as a call to see the pragmatic role of these representations, the tools they constitute, in situations. The way practice is intertwined with material and symbolic artefacts in situated work represents another part of the theoretical background for this chapter.3 This is inspired by several sociotechnical approaches to situated practice.4 One such is Hutchins’ (1995; Hutchins and Klausen 1996) discussion of “distributed cognition”, a strand of theory that stresses the relations between technologies, representation and thought to the extent that the primary object of study is the distributed system. Understanding the always unique nature of situated action also fits well with recent safety-theoretical frameworks like Resilience Engineering, which stresses the importance of ever-present variability, and of how one performs work in situated contexts to handle it.5 In this literature Suchman’s plans and situated actions have their counterpart in “work as imagined and work as done” (Dekker 2006; see also Hollnagel 2015; Nathanael and Marmaras 2006; Haavik 2014).
A key trend in organizational life today is the increased focus on accountability and auditability. In the “audit society” (Power 1997), control, including control over risk, is sought through standardization, measurement and counting (Power 2007; Hohnen and Hasle 2011; Almklov and Antonsen 2010, 2014). If possible, work is broken into manageable entities to be controlled by bureaucratic methods (such as audits or “management by objectives”) or market-based means. Tasks are delimited and decontextualized as much as possible in order for them to fit with audit schemes. The resulting paper trails can be used to make workers and managers “accountable” for safety. Of course, some things are easier to standardize and control in this way than others. More complex and situationally contingent work is hard to standardize (Almklov and Antonsen 2014), and much of what we regard as professional competence is left out. Moreover, the whole doctrine of accountability tends to skew our attention towards anticipating known risks, rather than being open to the unknown (see Wildavsky 1988). The first cases I will describe are examples of how safety, under global developments towards standardization, accountability and self-regulation, has become an organizational discourse where generic models dominate over insights into the contextual peculiarities of different industries and work contexts.

7.3 First Example: Compartmentalization of Safety in Shipping and Railroads

Sometimes analytical ideas can be located in time and space. This one comes from Bergen, Norway, in 2012.
In 2012, Kristine Størkersen and I conducted interviews with the Norwegian Association of Cargo Freighters as part of a project on regulation and safety culture in the transport sectors. This visit followed several interviews onboard transport ships and passenger vessels. Compared to the mighty Norwegian Shipowners’ Organization, which represents the international shipping industry, this interest organization is small and modest. Our interviews concerned how the regulation of shipping influenced safety culture. In particular, we ended up discussing the ISM code, the international system dominating the management of safety on ships around the globe. The ISM code, developed by the International Maritime Organization (IMO), is an international standard requiring every ship to have a safety management system. It is built around principles of self-regulation, but it also places several demands on these systems. The organization we visited represented several small (and a few larger) shipowners, many of them family businesses with one or two ships, and the ships themselves varied in size and technical complexity. Our interviews in this organization centered on the tension between the global standards, represented by the ISM code, and the practical reality onboard some of these ships. Throughout the industry, the standard was seen as demanding far too much paperwork, as being of little practical use, and as hard to adapt to the practical reality. One interviewee exemplified this for us by describing how some sand boats operated in Norwegian fjords, basically sailing back and forth with sand or gravel from a quarry with a crew of two to three. And yet, he sighed, these boats are essentially under the same legislation as an oil tanker, so the inspector “should have some sense of reality!” Most ships needed consultants to help them develop a safety management system, and the systems they developed were typically too generic and too complicated to be of practical use. The inspections by national authorities (through classification societies or directly by the regulator) also focused on compliance with the ISM code and on the paperwork being in order, i.e. that the ship had a compliant safety management system (SMS). Thus, the discourse of safety drifted towards establishing a system of auditable items satisfying the ISM standards, and then complying with it.
Several of the employees at the association had worked on ships, and they cooperated closely with captains and shipowners. They, like the seamen we had interviewed earlier in the project, lamented the distance between the safety management systems implemented to control safety and the practical realities onboard the ships. Most systems were primarily paperwork, something they were required to comply with but with little practical relevance for operational safety. Moreover, the shipowners and captains were caught in a principal-agent6 relationship with the consultants. Consultants, moving around from ship to ship, may be fine with a large and generic safety management system, while the seamen who are supposed to use it and pay for it would prefer a simpler system, one more adapted to their operational context. The shipowners’ interest organization, recognizing this, had developed its own consultancy service to help member shipping companies develop less complicated systems tailored to their needs (while still fulfilling the minimal demands of the ISM code). What we observed, and what became so clear to us during interviews with these “translators”, was the compartmentalization of safety that the ISM code and the safety management systems had led to. The well-intentioned efforts towards improving safety demanded a system of governance so complicated that the practitioners were unable to handle it, and had to resort to consultants processing the paperwork for them. Knowledge of individual ships, of how to operate them and of the risks this implies, became subordinate to a formal generic system. Moreover, handling the interface between the system of governance and practice depended on another form of expertise.
This was also apparent in our interviews with captains, shipowners and crews. The demand for documentation and reports took attention away from the key tasks of the seamen, and was particularly problematic in small businesses. Moreover, the paperwork7 was not aligned with their professional practices as seamen. Rules and procedures were too specific and too little adapted to their work context and skills to be useful. The SMS didn’t help them in the most important parts of their work. A sailor on an anchor handler, a strong tugboat working for the petroleum industry, described the lack of relevance of the SMS to me during a break between activities on deck during an operation at an offshore oilfield. Being on deck on an anchor handler is truly hazardous work, involving heavy machinery, chains, wires and winches. There wasn’t much paperwork associated with this work, he told me, but as soon as the ship is anchored in the harbor and he wants to do some painting, there are all sorts of forms to fill out. The procedures were most relevant for the least dangerous work, and even then, according to him, they didn’t make much of a difference. Another informant on a high-speed passenger craft noted how the SMS described how to mark out routes in a way that didn’t consider weather and current, commenting:
experienced navigators want to – and do – choose a course according to wind and current.8
In his organization, operating a fleet of high-speed passenger craft, they had answered the demand for reporting and a solid safety management system by employing safety professionals onshore. Many of these professionals had experience from other industries and a more generic and systems-oriented approach to safety. Though there are nuances to this image, we recorded numerous examples in this and other projects of the safety management system being regarded as having little relevance for the core activities onboard the ships. In both these examples, one may assume that this lack of relevance has to do with how the professional competence of the users, of captains and deck hands, is about navigating dynamic and situationally contingent circumstances. A generic “recipe” for how to behave on deck during an evolving anchor handling operation will simply not capture the essence of this dynamic and situationally contingent work.
In the resulting paper (Almklov et al. 2014) we also included Ragnar Rosness’ historical account of the Norwegian Railways. There too, the development towards a more “professional” approach to safety, or “Health, Safety and Environment”, led to a discursive dominance of what one may call “theoretical” or generic approaches to safety. This can be traced as a historical development through several organizational changes and reorganizations in which the railroads’ traditional “Safety Office”, specializing in how to build and operate the train system safely, gradually became subordinated to an HSE department consisting of safety experts from other industries, specializing in more generic models of safety. The once so powerful safety office moved downwards in the organizational hierarchy. Its perspectives on how to make the railroad system safe became less important, and less significant in the organizational discourses. Several mechanisms contributed to this. For example, since investigations after accidents were typically based on generic models of safety inspired by other industries, more systematic and accountability-based approaches to safety tended to be the obvious measures to implement afterwards. The railroad-specific safety knowledge was still there, but its proponents were less powerful, and consequently resources were directed towards other forms of safety. In both cases we observe a weakening of the practitioners’ perspectives in safety management. These are some possible downsides of strengthening safety as a separate discipline. If the object of interest is safety, it is easy to ignore or lose track of the peculiarity of the operational contexts.

7.4 Second Example: Anticipatory Work in Space Operations

The control room operating a research module at the International Space Station (ISS) is a fascinating object of study for research on reliability and resilience (see Fig. 7.1). However, going beyond the control room itself, and including details of the surrounding organizational processes, preparation, planning and training, is even more interesting.
These other activities, our informants repeatedly reminded us, also make up more than 90% of their work. When you work with advanced and costly space operations, reliability and resilience are at the very core of the work activities. My colleagues and I followed the work of a team of research engineers conducting a biological experiment on the ISS, and we studied their extreme focus on anticipating and mitigating possible problems in advance.
The control center N-USOC9 is part of a distributed network of small control rooms operating individual equipment onboard the ISS. This control room’s most important payload is a microgravity research laboratory used for biological experiments on plants. The research engineers at N-USOC can be seen as a form of lab technicians, helping researchers transform ideas into workable experiments, testing and verifying equipment and procedures before the seeds are sent to the ISS. They then monitor the experiment as it is conducted. Due to the high cost, low accessibility and low tolerance for risk,10 space operations are an interesting case for studying reliability and resilience. Every trivial detail that could possibly cause a problem is subject to intense scrutiny. In the paper “What can possibly go wrong?” (Johansen et al. 2015) we identify and discuss “anticipatory work”: practices constituted by an entanglement of cognitive, social and technical elements involved in anticipating and proactively mitigating everything that might go wrong.11 The nature of anticipatory work changes between the planning and the operational phases of an experiment.
The case revolves around an incident in which the control room operators have to solve a telemetry error. The data from the lab module fails to reach the control room. This threatens to ruin a multi-million dollar experiment that has been planned and prepared for seven years. We followed the resolution of the problem. But, importantly, we had also studied the anticipatory work that this troubleshooting relied upon. In this preparatory stage, every anomaly that has happened in previous experiments is analyzed and mitigated in advance: by technological changes, by changing computer scripts or writing “just in case” scripts, or by developing procedures or protocols. An informant explains:
First of all it is things that have happened before and we know can happen again. After that we just sit and think ‘what if that happens, even though it looks impossible?’, so we start to think very negatively, that works well, and we write what-if scenarios.
Throughout the planning phase, possible problems that could occur were identified and subjected to collective reflection. They were documented and possible solutions were developed. The telemetry error they now experienced had occurred before. They did not know exactly what caused it, and could not fix it permanently, but they had developed several procedures that might fix it.
Problem resolution in the operational phase definitely resembled the typical story in safety journals on control room operations. There was a process of confusion and ad hoc sensemaking as they tried to understand the problem. The process also demanded some creative thinking. However, the cognitive and social process in the operational phase is intrinsically connected to the anticipatory work conducted in the planning phase. The critical difference is that the solutions developed in the calm of the preparatory phase had to be situated in the temporal flow and situational contingencies of the real-time phase. The first solution was to send a pre-programmed work-around script to the unit. This is minimally invasive and something the N-USOC can do without involving entities from the NASA/ESA network, which they did once they had diagnosed the problem. However, this work-around was unsuccessful. The next procedure was to restart a computer on the ISS handling the telemetry data. To do this, they would have to coordinate with other entities at ESA and NASA. Even though these preplanned fixes had been worked out in detail, the plans could not take into account parallel activities at the ISS. Thus, a key task for the operators was to use their understanding of the interaction effects with other operations and systems to find a way to execute this reboot in an acceptable manner. Unfortunately, another greenhouse experiment was active, with ongoing astronaut activities that continued for some time, and N-USOC couldn’t restart the computer before these had been completed, since the other team’s equipment was connected to it as well.
The temporal dimension complicates the matter further in several ways:
1. their own experiment cannot continue without telemetry for much longer, so it is urgent to get it fixed;
2. communication with the ISS only works in irregular, but pre-identified, intervals;12
3. and, of course, they are unable to control the speed of the other experiment blocking their reboot.
Thus, they need to look for upcoming time slots to perform their shutdown as soon as the other experiment is done. This is something they have not pre-planned, but their pre-planning of solutions is crucial for the resolution, as it provides them with pieces of the temporal puzzle. They improvise with plans, and this improvisation is mainly about situating the plans in a temporal flow. Moreover, in their interaction with important stakeholders in the ESA and NASA hierarchy, being able to refer to pre-planned interventions fast-tracks the go-ahead for the restart.
By focusing not only on the control room activities as the experiment unfolded, which we recorded on video and analyzed in detail, but also on the organizational context and extensive preparations, we made two observations with implications for the governance of safety. We demonstrate in some detail how the engineers try to anticipate upcoming contingencies, how they produce solutions to these (technological fixes, procedures, checklists, etc.), and how these become parts of a sociotechnical body of knowledge. The procedures and fixes are indivisible parts of their “distributed cognition” (Hutchins and Klausen 1996). The actions of the control room operators are located in a situation of which procedures, protocols, checklists, computer scripts etc. are an intrinsic part. The debates in safety research on the extent to which rules and procedures can or should control practice must be nuanced with a discussion of whether these are an integrated part of practice or not. In this case they are, and procedures and practice are entwined, but in other cases procedures mainly serve management purposes. We saw how this seemed to be the case in shipping, and we have also seen similar developments in petroleum (see for example Antonsen et al. 2008, 2012). Due to the dominating logic of accountability, control by standardization and the compartmentalization of HSE, the representations of work are (often) too decontextualized to be of much use in situated work contexts.
A second observation with relevance for this book concerns the implications of the different temporalities of the planning phase and the operations phase. In the operational phase of the experiment, plants have been watered and are growing, so time is running unstoppably. The operators continuously try to stay ahead of unfolding events and coordinate with parallel activities. They cannot turn back, and must continuously improvise to implement even the best-laid plans. This work clearly fits the typical narrative in resilience engineering: it is about handling variability and navigating uncertainty, not only avoiding errors. In the planning phase, however, the anticipatory work is indeed characterized by an intense focus on “what can possibly go wrong”. The tolerance for errors is very low (due to the cost and low accessibility of the space station), so extensive work is undertaken to mitigate every possible contingency in advance. The differences in the temporalities of these two phases, and in the practices that make them safe, require different strategies of management and training.

7.5 Discussion: Some Propositions

In sum, I have put forward some ideas based on studies of situated work in critical settings. While I have exemplified these ideas with observations from shipping, railways and a control room, they are not solely based on these settings.
The mode of control in modern organizations, centered on standards, accountability and a decontextualized view of practice, can render important aspects of practice less visible, and discursively weaker. The drift towards more generic and accountability-centered approaches to safety can make procedures increasingly decontextualized, and decoupled from practice. However, some of the aspects of work that are “invisible” in this discourse, such as adapting to the variability of concrete situations, are important for resilience and reliability. Thus, important parts of what makes work safe are often not regulated or supported by the safety management systems installed, due to their situation-specific nature. Increasing the granularity of the existing systems, regulating work in even more detail, is not likely to improve that.
It is important to note that procedures, rules and checklists can be an integrated part of a community of practice: a resource for improvisation, a means of remembering shared knowledge, and an inextricable part of the “distributed” knowledge of the workers. At other times, they primarily serve purposes of accountability and external control. Discussions of rules and procedures (see e.g. Hale and Borys 2012) and how they contribute to safe practice should distinguish between these functions. It is not a matter of rules versus improvisation, but of how rules and procedures may support or hamper situational improvisation. For managers, one consequence of this insight should be to resist, or at least reflect critically on, the temptation to integrate procedures that work in one setting, within one community of practice, into the company’s more generalized safety management systems. A second is that managers should seek to understand the situationally adaptive work that is necessary in critical work processes, and recognize that this work might be impossible to standardize and enroll in organizational systems of control. However, it still needs to be supervised.
The temporality of the work situation is an important factor in understanding the relationship between representations of work and situated practice. In some types of work, such as the work of the control room operators described here, the petroleum processing plant operators described by Kongsvik et al. (2015) or the infrastructure technicians described in Almklov and Antonsen (2014), creatively situating planned activities in a temporally unfolding situation is a core task. In all these settings, the workers deal with unique situational contingencies. This fits poorly with rationalistic models of work and can be invisible in formal descriptions. Generally, representations of work tend to be detached from the evolving temporal trajectories of work as performed. A process that goes on and on, like a seedling growing in a greenhouse on the space station or a process plant running continuously, has a temporal trajectory that must be considered. There are temporal constraints on decisions and work execution: simultaneous activities might influence your activities or system, people get tired over time, shifts end, and doing a task for the first time differs from doing it the second time. In operational work, managing such temporal trajectories and handling temporal variability is crucial, both for getting work done and for getting it done safely.
One caveat, however, is that the accounts and theorizing about improvisation and the handling of variability in such situations should not be uncritically applied to work situations with other temporal characteristics. Sometimes, as in the planning phase of the space experiment, one has the time, and takes the time, to plan and re-plan to avoid everything that could possibly go wrong. And sometimes a standardized description of a task is almost all you need. Arguably, many of the insights generated in recent years in safety science, e.g. in Resilience Engineering, on the importance of managing variability, are mostly relevant in operational settings, within an operational temporality and with a certain amount of situational variability. Thus, for managers and workers seeking to improve safety, recognizing the differences in temporality between settings is an important step in choosing strategies of safety management for each situation.13 One should not try to model one in the image of the other.
Many organizational discourses and systems implemented to improve safety are centered on standardized tasks and measurable goals, and they fail to capture important aspects of what makes work safe. This book is about the professionalization of safety, about how to improve safety even further in industrial settings. A key argument of this chapter in this respect is that the systems, procedures, rules, checklists and reports supporting work in operational settings must be developed with a keen eye on the situational improvisation and adaptation that is often important in such work, not only for its efficient execution but also for its safety.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Footnotes
1. The observations from the two main cases here were developed in collaboration with Ragnar Rosness, Kristine Størkersen, Jens Petter Johansen and Abdul Basit Mohammad.
2. The underlying causes of the South Korean Sewol ferry accident show how several actors seem to have had a weak interest in safety (Kim et al. 2016). At the workshop (organized by FonCSI in November 2015 and a highlight of the project that led to this book, editors’ note), Jonathan Molyneux presented a rather grim picture of the resources available for addressing safety in the global mining industry.
3. See also Gherardi in this volume on the relationship between situated practice and safety.
4. A somewhat idiosyncratic selection of mine would include studies from science and technology studies (e.g. Latour 1999; and my own take in Almklov 2008), distributed cognition (Hutchins 1995; Hutchins and Klausen 1996), anthropology of technology (Ingold 2000) and sociomaterial theory (Orlikowski and Scott 2008). All of these have different, but in some sense relational, conceptions of representation and technology.
5. Similar thoughts are also found in the literature on High Reliability Organizations (LaPorte and Consolini 1991; Weick and Sutcliffe 2015). It is most explicitly argued in Resilience Engineering (Hollnagel et al. 2006). Also within the field of ergonomics, the distinctions and relationships between representations of work and work as performed have been theorized (e.g. Guérin et al. 2007).
6. See Eisenhardt (1989) for an introduction to agency theory.
7. See Knudsen (2009) for a discussion of the relationship between paperwork and seamanship.
8. This example is also discussed in Størkersen et al. (2016).
9. The Norwegian User Support and Control Centre.
10. E.g. any risk of the experiment polluting the atmosphere of the ISS or harming the astronauts is unacceptable.
11. Recently, similar types of sociotechnical work have been labeled “anticipation work” within STS. See Steinhardt and Jackson (2015) and Clarke (2016).
12. The communication coverage is displayed on a timeline, usually shown on one of the control-room screens, to allow the operators to be aware of upcoming communication shadows before initiating activities or data transfers.
13. Journé and Raulet-Croset (2006) interestingly discuss how to manage situations, or situated work, including the role of temporal structures. Hayes (2012) also suggests ways to balance pre-planned operational envelopes with ways of managing safety in evolving situations. More generally, Grøtan (2017) presents interesting models for the management of resilient practice.
References
Almklov, P. G. (2008). Standardized data and singular situations. Social Studies of Science, 38(6), 873–897.
Almklov, P. G., & Antonsen, S. (2010). The commoditization of societal safety. Journal of Contingencies and Crisis Management, 18(3), 132–144.
Almklov, P. G., & Antonsen, S. (2014). Making work invisible: New public management and operational work in critical infrastructure sectors. Public Administration, 92(2), 477–492.
Almklov, P. G., Rosness, R., & Størkersen, K. (2014). When safety science meets the practitioners: Does safety science contribute to marginalization of practical knowledge? Safety Science, 67, 25–36.
Antonsen, S., Almklov, P., & Fenstad, J. (2008). Reducing the gap between procedures and practice: Lessons from a successful safety intervention. Safety Science Monitor, 12(1), 1–16.
Antonsen, S., Skarholt, K., & Ringstad, A. J. (2012). The role of standardization in safety management: A case study of a major oil & gas company. Safety Science, 50(10), 2001–2009.
Barley, S. R., & Kunda, G. (2001). Bringing work back in. Organization Science, 12(1), 76–95.
Clarke, A. E. (2016). Anticipation work: Abduction, simplification, hope. In Boundary objects and beyond: Working with Leigh Star (p. 85).
Dekker, S. (2006). Resilience engineering: Chronicling the emergence of confused consensus. In E. Hollnagel, D. D. Woods, & N. Leveson (Eds.), Resilience engineering: Concepts and precepts. Hampshire: Ashgate.
Eisenhardt, K. M. (1989). Agency theory: An assessment and review. Academy of Management Review, 14(1), 57–74.
Grøtan, T. O. (2017). Training for operational resilience capabilities (TORC): Summary of concept and experiences. Trondheim: SINTEF Report A28088.
Guérin, F., Laville, A., Daniellou, F., Duraffourg, J., & Kerguelen, A. (2007). Understanding and transforming work: The practice of ergonomics. Lyon: Anact Network Edition.
Haavik, T. K. (2014). Sensework. Computer Supported Cooperative Work (CSCW), 23(3), 269–298.
Hale, A., & Borys, D. (2012). Working to rule, or working safely? Part 1: A state of the art review. Safety Science.
Hayes, J. (2012). Use of safety barriers in operational safety decision making. Safety Science, 50(3), 424–432.
Hohnen, P., & Hasle, P. (2011). Making work environment auditable: A ‘critical case’ study of certified occupational health and safety management systems in Denmark. Safety Science, 49(7), 1022–1029.
Hollnagel, E. (2015). Why is work-as-imagined different from work-as-done? In R. L. Wears, E. Hollnagel, & J. Braithwaite (Eds.), Resilient health care: The resilience of everyday clinical work (pp. 249–264). Farnham, UK: Ashgate.
Hollnagel, E., Woods, D. D., & Leveson, N. (2006). Resilience engineering: Concepts and precepts. Gower Publishing Company.
Hutchins, E. (1995). Cognition in the wild. Cambridge: MIT Press.
Hutchins, E., & Klausen, T. (1996). Distributed cognition in an airline cockpit. In Y. Engeström & D. Middleton (Eds.), Cognition and communication at work (pp. 15–34). Cambridge: Cambridge University Press.
Ingold, T. (2000). The perception of the environment: Essays on livelihood, dwelling and skill. Psychology Press.
Johansen, J. P., Almklov, P. G., & Mohammad, A. B. (2015). What can possibly go wrong? Anticipatory work in space operations. Cognition, Technology & Work, 1–18.
Journé, B., & Raulet-Croset, N. (2006). The concept of situation: A key concept in the studying of strategizing and organizing practices in a context of risk. Paper presented at EGOS 2006.
Kim, H., Haugen, S., & Utne, I. B. (2016). Assessment of accident theories for major accidents focusing on the MV SEWOL disaster: Similarities, differences, and discussion for a combined approach. Safety Science, 82, 410–420.
Knudsen, F. (2009). Paperwork at the service of safety? Workers’ reluctance against written procedures exemplified by the concept of ‘seamanship’. Safety Science, 47(2), 295–303.
Kongsvik, T., Almklov, P., Haavik, T., Haugen, S., Vinnem, J. E., & Schiefloe, P. M. (2015). Decisions and decision support for major accident prevention in the process industries. Journal of Loss Prevention in the Process Industries, 35, 85–94.
LaPorte, T. R., & Consolini, P. M. (1991). Working in practice but not in theory: Theoretical challenges of “high-reliability organizations”. Journal of Public Administration Research and Theory, 1(1), 19–48.
Latour, B. (1999). Pandora’s hope: Essays on the reality of science studies. Cambridge: Harvard University Press.
Nathanael, D., & Marmaras, N. (2006). The interplay between work practices and prescription: A key issue for organizational resilience. In Proceedings of the 2nd Resilience Engineering Symposium (pp. 229–237).
Orlikowski, W. J., & Scott, S. V. (2008). Sociomateriality: Challenging the separation of technology, work and organization. The Academy of Management Annals, 2(1), 433–474.
Power, M. (1997). The audit society: Rituals of verification. Oxford: Oxford University Press.
Power, M. (2007). Organized uncertainty: Designing a world of risk management. Oxford: Oxford University Press.
Steinhardt, S. B., & Jackson, S. J. (2015). Anticipation work: Cultivating vision in collective practice. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (pp. 443–453). ACM.
Størkersen, K. V., Antonsen, S., & Kongsvik, T. (2016). One size fits all? Safety management regulation of ship accidents and personal injuries. Journal of Risk Research, 1–19.
Suchman, L. (1987). Plans and situated actions. New York: Cambridge University Press.
Suchman, L. A. (1995). Making work visible. Communications of the ACM, 38(9), 56–64.
Weick, K. E., & Sutcliffe, K. M. (2015). Managing the unexpected: Sustained performance in a complex world (3rd ed.). John Wiley & Sons.
Wildavsky, A. B. (1988). Searching for safety. Transaction Publishers.
Metadata
Title: Situated Practice and Safety as Objects of Management
Author: Petter G. Almklov
Copyright year: 2018
DOI: https://doi.org/10.1007/978-3-319-65527-7_7