Open Access Book | 2023

Safety in the Digital Age

Sociotechnical Perspectives on Algorithms and Machine Learning

About this book

This open access book gathers authors from a wide range of social-scientific and engineering disciplines to review challenges from their respective fields that arise from the processes of social and technological transformation taking place worldwide. The result is a much-needed collection of knowledge about the integration of social, organizational and technical challenges that need to be tackled to uphold safety in the digital age.

The contributors whose work features in this book help readers navigate the massive increase in the capability to generate and use data in developing algorithms intended for the automation of work, in machine learning and next-generation artificial intelligence, and in the blockchain technology already in extensive use in real-world organizations.

This book deals with such issues as:

· How can high-risk and safety-critical systems be affected by these developments, in terms of their activities, organization, management and regulation?

· What are the sociotechnical challenges posed by the proliferation of big data, algorithmic influence and cyber-security threats in health care, transport, energy production/distribution and the production of goods?

Understanding the ways these systems operate in the rapidly changing digital context has become a core issue for academic researchers and other experts in safety science, security and critical-infrastructure protection. The research presented here offers a lens through which the reader can grasp the way such systems evolve and the implications for safety—an increasingly multidisciplinary challenge that this book does not shrink from addressing.

Table of Contents

Frontmatter

Chapter 1. Safety in a Digital Age: Old and New Problems—Algorithms, Machine Learning, Big Data and Artificial Intelligence
Abstract
Digital technologies, including machine learning, artificial intelligence and big data, are leading to dramatic changes in both the workplace and our private lives. These trends raise concerns, ranging from the pragmatic to the philosophical, regarding the nature of work, the professional identity of workers, our privacy, and the distribution of power within organizations and societies. They also represent both opportunities and challenges for the work of producing safety in high-hazard systems. We highlight a number of pressing issues related to these developments and analyze the extent to which existing lenses from sociotechnical theory can help understand them.
Jean-Christophe Le Coze, Stian Antonsen

Chapter 2. The Digitalisation of Risk Assessment: Fulfilling the Promises of Prediction?
Abstract
Risk assessment is a scientific exercise that aims at anticipating hazards. Prediction has always been a rallying call for the scientists who gave birth to this interdisciplinary movement in the 1970s. Several decades later, the broad movement of digitalisation and the promises of artificial intelligence seem to be pushing the limits of risk assessment and herald an era of faster and more precise predictions. This chapter briefly reviews the history of chemical risk assessment methods developed by regulatory bodies and associated research groups, and the complex ways in which they have been digitalised. It unpacks digitalisation to probe how its various aspects—datafication, computational innovation and modelling theories—align to meaningfully transform risk assessment, and to determine whether the ever-revamped technological promise of prediction is within closer reach than it was before.
David Demortain

Chapter 3. Key Dimensions of Algorithmic Management, Machine Learning and Big Data in Differing Large Sociotechnical Systems, with Implications for Systemwide Safety Management
Abstract
The time is ripe for more case-by-case analyses of “big data”, “machine learning” and “algorithmic management”. A significant portion of the current discussion on these topics occurs under the rubric of Automation (or artificial intelligence) and in terms of broad political, social and economic factors said to be at work. We instead focus on identifying sociotechnical concerns arising out of software development in these topic areas. In so doing, we identify trade-offs and at least one longer-term system safety concern not typically included alongside notable political, social and economic considerations: obsolescence. We end with a speculation on how skills in making these trade-offs might prove noteworthy when system safety has been breached in emergencies.
Emery Roe, Scott Fortmann-Roe

Chapter 4. Digitalisation, Safety and Privacy
Abstract
To increase the safety of industrial facilities and people, firms and their managers have traditionally paid attention to the visibility of workers’ activities and intentions, and they can now use connected objects worn by workers to collect such data. Analysing the introduction of smart glasses and smart shoes at an industrial site, this contribution explains how workers can use these tools without sacrificing their autonomy and privacy. At this site, performance and the safety of activities rest on a combination of high individual autonomy and solidarity between colleagues, strengthened by a degree of private life at work. The sociotechnical context of the site and the professionals’ desire to control their privacy at work strongly shape the trajectory of these technologies: workers favour technologies used with colleagues that strengthen the bonds of cooperation and solidarity, and resist technologies that could geolocate them or trace their movements. Moreover, involving user spokespersons is an essential condition for the successful design and dissemination of digital technologies. Finally, workers’ control over their privacy and private life at work contributes to reinforcing the performance and safety of production.
Olivier Guillaume

Chapter 5. Design and Dissemination of Blockchain Technologies: The Challenge of Privacy
Abstract
Presented as trust technologies, blockchains allow immediate, secure peer-to-peer exchanges without a trusted third party; they therefore have strong disruptive potential, but raise privacy issues. We illustrate some of the challenges this antagonism raises, and the sociotechnical compromises made to overcome them, by analysing the design of a mobility service by a consortium of some fifteen operators and its trial with the employees of these operators. The service seeks to respond to new needs linked to the electrification of company fleets by tracking the recharging of (personal) electric vehicles at work or of (professional) vehicles at home, with a view to reimbursing employees’ professional expenses by relying on a blockchain. Privacy management is a skill, based on emerging expertise distributed across a range of professions and users, which requires compromises between different conceptions of technology and data if it is to be guaranteed. For blockchain designers, these compromises have limited the disruptive potential of blockchain technology by recentralising data management and losing the open nature of the blockchain. In the eyes of other designers and users, however, they have allowed unexpected uses and benefits to emerge, such as reinforcing the choice of blockchain technology as a “privacy solution”.
Cécile Caron

Chapter 6. Considering Severity of Safety-Critical System Outcomes in Risk Analysis: An Extension of Fault Tree Analysis
Abstract
With the advent of digitalisation and big data sources, new advanced tools are needed to precisely project safety-critical system outcomes. Existing systems safety analysis methods, such as fault tree analysis (FTA), lack systematic and structured approaches that specifically account for system event consequences. Consequently, we propose an algorithmic extension of FTA for the purposes of: (a) analysing the severity of consequences of both top and intermediate events as part of a fault tree (FT) and (b) assessing risk at both the event and cut set level. The ultimate objective of the algorithm is to provide a fine-grained analysis of FT event and cut set risks as a basis for precise and cost-effective prescription of safety control measures by practitioners.
David B. Kaber, Yunmei Liu, Mei Lau
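
To make the cut-set-level risk idea concrete, the following minimal Python sketch scores the minimal cut sets of a toy fault tree by combining their joint occurrence probability with a severity weight. This is an illustration only, not the chapter’s algorithm; all event names, probabilities and severity scores are hypothetical.

```python
# Minimal sketch (not the chapter's algorithm): scoring fault-tree minimal cut
# sets by combining occurrence probability with a hypothetical severity weight.

# Hypothetical basic events: probability of occurrence and severity (1-10 scale).
basic_events = {
    "pump_failure": {"p": 0.02, "severity": 7},
    "valve_stuck":  {"p": 0.05, "severity": 4},
    "sensor_drift": {"p": 0.10, "severity": 3},
}

# Hypothetical minimal cut sets for a top event such as "loss of cooling".
cut_sets = [
    {"pump_failure"},
    {"valve_stuck", "sensor_drift"},
]

def cut_set_risk(cut_set):
    """Risk of a cut set: joint probability (independence assumed)
    times the worst-case severity of its events."""
    p = 1.0
    for event in cut_set:
        p *= basic_events[event]["p"]
    severity = max(basic_events[event]["severity"] for event in cut_set)
    return p * severity

# Rank cut sets so that control measures can target the riskiest contributors.
for cs in sorted(cut_sets, key=cut_set_risk, reverse=True):
    print(sorted(cs), round(cut_set_risk(cs), 5))
```

Ranking cut sets by such a combined probability-and-severity score is what allows safety control measures to be directed at the contributors that matter most.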

Chapter 7. Are We Going Towards “No-Brainer” Safety Management?
Abstract
Industry is stepping into its 4.0 phase by implementing and increasingly relying on cyber-technological systems. Wider networks of sensors may allow for continuous monitoring of industrial process conditions. Enhanced computational power provides the capability to process the collected “big data”. Early warnings can then be picked up and lead to suggestions for proactive safety strategies, or directly initiate the action of autonomous actuators ensuring the required level of system safety. But have we reached these safety 4.0 promises yet, or will we ever reach them? A traditional view on safety defines it as the absence of accidents and incidents. A forward-looking perspective on safety affirms that it involves ensuring that “as many things as possible go right”. In both views, however, there is an element of uncertainty associated with the prediction of future risks and, more subtly, with the capability of possessing all the information necessary for such prediction. This uncertainty does not simply disappear once we apply advanced artificial intelligence (AI) techniques to the infinite series of possible accident scenarios; it can be found behind modelling choices and parameter settings. In a nutshell, any model claiming superior flexibility usually introduces extra assumptions (“there ain’t no such thing as a free lunch”). This contribution illustrates a series of examples in which AI techniques are used to continuously update the evaluation of the safety level of an industrial system. This allows us to affirm that we are not even close to a “no-brainer” condition in which the responsibility for human and system safety is entirely moved to the machine. However, it also shows that such advanced techniques are progressively providing reliable support for critical decision making, guiding industry towards more risk-informed and safety-responsible planning.
Nicola Paltrinieri

Chapter 8. Looking at the Safety of AI from a Systems Perspective: Two Healthcare Examples
Abstract
There is much potential and promise for the use of artificial intelligence (AI) in healthcare, e.g., in radiology, mental health, ambulance service triage, sepsis diagnosis and prognosis, patient-facing chatbots, and drug and vaccine development. However, the aspiration of improving the safety and efficiency of health systems by using AI is weakened by a narrow technology focus and by a lack of independent real-world evaluation. It is to be expected that when AI is integrated into health systems, challenges to safety will emerge, some old and some novel. Examples include design for situation awareness, consideration of workload, automation bias, explanation and trust, support for human–AI teaming, training requirements and the impact on relationships between staff and patients. The use of healthcare AI also raises significant ethical challenges. To address these issues, a systems approach is needed for the design of AI from the outset. Two examples are presented to illustrate these issues: (1) the design of an autonomous infusion pump and (2) the implementation of AI in an ambulance service call centre to detect out-of-hospital cardiac arrest.
Mark A. Sujan

Chapter 9. Normal Cyber-Crises
Abstract
Despite an increasing scholarly interest in cyber-security issues, the phenomenon of large-scale cyber-crises affecting critical infrastructure remains largely unexplored. While some characteristics of its consequence dynamics have been identified—prominently its transboundary features—the underlying conditions that allow such dynamics to unfold have not yet been thoroughly explored. This chapter aims to help bridge this gap by applying the classical theoretical perspectives of Normal Accidents (NA) and High Reliability Organisations (HRO) to the sociotechnical systems of modern critical infrastructure. It argues that NA characteristics (the combination of interactive complexity and tight coupling) can be found in multiple layers of critical infrastructure operations (technical, cognitive, organisational and macro). Implications are discussed in terms of their connection to transboundary crisis dynamics.
Sarah Backman

Chapter 10. Information Security Behaviour in an Organisation Providing Critical Infrastructure: A Pre-post Study of Efforts to Improve Information Security Culture
Abstract
The study examines whether information security behaviour (ISB) in an organisation providing critical infrastructure improved after systematic efforts to improve information security culture (ISC) through the implementation of an information security management system (ISMS). The data are based on quantitative surveys before (N = 323) and after (N = 446) the efforts to improve ISC in the organisation. Qualitative interviews were also conducted before (N = 22) and after (N = 12). The study finds that the organisation managed to improve its ISC through systematic efforts over a two-year period (2014–2016), and that this also led to improvements in ISB among the personnel in the organisation. Multivariate regression analyses indicate that ISC is the most important variable influencing ISB, while ISMS measures are the most important variables influencing ISC. Thus, our results indicate that it is important to work with ISMS and ISC to increase information security in our increasingly digitalised society, especially in organisations providing critical infrastructure.
T.-O. Nævestad, J. Hovland Honerud, S. Frislid Meyer
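
As a rough illustration of the kind of multivariate regression analysis reported here, the sketch below fits a linear model of information security behaviour on culture and ISMS measures using synthetic data. The variable names, effect sizes and sample size are invented for the example and are not taken from the study.

```python
# Minimal sketch with synthetic data (not the study's dataset): a multivariate
# linear regression of information security behaviour (ISB) on information
# security culture (ISC) and ISMS measures.
import numpy as np

rng = np.random.default_rng(0)
n = 300                                      # roughly the size of one survey wave
isms = rng.normal(0.0, 1.0, n)               # hypothetical index of ISMS measures
isc = 0.6 * isms + rng.normal(0.0, 1.0, n)   # culture partly shaped by ISMS
isb = 0.7 * isc + 0.1 * isms + rng.normal(0.0, 1.0, n)  # behaviour mostly via culture

X = np.column_stack([np.ones(n), isc, isms])  # intercept + predictors
coef, *_ = np.linalg.lstsq(X, isb, rcond=None)
print(dict(zip(["intercept", "ISC", "ISMS"], np.round(coef, 2))))
```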

Chapter 11. AI at Work, Working with AI. First Lessons from Real Use Cases
Abstract
This chapter deals with the transformations of employment and work associated with recent developments in artificial intelligence. It proposes a classification based on five figures of the worker: replaced, dominated, augmented, divided and rehumanised. This taxonomy is illustrated by use cases from the catalogue of the Global Partnership on AI, a multistakeholder initiative which aims to bridge the gap between theory and practice on AI. We conclude by highlighting three shifts in the forms of work engagement that are likely to impact safety issues: the distancing of the object of work, the work on the machine itself and the reconfiguration of professional identity.
Yann Ferguson

Chapter 12. Safety in the Digital Age—Sociotechnical Challenges
Abstract
This chapter describes some of the recurring themes that emerged from the contributions in this book, as well as from the workshop in which the contributions were presented and discussed. The themes are in one way or another related to the term “sociotechnical” and thus point to problems (old and new) that are linked to the relationship between the social and technological dimensions of organisations. The chapter provides a brief explanation of the history and current use of the term “sociotechnical” before discussing three sociotechnical issues that we believe are important for dealing with safety in the digital age.
Stian Antonsen, Jean-Christophe Le Coze
Metadata
Title
Safety in the Digital Age
Editors
Jean-Christophe Le Coze
Stian Antonsen
Copyright Year
2023
Electronic ISBN
978-3-031-32633-2
Print ISBN
978-3-031-32632-5
DOI
https://doi.org/10.1007/978-3-031-32633-2
