
11.12.2019 | Issue 4/2019 Open Access

Minds and Machines 4/2019

Algorithmic Decision-Making and the Control Problem

John Zerilli, Alistair Knott, James Maclaurin, Colin Gavaghan


The danger of human operators devolving responsibility to machines, and failing to detect the cases in which those machines err, has been recognised for many years by industrial psychologists and engineers studying the human operators of complex machines. We call it "the control problem", understood as the tendency of the human within a human–machine control loop to become complacent, over-reliant or unduly diffident when faced with the outputs of a reliable autonomous system. While the control problem has been investigated for some time, its manifestation in machine learning contexts has so far received little serious attention. This paper aims to fill that gap. We argue that, except in certain special circumstances, algorithmic decision tools should not be used in high-stakes or safety-critical decisions unless the systems concerned are significantly "better than human" in the relevant domain or subdomain of decision-making. More concretely, we recommend three strategies to address the control problem, the most promising of which involves a complementary (and potentially dynamic) coupling between highly proficient algorithmic tools and human agents working alongside one another. We also identify six key principles that all such human–machine systems should reflect in their design. These can serve as a framework both for assessing the viability of any such human–machine system and for guiding the design and implementation of such systems generally.