Published by De Gruyter, March 16, 2019

Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning

  • Mireille Hildebrandt

Abstract

This Article takes the perspective of law and philosophy, integrating insights from computer science. First, I will argue that in the era of big data analytics we need an understanding of privacy that is capable of protecting what is uncountable, incalculable or incomputable about individual persons. To instigate this new dimension of the right to privacy, I expand previous work on the relational nature of privacy, and the productive indeterminacy of human identity it implies, into an ecological understanding of privacy, taking into account the technological environment that mediates the constitution of human identity. Second, I will investigate how machine learning actually works, detecting a series of design choices that inform the accuracy of the outcome, each entailing trade-offs that determine the relevance, validity and reliability of the algorithm’s accuracy for real-life problems. I argue that incomputability does not call for a rejection of machine learning per se, but rather for a research design that enables those who will be affected by the algorithms to become involved and to learn how machines learn, resulting in a better understanding of their potential and limitations. A better understanding of the limitations inherent in machine learning will deflate some of the eschatological expectations and provide for better decision-making about whether, and if so how, to implement machine learning in specific domains or contexts. I will highlight how a reliable research design aligns with purpose limitation as core to its methodological integrity. This Article, then, advocates a practice of “agonistic machine learning” that will contribute to responsible decisions about the integration of data-driven applications into our environments while simultaneously bringing them under the Rule of Law. This should also provide the best means to achieve effective protection against overdetermination of individuals by machine inferences.


∗ Tenured Research Professor of “Interfacing Law and Technology,” appointed by the Research Council of the Vrije Universiteit Brussel (VUB) at the research group on Law, Science, Technology and Society studies (LSTS) at the Faculty of Law and Criminology at VUB. She is also a part-time Full Professor of “Smart Environments, Data Protection and the Rule of Law” at the Institute for Computing and Information Sciences (iCIS), Science Faculty of Radboud University, Nijmegen. This Article was presented at the International Conference “The Problem of Theorizing Privacy,” co-organized by Michael Birnhack, Julie Cohen and myself at Tel Aviv University on 8th January 2018, and at the Privacy Law Scholars Conference (PLSC) Europe in Brussels on 27th January 2018. I thank Eran Fisher, Bart van der Sloot, Ben Wagner and the editors of TIL for their in-depth comments, and many others for seriously engaging with the content during both conferences. Cite as: Mireille Hildebrandt, Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning, 20 THEORETICAL INQUIRIES L. 83 (2019).


Published Online: 2019-03-16

© 2019 by Theoretical Inquiries in Law
