
2016 | Book | 1st edition

Data Protection on the Move

Current Developments in ICT and Privacy/Data Protection

Edited by: Serge Gutwirth, Ronald Leenes, Paul De Hert

Publisher: Springer Netherlands

Book series: Law, Governance and Technology Series


About this book

This volume brings together papers that offer methodologies and conceptual analyses, highlight issues, propose solutions, and discuss practices regarding privacy and data protection. It is one of the results of the eighth annual International Conference on Computers, Privacy, and Data Protection, CPDP 2015, held in Brussels in January 2015.

The book explores core concepts, rights and values in (upcoming) data protection regulation and their (in)adequacy in view of developments such as Big and Open Data, including the right to be forgotten, metadata, and anonymity. It discusses privacy-promoting methods and tools such as a formal systems modeling methodology, privacy by design in various forms (robotics, anonymous payment), the opportunities and burdens of privacy self-management, and the differentiating role privacy can play in innovation.
The book also discusses EU policies with respect to Big and Open Data and provides advice to policy makers regarding these topics.
Attention is also paid to regulation and its effects, for instance in the case of the so-called ‘EU cookie law’ and in groundbreaking cases such as Europe v. Facebook.

This interdisciplinary book was written during what may turn out to be the final stages of the process of the fundamental revision of the current EU data protection law by the Data Protection Package proposed by the European Commission. It discusses open issues and daring and prospective approaches. It will serve as an insightful resource for readers with an interest in privacy and data protection.

Table of contents

Frontmatter
Mind the Air Gap
Preventing Privacy Issues in Robotics
Abstract
The market for domestic and service robots, which can help consumers in their homes, is growing rapidly. Privacy scholars have warned that the deployment of robots in the home can lead to serious privacy risks, since these robots come equipped with sensors that register the environment in which they operate, including the human beings present therein. Moreover, many modern robots are, or will soon become, connected to the internet. This means that they can pass on any data they record, and that they can also be hacked by outsiders. The privacy and security risks are significant with this novel type of technology. In this chapter I argue that attaching robots to the internet does indeed lead to serious privacy and security risks. But I will also argue that there is a straightforward solution that can contribute to eliminating many of these risks, which is left unaddressed in the current debate. Borrowing a term from the field of cybersecurity, I will argue that consumer robots ought to be ‘air gapped’, that is, they ought not to be connected to the internet or the cloud—except in a few exceptional cases where network connections are a critical requirement for robots to be able to function properly or effectively. I will explain what air gaps are, how they are used, and which strengths and weaknesses they have. Next, I will critically assess their use as a strategy to prevent privacy and security issues in domestic and service robots. I will argue that it is important to have a debate on the networked character of robots today, since we can now still prevent privacy and security problems in this novel technology, rather than having to remedy them once they are mass-marketed. This is why I propose to have a debate on privacy before design.
Bibi van den Berg
Europe Versus Facebook: An Imbroglio of EU Data Protection Issues
Abstract
In this paper, the case Europe versus Facebook is presented as a microcosm of the modern data protection challenges that arise from globalization, technological progress and seamless cross-border flows of personal data. It aims to shed light on a number of sensitive issues closely related to the case, namely how to delimit the power of a European Data Protection Authority to prevent a specific data flow to the US from the authority of the European Commission to find the entire EU-US Safe Harbor Agreement invalid. This comment will also consider whether the entire matter might have been more clear-cut if Europe-versus-Facebook had asserted its claims against Facebook US directly pursuant to Article 4 of the EU Data Protection Directive, rather than through Facebook Ireland indirectly under the Safe Harbor Agreement.
Liane Colonna
The Context-Dependence of Citizens’ Attitudes and Preferences Regarding Privacy and Security
Abstract
This paper considers the relationship between privacy and security and, in particular, the traditional “trade-off” paradigm that argues that citizens might be willing to sacrifice some privacy for more security. Academics have long argued against the trade-off paradigm, but these arguments have often fallen on deaf ears. Based on data gathered in a pan-European survey we discuss which factors determine citizens’ perceptions of concrete security technologies and surveillance practices.
Michael Friedewald, Marc van Lieshout, Sven Rung, Merel Ooms
On Locational Privacy in the Absence of Anonymous Payments
Abstract
In this paper we deal with the situation that in certain contexts vendors have no incentive to implement anonymous payments or that existing regulation prevents complete customer anonymity. While the paper also discusses the problem in a general fashion, we use the recharging of electric vehicles using public charging infrastructure as a working example. Here, customers leave rather detailed movement trails, as they authenticate to charge and the whole process is post-paid, i.e., customers are billed after consumption. In an attempt to enforce transparency and give customers the information necessary to dispute a bill they deem inaccurate, Germany and other European countries require that the ID of the energy meter used in each charging process be retained. Similar information is also retained in other applications where Point of Sale terminals are used. While this happens in the customers’ best interest, this information is a location-bound token, which compromises customers’ locational privacy and thus allows for the creation of rather detailed movement profiles. We adapt a carefully chosen group signature scheme to match these legal requirements and show how modern cryptographic methods can reconcile the, in this case, conflicting requirements of transparency on the one hand and locational privacy on the other. In our solution, the user’s identity is explicitly known during a transaction, yet the user’s location is concealed, effectively hindering the creation of a movement profile based on financial transactions.
Tilman Frosch, Sven Schäge, Martin Goll, Thorsten Holz
Development Towards a Learning Health System—Experiences with the Privacy Protection Model of the TRANSFoRm Project
Abstract
The connection of clinical care with clinical research is the main purpose of the Learning Health System (LHS), which integrates scientific information, informatics, and patient care. The LHS generates new medical knowledge as a by-product of the care process. For this purpose, the aggregation of data from Electronic Health Records (EHR), Case Report Forms (CRF) and web questionnaires with other data sources, like primary care databases and genetic data repositories, is necessary for research purposes. This joining of healthcare and research processes results in challenges for the privacy protection framework of the LHS. Based on an exploration of EU legal requirements for data protection and privacy, different data access policies of data provider organizations, as well as existing privacy frameworks of research projects, basic privacy principles and privacy requirements were extracted. Based on these privacy principles and legal requirements, a graphical model to display privacy protection requirements was created. This graphical model is based on concepts of requirements engineering and can be used like a model kit to create new privacy frameworks and to ease knowledge exchange with stakeholders of the LHS. Our model is built upon the concept of three privacy zones (Care Zone, Non-Care Zone and Research Zone) representing areas where similar legal requirements and rules apply. These zones contain databases, data transformation operators such as data linkers and privacy filters, and graphs to indicate the data flow necessary for research processes. The aim of the model is to help arrange its components in a way that creates a risk gradient for the data flow from a zone of high risk for patient identification to a zone of low risk. The model is applied to the analysis of several general clinical research usage scenarios and two research use cases from the TRANSFoRm project (finding patients for clinical research and linkage of databases).
Both use cases represent different data collection aspects of the LHS. The model was used during discussions with data managers from the NIVEL Primary Care Database in the Netherlands and validated by representing an approved research case of using primary care data employing NIVEL services. Experiences with the graphic privacy model used to improve the privacy framework of TRANSFoRm and with the presentation of the model to LHS stakeholders and the research community are discussed.
Wolfgang Kuchinke, Christian Ohmann, Robert A. Verheij, Evert-Ben van Veen, Brendan C. Delaney
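The privacy filters and data linkers in the abstract above are not specified in detail there; purely as an illustration, one common building block for such linkage (a keyed-pseudonymization filter, sketched here under our own assumptions and not the TRANSFoRm implementation; all names and the demo key are hypothetical) might look like:

```python
import hmac
import hashlib

def pseudonymize(record_id: str, secret_key: bytes) -> str:
    # Keyed hashing (HMAC-SHA256): the same patient ID always maps to the
    # same pseudonym, enabling record linkage across databases, while the
    # mapping cannot be inverted without the secret key.
    return hmac.new(secret_key, record_id.encode("utf-8"), hashlib.sha256).hexdigest()

def link_records(ehr_rows, registry_rows, key):
    # Join two datasets on pseudonymized identifiers; the raw patient IDs
    # never leave the filter, only the linked payload fields do.
    ehr = {pseudonymize(r["patient_id"], key): r["diagnosis"] for r in ehr_rows}
    reg = {pseudonymize(r["patient_id"], key): r["genotype"] for r in registry_rows}
    return {p: (ehr[p], reg[p]) for p in ehr.keys() & reg.keys()}
```

Holding the key inside a trusted zone and releasing only pseudonymized, linked records to the research zone is one way to realize the risk gradient the model describes.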
Could the CE Marking Be Relevant to Enforce Privacy by Design in the Internet of Things?
Abstract
This paper aims at evaluating the relevance of using the CE marking process to enforce Data Protection by Design principles suggested by Article 23 of the proposed General Data Protection Regulation in connected devices involved in the Internet of Things. The CE marking is a conformity assessment process (A quick presentation of the basic principles of the CE marking is available on the website of the European Commission. Accessed June 14, 2015 http://europa.eu/legislation_summaries/other/l21013_en.htm. More information can be found in the European Commission's recently updated "Guide to the implementation of directives based on the New Approach and the Global Approach", 2014. Accessed May 21, 2015 http://ec.europa.eu/enterprise/newsroom/cf/itemdetail.cfm?item_id=7326.) designed by the European Commission during the 1980s to allow manufacturers to voluntarily demonstrate their compliance with mandatory regulations on safety, health and environment. This process offers some interesting features for the enforcement of data protection rules in products, especially in the context of the globalization of trade. It promoted a co-regulation process between public and private stakeholders and contributed to the spreading of European technical standards worldwide. However, it does not fully address data protection issues raised by the IoT and it has been criticized for its lack of reliability. Moreover, this process was never designed to include an unlimited list of requirements, and adding data protection requirements could undermine it. Another option might be to transform the CE marking into an overarching European mark housing different certification schemes dedicated to the compliance of products. This option might preserve the existing process and offer the opportunity to set up a scheme arranged according to a similar process but dedicated to the enforcement of Data Protection by Design principles.
Eric Lachaud
Visions of Technology
Big Data Lessons Understood by EU Policy Makers in Their Review of the Legal Frameworks on Intellectual Property Rights, Access to and Re-use of PSI and the Protection of Personal Data
Abstract
This article focuses on how the advent of big data technology and practices has been understood and addressed by policy makers in the EU. We start with a reflection on how big data affects business processes and how it contributes to the creation of a data economy. Then we look at EU policy making on big data and its understanding of the role and impact of ICT in the economy. We study three major legal frameworks affecting data flows and uses: intellectual property rights, access to and re-use of PSI, and the protection of personal data. We explore how these frameworks affect the use of big data and how this is perceived and dealt with in the policy documents. In order to widen our perspective, we also take a comparative look at similar legal frameworks and policies in the US.
Hans Lammerant, Paul De Hert
Privacy and Innovation: From Disruption to Opportunities
Abstract
In this chapter I present an approach of privacy from the perspective of innovation theory. I bring two conceptual approaches together. First, I disentangle privacy in three interconnected concepts: information security, data protection and the private sphere. Each of these concepts has its own dynamics and refers to a specific logic: technology in case of information security, regulation in case of data protection and society in case of the private sphere. By interconnecting them, a more nuanced perspective on the innovative incentives stemming from privacy considerations arises. Second, innovation is considered to be hampered by market and system imperfections. These imperfections reduce the efficiency of the innovation system. Analysing which imperfections exist helps in overcoming them by identifying adequate counter-strategies. I will use a policy study that has been performed for the Dutch Ministry of Economic Affairs to elaborate the relation between privacy and innovation in more detail. The resulting tone is optimistic: during the study several indications for a more privacy respecting approach by firms were found. Still, the challenges to be addressed are huge.
Marc van Lieshout
Behavioural Advertising and the New ‘EU Cookie Law’ as a Victim of Business Resistance and a Lack of Official Determination
Abstract
This paper looks into Article 5(3) of the ePrivacy Directive on cookies and, more specifically, into the practical effect of its 2009 amendment, which changed the legal approach towards the use of cookies to opt-in. The new rule has had little practical effect: although notice about cookie use has generally improved, behavioural advertising, the privacy-invasive commercial practice that the amendment mainly intended to tackle, is still conducted without real prior user consent. The paper inquires into the reasons behind the failure of the rule and finds them in the logical, yet unfounded, business resistance, the rule’s negative publicity, the (misguided) scepticism of EU officials, and a lack of enthusiasm or determination at national level. The latter translated into relaxed national implementations and an absence of official guidance on compliance until moments before the rule was to enter into force. The final blow to the rule was given by a change in the approach of the UK ICO, which essentially aligned the law to the practice of implied consent adopted by many online businesses, and by the reaction of the DPWP, which, far from strongly opposing this ‘back off’, confusingly moved closer to the updated stance of the UK ICO. The paper finally suggests that, if they really want to, data protection authorities can restore a strict opt-in approach towards behavioural advertising and insist on business compliance with it.
Christina Markou
Forget About Being Forgotten
From the Right to Oblivion to the Right of Reply
Abstract
User concerns about the dissemination and impact of their digital identity have led to the spread of privacy-enhancing and reputation-management technologies. The former allow data subjects to decide the personal information they want to disclose in the limited scope of a particular transaction. The latter help them adjust the visibility of the information disclosed and tune how other people perceive it. However, as the sources of personal information are growing without their control, and both original and deceptive data coexist, it is increasingly difficult for data subjects to govern the impact of their personal information and for the information consumers to grasp its trustworthiness so as to separate the wheat from the chaff. This paper analyses the aforementioned risks and presents a protocol, an architecture, and a business approach supporting both data subjects in replying to the information linked to them and consumers in gaining opportune access to these replies.
Yod-Samuel Martin, Jose M. del Alamo
Do-It-Yourself Data Protection—Empowerment or Burden?
Abstract
Data protection by individual citizens, here labeled do-it-yourself (DIY) data protection, is often considered an important part of comprehensive data protection. Particularly in the wake of diagnosing the so-called “privacy paradox”, fostering DIY privacy protection and providing the respective tools is seen both as an important policy aim and as a developing market. Individuals are meant to be empowered in a world where an increasing number of actors are interested in their data. We analyze the preconditions of this view empirically and normatively, asking: (1) can individuals protect data efficiently? and (2) should individuals be responsible for data protection? We argue that, for both pragmatic and normative reasons, a wider social perspective on data protection is required. The paper concludes with a short outlook on how these results could be taken up in data protection practices.
Tobias Matzner, Philipp K. Masur, Carsten Ochs, Thilo von Pape
Privacy Failures as Systems Failures: A Privacy-Specific Formal System Model
A Systemic and Multi-perspective Approach
Abstract
There have been numerous cases of adverse publicity concerning the negative effect of technology services—the combination of a technology platform and providing organisation—on people’s privacy. Privacy failures represent complex and cross-disciplinary failure situations, encompassing the design and development of technology services, and organisational privacy practice. Investigation of the root causes of privacy failures requires a systemic and multi-perspective approach which views privacy failures as systems failures. Systems thinking, tools and methods have been used for several decades to analyse and model failures, but have not been applied to privacy failures. This chapter introduces the use of a systemic method—the Systems Failures Approach—to study privacy failures. The Systems Failures Approach—founded on Soft Systems Methodology—compares a conceptual model of a failure situation with a Formal System Model (FSM)—a paradigm of a robust system capable of purposeful activity—to identify its causes, and recommend feasible and desirable changes. This chapter describes a Privacy-Specific Formal System Model (PSFSM), as part of the Systems Failures Approach, to identify the actual or potential causes of privacy failures in technology services, and concludes with a brief proof-of-concept application of the PSFSM to the launch of Google Buzz.
Anthony Morton
A Precautionary Approach to Big Data Privacy
Abstract
Once released to the public, data cannot be taken back. As time passes, data analytic techniques improve and additional datasets become public that can reveal information about the original data. It follows that released data will become increasingly vulnerable to re-identification—unless methods with provable privacy properties are used for the data release. We review and draw lessons from the history of re-identification demonstrations; explain why the privacy risk of data that is protected by ad hoc de-identification is not just unknown, but unknowable; and contrast this situation with provable privacy techniques like differential privacy. We then offer recommendations for practitioners and policymakers. Because ad hoc de-identification methods make the probability of a future privacy violation essentially unknowable, we argue for a weak version of the precautionary approach, in which the idea that the burden of proof falls on data releasers guides policies that incentivize them not to default to full, public releases of datasets using ad hoc de-identification methods. We discuss the levers that policymakers can use to influence data access and the options for narrower releases of data. Finally, we present advice for six of the most common use cases for sharing data. Our thesis is that the problem of “what to do about re-identification” unravels once we stop looking for a one-size-fits-all solution, and that each of the six cases we consider admits a solution that is tailored, yet principled.
Arvind Narayanan, Joanna Huey, Edward W. Felten
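Differential privacy, named in the abstract above as a provable alternative to ad hoc de-identification, can be illustrated with the Laplace mechanism applied to a counting query. This is a minimal sketch of the standard technique, not code from the chapter; the function names are our own:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Draw from a zero-centred Laplace distribution via inverse-CDF sampling.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing one record
    # changes the true count by at most 1, so Laplace noise with scale
    # 1/epsilon yields epsilon-differential privacy for the released count.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

The privacy guarantee holds regardless of what auxiliary datasets become public later, which is exactly the property ad hoc de-identification cannot offer; smaller epsilon means more noise and stronger privacy.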
The Impact of Domestic Robots on Privacy and Data Protection, and the Troubles with Legal Regulation by Design
Abstract
The paper examines a particular class of robotic applications, i.e. “domestic robots,” in order to stress that such robots will likely affect current legal frameworks of privacy and data protection. Since most of these machines act, new responsibilities of humans for the behaviour of others should be expected in the legal field. More particularly, the focus is on the protection of people’s “opaqueness” and the transparency with which domestic robots should collect, process, and make use of personal data. Whilst the aim of the law to govern the process of technological innovation concerns here the regulation of producers and designers of robots through specific sets of norms, or the regulation of users’ behaviour through the design of their robots, three issues are fated to remain open. They concern: (i) a new expectation of privacy; (ii) the realignment of the traditional distinction between data processors and data controllers; and, (iii) a novel set of challenges to the principle of privacy by design. Although the claim and goal of lawmakers will probably revolve around the protection of individuals against every harm, e.g. psychological problems related to the interaction with domestic robots and the processing of third parties’ information, the intent to embed normative constraints into the internal control architecture of such artificial agents entails a major risk. If there is no need to humanize our robotic applications, we should not robotize human life either.
Ugo Pagallo
Is the Human Rights Framework Still Fit for the Big Data Era? A Discussion of the ECtHR’s Case Law on Privacy Violations Arising from Surveillance Activities
Abstract
Human rights protect humans. This seemingly uncontroversial axiom might become quintessential over time, especially with regard to the right to privacy. Article 8 of the European Convention on Human Rights grants natural persons a right to complain, in order to protect their individual interests, such as those related to personal freedom, human dignity and individual autonomy. With Big Data processes, however, individuals are mostly unaware that their personal data are gathered and processed and even if they are, they are often unable to substantiate their specific individual interest in these large data gathering systems. When the European Court of Human Rights assesses these types of cases, mostly revolving around (mass) surveillance activities, it finds itself stuck between the human rights framework on the one hand and the desire to evaluate surveillance practices by states on the other. Interestingly, the Court chooses to deal with these cases under Article 8 ECHR, but in order to do so, it is forced to go beyond the fundamental pillars of the human rights framework.
Bart van der Sloot
Metadata, Traffic Data, Communications Data, Service Use Information… What Is the Difference? Does the Difference Matter? An Interdisciplinary View from the UK
Abstract
In the wake of the Snowden revelations, it has become standard practice to rely upon the dichotomies metadata/data or metadata/content of communications to delineate the remit of the surveillance and investigation power of law enforcement agencies as well as the range of data retention obligations imposed upon telecommunications operators and in particular Internet service providers (ISPs). There is however no consensual definition of what metadata is, and different routes can be taken to describe what metadata really covers. The key question is whether or to what extent metadata should be treated akin to content data for the purposes of identifying the categories of data which shall actually be retained by telecommunications operators and to which law enforcement agencies can have access. In an attempt to answer the question, this paper provides an understanding of what metadata are and of their diversity by following two steps. First, adopting an interdisciplinary approach, we argue that three types of metadata should be distinguished in relation to the nature of the activity of the service provider processing them and their level in network communications—network-level metadata, application-level metadata, and service-use metadata—and we identify three types of criteria to classify these metadata and determine whether they should be deemed akin to content data. Second, we compare these categories with legal concepts, and in particular UK legal concepts, to assess to what extent law-makers have managed to treat content data and metadata differently.
Sophie Stalla-Bourdillon, Evangelia Papadaki, Tim Chown
Global Views on Internet Jurisdiction and Trans-border Access
Abstract
This paper offers insights and perspectives on the jurisdiction of law enforcement authorities (LEAs) under international law and reviews current approaches to the territoriality principle and trans-border access to data for LEAs to conduct criminal investigations; controversial topics that are currently at the center of discussions, both at the international and national level. The views and perspectives offered in this paper seek to contribute to the international debate on cross-border access to data by LEAs and how the principles on internet jurisdiction should evolve in order to make the administration of the criminal justice system more efficient, dynamic and compliant with the need to obtain and secure evidence while respecting data protection safeguards.
Cristos Velasco, Julia Hörnle, Anna-Maria Osula
Metadata
Title
Data Protection on the Move
Edited by
Serge Gutwirth
Ronald Leenes
Paul De Hert
Copyright year
2016
Publisher
Springer Netherlands
Electronic ISBN
978-94-017-7376-8
Print ISBN
978-94-017-7375-1
DOI
https://doi.org/10.1007/978-94-017-7376-8