
2017 | Book

Privacy Technologies and Policy

5th Annual Privacy Forum, APF 2017, Vienna, Austria, June 7-8, 2017, Revised Selected Papers

Edited by: Prof. Dr. Erich Schweighofer, Herbert Leitold, Andreas Mitrakas, Prof. Dr. Kai Rannenberg

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the thoroughly refereed post-conference proceedings of the 5th Annual Privacy Forum, APF 2017, held in Vienna, Austria, in June 2017. The 12 revised full papers were carefully selected from 41 submissions on the basis of significance, novelty, and scientific quality. These selected papers are organized in three different chapters corresponding to the conference sessions. The first chapter, “Data Protection Regulation”, discusses topics concerning big genetic data, a privacy-preserving European identity ecosystem, the right to be forgotten and the reuse of privacy risk analysis. The second chapter, “Neutralisation and Anonymization”, discusses neutralisation of threat actors, privacy by design data exchange between CSIRTs, differential privacy and database anonymization. Finally, the third chapter, “Privacy Policies in Practice”, discusses privacy by design, privacy scores, privacy data management in healthcare and trade-offs between privacy and utility.

Table of Contents

Frontmatter

Data Protection Regulation

Frontmatter
The GDPR and Big Data: Leading the Way for Big Genetic Data?
Abstract
Genetic data as a category of personal data creates a number of challenges to the traditional understanding of personal data and the rules regarding personal data processing. Although the peculiarities of and heightened risks regarding genetic data processing were recognized long before the data protection reform in the EU, the General Data Protection Regulation (GDPR) seems to pay no regard to this. Furthermore, the GDPR will create more legal grounds for (sensitive) personal data (incl. genetic data) processing whilst restricting data subjects’ means of control over their personal data. One of the reasons for this is that, amongst other aims, the personal data reform served to promote big data business in the EU. The substantive clauses of the GDPR concerning big data, however, do not differentiate between the types of personal data being processed. Hence, like all other categories of personal data, genetic data is subject to the big data clauses of the GDPR as well; thus leading to the question whether the GDPR is creating a pathway for ‘big genetic data’. This paper aims to analyse the implications that the role of the GDPR as a big data enabler bears on genetic data processing and the respective rights of the data subject.
Kärt Pormeister
Towards a Privacy-Preserving Reliable European Identity Ecosystem
Abstract
This paper introduces the ARIES identity ecosystem, aimed at setting up a reliable identity framework comprising new technologies, processes and security features that ensure the highest levels of quality in secure credentials for highly secure and privacy-respecting physical and digital identity management processes. The identity ecosystem is being devised in the scope of the ARIES European project and aspires to tangibly achieve a reduction in levels of identity fraud, theft, wrong identity and associated crimes and to create a decisive competitive advantage for Europe at a global level.
Jorge Bernal Bernabe, Antonio Skarmeta, Nicolás Notario, Julien Bringer, Martin David
Forget Me, Forget Me Not - Redefining the Boundaries of the Right to Be Forgotten to Address Current Problems and Areas of Criticism
Abstract
In the landmark decision Google Spain v AEPD and Mario Costeja González, the Court of Justice of the European Union has declared that individuals have a so-called ‘right to be forgotten’, that is, the right to demand search engines to erase search results obtained through searches for their names. The ruling has been praised by many and seen as a welcome relief for individuals who were gradually losing all control over the private information stored about them online. However, because the court has failed to provide proper guidance as to the application and scope of the new right, the ruling has opened risks to freedom of expression and the right to receive and impart information as well as introduced questions as to the legitimacy, fairness and international scope of the delisting process. Taking a closer look at the problems currently surrounding the right to be forgotten, this paper will attempt to narrow down and define the scope of the application of the new right. In order to do so, it will first argue that personal information should be predominantly protected by reliance on existing laws rather than through the creation of an ambiguous right to delist search results. It will then advocate for a rejection of the court’s broad formulation of the right to be forgotten and suggest that, in order to attain a fairer balance between the fundamental rights at stake, the right should be only permitted to apply in three, clearly defined and limited circumstances.
Beata Sobkow
A Refinement Approach for the Reuse of Privacy Risk Analysis Results
Abstract
The objective of this paper is to improve the cost effectiveness of privacy impact assessments through (1) a more systematic approach, (2) a better integration with privacy by design and (3) enhanced reusability. We present a three-tier process including a generic privacy risk analysis depending on the specifications of the system and two refinements based on the architecture and the deployment context respectively. We illustrate our approach with the design of a biometric access control system.
Sourya Joyee De, Daniel Le Métayer

Neutralisation and Anonymization

Frontmatter
A Gamified Approach to Explore Techniques of Neutralization of Threat Actors in Cybercrime
Abstract
In the serious game “Operation Digital Chameleon” red and blue teams develop attack and defense strategies as part of an IT security awareness training. This paper presents the game design and selected results from a structured evaluation of techniques of neutralization applied by cybercrime threat actors. Various motives and five neutralization techniques are identified in fifteen instances of “Operation Digital Chameleon”. We argue that “Operation Digital Chameleon” is not only an instrument to raise IT security awareness but also a sensible method to explore techniques of neutralization in cybercrime.
Andreas Rieb, Tamara Gurschler, Ulrike Lechner
Privacy by Design Data Exchange Between CSIRTs
Abstract
Computer Security Incident Response Teams (‘CSIRTs’) may exchange personal data about incidents. A privacy by design solution can ensure compliance with data protection law and the protection of trade secrets. An information platform for CSIRTs is proposed, on which incidents are reported in encoded form. Without knowledge of other personal data, only the quantity, region and industry of the attacks can be read out. Additional data, primarily from a team’s own security incidents, can be used to calculate the similarity to other incidents.
Erich Schweighofer, Vinzenz Heussler, Peter Kieseberg
Mr X vs. Mr Y: The Emergence of Externalities in Differential Privacy
Abstract
The application of differential privacy requires the addition of Laplace noise, whose scale must be calibrated to achieve the desired level of privacy. However, the protection of the data concerning a Mr. X, i.e., his privacy level, also depends on the other data contained in the database: a negative externality is recognized. In this paper we show that an attack on Mr. X can be conducted by an oracle that computes the likelihood ratio under two scenarios, where the database population is made of either independent or correlated entries. We show that the target Mr. X can be spotted, notwithstanding the addition of noise, when his position happens to be eccentric with respect to the bulk of the database population.
Maurizio Naldi, Giuseppe D’Acquisto
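The Laplace mechanism mentioned in the abstract above can be illustrated with a minimal sketch (this is not the authors’ attack code; the database, sensitivity and epsilon values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value plus Laplace noise with scale sensitivity/epsilon,
    the standard calibration for epsilon-differential privacy."""
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

# Example: a count query over a binary attribute of 100 people.
# Adding or removing one person changes the count by at most 1 (sensitivity = 1).
db = rng.integers(0, 2, size=100)
true_count = int(db.sum())
noisy_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5, rng=rng)
```

The paper’s point is that this per-query noise calibration does not account for correlations between entries, so an eccentric record can remain identifiable despite the noise.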
Diffix: High-Utility Database Anonymization
Abstract
In spite of the tremendous privacy and liability benefits of anonymization, most shared data today is only pseudonymized. The reason is simple: there haven’t been any anonymization technologies that are general purpose, easy to use, and preserve data quality. This paper presents the design of Diffix, a new approach to database anonymization that promises to break new ground in the utility/privacy trade-off. Diffix acts as an SQL proxy between the analyst and an unmodified live database. Diffix adds a minimal amount of noise to answers—Gaussian with a standard deviation of only two for counting queries—and places no limit on the number of queries an analyst may make. Diffix works with any type of data and configuration is simple and data-independent: the administrator does not need to consider the identifiability or sensitivity of the data itself. This paper presents a high-level but complete description of Diffix. It motivates the design through examples of attacks and defenses, and provides some evidence for how Diffix can provide strong anonymity with such low noise levels.
Paul Francis, Sebastian Probst Eide, Reinhard Munz
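The low-noise counting the Diffix abstract describes can be sketched as follows. This only shows the basic idea of answering a counting query with zero-mean Gaussian noise of standard deviation two; the table, predicate and seeding are hypothetical, and the actual Diffix design layers further mechanisms (e.g. per-condition noise) that are not reproduced here:

```python
import random

def noisy_count(rows, predicate, sd=2.0, seed=None):
    """Answer a counting query with zero-mean Gaussian noise
    (standard deviation sd, as in Diffix's counts), rounded to an integer."""
    rng = random.Random(seed)
    true_answer = sum(1 for row in rows if predicate(row))
    return round(true_answer + rng.gauss(0.0, sd))

# Hypothetical table of (user_id, city) rows; 30 of the 90 users are in Vienna.
rows = [(i, "Vienna" if i % 3 == 0 else "Graz") for i in range(90)]
answer = noisy_count(rows, lambda r: r[1] == "Vienna", seed=1)
```

With so little noise the answers stay close to the true counts, which is why the paper frames the contribution as evidence that strong anonymity is still achievable at these noise levels.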

Privacy Policies in Practice

Frontmatter
Towards a Principled Approach for Engineering Privacy by Design
Abstract
Privacy by Design has emerged as a proactive approach for embedding privacy into the early stages of the design of information and communication technologies, but it is no ‘silver bullet’. Challenges involved in engineering Privacy by Design include a lack of holistic and systematic methodologies that address the complexity and variability of privacy issues and support the translation of its principles into engineering activities. A consequence is that its principles are given at a high level of abstraction without accompanying tools and guidelines to address these challenges. We analyse three privacy requirements engineering methods from which we derive a set of criteria that aid in identifying data-processing activities that may lead to privacy violations and harms and also aid in specifying appropriate design decisions. We also present principles for engineering Privacy by Design that can be developed upon these criteria. Based on these, we outline some preliminary thoughts on the form of a principled framework that addresses the plurality and contextuality of privacy issues and supports the translation of the principles of Privacy by Design into engineering activities.
Majed Alshammari, Andrew Simpson
PrivacyScore: Improving Privacy and Security via Crowd-Sourced Benchmarks of Websites
Abstract
Website owners make conscious and unconscious decisions that affect their users, potentially exposing them to privacy and security risks in the process. In this paper we introduce PrivacyScore, an automated website scanning portal that allows anyone to benchmark security and privacy features of multiple websites. In contrast to existing projects, the checks implemented in PrivacyScore cover a wider range of potential privacy and security issues. Furthermore, users can control the ranking and analysis methodology. Therefore, PrivacyScore can also be used by data protection authorities to perform regularly scheduled compliance checks. In the long term we hope that the transparency resulting from the published assessments creates an incentive for website owners to improve their sites. The public availability of a first version of PrivacyScore was announced at the ENISA Annual Privacy Forum in June 2017.
Max Maass, Pascal Wichmann, Henning Pridöhl, Dominik Herrmann
Privacy Data Management and Awareness for Public Administrations: A Case Study from the Healthcare Domain
Abstract
Development of information systems that ensure privacy is a challenging task spanning various fields such as technology, law and policy. Reports of recent privacy infringements indicate that we are far not only from achieving privacy but also from applying Privacy by Design principles. This is due to a lack of holistic methods and tools that would make it possible to understand privacy issues, incorporate appropriate privacy controls at design time, and create and enforce a privacy policy at run time. To address these issues, we present the VisiOn Privacy Platform, which provides holistic privacy management throughout the whole information system lifecycle. It contains a privacy-aware process that is supported by a software platform and enables Data Controllers to ensure privacy and Data Subjects to gain control of their data by participating in the privacy policy formulation. A case study from the healthcare domain is used to demonstrate the platform’s benefits.
Vasiliki Diamantopoulou, Konstantinos Angelopoulos, Julian Flake, Andrea Praitano, José Francisco Ruiz, Jan Jürjens, Michalis Pavlidis, Dimitri Bonutto, Andrès Castillo Sanz, Haralambos Mouratidis, Javier García Robles, Alberto Eugenio Tozzi
Better Data Protection by Design Through Multicriteria Decision Making: On False Tradeoffs Between Privacy and Utility
Abstract
Data Protection by Design (DPbD, also known as Privacy by Design) has received much attention in recent years as a method for building data protection into IT systems from the start. In the EU, DPbD will become mandatory from 2018 onwards under the GDPR. In earlier work, we emphasized the multidisciplinary nature of DPbD. The present paper builds on this to argue that DPbD also needs a multicriteria approach that goes beyond the traditional focus on (data) privacy (even if understood in its multiple meanings).
The paper is based on the results of a survey (n = 101) among employees of a large institution concerning the introduction of technology that tracks some of their behaviour. Even though a substantial portion of respondents are security/privacy researchers, concerns revolved strongly around social consequences of the technology change, usability issues, and transparency. The results taken together indicate that the decrease in privacy through data collection was associated with (a) an increase in accountability, (b) the blocking of non-authorized uses of resources, (c) a decrease in usability, (d) an altered perception of a communal space, (e) altered actions in the communal space, and (f) an increased salience of how decisions are made and communicated. These results call into question the models from computer science/data mining that posit a privacy-utility tradeoff. Instead, this paper argues, multicriteria notions of utility are needed, and this leads to design spaces in which less privacy may be associated with less utility rather than be compensated for by more utility, as the standard tradeoff models suggest. The paper concludes with an outlook on activities aimed at raising awareness and bringing the wider notion of DPbD into decision processes.
Bettina Berendt
Backmatter
Metadata
Title
Privacy Technologies and Policy
Edited by
Prof. Dr. Erich Schweighofer
Herbert Leitold
Andreas Mitrakas
Prof. Dr. Kai Rannenberg
Copyright year
2017
Electronic ISBN
978-3-319-67280-9
Print ISBN
978-3-319-67279-3
DOI
https://doi.org/10.1007/978-3-319-67280-9