
2021 | Book

Privacy Technologies and Policy

9th Annual Privacy Forum, APF 2021, Oslo, Norway, June 17–18, 2021, Proceedings

Edited by: Nils Gruschka, Dr. Luís Filipe Coelho Antunes, Prof. Dr. Kai Rannenberg, Prof. Prokopios Drogkaris

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed conference proceedings of the 9th Annual Privacy Forum, APF 2021. Due to the COVID-19 pandemic, the conference was held virtually.

The 9 revised full papers were carefully reviewed and selected from 43 submissions. The papers are organized in topical sections on Implementing Personal Data Processing Principles, Privacy Enhancing Technologies, and Promoting Compliance with the GDPR.

Table of Contents

Frontmatter

Implementing Personal Data Processing Principles

Frontmatter
The Right to Customization: Conceptualizing the Right to Repair for Informational Privacy
Abstract
Terms of use of a digital service are often framed in a binary way: Either one agrees to the service provider's data processing practices, and is granted access to the service, or one does not, and is denied the service. Many scholars have lamented these ‘take-it-or-leave-it’ situations, as they go against the ideals of data protection law. To address this inadequacy, computer scientists and legal scholars have tried to come up with approaches to enable more privacy-friendly products and services. In this article, we call for a right to customize the processing of user data. Our arguments build upon technology-driven approaches as well as on the ideals of privacy by design and the now codified data protection by design and by default norm within the General Data Protection Regulation. In addition, we draw upon the right to repair that has been advocated to empower consumers and enable a more circular economy. We propose two technologically oriented approaches, termed ‘variants’ and ‘alternatives’, that could enable the technical implementation of a right to customization. We posit that these approaches cannot be demanded without limitation, and that restrictions will depend on how reasonable a customization demand is.
Aurelia Tamò-Larrieux, Zaira Zihlmann, Kimberly Garcia, Simon Mayer
A Case Study on the Implementation of the Right of Access in Privacy Dashboards
Abstract
The right of access under Art. 15 of the General Data Protection Regulation (GDPR) grants data subjects the right to obtain comprehensive information about the processing of personal data from a controller, including a copy of the data. Privacy dashboards have been discussed as possible tools for implementing this right, and are increasingly found in practice. However, investigations of real-world implementations are sparse. We therefore qualitatively examined the extent to which the privacy dashboards of ten online services complied with the essential requirements of Art. 15 GDPR. For this, we compared the information provided in dashboards with the information provided in privacy statements and data exports. We found that most privacy dashboards provided a decent initial overview, but lacked important information about purposes, recipients, sources, and categories of data that online users consider to be sensitive. In addition, both the privacy dashboards and the data exports lacked copies of personal data that were processed according to the online services’ own privacy statements. We discuss the strengths and weaknesses of current implementations in terms of their ability to fulfill the objective of Art. 15 GDPR, namely to create awareness about data processing. We conclude by providing an outlook on what steps would be necessary for privacy dashboards to facilitate the exercise of the right of access and to provide real added value for online users.
Jan Tolsdorf, Michael Fischer, Luigi Lo Iacono
Consent Management Platforms Under the GDPR: Processors and/or Controllers?
Abstract
Consent Management Providers (CMPs) provide consent pop-ups that are embedded in ever more websites over time to enable streamlined compliance with the legal requirements for consent mandated by the ePrivacy Directive and the General Data Protection Regulation (GDPR). They implement the standard for consent collection defined by the Transparency and Consent Framework (TCF) (current version v2.0) proposed by the European branch of the Interactive Advertising Bureau (IAB Europe). Although the IAB’s TCF specifications characterize CMPs as data processors, CMPs’ factual activities often qualify them as data controllers instead. Discerning their actual role is crucial, since compliance obligations and CMPs’ liability depend on their accurate characterization. We performed empirical experiments with two major CMP providers in the EU, Quantcast and OneTrust, paired with a legal analysis. We conclude that CMPs process personal data, and we identify multiple scenarios wherein CMPs are controllers (a sketch decoding such consent records follows this entry).
Cristiana Santos, Midas Nouwens, Michael Toth, Nataliia Bielova, Vincent Roca
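
For readers unfamiliar with the TCF, the following Python sketch decodes a few header fields of a TCF v2.0 "TC string", the consent record a CMP stores on the user's device. The bit offsets follow our reading of the public IAB specification and should be verified against it; the example string is a placeholder, and none of this code comes from the paper itself.

```python
import base64

def bits(raw: bytes) -> str:
    """Render a byte string as a string of '0'/'1' characters."""
    return "".join(f"{b:08b}" for b in raw)

def decode_tc_core(tc_string: str) -> dict:
    # The first dot-separated segment of a TC string is the
    # base64url-encoded (unpadded) core segment.
    core = tc_string.split(".")[0]
    raw = base64.urlsafe_b64decode(core + "=" * (-len(core) % 4))
    b = bits(raw)
    read = lambda start, length: int(b[start:start + length], 2)
    return {
        # Bit offsets per our reading of the IAB TCF v2.0 spec -- verify!
        "version": read(0, 6),              # 2 for TCF v2.0 strings
        "cmp_id": read(78, 12),             # IAB-registered CMP identifier
        "vendor_list_version": read(120, 12),
        # Purposes 1-24 consent bits (bit i-1 set => consent for purpose i).
        "purposes_consent": [i + 1 for i in range(24) if b[152 + i] == "1"],
    }

# Usage with a consent string captured from a CMP pop-up:
# print(decode_tc_core("CPc8...IgAA.YAAA"))  # placeholder, not a real string
```
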
Improving the Transparency of Privacy Terms Updates
Opinion Paper
Abstract
Updates are an essential part of most information systems. However, they may also serve as a means to deploy undesired features or behaviours that potentially undermine users’ privacy. In this opinion paper, we propose a way to increase update transparency, empowering users to easily answer the question “what has changed with regard to my privacy?” when faced with an update prompt. This is done by leveraging a formal notation of privacy terms and a set of rules that dictate when privacy-related prompts can be omitted, to reduce fatigue (a minimal sketch follows this entry). A design that concisely visualizes changes between the data handling practices of different software versions or configurations is also presented. We argue that it is an efficient way to display information of this nature and provide the method and calculations to support our assertion.
Alexandr Railean, Delphine Reinhardt
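
The paper's formal notation is not reproduced in the abstract, so the sketch below substitutes a deliberately simple model of our own: each version's data handling is a set of (data, purpose, recipient) tuples, and a prompt is required exactly when an update introduces new practices. This is an illustration of the idea, not the authors' notation or rule set.

```python
from typing import NamedTuple

class Practice(NamedTuple):
    """One data-handling practice: which data, for what purpose, shared with whom."""
    data: str
    purpose: str
    recipient: str

def privacy_diff(old: set, new: set):
    added = new - old      # practices the update introduces
    removed = old - new    # practices the update drops
    return added, removed

def prompt_required(old: set, new: set) -> bool:
    """Omit the prompt when an update only removes practices (privacy can
    only improve); require it whenever new data handling is introduced."""
    added, _ = privacy_diff(old, new)
    return bool(added)

v1 = {Practice("location", "navigation", "first party")}
v2 = v1 | {Practice("location", "advertising", "ad network")}
print(prompt_required(v1, v2))  # True: the update expands data use
```
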

Privacy Enhancing Technologies

Frontmatter
User-Generated Pseudonyms Through Merkle Trees
Abstract
A pseudonymisation technique based on Merkle trees is described in this paper. More precisely, by exploiting inherent properties of Merkle trees as cryptographic accumulators, we illustrate how user-generated pseudonyms can be constructed without the need for a third party. Each such pseudonym, which depends on several of the user’s identifiers, suffices to hide these original identifiers, whilst the unlinkability property between any two different pseudonyms for the same user is retained; at the same time, this pseudonymisation scheme allows the pseudonym owner to easily prove that she owns a pseudonym within a specific context, without revealing information on her original identifiers (see the sketch after this entry). Compared to other user-generated pseudonymisation techniques which utilize public key encryption algorithms, the new approach inherits the security properties of a Merkle tree, thus achieving post-quantum security.
Georgios Kermezis, Konstantinos Limniotis, Nicholas Kolokotronis
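
The general construction lends itself to a compact illustration. The following Python sketch (our own minimal reconstruction, not the authors' exact scheme; the leaf encoding, salting, and example identifiers are assumptions) derives a pseudonym as the root of a Merkle tree over salted identifier hashes, and proves ownership via an inclusion proof that reveals only one identifier. Fresh salts are what make two pseudonyms over the same identifiers unlinkable.

```python
import hashlib, os

H = lambda x: hashlib.sha256(x).digest()

def merkle_root(leaves):
    level = leaves
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def make_pseudonym(identifiers):
    """Pseudonym = root of a Merkle tree over salted identifier hashes."""
    salts = [os.urandom(16) for _ in identifiers]
    leaves = [H(s + i.encode()) for s, i in zip(salts, identifiers)]
    return merkle_root(leaves), salts, leaves

def inclusion_proof(leaves, index):
    """Sibling hashes needed to rebuild the root from leaves[index]."""
    proof, level, i = [], leaves, index
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        sib = i ^ 1                          # sibling index within the pair
        proof.append((level[sib], sib < i))  # (hash, sibling-is-on-the-left)
        level = [H(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return proof

def verify(leaf, proof, root):
    h = leaf
    for sib, is_left in proof:
        h = H(sib + h) if is_left else H(h + sib)
    return h == root

ids = ["alice@example.org", "+41 79 000 00 00", "id-card-0000"]  # example identifiers
pseudonym, salts, leaves = make_pseudonym(ids)
proof = inclusion_proof(leaves, 0)
# Prove ownership via identifier 0 (plus its salt) alone, without
# revealing the other identifiers behind the pseudonym:
print(verify(H(salts[0] + ids[0].encode()), proof, pseudonym))  # True
```
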
Towards Improving Privacy of Synthetic DataSets
Abstract
Recent growth in domain-specific applications of machine learning can be attributed to the availability of realistic public datasets. Real-world datasets may contain sensitive information about users, which makes them hard to share freely with other stakeholders and researchers due to regulatory and compliance requirements. Synthesising datasets from real data by leveraging generative techniques is gaining popularity. However, the privacy analysis of these datasets is still an open research question. In this work, we fill this gap by investigating the privacy issues of the generated datasets from the attacker’s and auditor’s points of view. We propose an instance-level Privacy Score (PS) for each synthetic sample by measuring the memorisation coefficient \(\alpha_{m}\) per sample. Leveraging PS, we empirically show that the accuracy of membership inference attacks on synthetic data drops significantly. PS is a model-agnostic, post-training measure that not only gives the data sharer guidance about the privacy properties of a given sample, but also helps third-party data auditors run privacy checks without sharing model internals. We tested our method on two real-world datasets and show that attack accuracy is reduced by PS-based filtering (an illustrative sketch follows this entry).
Aditya Kuppa, Lamine Aouad, Nhien-An Le-Khac
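
The abstract leaves the definition of \(\alpha_{m}\) to the paper body, so the Python sketch below substitutes a simple distance-based memorisation proxy of our own, purely to illustrate the shape of PS-style filtering: a synthetic sample that sits unusually close to some real training record is treated as memorised and dropped before sharing. This is not the authors' definition or method.

```python
import numpy as np

def memorisation_proxy(synthetic: np.ndarray, real: np.ndarray) -> np.ndarray:
    """Illustrative stand-in for the per-sample memorisation coefficient
    (NOT the paper's alpha_m): nearness to the closest real record,
    normalised by the typical nearest-real distance."""
    # Distance from each synthetic sample to every real sample.
    d = np.linalg.norm(synthetic[:, None, :] - real[None, :, :], axis=-1)
    nearest = d.min(axis=1)
    return 1.0 - nearest / (np.median(nearest) + 1e-12)  # high => suspicious

def filter_by_privacy_score(synthetic, real, threshold=0.5):
    """Keep only synthetic samples whose memorisation proxy is acceptable,
    reducing the surface for membership inference attacks."""
    alpha = memorisation_proxy(synthetic, real)
    return synthetic[alpha < threshold]

rng = np.random.default_rng(0)
real = rng.normal(size=(1000, 8))
# Simulate a leaky generator: 10 near-copies of real training records.
synthetic = np.vstack([rng.normal(size=(990, 8)), real[:10] + 1e-3])
print(len(filter_by_privacy_score(synthetic, real)))  # near-copies are dropped
```
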

Promoting Compliance with the GDPR

Frontmatter
Protection of Personal Data in High Performance Computing Platform for Scientific Research Purposes
Abstract
Open Science projects also aim to strongly encourage the use of Cloud technologies and High Performance Computing (HPC), for the benefit of European researchers and universities. The emerging paradigm of Open Science enables easier access to expert knowledge and material; however, it also raises challenges regarding the protection of personal data, considering that part of the research data are personal data and thus subject to the EU’s General Data Protection Regulation (GDPR). This paper investigates the concept of scientific research in the field of data protection, with regard to both the European (GDPR) and national (Luxembourg Data Protection Law) legal frameworks relevant to the compliance of HPC technology. It then focuses on a case study, the HPC platform of the University of Luxembourg (ULHPC), to pinpoint the major data protection issues arising from processing activities on HPC platforms, from the perspective of the platform operators. Our study illustrates where the most problematic aspects of compliance lie. Possible solutions are also suggested, which mainly revolve around (1) the standardisation of procedures; (2) cooperation at the institutional level; and (3) the identification of guidelines for common challenges. This research aims to support legal researchers in the field of data protection, helping to deepen the understanding of the challenges of HPC technology, as well as universities and research centres operating an HPC platform for research purposes, which have to address these same issues.
Ludovica Paseri, Sébastien Varrette, Pascal Bouvry
Representing Data Protection Aspects in Process Models by Coloring
Abstract
Business processes typically operate on personal data, e.g., customer data. This requires compliance with data protection regulations, e.g., the European Union General Data Protection Regulation (GDPR). The modeling of business processes is a widespread methodology to visualize and optimize business processes, and it should also cover data protection concerns. However, standard modeling languages like the Business Process Model and Notation (BPMN) do not offer an adequate notation for expressing data protection aspects.
In this paper, we propose a methodology for visualizing privacy concerns in BPMN models. We suggest using colors to mark critical activities and data in the process models (see the sketch after this entry). This provides easy documentation of data protection problems and supports the optimization of processes by eliminating privacy violation issues.
Melanie Windrich, Andreas Speck, Nils Gruschka
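
As one way such color-marking could be automated, the Python sketch below tints BPMN diagram shapes according to a hypothetical traffic-light classification. It relies on the bpmn.io color extension (the bioc namespace), since standard BPMN has no color attribute; the namespace URI, palette, and element ids are our assumptions, not the paper's methodology.

```python
import xml.etree.ElementTree as ET

# Traffic-light scheme (our assumption, not necessarily the paper's palette):
# critical = personal data with open compliance issues, personal = personal
# data processed with compliance ensured, uncritical = no personal data.
COLORS = {"critical": "#ff0000", "personal": "#ffcc00", "uncritical": "#00cc66"}

DI = "{http://www.omg.org/spec/BPMN/20100524/DI}"
BIOC = "http://bpmn.io/schema/bpmn/biocolor/1.0"  # bpmn.io color extension

def color_bpmn(path_in, path_out, classification):
    """Fill each BPMNShape whose element id appears in `classification`
    (element id -> 'critical' | 'personal' | 'uncritical')."""
    ET.register_namespace("bioc", BIOC)
    tree = ET.parse(path_in)
    for shape in tree.iter(f"{DI}BPMNShape"):
        level = classification.get(shape.get("bpmnElement"))
        if level:
            shape.set(f"{{{BIOC}}}fill", COLORS[level])
    tree.write(path_out, xml_declaration=True, encoding="UTF-8")

# Hypothetical usage on a model exported from a BPMN tool:
# color_bpmn("order_process.bpmn", "order_process_colored.bpmn",
#            {"Task_StoreCustomerData": "critical", "Task_PackParcel": "uncritical"})
```
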
Trackers in Your Inbox: Criticizing Current Email Tracking Practices
Abstract
Email is among the cornerstones of our online lives. It has evolved from carrying text-only messages to delivering well-designed HTML content. The uptake of web protocols into email, however, has facilitated the migration of web tracking techniques into the email ecosystem. While recent privacy regulations have affected web tracking technologies, they have not directly influenced email tracking techniques. In this short paper, we analyze a corpus of 5216 emails, give an overview of the identified tracking techniques, and argue that the existing email tracking methods do not comply with privacy regulations (a detection sketch follows this entry).
Shirin Kalantari, Andreas Put, Bart De Decker
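
To give a flavour of what such an analysis involves, the following Python sketch flags one common email-tracking vector, the remote 1x1 "tracking pixel". The heuristic is our own simplification for illustration, not the detection method used in the paper.

```python
from html.parser import HTMLParser

class PixelFinder(HTMLParser):
    """Flag <img> tags that look like tracking pixels: remote images that
    are 1x1 or hidden, a common (but not exhaustive) email-tracking pattern."""
    def __init__(self):
        super().__init__()
        self.suspects = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        src = a.get("src") or ""
        tiny = a.get("width") in ("0", "1") or a.get("height") in ("0", "1")
        hidden = "display:none" in (a.get("style") or "").replace(" ", "")
        if src.startswith("http") and (tiny or hidden):
            self.suspects.append(src)

finder = PixelFinder()
finder.feed('<p>Hi!</p><img src="https://tracker.example/open?id=u123" '
            'width="1" height="1">')
print(finder.suspects)  # ['https://tracker.example/open?id=u123']
```
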
Backmatter
Metadata
Title
Privacy Technologies and Policy
Edited by
Nils Gruschka
Dr. Luís Filipe Coelho Antunes
Prof. Dr. Kai Rannenberg
Prof. Prokopios Drogkaris
Copyright Year
2021
Electronic ISBN
978-3-030-76663-4
Print ISBN
978-3-030-76662-7
DOI
https://doi.org/10.1007/978-3-030-76663-4
