About this Book

This book contains selected papers presented at the 13th IFIP WG 9.2, 9.6/11.7, 11.6/SIG 9.2.2 International Summer School on Privacy and Identity Management, held in Vienna, Austria, in August 2018.

The 10 full papers included in this volume were carefully reviewed and selected from 27 submissions. Also included are reviewed papers summarizing the results of workshops and tutorials that were held at the Summer School as well as papers contributed by several of the invited speakers. The papers combine interdisciplinary approaches to bring together a host of perspectives: technical, legal, regulatory, socio-economic, social, societal, political, ethical, anthropological, philosophical, historical, and psychological.



Keynotes and Invited Papers


A Causal Bayesian Networks Viewpoint on Fairness

We offer a graphical interpretation of unfairness in a dataset as the presence of an unfair causal effect of the sensitive attribute in the causal Bayesian network representing the data-generation mechanism. We use this viewpoint to revisit the recent debate surrounding the COMPAS pretrial risk assessment tool and, more generally, to point out that fairness evaluation on a model requires careful considerations on the patterns of unfairness underlying the training data. We show that causal Bayesian networks provide us with a powerful tool to measure unfairness in a dataset and to design fair models in complex unfairness scenarios.
Silvia Chiappa, William S. Isaac
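As a hedged illustration of the viewpoint above (not the authors' code), the unfair causal effect of a sensitive attribute in a causal Bayesian network can be estimated by intervention rather than conditioning. All names and coefficients below are hypothetical.

```python
import random

random.seed(0)

def sample_outcome(a):
    # Hypothetical data-generating mechanism: the sensitive attribute a
    # influences a mediator q and, unfairly, the outcome y directly.
    q = random.gauss(0.0, 1.0) + 0.5 * a      # qualification (mediator)
    y = q + 1.0 * a + random.gauss(0.0, 0.1)  # 1.0 * a is the direct, unfair path
    return y

def average_outcome(a, n=100_000):
    # Monte Carlo estimate of E[Y | do(A=a)].
    return sum(sample_outcome(a) for _ in range(n)) / n

# Total causal effect of the sensitive attribute, estimated by
# intervening on A (do(A=1) vs do(A=0)) rather than conditioning.
effect = average_outcome(1) - average_outcome(0)
print(round(effect, 2))  # close to 1.5 = 0.5 (via mediator) + 1.0 (direct)
```

In this toy graph the total effect decomposes into a mediated and a direct path; a fairness analysis along the lines sketched in the abstract would ask which of those paths is considered unfair.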

Sharing Is Caring, a Boundary Object Approach to Mapping and Discussing Personal Data Processing

This work answers the following question: how can personal data be gathered and acted upon in smart city projects using boundary objects? Smart city projects require new mapping methods so that stakeholders can share and discuss their work collectively. Collective work is necessary because the personal data processing of a smart city project is difficult to map in a single view: each stakeholder holds only part of the required information. Moreover, summarising data processing operations is most often taken for granted and under-defined in Data Protection Impact Assessment methods.
This paper is a plea for the use of boundary objects for GDPR compliance and research in smart cities. The article therefore compares the original context of boundary objects with the context of smart cities to illustrate the need for a similar approach. The main results point to a new approach enabling collaborative GDPR compliance, in which specialist knowledge trickles down to developers and other actors not trained to comply with GDPR requirements.
Rob Heyman

Workshop and Tutorial Papers


Who You Gonna Call When There’s Something Wrong in Your Processing? Risk Assessment and Data Breach Notifications in Practice

With the assessment of the risk to the rights and freedoms of natural persons, the GDPR introduces a novel concept. In a workshop, participants were introduced to this notion of risk on the basis of the framework of the German data protection authorities, with a focus on personal data breach notifications. Participants then used the risk framework to assess case studies on data breaches. Taking the perspective of either a controller or a data protection authority, participants discussed the risks, the information to be provided, and the steps the GDPR requires after a data breach.
Susan Gonscherowski, Felix Bieker

Design and Security Assessment of Usable Multi-factor Authentication and Single Sign-On Solutions for Mobile Applications

A Workshop Experience Report
In this interactive workshop we focused on multi-factor authentication and Single Sign-On solutions for mobile native applications. The main objective was to create awareness of the current limitations of these solutions in the mobile context. Thus, after an introductory part, the participants were invited to discuss usability and security issues of different mobile authentication scenarios. After this interactive part, we concluded the workshop by presenting our ongoing work on this topic: we briefly described our methodology for the design and security assessment of multi-factor authentication and Single Sign-On solutions for mobile native applications, and presented a plugin that helps developers make their mobile native applications secure.
Roberto Carbone, Silvio Ranise, Giada Sciarretta

Towards Empowering the Human for Privacy Online

While it is often claimed that users are more and more empowered via online technologies [4, 16, 17, 31], the counterpart of privacy dis-empowerment is more than a suspicion [27]. From a human-computer interaction perspective, the following have previously been observed: (1) users still fail to use privacy technologies on a large scale; (2) a number of human-computer interaction mismatches exist that impact the use of privacy technologies [1, 5, 6, 15, 22, 32]; and (3) the user-affect dimension of privacy is focused on fear [14].
This paper reports on experts' perspectives on empowering users towards privacy. We facilitated a workshop with \(N=12\) interdisciplinary privacy experts to gather opinions and discuss empowering case studies. We also reviewed literature focusing on the empowering versus dis-empowering impact of online technologies, and looked into psychological empowerment and usable privacy research.
The workshop participants pointed to a state of privacy dis-empowerment online, with human-computer interaction and business models as major themes. While there is no clear-cut solution, supporting clearer communication channels was key to the experts' mental models of empowered privacy online. They recommended enabling user understanding, not only of privacy threats but also of how to use privacy technologies, and building user skills. To facilitate user interaction with complex secure communication tools, they suggested a transparency-enhancing tool as a bridge between the user and encryption technology.
The outcome of the workshop and our review support the need for an approach that enables the human user, as well as their interactions and their active participation. For that we postulate the application of psychological empowerment [33]. To our knowledge, this paper provides the first known discussion among inter-disciplinary privacy experts on the topic of privacy dis-empowerment online, as well as the first categorisation of HCI mismatches impacting the use of privacy technologies.
Kovila P. L. Coopamootoo

Trust and Distrust: On Sense and Nonsense in Big Data

Big data is an appealing source and is often perceived to bear all sorts of hidden information. Filtering out the gemstones of information from the rubbish that is equally easy to “deduce” is, however, a nontrivial issue. This position paper opens with the motivating problem of risk estimation for an enterprise using big data. Our illustrative context is the synERGY project (“security for cyber-physical value networks Exploiting smaRt Grid sYstems”), which serves as a case study to show the (unexplored) potential, application and difficulties of using big data in practice. The paper first presents a few general do’s and don’ts of data analytics, and then digs deeper into (semi-)automated risk evaluation via a statistical trust model. Ideally, the trust and hence risk assessment should be interpretable, justified, up-to-date and comprehensible in order to provide a maximum level of information with minimal additional manual effort. The ultimate goal of projects like synERGY is to establish trust in a system based on its observed behavior and its resilience to anomalies. This calls for distinguishing “normal” behavior (in the sense of behavior under expected working conditions) from “abnormal” behavior; trust can then intuitively be understood as the (statistical) expectation of “normal” behavior.
Stefan Rass, Andreas Schorn, Florian Skopik
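The closing idea above, trust as the statistical expectation of "normal" behavior, can be sketched in a few lines: learn a baseline from observations under expected working conditions, then treat large deviations as anomalies. This is a hedged toy illustration, not the synERGY trust model; the data and threshold are invented.

```python
import statistics

# Hypothetical baseline observations under expected working
# conditions (e.g. requests per second of some component).
baseline = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7]
mu = statistics.mean(baseline)
sigma = statistics.stdev(baseline)

def is_normal(observation, k=3.0):
    """Flag behavior more than k standard deviations from the baseline
    as "abnormal"; anything inside the band sustains trust."""
    return abs(observation - mu) <= k * sigma

print(is_normal(10.0))  # within the learned band
print(is_normal(25.0))  # a clear anomaly
```

A real deployment would of course use richer behavioral features and a model that is interpretable and kept up to date, as the abstract demands, but the feedback structure is the same.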

GDPR Transparency Requirements and Data Privacy Vocabularies

This tutorial introduced participants to the transparency requirements of the General Data Protection Regulation (GDPR) [35]. Together with the attendees, it explored whether technical specifications can be valuable in supporting transparency for a data subject whose personal information is being processed. In the context of the discussions, past and present international efforts were examined that focus on data privacy vocabularies and taxonomies as a basis for effective enforcement of data handling policies. One example of a current undertaking in this area is the W3C Data Privacy Vocabularies and Controls Community Group (DPVCG), which aims at developing a taxonomy of privacy terms aligned to the GDPR, encompassing personal data categories, processing purposes, events of disclosures, consent, and processing operations. During the tutorial session, the potential of such efforts was discussed among the participants, allowing for conclusions about the need to re-align and update past research in this area to the General Data Protection Regulation.
Eva Schlehahn, Rigo Wenning

Wider Research Applications of Dynamic Consent

As research processes change due to technological developments in how data is collected, stored and used, so must consent methods. Dynamic consent is an online mechanism that allows research participants to revisit consent decisions they have made about how their data is used. Emerging from bio-banking, where research data is derived from biological samples, dynamic consent has been designed to address problems with participant engagement and oversight. Through discussion that emerged during a workshop run at the IFIP 2018 Summer School, this paper explores which wider research problems could be addressed by dynamic consent. The emergent themes of research design, expectation management and trust suggested overarching research problems that could be addressed by taking a longer-term view of how research data is used, even if that use is unknown at the point of collection. We posit that the existing model of dynamic consent offers a practical research approach outside of bio-banking.
Arianna Schuler Scott, Michael Goldsmith, Harriet Teare

Selected Papers


Glycos: The Basis for a Peer-to-Peer, Private Online Social Network

Typical Web 2.0 applications are built on abstractions, allowing developers to rapidly and securely develop new features. For decentralised applications, these abstractions are often poor or non-existent.
By proposing a set of abstract but generic building blocks for the development of peer-to-peer (decentralised), private online social networks, we aim to ease the development of user-facing applications. Additionally, an abstract programming system decouples the application from the data model, allowing the front-end to be altered independently of the back-end.
The proposed proof-of-concept protocol is based on existing cryptographic building blocks, and its viability is assessed in terms of performance.
Ruben De Smet, Ann Dooms, An Braeken, Jo Pierson

GDPR and the Concept of Risk:

The Role of Risk, the Scope of Risk and the Technology Involved
The prominent position of risk in the GDPR has raised questions as to the meaning this concept should be given in the field of data protection. This article acknowledges the value of extracting information from the GDPR and using it as a means of interpreting risk. Both the ‘role’ that risk holds in the GDPR and the ‘scope’ given to the concept are examined, providing the reader with valuable insight into the legislature’s intentions with regard to risk. The article also underlines the importance of taking into account new technologies used in personal data processing operations. Technologies such as the IoT, AI and algorithms present characteristics (e.g. complexity, autonomous behavior, and the processing and generation of vast amounts of personal data) that influence our understanding of risk in data protection in various ways.
Katerina Demetzou

Privacy Patterns for Pseudonymity

Pseudonymisation is an important measure named in the European General Data Protection Regulation for implementing the principle of Privacy by Design. Pseudonymous data is widely used in medical applications and is investigated, e.g., for vehicular ad-hoc networks and the Smart Grid. The concepts used there address a broad range of important aspects and are therefore often specific and complex. Some existing privacy patterns already address pseudonymity, but they are either abstract or very specific. This paper proposes privacy patterns for the development of pseudonymity concepts, based on the analysis of pseudonymity solutions in use cases.
Alexander Gabel, Ina Schiering

Implementing GDPR in the Charity Sector: A Case Study

Due to their organisational characteristics, many charities are poorly prepared for the General Data Protection Regulation (GDPR). We present an exemplar process for implementing GDPR, together with the DPIA Data Wheel, a DPIA framework devised as part of the case study, both of which account for these characteristics. We validate this process and framework by conducting a GDPR implementation with a charity that works with vulnerable adults and processes both special category (sensitive) and personally identifiable data. The implementation was devised for the charity sector, but can equally be applied in any organisation that needs to implement GDPR or conduct DPIAs.
Jane Henriksen-Bulmer, Shamal Faily, Sheridan Jeary

Me and My Robot - Sharing Information with a New Friend

This paper investigates user perception regarding social robots and the disclosure of personal information. In a study, two participant groups stated their attitudes towards functionality, the sharing of personal information, and their interest in transparency and intervenability. The impact of technical background knowledge on users’ attitudes and perceptions was examined. Participants who work with robots have a more open-minded attitude towards sharing personal information in exchange for a wider range of functionality. Both groups care about the transparency of collected data and the possibility of intervenability.
Tanja Heuer, Ina Schiering, Reinhard Gerndt

chownIoT: Enhancing IoT Privacy by Automated Handling of Ownership Change

Considering the increasing deployment of smart home IoT devices, their ownership is likely to change during their life-cycle. IoT devices, especially those used in smart home environments, contain privacy-sensitive user data, and any ownership change of such devices can result in privacy leaks. The problem arises when users are either not aware of the need to reset/reformat the device to remove any personal data, or not trained in doing it correctly, as it can be unclear what data is kept where. In addition, if the ownership change is due to theft or loss, then there is no opportunity to reset. Although there has been a lot of research on security and privacy of IoT and smart home devices, to the best of our knowledge, there is no prior work specifically on automatically securing ownership changes. We present a system called chownIoT for securely handling ownership change of IoT devices. chownIoT combines authentication (of both users and their smartphones), profile management, data protection by encryption, and automatic inference of ownership change. For the latter, we use a simple technique that leverages the context of a device. Finally, as a proof of concept, we develop a prototype that implements inferring ownership change from changes in the WiFi SSID. The performance evaluation of the prototype shows that chownIoT has minimal overhead and is compatible with the dominant IoT boards on the market.
Md Sakib Nizam Khan, Samuel Marchal, Sonja Buchegger, N. Asokan
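The context-based inference described above can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation: the class, credential check, and SSID names are all hypothetical.

```python
# Sketch of inferring a possible ownership change from a change in the
# device's context (here: the WiFi SSID). On a suspected change, the
# device locks the owner profile and requires re-authentication before
# the (encrypted) profile data becomes usable again.
class Device:
    def __init__(self, known_ssid):
        self.known_ssid = known_ssid
        self.profile_locked = False

    def on_network_change(self, current_ssid):
        if current_ssid != self.known_ssid:
            # Context changed: assume a possible ownership change
            # and protect the owner profile.
            self.profile_locked = True
        return self.profile_locked

    def authenticate_owner(self, credential, expected="owner-secret"):
        # Hypothetical credential check; unlocks the profile on success.
        if credential == expected:
            self.profile_locked = False
        return not self.profile_locked

d = Device(known_ssid="home-wifi")
print(d.on_network_change("home-wifi"))      # False: same context
print(d.on_network_change("other-wifi"))     # True: profile locked
print(d.authenticate_owner("owner-secret"))  # True: unlocked again
```

A real system would tie the unlock to encryption keys rather than a boolean flag, so that locked profile data is actually unreadable.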

Is Privacy Controllable?

One of the major views of privacy associates privacy with control over information. This gives rise to the question of how controllable privacy actually is. In this paper, we adapt certain formal methods of control theory and investigate the implications of a control-theoretic analysis of privacy. We look at how control and feedback mechanisms have been studied in the privacy literature. Relying on the control-theoretic framework, we develop a simplistic conceptual control model of privacy, formulate privacy controllability issues, and suggest directions for possible research.
Yefim Shulman, Joachim Meyer
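The control-theoretic framing above can be illustrated with the simplest feedback structure: a proportional controller steering an actual disclosure level toward a desired setpoint. This is a hedged toy sketch, not the paper's formal model; all quantities and the gain are invented.

```python
# Toy proportional feedback loop: the "controller" repeatedly observes
# the gap between desired and actual disclosure and applies a
# corrective action scaled by a gain.
def control_disclosure(desired, actual, gain=0.5, steps=20):
    history = [actual]
    for _ in range(steps):
        error = desired - actual   # feedback: perceived gap
        actual += gain * error     # corrective action
        history.append(actual)
    return history

trajectory = control_disclosure(desired=0.2, actual=0.9)
# The disclosure level converges toward the desired setpoint 0.2.
print(round(trajectory[-1], 3))
```

Controllability questions of the kind the paper formulates then become questions about whether such a loop can reach the setpoint at all, e.g. when feedback is delayed, noisy, or unavailable.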

Assessing Theories for Research on Personal Data Transparency

A growing number of business models are based on the collection, processing and dissemination of personal data. For a free decision about the disclosure of personal data, the individual concerned needs transparency, i.e. insight into which personal data is collected, processed and passed on to third parties, for what purposes and for how long (Personal Data Transparency, or PDT for short). The intention of this paper is to assess theories for research on PDT. We performed a literature review, explored theories used in research on PDT, and assessed which of the selected theories may be appropriate for exploring PDT. Such research may build on several theories that open up different perspectives and enable various fields of study.
Anette Siebenkäs, Dirk Stelzer

Data Protection by Design for Cross-Border Electronic Identification: Does the eIDAS Interoperability Framework Need to Be Modernised?

This paper contributes to the discussion on privacy preservation methods in the context of electronic identification (eID) across borders through interdisciplinary research. In particular, we evaluate how the GDPR principle of ‘Data Protection by Design’ applies to the processing of personal data undertaken for identification and authentication purposes, suggesting that, in some cases, unlinkable eIDs should be a key requirement in order to facilitate data minimisation and purpose limitation. We argue that in an attempt to welcome diverse types of architectures, the Interoperability Framework could have the effect of reducing the data protection level reached by some national eID schemes, when transacting with services that do not require unique identification. We consequently propose that data minimisation and purpose limitation principles should be facilitated through the implementation of two methods, pseudonymisation and selective disclosure, through an addition to eIDAS’ technical specifications.
Niko Tsakalakis, Sophie Stalla-Bourdillon, Kieron O’Hara
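One common way to realise the unlinkable eIDs discussed above is a sector-specific pseudonym: the identity provider derives a different, stable identifier per relying party, so two services cannot link the same user. The sketch below illustrates this with an HMAC; the key, identifiers and service names are illustrative and not part of the eIDAS technical specifications.

```python
import hashlib
import hmac

# Hypothetical identity-provider secret used to derive pseudonyms.
SECRET_KEY = b"identity-provider-secret"

def sector_pseudonym(user_id: str, relying_party: str) -> str:
    """Derive a stable, per-service pseudonym: the same user gets a
    different identifier at each relying party, preventing linkage."""
    msg = f"{user_id}|{relying_party}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

p1 = sector_pseudonym("alice", "tax-office.example")
p2 = sector_pseudonym("alice", "library.example")
print(p1 != p2)  # True: same user, unlinkable identifiers
print(sector_pseudonym("alice", "tax-office.example") == p1)  # True: stable per service
```

Selective disclosure, the paper's second proposed method, would additionally let the user reveal only the attributes a given service actually needs.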

Risk Profiling by Law Enforcement Agencies in the Big Data Era: Is There a Need for Transparency?

This paper looks at the use of risk profiles by law enforcement in the age of Big Data. First, the paper discusses different use-types of risk profiling. Subsequently, the paper deals with the following three categories of challenges of risk profiling: (a) false positives (and to some extent false negatives) as well as incorrect data and erroneous analysis, (b) discrimination and stigmatization, (c) and maintaining appropriate procedural safeguards. Based on the hypothesis of risk profiling creating challenges, this paper addresses the question whether we need transparency of risk profiling by law enforcement actors, from the perspective of protecting fundamental rights of those affected by the use of risk profiles. The paper explores tackling these challenges from the angle of transparency, introducing Heald’s varieties of transparency as a theoretical model.
Sascha van Schendel
