
2020 | Book

Privacy and Identity Management. Data for Better Living: AI and Privacy

14th IFIP WG 9.2, 9.6/11.7, 11.6/SIG 9.2.2 International Summer School, Windisch, Switzerland, August 19–23, 2019, Revised Selected Papers

Edited by: Dr. Michael Friedewald, Melek Önen, Prof. Dr. Eva Lievens, Stephan Krenn, Samuel Fricker

Publisher: Springer International Publishing

Book series: IFIP Advances in Information and Communication Technology


About this book

This book contains selected papers presented at the 14th IFIP WG 9.2, 9.6/11.7, 11.6/SIG 9.2.2 International Summer School on Privacy and Identity Management, held in Windisch, Switzerland, in August 2019.

The 22 full papers included in this volume were carefully reviewed and selected from 31 submissions. Also included are reviewed papers summarizing the results of workshops and tutorials that were held at the Summer School as well as papers contributed by several of the invited speakers. The papers combine interdisciplinary approaches to bring together a host of perspectives, which are reflected in the topical sections: language and privacy; law, ethics and AI; biometrics and privacy; tools supporting data protection compliance; privacy classification and security assessment; privacy enhancing technologies in specific contexts.

The chapters "What Does Your Gaze Reveal About You? On the Privacy Implications of Eye Tracking" and "Privacy Implications of Voice and Speech Analysis - Information Disclosure by Inference" are open access under a CC BY 4.0 license at link.springer.com.

Table of Contents

Frontmatter

Invited Papers

Frontmatter
Privacy as Enabler of Innovation
Abstract
Privacy has long been perceived as a hindrance to innovation. It has been considered to raise costs for data governance without providing real benefits. However, the attitude of various stakeholders towards the relationship between privacy and innovation has started to change. Privacy is increasingly embraced as an enabler of innovation, given that consumer trust is central to realising businesses built on data-driven products and services. In addition to building trust by demonstrating accountability in the processing of personal data, companies are increasingly using tools to protect privacy, for example in the context of data storage and archiving. More and more companies are realising that they can benefit from a proactive approach to data protection. A growing number of tools for privacy protection, and the emergence of products and services that are inherently privacy-friendly, indicate that the market is about to change. In this paper, we first outline what “privacy as enabler of innovation” means and then present evidence for this position. Key challenges that need to be overcome on the way towards successful privacy markets include the lack of profitability of privacy-friendly offerings, conflicts with new and existing business models, the low value individuals attach to privacy, latent cultural specificities, skill gaps and regulatory loopholes.
Daniel Bachlechner, Marc van Lieshout, Tjerk Timan
Fair Enough? On (Avoiding) Bias in Data, Algorithms and Decisions
Abstract
This contribution explores bias in automated decision systems from a conceptual, (socio-)technical and normative perspective. In particular, it discusses the role of computational methods and mathematical models when striving for “fairness” of decisions involving such systems.
Francien Dechesne
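A worked example of one of the simplest mathematical formalizations of “fairness” the chapter alludes to, demographic parity, on invented decision data (a generic illustration, not the chapter’s own model):

    # Demographic parity compares positive-decision rates across groups;
    # a large gap signals potential bias in an automated decision system.
    decisions = [  # (group, positive decision?)
        ("A", 1), ("A", 1), ("A", 0), ("A", 1),
        ("B", 1), ("B", 0), ("B", 0), ("B", 0),
    ]

    def positive_rate(group):
        outcomes = [d for g, d in decisions if g == group]
        return sum(outcomes) / len(outcomes)

    gap = abs(positive_rate("A") - positive_rate("B"))
    print(f"P(positive|A)={positive_rate('A'):.2f}, "
          f"P(positive|B)={positive_rate('B'):.2f}, parity gap={gap:.2f}")
    # 0.75 vs 0.25: a gap of 0.50 between the groups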

Workshop and Tutorial Papers

Frontmatter
Opportunities and Challenges of Dynamic Consent in Commercial Big Data Analytics
Abstract
In the context of big data analytics, the possibilities and demands of online data services may change rapidly, and with them the scenarios related to the processing of personal data. Such changes may pose challenges with respect to legal requirements such as transparency and consent, and therefore call for novel methods to address the legal and conceptual issues that arise in their course. We define the concept of ‘dynamic consent’ as a means to meet the challenge of acquiring consent in a commercial use case that faces change with respect to re-purposing the processing of personal data with the goal of implementing new data services. We present a prototypical implementation that facilitates incremental consent forms based on dynamic consent. We report the results of two focus groups that we used to evaluate our design, and derive from our findings implications for future directions.
Eva Schlehahn, Patrick Murmann, Farzaneh Karegar, Simone Fischer-Hübner
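A minimal sketch of the dynamic-consent idea described above, with hypothetical record and ledger names (not the authors’ prototype): consent is stored per processing purpose and versioned, so that re-purposing invalidates prior consent and triggers an incremental consent form.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ConsentRecord:
        purpose: str          # e.g. "usage-analytics"
        version: int          # increases whenever the purpose description changes
        granted: bool
        timestamp: datetime

    @dataclass
    class ConsentLedger:
        records: dict = field(default_factory=dict)  # purpose -> ConsentRecord

        def grant(self, purpose: str, version: int) -> None:
            self.records[purpose] = ConsentRecord(
                purpose, version, True, datetime.now(timezone.utc))

        def is_valid(self, purpose: str, current_version: int) -> bool:
            rec = self.records.get(purpose)
            # Consent is only valid for the purpose version it was given for;
            # a new version (re-purposing) requires fresh, incremental consent.
            return rec is not None and rec.granted and rec.version == current_version

    ledger = ConsentLedger()
    ledger.grant("usage-analytics", version=1)
    assert ledger.is_valid("usage-analytics", current_version=1)
    assert not ledger.is_valid("usage-analytics", current_version=2)  # re-consent needed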
Identity Management: State of the Art, Challenges and Perspectives
Abstract
Passwords are still the primary means of user authentication online. However, using a username-password combination at every service provider someone wants to connect to introduces several potential vulnerabilities. A combination of password reuse and a compromise of a single unreliable provider can quickly lead to financial and identity theft. Further, the username-password paradigm also makes it hard to distribute authorized and up-to-date attributes about users, such as residency or age. Being able to share such authorized information is becoming increasingly relevant as more real-world services become connected online. A number of alternative approaches exist, such as individual user certificates, Single Sign-On (SSO), and Privacy-Enhancing Attribute-Based Credentials (P-ABCs). We will discuss these different strategies and highlight their individual benefits and shortcomings. In short, their strengths are highly complementary: P-ABC-based solutions are strongly secure and privacy-friendly but cumbersome to use, whereas SSO provides a convenient and user-friendly solution but requires a fully trusted identity provider, which learns all users’ online activities and could impersonate users towards other providers.
The vision of the Olympus project is to combine the advantages of these approaches into a secure and user-friendly identity management system using distributed and advanced cryptography. The distributed aspect avoids the need for a single trusted party that is inherent in SSO, yet maintains its usability advantages for end users. We will sketch our vision and outline the design of Olympus’ distributed identity management system.
Tore Kasper Frederiksen, Julia Hesse, Anja Lehmann, Rafael Torres Moreno
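To illustrate the general idea behind distributing an identity provider, so that no single party holds the full secret and can impersonate users, here is a toy Shamir secret-sharing sketch. This is a didactic illustration of the “no single trusted party” principle, not the Olympus protocol itself.

    import random

    P = 2**127 - 1  # Mersenne prime used as the field modulus

    def split(secret: int, n: int, t: int) -> list[tuple[int, int]]:
        """Split `secret` into n shares; any t of them reconstruct it."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares: list[tuple[int, int]]) -> int:
        """Lagrange interpolation at x = 0 recovers the secret."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = split(secret=123456789, n=3, t=2)
    assert reconstruct(shares[:2]) == 123456789  # any two servers suffice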
SoK: Cryptography for Neural Networks
Abstract
With the advent of big data technologies, which bring better scalability and performance, machine learning (ML) algorithms have become affordable in several different applications and areas. The use of large volumes of data to obtain accurate predictions unfortunately comes with a high cost in terms of privacy exposure. The underlying data are often personal or confidential and therefore need to be appropriately safeguarded. Given the computational cost of machine learning algorithms, these often need to be outsourced to third-party servers, and hence protection of the data becomes mandatory. While traditional data encryption solutions would prevent access to the content of the data, they would also prevent third-party servers from executing the ML algorithms properly. The goal is, therefore, to come up with customized ML algorithms that would, by design, preserve the privacy of the processed data. Advanced cryptographic techniques such as fully homomorphic encryption or secure multi-party computation enable the execution of some operations over protected data and can therefore be considered potential candidates for these algorithms. However, these techniques incur high computational and/or communication costs for some operations. In this paper, we propose a Systematization of Knowledge (SoK) whereby we analyze the tension between a particular ML technique, namely neural networks (NN), and the characteristics of relevant cryptographic techniques.
Monir Azraoui, Muhammad Bahram, Beyza Bozdemir, Sébastien Canard, Eleonora Ciceri, Orhan Ermis, Ramy Masalha, Marco Mosconi, Melek Önen, Marie Paindavoine, Boris Rozenberg, Bastien Vialla, Sauro Vicini
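As a minimal illustration of one of the cryptographic building blocks the SoK surveys, secure multi-party computation, the sketch below computes the dot product at the core of a neural network’s linear layer over additively secret-shared inputs. It is didactic only: no malicious security, and multiplying by secret weights would additionally require Beaver triples or similar in a real protocol.

    import random

    Q = 2**61 - 1  # modulus for the secret-sharing ring

    def share(x: int) -> tuple[int, int]:
        r = random.randrange(Q)
        return r, (x - r) % Q  # each share alone is uniformly random

    def dot_with_public_weights(x: list[int], w: list[int]) -> int:
        shares = [share(v) for v in x]
        # Each server multiplies its share by the *public* weights and sums:
        y0 = sum(s0 * wi for (s0, _), wi in zip(shares, w)) % Q
        y1 = sum(s1 * wi for (_, s1), wi in zip(shares, w)) % Q
        return (y0 + y1) % Q  # only the final recombination reveals the result

    assert dot_with_public_weights([3, 1, 4], [2, 7, 1]) == 3*2 + 1*7 + 4*1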
Workshop on Privacy Challenges in Public and Private Organizations
Abstract
Recent developments in information technology, such as the Internet of Things and the cloud computing paradigm, enable public and private organisations to collect large amounts of data and to employ various data analytic techniques for extracting important information that helps improve their businesses. Unfortunately, these benefits come with a high cost in terms of privacy exposure, given the high sensitivity of the data that are usually processed at powerful third-party servers. Given the ever-increasing number of data breaches, the serious damage they cause, and the need for compliance with the European General Data Protection Regulation (GDPR), these organisations look for secure and privacy-preserving data handling practices. During the workshop, we aimed to present an approach to the problem of user data protection and control, currently being developed in the scope of the PoSeID-on and PAPAYA H2020 European projects.
Alessandra Bagnato, Paulo Silva, Ala Sarah Alaqra, Orhan Ermis
News Diversity and Recommendation Systems: Setting the Interdisciplinary Scene
Abstract
Concerns about selective exposure and filter bubbles in the digital news environment trigger questions regarding how news recommender systems can become more citizen-oriented and facilitate – rather than limit – normative aims of journalism. Accordingly, this chapter presents building blocks for the construction of such a news algorithm as they are being developed by the Ghent University interdisciplinary research project #NewsDNA, whose primary aim is to build, evaluate and test a diversity-enhancing news recommender. As such, the deployment of artificial intelligence could support the media in providing people with information and stimulating public debate, rather than undermine their role in that respect. To do so, the project combines insights from computer science (news recommender systems), law (the right to receive information), communication sciences (conceptualisations of news diversity), and computational linguistics (automated content extraction from text). To gather feedback from scholars of different backgrounds, this research was presented and discussed during the 2019 IFIP Summer School workshop on ‘co-designing a personalised news diversity algorithmic model based on news consumers’ agency and fine-grained content modelling’. This contribution also reflects the results of that dialogue.
Glen Joris, Camiel Colruyt, Judith Vermeulen, Stefaan Vercoutere, Frederik De Grove, Kristin Van Damme, Orphée De Clercq, Cynthia Van Hee, Lieven De Marez, Veronique Hoste, Eva Lievens, Toon De Pessemier, Luc Martens
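One standard building block for diversity-enhancing recommendation is greedy re-ranking that trades relevance against similarity to already selected items (maximal marginal relevance). The sketch below is a generic illustration of that kind of component, not the #NewsDNA algorithm; article ids, topics and scores are invented.

    def rerank(candidates, similarity, k=3, lam=0.7):
        """candidates: list of (article_id, relevance); similarity(a, b) in [0, 1]."""
        chosen = []
        pool = dict(candidates)
        while pool and len(chosen) < k:
            def mmr(item):
                aid, rel = item
                # Penalize articles similar to what is already recommended:
                max_sim = max((similarity(aid, c) for c in chosen), default=0.0)
                return lam * rel - (1 - lam) * max_sim
            best = max(pool.items(), key=mmr)[0]
            chosen.append(best)
            del pool[best]
        return chosen

    topics = {"a1": "politics", "a2": "politics", "a3": "sports", "a4": "culture"}
    sim = lambda x, y: 1.0 if topics[x] == topics[y] else 0.0
    print(rerank([("a1", .9), ("a2", .8), ("a3", .6), ("a4", .5)], sim))
    # ['a1', 'a3', 'a4'] -- the second politics article is displaced by diversity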

Language and Privacy

Frontmatter
Ontology-Based Modeling of Privacy Vulnerabilities for Data Sharing
Abstract
When several parties want to share sensor-based datasets, it can be difficult to know exactly what kinds of information can be extracted from the shared data. This is because many types of sensor data can be used to estimate indirect information; e.g., in smart buildings a CO2 stream can be used to estimate the presence and number of occupants in each room. If a data publisher does not consider these transformations of the data, their privacy protection of the data might be inadequate. It currently requires a manual inspection by a knowledge expert of each dataset to identify possible privacy vulnerabilities for estimating indirect information. This manual process does not scale with the increasing availability of data, due to the general lack of experts and the cost associated with their work. To improve this process, we propose a privacy vulnerability ontology that helps highlight the specific privacy challenges that can emerge when sharing a dataset. The ontology is intended to model data transformations, privacy attacks, and privacy risks regarding data streams. In the paper, we have used the ontology to model the findings of eight papers in the smart building domain. Furthermore, the ontology is applied to a case study scenario using a published dataset. The results show that the ontology can be used to highlight privacy risks in datasets.
Jens Hjort Schwee, Fisayo Caleb Sangogboye, Aslak Johansen, Mikkel Baun Kjærgaard
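The core idea can be illustrated with a few lines of code: model “can be transformed into” relations between data streams, then compute the transitive closure to surface the indirect information a shared dataset leaks. The relation set below is a hypothetical example in the spirit of the CO2 case, not the paper’s ontology.

    from collections import defaultdict

    transforms = [
        ("co2_stream", "occupancy_presence"),
        ("occupancy_presence", "occupant_count"),
        ("occupant_count", "work_schedule"),
    ]

    graph = defaultdict(set)
    for src, dst in transforms:
        graph[src].add(dst)

    def derivable(stream: str) -> set[str]:
        """Everything reachable from `stream` via known transformations."""
        seen, stack = set(), [stream]
        while stack:
            for nxt in graph[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    # Sharing the raw CO2 stream also exposes these indirect estimates:
    print(derivable("co2_stream"))
    # e.g. {'occupancy_presence', 'occupant_count', 'work_schedule'}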
On the Design of a Privacy-Centered Data Lifecycle for Smart Living Spaces
Abstract
Many living spaces, such as homes, are becoming smarter and connected through Internet of Things (IoT) technologies. Such systems should ideally be privacy-centered by design, given the sensitive and personal data they commonly deal with. Nonetheless, few systematic methodologies exist that deal with privacy threats affecting IoT-based systems. In this paper, we capture the generic function of an IoT system to model privacy so that threats affecting such contexts can be identified and categorized at the system design stage. In effect, we integrate an extension to so-called Data Flow Diagrams (DFDs) into the model, which provides the means to handle the privacy-specific threats in IoT systems. To demonstrate the usefulness of the model, we apply it to the design of a realistic use case involving Facebook Portal. We use that as a means to elicit the privacy threats and the mitigations that can be adopted therein. Overall, we believe that the proposed extension and categorization of privacy threats provide a useful aid for IoT practitioners and researchers in support of the adoption of sound privacy-centered principles in the early stages of the smart living design process.
Joseph Bugeja, Andreas Jacobsson
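The flavor of design-time threat elicitation over a DFD can be sketched in a few lines: annotate each diagram element with its kind and look up the privacy threats worth checking for that kind. Element names and the threat mapping below are illustrative, not the paper’s taxonomy.

    from dataclasses import dataclass

    @dataclass
    class DFDElement:
        name: str
        kind: str  # "sensor", "data_flow", "process", "data_store", ...

    # Hypothetical mapping from element kind to threats worth checking:
    THREATS = {
        "sensor": ["identification", "unawareness of collection"],
        "data_flow": ["linkability", "interception"],
        "data_store": ["disclosure", "non-compliance"],
    }

    design = [DFDElement("living-room camera", "sensor"),
              DFDElement("camera -> cloud upload", "data_flow"),
              DFDElement("cloud media store", "data_store")]

    for element in design:
        for threat in THREATS.get(element.kind, []):
            print(f"check: {threat:26s} on {element.name}")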
Language-Based Mechanisms for Privacy-by-Design
Abstract
The privacy-by-design principle has been applied in system engineering. In this paper, we follow this principle by integrating the necessary safeguards into the design of the program system. These safeguards are then used in the processing of personal information. In particular, we use a formal language-based approach with static analysis to enforce privacy requirements. To make the solution general, we consider a high-level modeling language for distributed service-oriented systems, building on the paradigm of active objects. The language is then extended to support the specification of policies on program constructs and policy enforcement. For this we develop (i) language constructs to formally specify privacy restrictions, thereby obtaining a policy definition language, (ii) a formal notion of policy compliance, and (iii) a type and effect system for enforcing and analyzing a program’s compliance with the stated policies.
Shukun Tokas, Olaf Owe, Toktam Ramezanifarkhani
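The flavor of language-based policy enforcement can be conveyed with a tiny example: data fields carry policies naming the purposes they may be used for, and an access is only permitted if its declared purpose complies. The paper does this statically with a type-and-effect system; the runtime check below, with invented field and purpose names, is only a simplified illustration.

    PROFILE_POLICY = {"email": {"account-recovery"}, "location": {"navigation"}}

    class PolicyViolation(Exception):
        pass

    def read_field(field: str, purpose: str) -> str:
        # Compliance check: the declared purpose must be permitted by the
        # policy attached to the data field.
        if purpose not in PROFILE_POLICY.get(field, set()):
            raise PolicyViolation(f"{field!r} may not be used for {purpose!r}")
        return f"<value of {field}>"

    read_field("email", "account-recovery")      # permitted
    try:
        read_field("email", "advertising")       # rejected
    except PolicyViolation as e:
        print(e)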

Law, Ethics and AI

Frontmatter
Aid and AI: The Challenge of Reconciling Humanitarian Principles and Data Protection
Abstract
Artificial intelligence systems have become ubiquitous in everyday life, and their potential to improve efficiency in a broad range of activities that involve finding patterns or making predictions has made them an attractive technology for the humanitarian sector. However, concerns over their intrusion on the right to privacy and their possible incompatibility with data protection principles may pose a challenge to their deployment. Furthermore, in the humanitarian sector, compliance with data protection principles is not enough, because organisations providing humanitarian assistance also need to comply with humanitarian principles to ensure the provision of impartial and neutral aid that does not harm beneficiaries in any way. In view of this, the present contribution analyses a hypothetical facial recognition system based on artificial intelligence that could assist humanitarian organisations in their efforts to identify missing persons. Recognising that such a system could create risks by providing information on missing persons that could potentially be used by harmful actors to identify and target vulnerable groups, such a system ought only to be deployed after a holistic impact assessment has been made, to ensure its adherence to both data protection and humanitarian principles.
Júlia Zomignani Barboza, Lina Jasmontaitė-Zaniewicz, Laurence Diver
Get to Know Your Geek: Towards a Sociological Understanding of Incentives Developing Privacy-Friendly Free and Open Source Software
Abstract
In this paper we sketch a road map towards a sociological understanding of software developers’ motivations to implement privacy features. Although there are a number of studies concerning incentives for developers, these accounts make little contribution to a comprehensive sociological understanding, as they are either based on a simplistic view of FOSS development in terms of altruism vs. utilitarianism, or are focused on individual psychological factors, leaving room for research that takes into account the complex social context of FOSS development. To address this gap, we propose a mixed methods approach, incorporating the strengths of qualitative and quantitative techniques for a comprehensive understanding of FOSS development as a social field. We then sketch how we envision developing a game theoretic approach based on the gathered data to analyze the situation in the field with respect to privacy features and propose relevant changes in policy and best practices.
Oğuz Özgür Karadeniz, Stefan Schiffner
Recommended for You: “You Don’t Need No Thought Control”. An Analysis of News Personalisation in Light of Article 22 GDPR
Abstract
Personalisation is being introduced at more and more news websites. In order to be capable of providing suggestions, recommender systems create and maintain a user profile per individual consumer that captures the latter’s preferences over time. This requires the processing of personal data, and more specifically its collection and use for both profiling purposes and the eventual delivery of recommendations. In view of the fact that providing users with individualised news recommendations is usually realised by solely automated means and may in some cases violate people’s absolute and non-derogable rights to freedom of thought and freedom of opinion, it could be prohibited on the basis of Article 22(1) of the EU General Data Protection Regulation. Online newspapers should ideally abstain from becoming personalisation-only forums and give users some form of control over the determination of their reading preferences and interests.
Judith Vermeulen

Biometrics and Privacy

Frontmatter
Data Privatizer for Biometric Applications and Online Identity Management
Abstract
Biometric data embeds information about the user which enables transparent and frictionless authentication. Despite being a more reliable alternative to traditional knowledge-based mechanisms, sharing the biometric template with third parties raises privacy concerns for the user. Recent research has shown how biometric traces can be used to infer sensitive attributes like medical conditions or soft biometrics, e.g. age and gender. In this work, we investigate a novel methodology for private feature extraction in online biometric authentication. We aim to suppress soft biometrics, i.e. age and gender, while boosting the identification potential of the input trace. To this end, we devise a min-max loss function which combines a siamese network for authentication and a predictor for private attribute inference. The multi-objective loss function harnesses the output of the predictor through adversarial optimization and gradient flipping to maximize the final gain. We empirically evaluate our model on gait data extracted from accelerometer and gyroscope sensors: our experiments show a drop from 73% to 52% accuracy for gender classification while losing around 6% in the identity verification task. Our work demonstrates that a better trade-off between privacy and utility in biometric authentication is not only desirable but feasible.
Giuseppe Garofalo, Davy Preuveneers, Wouter Joosen
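The “gradient flipping” ingredient of the abstract is commonly implemented as a gradient reversal layer. The PyTorch sketch below shows only that min-max coupling, with random stand-in data; the full model (siamese authentication branch plus soft-biometric predictor) is omitted.

    import torch

    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            # Identity on the forward pass, negated (scaled) gradient on the
            # backward pass: the feature extractor is trained to *hurt* the
            # soft-biometric predictor while the predictor itself improves.
            return -ctx.lam * grad_output, None

    features = torch.randn(8, 16, requires_grad=True)   # stand-in gait features
    predictor = torch.nn.Linear(16, 2)                  # e.g. gender classifier
    logits = predictor(GradReverse.apply(features, 1.0))
    loss = torch.nn.functional.cross_entropy(logits, torch.randint(0, 2, (8,)))
    loss.backward()  # gradients reaching `features` are reversed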

Open Access

What Does Your Gaze Reveal About You? On the Privacy Implications of Eye Tracking
Abstract
Technologies to measure gaze direction and pupil reactivity have become efficient, cheap, and compact and are finding increasing use in many fields, including gaming, marketing, driver safety, military, and healthcare. Besides offering numerous useful applications, the rapidly expanding technology raises serious privacy concerns. Through the lens of advanced data analytics, gaze patterns can reveal much more information than a user wishes and expects to give away. Drawing from a broad range of scientific disciplines, this paper provides a structured overview of personal data that can be inferred from recorded eye activities. Our analysis of the literature shows that eye tracking data may implicitly contain information about a user’s biometric identity, gender, age, ethnicity, body weight, personality traits, drug consumption habits, emotional state, skills and abilities, fears, interests, and sexual preferences. Certain eye tracking measures may even reveal specific cognitive processes and can be used to diagnose various physical and mental health conditions. By portraying the richness and sensitivity of gaze data, this paper provides an important basis for consumer education, privacy impact assessments, and further research into the societal implications of eye tracking.
Jacob Leon Kröger, Otto Hans-Martin Lutz, Florian Müller

Open Access

Privacy Implications of Voice and Speech Analysis – Information Disclosure by Inference
Abstract
Internet-connected devices, such as smartphones, smartwatches, and laptops, have become ubiquitous in modern life, reaching ever deeper into our private spheres. Among the sensors most commonly found in such devices are microphones. While various privacy concerns related to microphone-equipped devices have been raised and thoroughly discussed, the threat of unexpected inferences from audio data remains largely overlooked. Drawing from literature of diverse disciplines, this paper presents an overview of sensitive pieces of information that can, with the help of advanced data analysis methods, be derived from human speech and other acoustic elements in recorded audio. In addition to the linguistic content of speech, a speaker’s voice characteristics and manner of expression may implicitly contain a rich array of personal information, including cues to a speaker’s biometric identity, personality, physical traits, geographical origin, emotions, level of intoxication and sleepiness, age, gender, and health condition. Even a person’s socioeconomic status can be reflected in certain speech patterns. The findings compiled in this paper demonstrate that recent advances in voice and speech processing induce a new generation of privacy threats.
Jacob Leon Kröger, Otto Hans-Martin Lutz, Philip Raschke
Border Control and Use of Biometrics: Reasons Why the Right to Privacy Can Not Be Absolute
Abstract
This paper discusses concerns pertaining to the absoluteness of the right to privacy regarding the use of biometric data for border control. The discussion explains why privacy cannot be absolute from different points of view, including privacy versus national security, privacy properties conflicting with border risk analysis, and Privacy by Design (PbD) and engineering design challenges.
Mohamed Abomhara, Sule Yildirim Yayilgan, Marina Shalaginova, Zoltán Székely

Tools Supporting Data Protection Compliance

Frontmatter
Making GDPR Usable: A Model to Support Usability Evaluations of Privacy
Abstract
We introduce a new model for evaluating privacy that builds on the criteria proposed by the EuroPriSe certification scheme by adding usability criteria. Our model is visually represented through a cube, called the Usable Privacy Cube (or UP Cube), where each of its three axes of variability captures, respectively: rights of the data subjects, privacy principles, and usable privacy criteria. We slightly reorganize the criteria of EuroPriSe to fit with the UP Cube model, i.e., we show how EuroPriSe can be viewed as a combination of only rights and principles, forming the two axes at the basis of our UP Cube. In this way we also want to bring out two perspectives on privacy: that of the data subjects and that of the controllers/processors, respectively. We define usable privacy criteria based on usability goals that we have extracted from the whole text of the General Data Protection Regulation. The criteria are designed to produce measurements of the level of usability with which the goals are reached. Specifically, we measure effectiveness, efficiency, and satisfaction, considering both the objective and the perceived usability outcomes, producing measures of accuracy and completeness, of resource utilization (e.g., time, effort, financial), and measures resulting from satisfaction scales. In the long run, the UP Cube is meant to be the model behind a new certification methodology capable of evaluating the usability of privacy, to the benefit of common users. For industries, considering also the usability of privacy would allow for greater business differentiation, beyond GDPR compliance.
Johanna Johansen, Simone Fischer-Hübner
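The abstract names concrete usability measures; the sketch below shows how such measures could be computed from task observations. The formulas are standard usability metrics (completion rate, mean time on task, mean satisfaction rating) and the numbers are invented, not the authors’ instrument.

    tasks = [
        # (completed, errors, seconds, satisfaction on a 1-5 scale)
        (True, 0, 42.0, 4),
        (True, 2, 95.0, 3),
        (False, 1, 120.0, 2),
    ]

    effectiveness = sum(c for c, *_ in tasks) / len(tasks)   # completion rate
    efficiency = sum(t for *_, t, _ in tasks) / len(tasks)   # mean time (s)
    satisfaction = sum(s for *_, s in tasks) / len(tasks)    # mean rating

    print(f"effectiveness={effectiveness:.2f}, "
          f"mean time={efficiency:.0f}s, satisfaction={satisfaction:.1f}/5")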
Decision Support for Mobile App Selection via Automated Privacy Assessment
Abstract
Mobile apps have entered many areas of our everyday life through smartphones, smart TVs, smart cars, and smart homes. They facilitate daily routines and provide entertainment, while requiring access to sensitive data such as private end user data, e.g., contacts or the photo gallery, and various persistent device identifiers, e.g., the IMEI. Unfortunately, most mobile users neither pay attention to nor fully understand privacy-indicating factors that could expose malicious apps. We introduce APPA (Automated aPp Privacy Assessment), a technical tool to assist mobile users in making privacy-enhanced app installation decisions. Given a set of empirically validated and publicly available factors which app users typically consider at install time, APPA creates an output in the form of a personalized privacy score. The score indicates the level of privacy safety of the given app, integrating three different privacy perspectives. First, an analysis of app permissions determines the degree of privacy preservation after an installation. Second, user reviews are assessed to inform about the privacy-to-functionality trade-off by comparing the sentiment of privacy- and functionality-related reviews. Third, app privacy policies are analyzed with respect to their legal compliance with the European General Data Protection Regulation (GDPR). While the permission-based score introduces capabilities to filter over-privileged apps, privacy- and functionality-related reviews are classified with an average accuracy of 79%. As a proof of concept, the APPA framework demonstrates the feasibility of user-centric tools to enhance transparency and informed consent as early as the app selection phase.
Jens Wettlaufer, Hervais Simo
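A hypothetical composition of the three perspectives into one score, to make the structure concrete; the weights, normalization and sample values below are invented for illustration, not the authors’ calibration.

    def appa_score(permission_score: float,
                   review_sentiment_score: float,
                   policy_compliance_score: float,
                   weights=(0.4, 0.3, 0.3)) -> float:
        """Each sub-score is assumed normalized to [0, 1], where 1 means most
        privacy-preserving; the result is a weighted average."""
        parts = (permission_score, review_sentiment_score, policy_compliance_score)
        return sum(w * s for w, s in zip(weights, parts))

    # An over-privileged app with a GDPR-compliant policy but mixed reviews:
    print(round(appa_score(0.2, 0.5, 0.9), 2))  # 0.5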
Tool-Assisted Risk Analysis for Data Protection Impact Assessment
Abstract
Unlike classical risk analysis, which protects the assets of the company in question, the GDPR protects data subjects’ rights and freedoms, that is, the right to data protection and the right to have full control of and knowledge about data processing concerning them. The GDPR articulates the Data Protection Impact Assessment (DPIA) in Article 35. The DPIA is a risk-based process to enhance and demonstrate compliance with these requirements. We propose a methodology to conduct the DPIA in three steps and provide a supporting tool. In this paper, we particularly elaborate on risk analysis as a step of this methodology. The provided tool assists controllers in facilitating data subjects’ rights and freedoms. The assistance that our tool provides differentiates our work from existing ones.
Salimeh Dashti, Silvio Ranise
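A common core that a DPIA risk-analysis step can build on is scoring each processing activity by likelihood and severity of impact on data subjects’ rights and freedoms. The scales, thresholds and activities below are illustrative, not the authors’ methodology.

    RISKS = [
        # (processing activity, likelihood 1-4, severity 1-4)
        ("profiling for personalisation", 3, 3),
        ("plain-text storage of contact data", 2, 4),
        ("pseudonymised usage statistics", 2, 1),
    ]

    for activity, likelihood, severity in RISKS:
        score = likelihood * severity
        level = "high" if score >= 9 else "medium" if score >= 4 else "low"
        print(f"{activity:40s} risk={score:2d} ({level})")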

Privacy Classification and Security Assessment

Frontmatter
How to Protect My Privacy? - Classifying End-User Information Privacy Protection Behaviors
Abstract
The Internet and smart devices pose many risks to users’ information privacy. Individuals are aware of this and try to counter tracking activities by applying different privacy protection behaviors. These are manifold and differ in scope, goal and degree of technology utilization. Although there is a lot of literature investigating protection strategies, it lacks holistic, user-centric classifications.
We review the literature and identify 141 privacy protection behaviors that end users exhibit. We map these results to 38 distinct categories and apply hybrid card sorting to create a taxonomy, which we call the “End-User Information Privacy Protection Behavior Model” (EIPPBM).
Frank Ebbers
Annotation-Based Static Analysis for Personal Data Protection
Abstract
This paper elaborates on the use of static source code analysis in the context of data protection. The topic is important for software engineering in order for software developers to improve the protection of personal data during software development. To this end, the paper proposes a design for annotating classes and functions that process personal data. The design serves two primary purposes: on the one hand, it provides means for software developers to document their intent; on the other hand, it furnishes tools for the automatic detection of potential violations. This dual rationale facilitates compliance with the General Data Protection Regulation (GDPR) and other emerging data protection and privacy regulations. In addition to a brief review of the state of the art of static analysis in the data protection context and the design of the proposed analysis method, a concrete tool is presented to demonstrate a practical implementation for the Java programming language.
Kalle Hjerppe, Jukka Ruohonen, Ville Leppänen
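The paper targets Java annotations checked by static analysis; the analogous idea can be sketched in Python with decorators, shown here only as an illustration of the annotate-then-detect principle. The names, registry and the deliberately naive checker are invented.

    PERSONAL_DATA_REGISTRY = {}

    def personal_data(*categories):
        """Mark a function as processing the given personal-data categories."""
        def mark(fn):
            PERSONAL_DATA_REGISTRY[fn.__name__] = set(categories)
            return fn
        return mark

    @personal_data("contact", "location")
    def build_delivery_label(customer):
        return f"{customer['name']}, {customer['address']}"

    def geocode(address):   # processes location data but carries no declaration
        ...

    # A simplistic checker: flag functions that lack a declaration. Real
    # tooling would walk the AST or bytecode instead of consulting a registry.
    for fn_name in ("build_delivery_label", "geocode"):
        declared = PERSONAL_DATA_REGISTRY.get(fn_name)
        print(fn_name, "->", declared if declared else "WARNING: undeclared")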
Order of Control and Perceived Control over Personal Information
Abstract
Focusing on personal information disclosure, we apply control theory and the notion of the Order of Control to study people’s understanding of the implications of information disclosure and their tendency to consent to disclosure. We analyzed the relevant literature and conducted a preliminary online study (N = 220) to explore the relationship between the Order of Control and perceived control over personal information. Our analysis of existing research suggests that the notion of the Order of Control can help us understand people’s decisions regarding the control over their personal information. We discuss limitations and future directions for research regarding the application of the idea of the Order of Control to online privacy.
Yefim Shulman, Thao Ngo, Joachim Meyer
Aggregating Corporate Information Security Maturity Levels of Different Assets
Abstract
The General Data Protection Regulation (GDPR) has a great influence not only on data protection but also on information security, especially with regard to Article 32. This article emphasizes the importance of having a process to regularly test, assess and evaluate security. Measuring information security, however, involves overcoming many obstacles. The quality of information security can only be measured indirectly using metrics and Key Performance Indicators (KPIs), as no gold standard exists. Many studies are concerned with using metrics to get as close as possible to the status of information security, but only a few focus on the comparison of information security metrics. This paper deals with ways of aggregating corporate information security maturity levels from different assets in order to find out how different aggregation functions affect the results and which conclusions can be drawn from them. The required model has already been developed by the authors and tested for applicability by means of case studies. In order to investigate the significance of the ranking resulting from the comparison of the aggregations in more detail, this paper works out how maturity controls should be aggregated in order to best serve the company in improving its security. This result will be helpful for all companies aiming to regularly assess and improve their security as requested by the GDPR. To verify the significance of the results with different sets, real information security data from a large international media and technology company has been used.
Michael Schmid, Sebastian Pape
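The effect the paper studies, different aggregation functions painting noticeably different pictures of the same per-asset maturity levels, is easy to see on sample data. The asset names, levels and weights below are invented for illustration.

    from statistics import mean, median

    assets = {"CRM": 3.5, "payroll": 2.0, "public website": 4.5, "data lake": 1.5}
    weights = {"CRM": 0.4, "payroll": 0.3, "public website": 0.1, "data lake": 0.2}

    levels = list(assets.values())
    weighted = sum(weights[a] * m for a, m in assets.items())

    print(f"min (weakest link):   {min(levels):.2f}")   # 1.50
    print(f"mean:                 {mean(levels):.2f}")  # 2.88
    print(f"median:               {median(levels):.2f}")# 2.75
    print(f"criticality-weighted: {weighted:.2f}")      # 2.75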

Privacy Enhancing Technologies in Specific Contexts

Frontmatter
Differential Privacy in Online Dating Recommendation Systems
Abstract
By their very nature, recommendation systems that are based on the analysis of personal data are prone to leak information about personal preferences. In online dating, that data might be highly personal. The goal of this work is to analyse, for different online dating recommendation systems from the literature, whether differential privacy can be used to hide individual connections (for example, an expression of interest) in the data set from any other user on the platform, or from an adversary that has access to the information of one or multiple users. We investigate two recommendation systems from the literature with respect to their potential to be modified to satisfy differential privacy, in the sense that individual connections are hidden from anyone else on the platform. For Social Collab by Cai et al. we show that this is impossible, while for RECON by Pizzato et al. we give an algorithm that theoretically promises a good trade-off between accuracy and privacy. Further, we consider the problem of stochastic matching, which is used as the basis for some other recommendation systems. Here we show the possibility of a good accuracy and privacy trade-off under edge-differential privacy.
Teresa Anna Steiner
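Randomized response on adjacency bits is the textbook mechanism for hiding a single connection under edge-differential privacy: flip each “expressed interest” bit with probability 1/(1 + e^eps). The sketch illustrates the privacy notion the paper works with, not its specific RECON modification.

    import math, random

    def randomized_response(edge_bits: list[int], eps: float) -> list[int]:
        # Keeping a bit with probability e^eps / (1 + e^eps) makes the two
        # outputs' likelihood ratio exactly e^eps, i.e. eps-edge-DP per bit.
        p_flip = 1.0 / (1.0 + math.exp(eps))
        return [b ^ (random.random() < p_flip) for b in edge_bits]

    random.seed(0)
    print(randomized_response([1, 0, 0, 1, 1], eps=1.0))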
Distributed Ledger for Provenance Tracking of Artificial Intelligence Assets
Abstract
The high availability of data is responsible for the current trends in Artificial Intelligence (AI) and Machine Learning (ML). However, high-grade datasets are shared only reluctantly between actors because of a lack of trust and the fear of losing control. Provenance tracing systems are a possible measure to build trust by improving transparency. Tracing AI assets along complete AI value chains in particular raises various challenges, such as trust, privacy, confidentiality, traceability, and fair remuneration. In this paper we design a graph-based provenance model for AI assets and their relations within an AI value chain. Moreover, we propose a protocol to exchange AI assets securely with selected parties. The provenance model and exchange protocol are then combined and implemented as a smart contract on a permissionless blockchain. We show how the smart contract enables the tracing of AI assets in an existing industry use case while addressing all of these challenges. Consequently, our smart contract helps to increase traceability and transparency, encourages trust between actors, and thus fosters collaboration between them.
Philipp Lüthi, Thibault Gagnaux, Marcel Gygli
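A toy version of a graph-based provenance model: each AI asset is content-addressed by hash and edges record derivation, so a model’s lineage can be walked back to its source data. This mirrors the general idea; the on-chain smart contract itself is not reproduced, and the asset names are invented.

    import hashlib, json

    def asset_id(payload: dict) -> str:
        return hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()[:12]

    dataset = {"kind": "dataset", "name": "sensor-logs-v1"}
    model = {"kind": "model", "name": "fault-detector",
             "derived_from": [asset_id(dataset)]}

    provenance = {asset_id(dataset): dataset, asset_id(model): model}

    def lineage(aid: str) -> list[str]:
        """Walk derivation edges back to the original assets."""
        node = provenance[aid]
        parents = node.get("derived_from", [])
        return [aid] + [a for p in parents for a in lineage(p)]

    print(lineage(asset_id(model)))  # model hash, then the dataset it came from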
A Survey-Based Exploration of Users’ Awareness and Their Willingness to Protect Their Data with Smart Objects
Abstract
In recent years, the Internet of Things (IoT) and smart objects have become more and more popular in our everyday lives. While the IoT contributes to making our everyday life more comfortable and easier, it also increases the threats to our privacy, as embedded sensors collect data about us and our environment. To foster the acceptance of the IoT, privacy-preserving solutions are therefore necessary. While such solutions have already been proposed, most of them do not involve users in their design. In this paper, we therefore adopt a user-centric approach and lay the ground for the future design of user-centric privacy-preserving solutions dedicated to smart home environments. To this end, we have designed and distributed a questionnaire completed by 229 anonymous participants. Our objectives are twofold: we investigate (1) requirements for privacy-preserving solutions that involve end users and (2) users’ readiness to be involved in their own privacy protection. Our results show that the majority of our participants are aware of the data collection taking place as well as the associated privacy risks, and would be ready to control and audit the collected data.
Chathurangi Ishara Wickramasinghe, Delphine Reinhardt
Self-Sovereign Identity Systems: Evaluation Framework
Abstract
Digital identity systems have been around for almost as long as computers and have evolved with the increased usage of online services. Digital identities have traditionally been used as a way of authenticating to computer systems at work or to a personal online service, such as email. Today, our physical existence has a digital counterpart that has become an integral part of everyday life. Self-Sovereign Identity (SSI) is the next step in the evolution of digital identity management systems. Blockchain technology and distributed ledgers have provided the necessary building blocks and facilities that bring us closer to the realisation of an ideal Self-Sovereign Identity. But what exactly is an ideal Self-Sovereign Identity? What are its characteristics? Its trade-offs? Here, we propose a framework and methodology that can be used to evaluate, describe, and compare SSI systems. Based on our comparison criteria and the evaluation framework, we present a systematic analytical study of existing SSI systems: uPort, Sovrin, ShoCard, Civic, and Blockstack.
Abylay Satybaldy, Mariusz Nowostawski, Jørgen Ellingsen
Privacy in Location-Based Services and Their Criticality Based on Usage Context
Abstract
Location-based services are an important trend for smart city services, mobility and navigation services, fitness apps and augmented reality applications. Because of the growing significance of location-based services, location privacy is an important aspect. Typical use cases are identified and investigated based on user perceptions of usefulness and intrusiveness. In addition, the criticality of services is evaluated, taking the typical technical realization into account. In the context of this analysis, the implications of privacy patterns are investigated. An overall criticality rating based on applied location privacy patterns is proposed and thoroughly discussed, while taking the resulting decrease in usability into consideration.
Tom Lorenz, Ina Schiering
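One simple way such a criticality rating could be structured is to let applied privacy patterns discount a service’s base criticality. The pattern names and discount factors below are invented for illustration and are not the chapter’s rating scheme.

    PATTERN_DISCOUNT = {"location obfuscation": 0.6,
                        "k-anonymity cloaking": 0.7,
                        "on-device processing": 0.5}

    def criticality(base: float, applied_patterns: list[str]) -> float:
        """base in [0, 10]; each applied pattern multiplies the risk down."""
        for p in applied_patterns:
            base *= PATTERN_DISCOUNT.get(p, 1.0)
        return round(base, 2)

    print(criticality(8.0, []))                                   # 8.0
    print(criticality(8.0, ["location obfuscation"]))             # 4.8
    print(criticality(8.0, ["location obfuscation",
                            "on-device processing"]))             # 2.4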
Backmatter
Metadata
Title
Privacy and Identity Management. Data for Better Living: AI and Privacy
Edited by
Dr. Michael Friedewald
Melek Önen
Prof. Dr. Eva Lievens
Stephan Krenn
Samuel Fricker
Copyright Year
2020
Electronic ISBN
978-3-030-42504-3
Print ISBN
978-3-030-42503-6
DOI
https://doi.org/10.1007/978-3-030-42504-3