
2022 | Book

Data Privacy Management, Cryptocurrencies and Blockchain Technology

ESORICS 2021 International Workshops, DPM 2021 and CBT 2021, Darmstadt, Germany, October 8, 2021, Revised Selected Papers

Editors: Prof. Joaquin Garcia-Alfaro, Jose Luis Muñoz-Tapia, Guillermo Navarro-Arribas, Miguel Soriano

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings and revised selected papers from the 16th International Workshop on Data Privacy Management, DPM 2021, and the 5th International Workshop on Cryptocurrencies and Blockchain Technology, CBT 2021, which were held online on October 8, 2021, in conjunction with ESORICS 2021. The workshops were initially planned to take place in Darmstadt, Germany, and changed to an online event due to the COVID-19 pandemic.

The DPM 2021 workshop received 25 submissions and accepted 7 full and 3 short papers for publication. These papers were organized in topical sections as follows: Risks and privacy preservation; policies and regulation; privacy and learning.

For CBT 2021, 6 full papers and 6 short papers were accepted out of 31 submissions. They were organized in topical sections as follows: Mining, consensus and market manipulation; smart contracts and anonymity.

Table of Contents

Frontmatter

DPM Workshop: Risks and Privacy Preservation

Frontmatter
Best Security Measures to Reduce Cyber-Incident and Data Breach Risks
Abstract
Corporations adopt appropriate combinations of data privacy management measures to mitigate the risk of data breach. Examples of such well-established measures include the certification of an information security management system, periodic security auditing, and dedicated positions such as a Chief Information Officer (CIO). However, the effectiveness of introducing each of these measures to reduce the risk of data breach is unclear. To assess the effective risk reduction, this work combines big data on cyber incidents with the attributes of corporations and computes the relative risk with respect to these security measures. Our analysis of five-year data from about 6,000 corporations reveals a negative effect for most measures. The results must be biased by industry characteristics associated with the risk of cyber incidents, such as business style and company scale, which are known confounding factors. After investigating company attributes individually, we identify the significant confounding factors that represent obstacles to risk analysis. Using hypothesis testing and multiple logistic regression analysis, we adjust odds ratios for 17 security measures, social responsibilities, environmental conditions, and employment arrangements. The results confirm that environmental auditing reduces the risk by one-third at a statistically significant level.
Hiroaki Kikuchi, Michihiro Yamada, Kazuki Ikegami, Koji Inui
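
To illustrate the statistical core of the analysis above, the following minimal sketch fits a multiple logistic regression and reads off an adjusted odds ratio for one security measure. This is not the authors' code; the column names (breach, env_audit, company_scale, b2c) and the synthetic data are hypothetical placeholders.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical incident data: one row per corporation.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "breach":        rng.binomial(1, 0.1, 6000),   # 1 = breach observed
        "env_audit":     rng.binomial(1, 0.3, 6000),   # measure of interest
        "company_scale": rng.lognormal(6, 1, 6000),    # confounder: size
        "b2c":           rng.binomial(1, 0.5, 6000),   # confounder: style
    })

    # Multiple logistic regression: exponentiating the coefficient of
    # env_audit gives its odds ratio adjusted for the listed confounders.
    X = sm.add_constant(df[["env_audit", "company_scale", "b2c"]])
    fit = sm.Logit(df["breach"], X).fit(disp=0)
    print("adjusted odds ratio:", np.exp(fit.params["env_audit"]))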
Synthesizing Privacy-Preserving Location Traces Including Co-locations
Abstract
Location traces are useful for various types of geo-data analysis tasks, and synthesizing location traces is a promising approach to geo-data analysis while protecting user privacy. However, existing location synthesizers do not consider users' friendship information. In particular, co-locations between friends are an important factor for synthesizing more realistic location traces.
In this paper, we propose a novel location synthesizer that generates synthetic traces including co-locations between friends. Our synthesizer models the co-location information with two parameters: a friendship probability and a co-location count matrix. The friendship probability represents the probability that two users are friends, whereas the co-location count matrix comprises a co-location count for each time instant and each location. Our synthesizer also provides DP (Differential Privacy) for the training data. We evaluate our synthesizer using the Foursquare dataset. Our experimental results show that our synthesizer preserves the co-location information and other statistical information (e.g., population distribution, transition matrix) while providing DP with a reasonable privacy budget (e.g., smaller than 1).
Jun Narita, Yayoi Suganuma, Masakatsu Nishigaki, Takao Murakami, Tetsushi Ohki
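
A rough sketch of the two model parameters described above, with Laplace noise added to the co-location counts for differential privacy. The sensitivity, budget, and sampling step are illustrative assumptions, not the paper's actual mechanism.

    import numpy as np

    rng = np.random.default_rng(0)
    n_times, n_locs = 24, 50

    # Hypothetical training statistics (would be computed from real traces).
    true_coloc = rng.poisson(2.0, (n_times, n_locs)).astype(float)
    friend_prob = 0.05                    # probability two users are friends

    # Laplace mechanism: assuming each user pair changes any cell by at most
    # `sens`, adding Laplace(sens/eps) noise per cell gives eps-DP.
    eps, sens = 1.0, 1.0
    noisy = true_coloc + rng.laplace(0.0, sens / eps, true_coloc.shape)
    noisy = np.clip(noisy, 0.0, None)     # counts cannot be negative

    # Toy synthesis step: decide friendship, then place a co-location in a
    # (time, location) slot proportionally to the noisy counts.
    if rng.random() < friend_prob:
        p = noisy.ravel() / noisy.sum()
        slot = rng.choice(n_times * n_locs, p=p)
        print("co-location at (time, location):", divmod(slot, n_locs))
    else:
        print("users are not friends; no co-location sampled")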

DPM Workshop: Policies and Regulation

Frontmatter
Quantitative Rubric for Privacy Policy Analysis
Abstract
Privacy policies are hard to read and even harder to understand, a widely accepted fact that tends to discourage review by the average consumer. In this paper, we created and applied a quantitative privacy evaluation rubric to evaluate 10 distinct categories from the combined privacy policies (PP) and terms of service (ToS) of 188 companies, in order to test whether those documents actually give the consumer an indication of the planned use and protection of their personal information. This analysis was performed as part of a larger experiment aimed at tracing personal information propagation across the Internet, which has led to an independently collected baseline of personal information use and abuse, measured via email, text, and phone activity generated from one-time Internet transactions. We did not observe any correlation between the metrics generated by our privacy policy analysis and the results from our fake identities. In our analysis of 177 company documents (11 companies did not have any policies), we confirm the length and difficulty of reading policies, and we find that companies adhere to jurisdiction-based regulations, in addition to finding weak industry-based trends in our scoring outputs. This paper uses quantitative privacy policy metrics as a start towards helping consumers know how their data will be used.
Paul O’Donnell, Joe Harrison, Joshua Lyons, Lauren Anderson, Lauren Maunder, Sarah Ramboyong, Alan J. Michaels
Rethinking the Limits of Mobile Operating System Permissions
Abstract
Since the introduction of the iPhone in 2007, smartphones have played an increasingly disruptive role in our society. The average person spends over five hours per day using their device, and research has shown intentional addictive design elements in popular applications that maximize user interaction time. While smartphones have provided new capabilities that did not exist previously, they have also allowed the limitless collection of personal data that is sensed, inferred, and stored on the device. With millions of applications available in both the App Store and Google Play, research has shown that mobile applications frequently abuse granted permissions and are not truthful in permission requests. Given a coarse-grained permission model, applications can retrieve and transmit data as frequently as they wish, without limit, and send data to any service without the user being aware. Only recently did mobile operating system producers start to introduce more fine-grained controls. In this paper, we examine the evolution of these controls since the widespread adoption of smartphones and examine current trends. We describe research that has provided both improved privacy awareness and supplemental controls for users. We also describe the shortcomings of these solutions and suggest changes to the current permission model to limit the amount of data that can be accessed and transmitted from the device. Given the data that is available from mobile devices, it is imperative that users have more transparency into how mobile applications use their data and are able to place limits on this use.
Brian Krupp
Interdependent Privacy Issues Are Pervasive Among Third-Party Applications
Abstract
Third-party applications are popular: they improve and extend the features offered by their respective platforms, whether mobile OSs, browsers, or cloud-based tools. Although some privacy concerns regarding these apps have been studied in detail, the phenomenon of interdependent privacy, where a user shares others' data with an app without their knowledge and consent, has received much less attention. Through careful analysis of permission models and multiple platform-specific datasets, we show that interdependent privacy risks are enabled by certain permissions in all platforms studied, and that actual apps request these permissions, instantiating these risks. We also identify potential risk signals and discuss solutions that could improve transparency and control for users, developers, and platform owners.
Shuaishuai Liu, Barbara Herendi, Gergely Biczók

DPM Workshop: Privacy and Learning

Frontmatter
SPGC: An Integrated Framework of Secure Computation and Differential Privacy for Collaborative Learning
Abstract
Achieving differential privacy and utilizing secure multiparty computation are the two major approaches for ensuring privacy in privacy-preserving machine learning. However, the privacy guarantee of existing protocols that integrate both approaches for collaborative learning weakens as more participants join. In this work, we present Secure and Private Gradient Computation (SPGC), a novel collaborative learning framework with a strong privacy guarantee independent of the number of participants, while providing high accuracy. The main idea of SPGC is to create the noise for differential privacy within the secure multiparty computation. We also implemented SPGC and used it in experiments to measure its accuracy and training time. The results show that SPGC is more accurate than a naive protocol based on local differential privacy by up to 5.6%.
Kazuki Iwahana, Naoto Yanai, Jason Paul Cruz, Toru Fujiwara
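
The key idea, generating the DP noise inside the secure computation so that no single party ever sees the raw aggregate or the full noise, can be caricatured with additive secret sharing. This is only a toy sketch under simplified assumptions (honest-but-curious parties, Gaussian noise split across participants), not the SPGC protocol itself.

    import numpy as np

    rng = np.random.default_rng(1)
    n, dim, sigma = 3, 4, 1.0
    grads = [rng.normal(size=dim) for _ in range(n)]   # local gradients

    def share(vec):
        """Additive secret sharing: n random-looking shares summing to vec."""
        parts = [rng.normal(size=vec.shape) for _ in range(n - 1)]
        return parts + [vec - sum(parts)]

    # Each party adds sigma/sqrt(n) Gaussian noise before sharing, so the
    # opened sum carries sigma-scaled noise that no single party knows.
    shares = [share(g + rng.normal(0, sigma / np.sqrt(n), dim)) for g in grads]

    # Party i sums the i-th share of every participant and publishes it;
    # adding the published values reconstructs only the noisy aggregate.
    noisy_aggregate = sum(sum(s[i] for s in shares) for i in range(n))
    print(noisy_aggregate)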
A k-Anonymised Federated Learning Framework with Decision Trees
Abstract
We propose a privacy-preserving framework using Mondrian k-anonymity with decision trees in a Federated Learning (FL) setting for horizontally partitioned data. Data heterogeneity in FL makes the data non-IID (Non-Independent and Identically Distributed). We use a novel approach to create non-IID partitions of data by solving an optimization problem. In this work, each device trains a decision tree classifier. Devices share the root nodes of their trees with the aggregator. The aggregator merges the trees by choosing the most common split attribute and grows the branches based on the split values of the chosen split attribute. This recursive process stops when all the nodes to be merged are leaf nodes. After the merging operation, the aggregator sends the merged decision tree to the distributed devices. In this way, we aim to build a joint machine learning model based on the data from multiple devices while offering k-anonymity to the participants.
Saloni Kwatra, Vicenç Torra
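
The aggregator's merge rule described above (pick the most common split attribute among the shared root nodes, then branch on the union of its split values) is easy to sketch. The node representation here is a hypothetical simplification of the paper's trees.

    from collections import Counter

    # Toy root nodes shared by three devices: split attribute + split values.
    device_roots = [
        {"attr": "age",    "values": {"<30", ">=30"}},
        {"attr": "age",    "values": {"<25", ">=25"}},
        {"attr": "income", "values": {"low", "high"}},
    ]

    # Choose the most common split attribute among the roots...
    winner, _ = Counter(r["attr"] for r in device_roots).most_common(1)[0]
    # ...and grow one branch per split value observed for that attribute.
    branches = set().union(*(r["values"] for r in device_roots
                             if r["attr"] == winner))
    print("merged root splits on", winner, "with branches", sorted(branches))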
Anonymizing Machine Learning Models
Abstract
There is a known tension between the need to analyze personal data to drive business and the need to preserve the privacy of data subjects. Many data protection regulations, including the EU General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), set out strict restrictions and obligations on the collection and processing of personal data. Moreover, machine learning models themselves can be used to derive personal information, as demonstrated by recent membership and attribute inference attacks. Anonymized data, however, is exempt from the obligations set out in these regulations. It is therefore desirable to be able to create models that are anonymized, thus also exempting them from those obligations, in addition to providing better protection against attacks.
Learning on anonymized data typically results in significant degradation in accuracy. In this work, we propose a method that achieves better model accuracy by using the knowledge encoded within the trained model and guiding the anonymization process to minimize the impact on the model's accuracy, a process we call accuracy-guided anonymization. We demonstrate that by focusing on the model's accuracy rather than on generic information loss measures, our method outperforms state-of-the-art k-anonymity methods in terms of the achieved utility, in particular with high values of k and large numbers of quasi-identifiers.
We also demonstrate that our approach has a similar, and sometimes even better, ability to prevent membership inference attacks than approaches based on differential privacy, while averting some of their drawbacks, such as complexity, performance overhead, and model-specific implementations. In addition, since our approach does not rely on modifications to the training algorithm, it can even work with "black-box" models where the data owner does not have full control over the training process, or within complex machine learning pipelines where it may be difficult to replace existing learning algorithms with new ones. This makes model-guided anonymization a legitimate substitute for such methods and a practical approach to creating privacy-preserving models.
Abigail Goldsteen, Gilad Ezov, Ron Shmelkin, Micha Moffie, Ariel Farkash
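
One way to picture accuracy-guided anonymization: among candidate generalizations of a quasi-identifier, keep the coarsest one whose accuracy cost stays within a tolerance, using the trained model itself as the guide. A sketch with scikit-learn; the binning scheme, tolerance, and choice of quasi-identifier column are illustrative assumptions, not the paper's algorithm.

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_iris(return_X_y=True)
    model = RandomForestClassifier(random_state=0).fit(X, y)
    base = model.score(X, y)

    def generalize(col, width):
        """Bin values to interval centers (a simple generalization)."""
        return np.floor(col / width) * width + width / 2

    qi = 0                                     # hypothetical quasi-identifier
    best = None
    for width in (0.1, 0.5, 1.0, 2.0):         # coarser bins = more anonymity
        Xg = X.copy()
        Xg[:, qi] = generalize(Xg[:, qi], width)
        if base - model.score(Xg, y) <= 0.02:  # tolerated accuracy loss
            best = width                       # keep the coarsest that fits
    print("coarsest acceptable bin width for column", qi, ":", best)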

DPM Workshop: Short Papers

Frontmatter
A New Privacy Enhancing Beacon Scheme in V2X Communication
Abstract
We propose a new privacy-enhancing beacon scheme for Vehicle-to-Everything (V2X) communication systems and evaluate its effectiveness in simulation. With this scheme, vehicles dynamically adjust their periodic transmission of Cooperative Awareness Messages (CAM) and Basic Safety Messages (BSM) based on observation of their surroundings, and transmit these messages only when necessary. This new scheme addresses a gap in the standards, which assume continuous transmission of broadcast-based unencrypted vehicle information even when it may not be needed under certain circumstances. Our beacon message does not convey any privacy-linking information. In this way, the new scheme enhances the privacy protection of vehicle owners by limiting the transmission of information that can be linked to a particular vehicle. The complexity of its processing at both the transmitting and receiving ends is kept to a minimum and is simpler than CAM and BSM processing. Our simulation results indicate that this new scheme is highly effective when the density of vehicles with V2X technology is limited.
Takahito Yoshizawa, Dave Singelée, Bart Preneel
Next Generation Data Masking Engine
Abstract
This paper introduces Magen, an advanced, policy-based masking engine that supports a wide range of payloads and use cases. Our graph-based policies and engine support the masking of composite payloads and recursively handle nested payloads based on their type (e.g., JSON in XML). The engine supports a myriad of advanced masking methods, such as format-preserving encryption and format-preserving tokenization, enabling on-the-fly dynamic masking of payloads as well as static masking of large data sets. Magen allows users to easily define their own policies for the masking process and to specify their formats (data classes).
The engine was developed as part of a multi-year effort and supports real-life scenarios such as conditional masking, robustness to illegal values, enforcement of both format and masking restrictions, and semantic data fabrication. Magen has been integrated as a cloud SaaS within IBM Data and AI offerings and has proved its value in various use cases.
Micha Moffie, Dan Mor, Sigal Asaf, Ariel Farkash
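
The recursive handling of nested payloads (e.g., JSON embedded in a larger document) can be pictured with a toy masker that dispatches on the payload type it can infer. Real policy-driven engines like the one described above do far more (format preservation, data classes); this only shows the recursion pattern, and the policy format is invented.

    import json

    def mask(value, policy):
        """Mask fields named in `policy`, recursing into nested payloads."""
        if isinstance(value, dict):
            return {k: ("***" if k in policy else mask(v, policy))
                    for k, v in value.items()}
        if isinstance(value, list):
            return [mask(v, policy) for v in value]
        if isinstance(value, str):
            try:                              # nested payload: JSON-in-string
                return json.dumps(mask(json.loads(value), policy))
            except ValueError:
                return value                  # plain string, leave as is
        return value

    doc = {"user": "alice", "ssn": "123-45-6789",
           "blob": '{"ssn": "987-65-4321", "note": "ok"}'}
    print(mask(doc, policy={"ssn"}))          # masks both ssn occurrences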
Towards a Formal Approach for Data Minimization in Programs (Short Paper)
Abstract
As more and more processes are digitized, the protection of personal data becomes increasingly important for individuals, agencies, companies, and society in general. One principle of data protection is data minimization, which limits the processing and storage of personal data to the minimum necessary for the defined purpose. To adhere to this principle, an analysis of what data a piece of software needs is required. In this paper, we present an idea for a program analysis that connects data minimization with secure information flow to assess which personal data are required by a program: a program is decomposed into two programs. The first projects the original input, keeping only the minimal amount of required data. The second computes the original output from the projected input. Thus, we achieve a program variant that is compliant with data minimization. We define the approach, show how it can be used in different scenarios, and give examples of how to compute such a decomposition.
Florian Lanzinger, Alexander Weigl
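
A minimal worked example of the decomposition described above, with an invented discount program: project keeps only the minimal fact the output depends on, and residual recomputes the original output from that projection.

    # Original program: decides a discount from a full customer record.
    def original(record):
        return 0.2 if record["age"] >= 65 else 0.0

    # Decomposition in the spirit of the paper: `project` keeps only the
    # minimal information the output depends on (one boolean, not the exact
    # age), and `residual` reproduces the output from that projection.
    def project(record):
        return {"senior": record["age"] >= 65}

    def residual(projected):
        return 0.2 if projected["senior"] else 0.0

    rec = {"name": "Alice", "age": 70, "address": "..."}
    assert residual(project(rec)) == original(rec)  # same output, less data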

CBT Workshop: Mining, Consensus and Market Manipulation

Frontmatter
Virtual ASICs: Generalized Proof-of-Stake Mining in Cryptocurrencies
Abstract
In proof-of-work based cryptocurrencies, miners invest computing power to maintain a distributed ledger. One known drawback of such a consensus protocol is its immense energy consumption. To prevent this waste of energy, various consensus mechanisms such as proof-of-space or proof-of-stake have been proposed. In proof-of-stake, block creators are selected based on the amount of currency they stake instead of their expended computing power.
In this work we study Virtual ASICs, a generalization of proof-of-stake. Virtual ASICs are essentially a virtualized version of proof-of-work: miners can buy on-chain virtual mining machines, which can be powered by virtual electricity. Similar to their physical counterparts, each powered virtual ASIC has a certain chance of winning the right to create the next block. In the boundary case where virtual electricity is free, the protocol corresponds to proof-of-stake using an ASIC token that is separate from the currency itself (the amount of stake equals one's virtual computing power). In the other boundary case where virtual computers are free, we get a proof-of-burn equivalent, that is, a consensus mechanism in which miners 'burn' currency to obtain lottery tickets for the right to create the next block.
From a technical point of view, we provide the following contributions:
  • We design cryptographic protocols that allow Virtual ASICs to be sold in sealed-bid auctions on-chain. We ensure that as long as a majority of the miners in the system mine honestly, bids remain both private and binding, and miners cannot censor the bids of their competitors;
  • In order to implement our auction protocol, we introduce a novel all-or-nothing broadcast functionality for blockchains that allows one to “encrypt values to the future” and could be of independent interest;
  • Finally, we provide a consensus protocol based on Virtual ASICs by generalizing existing protocols for proof-of-stake consensus.
Chaya Ganesh, Claudio Orlandi, Daniel Tschudi, Aviv Zohar
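
The block lottery implied by powered virtual ASICs can be sketched in a few lines: each powered (and owned) machine is one lottery ticket. The ownership and power figures below are invented, and the real protocol selects winners cryptographically rather than with a local RNG.

    import random

    random.seed(0)
    owned   = {"alice": 10, "bob": 5, "carol": 1}   # on-chain virtual ASICs
    powered = {"alice": 4,  "bob": 5, "carol": 1}   # fed virtual electricity

    # One ticket per powered ASIC: the chance of creating the next block is
    # proportional to powered virtual computing power.
    tickets = [(m, min(owned[m], powered[m])) for m in owned]
    r = random.randrange(sum(n for _, n in tickets))
    for miner, n in tickets:
        if r < n:
            print("next block created by", miner)
            break
        r -= n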
Asymmetric Asynchronous Byzantine Consensus
Abstract
An important element of every blockchain network is its protocol for reaching consensus. In traditional, permissioned consensus protocols, all involved processes adhere to a global, symmetric failure model, typically defined only by bounds on the number of faulty processes. More flexible trust assumptions have recently been considered, especially in connection with blockchains. With asymmetric trust, in particular, a process is free to choose which other processes it trusts and which ones might collude against it.
Cachin and Tackmann (OPODIS 2019) introduced asymmetric quorum systems as a generalization of Byzantine quorum systems, which are the key abstraction for realizing consensus in a system with symmetric trust. This paper shows how to realize randomized signature-free asynchronous Byzantine consensus with asymmetric quorums. This results in an optimal consensus protocol with subjective, asymmetric trust and constant expected running time, which is suitable for applications in blockchain networks.
Christian Cachin, Luca Zanolini
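
The shift from symmetric to asymmetric trust can be illustrated by the quorum check itself: instead of one global threshold, each process waits for one of its own quorums. The quorum systems below are invented and ignore the consistency conditions the paper's constructions must satisfy.

    # Each process declares its own quorums over the processes it trusts.
    quorums = {
        "p1": [{"p1", "p2", "p3"}, {"p1", "p3", "p4"}],
        "p2": [{"p2", "p3", "p4"}],
        "p3": [{"p1", "p2", "p3", "p4"}],
    }

    def has_quorum(process, responders):
        """process may proceed once one of *its own* quorums responded."""
        return any(q <= responders for q in quorums[process])

    print(has_quorum("p1", {"p1", "p3", "p4"}))  # True
    print(has_quorum("p2", {"p1", "p3", "p4"}))  # False: p2 trusts differently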
Using Degree Centrality to Identify Market Manipulation on Bitcoin
Abstract
In 2014, the Mt.Gox Bitcoin exchange had its internal dataset hacked and leaked. Since then, several studies have employed this dataset to evaluate whether Mt.Gox was manipulating the Bitcoin market, and they have identified patterns of this manipulation. Based on these studies, this paper analyzes the Bitcoin blockchain in the period when Mt.Gox was active. We model the transactions in the blockchain as a graph and evaluate the degree centrality of each node. We then analyze how the ranking of nodes with the highest centrality values changes over time. Our conclusions indicate that the top nodes are stable, but there is a period where the ranking changes. To better understand this behavior, we simulate the insertion of transactions into the network and verify how the ranking changes. As a result, we provide indications that ranking changes can be used to detect malicious activities. We also show a case study using this ranking to predict abnormal behavior in the network.
Daiane M. Pereira, Rodrigo S. Couto
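
A toy version of the pipeline above using networkx: model transactions as a directed graph per period, compute degree centrality, and watch the top ranking over time. The edge lists are invented stand-ins for blockchain transactions.

    import networkx as nx

    txs_by_month = {
        "2013-01": [("a", "b"), ("a", "c"), ("b", "c")],
        "2013-02": [("a", "b"), ("d", "a"), ("d", "b"), ("d", "c")],
    }

    for month, edges in txs_by_month.items():
        g = nx.DiGraph(edges)
        centrality = nx.degree_centrality(g)
        top = sorted(centrality, key=centrality.get, reverse=True)[:3]
        print(month, "top nodes:", top)  # ranking shifts can flag manipulation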

CBT Workshop: Smart Contracts and Anonymity

Frontmatter
Augmenting MetaMask to Support TLS-endorsed Smart Contracts
Abstract
Users in blockchain systems are exposed to address-replacement attacks due to the weak binding between websites and smart contracts, as they have no way to verify the authenticity of obtained addresses. Prior research introduced TLS-endorsed Smart Contracts (TeSC), which equip smart contracts with authentication information proving their relation to the domain name of the respective website. For an efficient and user-friendly approach, this technology needs to be integrated with wallets. Based on an analysis of browser warnings regarding TLS certificates, we augment MetaMask with the ability to detect TeSC and warn users if attack scenarios are detected. To evaluate our work, we conduct a study with 40 participants, showing the effectiveness of TeSC in preventing address-replacement attacks and ensuring safe interaction between users and addresses.
Ulrich Gallersdörfer, Jonas Ebel, Florian Matthes
Smart Contracts for Incentivized Outsourcing of Computation
Abstract
Outsourcing computation allows a resource-limited client to expand its computational capabilities by outsourcing computation to other computing nodes or clouds. A basic requirement of outsourcing is assurance that the computation result is correct. We consider a smart-contract-based outsourcing system that achieves assurance by replicating the computation on two servers and accepts the computation result if the two responses match. A correct computation result is obtained by using incentivization to instigate correct behaviour in the servers. We show that all previous replication-based incentivized outsourcing protocols with proven correctness fail when automated by a smart contract, because of the copy attack, in which a contractor simply copies the submitted response of the other contractor. We then design an incentivization mechanism that uses two lightweight challenge-response protocols, invoked when the submitted results are compared, and employs monetary rewards, fines, and bounties to incentivize correct computation. We use game theory to model and analyze our mechanism, and we prove that with appropriate choices of the mechanism parameters, there is a single Nash equilibrium corresponding to the contractors' strategy of correctly computing the result. Our work provides a foundation for replicated incentivized computation in the smart contract setting and opens new research directions.
Alptekin Küpçü, Reihaneh Safavi-Naini
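
A caricature of the game-theoretic analysis: with a reward r, computation cost c, a challenge that catches a cheater with probability p, a fine f, and a bounty, one checks that no contractor gains by unilaterally deviating from honest computation. The payoff model and numbers are invented and far simpler than the mechanism analyzed in the paper.

    # Toy two-contractor game: COMPUTE (cost c) or CHEAT and gamble on the
    # challenge-response check, which catches a cheater with probability p.
    r, c, f, p, bounty = 10.0, 3.0, 20.0, 0.9, 5.0

    def payoff(me, other):
        if me == "compute":
            return r - c + (bounty * p if other == "cheat" else 0.0)
        return (r if other == "cheat" else 0.0) - p * f

    # (compute, compute) is a Nash equilibrium iff deviating does not pay.
    print(payoff("compute", "compute") >= payoff("cheat", "compute"))  # True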
Anonymous Sidechains
Abstract
Sidechains allow two or more blockchains to communicate with each other by transferring coins (or other ledger assets) from one to the other. This functionality makes sidechains one of the most prominent solutions for blockchain scalability and interoperability.
A number of sidechain constructions have already been proposed in the literature, presenting ways to securely move assets between blockchains for different types of underlying consensus mechanisms (PoW and PoS). In this work we study the problem of sidechains in the anonymous setting by demonstrating how multiple anonymous blockchains can interact with each other. We present the first formal definition of an anonymous sidechain and provide a first construction for privacy-preserving Zerocash [5] cross-ledger transactions.
Foteini Baldimtsi, Ian Miers, Xinyuan Zhang

CBT Workshop: Short Papers

Frontmatter
Filling the Tax Gap via Programmable Money
Abstract
We discuss the problem of facilitating tax auditing assuming “programmable money”, i.e., digital monetary instruments that are managed by an underlying distributed ledger. We explore how a taxation authority can verify the declared returns of its citizens and create a counter-incentive to tax evasion via two distinct mechanisms. First, we describe a design that enables tax auditing as a built-in feature with minimal changes to the underlying ledger's consensus protocol. Second, we offer an application-layer extension, which requires no modification to the underlying ledger's design. Both solutions provide a high level of privacy, ensuring that, apart from specific limited data given to the taxation authority, no additional information beyond what is already published on the underlying ledger is leaked.
Dimitris Karakostas, Aggelos Kiayias
Impact of Delay Classes on the Data Structure in IOTA
Abstract
In distributed ledger technologies (DLTs) with a directed acyclic graph (DAG) data structure, a message-issuing node can decide where to append that message and, consequently, how to grow the DAG. This DAG data structure can typically be decomposed into two pools of messages: referenced messages and unreferenced messages (tips). The selection of the parent messages to which a node appends the messages it issues depends on which messages it considers as tips. However, the exact time at which a message enters the tip pool of a node depends on the delay of that message. Previous works assumed that all messages have the same or similar delay; however, this may not generally be the case. We introduce the concept of classes of delays, where messages belonging to a certain class have a specific delay, and where these classes coexist in the DAG. We provide a general model that predicts the tip pool size for any finite number of different classes.
This categorisation and model are applied to the first iteration of the IOTA 2.0 protocol (a.k.a. Coordicide), where two distinct classes, namely value and data messages, coexist. We show that the tip pool size depends strongly on the dominating class that is present. Finally, we provide a methodology for controlling the tip pool size by dynamically adjusting the number of references a message creates.
Andreas Penzkofer, Olivia Saa, Daria Dziubałtowska
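
The effect of coexisting delay classes on the tip pool can be reproduced with a small simulation: messages only become selectable tips after their class delay, and each new message approves up to two visible tips. Rates, delays, and the two-parent rule are illustrative assumptions, not the Coordicide parameters.

    import random

    random.seed(0)
    rate  = {"data": 40, "value": 10}   # messages per tick (hypothetical)
    delay = {"data": 1,  "value": 5}    # ticks until a message becomes visible

    tips, pending, next_id = set(), [], 0
    for t in range(200):
        tips.update(m for ta, m in pending if ta <= t)   # delayed arrivals
        pending = [(ta, m) for ta, m in pending if ta > t]
        for cls, n in rate.items():
            for _ in range(n):
                # each new message approves (removes) up to two visible tips
                for parent in random.sample(sorted(tips), min(2, len(tips))):
                    tips.discard(parent)
                pending.append((t + delay[cls], next_id))
                next_id += 1
    print("tip pool size after 200 ticks:", len(tips))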
Secure Static Content Delivery for CDN Using Blockchain Technology
Abstract
A Content Distribution Network (CDN) is a network that distributes services and content spatially relative to end-users, providing high availability and high performance. The origin server uses several replicas to reach this goal, but trust issues arise among the replicas and between servers and clients.
In this work, we present a proof-of-concept for secure static content delivery (e.g., documents, images) using blockchain, a technology with the capability to ensure reliability and trust without a central authority. To test our proposal's feasibility, we developed a system prototype on an Ethereum private network. The tests show the system's effectiveness and its ability to support a new content distribution model over the Internet.
Mauro Conti, P. Vinod, Pier Paolo Tricomi
Lattice-Based Proof-of-Work for Post-Quantum Blockchains
Abstract
Proof of Work (PoW) protocols, originally proposed to circumvent DoS and email spam attacks, are now at the heart of the majority of recent cryptocurrencies. Current popular PoW protocols are based on hash puzzles, which are solved via a brute-force search for a hash output with particular properties, such as a certain number of leading zeros. By treating the hash as a random function and fixing a priori a sufficiently large search space, Grover's search algorithm gives an asymptotic quadratic advantage to quantum machines over classical machines. In this paper, as a step towards a fuller understanding of post-quantum blockchains, we propose a PoW protocol for which quantum machines have a smaller asymptotic advantage. Specifically, for a lattice of rank \(n\) sampled from a particular class, our protocol provides as the PoW an instance of the Hermite Shortest Vector Problem (Hermite-SVP) in the Euclidean norm, with a small approximation factor. Asymptotically, the best known classical and quantum algorithms that directly solve SVP-type problems are heuristic lattice sieves, which run in time \(2^{0.292n + o(n)}\) and \(2^{0.265n + o(n)}\), respectively. We discuss recent advances in SVP-type problem solvers and give examples of where the impetus provided by a lattice-based PoW would help explore often complex optimization spaces.
Rouzbeh Behnia, Eamonn W. Postlethwaite, Muslum Ozgur Ozmen, Attila Altay Yavuz
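
What makes this a usable PoW is cheap verification: a solution is any nonzero integer combination of the basis whose norm is within the Hermite bound. A numpy sketch for a full-rank toy basis; the real scheme fixes the lattice class, dimension, and approximation factor gamma.

    import numpy as np

    def verify(B, x, gamma):
        """Accept if v = B^T x is a nonzero lattice vector with
        ||v|| <= gamma * det(L)^(1/n)  (the Hermite-SVP condition)."""
        B, x = np.asarray(B, float), np.asarray(x)
        if not np.any(x):
            return False                      # the zero vector never counts
        n = B.shape[0]
        v = B.T @ x                           # lattice vector from coefficients
        det = abs(np.linalg.det(B))           # lattice determinant (full rank)
        return np.linalg.norm(v) <= gamma * det ** (1.0 / n)

    B = [[5, 0], [3, 4]]                      # toy rank-2 basis (rows)
    print(verify(B, [1, -1], gamma=1.2))      # True: ||(2,-4)|| <= 1.2*sqrt(20)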
Blockchain-Based Two-Factor Authentication for Credit Card Validation
Abstract
The widespread adoption of e-commerce and web-based business has brought a great increase in credit card use for online transactions, which in turn has resulted in sophisticated fraud attempts. Accurate fraud prevention and detection is a key concern in a cashless economy. Multifactor authentication, alongside methods such as machine-learning-based behavioral analysis, data mining, and blacklisting, is one of the effective methods augmenting primary information checking: SMS messages are sent to the registered phone in addition to checking credit card information, as a second level of protection. However, this information may be vulnerable to various attacks, since third-party services are involved. This paper proposes the adoption of blockchain as a secure platform to store the second-factor security information. The user's mobile device signature, attested by the bank, is stored in a permissioned blockchain. This information is accessed by the merchant through a user-friendly QR-code reading interface in order to verify that the user has the registered device. We present the system design along with potential threats and a security analysis.
Suat Mercan, Mumin Cebe, Kemal Akkaya, Julian Zuluaga
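
The second-factor flow described above reduces to a signature check against a key registered on the ledger. A sketch with the cryptography package and an Ed25519 key; the ledger dictionary, challenge format, and QR payload are hypothetical stand-ins.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    device_key = ed25519.Ed25519PrivateKey.generate()
    ledger = {"card:4111": device_key.public_key()}   # stand-in for the chain

    challenge = b"merchant-42|order-1001|nonce-7f3a"  # fresh per transaction
    qr_payload = device_key.sign(challenge)           # shown as a QR code

    try:
        ledger["card:4111"].verify(qr_payload, challenge)
        print("second factor OK: registered device is present")
    except InvalidSignature:
        print("reject: device signature invalid")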
Homomorphic Decryption in Blockchains via Compressed Discrete-Log Lookup Tables
Abstract
Many privacy-preserving blockchain and e-voting systems are based on the modified ElGamal scheme, which supports homomorphic addition of encrypted values. For practicality, though, decryption requires precomputed discrete-log (dlog) lookup tables along with algorithms like Shanks's baby-step giant-step and Pollard's kangaroo. We extend the Shanks approach, as it is the most commonly used method in practice due to its determinism and simplicity, by proposing a truncated lookup table strategy to speed up decryption and reduce memory requirements. While there is significant overhead at the precomputation phase, these costs can be parallelized and are only paid once and for all. As a starting point, we evaluated our solution against the widely used secp family of elliptic curves and show that we can reduce storage by 7x–14x, depending on the group size. Our algorithm can be immediately imported into existing works, especially when the range of encrypted values is known, such as in the Zether, PGC, and Solidus protocols.
Panagiotis Chatzigiannis, Konstantinos Chalkias, Valeria Nikolaenko
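
For intuition, here is plain baby-step giant-step over Z_p* (the paper works over elliptic-curve groups and truncates the stored table entries; this sketch keeps the table whole). The modulus, generator, and range bound are toy values.

    # Recovers m from h = g^m for m in [0, 2^16), using a baby-step table
    # of size B ~ sqrt(range) and giant steps of stride B.
    p, g = 1_000_003, 2
    T = 1 << 16                                # range of encrypted values
    B = 1 << 8                                 # baby-step count

    table = {pow(g, j, p): j for j in range(B)}  # baby steps: g^j -> j
    giant = pow(g, -B, p)                        # g^(-B) for giant steps

    def dlog(h):
        for i in range(T // B + 1):
            if h in table:
                return i * B + table[h]
            h = h * giant % p
        return None

    m = 51_234
    print(dlog(pow(g, m, p)) == m)             # True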
Backmatter
Metadata
Title
Data Privacy Management, Cryptocurrencies and Blockchain Technology
Editors
Prof. Joaquin Garcia-Alfaro
Jose Luis Muñoz-Tapia
Guillermo Navarro-Arribas
Miguel Soriano
Copyright Year
2022
Electronic ISBN
978-3-030-93944-1
Print ISBN
978-3-030-93943-4
DOI
https://doi.org/10.1007/978-3-030-93944-1
