Tools and Algorithms for the Construction and Analysis of Systems
28th International Conference, TACAS 2022, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022, Munich, Germany, April 2–7, 2022, Proceedings, Part I
- Open Access
- 2022
- Book
- Editors
- Dr. Dana Fisman
- Grigore Rosu
- Book Series
- Lecture Notes in Computer Science
- Publisher
- Springer International Publishing
About this book
This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022.
The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers from the affiliated competition SV-COMP and 1 paper consisting of the competition report.
TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.
Table of Contents
- Synthesis
- Frontmatter
HOLL: Program Synthesis for Higher Order Logic Locking
- Open Access
The chapter discusses the threat of counterfeiting and piracy in the semiconductor industry and introduces Higher Order Logic Locking (HOLL) as a solution. HOLL hides a higher-order relation instead of a sequence of independent key bits, making it more secure against attacks like the SAT attack. The authors propose a new attack, SynthAttack, to evaluate the security of HOLL and demonstrate its effectiveness. The chapter also covers the implementation and optimization of HOLL, highlighting its low overhead and resistance to existing attacks.
This summary of the content was generated with the help of AI.
Abstract
Logic locking “hides” the functionality of a digital circuit to protect it from counterfeiting, piracy, and malicious design modifications. The original design is transformed into a “locked” design such that the circuit reveals its correct functionality only when it is “unlocked” with a secret sequence of bits—the key bit-string. However, strong attacks, especially the SAT attack that uses a SAT solver to recover the key bit-string, have been profoundly effective at breaking the locked circuit and recovering the circuit functionality. We lift logic locking to Higher Order Logic Locking (HOLL) by hiding a higher-order relation, instead of a key of independent values, challenging the attacker to discover this key relation to recreate the circuit functionality. Our technique uses program synthesis to construct the locked design and synthesize a corresponding key relation. HOLL has low overhead, and existing attacks for logic locking do not apply, as the entity to be recovered is no longer a value. To evaluate our proposal, we propose a new attack (SynthAttack) that uses an inductive synthesis algorithm guided by an operational circuit as an input-output oracle to recover the hidden functionality. SynthAttack is inspired by the SAT attack and, like the SAT attack, it is verifiably correct, i.e., if the correct functionality is revealed, a verification check guarantees the same. Our empirical analysis shows that SynthAttack can break HOLL for small circuits and small key relations, but it is ineffective for real-life designs.
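As a point of reference (not taken from the paper), the classical bit-level logic locking that HOLL generalizes can be sketched in a few lines; the circuit, key, and names below are purely illustrative:

```python
# A minimal sketch of *classical* (first-order) logic locking, the
# scheme HOLL generalizes: a single hypothetical key bit guards a
# 2-input XOR circuit. All names here are illustrative.

def original(x):
    """The protected 2-input circuit: XOR of the two input bits of x."""
    return (x & 1) ^ ((x >> 1) & 1)

SECRET_KEY = 1  # hypothetical key bit baked into the tamper-proof memory

def locked(x, k):
    # XOR-based locking: a key-controlled XOR gate corrupts the output
    # whenever the supplied key bit k differs from SECRET_KEY.
    return original(x) ^ (k ^ SECRET_KEY)

# With the right key the locked circuit matches the original;
# with the wrong key every output bit is flipped.
assert all(locked(x, 1) == original(x) for x in range(4))
assert all(locked(x, 0) != original(x) for x in range(4))
```

Because the hidden entity here is a fixed bit, a SAT solver can search for it directly; HOLL's point is to hide a relation instead, so no such value exists to be recovered.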
The Complexity of LTL Rational Synthesis
- Open Access
The chapter delves into the complexity of LTL rational synthesis, a process that automatically constructs reactive systems to meet their specifications even in rational environments. It builds on earlier work to tighten upper bounds for settings with three or more players and concurrent games. The complexity analysis is parametric, considering the game graph, objectives of system components, and objectives of environment components. The study generalizes rational synthesis by combining cooperative and non-cooperative approaches, extending the complexity analysis to this broader definition. Notably, it introduces algorithms that reduce rational synthesis to the nonemptiness problem of tree automata, providing a detailed and nuanced understanding of the computational challenges involved.
Abstract
In rational synthesis, we automatically construct a reactive system that satisfies its specification in all rational environments, namely environments that have objectives and act to fulfill them. We complete the study of the complexity of LTL rational synthesis. Our contribution is threefold. First, we tighten the known upper bounds for settings that were left open in earlier work. Second, our complexity analysis is parametric, and we describe tight upper and lower bounds in each of the problem parameters: the game graph, the objectives of the system components, and the objectives of the environment components. Third, we generalize the definition of rational synthesis, combining the cooperative and non-cooperative approaches studied in earlier work, and extend our complexity analysis to the general definition.
Synthesis of Compact Strategies for Coordination Programs
- Open Access
The chapter 'Synthesis of Compact Strategies for Coordination Programs' by Kedar S. Namjoshi and Nisarg Patel addresses the crucial issue of synthesizing coordination strategies that are guaranteed not to initiate unnecessary actions in multi-agent settings like IoT and robotics. Current methods may produce strategies that issue useless actions, making coordination an attractive target for automated program synthesis. The authors formalize the notion of 'compactness' and propose a solution that transforms a given temporal logic specification using automata-theoretic constructions to incorporate a notion of minimality. The central result is that the winning strategies for the transformed specification are precisely the compact strategies for the original. This allows the application of known synthesis methods to produce compact strategies. The chapter reports on prototype implementations and experiments that demonstrate the feasibility and effectiveness of the proposed methods. It also discusses the relationship to quantitative synthesis and presents approximation methods to handle the complexity of synthesizing compact strategies. The work brings attention to the need for compactness in program synthesis and provides a precise formulation and practical algorithms to achieve it.
Abstract
In multi-agent settings, such as IoT and robotics, it is necessary to coordinate the actions of independent agents in order to achieve a joint behavior. While it is often easy to specify the desired joint behavior, programming the necessary coordination can be difficult. In this work, we develop theory and methods to synthesize coordination strategies that are guaranteed not to initiate unnecessary actions. We refer to such strategies as being “compact.” We formalize the intuitive notion of compactness; show that existing methods do not guarantee compactness; and propose a solution. The solution transforms a given temporal logic specification, using automata-theoretic constructions, to incorporate a notion of minimality. The central result is that the winning strategies for the transformed specification are precisely the compact strategies for the original. One can therefore apply known synthesis methods to produce compact strategies. We report on prototype implementations that synthesize compact strategies for temporal logic specifications and for specifications of multi-robot coordination.
ZDD Boolean Synthesis
- Open Access
The chapter discusses the application of Zero-Suppressed Binary Decision Diagrams (ZDDs) in boolean synthesis, a process crucial for electronic circuits and computing. It compares ZDDs with Binary Decision Diagrams (BDDs), demonstrating that ZDDs can offer more compact representations and faster compilation times. The authors present a full investigation comparing the sizes and performance of ZDDs and BDDs, focusing on realizability, witness construction, and end-to-end synthesis. Experimental results show that while neither approach dominates across all benchmarks, ZDDs excel in certain scalable families of formulas. This makes the chapter a valuable resource for researchers and practitioners in boolean logic and circuit design, offering insights into the potential of ZDDs as a complementary tool in boolean synthesis.
Abstract
Motivated by applications in boolean-circuit design, boolean synthesis is the process of synthesizing a boolean function with multiple outputs, given a relation between its inputs and outputs. Previous work has attempted to solve boolean functional synthesis by converting a specification formula into a Binary Decision Diagram (BDD) and quantifying existentially the output variables. We make use of the fact that the specification is usually given in the form of a Conjunctive Normal Form (CNF) formula, and we can perform resolution on a symbolic representation of a CNF formula in the form of a Zero-suppressed Binary Decision Diagram (ZDD). We adapt the realizability test to the context of CNF and ZDD, and show that the Cross operation defined in earlier work can be used for witness construction. Experiments show that our approach is complementary to BDD-based Boolean synthesis.
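The boolean functional synthesis problem itself can be made concrete with a tiny enumerative sketch (the symbolic ZDD/BDD machinery of the paper replaces this brute force; the names below are illustrative):

```python
# Enumerative stand-in for symbolic (BDD/ZDD-based) boolean synthesis:
# given a relation R over input and output bit-vectors, pick a witness
# output for every input. Realizable iff every input has a witness.
from itertools import product

def synthesize(R, n_in, n_out):
    table = {}
    for x in product((0, 1), repeat=n_in):
        witnesses = [y for y in product((0, 1), repeat=n_out) if R(x, y)]
        if not witnesses:
            return None  # unrealizable: some input has no valid output
        table[x] = witnesses[0]  # any witness yields a valid function
    return table

# Specification: the single output must equal x0 XOR x1.
spec = lambda x, y: y[0] == x[0] ^ x[1]
f = synthesize(spec, n_in=2, n_out=1)
assert f[(0, 1)] == (1,) and f[(1, 1)] == (0,)
```

The symbolic approaches do exactly this, but on a decision-diagram representation of R, so the per-input enumeration never materializes.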
- Verification
- Frontmatter
Comparative Verification of the Digital Library of Mathematical Functions and Computer Algebra Systems
- Open Access
The chapter focuses on the comparison and verification of the Digital Library of Mathematical Functions (DLMF) and two general-purpose computer algebra systems (CAS), Maple and Mathematica. The authors introduce a novel approach to translate mathematical expressions from the DLMF to these CAS using their previously developed tool, LACAST. The study highlights the challenges and solutions in translating mathematical expressions, particularly focusing on the semantic ambiguity of mathematical notations. The authors address issues related to the translation of sums, products, integrals, limits, and differentiations, thereby enhancing the reliability and coverage of the translations. The paper also provides an in-depth evaluation of the DLMF using both symbolic and numeric evaluations in Maple and Mathematica. The results demonstrate the feasibility and limitations of the translation approach, showcasing the potential for improving and verifying digital mathematical libraries and CAS. The chapter concludes with the provision of open access to all translations and the source code of LACAST, making it a valuable resource for researchers and practitioners in the field.
Abstract
Digital mathematical libraries assemble the knowledge of years of mathematical research. Numerous disciplines (e.g., physics, engineering, pure and applied mathematics) rely heavily on such compendia of gathered findings. Likewise, modern research applications rely more and more on computational solutions, which are often calculated and verified by computer algebra systems. Hence, the correctness, accuracy, and reliability of both digital mathematical libraries and computer algebra systems is a crucial attribute for modern research. In this paper, we present a novel approach to verify a digital mathematical library and two computer algebra systems with one another by converting mathematical expressions from one system to the other. We use our previously developed conversion tool (referred to as LACAST) to translate formulae from the NIST Digital Library of Mathematical Functions to the computer algebra systems Maple and Mathematica. The contributions of our presented work are as follows: (1) we present the most comprehensive verification of computer algebra systems and digital mathematical libraries with one another; (2) we significantly enhance the performance of the underlying translator in terms of coverage and accuracy; and (3) we provide open access to translations for Maple and Mathematica of the formulae in the NIST Digital Library of Mathematical Functions.
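The numeric-evaluation side of such a verification can be sketched very simply: evaluate both sides of a translated identity at sample points and compare. This is only an illustration of the idea, not the paper's pipeline; the function and parameter names are made up:

```python
# Hypothetical sketch of numeric verification of a translated identity:
# sample both sides at random points and compare within a tolerance.
import math
import random

def numeric_check(lhs, rhs, samples=100, tol=1e-9):
    for _ in range(samples):
        x = random.uniform(0.1, 10.0)
        if abs(lhs(x) - rhs(x)) > tol:
            return False  # counterexample found: identity (or translation) is wrong
    return True

# A true identity passes; a false candidate is caught almost surely.
assert numeric_check(lambda x: math.sin(x) ** 2 + math.cos(x) ** 2,
                     lambda x: 1.0)
```

Symbolic evaluation in a CAS plays the complementary role: it can prove an identity outright, where numeric sampling can only falsify it.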
Verifying Fortran Programs with CIVL
- Open Access
The chapter discusses the extension of the CIVL verification framework to handle Fortran programs, a language widely used in computational science and high-performance computing. The extension allows CIVL to directly consume Fortran source files, supporting features like array slicing and reshaping, and finding program violations specific to Fortran. The authors demonstrate the tool's effectiveness on a verification benchmark suite and real-world application kernels, showcasing its ability to verify functional correctness properties and detect defects. The chapter also highlights the challenges of preserving defects during translation and the importance of static analysis in the verification process.
Abstract
Fortran is widely used in computational science, engineering, and high performance computing. This paper presents an extension to the CIVL verification framework to check correctness properties of Fortran programs. Unlike previous work that translates Fortran to C, LLVM IR, or other intermediate formats before verification, our work allows CIVL to directly consume Fortran source files. We extended the parsing, translation, and analysis phases to support Fortran-specific features such as array slicing and reshaping, and to find program violations that are specific to Fortran, such as argument aliasing rule violations, invalid use of variable and function attributes, or defects due to Fortran’s unspecified expression evaluation order. We demonstrate the usefulness of our tool on a verification benchmark suite and kernels extracted from a real-world application.
NORMA: a tool for the analysis of Relay-based Railway Interlocking Systems
- Open Access
The chapter introduces Norma, a tool designed to analyze Relay-based Railway Interlocking Systems (RRIS). Developed with funding from the Italian Railway Network, Norma supports the reverse engineering and migration of legacy RRIS to modern computer-based systems. It features a graphical modeling interface with over 200 configurable components, and uses formal verification techniques to provide syntactic and semantic checks, simulation, and property extraction. Norma's internal representation is translated into highly optimized Timed nuXmv models, enabling efficient verification and analysis. The tool has been experimentally evaluated on real-world RRIS schematics, demonstrating its effectiveness in supporting modelers and reducing verification times. Norma's unique approach to component-based modeling and its integration of powerful reasoning capabilities make it a standout tool in the field.
Abstract
We present Norma, a tool for the modeling and analysis of Relay-based Railway Interlocking Systems (RRIS). Norma is the result of a research project funded by the Italian Railway Network, to support the reverse engineering and migration to computer-based technology of legacy RRIS. The frontend fully supports the graphical modeling of Italian RRIS, with a palette of over two hundred basic components, stubs to abstract RRIS subcircuits, and requirements in terms of formal properties. The internal component-based representation is translated into highly optimized Timed nuXmv models, and supports various syntactic and semantic checks based on formal verification, simulation and test case generation. Norma is experimentally evaluated, demonstrating the practical support for the modelers, and the effectiveness of the underlying optimizations.
Efficient Neural Network Analysis with Sum-of-Infeasibilities
- Open Access
The chapter introduces a novel method for analyzing neural network verification queries using the Sum-of-Infeasibilities (SoI) technique. Inspired by methods in convex optimization, the SoI function measures the error in the activation functions and guides the search for satisfying assignments in a convex relaxation of the network. The authors propose DeepSoI, a stochastic procedure that minimizes the SoI using Markov chain Monte Carlo sampling, and demonstrate its integration into a complete verification procedure. The chapter highlights the benefits of the SoI-based approach through extensive experimental evaluation, showing significant performance improvements over existing methods. Additionally, the authors present an interesting use case for improving the perturbation bounds found by adversarial attack algorithms.
Abstract
Inspired by sum-of-infeasibilities methods in convex optimization, we propose a novel procedure for analyzing verification queries on neural networks with piecewise-linear activation functions. Given a convex relaxation which over-approximates the non-convex activation functions, we encode the violations of activation functions as a cost function and optimize it with respect to the convex relaxation. The cost function, referred to as the Sum-of-Infeasibilities (SoI), is designed so that its minimum is zero and achieved only if all the activation functions are satisfied. We propose a stochastic procedure, DeepSoI, to efficiently minimize the SoI. An extension to a canonical case-analysis-based complete search procedure can be achieved by replacing the convex procedure executed at each search state with DeepSoI. Extending the complete search with DeepSoI achieves multiple simultaneous goals: 1) it guides the search towards a counter-example; 2) it enables more informed branching decisions; and 3) it creates additional opportunities for bound derivation. An extensive evaluation across different benchmarks and solvers demonstrates the benefit of the proposed techniques. In particular, we demonstrate that SoI significantly improves the performance of an existing complete search procedure. Moreover, the SoI-based implementation outperforms other state-of-the-art complete verifiers. We also show that our technique can efficiently improve upon the perturbation bound derived by a recent adversarial attack algorithm.
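For ReLU networks, the shape of such a cost function can be sketched in a few lines. This is a simplified illustration of the SoI idea (zero exactly when every activation constraint holds), not the paper's actual encoding:

```python
# Simplified sketch of a Sum-of-Infeasibilities cost for ReLU constraints.
# A candidate assignment from the convex relaxation gives, per neuron,
# a (pre-activation, post-activation) pair; the SoI sums the violations.

def relu_violation(pre, post):
    """Distance of the pair (pre, post) from satisfying post == ReLU(pre)."""
    return abs(post - max(0.0, pre))

def sum_of_infeasibilities(pairs):
    """Zero iff every ReLU constraint is met; positive otherwise."""
    return sum(relu_violation(pre, post) for pre, post in pairs)

# Both constraints satisfied -> SoI is 0; one violated -> positive.
assert sum_of_infeasibilities([(2.0, 2.0), (-1.0, 0.0)]) == 0.0
assert sum_of_infeasibilities([(2.0, 2.0), (-1.0, 0.5)]) == 0.5
```

Minimizing this cost over the convex relaxation therefore drives the search toward assignments that satisfy the true, non-convex network semantics.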
- Blockchain
- Frontmatter
Formal Verification of the Ethereum 2.0 Beacon Chain
- Open Access
The chapter 'Formal Verification of the Ethereum 2.0 Beacon Chain' by Franck Cassez, Joanne Fuller, and Aditya Asgaonkar delves into the formal verification of the Ethereum 2.0 Beacon Chain, a pivotal component of the new Proof-of-Stake Ethereum network. The authors detail their methodology using the verification-friendly language Dafny to verify the absence of runtime errors in the reference implementation. They uncovered several issues, proposed verified fixes, and synthesized functional correctness specifications. The chapter highlights the importance of formal verification in ensuring the safety and reliability of the Beacon Chain, which manages a vast amount of assets and is crucial for the Ethereum 2.0 ecosystem. The authors also discuss the challenges and lessons learned during the verification process, providing valuable insights into the practical application of formal methods in blockchain technology.
Abstract
We report our experience in the formal verification of the reference implementation of the Beacon Chain. The Beacon Chain is the backbone component of the new Proof-of-Stake Ethereum 2.0 network: it is in charge of tracking information about the validators, their stakes, their attestations (votes) and, if some validators are found to be dishonest, of slashing them (they lose some of their stakes). The Beacon Chain is mission-critical and any bug in it could compromise the whole network. The Beacon Chain reference implementation developed by the Ethereum Foundation is written in Python, and provides a detailed operational description of the state machine each Beacon Chain’s network participant (node) must implement. We have formally specified and verified the absence of runtime errors in (a large and critical part of) the Beacon Chain reference implementation using the verification-friendly language Dafny. During the course of this work, we have uncovered several issues and proposed verified fixes. We have also synthesised functional correctness specifications that enable us to provide guarantees beyond runtime errors. Our software artefact with the code and proofs in Dafny is available at https://github.com/ConsenSys/eth2.0-dafny.
Fast and Reliable Formal Verification of Smart Contracts with the Move Prover
- Open Access
The Move Prover (MVP) is a formal verification tool designed for smart contracts written in the Move programming language, which is used in the Diem blockchain. The chapter discusses the challenges and solutions in making MVP fast and reliable, including an alias-free memory model, fine-grained invariant checking, and monomorphization. These improvements have significantly reduced verification times and timeouts, making MVP practical for routine use by developers. The chapter also explores the design of MVP and its specification language, which supports expressive specifications and invariants. The Move Prover has been successfully applied to verify large parts of the Diem framework and is integrated into the continuous integration process, ensuring the reliability of smart contracts.
Abstract
The Move Prover (MVP) is a formal verifier for smart contracts written in the Move programming language. MVP has an expressive specification language, and is fast and reliable enough that it can be run routinely by developers and in integration testing. Besides the simplicity of smart contracts and the Move language, three implementation approaches are responsible for the practicality of MVP: (1) an alias-free memory model, (2) fine-grained invariant checking, and (3) monomorphization. The entirety of the Move code for the Diem blockchain has been extensively specified and can be completely verified by MVP in a few minutes. Changes in the Diem framework must be successfully verified before being integrated into the open source repository on GitHub.
A Max-SMT Superoptimizer for EVM handling Memory and Storage
- Open Access
The chapter introduces GASOLv2, a superoptimizer for Ethereum smart contracts that optimizes both gas consumption and byte-size. It leverages a Max-SMT approach to handle memory and storage operations, addressing challenges that were previously left unresolved. The tool is designed to work with the Solidity compiler solc and has shown significant improvements in optimization. The chapter discusses the architecture and components of GASOLv2, including the synthesis of stack and memory specifications, simplification rules, and the use of a Max-SMT solver for optimization. The authors present experimental results demonstrating the effectiveness of GASOLv2 in achieving substantial gains in gas consumption and byte-size optimization. The chapter also highlights the practical implications of handling memory operations in superoptimization and the potential for further improvements in the field.
Abstract
Superoptimization is a compilation technique that searches for the optimal sequence of instructions semantically equivalent to a given (loop-free) initial sequence. With the advent of SMT solvers, it has been successfully applied to LLVM code (to reduce the number of instructions) and to Ethereum EVM bytecode (to reduce its gas consumption). Both applications, when proven practical, have left out memory operations and thus missed important optimization opportunities. A main challenge to superoptimization today is handling memory operations while remaining scalable. We present GASOLv2, a gas and bytes-size superoptimization tool for Ethereum smart contracts, that leverages a previous Max-SMT approach for only stack optimization to also optimize with respect to memory and storage. GASOLv2 can be used to optimize the size in bytes, aligned with the optimization criterion used by the Solidity compiler solc, and it can also be used to optimize gas consumption. Our experiments on 12,378 blocks from 30 randomly selected real contracts achieve gains of 16.42% in gas with respect to the previous version of the optimizer without memory handling, and gains of 3.28% in bytes-size over code already optimized by solc.
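The core search problem behind superoptimization can be illustrated with a toy stack machine and brute-force enumeration (in GASOL-style tools a Max-SMT solver replaces the enumeration, and equivalence is proven rather than tested; the instruction set below is invented for illustration):

```python
# Toy superoptimizer: find the shortest straight-line stack program
# agreeing with a target function on a set of test inputs. An SMT
# solver replaces this brute force in real superoptimizers.
from itertools import product

def run(program, x, y):
    """Execute a straight-line program on a tiny stack machine."""
    stack = [x, y]  # y is on top of the stack
    for op in program:
        if op == "DUP":
            stack.append(stack[-1])
        elif len(stack) < 2:
            return None  # stack underflow: invalid program
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "SWAP":
            stack[-1], stack[-2] = stack[-2], stack[-1]
    return stack[-1]

def superoptimize(target, instrs, max_len, tests):
    for n in range(1, max_len + 1):  # shortest programs first
        for seq in product(instrs, repeat=n):
            if all(run(seq, x, y) == target(x, y) for x, y in tests):
                return list(seq)
    return None

tests = [(1, 2), (3, 5), (0, 7)]
best = superoptimize(lambda x, y: x + y + y, ["ADD", "DUP", "SWAP"],
                     3, tests)
assert best is not None and len(best) == 3
assert all(run(best, x, y) == x + y + y for x, y in tests)
```

Swapping "shortest" for "cheapest" under a cost model (gas, or bytes) gives exactly the optimization criteria GASOLv2 targets.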
- Grammatical Inference
- Frontmatter
A New Approach for Active Automata Learning Based on Apartness
- Open Access
The chapter introduces a new method for active automata learning called L#, which focuses on establishing apartness rather than equivalence of observations. This approach simplifies the learning process by directly operating on an observation tree, avoiding the need for auxiliary data structures like observation tables. The L# algorithm has the same asymptotic query and symbol complexities as the best existing learning algorithms but integrates adaptive distinguishing sequences to boost performance. Experiments with a prototype implementation suggest that L# is competitive with highly optimized algorithms. The chapter also discusses related work, including other approaches to active automata learning and the integration of conformance testing techniques to enhance learning efficiency.
Abstract
We present L#, a new and simple approach to active automata learning. Instead of focusing on equivalence of observations, like the L* algorithm and its descendants, L# takes a different perspective: it tries to establish apartness, a constructive form of inequality. L# does not require auxiliary notions such as observation tables or discrimination trees, but operates directly on tree-shaped automata. L# has the same asymptotic query and symbol complexities as the best existing learning algorithms, but we show that adaptive distinguishing sequences can be naturally integrated to boost the performance of L# in practice. Experiments with a prototype implementation, written in Rust, suggest that L# is competitive with existing algorithms.
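The apartness notion at the heart of L# is easy to state concretely: two states of the observation tree are apart once some input word observed from both yields different outputs. A simplified sketch (the observation-tree bookkeeping of the real algorithm is elided; the data layout is illustrative):

```python
# Simplified apartness check for Mealy-style observations, where each
# state carries a map from input words to the output words observed.

def apart(obs1, obs2):
    """True iff some commonly-observed input word distinguishes the two
    states -- a constructive witness that they cannot be merged."""
    common = set(obs1) & set(obs2)
    return any(obs1[w] != obs2[w] for w in common)

q1 = {"a": "0", "ab": "01"}
q2 = {"a": "0", "ab": "00", "b": "1"}
q3 = {"a": "0"}
assert apart(q1, q2)       # they disagree on the word "ab"
assert not apart(q1, q3)   # no distinguishing word observed (yet)
```

Note the constructive asymmetry: apartness, once witnessed, is permanent, whereas "not apart" only means no distinguishing word has been observed so far.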
Learning Realtime One-Counter Automata
- Open Access
The chapter introduces a novel algorithm for learning realtime one-counter automata, which are more expressive models than finite-state automata. The algorithm extends Angluin’s L* algorithm by incorporating counter value and partial equivalence queries, enabling it to handle more complex systems. The authors prove that their algorithm runs in exponential time and space, and they evaluate it through experiments on random benchmarks and a use case involving JSON-stream validation. The chapter also discusses the challenges and future work in refining the algorithm and its applications.
Abstract
We present a new learning algorithm for realtime one-counter automata. Our algorithm uses membership and equivalence queries as in Angluin’s L* algorithm, as well as counter value queries and partial equivalence queries. In a partial equivalence query, we ask the teacher whether the language of a given finite-state automaton coincides with a counter-bounded subset of the target language. We evaluate an implementation of our algorithm on a number of random benchmarks and on a use case regarding efficient JSON-stream validation.
Scalable Anytime Algorithms for Learning Fragments of Linear Temporal Logic
- Open Access
The chapter introduces a novel algorithm for learning formulas in fragments of Linear Temporal Logic (LTL) without the U-operator, addressing the scalability and computational resource issues of existing methods. The algorithm is anytime, meaning it can produce separating formulas even if it times out, and it uses a combination of directed formulas and Boolean combinations to construct these formulas efficiently. The authors demonstrate the algorithm's effectiveness and scalability through empirical evaluations, showing that it outperforms existing tools in both speed and the size of formulas it can learn. The chapter also discusses the theoretical guarantees of the algorithm and its potential for future extensions to full LTL.
Abstract
Linear temporal logic (LTL) is a specification language for finite sequences (called traces) widely used in program verification, motion planning in robotics, process mining, and many other areas. We consider the problem of learning formulas in fragments of LTL without the U-operator for classifying traces; despite a growing interest of the research community, existing solutions suffer from two limitations: they do not scale beyond small formulas, and they may exhaust computational resources without returning any result. We introduce a new algorithm addressing both issues: our algorithm is able to construct formulas an order of magnitude larger than previous methods, and it is anytime, meaning that it in most cases successfully outputs a formula, albeit possibly not of minimal size. We evaluate the performances of our algorithm using an open source implementation against publicly available benchmarks.
Learning Model Checking and the Kernel Trick for Signal Temporal Logic on Stochastic Processes
- Open Access
The chapter discusses the application of the kernel trick to Signal Temporal Logic (STL) formulae, enabling efficient learning and prediction of formula satisfaction on stochastic processes. The authors introduce a kernel function for STL that allows for the circumvention of manual feature extraction, demonstrating the high precision and computational efficiency of their approach. The chapter also provides a theoretical analysis with a PAC bound, ensuring the close-to-optimal performance of their predictor. The authors evaluate their method through experiments and showcase its practical applicability to various stochastic processes and real-world formulae.
Abstract
We introduce a similarity function on formulae of signal temporal logic (STL). It comes in the form of a kernel function, well known in machine learning as a conceptually and computationally efficient tool. The corresponding kernel trick allows us to circumvent the complicated process of feature extraction, i.e. the (typically manual) effort to identify the decisive properties of formulae so that learning can be applied. We demonstrate this consequence and its advantages on the task of predicting (quantitative) satisfaction of STL formulae on stochastic processes: Using our kernel and the kernel trick, we learn (i) computationally efficiently (ii) a practically precise predictor of satisfaction, (iii) avoiding the difficult task of finding a way to explicitly turn formulae into vectors of numbers in a sensible way. We back the high precision we have achieved in the experiments by a theoretically sound PAC guarantee, ensuring our procedure efficiently delivers a close-to-optimal predictor.
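The flavor of such a kernel can be sketched empirically: embed each formula by its quantitative satisfaction (robustness) on sampled trajectories, and take an average inner product. This is a simplified illustration of the idea, not the paper's exact construction; the toy "formulae" below are made up:

```python
# Empirical sketch of a kernel on temporal-logic formulae: the average
# product of their robustness values over sampled trajectories, i.e.
# an inner product of "robustness signatures".

def stl_kernel(phi, psi, trajectories):
    """phi and psi map a trajectory to a real-valued robustness."""
    return sum(phi(t) * psi(t) for t in trajectories) / len(trajectories)

# Toy formulae on real-valued traces: robustness of G(x > 0) is the
# minimum of the trace, robustness of F(x > 0) is the maximum.
always_pos = lambda t: min(t)
eventually_pos = lambda t: max(t)
trajs = [[1.0, 2.0], [-1.0, 3.0]]
k = stl_kernel(always_pos, eventually_pos, trajs)
assert k == (1.0 * 2.0 + (-1.0) * 3.0) / 2  # = -0.5
```

With such a kernel in hand, standard kernel methods (e.g. kernel regression) can learn over formulae without ever building explicit feature vectors for them.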
- Title
- Tools and Algorithms for the Construction and Analysis of Systems
- Editors
-
Dr. Dana Fisman
Grigore Rosu
- Copyright Year
- 2022
- Publisher
- Springer International Publishing
- Electronic ISBN
- 978-3-030-99524-9
- Print ISBN
- 978-3-030-99523-2
- DOI
- https://doi.org/10.1007/978-3-030-99524-9