2016 | Book

Principles of Performance and Reliability Modeling and Evaluation

Essays in Honor of Kishor Trivedi on his 70th Birthday

About this book

This book presents the latest key research into the performance and reliability aspects of dependable fault-tolerant systems and features commentary on the fields studied by Prof. Kishor S. Trivedi during his distinguished career. Treating system evaluation as a fundamental tenet in the design of modern systems, the book uses performance and dependability as common measures and covers novel ideas, methods, algorithms, techniques, and tools for the in-depth study of the performance and reliability aspects of dependable fault-tolerant systems. It identifies the challenges that designers and practitioners face in ensuring the reliability, availability, and performance of systems, with special focus on their dynamic behaviors and dependencies, and provides system researchers, performance analysts, and practitioners with the tools to address these challenges in their work. With contributions from Prof. Trivedi's former PhD students and collaborators, many of whom are internationally recognized experts, the book honors him on the occasion of his 70th birthday. It serves as a valuable resource for all engineering disciplines, including electrical, computer, civil, mechanical, and industrial engineering, as well as production and manufacturing.

Table of Contents

Frontmatter

Phase Type Distributions, Expectation Maximization Algorithms, and Probabilistic Graphical Models

Frontmatter
Phase Type and Matrix Exponential Distributions in Stochastic Modeling
Abstract
Since their introduction, the properties of Phase Type (PH) distributions have been analyzed and many interesting theoretical results have been found. Thanks to these results, PH distributions have been profitably used in many modeling contexts where non-exponentially distributed behavior is present. Matrix Exponential (ME) distributions have a matrix representation structurally similar to that of PH distributions but form a larger class. For this reason, ME distributions can usefully replace PH distributions in modeling contexts, using the same computational techniques and similar algorithms, giving rise to new opportunities: they can represent different dynamics (e.g., faster dynamics) or the same dynamics at lower computational cost. In this chapter, we deal with the characteristics of PH and ME distributions and their use in the stochastic analysis of complex systems. Moreover, the techniques used in the analysis to take advantage of them are reviewed.
András Horváth, Marco Scarpa, Miklós Telek
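The matrix representation \((\alpha, T)\) mentioned above lends itself directly to computation. Below is a minimal sketch, not taken from the chapter, of how the CDF, density, and moments of a PH distribution follow from its defining matrices; the two-phase example values are illustrative.

```python
# Hedged sketch: evaluating a phase-type (PH) distribution from its matrix
# representation (alpha, T). The example matrices are made up.
#   CDF:    F(x)   = 1 - alpha * expm(T x) * 1
#   PDF:    f(x)   = alpha * expm(T x) * t0, with exit vector t0 = -T * 1
#   Moment: E[X^k] = k! * alpha * (-T)^{-k} * 1
from math import factorial
import numpy as np
from scipy.linalg import expm

alpha = np.array([1.0, 0.0])       # initial phase probabilities
T = np.array([[-3.0,  2.0],        # sub-generator over the transient phases
              [ 0.0, -1.5]])
ones = np.ones(2)
t0 = -T @ ones                     # exit rates into the absorbing state

def ph_cdf(x):
    return 1.0 - alpha @ expm(T * x) @ ones

def ph_pdf(x):
    return alpha @ expm(T * x) @ t0

def ph_moment(k):
    return factorial(k) * alpha @ np.linalg.matrix_power(-np.linalg.inv(T), k) @ ones

print(ph_cdf(1.0), ph_pdf(1.0), ph_moment(1))
```
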
An Analytical Framework to Deal with Changing Points and Variable Distributions in Quality Assessment
Abstract
Nonfunctional properties such as dependability and performance have a growing impact on the design of a broad range of systems and services, where tighter constraints and stronger requirements have to be met. As a result, aspects such as dependencies and interference, quite often neglected in the past, now have to be taken into account due to the higher demands on quality. In this chapter, we associate such aspects with the operating conditions of a system, proposing an analytical framework to evaluate the effects of changing conditions on the system's quality properties. Starting from the phase type expansion technique, we developed a fitting algorithm able to capture the behavior of the system at changing points, implementing a codomain memory policy that forces the continuity of the observed quantity when operating conditions change. Then, to also deal with the state-space explosion of the underlying stochastic process, we resort to Kronecker algebra, providing a tool able to evaluate, in both transient and steady state, the nonfunctional properties of systems affected by variable operating conditions. Examples from different domains are discussed to demonstrate the effectiveness of the proposed framework and its suitability to a wide range of problems.
Dario Bruneo, Salvatore Distefano, Francesco Longo, Marco Scarpa
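The Kronecker-algebra step mentioned in the abstract has a compact numerical core: for independent subprocesses, the joint generator is a Kronecker sum, so the full state space never has to be written out by hand. Here is a minimal sketch with two assumed 2-state subprocesses, not the chapter's framework itself.

```python
# Hedged sketch: composing two independent CTMCs via Kronecker algebra.
# For generators Q1 (n1 x n1) and Q2 (n2 x n2), the joint generator is the
# Kronecker sum  Q = Q1 (+) Q2 = Q1 (x) I2 + I1 (x) Q2.
import numpy as np

Q1 = np.array([[-0.2,  0.2],   # e.g., an operating-condition process (made up)
               [ 0.5, -0.5]])
Q2 = np.array([[-1.0,  1.0],   # e.g., a failure/repair process (made up)
               [ 4.0, -4.0]])

Q = np.kron(Q1, np.eye(2)) + np.kron(np.eye(2), Q2)   # 4x4 joint generator

# Steady state: solve pi Q = 0 with the normalization sum(pi) = 1.
A = np.vstack([Q.T, np.ones(4)])
b = np.zeros(5); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)   # factorizes as the product of the two marginal distributions
```
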
Fitting Phase-Type Distributions and Markovian Arrival Processes: Algorithms and Tools
Abstract
This chapter provides a comprehensive survey of PH (phase-type) distribution and MAP (Markovian arrival process) fitting. The PH distribution and MAP are widely used in analytical model-based performance evaluation because they can approximate non-Markovian models with arbitrary accuracy while keeping the overall model Markovian. Drawing on the large body of past research on PH/MAP fitting, we present the mathematical definitions of the PH distribution and MAP, and summarize the most recent state-of-the-art fitting methods. We also offer an overview of software tools for PH/MAP fitting.
Hiroyuki Okamura, Tadashi Dohi
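As a flavor of what such fitting tools automate, here is a minimal two-moment matching sketch, far simpler than the EM-based methods the chapter surveys: the squared coefficient of variation decides between an Erlang and a balanced-means two-phase hyperexponential. The closed forms are standard textbook ones; the data are synthetic.

```python
# Hedged sketch of classical two-moment PH fitting (not an EM algorithm).
import numpy as np

def fit_ph_two_moments(samples):
    m1 = np.mean(samples)
    scv = np.var(samples) / m1**2          # squared coefficient of variation
    if scv <= 1.0:
        # Erlang-k: SCV = 1/k, so pick k = round(1/scv) and rate = k/m1.
        k = max(1, round(1.0 / scv))
        return ("erlang", k, k / m1)
    # Two-phase hyperexponential with balanced means (common closed form):
    p = 0.5 * (1.0 + np.sqrt((scv - 1.0) / (scv + 1.0)))
    l1 = 2.0 * p / m1
    l2 = 2.0 * (1.0 - p) / m1
    return ("hyperexp", (p, l1), (1.0 - p, l2))

rng = np.random.default_rng(1)
data = rng.gamma(shape=3.0, scale=0.5, size=10_000)   # SCV = 1/3 < 1
print(fit_ph_two_moments(data))                        # ~ Erlang-3
```
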
Constant-Stress Accelerated Life-Test Models and Data Analysis for One-Shot Devices
Abstract
In reliability analysis, accelerated life-tests are commonly used to induce rapid failures, thus producing more lifetime information in a relatively short period of time. A link function relating stress levels and lifetimes is then utilized to extrapolate lifetimes of units from accelerated conditions to normal operating conditions. In one-shot device testing, commonly encountered with devices such as munitions, rockets, and automobile air bags, either left- or right-censored data are collected instead of the actual lifetimes of the devices under test. In this chapter, we study binary response data of one-shot devices collected from constant-stress accelerated life-tests, and discuss the analysis of such data under parametric and semi-parametric models. In addition, a competing risks model is introduced into the one-shot device testing analysis under the constant-stress accelerated life-test setting. Finally, some numerical examples are presented to illustrate the models and inferential results discussed here.
Narayanaswamy Balakrishnan, Man Ho Ling, Hon Yiu So
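To make the setup concrete, here is a hedged sketch, with invented parameter values, of maximum-likelihood estimation for the simplest such model: exponential lifetimes, a log-linear link between stress and failure rate, and binary failed-by-inspection-time observations (left- or right-censored, as described above).

```python
# Hedged sketch: one-shot devices inspected at time tau under stress s, with
# exponential lifetimes and log-linear link rate(s) = exp(a + b*s).
# Each observation is binary: failed by tau (left-censored) or not.
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, stress, tau, failed):
    a, b = theta
    rate = np.exp(a + b * stress)
    p_fail = np.clip(1.0 - np.exp(-rate * tau), 1e-12, 1 - 1e-12)  # P(T <= tau)
    return -np.sum(failed * np.log(p_fail) + (1 - failed) * np.log(1 - p_fail))

rng = np.random.default_rng(7)
stress = np.repeat([30.0, 40.0, 50.0], 100)      # three accelerated levels
tau = 10.0
true_rate = np.exp(-5.0 + 0.08 * stress)         # ground truth for the demo
failed = (rng.exponential(1.0 / true_rate) <= tau).astype(float)

fit = minimize(neg_loglik, x0=[-4.0, 0.05], args=(stress, tau, failed))
a_hat, b_hat = fit.x
# Extrapolate to a normal operating stress of 20 via the link function:
print(a_hat, b_hat, "rate at s=20:", np.exp(a_hat + b_hat * 20.0))
```
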
Probabilistic Graphical Models for Fault Diagnosis in Complex Systems
Abstract
In this chapter, we discuss the problem of fault diagnosis for complex systems in two different contexts: static and dynamic probabilistic graphical models of systems. The fault diagnosis problem is represented using a tripartite probabilistic graphical model. The first layer of this tripartite graph is composed of components of the system, which are the potential sources of failures. The condition of each component is represented by a binary state variable which is zero if the component is healthy and one otherwise. The second layer is composed of tests with binary outcomes (pass or fail) and the third layer is the noisy observations associated with the test outcomes. The cause–effect relations between the states of components and the observed test outcomes can be compactly modeled in terms of detection and false alarm probabilities. For a failure source and an observed test outcome, the probability of fault detection is defined as the probability that the observed test outcome is a fail given that the component is faulty, and the probability of false alarm is defined as the probability that the observed test outcome is a fail given that the component is healthy. When the probability of fault detection is one and the probability of false alarm is zero, the test is termed perfect; otherwise, it is deemed imperfect.

In static models, the diagnosis problem is formulated as one of maximizing the posterior probability of component states given the observed fail or pass outcomes of tests. Since the solution to this problem is known to be NP-hard, to find near-optimal diagnostic solutions, we use a Lagrangian (dual) relaxation technique, which has the desirable property of providing a measure of suboptimality in terms of the approximate duality gap. Indeed, the solution would be optimal if the approximate duality gap is zero. The static problem is discussed in detail and some interesting properties, such as the reduction of the problem to a set covering problem in the case of perfect tests, are discussed. We also visualize the dual function graphically and introduce some insights into the static fault diagnosis problem.

In the context of dynamic probabilistic graphical models, it is assumed that the states of components evolve as independent Markov chains and that, at each time epoch, we have access to some of the observed test outcomes. Given the observed test outcomes at different time epochs, the goal is to determine the most likely evolution of the states of components over time. The application of dual relaxation techniques results in significant reduction in the computational burden as it transforms the original coupled problem into separable subproblems, one for each component, which are solved using a Viterbi decoding algorithm.

The problems, as stated above, can be regarded as passive monitoring, which relies on synchronous or asynchronous availability of sensor results to infer the most likely state evolution of component states. When information is sequentially acquired to isolate the faults in minimum time, cost, or other economic factors, the problem of fault diagnosis can be viewed as active probing (also termed sequential testing or troubleshooting). We discuss the solution of active probing problems using the information heuristic and rollout strategies of dynamic programming. The practical applications of passive monitoring and active probing to fault diagnosis problems in automotive, aerospace, power, and medical systems are briefly mentioned.
Ali Abdollahi, Krishna R. Pattipati, Anuradha Kodali, Satnam Singh, Shigang Zhang, Peter B. Luh
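The detection/false-alarm model above can be written down in a few lines. The sketch below, with invented coverage and probabilities, computes the MAP diagnosis by brute-force enumeration rather than the chapter's Lagrangian relaxation, so it is only feasible for a handful of components.

```python
# Hedged sketch: static fault diagnosis on a tripartite model, noisy-OR tests.
import itertools
import numpy as np

n_comp = 3
prior = np.array([0.1, 0.05, 0.2])       # P(component faulty), made up
coverage = np.array([[1, 1, 0, 0],       # coverage[i, j] = 1 if test j
                     [0, 1, 1, 0],       # observes component i
                     [0, 0, 1, 1]])
pd, pf = 0.95, 0.02                      # detection / false-alarm probability

def p_test_fails(state, j):
    # Noisy-OR: the test fails if any covered faulty component is detected,
    # or if a false alarm occurs.
    p_pass = 1.0 - pf
    for i in range(n_comp):
        if coverage[i, j] and state[i]:
            p_pass *= 1.0 - pd
    return 1.0 - p_pass

def map_diagnosis(outcomes):             # outcomes[j] in {0: pass, 1: fail}
    best, best_p = None, -1.0
    for state in itertools.product([0, 1], repeat=n_comp):
        p = np.prod([prior[i] if s else 1 - prior[i] for i, s in enumerate(state)])
        for j, o in enumerate(outcomes):
            q = p_test_fails(state, j)
            p *= q if o else 1.0 - q
        if p > best_p:
            best, best_p = state, p
    return best, best_p                  # MAP state, unnormalized posterior

print(map_diagnosis([1, 1, 0, 0]))       # e.g., tests 0 and 1 fail
```
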

Principles of Performance and Reliability Modeling and Evaluation

Frontmatter
From Performability to Uncertainty
Abstract
Starting from expertise in reliability and in performance evaluation, we present the notion of performability introduced by John Meyer in his famous paper. We recall that in the past, few industry leaders believed in stochastic models; most placed greater confidence in deterministic models and the use of safety factors to account for the various uncertainties. Now, however, the notion of risk has been brought to the fore by the development of new technologies, the generalization of insurance policies, and the practice of service level agreements. Therefore, this is the time, with the encouragement of industry leaders, to consider stochastic models in which formerly deterministic parameters are replaced by random variables. We illustrate these models through two variants of a case study.
Raymond A. Marie
Sojourn Times in Dependability Modeling
Abstract
We consider Markovian models of computing or communication systems, subject to failures and, possibly, repairs. The dependability properties of such systems lead to metrics that can all be described in terms of the time that the Markov chain spends in subsets of its state space. Some examples of such metrics are MTTF and MTTR, reliability or availability at a point in time, the mean or the distribution of the interval availability in a fixed time interval, and more generally different performability versions of these measures. This chapter reviews this point of view and its consequences, and discusses some new results related to it.
Gerardo Rubino, Bruno Sericola
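As an illustration of the sojourn-time viewpoint, the MTTF is exactly the expected total time the chain spends in the up-state subset before leaving it, which reduces to one linear solve. A minimal sketch for an assumed two-unit repairable system, not an example from the chapter:

```python
# Hedged sketch: MTTF as expected sojourn time in the up states.
# For generator Q, up-state set U, and initial distribution alpha on U:
#   MTTF = alpha * (-Q_UU)^{-1} * 1
import numpy as np

# States: 0 = both units up, 1 = one unit up, 2 = system down (absorbing here)
lam, mu = 1.0, 10.0
Q = np.array([[-2*lam,  2*lam,     0.0],
              [    mu, -(mu+lam),  lam],
              [   0.0,    0.0,     0.0]])

U = [0, 1]                            # up states
Q_UU = Q[np.ix_(U, U)]
alpha = np.array([1.0, 0.0])          # start with both units up
mttf = alpha @ np.linalg.solve(-Q_UU, np.ones(len(U)))
print("MTTF =", mttf)                 # equals (3*lam + mu) / (2*lam**2) here
```
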
Managed Dependability in Interacting Systems
Abstract
A digital ICT infrastructure must be considered a system of systems in itself, but also in interaction with other critical infrastructures such as water distribution, transportation (e.g. Intelligent Transport Systems), and Smart Power Grid control. These systems are characterised by self-organisation, autonomous sub-systems, continuous evolution, scalability, and sustainability, providing both economic and social value. The services delivered involve a chain of stakeholders that share the responsibility for providing robust and secure services with stable and good performance. One crucial challenge for the operation/control centres of the different stakeholders is to manage dependability during normal operation, which may be characterised by many failures of minor consequence. In seeking to optimise the utilisation of the available resources with respect to dependability, new functionality is added with the intention of assisting in obtaining situational awareness, and for some parts enabling autonomous operation. This new functionality increases the complexity of the (sub)systems and of their operation. As a consequence of adding a complex system to handle complexity, the frequency of failure events and the severity of their consequences may increase. Furthermore, as a side effect, preparedness will be reduced for the restoration of services after a major event (which might involve several stakeholders), such as a common software breakdown, a security attack, or a natural disaster. This chapter addresses the dependability challenges related to the above-mentioned system changes. It is important to understand how adding complexity to handle complexity will influence the risks, both with respect to consequences and to probabilities. In order to increase insight, a dependability modelling approach is taken, where the goal is to combine and extend existing modelling approaches in a novel way. The objective is to quantify different strategies for the management of dependability in interacting systems. Two comprehensive system examples are used to illustrate the approach. A software-defined networking example addresses the effect of moving control functionality from being distributed and embedded with the primary function to being separated and (virtually) centralised. To demonstrate and discuss the consequences of adding more functionality, both in the distributed entities serving the primary function and centralised in the control centre, a Smart Grid system example is studied.
Poul E. Heegaard, Bjarne E. Helvik, Gianfranco Nencioni, Jonas Wäfler
30 Years of GreatSPN
Abstract
GreatSPN is a tool for the stochastic analysis of systems modeled as (stochastic) Petri nets. This chapter describes the evolution of the GreatSPN framework over its life span of 30 years, from the first stochastic Petri net analyzer implemented in Pascal, to the current, fancy, graphical interface that supports a number of different model analyzers. This chapter reviews, with the help of a manufacturing system example, how GreatSPN is currently used for an integrated qualitative and quantitative analysis of Petri net systems, ranging from symbolic model checking techniques to a stochastic analysis whose efficiency is boosted by lumpability.
Elvio Gilberto Amparore, Gianfranco Balbo, Marco Beccuti, Susanna Donatelli, Giuliana Franceschinis
WebSPN: A Flexible Tool for the Analysis of Non-Markovian Stochastic Petri Nets
Abstract
This chapter describes WebSPN, a modeling tool for the analysis of non-Markovian stochastic Petri nets (NMSPNs). WebSPN is a flexible tool, providing different solution techniques to deal with the complexity of the stochastic process underlying an NMSPN. The first solution technique developed within WebSPN is based on a discrete-time approximation of the stochastic behavior of the marking process, which enables the analysis of a broad class of NMSPN models with preemptive repeat different (prd), preemptive resume (prs), and preemptive repeat identical (pri) concurrently enabled generally distributed transitions. One of the main drawbacks of the discrete state space expansion approach is the state space explosion, which limits the tractability of complex models. For this reason, a new solution technique has been implemented in the WebSPN tool, based on the use of multi-terminal multi-valued decision diagrams (MTMDDs) and Kronecker matrices to store the expanded process. This solution works in the continuous time domain and enables the analysis of much more complex NMSPNs with prd and prs concurrently enabled generally distributed transitions. Finally, WebSPN also implements a simulative solution, thus providing a complete and powerful tool for the modeling and analysis of real complex systems.
Francesco Longo, Marco Scarpa, Antonio Puliafito
Modeling Availability Impact in Cloud Computing
Abstract
Internet-based services have become critical to many businesses, and many aspects of our lives depend on them (e.g., online banking, collaborative work, videoconferencing). Business continuity is thus a chief concern for many companies, since service disruption may cause huge revenue and market share losses. In recent years, cloud computing has become a remarkable alternative due to its resource on-demand and pay-as-you-go models. More specifically, additional resources, such as virtual machines (VMs), are only allocated when a disaster takes place, and the automated virtual platform performs a transparent recovery to minimize the time to restore service. This chapter presents availability models to evaluate cloud computing infrastructures.
Paulo Romero Martins Maciel
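As a taste of such models, here is a minimal sketch of a three-state failover CTMC with invented per-hour rates; it is an illustration in the spirit of the chapter, not one of its models. Steady-state availability is simply the probability mass on the states in which service is up.

```python
# Hedged sketch: a small availability CTMC for a VM with automated failover.
# States: 0 = up on primary, 1 = failing over, 2 = up on standby.
import numpy as np

lam, delta, mu = 1/720.0, 12.0, 1/4.0     # per-hour rates (illustrative)
Q = np.array([[-lam,    lam,    0.0  ],
              [ 0.0,  -delta,   delta],
              [ mu,     0.0,   -mu   ]])

A = np.vstack([Q.T, np.ones(3)])          # pi Q = 0, sum(pi) = 1
b = np.zeros(4); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
availability = pi[0] + pi[2]              # service is up in states 0 and 2
print("steady-state availability:", availability)
```
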
Scalable Assessment and Optimization of Power Distribution Automation Networks
Abstract
In this chapter, we present a novel state space exploration method for distribution automation power grids, built on top of an analytical survivability model. Our survivability-model-based approach enables efficient state space exploration in a principled way using random-greedy heuristic strategies. The proposed heuristic strategies aim to maximize survivability under budget constraints, accounting for cable undergrounding and tree trimming costs, with load constraints per feeder line. The heuristics are inspired by the analytical results of optimal strategies for simpler versions of the allocation problem. Finally, we parameterize our models using historical data from recent large storms. We have looked into the named storms of the 2012 Atlantic hurricane season, as recorded by the U.S. National Hurricane Center, and numerically evaluated the proposed heuristics with data derived from our abstraction of the Con Edison overhead distribution power grid in Westchester County.
Alberto Avritzer, Lucia Happe, Anne Koziolek, Daniel Sadoc Menasche, Sindhu Suresh, Jose Yallouz
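To illustrate the random-greedy flavor, here is a hedged sketch with invented sections, costs, and gains: among the affordable upgrades, it samples one of the best by gain per unit cost instead of always taking the top candidate, which is one common way to randomize a greedy heuristic. The chapter's actual heuristics and constraints are richer.

```python
# Hedged sketch: random-greedy budget allocation for survivability upgrades.
import random

# (section id, cost in $k, estimated reduction in expected outage-hours)
upgrades = [("sec-A", 120, 40.0), ("sec-B", 60, 25.0),
            ("sec-C", 200, 80.0), ("sec-D", 90, 20.0)]
budget = 250
random.seed(3)

chosen, spent = [], 0
remaining = upgrades[:]
while True:
    affordable = [u for u in remaining if spent + u[1] <= budget]
    if not affordable:
        break
    # Random-greedy: sample among the top 2 by gain/cost to escape local optima.
    affordable.sort(key=lambda u: u[2] / u[1], reverse=True)
    pick = random.choice(affordable[:2])
    chosen.append(pick[0]); spent += pick[1]
    remaining.remove(pick)

print(chosen, "cost:", spent)
```
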
Model Checking Two Layers of Mean-Field Models
Abstract
Recently, many systems that consist of a large number of interacting objects have been analysed using the mean-field method, which allows a quick and accurate analysis of such systems, while avoiding the state-space explosion problem. To date, the mean-field method has primarily been used for classical performance evaluation purposes. In this chapter, we discuss model-checking mean-field models. We define and motivate two logics, called Mean-Field Continuous Stochastic Logic (MF-CSL) and Mean-Field Logic (MFL), to describe properties of systems composed of many identical interacting objects. We present model-checking algorithms and discuss the differences in the expressiveness of these two logics and their combinations.
Anna Kolesnichenko, Anne Remke, Pieter-Tjerk de Boer, Boudewijn R. Haverkort
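The core of the mean-field method is that, as the number of objects grows, the vector of occupancy fractions follows a deterministic ODE, which is what the logics above reason over. A minimal sketch for an assumed two-state infect/recover interaction, not one of the chapter's models:

```python
# Hedged sketch: mean-field limit of many identical interacting objects.
# The fractions x_s(t) of objects in each local state satisfy dx/dt = f(x),
# where the rates depend on the occupancy vector itself.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 2.0, 1.0        # infection and recovery rates (illustrative)

def mean_field(t, x):
    s, i = x                  # fractions of susceptible / infected objects
    return [-beta * s * i + gamma * i,
             beta * s * i - gamma * i]

sol = solve_ivp(mean_field, [0.0, 10.0], [0.99, 0.01])
print("fraction infected at t=10:", sol.y[1, -1])
```
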

Checkpointing and Queueing

Frontmatter
Standby Systems with Backups
Abstract
This chapter presents a numerical methodology to model and evaluate the reliability, expected mission completion time, and expected total mission cost of 1-out-of-N: G standby sparing systems subject to periodic or non-periodic backup actions. The backups are performed to facilitate effective system recovery in the case of an online operating element's failure. The methodology accommodates dynamic data backup and retrieval times as well as nonidentical system elements with different time-to-failure distributions, different performance, and different standby modes. This chapter also presents applications of the methodology to a set of optimization problems that find the optimal backup distribution and/or element activation sequence, maximizing mission reliability or minimizing expected mission completion time or total mission cost. Examples are provided to illustrate the presented methodology as well as the optimized solutions.
Gregory Levitin, Liudong Xing
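For intuition, here is a small Monte Carlo sketch of this setting (the chapter's methodology is analytical/numerical, not simulation): cold-standby elements with exponential lifetimes, backups every tau units of accomplished work, and a fixed restore delay after each failure. All parameter values and the rollback rule are invented simplifications.

```python
# Hedged sketch: 1-out-of-N cold standby with periodic backups (Monte Carlo).
import numpy as np

def mission_time(rng, work=100.0, n_elem=3, rate=0.01, tau=10.0, r=2.0):
    done, t = 0.0, 0.0
    for _ in range(n_elem):
        ttf = rng.exponential(1.0 / rate)      # this element's time to failure
        if done + ttf >= work:                 # element survives to mission end
            return t + (work - done)
        done += (ttf // tau) * tau             # progress rolls back to last backup
        t += ttf + r                           # elapsed operation + restore delay
    return np.inf                              # all elements failed: mission lost

rng = np.random.default_rng(11)
runs = np.array([mission_time(rng) for _ in range(20_000)])
ok = np.isfinite(runs)
print("mission reliability:", ok.mean(),
      "mean completion time (successful runs):", runs[ok].mean())
```
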
Reliability Analysis of a Cloud Computing System with Replication: Using Markov Renewal Processes
Abstract
Cloud computing is an important infrastructure for many industries, and there is an increasing need for new techniques offering data protection, short response times, reduced management costs, etc. A cloud computing system with distributed information and communication processing capabilities has been proposed [13], consisting of intelligent nodes as well as a data center. Short response times in delivering data can be achieved by using multiple intelligent nodes near each client rather than a single data center. In order to protect client data, all of the intelligent nodes need to transmit their database content to a data center via a network link, which is called replication. Two replication strategies are known: synchronous and asynchronous schemes [46]. Using techniques of Markov renewal processes, this chapter summarizes reliability analyses of a cloud computing system with the above two replication schemes, and focuses on optimization problems for making regular backups of client data from all of the intelligent nodes to a data center.
Mitsutaka Kimura, Xufeng Zhao, Toshio Nakagawa
Service Reliability Enhancement in Cloud by Checkpointing and Replication
Abstract
Virtual machines (VMs) are used in cloud computing systems to handle user requests for service. A user's request cannot be completed if the VM fails. Replication mechanisms can be used to mitigate the impact of VM failures. In this chapter, we are primarily interested in characterizing the failure–recovery behavior of a VM in the cloud under different replication schemes. We use a service-oriented dependability metric called Defects Per Million (DPM), defined as the number of user requests dropped out of a million due to VM failures. We present an analytical modeling approach for computing the DPM metric under different replication schemes on the basis of the checkpointing method. The effectiveness of the replication schemes is demonstrated through experimental results. To verify the validity of the proposed analytical modeling approach, we extend the widely used cloud simulator CloudSim and compare the simulation results with the analytical solutions.
Subrota K. Mondal, Fumio Machida, Jogesh K. Muppala
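A back-of-the-envelope version of the DPM metric makes the definition concrete. The rates and outage windows below are invented, and real models (like the chapter's) also account for checkpoint intervals, recovery policies, and retries.

```python
# Hedged sketch: requests arriving during the detection + failover window of
# each VM failure are dropped; DPM scales the dropped fraction to a million.
lam_req = 50.0                      # requests per second
mtbf_h = 720.0                      # VM mean time between failures, hours
t_detect, t_failover = 5.0, 30.0    # seconds lost per failure (hot standby)

failures_per_h = 1.0 / mtbf_h
dropped_per_h = failures_per_h * lam_req * (t_detect + t_failover)
served_per_h = lam_req * 3600.0
dpm = dropped_per_h / served_per_h * 1e6
print("DPM ~", round(dpm, 1))       # ~13.5 per million for these numbers
```
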
Linear Algebraic Methods in RESTART Problems in Markovian Systems
Abstract
A task with ideal execution time \(\ell\) is handled by a Markovian system with features similar to the ones in classical reliability. The Markov states are of two types, UP and DOWN, such that the task can only be processed in an UP state. Upon entrance to a DOWN state, processing is stopped and must be restarted from the beginning upon the next entrance to an UP state. The total task time \(X = X_\mathrm{r}(\ell)\) (including restarts and pauses in failed states) is investigated with particular emphasis on the expected value \(\mathbb{E}[X_\mathrm{r}(\ell)]\), for which an explicit formula is derived that applies for all relevant systems. In general, transitions between UP and DOWN states are interdependent, but simplifications are pointed out when the UP-to-DOWN rate matrix (or the DOWN-to-UP) has rank one. A number of examples are studied in detail and an asymptotic exponential form \(\exp(\beta_\mathrm{m}\ell)\) is found for the expected total task time \(\mathbb{E}[X(\ell)]\) as \(\ell\rightarrow\infty\). Also, the asymptotic behavior of the distribution of the total task time, \(H_\mathrm{r}(x\mid\ell)\rightarrow\exp(-x\gamma(\ell))\), as \(x\rightarrow\infty\), is discussed.
Stephen Thompson, Lester Lipsky, Søren Asmussen
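For the simplest special case (Poisson failures during work, i.i.d. repair times, restart from scratch), the expected total task time has a well-known closed form, which the sketch below checks by simulation. The numbers are arbitrary, and the general rate-matrix machinery of the chapter is not reproduced here.

```python
# Hedged sketch: RESTART with failure rate lam while working and mean repair
# time d. The classical result is
#   E[X(l)] = (1/lam + d) * (exp(lam * l) - 1),
# which indeed grows exponentially in the task length l.
import numpy as np

lam, d, ell = 0.5, 2.0, 4.0
analytic = (1.0 / lam + d) * np.expm1(lam * ell)

rng = np.random.default_rng(5)
def total_time():
    t = 0.0
    while True:
        u = rng.exponential(1.0 / lam)     # time to next failure
        if u >= ell:
            return t + ell                 # task finishes this attempt
        t += u + rng.exponential(d)        # lost work + repair, then restart

sim = np.mean([total_time() for _ in range(100_000)])
print("analytic:", analytic, "simulated:", sim)
```
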
Vacation Queueing Models of Service Systems Subject to Failure and Repair
Abstract
We consider a queueing system that can randomly fail either when it is idle or while serving a customer. The system can be modeled by a vacation queueing system since it cannot serve customers when it is down; that is, the server is on a forced vacation when the system fails. We provide the availability and performance analysis of this system in this chapter.
Oliver C. Ibe
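A worked instance of the vacation viewpoint, under assumptions the chapter does not necessarily make (Poisson arrivals, multiple vacations, exponential service and repair): the classical decomposition result adds a residual-vacation term to the plain M/G/1 waiting time.

```python
# Hedged sketch: M/G/1 queue with server vacations, where repair times play
# the role of vacations. Classical decomposition:
#   E[W] = lam*E[S^2] / (2*(1 - rho)) + E[V^2] / (2*E[V])
lam = 0.8                  # arrival rate
ES, ES2 = 1.0, 2.0         # exponential service, mean 1 (so E[S^2] = 2*E[S]^2)
EV, EV2 = 0.5, 0.5         # exponential repair ("vacation") moments

rho = lam * ES
assert rho < 1.0           # stability
EW = lam * ES2 / (2.0 * (1.0 - rho)) + EV2 / (2.0 * EV)
print("mean waiting time:", EW)    # 4.0 + 0.5 = 4.5 for these numbers
```
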

Software Simulation, Testing, Workloads, Aging, Reliability, and Resilience

Frontmatter
Combined Simulation and Testing Based on Standard UML Models
Abstract
The development of complex software and embedded systems is usually composed of a series of design, implementation, and testing phases. In response to continuously increasing complexity and high-performance requirements, model-driven development approaches are gaining popularity. Modeling languages like UML (Unified Modeling Language) cope with system complexity and also allow for advanced analysis and validation methods. The approach of Test-driven Agile Simulation (TAS) combines novel model-based simulation and testing techniques in order to achieve improved overall quality during the development process. The TAS approach enables the simulation of a modeled system and the simulated execution of test cases, such that both system and test models can be mutually validated at early design stages, prior to expensive implementation and testing on real hardware. By executing system specifications in a simulation environment, the TAS approach also supports a cheap and agile technique for quantitative assessment and performance estimation, to identify system bottlenecks and support system improvements at different abstraction levels. In this chapter we present the current status of the TAS approach, a software tool realization based on the Eclipse RCP, and a detailed example from the image processing domain illustrating the methodology.
Vitali Schneider, Anna Deitsch, Winfried Dulz, Reinhard German
Workloads in the Clouds
Abstract
Despite the fast evolution of cloud computing, up to now the characterization of cloud workloads has received little attention. Nevertheless, a deep understanding of their properties and behavior is essential for an effective deployment of cloud technologies and for achieving the desired service levels. While the general principles applied to parallel and distributed systems are still valid, several peculiarities require the attention of both researchers and practitioners. The aim of this chapter is to highlight the most relevant characteristics of cloud workloads as well as identify and discuss the main issues related to their deployment and the gaps that need to be filled.
Maria Carla Calzarossa, Marco L. Della Vedova, Luisa Massari, Dana Petcu, Momin I. M. Tabash, Daniele Tessera
Reproducibility of Software Bugs
Basic Concepts and Automatic Classification
Abstract
Understanding software bugs and their effects is important in several engineering activities, including testing, debugging, and the design of fault containment or tolerance methods. Dealing with hard-to-reproduce failures requires a deep comprehension of the mechanisms leading from bug activation to software failure. This chapter surveys taxonomies and recent studies of bugs from the perspective of their reproducibility, providing insights into the process of bug manifestation and the factors influencing it. These insights are based on the analysis of thousands of bug reports for a widely used open-source system, namely MySQL Server. Bug reports are automatically classified according to reproducibility characteristics, providing figures on the proportion of hard-to-reproduce bugs, their features, and their evolution over releases.
Flavio Frattini, Roberto Pietrantuono, Stefano Russo
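As a toy illustration of such automatic classification, here is a rule-based sketch; the keywords, labels, and single-label design are invented for illustration and are much cruder than the classification used in the study.

```python
# Hedged sketch: keyword-based tagging of bug reports by reproducibility cues.
import re

RULES = [
    ("timing/concurrency", r"\b(race|deadlock|timing|intermittent|sporadic)\b"),
    ("environment",        r"\b(only on|platform|configuration|depends on)\b"),
    ("reproducible",       r"\b(steps to reproduce|always|repeatable)\b"),
]

def classify(report_text):
    text = report_text.lower()
    for label, pattern in RULES:       # first matching rule wins
        if re.search(pattern, text):
            return label
    return "unclassified"

print(classify("Intermittent deadlock under heavy load, hard to reproduce"))
```
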
Constraint-Based Virtualization of Industrial Networks
Abstract
In modern industrial solutions, Ethernet-based communication networks have been replacing bus technologies. Ethernet is no longer found only in inter-controller or manufacturing execution systems, but has penetrated into the real-time-sensitive automation process (i.e., close to the machines and sensors). Ethernet itself adds many advantages to industrial environments, where digitalization also means more data-driven IT services interacting with the machines. However, in order to cater to the needs of both new services and automation-related communication, a restructuring of the network and of the resources shared among multitenant systems needs to be carried out. Various Industrial Ethernet (IE) standards already allow some localized separation of application flows with the help of Quality of Service (QoS) mechanisms. These technologies also require some planning or engineering of the system, which takes place by estimating worst-case scenarios of the possible traffic generated by all assumed applications. This approach, however, lacks the flexibility to add new services or to extend the system participants on the fly without a major redesign and reconfiguration of the whole network. Network virtualization and segmentation are used to satisfy these requirements for more support for dynamic scenarios, while keeping and protecting time-critical production traffic. Network virtualization allows slicing the real physical network connecting a set of applications and end devices into logically separated portions, or Slices. A set of resource demands and constraints is defined at the Slice, or Virtual Network, level. Slice links are then mapped over physical paths, starting from end devices and passing through forwarding devices, that can guarantee these demands and constraints. In this chapter, the modeling of virtual industrial network constraints is addressed with a focus on communication delay. For evaluation purposes, the modeled network and mapping criteria are implemented in the Virtual Network Embedding (VNE) traffic-engineering platform ALEVIN [1].
Waseem Mandarawi, Andreas Fischer, Amine Mohamed Houyou, Hans-Peter Huth, Hermann de Meer
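Here is a minimal sketch of the delay-constrained mapping step, using the networkx library: embed a slice link onto the minimum-delay physical path and accept the mapping only if the slice's delay bound holds. The topology, delays, and bound are illustrative; ALEVIN's actual embedding algorithms are considerably richer.

```python
# Hedged sketch: delay-constrained slice-link embedding on a physical network.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("sensor", "sw1", 0.2), ("sw1", "sw2", 0.5),
    ("sw2", "plc", 0.3), ("sw1", "plc", 1.5),
], weight="delay_ms")

def embed_slice_link(src, dst, max_delay_ms):
    # Map onto the minimum-delay path; reject if the bound is violated.
    path = nx.shortest_path(G, src, dst, weight="delay_ms")
    delay = nx.shortest_path_length(G, src, dst, weight="delay_ms")
    return (path, delay) if delay <= max_delay_ms else None

print(embed_slice_link("sensor", "plc", max_delay_ms=1.2))  # 1.0 ms via sw2
```
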
Component-Oriented Reliability Assessment Approach Based on Decision-Making Frameworks for Open Source Software
Abstract
At present, the open source software (OSS) development paradigm is rapidly spreading. In order to consider the effect of each software component on the reliability of a system developed in a distributed environment such as an open source software project, we apply AHP (Analytic Hierarchy Process) and ANP (Analytic Network Process) which are well-established decision-making methods. We also propose a method of reliability assessment based on the software reliability growth models incorporating the interaction among the components. Moreover, we analyze actual software fault count data to show numerical examples of software reliability assessment for a concurrent distributed development environment. Furthermore, we consider an efficient and effective method of software reliability assessment for actual OSS projects.
Shigeru Yamada, Yoshinobu Tamura
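To make the reliability-growth ingredient concrete, here is a hedged sketch fitting the classical Goel-Okumoto model, one widely used SRGM (the chapter's models further incorporate component interactions and decision-making inputs); the fault-count data are invented.

```python
# Hedged sketch: fitting the Goel-Okumoto mean value function
#   m(t) = a * (1 - exp(-b t))
# to cumulative fault counts, then estimating residual faults.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(1, 13)                                   # test weeks
faults = np.array([12, 21, 29, 34, 39, 42, 45, 47, 48, 50, 51, 51])

def go_mvf(t, a, b):
    return a * (1.0 - np.exp(-b * t))

(a_hat, b_hat), _ = curve_fit(go_mvf, t, faults, p0=[60.0, 0.1])
residual = a_hat - faults[-1]              # expected faults not yet detected
print(f"a={a_hat:.1f}, b={b_hat:.3f}, expected residual faults={residual:.1f}")
```
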
Measuring the Resiliency of Extreme-Scale Computing Environments
Abstract
This chapter presents a case study on how to characterize the resiliency of large-scale computers. The analysis focuses on the failures and errors of Blue Waters, the Cray hybrid (CPU/GPU) supercomputer at the University of Illinois at Urbana-Champaign. The characterization is performed by a joint analysis of several data sources, which include workload and error/failure logs as well as manual failure reports. We describe LogDiver, a tool to automate the data preprocessing and metric computation that measure the impact of system errors and failures on user applications, i.e., the compiled programs launched by user jobs that can execute across one or more XE (CPU) or XK (CPU+GPU) nodes. Results include (i) a characterization of the root causes of single node failures; (ii) a direct assessment of the effectiveness of system-level failover and of memory, processor, network, GPU accelerator, and file system error resiliency; (iii) an analysis of system-wide outages; (iv) analysis of application resiliency to system-related errors; and (v) insight into the relationship between application scale and resiliency across different error categories.
Catello Di Martino, Zbigniew Kalbarczyk, Ravishankar Iyer
Metadata
Title
Principles of Performance and Reliability Modeling and Evaluation
Editors
Lance Fiondella
Antonio Puliafito
Copyright Year
2016
Electronic ISBN
978-3-319-30599-8
Print ISBN
978-3-319-30597-4
DOI
https://doi.org/10.1007/978-3-319-30599-8