Model-driven engineering (MDE) was elaborated over two decades ago on a very limited set of seminal concepts (system, model, and metamodel) and relationships (i.e., representation of, and conforms to), as shown in Fig. 1.
Since the early development of MDE, the modeling community has produced a large body of knowledge about conformity. This also allows metamodeling techniques to be used as definitions of the modeling languages themselves. Beyond a pure conformity relation between model and metamodel, various detailed “typing” relationships have been explored to support a better decoupling and modularity of models, and thus a certain degree of reuse. Some flexibility in conformity has been explored to support different levels of formality during modeling activities, hence supporting the process from informal to formal modeling.
However, very little literature can be found on the representativity of the model with regard to the system under study, raising the question: "How well does a given model actually represent the system?" Indeed, the core modeling activity has been conceptually defined, e.g., by Stachowiak, in terms of the characteristics of a model: a model is an abstraction of an original (i.e., the "system") and fulfills a specific purpose with regard to this original. Several relaxations have been identified: for example, the system might still be under construction and not exist yet, a model may serve many purposes (depending on the different stakeholders), and a model may be composed of other models. However, none of these techniques provides a formal way to capture and reason about the actual representativity of the models built. In this editorial, we discuss some ways of qualifying the representativity relationship in terms of fidelity and underspecification, leading to some degrees of (possibly explicit) uncertainty.
Borrowing a concept from mathematics, representativity is formally defined as a refinement, but this view is often too narrow and usually fails for complex real-world systems, whether software systems or cyber-physical systems. Fidelity is a useful concept in this context: a model represents the system with a level of fidelity that depends on the information available about the system, as well as on the abstractions used to build the model. As such, fidelity can be seen as a distance between a (descriptive) model and the system under study. Fidelity takes many forms, such as the model capturing the system correctly within a percentage of actions over time, or within bounds on the average and maximal latency (delay) of reactions.
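Seen as a distance, fidelity can be made operational. The following sketch (hypothetical function and trace names, assuming numeric traces sampled at the same instants) computes one such distance, the mean absolute deviation between values observed on the system and values predicted by the model:

```python
def fidelity(system_trace, model_trace):
    """One possible fidelity measure: the mean absolute deviation between
    values observed on the system and values predicted by the model.
    A smaller distance means higher fidelity."""
    assert len(system_trace) == len(model_trace), "traces must be aligned"
    deviations = (abs(s - m) for s, m in zip(system_trace, model_trace))
    return sum(deviations) / len(system_trace)

# Illustrative traces, e.g., reaction latencies sampled at the same instants:
observed = [10.0, 12.1, 11.0]
predicted = [10.0, 12.0, 11.0]
distance = fidelity(observed, predicted)  # small distance: high fidelity
```

Other distances (e.g., maximal deviation, or a fraction of mispredicted actions) instantiate the same idea with a different norm.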
Furthermore, a model can voluntarily capture an underspecification of the system, i.e., information that is deliberately not captured precisely in the (prescriptive) model, possibly leading to a broader design space. Such underspecification is easily captured in mathematics (e.g., a value within an interval) or in state machines (e.g., several transitions nondeterministically triggered by the same stimulus). However, underspecification presents challenges in executable (and thus deterministic) models. Underspecification leaves interpretations open, whether it is controlled by the developer (through design decisions) or stems from uncontrollable runtime behavior of the system. In practice, an underspecification must also be regarded as uncontrollable when it is "controlled" and decided by the context of the component (i.e., by other development teams), when product variants may have different realizations, or when later upgrades or component replacements change the design decisions.
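Both flavors of underspecification mentioned above can be sketched in code (a hypothetical example; the class, state, and stimulus names are illustrative): an interval-valued parameter, and a state machine in which one stimulus nondeterministically enables several transitions, made executable by an explicit choice function:

```python
import random

class Interval:
    """Interval-valued parameter: any value in [lo, hi] is a legal realization."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def contains(self, x):
        return self.lo <= x <= self.hi

# Nondeterministic state machine: one stimulus may enable several transitions.
TRANSITIONS = {
    ("Idle", "start"): ["Warmup", "Running"],  # underspecified: two legal targets
    ("Warmup", "ready"): ["Running"],
}

def step(state, stimulus, choose=random.choice):
    """Resolve the underspecification through an explicit choice function,
    turning the nondeterministic model into an executable one."""
    targets = TRANSITIONS.get((state, stimulus), [state])
    return choose(targets)
```

Passing a different `choose` function corresponds to a design decision resolving the underspecification; keeping `random.choice` simulates the uncontrolled case.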
Both the fidelity level and uncontrollable underspecification lead to the notion of uncertainty in modeling. Uncertainty in Software Engineering is a widely studied subject. Padulo and Guenov summarized previous research as twofold: uncertainty about the problem (the epistemic view), and uncertainty within the problem (the ontological view).
Epistemic Uncertainty includes misunderstanding of the users' needs, design space variability, unexpected behavior of APIs, or varying behavior among functionally similar libraries. It may also appear as subtle differences in the semantics of programming languages, or even within the same programming language, such as undefined behavior in C. Epistemic uncertainty is thus mainly a design topic and requires adequate development actions to reduce or control it.
Ontological Uncertainty includes noise in the input data of a program, its memory layout, network delays, the internal state of the processor, the ambient temperature, and even the age of the processor (i.e., a given processor microarchitecture evolves over time as failing subsystems are compensated for). Ontological uncertainty therefore needs to be addressed adequately by the resulting system. Consequently, the models representing the system must embody mechanisms to accept, control, and handle ontological uncertainty, e.g., by offering underspecification mechanisms within the model. Thus, instead of reasoning over uncertainty in models, we might "syntactify" uncertainty concepts and embed them within the design models. The models become more complicated, but also better suited as a basis for formal notions of refinement and fidelity.
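One way to "syntactify" uncertainty, sketched here as a hypothetical construction, is to make uncertain values first-class model elements that carry their own sampling behavior, so that uncertainty propagates through the design model instead of being reasoned about externally:

```python
import random
import statistics

class UncertainValue:
    """Uncertainty made syntactic: a model value is a sampler rather than a
    plain number, so computations over it carry the uncertainty with them."""
    def __init__(self, sample):
        self.sample = sample  # zero-argument function producing one realization
    def __add__(self, other):
        return UncertainValue(lambda: self.sample() + other.sample())
    def estimate(self, n=10_000):
        xs = [self.sample() for _ in range(n)]
        return statistics.mean(xs), statistics.stdev(xs)

# Illustrative: end-to-end latency = fixed nominal part + ontological jitter.
nominal = UncertainValue(lambda: 10.0)                     # ms, design value
jitter = UncertainValue(lambda: random.uniform(0.0, 4.0))  # ms, e.g., network delay
latency = nominal + jitter
mean_ms, stdev_ms = latency.estimate()  # roughly 12 ms on average
```

The same pattern extends to other operators and to richer distributions; the point is that the uncertainty lives inside the model's syntax rather than in an external analysis.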
Reasoning explicitly about the representativity of a model is of utmost importance for judging the accuracy of any result obtained by evaluating that model (e.g., through simulation). Moreover, capturing representativity explicitly, through fidelity and underspecification, is also required to reason about the substitutability of one model for another. This is useful in many contexts, such as multi-fidelity simulation, when accuracy must be traded off against performance, or when replacing a cyber-physical system (CPS) component in a larger, long-living CPS (e.g., a car or an airplane).
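Substitutability under a fidelity/performance trade-off can be sketched as follows (illustrative names and numbers; fidelity is taken here as an abstract score in [0, 1] with respect to the system):

```python
from dataclasses import dataclass

@dataclass
class ModelVariant:
    name: str
    fidelity: float  # abstract fidelity score in [0, 1] w.r.t. the system
    cost: float      # e.g., simulation runtime in seconds

def substitutable(candidate, reference, max_fidelity_loss):
    """A candidate may stand in for the reference model if the fidelity
    drop stays within the accepted bound."""
    return reference.fidelity - candidate.fidelity <= max_fidelity_loss

def pick(variants, min_fidelity):
    """Cheapest variant that still meets the required fidelity: the core of
    a multi-fidelity accuracy/performance trade-off."""
    admissible = [v for v in variants if v.fidelity >= min_fidelity]
    return min(admissible, key=lambda v: v.cost) if admissible else None
```

For instance, a cheap surrogate model may legitimately replace a detailed one during early design-space exploration, while the detailed model is kept for final validation.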
Finally, representativity also matters in model composition scenarios. There, model refinement is essential for defining composed model/component structures and for identifying reusable model assets in libraries that can be inserted into those architectures. This would make a lively MDE approach feasible.
SoSyM welcomes more contributions in this direction because we believe this is strongly needed to make MDE successful.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.