
2016 | Book

Conceptual Modeling

35th International Conference, ER 2016, Gifu, Japan, November 14-17, 2016, Proceedings

Edited by: Isabelle Comyn-Wattiau, Katsumi Tanaka, Il-Yeol Song, Shuichiro Yamamoto, Motoshi Saeki

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this Book

This book constitutes the refereed proceedings of the 35th International Conference on Conceptual Modeling, ER 2016, held in Gifu, Japan, in November 2016.

The 23 full and 18 short papers presented together with 3 keynotes were carefully reviewed and selected from 113 submissions. The papers are organized in topical sections on Analytics and Conceptual Modeling; Conceptual Modeling and Ontologies; Requirements Engineering; Advanced Conceptual Modeling; Semantic Annotations; Modeling and Executing Business Processes; Business Process Management and Modeling; Applications and Experiments of Conceptual Modeling; Schema Mapping; Conceptual Modeling Guidance; and Goal Modeling.

Table of Contents

Frontmatter

Keynotes

Frontmatter
Improving the Correctness of Some Database Research Using ORA-Semantics

We refer to the concepts of object class, relationship type, and attribute of object class and relationship type in the ER model as ORA-semantics. Common database models such as the relational model and the XML data model do not capture these ORA-semantics, which leads to many serious problems in relational and XML database design, data and schema integration, and keyword query processing in these databases. In this paper, we highlight the limitations and problems of current database research in these areas, and discuss how ORA-semantics can be utilized to resolve these problems.

Tok Wang Ling, Zhong Zeng, Mong Li Lee, Thuy Ngoc Le
Conceptual Modeling of Life: Beyond the Homo Sapiens

Our strong capability for conceptualization makes us, human beings, different from any other species on our planet. We, as conceptual modelers, should apply this fascinating capability in the right direction to make it play an essential role in the design of the world to come. What that “right direction” means requires a challenging discussion. Halfway between the need for a sound philosophical characterization and an effective, practical computer science application, conceptual modeling emerges as the ideal discipline for understanding life and improving our life style. This keynote explores this argument by delimiting the notion and scope of conceptual modeling, and by introducing and discussing two possible scenarios of fruitful application. The first is oriented to better understanding why conceptual modeling can help to manage the social challenges of the emerging information era, and how the world that comes could benefit from it. The second focuses on how understanding the human genome can open new ways to go beyond what we can consider “traditional Homo Sapiens capabilities”, with special implications for the health domain and the new precision medicine.

Oscar Pastor

Analytics and Conceptual Modeling

Frontmatter
A Conceptual Modeling Framework for Business Analytics

Data analytics is an essential element for success in modern enterprises. Nonetheless, effectively designing and implementing analytics systems is a non-trivial task. This paper proposes a modeling framework (a set of metamodels and a set of design catalogues) for requirements analysis of data analytics systems. It consists of three complementary modeling views: business view, analytics design view, and data preparation view. These views are linked together and act as a bridge between enterprise strategies, analytics algorithms, and data preparation activities. The framework comes with a set of catalogues that codify and represent an organized body of business analytics design knowledge. The framework has been applied to three real-world case studies and findings are discussed.

Soroosh Nalchigar, Eric Yu, Rajgopal Ramani
NOSQL Design for Analytical Workloads: Variability Matters

Big Data has recently gained popularity and has strongly questioned relational databases as universal storage systems, especially in the presence of analytical workloads. As a result, co-relational alternatives, commonly known as NOSQL (Not Only SQL) databases, are extensively used for Big Data. As the primary focus of NOSQL is on performance, NOSQL databases are directly designed at the physical level, and consequently the resulting schema is tailored to the dataset and access patterns of the problem at hand. However, we believe that NOSQL design can also benefit from traditional design approaches. In this paper we present a method to design databases for analytical workloads. Starting from the conceptual model and adopting the classical 3-phase design used for relational databases, we propose a novel design method considering the new features brought by NOSQL and encompassing relational and co-relational design altogether.

Victor Herrero, Alberto Abelló, Oscar Romero
Translating Bayesian Networks into Entity Relationship Models

Big data analytics applications drive the convergence of data management and machine learning. But there is no conceptual language available that is spoken in both worlds. The main contribution of the paper is a method to translate Bayesian networks, a main conceptual language for probabilistic graphical models, into usable entity relationship models. The transformed representation of a Bayesian network leaves out mathematical details about probabilistic relationships but unfolds all information relevant for data management tasks.

Frank Rosner, Alexander Hinneburg
Key Performance Indicator Elicitation and Selection Through Conceptual Modelling

Key Performance Indicators (KPIs) operationalize ambiguous enterprise goals into quantified variables with clear thresholds. Their usefulness has been established in multiple domains, yet it remains a difficult and error-prone task to find suitable KPIs for a given strategic goal. A careful analysis of the literature on strategic modeling, planning and management reveals that this difficulty is due to a number of factors. Firstly, there is a general lack of adequate conceptualizations that capture the subtle yet important differences between performance and result indicators. Secondly, there is a lack of integration between modelling and data analysis techniques that interleaves analysis with the modeling process. In order to tackle these deficiencies, we propose an approach for explicitly selecting KPIs and Key Result Indicators (KRIs). Our approach comprises (i) a novel modeling language that exploits the essential elements of indicators, covering KPIs, KRIs and measures, (ii) a data mining-based analysis technique for providing data-driven information about the elements in the model, thereby enabling domain experts to validate the selected KPIs, and (iii) an iterative process that guides the discovery and definition of indicators. In order to validate our approach, we apply our proposal to a real case study on water management.

Alejandro Maté, Juan Trujillo, John Mylopoulos

Conceptual Modeling and Ontologies

Frontmatter
Insights on the Use and Application of Ontology and Conceptual Modeling Languages in Ontology-Driven Conceptual Modeling

In this paper, we critically survey the existing literature in ontology-driven conceptual modeling in order to identify the kind of research that has been performed over the years and establish its current state of the art by describing the use and application of ontologies in mapping phenomena to models. We are interested in whether there are any connections between representing certain kinds of phenomena with particular ontologies and conceptual modeling languages. To understand and identify gaps and research opportunities, our literature study is conducted in the form of a systematic mapping review, which aims at structuring and classifying the area that is being investigated. Our results indicate that there are several research gaps that should be addressed, which we translated into several future research opportunities.

Michael Verdonck, Frederik Gailly
An Ontological Approach for Identifying Software Variants: Specialization and Template Instantiation

Software is a crucial component of many products and often is a product in itself. Software artifacts are typically developed for particular needs. Often, identifying software variants is important for increasing reuse, reducing time and costs of development and maintenance, increasing quality and reliability, and improving productivity. We propose a method for utilizing variability mechanisms of Software Product Line Engineering (SPLE) to allow identification of variants of software artifacts. The method is based on an ontological framework for representing variability of behaviors. We demonstrate the feasibility of the method on two common variability mechanisms – specialization and template instantiation. The method has been implemented using reverse engineered code. This provides a proof-of-concept of its feasibility.

Iris Reinhartz-Berger, Anna Zamansky, Yair Wand
The Role of Ontology Design Patterns in Linked Data Projects

The contribution of this paper is twofold: (i) a UML stereotype for component diagrams that allows for representing ontologies as a set of interconnected Ontology Design Patterns, aimed at supporting the communication between domain experts and ontology engineers; (ii) an analysis of possible approaches to ontology reuse and the definition of four methods according to their impact on the sustainability and stability of the resulting ontologies and knowledge bases. To conceptually prove the effectiveness of our proposals, we present two real LOD projects.

Valentina Presutti, Giorgia Lodi, Andrea Nuzzolese, Aldo Gangemi, Silvio Peroni, Luigi Asprino
Bridging the IT and OT Worlds Using an Extensible Modeling Language

Enterprise Modeling is used to analyze and improve IT, as well as to make IT more suitable to the needs of the business. However, asset intensive organizations have an ample set of operational technologies (OT) that Enterprise Modeling does not account for. When trying to model such enterprises, there is no accurate way of showing components that belong to the world of OT, nor is there a way to bridge the division between OT and IT. Existing languages fall short due to their limited focus, which does not consider modeling operational technologies, let alone relating them to the IT and business dimensions. To address these issues, in this paper we present a new modeling language which extends ArchiMate. This language proposes a set of core elements for modeling OT components, based on existing OT standards and ontologies, and makes it possible to associate these components to business and IT elements. Introducing this language makes it possible to apply existing modeling and analysis techniques from Enterprise Modeling in settings that cover Business, OT and IT.

Paola Lara, Mario Sánchez, Jorge Villalobos

Requirements Engineering

Frontmatter
Possibilistic Cardinality Constraints and Functional Dependencies

Cardinality constraints and functional dependencies together can express many semantic properties for applications in which data is certain. However, modern applications need to process large volumes of uncertain data. So far, cardinality constraints and functional dependencies have only been studied in isolation over uncertain data. We investigate the more challenging real-world case in which both types of constraints co-occur. While more expressive constraints could easily be defined, they would not enjoy the computational properties we show to hold for our combined class. Indeed, we characterize the associated implication problem axiomatically and algorithmically in linear input time. We also show how to summarize any given set of our constraints as an Armstrong instance. These instances help data analysts consolidate meaningful degrees of certainty by which our constraints hold in the underlying application domain.

Tania K. Roblot, Sebastian Link
Exploring Views for Goal-Oriented Requirements Comprehension

Requirements documents and models need to be used by many stakeholders with different technological proficiency during software development. Each stakeholder may need to understand the entire (or simply part of the) requirements artifacts. To empower these stakeholders, views of the requirements should be configurable to their particular needs. This paper uses information visualization techniques to help in this process. It proposes different views aiming at highlighting information that is relevant to a particular stakeholder, helping them query requirements artifacts. We offer three kinds of visualizations capturing language and domain elements, while providing a gradual model overview: the big picture view, the syntax-based view, and the concern-based view. We instantiate these views with i* models and introduce an implementation prototype in the iStarLab tool.

Lyrene Silva, Ana Moreira, João Araújo, Catarina Gralha, Miguel Goulão, Vasco Amaral
Keys with Probabilistic Intervals

Probabilistic databases accommodate well the requirements of modern applications that produce large volumes of uncertain data from a variety of sources. We propose an expressive class of probabilistic keys which empowers users to specify lower and upper bounds on the marginal probabilities by which keys should hold in a data set of acceptable quality. Indeed, the bounds help organizations balance the consistency and completeness targets for their data quality. For this purpose, algorithms are established for an agile schema- and data-driven acquisition of the right lower and upper bounds in a given application domain, and for reasoning about these keys. The efficiency of our acquisition framework is demonstrated theoretically and experimentally.

Pieta Brown, Jeeva Ganesan, Henning Köhler, Sebastian Link
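The idea of a probabilistic key with lower and upper bounds can be illustrated with a small possible-worlds sketch (a minimal illustration only, not the paper's actual algorithms; the world representation and all names here are assumptions): the marginal probability of a key is the total probability of the worlds in which it holds, and a probabilistic key with interval [l, u] is satisfied when that marginal falls within the bounds.

```python
def key_holds(relation, key_attrs):
    """A key holds in an ordinary relation iff no two tuples agree on key_attrs."""
    seen = set()
    for t in relation:
        k = tuple(t[a] for a in key_attrs)
        if k in seen:
            return False
        seen.add(k)
    return True

def marginal_key_probability(worlds, key_attrs):
    """worlds: list of (probability, relation) pairs of a probabilistic database."""
    return sum(p for p, rel in worlds if key_holds(rel, key_attrs))

def satisfies_interval(worlds, key_attrs, lower, upper):
    return lower <= marginal_key_probability(worlds, key_attrs) <= upper

# Toy probabilistic database with two possible worlds.
worlds = [
    (0.6, [{"id": 1}, {"id": 2}]),  # key on "id" holds here
    (0.4, [{"id": 1}, {"id": 1}]),  # ...but is violated here
]
print(marginal_key_probability(worlds, ("id",)))      # 0.6
print(satisfies_interval(worlds, ("id",), 0.5, 0.9))  # True
```

Enumerating worlds is exponential in general; the paper's contribution is precisely to reason about such keys without that cost.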

Advanced Conceptual Modeling

Frontmatter
On Referring Expressions in Information Systems Derived from Conceptual Modelling

We apply recent work on referring expression types to the issue of identification in Conceptual Modelling. In particular, we consider how such types yield a separation of concerns in a setting where an Information System based on a conceptual schema is to be mapped to a relational schema plus SQL queries. We start from a simple object-centered representation (as in semantic data models), where naming is not an issue because everything is self-identified (possibly using surrogates). We then allow the analyst to attach to every class a preferred “referring expression type”, and to specify uniqueness constraints in the form of generalized functional dependencies. We show (1) how a number of well-formedness conditions concerning an assignment of referring expressions can be efficiently diagnosed, and (2) how the above types attached to classes allow a concrete relational schema and SQL queries over it to be derived from a combination of the conceptual schema and queries over it.

Alexander Borgida, David Toman, Grant Weddell
DeepTelos: Multi-level Modeling with Most General Instances

Multi-level modeling aims to reduce redundancy in data models by defining properties at the right abstraction level and inheriting them to more specific levels. We revisit one of the earliest such approaches, Telos, and investigate what needs to be added to its axioms to get a true multi-level modeling language. Unlike previous approaches, we define levels not with numeric potencies but with hierarchies of so-called most general instances.

Manfred A. Jeusfeld, Bernd Neumayr
Pragmatic Quality Assessment for Automatically Extracted Data

Automatically extracted data is rarely “clean” with respect to pragmatic (real-world) constraints—which thus hinders applications that depend on quality data. We proffer a solution to detecting pragmatic constraint violations that works via a declarative and semantically enabled constraint-violation checker. In conjunction with an ensemble of automated information extractors, the implemented prototype checks both hard and soft constraints—respectively those that are satisfied or not and those that are satisfied probabilistically with respect to a threshold. An experimental evaluation shows that the constraint checker identifies semantic errors with high precision and recall and that pragmatic error identification can improve results.

Scott N. Woodfield, Deryle W. Lonsdale, Stephen W. Liddle, Tae Woo Kim, David W. Embley, Christopher Almquist
UnifiedOCL: Achieving System-Wide Constraint Representations

Constraint definitions tend to be distributed across the components of an information system using a variety of technology-specific representations. We propose an approach where constraints are managed in a single place using OCL with extensions for technology-specific concepts. These constraints are then mapped to technology-specific representations which are validated at runtime. Bi-directional translation of constraint definitions allows existing components to be easily integrated into the system. We present an implementation of the approach and report on a user study with developers from industry and research.

David Weber, Jakub Szymanek, Moira C. Norrie

Semantic Annotations

Frontmatter
Building Large Models of Law with NómosT

Laws and regulations impact the design of software systems, as they introduce new requirements and constrain existing ones. The analysis of a software system and the degree to which it complies with applicable laws can be greatly facilitated by models of applicable laws. However, laws are inherently voluminous, often consisting of hundreds of pages of text, and so are their models, consisting of thousands of concepts and relationships. This paper studies the possibility of building models of law semi-automatically by using the NómosT tool. Specifically, we present the NómosT architecture and the process by which a user constructs a model of law semi-automatically, by first annotating the text of a law and then generating from it a model. We then evaluate the performance of the tool relative to building a model of a piece of law manually. In addition, we offer statistics on the quality of the final output that suggest that tool-supported generation of models of law substantially reduces human effort without affecting the quality of the output.

N. Zeni, E. A. Seid, P. Engiel, S. Ingolfo, J. Mylopoulos
An Efficient and Simple Graph Model for Scientific Article Cold Start Recommendation

Since there is little history information for newly published scientific articles, it is difficult to recommend related new articles to users. Although tags of articles can provide important information for new articles, they are ignored by existing solutions. Moreover, the efficiency of these solutions is unsatisfactory, especially in big data situations. In this paper, we propose an efficient and simple bi-relational graph for new scientific article recommendation called the user-article based graph model with tags (UAGMT), which can integrate various valuable information (e.g., readership, tags, content and citations) into the graph for new article recommendation. Since the structure of the bi-relational graph model is simple and the model incorporates only a few similarity relationships, it can ensure high efficiency. In addition, the tag information of articles, which summarizes their main content, is integrated to enhance the reliability of article similarity. This is especially helpful for improving cold start recommendation performance. A series of experiments on the CiteULike dataset shows that recommendation efficiency is greatly improved by using our UAGMT, with guaranteed performance in the cold-start situation.

Tengyuan Cai, Hongrong Cheng, Jiaqing Luo, Shijie Zhou
Keyword Queries over the Deep Web

The Deep Web consists of data that are accessible through Web pages, but not indexable by search engines, as they are returned in dynamic pages. In this paper we propose a conceptual framework for answering keyword queries on Deep Web sources represented as relational tables with so-called access limitations. We formalize the notion of optimal answer and characterize queries for which an answer can be found.

Andrea Calì, Davide Martinenghi, Riccardo Torlone
Sensor Observation Service Semantic Mediation: Generic Wrappers for In-Situ and Remote Devices

In-situ and remote sensors produce data that fit different data modeling paradigms, namely, the Entity/Relationship paradigm for the former and the Multidimensional Array paradigm for the latter. In addition, different standardized data access services are used in practice. Therefore, their integrated access is still a major challenge. This paper describes a solution for the development of generic semantic data access wrappers for observation datasets generated by in-situ and remote sensing devices. Those wrappers are key components of data mediation architectures designed for the semantic integrated publishing of observation data.

Manuel A. Regueiro, José R. R. Viqueira, Christoph Stasch, José A. Taboada

Modeling and Executing Business Processes

Frontmatter
Probabilistic Evaluation of Process Model Matching Techniques

Process model matching refers to the automatic identification of corresponding activities between two process models. It represents the basis for many advanced process model analysis techniques such as the identification of similar process parts or process model search. A central problem is how to evaluate the performance of process model matching techniques. Often, not even humans can agree on a set of correct correspondences. Current evaluation methods, however, require a binary gold standard, which clearly defines which correspondences are correct. The disadvantage of this evaluation method is that it does not take the true complexity of the matching problem into account and does not fairly assess the capabilities of a matching technique. In this paper, we propose a novel evaluation method for process model matching techniques. In particular, we build on the assessment of multiple annotators to define probabilistic notions of precision and recall. We use the dataset and the results of the Process Model Matching Contest 2015 to assess and compare our evaluation method. We find that our probabilistic evaluation method assigns different ranks to the matching techniques from the contest and allows us to gain more detailed insights into their performance.

Elena Kuss, Henrik Leopold, Han van der Aa, Heiner Stuckenschmidt, Hajo A. Reijers
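The probabilistic precision and recall idea can be sketched as follows (a minimal illustration, not necessarily the paper's exact definitions; the support values and all names are assumptions): each candidate correspondence receives a support equal to the fraction of annotators who accept it, precision becomes the average support of the correspondences a matcher proposes, and recall the share of the total support it captures.

```python
def prob_precision(matches, support_map):
    # Mean annotator support over the correspondences a matcher returned.
    if not matches:
        return 0.0
    return sum(support_map.get(m, 0.0) for m in matches) / len(matches)

def prob_recall(matches, support_map):
    # Captured support divided by the total support of all annotated pairs.
    total = sum(support_map.values())
    if total == 0:
        return 0.0
    return sum(support_map.get(m, 0.0) for m in matches) / total

# Toy example: annotators agreed fully on one pair, half on another,
# and one in four accepted the third.
support_map = {("a1", "b1"): 1.0, ("a2", "b2"): 0.5, ("a3", "b3"): 0.25}
found = [("a1", "b1"), ("a2", "b2")]
print(prob_precision(found, support_map))  # (1.0 + 0.5) / 2 = 0.75
print(prob_recall(found, support_map))     # 1.5 / 1.75 ≈ 0.857
```

Under a binary gold standard, the contested pairs would count fully right or fully wrong; here they contribute in proportion to annotator agreement.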
Context-Aware Workflow Execution Engine for E-Contract Enactment

An e-contract is a contract that is specified, modeled and executed by a software system. E-contract business processes are modeled using workflows and their enactment is mostly dependent on the execution context. Existing e-contract systems lack context-awareness, and thus often face difficulties in enactment when the context and requirements of e-contracts change at run-time. In this paper, we (a) present an approach for context pattern discovery and build a context-aware workflow execution engine, (b) develop an approach for context-aware execution-workflow to instantiate and execute context-based workflow instances and (c) provide a framework for a context-aware e-contract enactment system. We also demonstrate the viability of our approach using a government contract.

Himanshu Jain, P. Radha Krishna, Kamalakar Karlapalem
Annotating and Mining for Effects of Processes

We provide a novel explicit annotation of a process model by accumulating the effects of individual tasks, specified by analysts using belief bases, and computing the accumulated effect up to the point of execution of the process model in an automated manner. This technique permits the analyst to specify immediate effect annotations as simple, practitioner-accessible propositional logic formulas, and generates a sequence of tasks along with cumulative effects, called effect logs. Further, we propose and solve an effect mining problem: given an effect log, discover the process model with effect annotations of individual tasks that is close to the original annotated process model.

Suman Roy, Metta Santiputri, Aditya Ghose

Business Process Management and Modeling

Frontmatter
Automated Discovery of Structured Process Models: Discover Structured vs. Discover and Structure

This paper addresses the problem of discovering business process models from event logs. Existing approaches to this problem strike various tradeoffs between accuracy and understandability of the discovered models. With respect to the second criterion, empirical studies have shown that block-structured process models are generally more understandable and less error-prone than unstructured ones. Accordingly, several automated process discovery methods generate block-structured models by construction. These approaches, however, intertwine the concern of producing accurate models with that of ensuring their structuredness, sometimes sacrificing the former to ensure the latter. In this paper we propose an alternative approach that separates these two concerns. Instead of directly discovering a structured process model, we first apply a well-known heuristic that discovers more accurate but sometimes unstructured (and even unsound) process models, and then transform the resulting model into a structured one. An experimental evaluation shows that our “discover and structure” approach outperforms traditional “discover structured” approaches with respect to a range of accuracy and complexity measures.

Adriano Augusto, Raffaele Conforti, Marlon Dumas, Marcello La Rosa, Giorgio Bruno
Detecting Drift from Event Streams of Unpredictable Business Processes

Existing business process drift detection methods do not work with event streams. As such, they are designed to detect inter-trace drifts only, i.e. drifts that occur between complete process executions (traces), as recorded in event logs. However, process drift may also occur during the execution of a process, and may impact ongoing executions. Existing methods either do not detect such intra-trace drifts, or detect them with a long delay. Moreover, they do not perform well with unpredictable processes, i.e. processes whose logs exhibit a high ratio of distinct executions to the total number of executions. We address these two issues by proposing a fully automated and scalable method for online detection of process drift from event streams. We perform statistical tests over distributions of behavioral relations between events, as observed in two adjacent windows of adaptive size, sliding along with the stream. An extensive evaluation on synthetic and real-life logs shows that our method is fast and accurate in the detection of typical change patterns, and performs significantly better than the state of the art.

Alireza Ostovar, Abderrahmane Maaradji, Marcello La Rosa, Arthur H. M. ter Hofstede, Boudewijn F. V. van Dongen
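The core idea of comparing distributions of behavioral relations in two adjacent windows can be sketched with a plain chi-square statistic (an illustration only; the paper's actual tests, adaptive windowing, and relation extraction are more involved, and the relation labels and names here are assumptions): when the statistic over the two windows' relation frequencies exceeds a critical value, a drift is signaled.

```python
from collections import Counter

def chi_square_stat(window_a, window_b):
    """Chi-square statistic over relation-frequency counts of two windows."""
    ca, cb = Counter(window_a), Counter(window_b)
    na, nb = sum(ca.values()), sum(cb.values())
    stat = 0.0
    for rel in set(ca) | set(cb):
        oa, ob = ca[rel], cb[rel]
        rate = (oa + ob) / (na + nb)       # pooled frequency of this relation
        ea, eb = rate * na, rate * nb      # expected counts if no drift
        stat += (oa - ea) ** 2 / ea + (ob - eb) ** 2 / eb
    return stat

# Identical distributions give statistic 0; a shifted one raises it sharply.
same = chi_square_stat(["a->b"] * 50 + ["b->c"] * 50,
                       ["a->b"] * 50 + ["b->c"] * 50)
drift = chi_square_stat(["a->b"] * 90 + ["b->c"] * 10,
                        ["a->b"] * 10 + ["b->c"] * 90)
print(same, drift)  # 0.0 128.0
```

In a streaming setting, the two windows slide along the event stream and the statistic is re-evaluated as events arrive.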
Modeling Structured and Unstructured Processes: An Empirical Evaluation

Imperative process languages, such as BPMN, describe business processes in terms of collections of activities and control flows among them. Despite their popularity, such languages remain useful mostly for structured processes whose flow of activities is well-known and does not vary greatly. For unstructured processes, on the other hand, the verdict is still out as to the best way to represent them. In our previous work, we have proposed Azzurra, a specification language for business processes founded on social concepts, such as roles, agents and commitments. In this paper, we present the results of an experiment that comparatively evaluates Azzurra and BPMN in terms of their ability to represent structured and unstructured processes. Our results suggest that Azzurra is better suited than BPMN for unstructured business processes.

Evellin Cardoso, Katsiaryna Labunets, Fabiano Dalpiaz, John Mylopoulos, Paolo Giorgini

Applications and Experiments of Conceptual Modeling

Frontmatter
MetaScience: An Holistic Approach for Research Modeling

Conferences have become primary sources of dissemination in computer science research, in particular, in the software engineering and database fields. Assessing the quality, scope and community of conferences is therefore crucial for any researcher. However, digital libraries and online bibliographic services offer little help on this, thus providing only basic metrics. Researchers are instead forced to resort to the tedious task of manually browsing different sources (e.g., DBLP, Google Scholar or conference sites) to gather relevant information about a given venue. In this paper we propose a conceptual schema providing a holistic view of conference-related information (e.g., authors, papers, committees and topics). This schema is automatically and incrementally populated with data available online. We show how this schema can be used as a single information source for a variety of complex queries and metrics to characterize the ER conference. Our approach has been implemented and made available online.

Valerio Cosentino, Javier Luis Cánovas Izquierdo, Jordi Cabot
Comparison and Synergy Between Fact-Orientation and Relation Extraction for Domain Model Generation in Regulatory Compliance

Modern enterprises need to treat regulatory compliance in a holistic and maximally automated manner, given the stakes and complexity involved. The ability to derive the models of regulations in a given domain from natural language texts is vital in such a treatment. Existing approaches automate regulatory rule extraction with a restricted use of domain models, counting on the knowledge and efforts of domain experts. We present a semi-automated treatment of regulatory texts by automating, in unison, the key steps in fact-orientation and relation extraction. In addition, we utilize the domain models in learning to identify rules from the text. The key benefit of our approach is that it can be applied to any legal text with a considerably reduced burden on domain experts. Early results are encouraging and pave the way for further explorations.

Sagar Sunkle, Deepali Kholkar, Vinay Kulkarni
Development of a Modeling Language for Capability Driven Development: Experiences from Meta-modeling

Changing business environments related to constant variations in customers’ demand, situational conditions, regulations, emerging security threats, etc. may be addressed by approaches that integrate organizational development with information system development taking into account changes in the application context. This paper presents experiences from Method Engineering of the Capability Driven Development (CDD) methodology with a focus on the CDD meta-model and the modeling activities that led to it. CDD consists of several method components. Hence, a conceptual meta-model of CDD and a meta-model of the modeling language based on the 4EM approach are presented together with a number of lessons learned.

Janis Stirna, Jelena Zdravkovic
Applying Conceptual Modeling to Better Understand the Human Genome

The objective of this work is to present the benefits of applying Conceptual Modeling (CM) in complex domains, such as genomics. This paper explains the evolution of a Conceptual Schema of the Human Genome (CSHG), which seeks to provide a clear and precise understanding of the human genome. We want to highlight the advantages of applying CM in a complex domain such as Genomic Information Systems (GeIS). We show how this model has evolved over time as we have discovered better forms of representation. As we advanced in exploring the domain, we understood that we should extend our model by incorporating the newly detected concepts. Here we present and discuss the evolution that led to the current version (CSHG v2). A solution based on conceptual models gives a clear definition of the domain, with direct implications for the medical context.

José F. Reyes Román, Óscar Pastor, Juan Carlos Casamayor, Francisco Valverde

Schema Mapping

Frontmatter
Data Analytics: From Conceptual Modelling to Logical Representation

In recent years, data analytics has been studied in a broad range of areas, such as health care, the social sciences, and commerce. In order to accurately capture user requirements and enhance communication between analysts, domain experts, and users, conceptualising data analytics tasks at a high level of modelling abstraction becomes increasingly important. In this paper, we discuss the modelling of data analytics and how a conceptual framework for data analytics applications can be transformed into a logical framework that supports a simple yet expressive query language for specifying data analytics tasks. We have also implemented our modelling method in a unified data analytics platform, which allows analytics algorithms to be incorporated as plug-ins in a flexible and open manner. We present case studies on three real-world data analytics applications and our experimental results on this unified platform.

Qing Wang, Minjian Liu
UMLtoGraphDB: Mapping Conceptual Schemas to Graph Databases

The need to store and manipulate large volumes of (unstructured) data has led to the development of several NoSQL databases for better scalability. Graph databases are a particular kind of NoSQL database that have proven their efficiency in storing and querying highly interconnected data, and have become a promising solution for multiple applications. While the mapping of conceptual schemas to relational databases is a well-studied field of research, only a few solutions target conceptual modeling for NoSQL databases, and even fewer focus on graph databases. This is especially true when dealing with the mapping of business rules and constraints in the conceptual schema. In this article we describe a mapping from UML/OCL conceptual schemas to Blueprints, an abstraction layer on top of a variety of graph databases, and Gremlin, a graph traversal language, via an intermediate Graph metamodel. Tool support is fully available.
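The core idea of such a mapping can be conveyed with a minimal sketch, assuming a simplified class model: each class becomes a vertex label with its attributes as properties, and each association becomes an edge label. This is an illustration of the general technique, not the paper's Blueprints/Gremlin toolchain; all names below are hypothetical.

```python
# Hypothetical sketch: map a tiny class model to a property-graph schema.
# Classes -> vertex labels (attributes become properties);
# associations -> edge labels (with source and target vertex labels).

def map_class_model_to_graph(classes, associations):
    """Build a simple property-graph schema from a class model."""
    graph_schema = {"vertex_labels": {}, "edge_labels": {}}
    for cls in classes:
        graph_schema["vertex_labels"][cls["name"]] = {
            "properties": list(cls.get("attributes", []))
        }
    for assoc in associations:
        graph_schema["edge_labels"][assoc["name"]] = {
            "from": assoc["source"],
            "to": assoc["target"],
        }
    return graph_schema

schema = map_class_model_to_graph(
    classes=[{"name": "Author", "attributes": ["name"]},
             {"name": "Paper", "attributes": ["title"]}],
    associations=[{"name": "writes", "source": "Author", "target": "Paper"}],
)
print(schema["edge_labels"]["writes"])  # {'from': 'Author', 'to': 'Paper'}
```

Constraints from OCL would, in a full treatment, be compiled to graph traversals; the sketch stops at the structural mapping.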

Gwendal Daniel, Gerson Sunyé, Jordi Cabot
Facilitating Data-Metadata Transformation by Domain Specialists in a Web-Based Information System Using Simple Correspondences

We seek to empower domain specialists and non-technical web designers to design and configure their system directly, without necessarily requiring interaction with a software developer or DB specialist. We observe that structured information shown on a web page embodies a conceptual model of that information, and that such pages make a variety of choices about whether application information is presented in the data (with or without schema labels) or in the metadata (schema). Also, the same application may present the same data in different schemas on different pages. In this paper, we extend our earlier work—on providing generic widgets for structured information that can be easily used and configured by domain specialists—to also include data/metadata transformation. Thus, we put data/metadata transformation (from one conceptual model to another) in the hands of domain specialists without database expertise. The contributions of this paper are: showing how our approach can be used to support data/metadata transformation in both directions, and demonstrating this capability in a non-trivial case study. The paper also provides evidence, from a small-scale user study, that non-expert users can successfully provide simple correspondences.

Scott Britell, Lois M. L. Delcambre, Paolo Atzeni

Conceptual Modeling Guidance

Frontmatter
Visualizing User Story Requirements at Multiple Granularity Levels via Semantic Relatedness

The majority of practitioners express software requirements using natural text notations such as user stories. Despite the readability of text, it is hard for people to build an accurate mental image of the most relevant entities and relationships. Even converting requirements to conceptual models is not sufficient: as the number of requirements and concepts grows, obtaining a holistic view of the requirements becomes increasingly difficult and, eventually, practically impossible. In this paper, we introduce and experiment with a novel, automated method for visualizing requirements—by showing the concepts the text references and their relationships—at different levels of granularity. We build on two pillars: (i) clustering techniques for grouping elements into coherent sets so that a simplified overview of the concepts can be created, and (ii) state-of-the-art, corpus-based semantic relatedness algorithms between words to measure the extent to which two concepts are related. We build a proof-of-concept tool and evaluate our approach by applying it to requirements from four real-world data sets.
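The second pillar can be illustrated with a minimal sketch: corpus-based relatedness is often approximated by cosine similarity between word co-occurrence vectors, and concepts whose similarity exceeds a threshold are grouped together. This is a generic illustration of the technique, not the authors' tool; the vectors and the greedy grouping rule below are hypothetical.

```python
# Illustrative sketch: cosine similarity over sparse co-occurrence
# vectors, plus a greedy single-pass grouping of related concepts.
import math

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(u[k] * v.get(k, 0.0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def group_concepts(vectors, threshold=0.5):
    """Put each concept in the first cluster whose seed concept is
    related enough; otherwise start a new cluster."""
    clusters = []
    for name, vec in vectors.items():
        for cluster in clusters:
            if cosine(vec, vectors[cluster[0]]) >= threshold:
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

# Toy co-occurrence vectors (hypothetical counts).
vectors = {
    "order":   {"pay": 3.0, "cart": 2.0},
    "invoice": {"pay": 2.0, "cart": 1.0},
    "avatar":  {"photo": 4.0},
}
print(group_concepts(vectors))  # [['order', 'invoice'], ['avatar']]
```

Raising the threshold yields more, smaller clusters, which corresponds to viewing the requirements at a finer level of granularity.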

Garm Lucassen, Fabiano Dalpiaz, Jan Martijn E. M. van der Werf, Sjaak Brinkkemper
User Progress Modelling in Counselling Systems: An Application to an Adaptive Virtual Coach

Counselling systems such as recommendation systems and virtual coaches assist users to gradually achieve their goals. For that purpose, it is usual to devise a progression plan consisting of intermediate, possibly interrelated, tasks or goals to be accomplished in order to guide counselees from their current state to a (desirable) target state, whilst taking into account their circumstances and needs. Users may also strive to progress in several dimensions or aspects at the same time. However, existing goal modelling proposals are mainly focused on processes and object flows, and do not reflect the variable manner in which user progression may actually take place. In this paper, we propose a user-goal oriented metamodel to represent progressions between user states that serves as a knowledge basis for the construction of adaptive counselling systems. The proposal is exemplified with the design of a virtual coach to promote active ageing in which personalization plays a key role.

Nuria Medina-Medina, Zoraida Callejas, Kawtar Benghazi, Manuel Noguera
Stepwise Refinement of Software Development Problem Analysis

The Problem Frames approach has attracted attention because it enables developers to carefully analyze problems in a principled manner. Although this approach decomposes a problem into subproblems before the analysis is conducted, developers still face a complex analysis when they consider interactions between the various subproblems. Moreover, progressive evolution of requirements is important for flexible development. In this paper, we propose methods to analyze a problem at multiple layers of abstraction. Our methods help developers construct abstract versions of a problem and find relationships between abstract and concrete problems. Moreover, our methods support refinement of arguments such that the properties of the abstract problem are preserved in the concrete problem. Therefore, our methods enable developers to divide arguments across multiple abstraction layers and thus mitigate the complexity of argumentation. We carried out preliminary experiments on abstracting problems and constructing reasonable arguments. Our methods are expected to enable developers to analyze problems with less complexity and thus make problem analysis easier.

Tsutomu Kobayashi, Fuyuki Ishikawa, Shinichi Honiden
Tailoring User Interfaces to Include Gesture-Based Interaction with gestUI

The development of custom gesture-based user interfaces requires software engineers to be skillful in the use of the tools and languages needed to implement them. gestUI, a model-driven method, can help them acquire these skills by defining custom gestures and including gesture-based interaction in existing user interfaces. Up to now, gestUI has used the same gesture catalogue for all software users, with gestures that could not be subsequently redefined. In this paper, we extend gestUI by including a user profile in the metamodel that permits individual users to define custom gestures and to include gesture-based interaction in user interfaces. Using tailoring mechanisms, each user can redefine his custom gestures at software runtime. Although both features are supported by models, the gestUI tool hides its technical complexity from the users. We validated these gestUI features through technical action research in an industrial context. The results showed that these features were perceived as both useful and easy to use when defining/redefining custom gestures and including them in a user interface.

Otto Parra, Sergio España, Oscar Pastor
Unlocking Visual Understanding: Towards Effective Keys for Diagrams

Diagrams are (meant to be) effective communication supports that convey information to stakeholders. As communication supports, they have to be quickly and accurately understood. To enable immediacy, many disciplines, such as cartography, rely on keys, which categorise diagram symbols and bind them to their meaning. Software engineering relies extensively on visual languages such as UML to communicate among the many stakeholders involved in an information system's life-cycle. Yet keys are barely used in these diagrams, hindering (immediate) understanding and limiting it to language experts. We provide a disciplined approach to designing effective keys by adapting graphic semiology theory and cartographers' know-how to software diagrams. We illustrate our method on a UML class diagram. Designing effective keys raises questions about the concerns and tasks to be addressed by the diagram, and even reveals issues about the language itself.

Nicolas Genon, Gilles Perrouin, Xavier Le Pallec, Patrick Heymans

Goal Modeling

Frontmatter
MEMO GoalML: A Context-Enriched Modeling Language to Support Reflective Organizational Goal Planning and Decision Processes

Conceptual models of goal systems promise to provide an apt basis for planning, analyzing, monitoring, and (re-)considering goals as part of management processes in the organization. But although a great number of conceptual goal modeling languages are available, they take only limited account of the organizational dimension of goals, including authorization rights, responsibilities, resources, and, in particular, related decision processes. This paper presents a goal modeling language which is integrated with a method for multi-perspective enterprise modeling, such that context-enriched models of goal systems can be constructed. Aside from organizational aspects, particular emphasis is placed on conceptualizations that clearly distinguish different (meta) levels of abstraction.

Alexander Bock, Ulrich Frank
Can Goal Reasoning Techniques Be Used for Strategic Decision-Making?

Business strategies aim to operationalize an enterprise’s mission and visions by defining initiatives and choosing among alternative courses of action through some form of strategic analysis. However, existing analysis techniques (e.g., SWOT analysis, Five Forces Model) are invariably informal and sketchy, in sharp contrast to the formal and algorithmic counterparts developed in Conceptual Modeling and Software Engineering. Among such techniques, goal models and goal reasoning have become very prominent over the past twenty years, helping to model stakeholder requirements and the alternative ways these can be fulfilled. In this work we explore the applicability of goal models to conceptualize strategic business problems and capture viable alternatives in support of formal strategic decision-making. We show through a comparative study how analysis can be conducted on a realistic case adopted from the literature using existing goal modeling techniques, and identify their weaknesses and limitations that need to be addressed in order to accommodate strategic business analysis.

Elda Paja, Alejandro Maté, Carson Woo, John Mylopoulos
Requirements Evolution and Evolution Requirements with Constrained Goal Models

We are interested in supporting software evolution caused by changing requirements and/or changes in the operational environment of a software system. For example, users of a system may want new functionality or performance enhancements to cope with a growing user population (changing requirements). Alternatively, vendors of a system may want to minimize the cost of implementing requirements changes (evolution requirements). We propose to use Constrained Goal Models (CGMs) to represent the requirements of a system, and capture requirements changes in terms of incremental operations on a goal model. Evolution requirements are then represented as optimization goals that minimize implementation costs or maximize customer value. We can then exploit reasoning techniques to derive optimal new specifications for an evolving software system. CGMs offer an expressive language for modelling goals that comes with scalable solvers able to solve hybrid constraint and optimization problems using a combination of Satisfiability Modulo Theories (SMT) and Optimization Modulo Theories (OMT) techniques. We evaluate our proposal by modeling and reasoning with a goal model for the meeting scheduling exemplar.
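The optimization idea behind such reasoning can be conveyed with a minimal sketch. The paper relies on SMT/OMT solvers; here, under the simplifying assumption that each top-level goal has a flat list of alternative realizations with known costs, brute-force enumeration suffices to pick the cheapest specification. All names and costs below are hypothetical.

```python
# Hypothetical sketch: choose, for each goal, one alternative realization
# so that total implementation cost is minimized. Real CGM reasoning uses
# SMT/OMT solvers over richer constraints; this is brute force over a
# toy model.
from itertools import product

def cheapest_specification(goals):
    """goals: {goal_name: [(alternative_name, cost), ...]}.
    Returns (total_cost, {goal_name: chosen_alternative})."""
    best = None
    for combo in product(*goals.values()):
        cost = sum(c for _, c in combo)
        if best is None or cost < best[0]:
            best = (cost, dict(zip(goals, (name for name, _ in combo))))
    return best

# Toy goal model loosely inspired by the meeting scheduling exemplar.
goals = {
    "ScheduleMeeting": [("ByEmail", 2), ("ByApp", 5)],
    "NotifyAttendees": [("SMS", 3), ("Push", 1)],
}
print(cheapest_specification(goals))
# (3, {'ScheduleMeeting': 'ByEmail', 'NotifyAttendees': 'Push'})
```

Swapping the objective (e.g., maximizing a value score instead of minimizing cost) changes only the comparison, which mirrors how different evolution requirements select different optimal specifications.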

Chi Mai Nguyen, Roberto Sebastiani, Paolo Giorgini, John Mylopoulos
RationalGRL: A Framework for Rationalizing Goal Models Using Argument Diagrams

Goal modeling languages, such as i* and the Goal-oriented Requirements Language (GRL), capture and analyze high-level goals and their relationships with lower level goals and tasks. However, in such models, the rationalization behind these goals and tasks and the selection of alternatives are usually left implicit. To better integrate goal models and their rationalization, we develop the RationalGRL framework, in which argument diagrams can be mapped to goal models. Moreover, we integrate the result of the evaluation of arguments and their counterarguments with GRL initial satisfaction values. We develop an interface between the argument web tools OVA and TOAST and the Eclipse-based tool for GRL called jUCMNav. We demonstrate our methodology with a case study from the Schiphol Group.

Marc van Zee, Diana Marosin, Floris Bex, Sepideh Ghanavati
Backmatter
Metadata
Title
Conceptual Modeling
Edited by
Isabelle Comyn-Wattiau
Katsumi Tanaka
Il-Yeol Song
Shuichiro Yamamoto
Motoshi Saeki
Copyright Year
2016
Electronic ISBN
978-3-319-46397-1
Print ISBN
978-3-319-46396-4
DOI
https://doi.org/10.1007/978-3-319-46397-1