
2014 | Book

Conceptual Modeling

33rd International Conference, ER 2014, Atlanta, GA, USA, October 27-29, 2014. Proceedings

Editors: Eric Yu, Gillian Dobbie, Matthias Jarke, Sandeep Purao

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 33rd International Conference on Conceptual Modeling, ER 2014, held in Atlanta, GA, USA, in October 2014. The 23 full and 15 short papers presented were carefully reviewed and selected from 80 submissions. The topics presented and discussed at the conference span the entire spectrum of conceptual modeling, including research and practice in areas such as: data on the web; unstructured data; uncertain and incomplete data; big data; graphs and networks; privacy and safety; database design; new modeling languages and applications; software concepts and strategies; patterns and narratives; data management for enterprise architecture; and city and urban applications.

Table of Contents

Frontmatter

Keynotes

A Semiotic Approach to Conceptual Modelling

The work on Conceptual Modelling performed by our group at PUC-Rio is surveyed, covering four mutually dependent research topics. Regarding databases as a component of information systems, we extended the scope of the Entity-Relationship model, so as to encompass facts, events and agents in a three-schemata specification method employing a logic programming formalism. Next we proceeded to render the specifications executable, by utilizing backward-chaining planners to satisfy the agents’ goals through sequences of fact-modification events. Thanks to the adoption of this plan-recognition / plan-generation paradigm, it became possible to treat both business-oriented and fictional narrative genres. To guide our conceptual modelling approach, we identified four semiotic relations, associated with the four master tropes that have been claimed to provide a system to fully grasp the world conceptually.

Antonio L. Furtado, Marco A. Casanova, Simone Diniz Junqueira Barbosa
Ontological Patterns, Anti-Patterns and Pattern Languages for Next-Generation Conceptual Modeling

This paper addresses the complexity of conceptual modeling in a scenario in which semantic interoperability requirements are increasingly present. It elaborates on the need for developing sound ontological foundations for conceptual modeling but also for developing complexity management tools derived from these foundations. In particular, the paper discusses three of these tools, namely, ontological patterns, ontological anti-patterns and pattern languages.

Giancarlo Guizzardi

Data on the Web

A Computer-Guided Approach to Website Schema.org Design

Schema.org offers web developers the opportunity to enrich a website’s content with schema.org microdata. For large websites, implementing microdata can take a lot of time. In general, it is necessary to perform two main activities, for which we lack methods and tools. The first consists in designing what we call the website schema.org, which is the fragment of schema.org that is relevant to the website. The second consists in adding the corresponding microdata tags to the web pages. In this paper, we describe an approach to the design of a website schema.org. The approach consists in using a human-computer task-oriented dialogue whose purpose is to arrive at that design. We describe a dialogue generator that is domain-independent but can be adapted to specific domains. We propose a set of six evaluation criteria that we use to evaluate our approach and that could be used in future approaches.

Albert Tort, Antoni Olivé
On Designing Archiving Policies for Evolving RDF Datasets on the Web

When dealing with dynamically evolving datasets, users are often interested in the state of affairs in previous versions of the dataset, and would like to execute queries on such previous versions, as well as queries that compare the state of affairs across different versions. This is especially true for datasets stored on the Web, where the interlinking aspect, combined with the lack of central control, does not allow synchronized evolution of interlinked datasets. The obvious solution to this requirement is to store all previous versions, but this can quickly increase the space requirements; an alternative is to store adequate deltas between versions, which are generally smaller, but this creates the overhead of generating versions at query time. This paper studies the trade-offs involved in these approaches in the context of archiving dynamic RDF datasets on the Web. Our main message is that a hybrid policy works better than either of the above approaches, and we describe our proposed methodology for establishing a cost model that determines when each of the two standard methods (version-based or delta-based storage) should be used within a hybrid policy.

Kostas Stefanidis, Ioannis Chrysakis, Giorgos Flouris
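The version-versus-delta trade-off discussed in the abstract above can be sketched in a few lines of Python. This is a toy illustration over sets of facts, not the paper's actual cost model; all names and values are invented for the example.

```python
# Sketch of the version-vs-delta storage trade-off for an evolving
# dataset (hypothetical fact sets; not the paper's cost model).

def delta(old, new):
    """Changes needed to turn `old` into `new`."""
    return {"added": new - old, "removed": old - new}

def apply_delta(base, d):
    """Reconstruct the next version from a base version and a delta."""
    return (base - d["removed"]) | d["added"]

# Three versions of a tiny dataset, each a set of facts.
v0 = {"a", "b", "c"}
v1 = {"a", "b", "d"}
v2 = {"a", "d", "e"}

# Version-based storage: every version kept in full.
full_cost = len(v0) + len(v1) + len(v2)

# Delta-based storage: first version plus deltas; older versions
# must be reconstructed at query time by replaying the deltas.
d01, d12 = delta(v0, v1), delta(v1, v2)
delta_cost = len(v0) + sum(len(d["added"]) + len(d["removed"]) for d in (d01, d12))

assert apply_delta(apply_delta(v0, d01), d12) == v2  # reconstruction works
print(full_cost, delta_cost)  # 9 7
```

Deltas win on space here, but every query against an old version pays the replay cost; a hybrid policy would materialize full versions only where that replay cost dominates.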
Ontology-Based Spelling Suggestion for RDF Keyword Search

We study the spelling suggestion problem for keyword search over RDF data, which provides users with alternative queries that may better express their search intention. In order to return suggested queries more efficiently, we utilize ontology information to reduce the search space of query candidates and facilitate the generation of suggested queries. Experiments with real datasets show the effectiveness and efficiency of our approach.

Sheng Li, Junhu Wang, Xin Wang
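As a rough illustration of spelling suggestion over a vocabulary of ontology terms, the sketch below ranks candidate terms by Levenshtein edit distance. The vocabulary and the distance cut-off are hypothetical, and the paper's ontology-based pruning of the candidate space is not reproduced here.

```python
# Toy spelling suggestion: rank vocabulary terms by edit distance.

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    m, n = len(a), len(b)
    dp = [[i + j if i * j == 0 else 0 for j in range(n + 1)] for i in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return dp[m][n]

def suggest(keyword, vocabulary, max_dist=2):
    """Return vocabulary terms within max_dist edits, closest first."""
    scored = [(edit_distance(keyword.lower(), t.lower()), t) for t in vocabulary]
    return [t for d, t in sorted(scored) if d <= max_dist]

# Hypothetical class names drawn from an ontology.
vocab = ["Person", "Professor", "Publication", "Organization"]
print(suggest("Preson", vocab))  # ['Person']
```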

Unstructured Data

Schema-Independence in XML Keyword Search

XML keyword search has attracted a lot of interest, with typical search based on the lowest common ancestor (LCA). In this paper, however, we show that meaningful answers can be found beyond the LCA and should be independent of the schema designs of the same data content. Therefore, we propose a new semantics, called CR (Common Relative), which not only finds more answers beyond the LCA, but also returns answers that are independent of schema designs. To find answers based on the CR semantics, we propose an approach with new strategies for indexing and processing. Experimental results show that the CR semantics improves recall significantly and that the answer set is independent of the schema designs.

Thuy Ngoc Le, Zhifeng Bao, Tok Wang Ling
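For readers unfamiliar with the LCA baseline the abstract contrasts with, the sketch below computes it over Dewey-style node labels, where the LCA of two keyword matches is simply their longest common path prefix. This is an illustration of the baseline only; the CR semantics itself is not implemented, and the example tree is invented.

```python
# LCA over Dewey labels: each XML node is identified by the path of
# child indices from the root, e.g. (0, 0, 1) = root -> child 0 -> child 1.

def lca(path_a, path_b):
    """Longest common prefix of two Dewey paths."""
    common = []
    for x, y in zip(path_a, path_b):
        if x != y:
            break
        common.append(x)
    return tuple(common)

# Two keyword matches in a hypothetical XML tree:
#   /bib/book/title  -> (0, 0, 0)
#   /bib/book/author -> (0, 0, 1)
print(lca((0, 0, 0), (0, 0, 1)))  # (0, 0): the <book> element
```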
Mapping Heterogeneous XML Document Collections to Relational Databases

XML web data is heterogeneous in terms of content and tagging of information. Integrating, querying, and presenting heterogeneous collections presents many challenges. The structure of XML documents is useful for achieving these tasks; however, not every XML document on the web includes a schema. We propose and implement a framework for efficient schema extraction, integration, and relational schema mapping from heterogeneous XML documents collected from the web. Our approach uses the Schema Extended Context Free Grammar (SECFG) to model XML schemas and transform them into relational schemas. Unlike other implementations, our approach is also able to identify and transform many XML constraints into relational schema constraints while supporting multiple XML schema languages, e.g., DTD or XSD, or no XML schema, as input. We compare our approach with other proposed approaches and conclude that we offer better functionality more efficiently and with greater flexibility.

Prudhvi Janga, Karen C. Davis
MKStream: An Efficient Algorithm for Processing Multiple Keyword Queries over XML Streams

In this paper, we tackle the problem of processing multiple keyword-based queries over XML streams in a scalable way, improving on recent multi-query processing approaches. We propose a customized algorithm, called MKStream, that relies on parsing stacks designed to match several queries simultaneously. In particular, it explores the possibility of adjusting the number of parsing stacks for a better trade-off between processing time and memory usage. A comprehensive set of experiments evaluates its performance and scalability against the state of the art, and shows that MKStream is the most efficient algorithm for keyword search services over XML streams.

Evandrino G. Barros, Alberto H. F. Laender, Mirella M. Moro, Altigran S. da Silva

Uncertain and Incomplete Data

Cardinality Constraints for Uncertain Data

Modern applications require advanced techniques and tools to process large volumes of uncertain data. For that purpose we introduce cardinality constraints as a principled tool to control the occurrences of uncertain data. Uncertainty is modeled qualitatively by assigning to each object a degree of possibility by which the object occurs in an uncertain instance. Cardinality constraints are assigned a degree of certainty that stipulates on which objects they hold. Our framework empowers users to model uncertainty in an intuitive way, without the requirement to put a precise value on it. Our class of cardinality constraints enjoys a natural possible world semantics, which is exploited to establish several tools to reason about them. We characterize the associated implication problem axiomatically and algorithmically in linear input time. Furthermore, we show how to visualize any given set of our cardinality constraints in the form of an Armstrong instance, whenever possible. Even though the problem of finding an Armstrong instance is precisely exponential, our algorithm computes an Armstrong instance with conservative use of time and space. Data engineers and domain experts can jointly inspect Armstrong instances in order to consolidate the certainty by which a cardinality constraint shall hold in the underlying application domain.

Henning Koehler, Sebastian Link, Henri Prade, Xiaofang Zhou
TopCrowd: Efficient Crowd-enabled Top-k Retrieval on Incomplete Data

Building databases and information systems over data extracted from heterogeneous sources like the Web poses a severe challenge: most data is incomplete and thus difficult to process in structured queries. This is especially true for sophisticated query techniques like Top-k querying where rankings are aggregated over several sources. The intelligent combination of efficient data processing algorithms with crowdsourced database operators promises to alleviate the situation. Yet the scalability of such combined processing is doubtful. We present TopCrowd, a novel crowd-enabled Top-k query processing algorithm that works effectively on incomplete data, while tightly controlling query processing costs in terms of response time and money spent for crowdsourcing. TopCrowd features probabilistic pruning rules for drastically reduced numbers of crowd accesses (up to 95%), while effectively balancing querying costs and result correctness. Extensive experiments show the benefit of our technique.

Christian Nieke, Ulrich Güntzer, Wolf-Tilo Balke
Web Services Composition in the Presence of Uncertainty

Recent years have witnessed a growing interest in using Web Services as a powerful means for data publishing and sharing on top of the Web. This class of services is commonly known as DaaS (Data-as-a-Service), or data services. The data returned by a data service is often subject to uncertainty for various reasons (e.g., privacy constraints, unreliable data collection instruments, etc.). In this paper, we revisit the basic activities related to (Web) data services that are impacted by uncertainty, including service description, invocation and composition. We propose a probabilistic approach to deal with uncertainty in all of these activities.

Soumaya Amdouni, Mahmoud Barhamgi, Djamal Benslimane, Rim Faiz, Kokou Yetongnon

Big Data, Graphs and Networks

Domain Ontology As Conceptual Model for Big Data Management: Application in Biomedical Informatics

The increasing capability and sophistication of biomedical instruments have led to the rapid generation of large volumes of disparate data, often characterized as biomedical “big data”. Effective analysis of biomedical big data is providing new insights to advance healthcare research, but it is difficult to efficiently manage big data without a conceptual model, such as an ontology, to support storage, query, and analytical functions. In this paper, we describe the Cloudwave platform, which uses a domain ontology to support optimal data partitioning, efficient network transfer, visualization, and querying of big data in the neurology disease domain. The domain ontology is used to define a new JSON-based Cloudwave Signal Format (CSF) for neurology signal data. A comparative evaluation of the ontology-based CSF against an existing data format demonstrates that it significantly reduces the data access time for querying and visualizing large-scale signal data.

Catherine Jayapandian, Chien-Hung Chen, Aman Dabir, Samden Lhatoo, Guo-Qiang Zhang, Satya S. Sahoo
Network Analytics ER Model – Towards a Conceptual View of Network Analytics

This paper proposes a conceptual modelling paradigm for network analysis applications, called the Network Analytics ER model (NAER). Not only data requirements but also query requirements are captured by the conceptual description of network analysis applications. This unified analytical framework allows us to flexibly build a number of topology schemas on the basis of the underlying core schema, together with a collection of query topics that describe topological results of interest. In doing so, we can alleviate many issues in network analysis, such as performance, semantic integrity and dynamics of analysis.

Qing Wang
Model-Driven Design of Graph Databases

Graph Database Management Systems (GDBMSs) are rapidly emerging as an effective and efficient solution to the management of very large data sets in scenarios where data are naturally represented as a graph and data accesses mainly rely on traversing this graph. Currently, the design of graph databases is based on best practices, usually suited only to a specific GDBMS. In this paper, we propose a model-driven, system-independent methodology for the design of graph databases. Starting from a conceptual representation of the domain of interest expressed in the Entity-Relationship model, we propose a strategy for devising a graph database in which the data accesses for answering queries are minimized. Intuitively, this is achieved by aggregating in the same node data that are likely to occur together in query results. Our methodology relies on a logical model for graph databases, which makes the approach suitable for different GDBMSs. We also show, with a number of experimental results over different GDBMSs, the effectiveness of the proposed methodology.

Roberto De Virgilio, Antonio Maccioni, Riccardo Torlone

Privacy and Safety

Utility-Friendly Heterogenous Generalization in Privacy Preserving Data Publishing

K-anonymity is one of the most important anonymity models; it has been widely investigated and various techniques have been proposed to achieve it. Among them, generalization is a common technique. In a typical generalization approach, tuples in a table are first divided into many QI (quasi-identifier) groups such that the size of each QI-group is at least K. In general, the utility of anonymized data can be enhanced if the size of each QI-group is reduced. Motivated by this observation, we propose a linking-based anonymity model, which achieves K-anonymity with QI-groups of size less than K. To implement the linking-based anonymization model, we propose a simple yet efficient heuristic local recoding method. Extensive experiments on real data sets show that the utility is significantly improved by our approach compared to state-of-the-art methods.

Xianmang He, Dong Li, Yanni Hao, Huahui Chen
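The k-anonymity property the abstract builds on can be checked mechanically: after generalization, every combination of quasi-identifier values must be shared by at least k tuples. The sketch below verifies this over a toy table; the records, attribute names and generalizations are all invented for the example.

```python
# Check the k-anonymity property over QI-groups of a generalized table.
from collections import Counter

def is_k_anonymous(table, qi_attrs, k):
    """True iff every quasi-identifier combination occurs >= k times."""
    groups = Counter(tuple(row[a] for a in qi_attrs) for row in table)
    return all(size >= k for size in groups.values())

# Toy records: ages generalized to ranges, ZIP codes truncated.
table = [
    {"age": "20-30", "zip": "537**", "disease": "flu"},
    {"age": "20-30", "zip": "537**", "disease": "cold"},
    {"age": "30-40", "zip": "537**", "disease": "flu"},
    {"age": "30-40", "zip": "537**", "disease": "asthma"},
]
print(is_k_anonymous(table, ["age", "zip"], k=2))  # True
print(is_k_anonymous(table, ["age", "zip"], k=3))  # False
```

The paper's contribution is orthogonal to this check: its linking-based model attains the same privacy guarantee while letting individual QI-groups shrink below K, which improves utility.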
From Conceptual Models to Safety Assurance

Safety assurance or certification is one of the most costly and time-consuming tasks in the automotive, railway, avionics, and other safety-critical domains. Different transport sectors have developed their own specific sets of safety standards, which makes it a big challenge to reuse pre-certified components and share expertise between sectors. In this paper, we propose to use conceptual models in the form of metamodels to support certification data reuse and facilitate safety compliance. A metamodel transformation approach is outlined to derive domain- or project-specific metamodels using a generic metamodel as a basis. Furthermore, we present a metamodel refinement language, a domain-specific language that facilitates simple refinement of metamodels. Finally, we use two case studies from the automotive domain to demonstrate our approach and its ability to reuse metamodels across companies.

Yaping Luo, Mark van den Brand, Luc Engelen, Martijn Klabbers

Database Design

A New Approach for N-ary Relationships in Object Databases

In an object-oriented or object-relational database, an n-ary relationship among objects is normally represented in a relation that is separated from the other properties of the objects at the logical level. To use such a database, the user needs to know its structure, in particular what relations and classes there are and how they are organized and related, in order to manipulate and query object data. To bring the logical level closer to the conceptual level so that the database is easier to use, we propose a novel approach that allows the user to represent n-ary relationships among objects in their class definitions, so that the user can directly manipulate and query objects based on the class definitions rather than explicitly joining relations at the logical level. Based on the class definitions, the system can automatically generate the modified class/object relation definitions and the corresponding regular relation definition for the n-ary relationship at the physical level to reduce redundancy, and convert data manipulation and query statements at the logical level into ones at the physical level.

Jie Hu, Liu Chen, Shuang Qiu, Mengchi Liu
Database Design for NoSQL Systems

We propose a database design methodology for NoSQL systems. The approach is based on NoAM (NoSQL Abstract Model), a novel abstract data model for NoSQL databases, which exploits the commonalities of various NoSQL systems and is used to specify a system-independent representation of the application data. This intermediate representation can be then implemented in target NoSQL databases, taking into account their specific features. Overall, the methodology aims at supporting scalability, performance, and consistency, as needed by next-generation web applications.

Francesca Bugiotti, Luca Cabibbo, Paolo Atzeni, Riccardo Torlone
Fixing Up Non-executable Operations in UML/OCL Conceptual Schemas

An operation is executable if there is at least one information base in which its preconditions hold and such that the new information base obtained from applying its postconditions satisfies all the integrity constraints. A non-executable operation is useless since it may never be applied. Therefore, identifying non-executable operations and fixing up their definition is a relevant task that should be performed as early as possible in software development. We address this problem in the paper by proposing an algorithm to automatically compute the missing effects in postconditions that would ensure the executability of the operation.

Xavier Oriol, Ernest Teniente, Albert Tort
Generic Data Manipulation in a Mixed Global/Local Conceptual Model

Modern content management systems allow end-user schema creation, which can result in schema heterogeneity within a system. Building functionality to create and modify data must keep pace with this heterogeneity, but the cost of constant development is high. In this paper, we present a novel approach that extends our previous integration system, which uses domain structures (global schema fragments) and local type and integration operators, by introducing new local record operators and global insert and update operators. We present two widgets that use the new operators: (i) a generic clone widget that allows users to selectively clone records shown in a global widget while creating new local records; (ii) a generic clone exploration widget that allows users to browse the CloneOf relationships and reason about how different cloned records and structures have evolved. We demonstrate our system with a running example of the clone and exploration widgets in a robotics educational repository.

Scott Britell, Lois M. L. Delcambre, Paolo Atzeni

New Modeling Languages and Applications

Evaluating Modeling Languages: An Example from the Requirements Domain

Modeling languages have been evaluated through empirical studies, comparisons of language grammars, and ontological analyses. In this paper we take the first approach, evaluating the expressiveness and effectiveness of Techne, a requirements modeling language, by applying it to three requirements problems from the literature. We use our experiences to propose a number of language improvements for Techne, addressing challenges discovered during the studies. This work presents an example evaluation of modeling language expressiveness and effectiveness through realistic case studies.

Jennifer Horkoff, Fatma Başak Aydemir, Feng-Lin Li, Tong Li, John Mylopoulos
Nòmos 3: Legal Compliance of Roles and Requirements

The problem of regulatory compliance for a software system consists of ensuring through a systematic, tool-supported process that the system complies with all elements of a relevant law. To deal with the problem, we build a model of the law and contrast it with a model of the requirements of the system. In earlier work, we proposed a modelling language for law (Nòmos 2) along with a reasoning mechanism that answers questions about compliance. In this paper we extend Nòmos 2 to include the concepts of role and requirement so that we can reason about compliance in specific domains. Also, Nòmos 3 represents the distribution of responsibilities to roles, distinguishing social from legal roles. Nòmos 3 models allow us to reason about compliance of requirements and roles with the norms that constitute a law. A small case study is used to illustrate the elements of Nòmos 3 and the kinds of reasoning it supports.

Silvia Ingolfo, Ivan Jureta, Alberto Siena, Anna Perini, Angelo Susi
Towards an XBRL Ontology Extension for Management Accounting

The Extensible Business Reporting Language (XBRL) is used in several countries to share business data and is widely adopted, for example, for the exchange and archival of balance sheets. On the other hand, the analysis of management accounting data poses more interoperability issues. In particular, it requires more sophisticated information models in order to integrate data generated by different parties (both internal and external to the enterprise) in different formats and for different purposes. For the same reason, the “flat” mapping performed by the automatic mechanisms often used to import XBRL streams into the Web Ontology Language (OWL) for further processing is inadequate, and more sophisticated options are required. In this perspective, we propose a modification and extension of an existing XBRL ontology, with the aim of better supporting the concepts and operations needed for management accounting.

Barbara Livieri, Marco Zappatore, Mario Bochicchio
Representing Hierarchical Relationships in INM

Real-world organizations have various natural and complex relationships with all kinds of people and other organizations. Such relationships may form complex hierarchical or composite structures. Existing data models such as the relational, object-oriented, or object-relational models oversimplify or even ignore these relationships and their semantics, so that the semantics has to be dealt with by the applications. To solve this problem, we present a concise but expressive language to naturally and directly represent the semantics of complex relationships among real-world entities.

Mengchi Liu, Jie Hu, Liu Chen, Xuhui Li
Providing Foundation for User Feedback Concepts by Extending a Communication Ontology

The term user feedback is becoming widely used in requirements engineering (RE) research to refer to the comments and evaluations that users express after experiencing the use of a software application or service. This explicit feedback takes place in virtual spaces (e.g., issue tracking systems, app stores), aiming, for instance, at reporting discovered bugs or requesting new features. Grounding the notion of explicit user feedback in an ontology may support a deeper understanding of the nature of feedback, as well as contribute to the development of tool components for its analysis, for use by requirements analysts. In this paper, we present a user feedback ontology as an extension of an existing communication ontology. We describe how we built it, along with a set of competency questions, and illustrate its applicability on an example taken from a collaborative communication related to RE for software evolution.

Itzel Morales-Ramirez, Anna Perini, Renata Guizzardi
Towards a Conceptual Framework and Metamodel for Context-Aware Personal Cross-Media Information Management Systems

Information fragmentation is a well-known issue in personal information management (PIM). In order to overcome this problem, various PIM solutions have focussed on linking documents via semantic relationships. More recently, task-centered information management (TIM) has been introduced as an alternative PIM paradigm. While these two paradigms have their strengths and weaknesses, we aim for a new PIM system design approach to achieve better synergies with human memory. We further envision a cross-media solution where physical information is integrated with a user’s digital personal information space. We present the Object-Concept-Context (OC2) conceptual framework for context-aware personal cross-media information management combining the best of the two existing PIM paradigms and integrating the most relevant features of the human memory. Further, we outline how the OC2 framework has been implemented based on a domain-specific application of the Resource-Selector-Link (RSL) hypermedia metamodel.

Sandra Trullemans, Beat Signer

Software Concepts and Strategies

Software as a Social Artifact: A Management and Evolution Perspective

For many, software is just code, something intangible best defined in contrast with hardware, but this view is not particularly illuminating. Microsoft Word turned 30 last year. During its lifetime it has been the subject of numerous changes, as its requirements, code and documentation have continuously evolved. Still, a community of users recognizes it as “the same software product”: a persistent object undergoing several changes through a social process involving owners, developers, salespeople and users, and still producing recognizable effects that meet the same core requirements. It is this process that makes software something different from just a piece of code, and justifies its intrinsic nature as a social artifact. Building on Jackson’s and Zave’s seminal work on the foundations of requirements engineering, we propose in this paper an ontology of software and related notions that accounts for these intuitions, and adopt it in software configuration management to provide a better understanding and control of software changes.

Xiaowei Wang, Nicola Guarino, Giancarlo Guizzardi, John Mylopoulos
Modelling Risks in Open Source Software Component Selection

Adopting Open Source Software (OSS) components is a decision that offers many potential advantages, such as cost effectiveness and reputation, but it also introduces a potentially high number of risks, ranging from the inability of the OSS community to continue development over time to poor code quality. Unlike with commercial off-the-shelf components, to assess risk in OSS component adoption we can rely on the public availability of measurable information about the component code and the developing communities. In this paper, we present a risk evaluation technique that uses conceptual modelling to assess OSS component adoption risks. We root it in the existing literature on OSS risk assessment and validate it with our industrial partners.

Alberto Siena, Mirko Morandini, Angelo Susi
Modelling and Applying OSS Adoption Strategies

Increasing adoption of Open Source Software (OSS) in information system engineering has led to the emergence of different OSS business strategies that affect and shape organizations’ business models. In this context, organizational modeling needs to efficiently reconcile OSS adoption strategies with business strategies and models. In this paper, we propose to embed all the knowledge about each OSS adoption strategy into an i* model that can be used in the intentional modeling of the organization. These models describe the consequences of adopting one such strategy or another: which business goals are supported, which resources emerge, etc. To this aim, we first enumerate the main existing OSS adoption strategies; next, we formulate an ontology that comprises the activities and resources that characterise these strategies; then, based on the experience of five industrial partners of the RISCOSS EU-funded project, we explore how these elements are managed in each strategy and formulate the corresponding model using the i* framework.

Lidia López, Dolors Costal, Claudia P. Ayala, Xavier Franch, Ruediger Glott, Kirsten Haaland

Patterns and Narratives

Detection, Simulation and Elimination of Semantic Anti-patterns in Ontology-Driven Conceptual Models

The construction of large-scale reference conceptual models is a complex engineering activity. To develop high-quality models, a modeler must have the support of expressive engineering tools such as theoretically well-founded modeling languages and methodologies, patterns and anti-patterns, and automated support environments. This paper proposes Semantic Anti-Patterns for ontology-driven conceptual modeling. These anti-patterns capture error-prone modeling decisions that can result in models that allow for unintended instances (representing undesired states of affairs). The anti-patterns presented here have been empirically elicited through an approach of conceptual model validation via visual simulation. The paper also presents a tool that can automatically identify these anti-patterns in users’ models, visualize their consequences, and generate corrections to the models by automatically including OCL constraints.

Giancarlo Guizzardi, Tiago Prince Sales
Recall of Concepts and Relationships Learned by Conceptual Models: The Impact of Narratives, General-Purpose, and Pattern-Based Conceptual Grammars

Conceptual models are the means by which a designer expresses his or her understanding of an envisioned information system. This research investigates whether modeling experts and novices differ in understanding conceptual models represented by textual descriptions in the form of narratives, by general-purpose conceptual modeling languages such as entity-relationship models, or by pattern-based conceptual modeling languages. Cognitive science theories on memory systems are adopted and a cued recall experiment is carried out. The experimental results suggest that narratives should not be underestimated in learning processes during information systems design. Furthermore, general-purpose conceptual modeling languages tend to lack capabilities for supporting template-based learning. The results are differentiated between subjects with at least basic conceptual modeling skills and novices.

Wolfgang Maass, Veda C. Storey
Visual Maps for Data-Intensive Ecosystems

Data-intensive ecosystems are conglomerations of one or more databases along with software applications that are built on top of them. This paper proposes a set of methods for providing visual maps of data-intensive ecosystems. We model the ecosystem as a graph, with modules (tables and queries embedded in the applications) as nodes and data provision relationships as edges. We cluster the modules of the ecosystem in order to further highlight their interdependencies and reduce visual clutter. We employ three alternative, novel, circular graph drawing methods for creating a visual map of the graph.

Efthymia Kontogiannopoulou, Petros Manousis, Panos Vassiliadis
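As a rough, hypothetical sketch (not code from the paper), the graph model described in the abstract above, with modules (tables and queries) as nodes and data-provision relationships as edges, could be represented as follows. The module names and the naive clustering criterion (grouping tables that feed the same queries) are invented for illustration only:

```python
from collections import defaultdict

# Hypothetical ecosystem: tables and queries as nodes,
# data-provision relationships as directed edges (provider -> consumer).
nodes = {
    "Emp": "table", "Dept": "table",               # database tables
    "q_payroll": "query", "q_headcount": "query",  # queries in applications
}
edges = [("Emp", "q_payroll"), ("Dept", "q_payroll"), ("Dept", "q_headcount")]

def cluster_by_consumers(edges):
    """Group providers that feed the same set of consumers: a simple
    stand-in for a clustering step that reduces visual clutter."""
    consumers = defaultdict(set)
    for provider, consumer in edges:
        consumers[provider].add(consumer)
    clusters = defaultdict(list)
    for provider, cons in consumers.items():
        clusters[frozenset(cons)].append(provider)
    return list(clusters.values())

print(cluster_by_consumers(edges))  # [['Emp'], ['Dept']]
```

The paper's actual methods (three circular graph-drawing layouts) operate on a richer module graph; this fragment only illustrates the node/edge/cluster vocabulary the abstract uses.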

Data Management for Enterprise Architecture

A Framework for a Business Intelligence-Enabled Adaptive Enterprise Architecture

The environments in which businesses currently operate are dynamic and constantly changing, influenced by external and internal factors. When businesses evolve, leading to changes in business objectives, it is hard to determine and visualize which direct Information System responses are needed. This paper introduces an enterprise architecture framework that allows for proactively anticipating and supporting adaptation in enterprise architectures as the business evolves. This adaptive framework exploits and models relationships between the business objectives of important stakeholders, the decisions related to these objectives, and the Information Systems that support these decisions. The framework exploits goal modeling in a Business Intelligence context. The tool-supported framework was assessed against different levels and types of changes in a real enterprise architecture of a Canadian government department, with encouraging results.

Okhaide Akhigbe, Daniel Amyot, Gregory Richards
Modeling Organizational Alignment

In the world of business, even small advantages make a difference, so establishing strategic goals is a very important practice. The big challenge, however, lies in designing processes aligned with these goals. Modeling goals and processes in an integrated way improves traceability between the strategic and operational layers, easing the alignment problem.

Henrique Prado Sousa, Julio Cesar Sampaio do Prado Leite
Compliance with Multiple Regulations

With an increase in regulations, it is challenging for organizations to identify the relevant regulations and ensure that their business processes comply with legal provisions. Multiple regulations cover the same domain and can interact with, complement, or contradict each other. To overcome these challenges, a systematic approach is required. This paper proposes a thorough approach that integrates the Eunomos knowledge and document management system with the Legal-URN framework, a Requirements Engineering-based framework for business process compliance.

Sepideh Ghanavati, Llio Humphreys, Guido Boella, Luigi Di Caro, Livio Robaldo, Leendert van der Torre
CSRML4BI: A Goal-Oriented Requirements Approach for Collaborative Business Intelligence

Collaborative Business Intelligence (BI) is a common practice in enterprises. Isolated decision makers have neither the knowledge nor the time required to gather and analyze all the information needed for many decisions. Nevertheless, current practice in collaborative BI is still based on arbitrarily interchanging e-mails and documents between participants. In turn, information is lost, participants are missed, and the decision-making task yields poor results. In this paper, we propose a framework, based on state-of-the-art approaches for modelling collaborative systems and eliciting BI requirements, to carefully model and elicit the participants, goals, and information needs involved in collaborative BI. We can thus easily keep track of all elements involved in a collaborative decision, avoiding information loss and facilitating collaboration.

Miguel A. Teruel, Roberto Tardío, Elena Navarro, Alejandro Maté, Pascual González, Juan Trujillo, Rafael Muñoz-Terol
Embracing Pragmatics

In enterprise modelling, we witness numerous efforts to predefine and integrate perspectives and concepts for modelling some problem area, resulting in standardised modelling languages (e.g. BPMN, ArchiMate). Empirical observations, however, indicate that in actual use the standardising and integrating effect of such modelling languages erodes, due to the need to accommodate specific modelling contexts. Instead of designing yet another mechanism to control this phenomenon, we argue that it should first be fundamentally understood. To account for the functioning of a modelling language in the socio-pragmatic context of modelling, we claim it is necessary to go beyond the normative view often adopted in the study of modelling languages. We present a developing explanatory theory of why and how modelling languages are used in enterprise modelling. The theory relies on a conceptual framework on modelling developed as a critical synthesis of existing theoretical work, from the position of socio-pragmatic constructivism.

Marija Bjeković, Henderik A. Proper, Jean-Sébastien Sottet

City and Urban Applications

From Needs to Services: Delivering Personalized Water Assurance Services in Urban Africa

With rapid urbanization, a changing climate and increasingly strained centralized water systems, individuals and businesses across urban sub-Saharan Africa face unreliable water supplies, escalating water costs and health risks (e.g., from poor sanitation facilities or slow adoption of new practices). These deficiencies and risks are unevenly distributed over space and time. In some cases, low-income households may spend up to 20 percent of monthly income on water, while others in the same geography may never see water prices that exceed one percent of household income. Several web/mobile applications have been launched in an attempt to address these deficiencies and risks. However, these applications are generally designed in a top-down manner and consequently fail to deliver personalized services. Furthermore, in many developing countries, these applications follow the develop-and-deploy paradigm. This implies that the end-user's needs and goals are often neglected prior to the actual development of the system. This paper presents part of our ongoing work to model, analyze and develop personalized water services using goal-oriented requirements engineering techniques. We focus on conceptual modeling in order to identify the requirements needed to design a system for personalized water assurance services. Our modeling and analysis follows a bottom-up approach that starts from interactive engagement with the water ecosystem to the use of goal-oriented approaches for the analysis and design of requirements for personalization.

Kala Fleming, Komminist Weldemariam
Modeling Claim-Making Process in Democratic Deliberation

Online deliberation is a promising venue for rational-critical discourse in public spheres and has the potential to support participatory decision-making and collective intelligence. With regard to public issues, deliberation is characterized by comparing and integrating different positions through claim-making, and by generating collective judgments. In this paper, we examine the claim-making process and propose a conceptual model to manage the knowledge entities (claims, issues, facts, etc.) in claim-making and their relationships. Extending prior work on argumentation models and issue-based information systems, our model is especially capable of depicting the formation and evolution of collective judgments in a deliberation context.

Ye Tian, Guoray Cai
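To make the entity vocabulary of the abstract above concrete, a minimal, hypothetical sketch of the knowledge entities and relationships it names (claims, issues, facts, and support/opposition links) might look like this; the class fields and the example issue are invented, not taken from the paper:

```python
from dataclasses import dataclass, field

# Hypothetical entities: an issue collects claims; claims cite facts
# and support or oppose one another.
@dataclass
class Fact:
    text: str

@dataclass
class Claim:
    text: str
    cites: list = field(default_factory=list)     # Facts backing this claim
    supports: list = field(default_factory=list)  # Claims it reinforces
    opposes: list = field(default_factory=list)   # Claims it contests

@dataclass
class Issue:
    question: str
    claims: list = field(default_factory=list)

issue = Issue("Should the city expand bike lanes?")
fact = Fact("Cycling traffic rose 30% last year.")
pro = Claim("Expand the lanes.", cites=[fact])
con = Claim("Spend the funds on transit instead.", opposes=[pro])
issue.claims += [pro, con]
print(len(issue.claims))  # 2
```

A real issue-based information system would add identity, provenance and versioning to these entities; the sketch only shows how the relationships compose.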
Creating Quantitative Goal Models: Governmental Experience

Precision in goal models can be enhanced by using quantitative rather than qualitative scales. Selecting appropriate values is, however, often difficult, especially when groups of stakeholders are involved. This paper identifies and compares generic and domain-specific group decision approaches for selecting quantitative values in goal models. It then reports on the use of two approaches targeting quantitative contributions, actor importance, and indicator definitions in the Goal-oriented Requirement Language. The approaches have been deployed in two independent branches of the Canadian government.

Okhaide Akhigbe, Mohammad Alhaj, Daniel Amyot, Omar Badreddin, Edna Braun, Nick Cartwright, Gregory Richards, Gunter Mussbacher
Backmatter
Metadata
Title
Conceptual Modeling
Edited by
Eric Yu
Gillian Dobbie
Matthias Jarke
Sandeep Purao
Copyright year
2014
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-12206-9
Print ISBN
978-3-319-12205-2
DOI
https://doi.org/10.1007/978-3-319-12206-9