
About This Book

This Festschrift volume, published in honor of John Mylopoulos on the occasion of his retirement from the University of Toronto, contains 25 high-quality papers, written by leading scientists in the field of conceptual modeling. The volume has been divided into six sections. The first section focuses on the foundations of conceptual modeling and contains material on ontologies and knowledge representation. The four sections on software and requirements engineering, information systems, information integration, and web and services, represent the chief current application domains of conceptual modeling. Finally, the section on implementations concentrates on projects that build tools to support conceptual modeling. With its in-depth coverage of diverse topics, this book could be a useful companion to a course on conceptual modeling.



John Mylopoulos: Sewing Seeds of Conceptual Modelling


In the summer of 1980 high in the Colorado Rockies the mountain flowers were blooming, as were ideas of multi-disciplinary conceptual modelling. The Pingree Park Workshop on Data Abstraction, Database, and Conceptual Modelling [17] marked a figurative and literal high point in expectations for the exchange between databases, programming languages, and artificial intelligence (AI) on conceptual modelling. Quietly hiking amongst the AI, database, and programming language luminaries such as Ted Codd, Mike Stonebraker, Mary Shaw, Stephen Zilles, Pat Hayes, Bob Balzer, and Peter Deutsch was John Mylopoulos, a luminary himself who inspired and guided, for over three decades, the branch of Conceptual Modelling chronicled in this paper.
Michael L. Brodie


Foundations of Temporal Conceptual Data Models

This chapter considers the different temporal constructs that have appeared in the literature on temporal conceptual models (timestamping and evolution constraints), and it provides a coherent model-theoretic formalisation for them. It then introduces a correct and succinct encoding in a subset of first-order temporal logic, namely \({{\mathcal DLR}_{\mathcal US}}\) – the description logic \({\mathcal DLR}\) extended with the temporal operators Since and Until. Finally, results on the complexity of reasoning in temporal conceptual models are presented.
Alessandro Artale, Enrico Franconi

Faceted Lightweight Ontologies

We concentrate on the use of ontologies for the categorization of objects, e.g., photos, books, web pages. Lightweight ontologies are ontologies with a tree structure where each node is associated with a natural language label. Faceted lightweight ontologies are lightweight ontologies where the labels of nodes are organized according to certain predefined patterns which capture different aspects of meaning, i.e., facets. We introduce facets based on the Analytico-Synthetic approach, a well-established methodology from Library Science which has been successfully used for decades for the classification of books. Faceted lightweight ontologies have a well-defined structure and, as such, they are easier to create and to share among users, and they also provide more organized input to semantics-based applications, such as semantic search and navigation.
Fausto Giunchiglia, Biswanath Dutta, Vincenzo Maltese

The Ontological Level: Revisiting 30 Years of Knowledge Representation

I revisit here the motivations and the main proposal of a paper I published at the 1994 Wittgenstein Symposium, entitled “The Ontological Level”, in the light of the main results achieved in the last 30 years of Knowledge Representation, since the well-known “What’s in a link?” paper by Bill Woods. I will argue that, despite the explosion of ontologies, many problems remain, since there is no general agreement about having ontological distinctions built into the representation language, so that assumptions concerning the basic constructs of representation languages remain implicit in the mind of the knowledge engineer, and are difficult to express and to share. I will recap the recent results concerning formal ontological distinctions among unary and binary relations, sketching a basic ontology of meta-level categories that representation languages should be aware of, and I will discuss the role of such distinctions in the current practice of knowledge engineering.
Nicola Guarino

Some Notes on Models and Modelling

Analytical models are a fundamental tool in the development of computer-based systems of every kind: their essential purpose is to support human understanding and reasoning in development. To support reasoning, models must be substantially formal. The relationship between a formal model and its—typically—non-formal subject demands care: particular attention must be paid to the model interpretation, which maps its formal terms to the phenomena of the subject. An analytical model is to be regarded not as an assertion, but as a predicate within a larger logical structure of reasoning. Analogical models, such as databases, act as run-time surrogates for some parts of the problem world; in their design the properties of the model itself must be carefully distinguished from those of its subject. Some models may be informal: informal models have many legitimate uses, but cannot serve as a basis for formal reasoning.
Michael Jackson

A Semantical Account of Progression in the Presence of Defaults

In previous work, we proposed a modal fragment of the situation calculus called \({\mathcal ES}\), which fully captures Reiter’s basic action theories. \({\mathcal ES}\) also has epistemic features, including only-knowing, which refers to all that an agent knows in the sense of having a knowledge base. While our model of only-knowing has appealing properties in the static case, it appears to be problematic when actions come into play. First of all, its utility seems to be restricted to an agent’s initial knowledge base. Second, while it has been shown that only-knowing correctly captures default inferences, this was only in the static case, and undesirable properties appear to arise in the presence of actions. In this paper, we remedy both of these shortcomings and propose a new dynamic semantics of only-knowing, which is closely related to Lin and Reiter’s notion of progression when actions are performed and where defaults behave properly.
Gerhard Lakemeyer, Hector J. Levesque

Social Modeling and i*

Many different types of models are used in various scientific and engineering fields, reflecting the subject matter and the kinds of understanding that are sought in each field. Conceptual modeling techniques in software and information systems engineering have in the past focused mainly on describing and analyzing behaviours and structures that are implementable in software. As software systems become ever more complex and densely intertwined with the human social environment, we need models that reflect the social characteristics of complex systems. This chapter reviews the approach taken by the i* framework, highlights its application in several areas, and outlines some open research issues.
Eric S. Yu

Information Systems

Data Modeling in Dataspace Support Platforms

Data integration has been an important area of research for several years. However, such systems suffer from one of the main drawbacks of database systems: the need to invest significant modeling effort upfront. Dataspace Support Platforms (DSSPs) envision a system that offers useful services on its data without any setup effort, and improves with time in a pay-as-you-go fashion. We argue that in order to support DSSPs, the system needs to model uncertainty at its core. We describe the concepts of probabilistic mediated schemas and probabilistic mappings as enabling concepts for DSSPs.
Anish Das Sarma, Xin (Luna) Dong, Alon Y. Halevy

Conceptual Modeling: Past, Present and the Continuum of the Future

In this paper we give a brief history of conceptual modeling in computer science and discuss state-of-the-art approaches. We claim that a number of problems remain to be solved. “Schema-first” is no longer a viable approach to meet the data needs in the dynamic world of the internet and web services. This is also true for the “schema-later” approach, simply because data and data sources constantly change in depth and breadth and can neither wait for the schema to be completed nor for the data to be collected. As a solution we advocate a new “schema-during” approach, in which the process of Conceptual Modeling is in a continuum with the operations in the database.
Nick Roussopoulos, Dimitris Karagiannis

On Conceptual Content Management

Interdisciplinary Insights beyond Computational Data
Instead of looking at content management as an extension of traditional database technology (“blobs”) we exploit – in close cooperation with our colleagues from Art History – the notion of symbol as defined by Ernst Cassirer [1] in the cultural sciences. Since context binding – its definition, application and control – is crucial to cultural symbol management, and, therefore, to content and concepts as the two closely intertwined sides of cultural symbols, its demands are detailed and designed in a λ-calculus framework. Cassirer’s request for open and dynamic symbol definition and modification leads us to a generator framework for the implementation of Conceptual Content Management Systems (CCMS). This presentation of our work is slightly allegorical and concentrates on the foundation, rationale and implications of this research.
Joachim W. Schmidt

Information Integration

Conceptual Modeling for Data Integration

The goal of data integration is to provide uniform access to a set of heterogeneous data sources, freeing the user from any knowledge about where the data are, how they are stored, and how they can be accessed. One of the outcomes of the research work carried out on data integration in recent years is a clear architecture, comprising a global schema, the source schema, and the mapping between the source and the global schema. Although in many research works and commercial tools the global schema is simply a data structure integrating the data at the sources, we argue that the global schema should instead represent the conceptual model of the domain. However, to fully pursue such an approach, several challenging issues must be addressed. The main goal of this paper is to analyze one of them, namely, how to express the conceptual model representing the global schema. We start our analysis with the case where such a schema is expressed in terms of a UML class diagram, and we end up with a proposal of a particular Description Logic, called \(\textit{DL-Lite}_{\mathcal A,id}\). We show that the data integration framework based on such a logic has several interesting properties, including the fact that both reasoning at design time and answering queries at run time can be done efficiently.
Diego Calvanese, Giuseppe De Giacomo, Domenico Lembo, Maurizio Lenzerini, Riccardo Rosati

Clio: Schema Mapping Creation and Data Exchange

The Clio project provides tools that vastly simplify information integration. Information integration requires data conversions to bring data in different representations into a common form. Key contributions of Clio are the definition of non-procedural schema mappings to describe the relationship between data in heterogeneous schemas, a new paradigm in which we view the mapping creation process as one of query discovery, and algorithms for automatically generating queries for data transformation from the mappings. Clio provides algorithms to address the needs of two major information integration problems, namely, data integration and data exchange. In this chapter, we present our algorithms for both schema mapping creation via query discovery, and for query generation for data exchange. These algorithms can be used in pure relational, pure XML, nested relational, or mixed relational and nested contexts.
Ronald Fagin, Laura M. Haas, Mauricio Hernández, Renée J. Miller, Lucian Popa, Yannis Velegrakis

Heterogeneity in Model Management: A Meta Modeling Approach

As models are always abstractions of reality, we often need multiple modeling perspectives for analysis. The interplay of such modeling perspectives can take many forms and plays a role both at the design level and during the operation of information systems. Examples include viewpoint resolution in requirements management, mapping between conceptual and implementation design in databases, and the integration or interoperation of multiple data and media sources. Starting from early experiences with our now 20-year-old ConceptBase implementation of the Telos language, we describe a logic-based conceptual modeling and model management approach to these issues, focusing on recent work which employs a generic meta model to facilitate mappings and transformations between heterogeneous model representations both at the schema and the data level.
Matthias Jarke, Manfred A. Jeusfeld, Hans W. Nissen, Christoph Quix

Associativity and Commutativity in Generic Merge

A model is a formal description of a complex application artifact, such as a database schema, an application interface, a UML model, an ontology, or a message format. The problem of merging such models lies at the core of many meta data applications, such as view integration, mediated schema creation for data integration, and ontology merging. This paper examines the problem of merging two models given correspondences between them. In particular it concentrates on the associativity and commutativity of Merge, which are crucial properties if Merge is to be composed with other operators.
Rachel Pottinger, Philip A. Bernstein

Web and Services

The History of WebML: Lessons Learned from 10 Years of Model-Driven Development of Web Applications

This work presents a retrospective analysis on the conceptual modeling language for Web applications called WebML, which was first defined about 10 years ago. WebML has been an incubator for research on conceptual modeling, exploiting existing experiences in the field and continuously addressing new challenges concerning abstractions, methods, tools, and technologies. People working on WebML are spread among universities, technology transfer centres, and a spin-off. In this paper, we illustrate the history of WebML, we summarize the essence of the approach, and we sketch the main research branches that spawned from the initial proposal. We describe how new trends in research, application development, methodology, and tool prototyping led to the continuous growth of the modeling language.
Stefano Ceri, Marco Brambilla, Piero Fraternali

GAMBUSE: A Gap Analysis Methodology for Engineering SOA-Based Applications

The objective of business service analysis is to identify candidate business processes and services, and provide an in-depth understanding of their functionality, scope, reuse, and granularity. Unfortunately, many of today’s service analysis and design techniques rely on ad-hoc and experience-based identification of value-creating business services and implicitly assume a “blue sky” situation, focusing on the development of completely new services while offering very limited support for discovering candidate services from a varied inventory of pre-existing software assets. In this article, we introduce a novel business service engineering methodology that identifies and conceptualizes business services in a business domain. Moreover, our approach takes into account a realistic situation, in which pre-existing enterprise assets must be considered for reuse to implement fragments of the newly conceived business services. A running example is provided to exemplify our approach.
Dinh Khoa Nguyen, Willem-Jan van den Heuvel, Mike P. Papazoglou, Valeria de Castro, Esperanza Marcos

Web Service Composition via the Customization of Golog Programs with User Preferences

We claim that user preferences are a key component of effective Web service composition, and one that has largely been ignored. In this paper we propose a means of specifying and integrating user preferences into Web service composition. To this end, we propose a means of performing automated Web service composition by exploiting a flexible template of the composition in the form of a generic procedure. This template is augmented by a rich specification of user preferences that guide the instantiation of the template. We exploit the agent programming language Golog to represent our templates as Golog generic procedures, and we exploit a first-order preference language to represent rich qualitative, temporally extended user preferences. From these we generate Web service compositions that realize a given generic procedure, satisfying the user’s hard constraints and optimizing for the user’s preferences. We prove our approach is sound and optimal. Our system, GologPref, is implemented and interacts with services on the Web. The language and techniques proposed in this paper can be integrated into a variety of approaches to Web or Grid service composition.
Shirin Sohrabi, Nataliya Prokoshyna, Sheila A. McIlraith

Software and Requirements Engineering

Dealing with Complexity Using Conceptual Models Based on Tropos

Since its establishment, the major objective of the Tropos methodology has been to develop an approach for the systematic engineering of agent-oriented information systems. In this chapter we illustrate a number of approaches to deal with complexity, which address different activities in software development and are meant to be used in combination. We begin with handling complexity at the requirements level. In particular, we examine how modularization can be improved using some principles of Aspect-Oriented Software Development. We then examine how model-based testing, applied in parallel to requirements analysis and design, can support incremental validation and testing of software components, as well as help to clarify ambiguities in requirements. We also look at how Tropos can help to address complexity in the social context when making design decisions. Last but not least, we show how to tackle complexity at the process modelling level. We explore an iterative development extension to Tropos as well as perspectives taken from software project management. This allows us to deal with the complexity of large real-world projects.
Jaelson Castro, Manuel Kolp, Lin Liu, Anna Perini

On Non-Functional Requirements in Software Engineering

Essentially, a software system’s utility is determined by both its functionality and its non-functional characteristics, such as usability, flexibility, performance, interoperability and security. Nonetheless, there has been a lop-sided emphasis on the functionality of software, even though the functionality is not useful or usable without the necessary non-functional characteristics. In this chapter, we review the state of the art on the treatment of non-functional requirements (hereafter, NFRs), while providing some prospects for future directions.
Lawrence Chung, Julio Cesar Sampaio do Prado Leite

Reasoning About Alternative Requirements Options

This paper elaborates on some of the fundamental contributions made by John Mylopoulos in the area of Requirements Engineering. We specifically focus on the use of goal models and their soft goals for reasoning about alternative options arising in the requirements engineering process. A personal account of John’s qualitative reasoning technique for comparing alternatives is provided first. A quantitative but lightweight technique for evaluating alternative options is then presented. This technique builds on mechanisms introduced by the qualitative scheme while overcoming some problems raised by it. A meeting scheduling system is used as a running example to illustrate the main ideas.
Axel van Lamsweerde

Supporting Requirements Elicitation through Goal/Scenario Coupling

Goals have long been recognized to be an essential component of the Requirements Engineering (RE) process. They have proved to be an effective way to support a number of requirements engineering activities such as requirements elicitation, systematic exploration of design choices, checking requirements completeness, ensuring requirements pre-traceability, and helping in the detection of threats, conflicts, and obstacles and their resolution. The leading role played by goals in the RE process led to a whole stream of research on goal modeling, goal specification/formulation, and goal-based reasoning for the multiple aforementioned purposes. On the other hand, there is evidence that dealing with goals is not an easy task and presents a number of difficulties in practice. To overcome these difficulties, many authors suggest combining goals and scenarios. The reason is that they complement each other: there is a mutual mitigation of the difficulties encountered with one by using the other. The paper reviews various research efforts undertaken in this line of research. It uses L’Ecritoire, an approach that guides the requirements elicitation and specification process through interleaved goal modelling and scenario authoring, to illustrate the combined use of goals and scenarios to reason about requirements for the system-to-be.
Colette Rolland, Camille Salinesi

Enhancing Tropos with Commitments

A Business Metamodel and Methodology
This paper motivates a novel metamodel and methodology for specifying cross-organizational business interactions that is based on Tropos. Current approaches for business modeling are either high-level and semiformal or formal but low-level. Thus they fail to support flexible but rigorous modeling and enactment of business processes. This paper begins from the well-known Tropos approach and enhances it with commitments. It proposes a natural metamodel based on commitments and a methodology for specifying a business model. This paper includes an insurance industry case study that several researchers have previously used.
Pankaj R. Telang, Munindar P. Singh

Implementations


“Reducing” CLASSIC to Practice: Knowledge Representation Theory Meets Reality

Most recent key developments in research on knowledge representation (KR) have been of the more theoretical sort, involving worst-case complexity results, solutions to technical challenge problems, etc. While some of this work has influenced practice in Artificial Intelligence, it is rarely—if ever—made clear what is compromised when the transition is made from relatively abstract theory to the real world. CLASSIC is a description logic with an ancestry of extensive theoretical work (tracing back over twenty years to KL-ONE), and several novel contributions to KR theory. Basic research on CLASSIC paved the way for an implementation that has been used significantly in practice, including by users not versed in KR theory. In moving from a pure logic to a practical tool, many compromises and changes of perspective were necessary. We report on this transition and articulate some of the profound influences practice can have on relatively idealistic theoretical work. We have found that CLASSIC has been quite useful in practice, yet still strongly retains most of its original spirit, but much of our thinking and many details had to change along the way.
Ronald J. Brachman, Alex Borgida, Deborah L. McGuinness, Peter F. Patel-Schneider

The KBMS Project and Beyond

The Knowledge Base Management Systems (KBMS) Project at the University of Toronto (1985-1995) was inspired by a need for advanced knowledge representation applications that require knowledge bases containing hundreds of thousands or even millions of knowledge units. The knowledge representation language Telos provided a framework for the project. The key results included conceptual modeling innovations in the use of semantic abstractions, representations of time and space, and implementation techniques for storage management, query processing, rule management, and concurrency control. In this paper, we review the key ideas introduced in the KBMS project, and connect them to some of the work since the conclusion of the project that is either closely related to or directly inspired by it.
Vinay K. Chaudhri, Igor Jurisica, Manolis Koubarakis, Dimitris Plexousakis, Thodoros Topaloglou

Using the ConGolog and CASL Formal Agent Specification Languages for the Analysis, Verification, and Simulation of i* Models

This chapter describes an agent-oriented requirements engineering approach that combines informal i* models with formal specifications in the multiagent system specification formalisms ConGolog and its extension CASL. This allows the requirements engineer to exploit the complementary features of the frameworks. i* can be used to model social dependencies between agents and how process design choices affect the agents’ goals. ConGolog or CASL can be used to model complex processes formally. We introduce an intermediate notation to support the mapping between i* models and ConGolog/CASL specifications. In the combined i*-CASL framework, agents’ goals and knowledge are represented as their subjective mental states, which allows for the formal analysis and verification of, among other things, complex agent interactions and incomplete knowledge. Our models can also serve as high-level specifications for multiagent systems.
Alexei Lapouchnian, Yves Lespérance

