
About this Book

The Database and Expert Systems Applications (DEXA) conferences are mainly oriented towards establishing a state-of-the-art forum on database and expert system applications. But Practice without Theory has no sense, as Leonardo said five centuries ago. In this conference we attempt a compromise between these two complementary aspects. A total of 5 sessions are application-oriented, ranging from classical applications to more unusual ones in Software Engineering. Recent research aspects in databases, such as activity, deductivity and/or object orientation, are also present in DEXA 92, as well as the implications of the new "data models", such as the OO model and the deductive model, covered in the Modelling sessions. Other areas of interest, such as hypertext and multimedia applications, together with the classical field of Information Retrieval, are also considered. Finally, implementation aspects are reflected in very concrete fields. A total of nearly 200 papers from all over the world were submitted to DEXA 92; only 90 could be accepted. A Poster session has also been established. DEXA 90 was held in Vienna, Austria; DEXA 91 in Berlin, Germany; and DEXA 92 will take place in Valencia, Spain, where we are celebrating the discovery of the New World just five centuries ago, in Leonardo's age. Both the quality of the conference and the compromise between Practice and Theory are to the credit of all the DEXA 92 authors.

Table of Contents

Frontmatter

Invited Paper 1991

Applications (I)

Broad Path Decision in Vehicle System

This paper discusses how to decide a path for a vehicle to move to its destination. The space where vehicles move around is composed of space objects. The objects are structured in a hierarchical tree named a space tree; objects at a higher level denote broader areas than those at a lower level. In this paper, we present a method by which each vehicle decides a path whose part nearer to the vehicle is more detailed and whose part farther from the vehicle is broader. A broader path is one that includes objects at higher levels. We also model the movement of the vehicle as an open nested transaction. Each vehicle locks the objects in a decided path before passing through them, at a level depending on the distance from the vehicle: the vehicle locks objects nearer to it more strongly. The strength of the lock on an object represents how surely the vehicle can pass through it. We also discuss the deadlock problem among multiple vehicles.

S. Misbah Deen, Satoshi Hamada, Makoto Takizawa
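The distance-dependent detail and lock-strength idea in the abstract above can be sketched, purely illustratively, in Python; the thresholds, object names, and lock modes below are assumptions for the sake of the example, not taken from the paper.

```python
# Illustrative sketch only: path segments nearer the vehicle are planned
# at a deeper (more detailed) space-tree level and locked more strongly.
# All thresholds, names and lock modes here are assumed.

def detail_level(distance, max_level=3):
    """Nearer segments get a deeper, more detailed tree level."""
    if distance < 10:
        return max_level
    if distance < 50:
        return max_level - 1
    return 1

def lock_strength(distance):
    """Objects near the vehicle are locked more strongly."""
    return "exclusive" if distance < 10 else "intention"

# (distance to object, space object) pairs along a decided path
path = [(5, "room-A"), (30, "corridor-2"), (120, "building-B")]
plan = [(obj, detail_level(d), lock_strength(d)) for d, obj in path]
print(plan)
```

The nearby object is planned at the finest level and held exclusively, while the distant one is represented coarsely with a weak lock that can still be refined later.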

MAITE: An Operator Assistance Expert System for Troubleshooting Telecommunications Networks

MAITE is a knowledge-based operator assistance system for troubleshooting a large telecommunications network. MAITE is capable of responding in real time to multiple alarms coming from different digital switches in the network. It provides the telecommunications operator with advice and guidance for diagnosis and repair tasks. MAITE is based on a multi-agent blackboard architecture. Agents embody general knowledge of diagnosis and repair or specific expertise in the various components of the network. They can work in parallel, interpreting alarms and performing diagnostic and repair tasks. Agents' reasoning methods include temporal, model-based, and expectation-based reasoning. Coordination is assured by organisation and agent specialization. Overall control of the system is achieved by a control agent that supervises communication with external units and integrates partial results from specialist agents. The system is robust, can provide detailed explanations of its actions, and is easily extensible.

Francisco J. Garijo, Donn Hoffman

Heuristics for Maintaining Term Structures for Relaxed Search

The ESPRIT project REBOOT addresses software reuse from a large library of object-oriented components. A faceted classification scheme with structured term spaces is used to provide relaxed search, in order to find the most appropriate candidates for reuse according to the users' requests. We envision that one of the main problems with reuse based on a large component library will be the maintenance of this library, not just in terms of the quality of the individual components, but even more in terms of the library structure. With our approach, one of the most important tasks of the librarian will be to maintain the term spaces, which will have to undergo almost continuous change due to the quick evolution of the software component domain. In this paper, we suggest some heuristics for assisting the librarian with term space maintenance. Although suggested specifically for reuse libraries, we believe that many of these heuristics will be applicable in a wider information retrieval context.

Guttorm Sindre, Even-André Karlsson, Patricia Paul

Deductive Databases and Database Programming Languages

An Equational Constraint Logic approach to Conceptual Modelling

One relevant approach for developing advanced database and knowledge-based systems advocates the use of logic programming technology [17, 28, 30]. Recently, the logic programming paradigm has been generalized to the framework of Constraint Logic Programming (CLP), a generic scheme for the introduction of constraints in logic programming defined in [24, 25] and refined in [20]. In this framework, logic and equational programming have been integrated to define, as an instance of the scheme, a new declarative programming language, CLP(H/E), specialized in solving equations in equational theories [1, 2]. In this paper we present, using the experimental language CLP(H/E), equational constraint logic programming techniques as an effective tool to support database applications. These techniques are able to operate with running specifications in two useful modes, parsing mode and generating mode, since they are equipped with an inferential capability which can be used for plan generation [28, 33, 41].

María Alpuente, María José Ramírez

Financial Security Analysis and Portfolio Management with a Deductive Database System

Knowledge-based information systems have been increasingly recognized as tools for planning and cost reduction for various services in the financial world. We report on the development of a knowledge-based information system which acts as an aid for investment decision making. Furthermore, we show how a modular design approach is ideally suited for developing a knowledge-based information system (IS) in the financial security domain. The paper concentrates on the design technique used in developing the system with the Syllog Expert Database System.

Christoph Lell

Processing Knowledge Base Systems Containing Rules with Complex Bodies

The use of negative information and the unrestricted use of quantifiers in the body of a rule enhance the expressive power of deductive database systems. We consider a more relaxed version of conventional deductive databases, called an acceptable database, which generalizes the class of allowed databases by accepting occurrences of universal quantifiers for variables occurring in positive literals and existential quantifiers for variables occurring in negative literals in the body of an IDB rule. We propose a system capable of handling such extended databases in a deductive paradigm. We pay special attention to negative rules and queries, developing a technique in which the relevant attribute domain set from which to remove the negated set is automatically deduced.

Jonghoon Chun, Lawrence J. Henschen

An Analytical Method to Allocate Processors in High Performance Parallel Execution of Recursive Queries

This paper presents an analytical method to allocate processors for high-performance parallel execution of recursive queries. The proposed method consists in computing (i) the number of tuples deduced by the transitive closure, taking into account, where applicable, the propagation of selection clauses, and (ii) the economical number of processors. The main contributions of this paper are an efficient method to compute the economical number of processors, and a performance analysis which reveals the influence of DT on the number of processors allocated, the response time, and the generation of an execution plan.

A. Hameurlain, F. Morvan, E. Ceccato
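The trade-off behind an "economical" processor count can be illustrated with a toy cost model: dividing the deduced tuples across more processors shrinks the per-processor work but adds coordination overhead. The model below is a deliberately simple assumption for illustration, not the paper's analytical method.

```python
# Toy cost model (an assumption, not the paper's): response time is the
# work split across p processors plus a per-processor coordination cost.
def economical_processors(tuples, overhead=5.0, max_p=64):
    """Return the processor count minimising the toy cost model."""
    cost = lambda p: tuples / p + overhead * p
    return min(range(1, max_p + 1), key=cost)

print(economical_processors(2000))   # minimises 2000/p + 5p
```

Beyond the minimiser, adding processors costs more in coordination than it saves in divided work, which is the sense in which a processor count can be "economical" rather than merely maximal.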

Implementation Aspects (I)

Data Placement Strategy for a Parallel Database System

The EDS machine is a high-performance parallel computer system being developed as part of an ESPRIT II project. The database system for EDS is its main application and provides an extended relational system with object-oriented capabilities. One aspect which is crucial to performance is data placement. For several reasons, existing data placement strategies are not ideal for the EDS system, and a new approach based on two existing strategies will be used. This approach is presented here with some comparisons of its performance.

M. B. Ibáñez-Espiga, M. H. Williams

A Performance Comparison for Priority-Based Protocols in Real-Time Databases

Real-time database systems must maintain consistency while minimizing the number of transactions that miss their deadlines. To satisfy both the consistency and the real-time constraints, there is a need to integrate synchronization protocols with real-time priority scheduling protocols. This paper describes two models used to avoid the problem of priority inversion, priority inheritance and abortion, and makes a performance comparison. An analysis of the results, together with conclusions, is presented.

P. Blesa, R. J. Vidal

Duplicates Detection, Counting, and Removal

The need to detect and eliminate duplicate elements arises in many applications such as the processing of relational database operations, comparison of complex objects, transitive closure, and protocol verification. Given a multiset, the process of detecting duplicates, eliminating them, sorting the remaining distinct elements, and counting the number of occurrences of each in the multiset is a computationally intensive task, especially when complex objects are involved. In this paper, the computational complexity of performing such a task is addressed. The computational complexity study is based on a modified comparison-based decision tree. It is shown that with a multiset M(n,L) of n elements, L of which are distinct, we can associate a decision tree of L!S(n,L) external nodes, where S(n,L) is the Stirling number of the second kind. This result suggests that the upper and lower bounds for performing such a task differ from those for sorting a set of the same cardinality. It is also shown that a comparison-based sorting algorithm can be adapted to perform such a task, and the analytical performance of the adapted sorting algorithm is addressed.

M. Abdelguerfi
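The combined task the abstract describes, detect duplicates, remove them, sort the distinct elements, and count multiplicities, can be illustrated in a few lines with a hash-based multiset count; this is a minimal sketch of the task itself, not the comparison-based algorithm analysed in the paper.

```python
# Illustrative sketch of the combined task (not the paper's algorithm):
# a hash-based count yields distinct elements and multiplicities in one
# pass, then sorting only the distinct elements finishes the job.
from collections import Counter

def dedup_sort_count(multiset):
    """Return the sorted distinct elements with their multiplicities."""
    counts = Counter(multiset)
    return sorted(counts.items())

print(dedup_sort_count([3, 1, 2, 3, 1, 3]))   # → [(1, 2), (2, 1), (3, 3)]
```

The paper's decision-tree bound applies to comparison-based approaches; the hash-based pass above sidesteps comparisons entirely, at the cost of requiring hashable elements.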

Techniques for Indexing Large Numbers of Constraints and Rules in a Database System

This paper addresses the problem of indexing a large number of rules and constraints in a database system. The objective of such indexing is to be able to quickly identify the relevant constraints and rules, rather than search sequentially, every time insertions, deletions and modifications are made to the database. The constraints are represented as SQL queries which must return null answers. Each constraint is parsed and stored in one or more indexes. Algorithms for index maintenance and constraint retrieval are given.

Akhil Kumar
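The purpose of the indexing described above can be sketched with an inverted index from the tables and columns a constraint touches to the constraint itself, so that an update retrieves only the relevant constraints instead of scanning all of them. The structure and names below are illustrative assumptions, not the paper's index organization.

```python
# Hypothetical sketch: map each (table, column) a constraint touches to
# the constraint's id; an update then looks up only relevant constraints.
from collections import defaultdict

index = defaultdict(set)          # (table, column) -> constraint ids

def register(cid, touched):
    """Record which (table, column) pairs the constraint refers to."""
    for table, column in touched:
        index[(table, column)].add(cid)

def relevant(table, column):
    """Constraints that must be re-checked when this column changes."""
    return index[(table, column)]

register("c1", [("emp", "salary")])
register("c2", [("emp", "salary"), ("dept", "budget")])
register("c3", [("dept", "budget")])
print(sorted(relevant("emp", "salary")))   # → ['c1', 'c2']
```

An update to `emp.salary` triggers only `c1` and `c2`; `c3` is never examined, which is exactly the sequential-search cost the paper's indexes aim to avoid.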

Information Retrieval (I)

Structuring Text within a Relational System

We introduce a preprocessor that uses a relational system and semantic modeling to impose structure on text. Our intent is to show that document retrieval applications can be easily developed within the relational model. We illustrate several operations that are typically found in information retrieval systems, and show how each can be performed in the relational model. These include keywording, proximity searches, and relevance ranking. We also include a discussion of an extension to relevance based on semantic modeling.

David A. Grossman, James R. Driscoll
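One of the operations mentioned above, proximity search, reduces to a self-join once text is decomposed into a relation of word occurrences. The sketch below emulates such a relation in Python; the schema `(doc_id, position, word)` and the data are illustrative assumptions, not the paper's design.

```python
# Illustrative sketch: with text stored as (doc_id, position, word)
# tuples, a proximity search is a self-join on doc_id with a distance
# predicate on position. Schema and data are assumed for illustration.
occurrences = [
    (1, 0, "relational"), (1, 1, "model"), (1, 7, "text"),
    (2, 3, "relational"), (2, 9, "model"),
]

def near(w1, w2, k):
    """Docs where w1 and w2 occur within k positions of each other."""
    return sorted({d1 for (d1, p1, a) in occurrences
                      for (d2, p2, b) in occurrences
                      if a == w1 and b == w2 and d1 == d2
                      and abs(p1 - p2) <= k})

print(near("relational", "model", 2))   # → [1]
```

In SQL the same query would join the occurrence relation with itself on the document id and filter on the position difference, which is the sense in which the retrieval operation stays inside the relational model.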

The INQUERY Retrieval System

As larger and more heterogeneous text databases become available, information retrieval research will depend on the development of powerful, efficient and flexible retrieval engines. In this paper, we describe a retrieval system (INQUERY) that is based on a probabilistic retrieval model and provides support for sophisticated indexing and complex query formulation. INQUERY has been used successfully with databases containing nearly 400,000 documents.

James P. Callan, W. Bruce Croft, Stephen M. Harding

A Simple Speech-Act-Theoretic Indexing Scheme for Documents

This paper introduces and reports on the ICT framework for indexing document content. The framework is based on speech act theory, and has been implemented in a working prototype system for the U.S. Coast Guard. Very much remains to be learned about the framework and how best to use it, but the idea is very promising, and is fully formalized, thereby facilitating automated processing.

Steven O. Kimbrough

Integrity Maintenance

Foundations of Simplified Integrity Checking Reviewed

We review some fundamental concepts of simplified integrity checking in deductive databases. This is done at a sufficiently abstract level that we do not have to depend on any particular method. Our main focus is on notions of soundness and completeness of simplified integrity checking. These two notions relate the declarative and procedural concepts of satisfaction: the former is defined by well-established views of integrity satisfaction, the latter by the methods themselves. We also distinguish between the generation and evaluation phases of integrity checking, and apply the concepts of soundness and completeness to each of the two phases.

M. Celma, L. Mota

Performance Evaluation of Integrity Control in a Parallel Main-Memory Database System

Integrity control is an important task of modern database management systems. One of the key problems impeding its general use in real-world applications is the high processing cost associated with integrity constraint enforcement. Notwithstanding this observation, little attention has been paid in the literature to the performance evaluation of integrity control mechanisms. This paper addresses this issue and has a threefold message. Firstly, it shows that integrity control can easily be integrated into a parallel, main-memory database system. Secondly, it demonstrates that parallelism and main-memory data storage are effective ways to deal with costly constraint enforcement. Thirdly, the overhead of constraint enforcement is shown to be acceptable compared to the execution of transactions without integrity control. The conclusion is drawn that integrity control is quite feasible in high-performance database systems.

Paul W. P. J. Grefen, Jan Flokstra, Peter M. G. Apers

Two-level Modeling Schemes for Temporal-Spatial Multimedia Data Representation

New multimedia applications require support not only for fixed is-a and part-of relationships but also for various dynamic relationships, including complicated composition. Furthermore, multimedia data cannot be fully described at the class level, because new data in multimedia applications have unique structures and various relationships with other objects at the instance-object level. Existing data models, however, are weak at representing such complex relationships. This paper presents ORM, a two-level modeling scheme for multimedia composition in time and space with respect to object relationships. ORM uses tagged aggregation to abstract temporal-spatial structures at the class level, and also supports descriptions of detailed relationships at the instance-object level. In particular, the proposed model is extensible and can easily be enhanced to meet multimedia application needs. We also propose a conceptual diagram, ORD, for graphically describing a multimedia database modeled with ORM. This diagram overcomes the shortcomings of the ER diagram, and can easily be converted into a logical object-oriented multimedia schema.

Yunmook Nah, Sukho Lee

Object-Oriented Modelling (I)

Designing Object-Oriented Databases with a Semantic Data Model and a Rule Model

This paper proposes a design methodology for object-oriented databases, based on a semantic data model combined with a rule model. The semantic data model is used to describe the static information of an application. First-order logic formulas of the rule model are used to define general constraints and behavioral information. A language called External Definition Language (EDL) is proposed for the simultaneous design of structural and behavioral information within extended structures of a semantic data model. EDL structural definitions are mapped into an object-oriented database by preserving all access paths associated with EDL classes. Rule mapping generates methods associated with the corresponding objects in the target object-oriented database.

Zahir Tari

A Framework for Managing Schema Versioning in Object-Oriented Databases

An approach to schema modification management is presented which also takes into account that alternative schema perspectives seem to be needed in addition to revision-like changes. The primary goal is to support change transparency when the schema is modified, such that existing application programs and objects need not be affected by changes to the schema. The approach is based on explicitly distinguishing Type (the external interface of objects) and Class (object implementation and representation): a Type is implemented by one or more Classes. Structural (representation) changes are accomplished by introducing new Classes (or Class versions), which does not require recompilation. Behavioral (Type) changes affect applications, as there may be incompatibilities between the Type version of a database object and the application. The approach is based on automatically maintaining, for each Type, the union of all its versions' interfaces, and on providing mechanisms for defining new Class revisions that also implement this interface. In this way Type version incompatibilities are externally invisible. The paper presents the general mechanisms for managing schema changes and identifies how they are utilized.

Erik Odberg

OO-Method: An Object-Oriented Methodology for Software Production

The Automated Programming Paradigm appeared in the eighties as an alternative for the production of reliable software. We present in this paper a methodology (OO-Method) that covers the classical development phases of Analysis, Design and Implementation, using an Object-Oriented Model to properly express Information System concepts within a working environment based on the Automated Programming Paradigm. A formal Specification is obtained from the Analysis results, using Oasis [Ram2, Pas2] as the Specification Language. Since a Specification in Oasis is equivalent to a formal First Order Theory, we can generate in an automated way the corresponding Logic Program (equational or clausal), yielding an executable System Prototype. We thus provide a Software Development Life Cycle that, starting from a precise Information System description, leads us to a final Software Product in accordance with the System's properties.

Oscar Pastor, Arturo González

Legal Systems

The Juricas System in a Social Security Environment

After bringing six ready-made legal computer advice systems to market, the Centre for Computers and Law is now selling its ‘empty’ JURICAS shell as well. The advantage is that from now on, experts within organisations can build their own computer advice systems, which makes it possible to build a system that fits all specific needs. One such project concerns a system for a social security service in the Netherlands, which developed a JURICAS system on the basis of the Dutch Social Security Act. Implementation of the system at this social security service took place in May 1991. The intention of the authors was not to make a system that gives advice in all cases, but to build one that could support the user in making routine decisions. The results of the social security system seem positive. However, the authors and the users want the Centre for Computers and Law to make some technical changes to the JURICAS system. Attention also has to be paid to the protection of the data within the organization.

C. van Noortwijk, P. A. W. Piepers, J. G. L. van der Wees

Slate: Specialized Legal Automated Term Extraction

SLATE is a new research project designed to extract important information from legal text. This information can then be used to automatically determine the value of various attributes significant to legal professionals such as the identity of the parties involved in a legal dispute, the outcome of a case and the amount of damages awarded. Several applications of the automatically extracted information are discussed including the efficient generation of legal case-based reasoning advisory systems, automated database maintenance and the provision at low cost of cross-reference information about cited and citing cases.

Cal Deedman, Daphne Gelbart, Morris Coleman

Improving Automated Litigation Support by Supplementing Rule-Based Reasoning with Case-Based Reasoning

We are currently developing concepts and software tools designed to aid legal practitioners in the process of statutory interpretation: the process of determining the meaning of a statute or regulation and applying it to a particular set of facts. This is being attempted through the integration of database and expert system technologies. Case-Based Reasoning (CBR) is being used to model legal precedents while Rule-Based Reasoning (RBR) modules are being used to model the legislation and other types of causal knowledge. It is hoped to generalise these findings and to develop a formal methodology for integrating case-based databases with rule-based expert systems in the legal domain.

George Vossos, John Zeleznikow

Envilex: An Integrated Environmental Law Expert System

This paper illustrates an initial implementation of an integrated expert system on environmental law. It is a system that, apart from offering the performance of a typical expert system, gives the user the possibility of using additional non-formalised knowledge through an advanced information retrieval system. The system, based on Italian regulations relating to air pollution, uses the FLEX LPA shell implemented in Prolog, a hybrid expert system toolkit providing features and functionality normally associated with high-cost expert systems. It represents a natural evolution after lengthy experimentation with expert systems in the legislative domain. This implementation is also not aimed merely at lawyers, the user group our research is usually addressed to, but, given the domain, at large sectors of the community.

Antonio Cammelli, Fiorenza Socci

User Interfaces

Proteus: a concept browsing interface towards conventional Information Retrieval Systems

In accessing unstructured information, the representation of the relevant concepts is a fundamental issue for sharing knowledge between indexers and users, and the browsing of concepts can play an important role in the user interface. We present an implementation of an interface that makes it possible to interact with a conceptual structure of the documents and to navigate graphically over the thesauri available for the different fields.

O. Signore, A. M. Garibaldi, M. Greco

Semantic Constraints in a Syntactic Parser: Queries-Answering to Databases

In this paper we discuss the problem of resolving word-sense ambiguity in database queries. We present a method for introducing semantic constraints into a Spanish syntactic parser based on Modular Logic Grammar. The efficiency of this method is discussed in comparison with the method of semantic constraints based on McCord type-trees.

L. Moreno, M. Palomar

Rapid Prototyping of Medical Graphic Interfaces

Rapid prototyping is an adequate methodology for the development of window-based graphic interfaces because it allows an effective integration of experts in human factors and potential users into the production of the interaction software. We have implemented an environment for the rapid prototyping of medical graphic interfaces based on the characteristic architecture of knowledge-based systems. It provides a specification language for the declarative representation of the elementary dialogue components, including the visual aspect, behavior and content of the windows. This representation is directly executable and permits an incremental approach to the final interface. The environment provides a universal control mechanism and facilities for integrating the interface with relational databases and expert systems. The paper presents a general description of our system and details the method used for representing the interaction.

R. Marín, M. Taboada, R. P. Otero, A. Barreiro, J. Mira, A. Delgado

A Standard Naming Method of Data Elements Using a Semantic Dictionary

This paper describes the problems in naming data elements and offers solutions to these problems. When using the popular Durell’s Naming Standard, it is difficult to compile a semantic dictionary necessary for that standard manually because there is no clear criterion for word segmenting and for deciding word types. As a result, the reliability of the dictionary suffers and the results of checking by Durell’s Naming Standard with this dictionary are not good enough. We have developed a tool for the automatic construction of the semantic dictionary that utilizes a natural language processing system named INDEXER. This tool eliminates the errors in the dictionary. Using it, non-specialists can compile a reliable, consistent dictionary automatically. We also augment Durell’s Naming Standard which allows us to naturally name our own data elements.

Jun Sekine, Masaru Nakagawa, Haruo Kimoto, Kiyoshi Kurokawa

Multimedia Database and Hypertext (I)

Linearising Hypertext through Target Graph Specifications

Linearisation is a special case of the navigation problem for hypertext, whereby an orthodox linear text must be produced from a hypertext network. Complications arise from the intricacy of the linkage structure present in a typical hyperdocument. In addition, what constitutes an acceptable linearisation may depend heavily on the intended audience for the final linear text: some readers may want only a précis of the hypertext content, whereas others may wish to see an almost complete exposition of the textual material. For example, in many cases it is normal practice to include both a management summary and a full text; a variably linearised hypertext could in principle address both these and intermediate needs. In this paper we describe an approach to the development of hypertext linearisation algorithms that is capable of dealing with such problems. The approach insists that a specification of the target document structure be given as input together with the hypertext to be linearised. The purpose of this specification is to prescribe the relationships between the textual nodes of the hypertext that are to be included in the actual linear version.

T. J. M. Bench-Capon, P. E. S. Dunne, G. Staniford
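As a purely illustrative sketch (not the paper's algorithm), one simple linearisation is a topological ordering of the link graph restricted to the nodes a target specification selects, which yields a précis when only summary-level nodes are chosen. The graph and node names below are assumptions.

```python
# Illustrative: linearise a hypertext by topologically ordering the
# sub-graph induced by the nodes the target specification selects.
# Graph and names are assumed, not taken from the paper.
from graphlib import TopologicalSorter

links = {                       # node -> nodes it links to
    "intro": {"methods", "summary"},
    "methods": {"results"},
    "results": {"conclusion"},
    "summary": {"conclusion"},
    "conclusion": set(),
}

def linearise(links, selected):
    """Linear reading order over only the selected nodes."""
    sub = {n: {m for m in links[n] if m in selected}
           for n in links if n in selected}
    # TopologicalSorter treats values as prerequisites, so link targets
    # are emitted first; reverse to get source-to-sink reading order.
    return list(reversed(list(TopologicalSorter(sub).static_order())))

# A précis: the specification selects only summary-level nodes.
print(linearise(links, {"intro", "summary", "conclusion"}))
```

Real hyperdocument links are not generally acyclic, which is one reason the paper needs more than a plain topological sort; the sketch only shows how a target selection shapes the resulting linear text.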

Graphical Structure-Oriented Search in a Hypertext System

In this paper, we describe the query module of the hypertext system CONCORDE and its user interface. We have created a facility for searching the structure of the hypertext in addition to common content-related retrieval facilities. The user interface enables the user to create a query by direct manipulation of a graphical query interface, so the user is able to exploit hypertext structures even for querying, not just for browsing.

M. Hofmann, S. Schmezko

HyperIBIS — a Tool for Argumentative Problem Solving

Many problems in the fields of politics and design cannot be solved by straightforward algorithmic methods. These problems are characterized by the collision of different judgements and different interests; typically they are solved by weighing up the different arguments and finding compromises, through a discourse between the groups involved. A hypertext-based system called HyperIBIS is described that supports the structuring and documentation of discourse on such problems. The system rests upon a theoretical approach called IBIS, first introduced by Rittel and Kunz.

Severin Isenmann

Neural Networks and Image Data Management

This paper describes the research prototype of CHINOOK, a postrelational DBMS being implemented at the University of Colorado at Colorado Springs. CHINOOK is intended to manage ultra-large databases of digitized images and digitized one-dimensional data as well as text and tables. This paper discusses our neural-network-based approach to image data management in CHINOOK. We report our initial results on neural-network-generated image signatures, which form the basis of CHINOOK's image search and retrieval mechanism. We expect the image signatures to support the retrieval of images by their content.

D. Z. Badal

Applications (II)

A Cooperative-Architecture Expert System for Solving Large Time/Travel Assignment Problems

In this paper, we consider the problem of assigning tasks to operators according to a large set of constraints that include time sensitivity and travel optimization. Our practical instance of this problem combines computational complexity (scheduling the tasks for one technician is NP-hard and little is known about getting a solution [CK92]) and size (around 20000 tasks stored in a database). We present a solution that has been successfully implemented and tested, which we describe as an expert system where expertise is applied to constraint satisfaction. By combining a constraint solver and a rule-based domain expert, we have obtained a satisfactory level of efficiency while keeping the flexibility and extensibility of a constraint-based approach.

Yves Caseau, Peter Koppstein

Fuzzy-Phone: A Fuzzy Logic based Tool

This paper presents some concepts on how to plan, manage and automate tasks and activities with specific characteristics by describing these tasks in terms of fuzzy logic, transforming the descriptions into Prolog clauses, and scheduling the tasks with a meta-interpreter. Fuzzy-Phone is a tool which applies this concept to the field of telephone call management. It integrates in a graphical user interface the specification of telephone calls, the core system for scheduling the calls to be instantiated, and an operational component for the automatic realization of telephone connections. We focus in this paper on the second item, the core system for scheduling. Fuzzy-Phone is based on the fuzzy concept theory founded by Zadeh in the mid-sixties and uses processing mechanisms often denoted as fuzzy logic [1].

Dimitris Karagiannis, Rainer Staudte, Hans Grünberger
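The flavor of scheduling from fuzzy task descriptions can be illustrated with a single fuzzy membership function ranking calls by urgency. The membership shape, names, and data below are assumptions for illustration; they are not Fuzzy-Phone's actual representation.

```python
# Illustrative only: a fuzzy "urgency" membership over time-to-deadline,
# and a crisp ranking of calls by it. All names and data are assumed.
def urgency(minutes_until_deadline):
    """Linear membership: 1.0 at the deadline, 0.0 beyond 60 minutes."""
    return max(0.0, 1.0 - minutes_until_deadline / 60.0)

calls = {"call-a": 10, "call-b": 45, "call-c": 70}   # minutes left
ranked = sorted(calls, key=lambda c: -urgency(calls[c]))
print(ranked)   # → ['call-a', 'call-b', 'call-c']
```

In a fuller fuzzy scheduler, several such memberships (urgency, importance, caller availability) would be combined before defuzzifying into a concrete call order; this sketch shows only the single-criterion case.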

A Motion Picture Archiving Technique, and Its Application in an Ethnology Museum

This paper introduces database utilities for archiving motion pictures, and describes a prototype system called M-CIDB (Motion Color Image Data Base). There are two crucial problems in implementing motion picture archives: (1) scene indexing and database creation is time-consuming and error-prone, and (2) searching for scenes of interest is slow and awkward. The following techniques for solving the problems are proposed: (1) scene information structures including scene hierarchy and time-axis-based annotations for fast and random scene access, (2) automatic scene change detection and automatic representative frame image selection from all the original consecutive frame sequences, and (3) an interactive scene information editor for verifying and editing automatically detected scene information through a graphical user interface. A prototype system has been built and applied to an ethnology museum’s motion video in a preliminary feasibility study.

Jung-Kook Hong, Junichi Takahashi, Masahiro Kusaba

Hypertext, Databases and Computer Aided Instruction: where is the match?

In this paper, we summarize the results of our work on using the hypertext concept for Computer Aided Instruction purposes. First, we present an architecture for a hybrid system in which hypermedia information can be combined with directive forms aimed at direct instruction and control in a given field. This architecture enables educational hypermedia software to act sometimes as a learning system and sometimes as a teaching one [1,2]. Next, considering that standardization is one of the major aspects in the evolution of hypermedia systems [9], we study a formalization of the educational hypermedia systems previously defined: the ACHE model. This formalization, based on the Dexter Hypertext Reference Model [6], takes into account the specificities of educational hyperdocuments (such as dialogue entities or CAI-links) [3,4]. Issues in the application of the ACHE model are discussed from both the information-exchange and the data-storage points of view.

T. Beltran

Advanced Databases

A Transition Net Formalism for Deductive Databases Efficiently Handling Querying and Integrity Constraint Aspects

This paper presents a formalism derived from high level Petri nets for deductive database representation called Deductive Transition Net (DTN). We study the DTN semantics and we show how this formalism can be used either for query answering or for integrity constraint evaluation. We propose some relaxation techniques that improve the operational efficiency of the model.

Kamel Barkaoui, Noureddine Boudriga, Amel Touzi

PI-DDBS: A Deductive Data Base System Based on C-PROLOG and INGRES

This paper describes the implementation techniques of the deductive database system PI-DDBS, which is based on a tight physical-level coupling of C-PROLOG with the RDBMS INGRES. Based on an analysis of the C-PROLOG and INGRES systems, techniques for joining inference with database querying and for optimizing the execution order of deductive queries are designed to raise the efficiency of deductive query processing. A natural language understanding environment has been developed on top of PI-DDBS.

Zhu Yi-fen, Li Mei, Chen Fu-an

A Fuzzy Database System Considering Each User’s Subjectivity

Computers that can handle imprecise information are increasingly needed. As one approach, fuzzy database systems which allow imprecise data have been proposed. In these systems, imprecise data are represented by means of membership functions. However, imprecise information is usually subjective and differs among individuals. Since this subjectivity has not been considered, users could not effectively use the imprecise data stored in conventional systems. In this paper, we present a fuzzy database system which interprets each user's subjectivity. The system enables users to store subjective data, and each user can refer to the data in terms of his or her own subjectivity. Moreover, our system allows the use of non-numerical attributes, consequently enlarging the area of application.
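
The per-user interpretation of an imprecise term can be sketched with user-specific membership functions; the users, heights and threshold below are invented for illustration and are not taken from the paper:

```python
def rising(a, b):
    """Membership function rising linearly from 0 at `a` to 1 at `b`
    (a right-shoulder function, suitable for terms like 'tall')."""
    def mu(x):
        if x <= a:
            return 0.0
        if x >= b:
            return 1.0
        return (x - a) / (b - a)
    return mu

# Hypothetical per-user interpretations of the fuzzy term "tall" (heights in cm).
subjectivity = {
    "alice": rising(160, 175),
    "bob":   rising(170, 185),
}

def select_tall(people, user, threshold=0.5):
    """Return the people whom `user` would call 'tall' to at least `threshold`."""
    mu = subjectivity[user]
    return [name for name, height in people if mu(height) >= threshold]
```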

H. Tashiro, T. Nomura, N. Ohki, T. Yokoyama, R. Kamekura, Y. Matsushita

Implementation Aspects (II)

Concurrency Control in the Interpolation-Based Grid File

The problem of supporting concurrent operations in Interpolation-Based Grid Files is studied. A systematic method for detecting conflict between processes is defined based on the organizational properties of this type of file. One important characteristic of these structures is the dynamic partitioning of the data space into regions and the assignment of a unique identifier to each region. This identifier then acts as a surrogate for the region and its spatial properties. High process throughput is achieved by minimizing the number of locked regions. We show that only one or two locks are required in general and that the probability of three locks becoming necessary is negligible. Algorithms to search for, insert and delete data elements are presented and shown to be correct, deadlock-free, and non-preemptive, based on the restrictions imposed on the locking order and the reachability mechanism. Furthermore, we present a compression procedure that provides storage maintenance of the data structure. In our scheme, all processes (readers, inserters, deleters and compressors) can overtake each other.

M. Aris Ouksel, A. Ghazal, Otto Mayer

Parallel Simulated Annealing for Efficient Data Clustering

In this paper, we investigate parallel simulated annealing strategies for data clustering. We introduce an asynchronous error-free technique which distributes tuples of a database among processors such that moves are proposed and evaluated independently in distinct processors. Synchronization among the processors has been reduced significantly. Such an approach may thus achieve a speedup that is linear in the number of processors. Experimental results are also included. Our analysis considers both the quality of the optimizations and the efficiency of the execution strategies. Compared with a sequential simulated annealing technique, the proposed asynchronous approach achieves the same quality with a very impressive speedup. The asynchronous method was implemented on a 32-node shared memory system to investigate its scalability. Our extensive experimental measurements indicate that the technique and the shared memory paradigm form a coherent combination for efficient parallel implementation of simulated annealing for data clustering. Another feature of the asynchronous strategy is that it also conforms nicely to the computation requirements of message-passing systems.
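
The sequential baseline that the paper parallelizes can be sketched as follows; the cost function, move set and cooling schedule here are generic textbook choices, not the paper's exact configuration:

```python
import math
import random

def sa_cluster(points, k, steps=5000, t0=1.0, alpha=0.999, seed=0):
    """Simulated annealing for clustering 2-D points into k clusters.

    State: an assignment of each point to a cluster.
    Move:  reassign one randomly chosen point.
    Cost:  total squared distance of points to their cluster mean.
    """
    rng = random.Random(seed)
    assign = [rng.randrange(k) for _ in points]

    def cost(a):
        total = 0.0
        for c in range(k):
            members = [p for p, ci in zip(points, a) if ci == c]
            if not members:
                continue
            mx = sum(p[0] for p in members) / len(members)
            my = sum(p[1] for p in members) / len(members)
            total += sum((p[0] - mx) ** 2 + (p[1] - my) ** 2 for p in members)
        return total

    cur, t = cost(assign), t0
    for _ in range(steps):
        i = rng.randrange(len(points))
        old = assign[i]
        assign[i] = rng.randrange(k)
        new = cost(assign)
        # Metropolis criterion: accept improvements, and worse moves with a
        # probability that shrinks as the temperature cools.
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new
        else:
            assign[i] = old
        t *= alpha
    # Final greedy pass: keep any single-point reassignment that lowers cost.
    improved = True
    while improved:
        improved = False
        for i in range(len(points)):
            for c in range(k):
                if c != assign[i]:
                    old = assign[i]
                    assign[i] = c
                    new = cost(assign)
                    if new < cur:
                        cur, improved = new, True
                    else:
                        assign[i] = old
    return assign, cur
```

The asynchronous version described in the abstract would let each processor propose and evaluate such moves independently on its share of the tuples.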

Kien A. Hua, Wen K. Lee, S. D. Lang

Efficient Management of K-Level Transitive Closure

A k-level transitive closure of a directed graph is the set of all pairs of vertices (x, y) such that there exists at least one path from x to y of length d, d ≤ k. Multiple edges between a pair of vertices are allowed. This paper presents a data structure to store a materialized k-level transitive closure such that retrievals and updates, with path information kept, can be performed efficiently.
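
The definition can be made concrete with a bounded breadth-first search that materializes the closure naively; the paper's contribution is an efficient storage and update structure, which this sketch does not attempt:

```python
from collections import defaultdict

def k_level_closure(edges, k):
    """All pairs (x, y) with a path of length at most k from x to y.

    `edges` is an iterable of (source, target) pairs; multiple edges
    between the same pair of vertices are simply collapsed here.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
    closure = set()
    for start in list(adj):
        # Bounded breadth-first search: expand the frontier k times.
        frontier, seen = {start}, set()
        for _ in range(k):
            frontier = {w for v in frontier for w in adj[v]} - seen
            seen |= frontier
        closure |= {(start, v) for v in seen}
    return closure
```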

Keh-Chang Guh, Pintsang Chang

Information Retrieval (II)

BRANT — An Approach for Knowledge Based Document Classification in the Information Retrieval Domain

Classical approaches, namely term weighting and statistical document clustering, have not achieved substantial breakthroughs in information retrieval research. Information retrieval systems must be designed to overcome the syntactic barrier in document representation. This paper discusses the combination of classical term-weighting approaches and probabilistic inference concepts to form a powerful semantic document classification model. BRANT is a prototype that adapts inference strategies from medical differential diagnosis to the area of information retrieval.

Dieter Merkl, A. Min Tjoa, Stefan Vieweg

A Connexionist Model for Information Retrieval

This paper describes a connectionist architecture for an information retrieval system based on neural networks. This approach allows the definition of a "dynamic" thesaurus, in order to improve the construction of a document base and to perform associative information retrieval. We suggest a set of rules for activating cells in order to start the activation/propagation process on which the associative retrieval is based. A learning mechanism is also triggered. Together, these two notions make possible the automatic reformulation of queries and the dynamic restructuring of the information base.

M. Boughanem, C. Soulé-Dupuy

Workbench for Technical Documentation

Technical writing in the documentation area is becoming an increasingly important economic factor. Well-designed, easily understandable documents need to be available in several languages without delay. This paper describes several tools that ease technical documentation and translation by supporting terminology work. We introduce a term bank, an explanation browser, a computer cardbox and a translation memory. With the explanation browser the user can navigate through a network of interlinked texts which form a subject field. The CardBox is a computerized card file which enables the user to define, write and link cards. The Translation Memory stores texts together with their translations in a statistical model and returns the translations of sentences that have been translated before.
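
The translation memory component can be sketched as follows; the abstract does not detail the statistical model, so word-overlap (Jaccard) similarity stands in here as a plausible, clearly-labeled substitute:

```python
class TranslationMemory:
    """Minimal translation memory: stores sentence pairs and retrieves the
    stored translation of the best-matching source sentence."""

    def __init__(self):
        self.pairs = []

    def add(self, source, target):
        self.pairs.append((source, target))

    def lookup(self, sentence, min_sim=0.8):
        """Return the stored translation of the most similar source sentence,
        or None if nothing is similar enough."""
        words = set(sentence.lower().split())
        best, best_sim = None, 0.0
        for src, tgt in self.pairs:
            src_words = set(src.lower().split())
            sim = len(words & src_words) / len(words | src_words)
            if sim > best_sim:
                best, best_sim = tgt, sim
        return best if best_sim >= min_sim else None
```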

Renate Mayer

Temporal Aspects

Temporal Object-Oriented Database: (II) Implementation

We have developed a temporal object model which is based on the object-centered object-orientation paradigm, to describe object evolution with time. A database based on this model has been designed and it is called a temporal object-oriented database (TOODB). In our model, the whole life of a real-world entity is modeled by a temporal object. In this paper, we concentrate on one implementation issue of the TOODB: clustering temporal object histories. After identifying new characteristics of objects which evolve in the context of time, a scheme for clustering historical data of a temporal object has been developed. Structural and temporal information about temporal objects, as well as users’ access patterns have all been taken into account in our scheme. The evaluation model introduced has captured various aspects that impact the performance of a clustering scheme. Through simulation experiments, the importance for selection of a suitable temporal partition in the optimization has been demonstrated.

Yan-Zhen Qu, Fereidoon Sadri, Pankaj Goyal

A Relational Algebra as a Query Language for Temporal DATALOG

This paper introduces a temporal relational algebra as a query language for temporal deductive databases, i.e., Temporal Datalog programs. In Temporal Datalog programs, temporal relationships among data are formalized through temporal operators, not by explicit reference to time. The minimum model of a given Temporal Datalog program is regarded as the temporal database the program models intensionally. Users query temporal deductive databases using a temporal relational algebra (TRA), which is a point-wise extension of the relational algebra. During the evaluation of TRA expressions, portions of temporal relations are retrieved from a given temporal deductive database as needed. Bottom-up evaluation strategies such as the fixed-point computation can be used to compute portions of temporal relations over intervals. An extension of Temporal Datalog with generic modules is also proposed. Through modules, temporal relations created during the evaluation of TRA expressions may be fed back to the deductive part for further manipulation.
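
The point-wise extension can be illustrated directly: model a temporal relation as a mapping from time points to ordinary relations, and lift each relational operator to apply at every time point. The Python encoding is ours, not the paper's notation:

```python
def pointwise(op):
    """Lift an ordinary relational-algebra operator to temporal relations.

    A temporal relation is modeled here as {time_point: set_of_tuples};
    the point-wise extension applies `op` separately at each time point.
    """
    def temporal_op(*rels):
        times = set().union(*(r.keys() for r in rels))
        return {t: op(*(r.get(t, set()) for r in rels)) for t in times}
    return temporal_op

# Point-wise union and selection built from the ordinary operators.
t_union = pointwise(lambda r, s: r | s)

def t_select(pred, rel):
    return pointwise(lambda r: {tup for tup in r if pred(tup)})(rel)
```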

Mehmet A. Orgun, William W. Wadge

Evolving Information Systems: Beyond Temporal Information Systems

Nowadays, in order for an organisation to be competitive, it must be able to adapt quickly to its dynamic environment. In this paper, we discuss the need for information systems which are capable of evolving to the same extent as organisations do. Requirements of evolving organisations on their information systems are identified, followed by alternative approaches to adequate information systems development life cycles. We adopt an evolutionary approach resulting in so-called evolving information systems. On the basis of requirements and an architecture for these evolving information systems, the distinction from traditional information systems is explained. Traditional information systems, including temporal information systems, turn out to be degenerate cases of our evolving information systems. A conceptual framework for update in evolving information systems is derived from the requirements; an event level, a recording level and a correction level are distinguished in this framework.

E. D. Falkenberg, J. L. H. Oei, H. A. Proper

Object Oriented Modelling (II)

Schema Extensions in OODB

In this paper, we extend the schema evolution operations of a schema manager called Omnis, taking semantic aspects into account. The proposed object view integration method (OVI) is intended to help the DBA and Omnis users to integrate new classes into an OODB schema, with automatic enforcement of database consistency and non-redundancy. OVI uses the generic data model of Omnis as its conceptual model to gain independence from the persistent object-oriented programming language. It also exploits an OO methodology to provide extensibility. It is implemented in C++.

Patrick Valduriez, Marie-Jo Bellosta, Fabienne Viallet

Uncertainty Modeling In Object-Oriented Geographical Information Systems

The object-oriented data model has been utilized for geographical information applications, chiefly because of the rich set of modelling primitives it offers. Despite its representational capabilities, it still falls short in describing data typically seen in geographic information systems, i.e., imprecise, spatial or continuous-valued data. In this paper, an extension to the object-oriented data model that permits the representation of imprecise data is discussed in the context of a soil information system. It is shown that this extension accommodates the querying and manipulation of spatial and continuous-valued data.

R. George, A. Yazici, F. E. Petry, B. P. Buckles

Visual Object Modelling

The major challenge for current software engineering is to provide methodologies and tools for automated software construction. This paper presents a methodology for visual modelling based upon the object-oriented approach. The basic idea is to describe the structure and behaviour of complex systems as a hierarchy of abstractions and views. From a certain point of view, reality can be seen as a system of flows. In this context it is useful to distinguish between three kinds of objects: firstly, objects representing the structure of the real world (Base Objects); secondly, objects reflecting the flow of control (Control Objects); thirdly, objects providing a means of communication between the model and external sources of information, such as humans and machines. This paper describes a visual language for modelling reality and mapping it onto these three kinds of objects.

Reinhard Schauer, Siegfried Schönberger

Graphical Interfaces

IQL: A Graphical Interface for Full SQL Queries

IQL (Interactive Query Language) provides interactive formulation and optimization of relational queries. IQL's query interface supports fast and easy formulation of full SQL queries using a novel object-oriented representation of query concepts. A query is a graph made up of a set of tree-like expressions (predicates, arithmetic and boolean expressions) specified at different graphic levels on subsets of relations of a database schema. The power and flexibility of query graphs derive from the independent metamodelization of the implied formalisms, so complex queries may be expressed graphically in their entirety. Querying performance is improved in several ways: syntax and consistency are verified continuously during query formulation; abstraction mechanisms such as the naming of queries (or expressions) and the use of parameters permit modular and generic reuse, even across different database schemas; and query optimization mechanisms are used to generate compiled versions of lower-cost equivalent queries for a given operational context (i.e., logical access and data schemas). IQL has been implemented as part of an integrated environment for the design of databases using Graphtalk™, the object-oriented graph-based development tool from Rank Xerox.

Hugo B. Ramos

Structure Modeling Hypergraphs: a Complete Representation for Databases

The advent of low-cost high-resolution graphical workstations has led to a new generation of interaction tools (often called visual languages) in information systems, where the use of graphics enhances the quality of the interaction. Unfortunately, in contrast to traditional (textual) query languages, no general framework exists in which an arbitrary visual language can be formally represented in order to evaluate its expressive power. Hypergraphs appear to be good candidates for a very natural mathematical counterpart of arbitrarily complex visual structures. In this paper we present a particular kind of hypergraph, the Structure Modeling Hypergraph, as a representation tool able to capture the basic features of existing data models and well suited to defining a set of basic graphical interaction primitives, in terms of which more complex interaction mechanisms may be described and compared.

Tiziana Catarci, Laura Tarantino

A Framework for Using Ad Hoc Queries to Geographic Databases as Visual Components of Interactive Maps

Hypermedia is a promising approach to handling multimedia data, but end-users can only obtain pre-arranged information through fixed links. For geographic information systems, however, users may need more freedom in retrieving information. Though ad hoc queries to databases enable users to retrieve information more freely, conventional query languages cannot be used as easily as clicking visual buttons associated with particular information, as in hypermedia. This paper first clarifies the process of creating and updating interactive maps based on geographic databases. Then a framework for using ad hoc queries to geographic databases as abstract visual primitives of hypermedia is presented. In a prototype system, the results obtained from geographic databases through ad hoc queries are presented to users as interactive maps composed of display objects, such as visual buttons and slides. The display objects have their own behaviours in response to users' messages such as mouse clicks. Conditions in ad hoc queries are also visualized as display objects on the maps, together with data selected from the databases.

Masatoshi Arikawa

A Comparison of Interfaces: Computer, Designer, and User

This paper compares three interfaces: the first is a traditional information retrieval system handling SGML-tagged text; the second, a menu-driven interface with multi-font output; the third, a hypertext interface where some graphical capability is sacrificed for improved information structure, reduced information load, and enhanced user control. The hypertext interface is innovative because it is a hybrid system: a hypertext front-end and post-processor are fully integrated with the original information retrieval package and the SGML text. The retrieved text is converted to hypertext format just before presentation to the user, who sees a consistent hypertext interface for query formulation, input, and browsing.

Eve Wilson

Active Aspects

Deriving Active Rules for Constraint Maintenance in an Object-Oriented Database

This paper presents a framework for translating constraints, declaratively stated by the user, into event-condition-action rules. Constraints are specified using a constraint-equation approach and are defined as properties of the attributes. A set of rules is generated based on the Horn-logic counterpart of the constraint equations. In an object-oriented context, the description of the event modifying a constraint includes not only the name of the method but also the name of the class. This idiosyncrasy of the object-oriented paradigm accounts for many of the differences from how this problem has been tackled in deductive and relational databases. Examples are shown from an early implementation in ADAM, an object-oriented database in Prolog.
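
The general shape of such a translation can be sketched as follows; the class, attribute, setter naming scheme and repair action are all hypothetical illustrations, not ADAM's actual rule language:

```python
class Rule:
    """A minimal event-condition-action rule."""
    def __init__(self, event, condition, action):
        self.event, self.condition, self.action = event, condition, action

def rules_for_constraint(cls_name, attribute, check, repair):
    """Derive ECA rules that re-check a declarative constraint after any
    event that may modify `attribute`. In an OO setting the event names
    both the class and the method (here a hypothetical setter)."""
    event = f"after {cls_name}.set_{attribute}"
    # Condition: the constraint is violated; action: run the repair.
    return [Rule(event, lambda obj: not check(obj), repair)]

def dispatch(rules, event, obj):
    """Fire every rule whose event matches and whose condition holds."""
    for r in rules:
        if r.event == event and r.condition(obj):
            r.action(obj)

# Example: a non-negative balance constraint on a hypothetical Account class.
class Account:
    def __init__(self, balance):
        self.balance = balance

rules = rules_for_constraint("Account", "balance",
                             check=lambda a: a.balance >= 0,
                             repair=lambda a: setattr(a, "balance", 0))
acct = Account(-5)
dispatch(rules, "after Account.set_balance", acct)  # repair fires, balance reset to 0
```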

Oscar Díaz

Towards a design theory for database triggers

Advances in the area of active database management systems require the development of a trigger design theory which guides the user in the definition of well-behaved trigger-based applications. The development of such a theory requires a formal definition of trigger semantics. This paper describes a framework for such a formalisation of triggers: the parameters of trigger execution and the options for setting them are identified and discussed. Furthermore, the development of a trigger design theory is initiated with the formulation of a sufficient condition for trigger independence.

A. P. J. M. Siebes, M. H. van der Voort, M. L. Kersten

Making a Federated System Active

A federated database system consists of autonomous component database systems that participate in the federation to allow controlled sharing of their data. It usually provides a global integrated view, a federated schema, that describes the data stored in the component database systems. A very important aspect of this type of system is maintaining the autonomy of the components while preserving the correct semantics of the federated schema. To achieve this goal, we propose a system that responds automatically, without user intervention, to design changes made in a component database that affect the federated schema. Further, our system gives each component that decides to participate in a federation the opportunity to choose between assuming the default control offered to it and customizing the control by declaratively defining the system responses. Both cases rely on the use of event-condition-action rules supported by the system. We also present a federated system architecture in which this system is a component.

J. M. Blanco, A. Illarramendi, J. M. Perez, A. Goñi

A behaviour rule based approach for active object oriented DBMS design

The aim of this paper is to show how the basic principles of expert systems can be used to ease the design of an object-oriented DBMS that includes active concepts. An active object is an object some of whose methods are performed not after an explicit invocation but after a triggering event occurs, thus becoming an independent agent in the database. In a first, conceptual stage, we formalize object activity with what we call behaviour rules. Those rules are then transcribed into production rules that can be managed by an inference engine. This engine differs slightly from one dedicated to an expert system, though it follows the main principles of the deduction cycle. The inference is realized by querying the rule base and the database.

Anne Tchounikine, Claude Chrisment

Multimedia Database and Hypertext (II)

Design and Specification of EVA: a language for multimedia database systems

We present EVA, a language that deals with the temporal and spatial aspects of multimedia information retrieval and delivery, in addition to providing the usual capabilities of ordinary database languages. EVA is an extension of the query language Varqa and provides the following capabilities for the management and retrieval of multimedia information: query operators, update operators, computational operators, screen management operators, and temporal operators. EVA is a functional language whose notation is based on that of conventional set theory. It is formally defined using the mathematical framework of many-sorted algebra. EVA is object-oriented and supports objects, object classes, and relationships between objects (in the form of functions). The current implementation of EVA deals with textual data, images, and conventional data.

F. Golshani, N. Dimitrova

Using Hypertext to Interface to Legal Knowledge Based Systems

In this paper we describe a prototype system which uses hypertext as an interface to a legal knowledge based system concerning Mobility Allowance, a UK Social Security benefit. We believe that the prototype demonstrates the advantages of this approach in that it allows the user to have more control over the course of the interaction and to obtain a deeper understanding of the domain than is provided by conventional expert-system-style explanations. The prototype combines the strength of a KBS (the ability to provide answers to specific questions) with the strength of hypertext (the freedom to explore the domain as the user wishes) in a single system, so as to offer a seamless and natural mode of interaction.

Paul Soper, Trevor Bench-Capon

Information Structuring for Intelligent Hypermedia: A Knowledge Engineering Approach

The next generation of hypermedia systems will undoubtedly incorporate many concepts and techniques taken from the field of artificial intelligence and knowledge-based systems, in order to add more intelligence to the basic hypertext model. In this paper, we discuss where this need for more intelligent hypermedia systems comes from, by looking at the major authoring and reading problems characteristic of hypermedia. We then point out the similarity in system architecture which exists between expert systems and intelligent hypermedia systems, and discuss how a knowledge engineering approach to hypermedia information structuring can help us address these authoring and reading problems. Finally, we describe the use of such an approach in our implementation of IKON, a prototype third-order hypermedia system based on the so-called Model-Map-View-Praxis (MMVP) conceptual system architecture.

Hans C. Arents, Walter F. L. Bogaerts

ASMMA: A Simple Multi-Media Application — A Modeling Approach

The recent boom of hypermedia and hypertext applications is not dictated solely by the emerging technology in the area of computer hardware. The fact that almost everybody is talking about hypermedia lies in the relevance of this technology to a wide variety of research areas of computer science. This observation led us to consider the introduction of a quite common modeling scheme to the multimedia application business. Our engagement resulted in a useful modeling scheme that combines a number of features known to database modelers. As discussed in this paper, this scheme copes successfully with a number of problems pertaining to the organization of the information space of multimedia applications (and beyond).

L. Marinos, K. van Goor

Applications (III)

ProdIS: A Database Application for the Support of Computer Integrated Manufacturing

This paper describes the development and implementation of a relational database application for the KEBA company in Linz. The purpose of the database is to manage data which is important for computer integrated manufacturing and to act as an intelligent interface between the construction, production and quality control departments. Supporting corporate-wide data consistency is a further main objective of the project. Dynamic data structures at the attribute level and a rule-based interface were used to achieve these goals.

Günter Steinegger, Rupert Hohl, Roland Wagner, Josef Küng

A Prototype Expert Database System for Programming Classical Music

This paper describes the design and implementation of a prototype expert database system for programming classical music, as a case study to demonstrate the applicability of the expert database system approach to programming decisions. The paper describes the process of acquiring and representing expert knowledge; the development, testing, and implementation of the prototype; and the lessons learned during the course of developing the system.

Magdi N. Kamel, Ronald A. Boxall

A deductive query processor for statistical databases

Statistical data are complex data structures which traditional data models are not sufficient to represent. This complexity transfers directly to statistical data manipulation, imposing on the user a great effort in specifying detailed algorithms. A high-level language would be desirable, by which the user specifies only the starting data and the logical structure of the result. Such conceptual interaction requires an automatic mapping that transforms a high-level operator into a number of elementary operations. This paper describes a statistical data model suitable for representing the complexity of statistical data structures and for being processed by high-level operators. This is followed by the description of a knowledge-based mechanism to deduce the elementary operations underlying a high-level operator. Finally, a deductive query processor, StEM, is presented, which provides transparency of complex data manipulations using both the above data model and the deduction mechanism.

Carla Basili, Leonardo Meo-Evoli

Hyper-Agenda: A system for task management

This paper presents the Hyper-Agenda project, a system for task management. The main concepts of this system, entities, tasks and agendas, are discussed and argued. Hyper-Agenda allows the user (1) to express models of his tasks, (2) to instantiate these models in order to carry out real tasks, (3) to plan instances of tasks into agendas, and (4) to follow their execution. We show that the modeling of tasks demands tree-like complex objects and that the structuring of entities leads to the notion of point of view. Finally, we show that agendas are derived objects very close to the database view concept. The general expression of the constraints that allow planning is briefly discussed. The prototype of Hyper-Agenda is presented along with an example of a multiple-actor task.

C. Boksenbaum, P. Déhais, S. Hammoudi, F. Acosta

Knowledge-Based Systems

Supporting Browsing of Large Knowledge Bases

This paper examines the problem of browsing in knowledge-based systems. Current systems provide inadequate support for browsing large knowledge bases. Three areas where this support must be enhanced are the user interface, the facilities for querying the knowledge base and the facilities for exploring the knowledge base. Techniques to provide this enhanced support are described, and a prototype system for browsing large frame-based knowledge bases is presented.

T. Patrick Martin, Hing-Kai Hung, Chris Walmsley

Knowledge Engineers, Do Not Neglect Terminology

Collection and formalization of expert terminological information by the knowledge engineer

The knowledge engineer, whose task it is to formalize expert knowledge, would be well advised to have at his disposal a terminological knowledge base of the domain concerned, built from the material gathered during interviews. We present some typical aspects of expert language: classifying expressions, periphrastic reformulation, stereotypes, analogy, metonymy, metaphor and neological creation. With a view to systematizing this collection of terminological information, we suggest the knowledge engineer should be furnished with a computerized tool: a computer-assisted knowledge extraction, collection, formalization and validation system. This system, which we present in a theoretical form, is founded on the concept of terminological semantic networks (TSNs). TSNs facilitate the classification, description and inheritance of attributes, and the coherence management of terminological data by detection of similitudes. This autonomous system is intended to supplement any knowledge or fact base of a standard decision-support expert system.

Gabriel Otman

Building Knowledge Based Systems for Maintainability

For the practical use of KBSs to become widespread in the 1990s, sound software engineering principles need to be followed. One important aspect of this is maintainability. In this paper some of the results of the Maintenance Assistance for Knowledge Engineers (MAKE) project are described. The aim of the project is to address the important role of maintenance in KBSs, in particular KBSs based on written sources, of which legal and quasi-legal systems provide the prime example. These systems can be viewed at several different levels: the source level, the knowledge representation level and the target executable representation level. It is suggested that the key to the maintenance of such systems is to maintain the intermediate knowledge representation rather than patching the code used in the target executable representation. Maintenance is thus a matter of knowledge representation rather than programming. Further, maintenance can be greatly enhanced by using a suitable development environment and methodology, supported by a set of maintenance tools that focuses on this intermediate representation and its relation to the sources, to increase understandability and hence adaptability. One such environment, the MAKE Authoring and Development Environment (MAUDE), is described in this paper. It has been developed as part of the MAKE project and is designed to encourage the production of systems which can be maintained through an intermediate representation. MAUDE is supported by a suite of maintenance tools aimed at increasing the understandability of the intermediate representation and at carrying out various validation, verification and housekeeping tasks to enhance maintainability. The MAKE suite of maintenance tools is also described. Both the MAUDE environment and methodology and the tools have been used to produce a pilot KBS for British Coal's Insurance and Pensions Division. The system is currently undergoing trials, but some encouraging results have been received, indicating that a sound footing has been established for further work.

Frans Coenen, Trevor Bench-Capon

Principal Components and the Accuracy of Machine Learning

Induction algorithms could be positioned somewhere between witchcraft and science, since extracting rules (the essence of data semantics) from an incomplete set of examples is a devil's task. On the other hand, there has always been some romanticism in the interpretation of eigenvalues. The authors of this paper try to show that the above-mentioned notions work together quite well, and quite surprising results have been achieved. Not only can the number of decision variables be reduced but, in most tested cases, the prediction accuracy improves (despite the reduced number of variables). The results could be implemented as a preprocessor for most of the induction algorithms employed for automatic rule generation (one of the most important components of any modern expert system).

Zbigniew Duszak, Waldemar W. Koczkodaj
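The abstract does not give the procedure, but the principal-component preprocessing it alludes to can be sketched as follows; the function name and toy data are illustrative, not from the paper:

```python
import numpy as np

def pca_preprocess(X, k):
    """Project examples onto the k principal components of X.

    X: (n_samples, n_features) array of decision-variable values.
    Returns the reduced (n_samples, k) array plus the projection
    matrix W, so the same transform can be applied to unseen data
    before it is fed to an induction algorithm.
    """
    Xc = X - X.mean(axis=0)                 # centre each variable
    cov = np.cov(Xc, rowvar=False)          # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: symmetric matrices
    order = np.argsort(eigvals)[::-1]       # largest eigenvalues first
    W = eigvecs[:, order[:k]]               # top-k eigenvectors
    return Xc @ W, W

# Toy data: 3 decision variables, the third nearly redundant,
# so 2 components capture almost all the variance.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X[:, 2] = X[:, 0] + 0.01 * rng.normal(size=100)
Z, W = pca_preprocess(X, 2)
print(Z.shape)  # (100, 2): reduced input for rule induction
```

Whether accuracy improves after the reduction, as the paper reports for most tested cases, depends of course on the data set and the induction algorithm used.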

Object Oriented Representation

A Grammar-Based Framework for Object Dynamics

A grammar-based categorization of practical systems for modeling the dynamic behaviour of objects is presented. A framework based on Augmented Transition Networks is proposed as an ideal modeling tool for such systems, affording both formal analyzability and the functionality needed to model practical systems. The flexibility of the framework in achieving useful tradeoffs between complexity and modeling power is demonstrated by presenting and critically analyzing some relevant algorithms.

Ranabir Gupta, Gary Hall

Adopting the Network Model for Indexing in Object-Oriented Databases

The relational data model has been shown to be very suitable for implementing many object-oriented concepts. It is, however, not effective in supporting object navigation, which is an important factor for access support in object-oriented applications. We propose a hybrid data model, and an indexing technique based on it, in which the relational model is used for the storage of data and the network model is used for generalized complex object indexing to exploit semantics. Such a symbiotic approach can have the advantages of both models. We have evaluated the performance of our index scheme using an analytical model. A comparison is made with some existing indexing schemes, both in terms of space and time complexity. It shows that the proposed method is better for queries involving complex objects with large sub-objects.

Kien A. Hua, Chinmoy Tripathy

Integrated Tools for Object Oriented Persistent Application Development

This paper presents the main characteristics of an integrated environment for object-oriented, persistent application development. We first discuss the rationale of our approach and give our analysis of current database systems and persistent languages. Using such systems or languages is still a very difficult task, because persistent application development combines the complexity of database schema design with a software engineering problem. In the framework of the Aristote Project, we describe the main components of an integrated environment that helps the designer to define types, schemas and methods and to structure his/her application. The main idea is to achieve a good degree of declarativity and to provide general tools to generate specific code for target (object-oriented or extensible) DBMS. The issue of interoperability is also discussed.

M. Adiba, C. Collet, P. Dechamboux, B. Defude

Integrating Object-Oriented Databases into Applications — A Comparison

The market for object-oriented database systems (OODBS) is of increasing interest to engineering application developers. OODBS offer new technical features more adapted to engineering applications, such as support for complex data and highly concurrent work. One major advantage of OODBS and object technology, according to their vendors, is a significant improvement in development productivity. ABB Corporate Research Center recently evaluated several commercially available OODBS. One subject of the tests was productivity. This paper summarizes our experiences*. Using the integration of persistent data storage into an existing C++ application as an example, we point out where the integration of the data manipulation language (DML) and the programming language is not seamless (enough), which issues limit the reuse of existing classes, and which tools are provided to help the developer. We conclude by suggesting points where productivity factors are not yet taken into account.

K. Erni

Applications in Software Engineering

Elen Prototype: an Active Hypertext System for Document Management in Software Engineering

In this paper we present the Elen prototype, the objective of which is to support maintenance activities for large software systems. We take advantage of hypertext systems in order to manage links between objects (programs and documentation) and to support navigation and interface features. To overcome the insufficiency of hypertext in supporting the structural and dynamic aspects of software engineering documents, we define a new underlying data model. The proposed data model is object-oriented; it provides the hypertext with the abstraction mechanisms necessary to model structured documents and the semantics of the composition links between them. With this model the “nodes” of the hypertext become active, in the sense that actions can be executed on nodes of a given type, and side effects of modifications are propagated to other nodes, which may react. These latter aspects are supported at the data model level using methods which constitute triggers associated with objects. This paper focuses on the presentation of the dynamic aspects of the model and on the Elen prototype implementing the model features.

Sahar Jarwa, Marie-France Bruandet

A Hypertext Information System for Reusable Software Component Retrieval

Software component reusability in an industrial context requires integrating and acting on a large amount of information about software objects (programs, functions, requirements, design documents, diagrams, figures, pictures,...) gathered in a single database. Software engineering activities (specification, design, programming,...) imply implementing tools to retrieve an object by using this information, which characterizes its content. The problem, therefore, is to organize this information database so as to allow the user (software designer, maintenance operator,...) to locate as quickly and easily as possible the information corresponding to the needs he expresses. In this paper, we present a reusable object retrieval system handling a large hypermedia database through a visual representation of the objects and their associations. Our model is based on content analysis, automatic classification, and hypertext browsing and navigation. This environment allows information to be integrated incrementally, at any stage of the database manipulation, associative networks between objects (thesaurus, conceptual graph, component similarity,...) to be designed, and direct interaction to be offered to the user. SYRIUS is a hypertext information retrieval system using natural-like language queries, possibly combined with multicriteria retrieval. Its friendliness and high interactiveness lie in the visual interface, which supplies a graphic representation and direct manipulation of any information structure. Network structures are indeed used for displaying and defining intercomponent associations, associative thesauri of terms, classification criteria, and so on. An evaluation of the system shows the interest of such an approach, which helps to tackle the difficulties of orientation and navigation in large and complex information databases. Its implementation [8] validated these concepts.

Florence Sedes

Hypertext for Software Engineering: Automatic Conversion of Source Code and its Documentation into an Integrated Hypertext

Without the right tools it will be increasingly difficult for software engineers to manage the extensive program sources and related documentation material of large software systems. In this paper, we propose to convert the program sources, inline documentation and additional technical papers of a released software version into hypertext and to store them in a common database in order to facilitate integrated management. In particular, the goal is to make explicit the numerous relationships between program and text passages, but also within the programs and within the inline and the separately kept documentation, so that the links generated during this process can be used for the maintenance of the software system. The automatic conversion into hypertext is managed by a special tool that processes the program sources in principally the same manner as the documentation: in the first step, the input material is partitioned into logically coherent units according to a formal structure description depending on the programming language (for the sources) as well as on the text processor and the style (for the documentation). These units form hypertext nodes and later serve for the presentation on the screen. In the second step, these nodes and smaller parts of them are interconnected by links. By this procedure, program sources and their documentation are closely combined.

F. Sarre, A. Myka, U. Güntzer
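The two-step conversion described above (partition into nodes, then interconnect by links) can be sketched in miniature. The regex, sample source and function name below are illustrative assumptions, not the authors' tool:

```python
import re

def build_hypertext(source, unit_re=r'^def (\w+)'):
    """Partition source text into hypertext nodes (one per unit
    matched by unit_re) and link each node to every other unit
    whose name appears in its body."""
    # Step 1: slice the input into logically coherent units.
    matches = list(re.finditer(unit_re, source, re.M))
    nodes = {}
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(source)
        nodes[m.group(1)] = source[m.start():end]
    # Step 2: interconnect the nodes by links wherever one unit's
    # body refers to another unit's name.
    links = {name: sorted(other for other in nodes
                          if other != name and other in body)
             for name, body in nodes.items()}
    return nodes, links

src = """\
def parse(line):
    return tokenize(line)

def tokenize(line):
    return line.split()
"""
nodes, links = build_hypertext(src)
print(links)  # {'parse': ['tokenize'], 'tokenize': []}
```

A real converter would of course use a proper structure description per language and also link the nodes to the documentation, as the abstract describes.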

Distributed Aspects

An Efficient Load Balancing Strategy for Shared-Nothing Database Systems

This paper deals with load balancing in shared-nothing database systems. We introduce the notion of a cell as the unit of data partition and load balancing. Since the number of items that need to be examined during load balancing is significantly reduced, our technique provides a very impressive improvement over traditional approaches.

Kien A. Hua, Chiang Lee, Honesty C. Young

Automation of Control and Data flow in Distributed Application Systems

This paper considers the control and data flow of well-structured procedures in distributed application systems. At the control flow level, an application-oriented cooperation model is used to model well-structured cooperative work in distributed applications. At the data flow level, a customizable data management mechanism passes data between activities and provides the data necessary for activity execution. The cooperation model requires procedure-oriented data delivery and data passing between activities.

Berthold Reinwald, Hartmut Wedekind

MO2: An Object Oriented Model for Database and Parallel Systems

In this paper, we are interested in the actor placement problem on a distributed architecture (transputer networks and workstation networks) and in the management of communications between the different active entities on the network. The main feature of the suggested placement method is the use of basic elements called forms in order to group the actors of an application. The combination of the basic forms (pipes, matrices, trees) makes it possible to construct various application architectures. Forms are managed in the same way as the other objects supported by the system. Hence, there are three types of objects to be placed on the network: actors, forms and virtual communication channels. The aim of the method is to find a compromise between the policy of minimizing the communication cost and the policy of balancing the load of the network. Forms can be dynamically moved over time; therefore applications such as neural network applications can be easily supported.

A. Attoui, Ph. Homond

Modelling

Modelling Knowledge Systems

A uniform approach to modelling data, information and knowledge is given. One feature of this approach is that items of data, information and knowledge can be combined with one another in a natural way. Another feature is that a single principle of normalisation is introduced which may be applied to data, information and to knowledge. It is proposed that this unified approach to modelling provides the foundation for the development of powerful modelling tools for knowledge systems.

John Debenham

A Data Model Capturing the History of Texts

A definition is presented of a database universe permitting storage of historical evolution of document texts. The need for storage of this kind of information arises in certain areas of juridical research. The definition offers elegant specifications of complex constraints and complex transaction functions, using no heavier tools than intuitive set theory.

F. T. A. M. Pieper

Some Rules for Handling Derivable Data in Conceptual Data Modeling

Mixing original and derivable data in ER modeling causes problems concerning the manipulation and the integrity of databases. Therefore these two sorts of data should be kept apart. In order to treat derivable data adequately, we classify them according to the kind of rules that are used to infer them. Whereas certain types of derivable data should be completely excluded from ER designs, others have to be defined explicitly. Up to now, there have been no means to express them in the ERM. We suggest extending the ERM in two ways: first, derivable entity and relationship sets can be drawn in such a way that they can be distinguished from original ones; second, derivation rules can be written down in adjusted forms of predicate logic or SQL.

Otto Rauh

Office Information Systems

Backtracking Office Procedures

The office cooperation system ProMInanD supports cooperative office work by means of electronic circulation folders. Based on an abstract description of the office task and knowledge of the organizational structure, a folder is navigated through the organization from one office worker in charge to the next. A substantial part of ProMInanD's functionality is the handling of exceptional situations. The topic of this paper is a method by which an office task can be returned to a previous state while preserving system consistency, either to process it in a different way or to cancel it altogether. The solution is based on a compensation method and offers a rollback operation for a step, for a step sequence, and for the total cancellation of an office task. The realization of this backtracking method is discussed in the context of a single office task and of an office task family.

P. Vogel, R. Erfle

Knowledge-based Audit Support

In this paper we describe a formal model, and a design based on that model, of a system which offers knowledge-based support both for the specification of the object of audit and for the specification of the planning of the audit of that object. This model can be compared to a Conceptual Model in the world of Information Systems. The object of audit includes databases containing the auditee's registrations of economic events. Our system design is currently applied to the development of operational products, i.e. CAATs (Computer Assisted Audit Techniques), such as an inquiry-based Expert System to determine the audit plan.

P. I. Elsas, R. P. van de Riet, J. J. van Leeuwen

Integration of Expert and Database Systems

Building Intelligent Interfaces to Relational Data Bases

The problem of designing interfaces for formulating intelligent and flexible queries to a relational database is dealt with in this paper. Such queries are really effective only if the user does not need a particular language and if they can truly represent his objectives. An approach based on a development tool (Shell) in a Macintosh environment is proposed; this Shell allows user interfaces based on a Deductive Processor to be built for a particular database.

Antonio Boccalatte, Massimo Paolucci

Experiments on Coupling Expert and Database Systems

The paper describes an extended rule-based expert system with the capability to invoke external programs and to access a set of database files. The necessity of integrating an expert system with databases as well as with external programs (e.g. on-line sensor data input) has been recognized during real application development. The particular solutions and the experience gained, even in a simple MS-DOS environment, are illustrated by examples.

Vladimír Mařík, Jiří Lažanský, Tomáš Vlček, Werner Retschitzegger

Multimedia Database and Hypertext (III)

Electronic Product Catalogues — a Hypermedia Application with a Dedicated Development Tool

Electronic product catalogues (EPC) as a value-added service for presenting, explaining, and selecting products are becoming more and more important. Especially systems combining multimedia and knowledge-based techniques seem to be appropriate to meet the different requirements. This article describes an EPC developed by practitioners of Olivetti/TA Triumph-Adler and scientists of the Bavarian Research Center for Knowledge-Based Systems (FORWISS). The system contains the company's new range of portable computers and integrates hypermedia techniques with an object-oriented product database. Based on the experiences made in building this catalogue, a dedicated construction set has been developed.

J.-S. Breuker, D. Lödel, P. Mertens, M. Ponader, S. Thesmann, I. Büttel-Dietsch

Hypertext Browsing and Probabilistic Network Searching

As more and more hypertext systems become available, the concept of browsing in hypertexts is enriched with new features and capabilities. However, the end result of any browsing activity is always one or more paths over the hypertext network. Sometimes, though, the discovery of the right path may be a difficult experience. Often, users know the ultimate target of their search, but they do not know how to get there. A tool, called a path verifier, is proposed here that, step by step, monitors the existence of a path to the target. The features of this tool are complicated by the fact that users may non-deterministically jump to and from different link structures within the hypertext network. We interpret this inherent non-determinism by means of probability values attached as labels to the edges of the network graph. The problem of efficiently computing the probability of connecting any two nodes of the hypertext network is discussed in relation to the theory of network reliability.

Giovanni V. Guardalben, Hi. T. Srl
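As background to the network reliability connection: computing exact two-terminal reliability is #P-hard in general, so connection probabilities on larger networks are often estimated by sampling. A minimal Monte Carlo sketch, with an invented function name and toy network (not the paper's method):

```python
import random

def connect_probability(edges, src, dst, trials=20000, seed=1):
    """Estimate the probability that src can reach dst when each
    directed link (u, v) is independently followable with the
    probability attached to it.

    edges: dict mapping (u, v) -> probability label.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Sample one configuration of surviving links.
        alive = [(u, v) for (u, v), p in edges.items() if rng.random() < p]
        # Depth-first search over the surviving links.
        stack, seen = [src], {src}
        while stack:
            node = stack.pop()
            if node == dst:
                hits += 1
                break
            for u, v in alive:
                if u == node and v not in seen:
                    seen.add(v)
                    stack.append(v)
    return hits / trials

# Two node-disjoint routes from A to D; the exact connection
# probability is 1 - (1 - 0.9*0.9)(1 - 0.5*0.5) = 0.8575.
net = {('A', 'B'): 0.9, ('B', 'D'): 0.9, ('A', 'C'): 0.5, ('C', 'D'): 0.5}
print(connect_probability(net, 'A', 'D'))
```

The estimate converges to the exact value as the number of sampled configurations grows, at the usual O(1/sqrt(trials)) Monte Carlo rate.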

Object-Oriented Databases and Deductive Databases: Systems Without Market? Market Without Systems?

Current database research can be characterized as being extremely broad-based, with a wide spectrum of topics pursued all over the world, and with new topics being introduced year after year. Or, formulated somewhat negatively: database research seems these days to be highly unfocussed. This is in marked contrast to the Seventies and early Eighties, when a few topics dominated database research and development and gave coherence to it: relational databases, query languages, distributed databases, transaction management, together with the ensuing implementation techniques and performance considerations. The researchers of that time can take satisfaction from the large number, wide availability and broad acceptance of database technology, and in particular relational technology.

Peter C. Lockemann

Appendix: Poster Sessions

Electronic Chart Representation and Interaction

We present the results of two projects which combine Knowledge-Based Systems (KBS) technology and Geographic Information System (GIS) techniques. The projects are concerned with the interaction and representation of electronic charts. The first is primarily concerned with the production of a navigational KBS, but includes an electronic representation of a nautical chart to provide data from which inferences can be made. However, the associated chart is limited in the amount of data shown and in the efficiency of the interaction. This is largely due to the PROLOG representation and processing medium used for the KBS. The second project, which is concerned specifically with electronic charts, relies on C algorithms for the implementation and interaction. As a result, a greater amount of data can be shown and more efficient interaction achieved.

Frans Coenen, Steve Fawcett, Peter Smeaton, Trevor Bench-Capon

Image Applications and Database Systems: an Approach Using Object-Oriented Systems

With the recent advances in computer technology it is now possible to manage several types of information, such as alphanumerical data from traditional applications or images, sounds and graphics from more sophisticated applications. At present the challenge is to integrate all these different types of data inside multimedia systems [Kerherve92]. In our project, we pay special attention to images and to their integration in database systems. We propose an image data model allowing the description of the different types of data which compose the image. We examine how Object-Oriented Database Systems can support image applications, and we propose an image database architecture offering tools to manage images and related data efficiently in an integrated manner. The functional architecture of our system is divided into four managers: the interface manager, the request manager, the image data manager, and the access and storage manager. Each of these components is in charge of specific tasks that are mandatory for image applications.

Brigitte Kerhervé, Vincent Oria

Efficient Implementation of Deductive Database Based on the Method of Potential Effects

A fundamental issue in Deductive Databases (DDB) is the checking of Integrity Constraints (IC) when a DDB is updated. A method is proposed which provides an efficient mechanism to test IC when the DDB is updated. This method is based on the method of potential effects for information systems [VIL 91], which is an extension of the Lloyd method [LLO87]. There are currently two kinds of checking methods: methods with a potential phase [LLO87], [DEC88], [ASI88], and methods without a potential phase [SAD88], [OLI89]; we propose here a method with a potential phase. The goal is to determine, a priori, the conditions that must be evaluated before carrying out an update in order to preserve the integrity of the DDB. Hierarchical or stratified DDB are assumed, under the assumption of Clark's completion for the treatment of negative information. The DDB is normalized by the transformations proposed by Lloyd and Topor in [LLO84] and [LLO87], with allowed formulas; IC are assumed to be in denial form, and an inconsistency predicate is associated with each IC. The following concepts are used: a basic predicate, defined as a predicate that does not appear in the atoms of the head of any Deductive Rule (DR), and a derived predicate, defined as a predicate that appears in the atoms of the head of some DR. Before applying the method, the DDB must be transformed into a Basic DDB (BDDB), whose characteristics are: it contains no facts of a derived predicate, and every literal in the body of a DR has a variable which coincides with one of those of the head. In the proposed method IC are treated as DR, and the method is divided into two phases which are carried out at compilation time. The first phase determines the potential effects associated with each atom of a basic predicate, called a generic fact, in a finite number of stages; the second defines the set of operations associated with each generic fact, which allows us to calculate the set of generic inconsistency conditions associated with each generic fact. These conditions must not hold when the BDDB is updated if we want to preserve the integrity of the BDDB. To check IC in the update phase for a base fact, we must unify this fact with its generic fact and with each generic condition associated with this generic fact. The sets of generic conditions associated with each inconsistency predicate depend only on the DR and IC; they are independent of the base facts stored in the BDDB. Thus, as long as no DR in the BDDB is updated, we do not need to recalculate the sets of generic conditions after each update. The advantage of this method over the classic method is that the generic conditions are calculated at compilation time. However, if we want to insert or delete a DR, these sets are modified and we must recalculate them and check the integrity of the BDDB.

Salvador Vilena Morales, Emilia Ruiz Martín, Cecilia Delgado Negrete, Buenaventura Clares Rodríguez
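The full potential-effects machinery is not reproduced here, but its end result — conditions precomputed from the rules once, then checked against denial-form constraints at update time — can be illustrated with a much simplified sketch (all names and the example constraint are hypothetical):

```python
def check_update(db, fact, denials):
    """Reject an insertion if it would make a denial-form
    constraint true.

    db: set of ground facts, each a (predicate, args) tuple.
    denials: dict predicate -> list of condition functions; each
    takes (facts, args) and returns True on inconsistency.  These
    play the role of the generic conditions: computed once from
    the rules, not re-derived at every update.
    """
    pred, args = fact
    candidate = db | {fact}
    for cond in denials.get(pred, []):
        if cond(candidate, args):
            raise ValueError(f"update {fact} violates integrity")
    db.add(fact)  # no condition fired: the update is safe

# IC (denial form): inconsistent if someone manages themselves.
denials = {'manages': [lambda facts, a: a[0] == a[1]]}
db = set()
check_update(db, ('manages', ('ann', 'bob')), denials)   # accepted
try:
    check_update(db, ('manages', ('bob', 'bob')), denials)
except ValueError:
    print('rejected')  # the violating fact is never stored
```

The point mirrored from the abstract is that only the conditions attached to the updated fact's predicate are evaluated, rather than re-checking every constraint against the whole database.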

Techniques in Electronic Diagnostic

The need for this study was driven by the desire to realize an intelligent system that will help in carrying out diagnosis in our analog department. No project of this type exists in this area. A couple of knowledge-based works have been done in electronics; however, they are not application-specific. The ongoing work is meant to speed up diagnosis in the electronics domain. It is focused on analog electronics and was detailed to cover all current electronic components available on the market today. It accentuates modern troubleshooting guidelines, and it implements the manufacturers' data sheets and the most common engineering samples. The knowledge to be incorporated in this design is immense, and this quantity of information requires thinking about more effective techniques.

Mike A. Redford

Knowledge Elicitation: Towards its Transparency

Although expert systems (ES) have been in use since the early eighties, there is a lack of a strong theoretical base for handling knowledge elicitation problems. Moreover, the findings of different empirical investigations on the outcomes of ES use are often contradictory. Consequently, there is a requirement in the ES field for theories or explanatory models to formulate propositions, to conduct research and to interpret findings in a coherent way. This article reports briefly on a particular part of a work conducted at the University of Lund to investigate expert system development problems and motives. From the results of a survey conducted between April 1989 and December 1989, seven different knowledge elicitation techniques were identified. The study showed that the most popular elicitation technique used for extracting knowledge from a human expert was the structured interview. In this study an attempt is made to outline a conceptual model of knowledge elicitation and to put forward a number of propositions and suggestions which can contribute to our knowledge of expert systems development. The conceptual model is based on a non-technical prescriptive guide. The model may allow one to conceptualize the KE process in a way that helps to implement an expert system project more successfully. Furthermore, the result of this study can provide a basis for further research in KE, because, looking across many of the reports in which knowledge elicitation methods are described, one may find much advice but few data. One direction that future research might take is to compare software engineering techniques with knowledge engineering techniques.

Mehdi Sagheb-Tehrani

A Unifying Object-Oriented Analysis and Modeling Approach and its Tool Support

Rather independent of whether a classical application system, a database system or an expert system is to be developed, today’s major bottlenecks are at the early phases of development. The difficulty is in collecting, classifying and visualizing all relevant information and in identifying proper objects along with their components and behavior. This problem is intensified as a loosely structured domain of discourse usually has to be analyzed, and no real method exists for this task. A second difficulty is the recognition and reuse of already analyzed and designed components.

Harald Schaschinger

Integrating Content-Oriented Retrieval on Images in an Object-Oriented Knowledge Model

We present the inverted quadtree (IQ) data structure and a spatial reasoning (or knowledge) technique to retrieve images or parts of images. We consider the retrieval problem to be of two types. The first lies in the search for images that contain a certain (or specific) pattern looked for by the user, one that was not defined previously when the image was inserted. The second supports spatial reasoning and is based on an object-oriented method of knowledge representation. We show how this approach enhances the flexibility of the data representation and provides the capability of manipulating a large amount of semantic information.

A. Touir, J. P. Cheiney
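The inverted quadtree itself is not reconstructed here; as background, the plain region-quadtree decomposition that such image indexes build on can be sketched (the image and nested-tuple representation are illustrative choices):

```python
def quadtree(img, x=0, y=0, size=None):
    """Recursively decompose a square binary image (list of lists)
    into a nested tuple: 0 or 1 for a uniform block, otherwise the
    four quadrants in (NW, NE, SW, SE) order.  This is a plain
    region quadtree; the paper's inverted quadtree is an index
    built over such decompositions."""
    if size is None:
        size = len(img)
    vals = {img[y + j][x + i] for j in range(size) for i in range(size)}
    if len(vals) == 1:                  # uniform block: a leaf
        return vals.pop()
    h = size // 2                       # mixed block: recurse
    return (quadtree(img, x, y, h),     quadtree(img, x + h, y, h),
            quadtree(img, x, y + h, h), quadtree(img, x + h, y + h, h))

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 0, 0],
       [0, 1, 0, 0]]
print(quadtree(img))  # (0, 1, (0, 0, 0, 1), 0)
```

Uniform regions collapse to single leaves, which is what makes quadtree-based structures attractive for pattern search over large images.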

Supporting Plastic Selection by the knowledge-based Database Front-End BAKUS

BAKUS is the German abbreviation of “Beratungssystem zur Auswahl von Kunststoffen”, a consulting system to support the selection of plastics. BAKUS represents a knowledge-based front-end to a given database of plastic materials used within Siemens. The selection process is supported not only by an object-oriented, window-based user interface, but also by the representation of knowledge about the selection and application of plastics. Knowledge-based material selection systems developed during the last years are all marked by the integration of a material database with an expert system.

Andrea Wolf

Backmatter
