
About this book

J.UCS is the electronic journal that covers all areas of computer science. The high quality of all accepted papers is ensured by a strict review process and an international editorial board of distinguished computer scientists. The online journal J.UCS is a prototype for modern electronic publishing. Distributed via the Internet, it supports all the search and navigation tools of advanced online systems. This first annual print and CD-ROM archive edition contains all articles published online in J.UCS during 1995. It allows easy and durable access without logging onto the Internet. Uniform citation of papers is guaranteed by identical page numbering and layout of all versions.
J.UCS is based on HyperWave (formerly Hyper-G), a networked hypermedia information system compatible with other systems.

Table of contents

Frontmatter

Issue 1

Managing Editor’s Column

Hermann Maurer, Cristian Calude, Arto Salomaa

High-radix Division with Approximate Quotient-digit Estimation

High-radix division, developing several quotient bits per clock, is usually limited by the difficulty of generating accurate high-radix quotient digits. This paper describes techniques which allow quotient digits to be inaccurate, but then refine the result. We thereby obtain dividers with slightly reduced performance, but with much simplified logic. For example, a nominal radix-64 divider can generate an average of 4.5 to 5.5 quotient bits per cycle with quite simple digit estimation logic. The paper investigates the technique for radices of 8, 16, 64 and 256, including various qualities of digit estimation, and operation with restricted sets of divisor multiples.

Peter Fenwick
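The estimate-then-refine idea can be sketched in software. The sketch below is illustrative only (the paper's contribution is cheap hardware digit-estimation logic, not this loop): it develops one radix-64 digit per step from a divisor truncated to its top four bits, then corrects the overestimated digit against the full divisor.

```python
def to_digits(n, radix):
    """Digits of n in the given radix, most significant first."""
    digs = []
    while n:
        n, d = divmod(n, radix)
        digs.append(d)
    return digs[::-1] or [0]

def high_radix_divide(n, d, radix=64):
    """Develop one radix-r quotient digit per step from an approximate
    (truncated) divisor, then correct the estimate against the full
    divisor -- a software sketch of 'estimate cheaply, then refine'."""
    assert n >= 0 and d > 0
    shift = max(d.bit_length() - 4, 0)
    d_short = (d >> shift) << shift        # d_short <= d, so digits only overestimate
    q_digits, rem = [], 0
    for dig in to_digits(n, radix):
        rem = rem * radix + dig
        q = rem // d_short                 # cheap approximate digit
        rem -= q * d
        while rem < 0:                     # refinement step
            q -= 1
            rem += d
        q_digits.append(q)
    quot = 0
    for q in q_digits:
        quot = quot * radix + q
    return quot, rem
```

Because the truncated divisor never exceeds the true one, the correction only ever decrements the digit, which is what keeps the refinement logic simple.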

On Implementing EREW Work-Optimally on Mesh of Trees

We show how to implement an ℓ1 × n log n-processor EREW PRAM work-optimally on a 2-dimensional n-sided mesh of trees, consisting of n processors, n memory modules, and O(n²) nodes. Similarly, we prove that an ℓ2 × n² log n-processor EREW PRAM can be implemented work-optimally on a 3-dimensional n-sided mesh of trees. By the work-optimality of implementations we mean that the expected routing time of PRAM memory requests is O(1) per simulated PRAM processor with high probability. Experiments show that for relatively small ℓ1 and ℓ2 the cost per simulated PRAM processor is 1.5–2.5 in the 2-dimensional case, and 2–3 in the 3-dimensional case. If at each step at most an ℓ'th of the PRAM processors make a reference to the shared memory, then the simulation cost is approximately 1. We also compare our work-optimal simulations to those proposed for coated meshes.

Ville Leppänen

Levels of Anonymity

In this paper we make a first attempt at systematically investigating the levels of anonymity required in networked computer systems: we feel it is often overlooked that beyond such obvious cases as identification by means of a password or fully "anonymous use", there are many other levels of anonymity, identification and authenticity necessary in various applications.

Bill Flinn, Hermann Maurer

What Is a Random String?

Chaitin’s algorithmic definition of random strings—based on the complexity induced by self-delimiting computers—is critically discussed. We show that Chaitin’s model satisfies many natural requirements related to randomness, so it can be considered an adequate model for finite random objects. It is a better model than the original (Kolmogorov) proposal. Finally, some open problems are discussed.

Cristian Calude

Grammars Based on the Shuffle Operation

We consider generative mechanisms producing languages by starting from a finite set of words and shuffling the current words with words in given sets, depending on certain conditions. Namely, regular and finite sets are given for controlling the shuffling: strings are shuffled only to strings in associated sets. Six classes of such grammars are considered, with the shuffling being done on a leftmost position, on a prefix, arbitrarily, globally, in parallel, or using a maximal selector. Most of the corresponding six families of languages, obtained for finite, respectively for regular selection, are found to be incomparable. The relations of these families with Chomsky language families are briefly investigated.

Gheorghe Păun, Grzegorz Rozenberg, Arto Salomaa
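The shuffle operation itself is compact enough to state in code. A minimal sketch of the plain, unconstrained shuffle of two words; the paper layers finite/regular selectors and the six derivation modes on top of this basic operation.

```python
def shuffle(u, v):
    """All interleavings of u and v that keep the letter order of each
    word -- the basic shuffle operation the grammars build on."""
    if not u or not v:
        return {u + v}
    return ({u[0] + w for w in shuffle(u[1:], v)} |
            {v[0] + w for w in shuffle(u, v[1:])})
```

For words of lengths m and n there are at most C(m+n, m) interleavings, fewer when letters repeat.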

Issue 2

Managing Editor’s Column

Hermann Maurer, Cristian Calude, Arto Salomaa

A Scalable Architecture for Maintaining Referential Integrity in Distributed Information Systems

One of the problems that we experience with today’s most widespread Internet Information Systems (like WWW or Gopher) is the lack of support for maintaining referential integrity. Whenever a resource is (re)moved, dangling references from other resources may occur. This paper presents a scalable architecture for automatic maintenance of referential integrity in large (thousands of servers) distributed information systems. A central feature of the proposed architecture is the p-flood algorithm, which is a scalable, robust, prioritizable, probabilistic server-server protocol for efficient distribution of update information to a large collection of servers. The p-flood algorithm is now implemented in the Hyper-G system, but may in principle also be implemented as an add-on for existing WWW and Gopher servers.

Frank Kappe
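The flavour of probabilistic server-to-server update distribution can be illustrated with a toy gossip loop. This is a generic sketch, not the p-flood algorithm itself; the ordering, priorities, and robustness guarantees that make p-flood work are the substance of the paper, and the names and parameters below are made up.

```python
import random

def probabilistic_flood(neighbors, start, p=0.7, seed=1):
    """Toy probabilistic flood: each newly informed server forwards the
    update to each neighbour independently with probability p, round by
    round, until no new server learns of it."""
    rng = random.Random(seed)
    informed, frontier = {start}, [start]
    while frontier:
        nxt = []
        for u in frontier:
            for v in neighbors[u]:
                if v not in informed and rng.random() < p:
                    informed.add(v)
                    nxt.append(v)
        frontier = nxt
    return informed
```

With p = 1 this degenerates to plain flooding; a real protocol must also cope with the servers a lossy pass misses.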

A Variant of Team Cooperation in Grammar Systems

We prove that grammar systems with (prescribed or free) teams (of constant size at least two or arbitrary size) working as long as they can do, characterize the family of languages generated by (context-free) matrix grammars with appearance checking; in this way, the results in [Păun, Rozenberg 1994] are completed and improved.

Rudolf Freund, Gheorghe Păun

On Four Classes of Lindenmayerian Power Series

We show that nonzero axioms add to the generative capacity of Lindenmayerian series generating systems. On the other hand, if nonzero axioms are allowed, nonterminals do not, provided that only quasiregular series are considered.

Juha Honkala, Werner Kuich

The Relationship Between Propagation Characteristics and Nonlinearity of Cryptographic Functions

The connections among the various nonlinearity criteria are currently an important topic in the area of designing and analyzing cryptographic functions. In this paper we show a quantitative relationship between propagation characteristics and nonlinearity, two critical indicators of the cryptographic strength of a Boolean function. We also present a tight lower bound on the nonlinearity of a cryptographic function that has propagation characteristics.

Jennifer Seberry, Xian-Mo Zhang, Yuliang Zheng
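One of the two indicators related here, nonlinearity, is a direct computation from the Walsh-Hadamard spectrum; a small sketch for a Boolean function given as a truth table:

```python
def nonlinearity(tt):
    """Nonlinearity of a Boolean function (truth table of length 2**n):
    the Hamming distance to the nearest affine function, computed as
    2**(n-1) - max|W_f|/2 via a fast Walsh-Hadamard transform."""
    n2 = len(tt)
    w = [1 - 2 * b for b in tt]          # map 0/1 to +1/-1
    h = 1
    while h < n2:                        # in-place fast Walsh-Hadamard transform
        for i in range(0, n2, 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return n2 // 2 - max(abs(v) for v in w) // 2
```

For two variables, AND achieves the maximum nonlinearity 1, while any affine function scores 0.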

On Completeness of Pseudosimple Sets

The paper contains completeness criteria for pseudosimple sets. These criteria are constructed using effectivization of the definitions as well as extensionally bounded functions.

Vadim Bulitko

Issue 3

Managing Editor’s Column

Vol. 1, No. 3; March 28, 1995
Hermann Maurer, Cristian Calude, Arto Salomaa

Combining Concept Mapping and Adaptive Advice to Teach Reading Comprehension

When driven by simple models of information processing, reading instruction focuses on basic decoding skills centering on words and sentences. Factoring in advanced cognitive studies adds at least two more dimensions. First, readers must learn a collection of strategies for constructing meaning from text. Second, and most importantly, readers must develop enough situational awareness to diagnose a text and know which strategy to deploy. Teaching intellectual crafts that involve not only base-line performative skills but also a repertoire of problem-solving heuristics, and the metacognitive maturity to orchestrate multi-leveled activities, works well in a master-apprentice model. However, one-on-one instruction is far too labor-intensive to be commonplace in the teaching of reading. This paper describes a computerized learning environment for teaching the conceptual patterns of critical literacy. While the full implementation of the software treats both reading and writing, this paper covers only the reading aspects of R-WISE (Reading and Writing in a Supportive Environment).

Patricia A. Carlson, Veronica Larralde

Modular Range Reduction: a New Algorithm for Fast and Accurate Computation of the Elementary Functions

A new range reduction algorithm, called Modular Range Reduction (MRR), briefly introduced by the authors in [Daumas et al. 1994], is analyzed in depth. It is used to reduce the arguments of exponential and trigonometric function algorithms to the small range for which the algorithms are valid. MRR reduces the arguments quickly and accurately. A fast hardwired implementation of MRR operates in time O(log(n)), where n is the number of bits of the binary input value. For example, with MRR it becomes possible to compute the sine and cosine of a very large number accurately. We propose two possible architectures implementing this algorithm.

Marc Daumas, Christophe Mazenc, Xavier Merrheim, Jean-Michel Muller
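What a range-reduction step must deliver can be shown in a few lines of exact rational arithmetic. This is not MRR (which achieves the O(log n) hardware time); it is a plain software sketch of the same input/output contract: reduce x modulo π/2 with enough guard digits of π that the result is accurate even for large arguments.

```python
import math
from fractions import Fraction

# pi to 80 decimal places as an exact rational: ample guard digits for double inputs
PI = Fraction("3.14159265358979323846264338327950288419716939937510"
              "582097494459230781640628620899")
HALF_PI = PI / 2

def reduce_argument(x):
    """Return (r, k mod 4) with x = k*(pi/2) + r and |r| <= pi/4,
    computed exactly, then rounded once at the end."""
    xf = Fraction(x)
    k = math.floor(xf / HALF_PI + Fraction(1, 2))   # nearest integer multiple
    return float(xf - k * HALF_PI), k % 4

def sin_via_reduction(x):
    # sin(k*pi/2 + r) cycles through sin, cos, -sin, -cos with k mod 4
    r, quad = reduce_argument(x)
    return (math.sin(r), math.cos(r), -math.sin(r), -math.cos(r))[quad]
```

The exact-rational detour is what a careless single-precision subtraction of k·π/2 gets wrong for large x; MRR gets it right in hardware without the detour.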

Special Cases of Division

This paper surveys algorithms and circuits for integer division in special cases. These include division by constants, small divisors, exact divisors, and cases where the divisor and the number base have a special relationship. The related operation of remainder is also covered. Various prior techniques are treated in a common framework. Worked examples are provided together with examples of practical application.

R. W. Doran
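One classic special case of this kind is division by a known constant via multiplication by a precomputed reciprocal. For unsigned 32-bit values divided by 3, the magic constant is 0xAAAAAAAB = ceil(2³³/3), and one multiply plus one shift gives the exact quotient:

```python
def div3_u32(x):
    """Divide an unsigned 32-bit value by the constant 3 with one multiply
    and one shift. 0xAAAAAAAB = ceil(2**33 / 3), so (x * 0xAAAAAAAB) >> 33
    equals x // 3 for all 0 <= x < 2**32: the error term x/(3 * 2**33) is
    too small to push the result across the next integer."""
    assert 0 <= x < 2**32
    return (x * 0xAAAAAAAB) >> 33
```

Compilers apply exactly this transformation when the divisor is a compile-time constant, since a multiply is far cheaper than a general divide.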

Bringing ITS to the Marketplace: A Successful Experiment in Minimalist Design

Intelligent Tutoring Systems (ITS) have proven to be effective tools for teaching and training. However, ITSs have not become common in industrial and organisational settings, in part because their complexity has proven difficult to manage outside of the research lab. Minimalist ITSs are an attempt to bridge the gap between research and practical application; they simplify research techniques while striving to maintain as much pedagogic intelligence as possible. This paper describes one such system, SWIFT, that is an example of how a minimalist ITS can be delivered as a commercial product. We outline some of the issues facing designers of a minimalist system, and describe the ways that research techniques have been incorporated into four modules of SWIFT: adaptive testing, course planning, guidance, and diagnosis.

Carl Gutwin, Marlene Jones, Patrick Brackett, Kim Massie Adolphe

Halting probability amplitude of quantum computers

The classical halting probability Ω introduced by Chaitin is generalized to quantum computations.

K. Svozil

Issue 4

Managing Editor’s Column

Vol. 1, No. 4; April 28, 1995
Hermann Maurer, Cristian Calude, Arto Salomaa

The Hyper-G Network Information System

As the Internet continues to experience exponential rates of growth, attention is shifting away from mainstream network services such as electronic mail and file transfer to more interactive information services. Current network information systems, whilst extremely successful, run into problems of fragmentation, consistency, scalability, and loss of orientation. The development of second-generation network information systems such as Hyper-G can help overcome these limitations. Of particular note are Hyper-G’s tightly-coupled structuring, linking, and search facilities, its projection of a seamless information space across server boundaries with respect to each of these facilities, and its support for multiple languages. The Harmony client for Hyper-G utilises two and three-dimensional visualisations of the information space and couples location feedback to search and link browsing operations, in order to reduce the likelihood of disorientation. This paper presents a comprehensive overview of Hyper-G and Harmony.

Keith Andrews, Frank Kappe, Hermann Maurer

About WWW

The World-Wide Web is the most talked-about distributed information system today. This paper does not touch on its workings; it tries to give a brief history and outlines the feelings provoked by the explosive adoption in all circles of WWW as the first vehicle on the Global Information Infrastructure.

R. Cailliau

Electronic Publishing

Publishing the results of scientific research is the basis of the advancement of science, technology and medicine. Over the past decades, traditional scientific publishing has been facing ever increasing difficulties because of the continuously growing number of publications, the specialization of science, and the rising costs of distribution, acquisition and archiving, and with them the danger of unavailability and/or inaccessibility.

Dietrich Goetze

Evolution of Internet Gopher

This paper describes how the Internet Gopher system has evolved since its first release in 1991 and how Internet Gopher relates to other popular Internet information systems, and discusses current problems and future directions for the Internet Gopher system.

Mark P. McCahill, Farhad X. Anklesaria

WAIS and Information Retrieval on the Internet

WAIS (Wide Area Information Servers), a development of Thinking Machines Corporation, turned out to be one of the main search engines in connection with the World Wide Web (WWW). This article gives a short overview of WAIS, its history, its basics and some connected developments.

Helmut Mülner

Issue 5

Managing Editor’s Column

Vol. 1, No. 5; May 28, 1995
Hermann Maurer, Cristian Calude, Arto Salomaa

Conditional Tabled Eco-Grammar Systems versus (E)T0L Systems

We investigate the generative capacity of the so-called conditional tabled eco-grammar systems (CTEG). They are a variant of eco-grammar systems, generative mechanisms recently introduced as models of the interplay between environment and agents in eco-systems. In particular, we compare the power of CTEG systems with that of programmed and of random context T0L systems and with that of ET0L systems. CTEG systems with one agent only (and without extended symbols) are found to be surprisingly powerful (they can generate non-ET0L languages). Representation theorems for ET0L and for recursively enumerable languages in terms of CTEG languages are also presented.

Erzsébet Csuhaj-Varjú, Gheorghe Păun, Arto Salomaa

HOME: an Environment for Hypermedia Objects

In this paper, we present HOME, a new environment for distributed hypermedia. We mainly concentrate on the server side, and provide access to World-Wide Web clients through a gateway mechanism. Data and metadata are strictly separated in the distributed HOME server. The architecture is based on a layered approach with separate layers for raw data, multimedia characteristics and hypermedia structure. We briefly present some of the implementation aspects and emphasise distinctive characteristics of HOME. We conclude with a comparison with related research and our plans for the future.

Erik Duval, Henk Olivié, Piers O’Hanlon, David G. Jameson

Lexical Analysis with a Simple Finite-Fuzzy-Automaton Model

Many fuzzy automaton models have been introduced in the past. Here, we discuss two basic finite fuzzy automaton models, the Mealy and Moore types, for lexical analysis. We show that there is a remarkable difference between the two types. We consider that the latter is a suitable model for implementing lexical analysers. Various properties of fuzzy regular languages are reviewed and studied. A fuzzy lexical analyzer generator (FLEX) is proposed.

Alexandru Mateescu, Arto Salomaa, Kai Salomaa, Sheng Yu
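The max-min semantics underlying finite fuzzy automata is compact enough to sketch. The step function below is a generic illustration of that machinery, not the paper's lexical-analysis model (which carries more structure); the states, letters, and weights in the example are made up.

```python
def run_fuzzy_automaton(delta, start, word):
    """Run a finite fuzzy automaton under the usual max-min semantics:
    the current 'state' is a fuzzy set {state: degree in [0,1]}; reading a
    letter takes, for each target state, the max over incoming transitions
    of min(current degree, transition weight). delta maps (state, letter)
    to a list of (next_state, weight) pairs."""
    current = dict(start)
    for a in word:
        nxt = {}
        for q, mu in current.items():
            for q2, w in delta.get((q, a), []):
                nxt[q2] = max(nxt.get(q2, 0.0), min(mu, w))
        current = nxt
    return current
```

A token's membership degree is then the degree attached to whichever accepting state survives the run.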

Software Patents and The Internet: Lessons from the Compuserve/Unisys Graphics Interchange Format Case Study

The attempt by Unisys to obtain royalties from the Lempel-Ziv-Welch Graphics Interchange Format specification through Compuserve has wide implications for the Internet. Increased activity in the US software patents area is likely to result in damage to progress of the software arts and the Internet, and to generate upscaled protest from Internet users. The LZW GIF case highlights the Internet culture in favour of free and unfettered development. Clarification of this important principle will have a major effect on the future of the Internet.

Jenny Shearer, Arnould Vermeer

GAC — the Criterion for Global Avalanche Characteristics of Cryptographic Functions

We show that some widely accepted criteria for cryptographic functions, including the strict avalanche criterion (SAC) and the propagation criterion, have various limitations in capturing properties of vital importance to cryptographic algorithms, and propose a new criterion called GAC to measure the global avalanche characteristics of cryptographic functions. We also introduce two indicators related to the new criterion, one forecasting the sum-of-squares and the other the absolute avalanche characteristics of a function. Lower and upper bounds on the two indicators are derived, and two methods are presented to construct cryptographic functions that achieve nearly optimal global avalanche characteristics.

Xian-Mo Zhang, Yuliang Zheng
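Both indicators are short computations over the autocorrelation function Δ(α) = Σ_x (−1)^(f(x)⊕f(x⊕α)); a direct, exponential-time sketch:

```python
def gac_indicators(tt):
    """The two GAC indicators for a Boolean function given as a truth table
    of length 2**n: the sum-of-squares indicator, sum over all alpha of
    Delta(alpha)**2, and the absolute indicator, max over nonzero alpha of
    |Delta(alpha)|, where Delta is the autocorrelation of f."""
    n2 = len(tt)
    s = [1 - 2 * b for b in tt]                       # map 0/1 to +1/-1
    delta = lambda a: sum(s[x] * s[x ^ a] for x in range(n2))
    return (sum(delta(a) ** 2 for a in range(n2)),
            max(abs(delta(a)) for a in range(1, n2)))
```

A bent function (such as AND on two variables) attains the ideal values: sum-of-squares 2^(2n) and absolute indicator 0, while an affine function is worst on both counts.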

Issue 6

Managing Editor’s Column

Vol. 1, No. 6; June 28, 1995
Hermann Maurer, Cristian Calude, Arto Salomaa

A Translation of the Pi-Calculus Into MONSTR

A translation of the π-calculus into the MONSTR graph rewriting language is described and proved correct. The translation illustrates the heavy cost in practice of faithfully implementing the communication primitive of the π-calculus and similar process calculi. It also illustrates the convenience of representing an evolving network of communicating agents directly within a graph manipulation formalism, both because the necessity to use delicate notions of bound variables and of scopes is avoided, and also because the standard model of graphs in set theory automatically yields a useful semantics for the process calculus. The correctness proof illustrates many features typically encountered in reasoning about graph rewriting systems, and particularly how serialisation techniques can be used to reorder an arbitrary execution into one having stated desirable properties.

R. Banach, J. Balázs, G. Papadopoulos

Distributed Caching in Networked File Systems

Changing relative performance of processors, networks, and disks makes it necessary to reconsider algorithms using these three resources. As networks get faster and less congested topologies emerge, it becomes important to use network resources more aggressively to obtain good performance. Substitution of local disk accesses by accesses to remote memory can lead to better balanced resource usage and thus to faster systems. In this work we address the issue of file caching in a networked file system configuration. Distributed block-level in-memory caches are considered. We show that carefully constructed distributed concepts can lead to lower server load and better overall system performance than centralized concepts. Oversimplification, although aimed at gaining performance for single components, may deteriorate overall performance as a result of unbalanced resource usage.

Artur Klauser, Reinhard Posch

From Personal Computer to Personal Assistant

Much of the confusion that surrounds electronic personal assistants arises from the open-ended complexity of their development. In this paper we categorise some of their more common uses before suggesting several thought-provoking extensions.

Jennifer Lennon, Arnould Vermeer

Microworlds for teaching concepts of object oriented programming

We present two examples of microworlds built into the Smalltalk environment for the purpose of teaching the main concepts of object oriented programming (OOP) and of the Smalltalk programming language. The distinguishing feature of our microworlds is that each of them presents the student with a sequence of environments. These environments introduce one OOP concept after another, and disclose the Smalltalk environment and language in a step-by-step fashion. The starting environment does not require any programming and does not encourage the user to use Smalltalk tools, while the last environment must be programmed in Smalltalk and discloses the major Smalltalk tools. The intended use of our microworlds is for the introductory part of a course on OOP, to be followed by a detailed presentation of the language. An extension of the presented approach would make the method suitable for teaching basics of computer programming in a computer literacy course.

Ivan Tomek

Issue 7

Managing Editor’s Column

Vol. 1, No. 7; July 28, 1995
Hermann Maurer, Cristian Calude, Arto Salomaa

Introduction to the Special Issue “Real Numbers and Computers”

This special issue contains a selection of papers presented during the international conference “Real Numbers and Computers”, Saint-Étienne, France, April 1995.

Jean-Claude Bajard, Dominique Michelucci, Jean-Michel Moreau, Jean-Michel Muller

A High Radix On-line Arithmetic for Credible and Accurate Computing

The result of a simple floating-point computation can be in great error, even though no error is signaled, no coding mistakes are in the program, and the computer hardware is functioning correctly. This paper proposes a set of instructions appropriate for a general purpose microprocessor that can be used to improve the credibility and accuracy of numerical computations. Such instructions provide direct hardware support for monitoring events which may threaten computational integrity, implementing floating-point data types of arbitrary precision, and repeating calculations with greater precision. These useful features are obtained by the efficient implementation of high radix on-line arithmetic. The prevalence of super-scalar and VLIW processors makes this approach especially attractive.

Thomas Lynch, Michael J. Schulte

Estimation of Round-Off Errors on Several Computers Architectures

Numerical validation of computed results in scientific computation is an essential problem on sequential as well as parallel architectures. The probabilistic approach is the only one that allows one to estimate the round-off error propagation of the floating point arithmetic on computers. We begin by recalling the basics of the CESTAC method (Contrôle et Estimation STochastique des Arrondis de Calculs). Then, the use of the CADNA software (Control of Accuracy and Debugging for Numerical Applications) is presented for numerical validation on sequential architectures. On parallel architectures, we present two solutions for the control of round-off errors. The first one is the combination of CADNA and the PVM library. This solution allows one to control the round-off errors of parallel codes with the same architecture. It does not need more processors than the classical parallel code. The second solution is represented by the RAPP prototype. In this approach, the CESTAC method is directly parallelized. It works both on sequential and parallel programs. The essential difference is that this solution requires more processors than the classical codes. These different approaches are tested on sequential and parallel programs for matrix multiplication.

Jalil Asserrhine, Jean-Marie Chesneaux, Jean-Luc Lamotte
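The core idea of the CESTAC method — run the computation several times under randomly perturbed rounding and keep only the digits the samples agree on — can be mimicked in plain software by jittering the last bit of each intermediate result. This is a deliberately simplified sketch: the real method perturbs the rounding mode itself and applies a Student-t confidence factor, neither of which is reproduced here.

```python
import math
import random

def noisy(x, rng):
    # randomly perturb the last bit: a software stand-in for random rounding
    return x * (1.0 + rng.choice((-1.0, 1.0)) * 2.0 ** -53)

def common_digits(expr, runs=20, seed=42):
    """Estimate how many decimal digits of expr's result are reliable,
    from the spread of the randomly perturbed reruns."""
    rng = random.Random(seed)
    rs = [expr(lambda x: noisy(x, rng)) for _ in range(runs)]
    mean = sum(rs) / runs
    spread = math.sqrt(sum((r - mean) ** 2 for r in rs) / (runs - 1))
    if spread == 0.0:
        return 15.9                  # all samples identical: full double precision
    if mean == 0.0:
        return 0.0
    return max(0.0, math.log10(abs(mean) / spread))

# a well-conditioned sum keeps its digits; catastrophic cancellation keeps almost none
benign = lambda r: r(r(1.0 / 3.0) + r(2.0 / 3.0))
cancel = lambda r: r(r(1.0e16 + 3.14) - r(1.0e16))
```

The cancellation-prone expression scatters across reruns while the benign one barely moves, which is exactly the signal CADNA turns into a count of significant digits.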

Round-off error propagation in the solution of the heat equation by finite differences

The effect of round-off errors on the numerical solution of the heat equation by finite differences can be theoretically determined by computing the mean error at each time step. The floating point error propagation is then theoretically linear in time. The experimental simulations agree with this result for rounding towards zero. However, the results are not so good for rounding to nearest. The theoretical formulas provide an approximation of the experimental round-off errors. In these formulas the mean value of the assignment operator is used; consequently, their reliability depends on the arithmetic used.

Fabienne Jézéquel
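The scheme under study is the standard explicit finite-difference update; a minimal sketch with fixed (Dirichlet) boundary values, assuming a uniform grid with the diffusion number folded into a single coefficient:

```python
def heat_explicit(u, alpha, steps):
    """Explicit finite-difference scheme for the 1-D heat equation:
    u_i <- u_i + alpha * (u_{i-1} - 2*u_i + u_{i+1}) at the interior
    points, boundary values held fixed; stable for alpha <= 1/2."""
    u = list(u)
    for _ in range(steps):
        u = ([u[0]] +
             [u[i] + alpha * (u[i - 1] - 2.0 * u[i] + u[i + 1])
              for i in range(1, len(u) - 1)] +
             [u[-1]])
    return u
```

Each time step performs a handful of floating-point operations per grid point, which is why the accumulated round-off grows (in the mean) linearly with the number of steps.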

LCF: A Lexicographic Binary Representation of the Rationals

A binary representation of the rationals derived from their continued fraction expansions is described and analysed. The concepts “adjacency”, “mediant” and “convergent” from the literature on Farey fractions and continued fractions are suitably extended to provide a foundation for this new binary representation system. Worst case representation-induced precision loss for any real number by a fixed length representable number of the system is shown to be at most 19% of bit word length, with no precision loss whatsoever induced in the representation of any reasonably sized rational number. The representation is supported by a computer arithmetic system implementing exact rational and approximate real computations in an on-line fashion.

Peter Kornerup, David W. Matula
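The continued-fraction expansion underlying LCF is just the Euclidean algorithm. A sketch of the expansion and its exact inverse; the lexicographic bit encoding of these terms is the paper's contribution and is not reproduced here.

```python
from fractions import Fraction

def continued_fraction(p, q):
    """Continued-fraction terms of p/q, via the Euclidean algorithm."""
    terms = []
    while q:
        terms.append(p // q)
        p, q = q, p % q
    return terms

def evaluate(terms):
    """Fold the terms back into the exact rational they represent."""
    x = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        x = a + 1 / x
    return x
```

Truncating the term list yields the convergents, which is what makes a prefix of such a representation a meaningful approximation.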

Exact Statistics and Continued Fractions

In this paper we investigate an extension to Vuillemin’s work on continued fraction arithmetic [Vuillemin 87, Vuillemin 88, Vuillemin 90], that permits it to evaluate the standard statistical distribution functions. By this we mean: the normal distribution, the χ²-distribution, the t-distribution, and, in particular, the F-distribution. The underlying representation of non-rational computable real numbers is also as continued fractions, in the style of Vuillemin. This permits arbitrary accuracy over a range of values. The number of terms of a continued fraction that are used by the implementation is dynamically controlled by the accuracy demanded of the final answer. The use of a modern lazy functional language, Haskell, has considerably eased the programming task. Two features are of note. Firstly, the type-class structure allows one to augment the varieties of numbers supported by the language. Secondly, the laziness inherent in Haskell’s semantics makes it very straightforward to dynamically control the accuracy of the intermediate evaluations.

David Lester

On directed interval arithmetic and its applications

We discuss two closely related interval arithmetic systems: i) the system of directed (generalized) intervals studied by E. Kaucher, and ii) the system of normal intervals together with the outer and inner interval operations. A relation between the two systems becomes feasible due to the introduction of special notations and a so-called normal form of directed intervals. As an application, it is shown that both interval systems can be used for the computation of tight inner and outer inclusions of ranges of functions, and consequently for the development of software for automatic computation of ranges of functions.

Svetoslav Markov

MSB-First Digit Serial Arithmetic

We develop a formal account of digit serial number representations by describing them as strings from a language. A prefix of a string represents an interval approximating a number by enclosure. Standard on-line representations are shown to be a special case of the general digit serial representations. Matrices are introduced as representations of intervals and a finite-state transducer is used for mapping strings into intervals. Homographic and bi-homographic functions are used for representing basic arithmetic operations on digit serial numbers, and finally a digit serial representation of floating point numbers is introduced.

Asger Munk Nielsen, Peter Kornerup

Some Algorithms Providing Rigorous Bounds for the Eigenvalues of a Matrix

Three algorithms providing rigorous bounds for the eigenvalues of a real matrix are presented. The first is an implementation of the bisection algorithm for a symmetric tridiagonal matrix using IEEE floating-point arithmetic. The other two use interval arithmetic with directed rounding and are deduced from the Jacobi method for a symmetric matrix and the Jacobi-like method of Eberlein for an unsymmetric matrix.

Raymond Pavec
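The core of the first algorithm — counting eigenvalues below a shift with a Sturm sequence, then bisecting on that count — looks like this in plain floating point. The paper's point is performing the count with carefully directed IEEE rounding so the bracket is rigorous; this sketch does not attempt that.

```python
def count_eigs_below(d, e, x):
    """Sturm count: number of eigenvalues < x of the symmetric tridiagonal
    matrix with diagonal d and off-diagonal e (len(e) == len(d) - 1),
    via the signs of the sequence of leading pivots of T - x*I."""
    count, q = 0, 1.0
    for i in range(len(d)):
        q = d[i] - x - (e[i - 1] ** 2 / q if i else 0.0)
        if q == 0.0:
            q = -1e-300            # nudge off the singularity
        if q < 0.0:
            count += 1
    return count

def eig_bisect(d, e, k, lo, hi, tol=1e-12):
    """Bracket the k-th smallest eigenvalue (0-based) by bisection on the
    Sturm count, starting from an interval [lo, hi] known to contain it."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if count_eigs_below(d, e, mid) > k:
            hi = mid
        else:
            lo = mid
    return lo, hi
```

The test matrix below (diagonal 2, off-diagonal −1) has known eigenvalues 2 − √2, 2, and 2 + √2.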

On a Formally Correct Implementation of IEEE Computer Arithmetic

IEEE floating-point arithmetic standards 754 and 854 reflect the present state of the art in designing and implementing floating-point arithmetic units. A formalism applied to a standard non-trapping mode floating-point system shows incorrectness of some numeric and non-numeric results. A software emulation of decimal floating-point computer arithmetic supporting an enhanced set of exception symbols is reported. Some implementation details, discussion of some open questions about utility and consistency of the implemented arithmetic with the IEEE Standards are provided. The potential benefit for computations with infinite symbolic elements is outlined.

Evgenija D. Popova

Issue 8

Managing Editor’s Column

Vol. 1, No. 8; August 28, 1995
Hermann Maurer, Cristian Calude, Arto Salomaa

BROCA: A Computerized Environment for Mediating Scientific Reasoning through Writing

This paper describes a work-in-progress: a computerized learning environment for teaching the conceptual patterns of scientific reasoning. BROCA (Basic Research, Observations, Critical Analysis) is theory-driven, combining two very powerful conceptual models of thinking. The first — drawn from cognitive psychology and information theory — focuses on the mental manipulations by which data becomes information and information becomes knowledge. The second theoretical construct comes from rhetoric and describes the intellectual activities carried out in prewriting, drafting, and revision by an expert writer. As an interactive “cognitive tool,” BROCA provides scaffolding (through visual algorithms and adaptive prompting) to help a fledgling thinker practice the robust patterns of scientific reasoning.

Patricia A. Carlson

Differential Ziv-Lempel Text Compression

We describe a novel text compressor which combines Ziv-Lempel compression and arithmetic coding with a form of vector quantisation. The resulting compressor resembles an LZ-77 compressor, but with no explicit phrase lengths or coding for literals. An examination of the limitations on its performance leads to some predictions of the limits of LZ-77 compression in general, showing that the LZ-77 text compression technique is already very close to the limits of its performance.

Peter Fenwick
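For reference, a bare-bones LZ-77 parser and its inverse. This toy keeps explicit literals and (offset, length) pairs and does no entropy coding, i.e. exactly the parts the paper's compressor replaces with arithmetic coding and quantisation of phrases.

```python
def lz77_compress(data, window=4096, min_match=3):
    """Greedy LZ-77 parse of a bytes object: emit (offset, length) phrases
    against a sliding window, and single literal bytes when no match of at
    least min_match is found. Quadratic-time didactic version."""
    i, out = 0, []
    while i < len(data):
        best_len = best_off = 0
        for j in range(max(0, i - window), i):
            l = 0
            while i + l < len(data) and data[j + l] == data[i + l]:
                l += 1
            if l > best_len:
                best_len, best_off = l, i - j
        if best_len >= min_match:
            out.append((best_off, best_len))
            i += best_len
        else:
            out.append(data[i])
            i += 1
    return out

def lz77_decompress(tokens):
    buf = bytearray()
    for t in tokens:
        if isinstance(t, tuple):                  # (offset, length) phrase
            off, ln = t
            for _ in range(ln):
                buf.append(buf[-off])             # byte-by-byte: overlapping copies work
        else:
            buf.append(t)                         # literal byte
    return bytes(buf)
```

Note that a match may run past the current position (offset smaller than length); the byte-at-a-time copy in the decompressor handles that self-referential case.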

Bounds for Heights of Integer Polynomial Factors

We describe new methods for the estimation of the bounds of the coefficients of proper divisors of integer polynomials in one variable. There exist classes of polynomials for which our estimates are better than those obtained using the polynomial measure or the 2-weighted norm.

Laurenţiu Panaitopol, Doru Ştefănescu
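For comparison, a classical Landau-Mignotte-type bound of the kind such estimates are measured against can be computed in one line. The form below is simplified (it assumes a monic factor; the exact constant and leading-coefficient factor vary across formulations in the literature):

```python
import math

def mignotte_bound(coeffs, m):
    """Simplified Landau-Mignotte-type bound: the coefficients of a
    degree-m monic factor of the integer polynomial with the given
    coefficient list are bounded in absolute value by 2**m times the
    2-norm of the polynomial."""
    return (2 ** m) * math.sqrt(sum(c * c for c in coeffs))
```

For x² − 1 and its degree-1 factors x ± 1 (height 1), the bound evaluates to 2·√2 ≈ 2.83, illustrating the typical slack the paper's sharper estimates reduce.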

A Robust Affine Matching Algorithm Using an Exponentially Decreasing Distance Function

We describe a robust method for spatial registration, which relies on the coarse correspondence of structures extracted from images, avoiding the establishment of point correspondences. These structures (tokens) are points, chains, polygons and regions at the level of intermediate symbolic representation (ISR). The algorithm recovers conformal transformations (4 affine parameters), so that 2-dimensional scenes as well as planar structures in 3D scenes can be handled. The affine transformation between two different tokensets is found by minimization of an exponentially decreasing distance function. As long as the tokensets are kept sparse, the method is very robust against a broad variety of common disturbances (e.g. incomplete segmentations, missing tokens, partial overlap). The performance of the algorithm is demonstrated using simple 2D shapes, medical, and remote sensing satellite images. The complexity of the algorithm is quadratic in the number of affine parameters.

Axel Pinz, Manfred Prantl, Harald Ganster

Issue 9

Managing Editor’s Column

Vol. 1, No. 9; September 28, 1995
Hermann Maurer, Cristian Calude, Arto Salomaa

An Efficient Distributed Algorithm For st-numbering The Vertices Of A Biconnected Graph

Given a biconnected network G with n nodes and a specific edge (r, s) of G, the st-numbering problem asks for an assignment of integers to the nodes satisfying the following condition: r is assigned the number 1, s is assigned the number n, and all other nodes are assigned numbers in such a way that every node (other than r and s) has a neighbour with a smaller st-number and a neighbour with a larger st-number. Since an st-numbering exists iff G is biconnected, it serves as a powerful “local characterization” of the “global” property of the network. We present an algorithm for st-numbering a biconnected graph with O(e) message complexity and O(n) time complexity.

R. F. M. Aranha, C. Pandu Rangan
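
The defining condition is easy to check mechanically. A minimal sketch (our own verifier, not the paper's distributed algorithm) applied to a small biconnected graph:

```python
def is_st_numbering(adj, num, r, s):
    """Check the defining condition: num[r] == 1, num[s] == n, and every
    other node has both a smaller-numbered and a larger-numbered neighbour."""
    n = len(adj)
    if num[r] != 1 or num[s] != n:
        return False
    for v, nbrs in adj.items():
        if v in (r, s):
            continue
        if not any(num[u] < num[v] for u in nbrs):
            return False
        if not any(num[u] > num[v] for u in nbrs):
            return False
    return True

# 4-cycle r-a-s-b plus the edge (r, s); the graph is biconnected
adj = {'r': ['a', 'b', 's'], 'a': ['r', 's'],
       's': ['a', 'b', 'r'], 'b': ['s', 'r']}
print(is_st_numbering(adj, {'r': 1, 'a': 2, 'b': 3, 's': 4}, 'r', 's'))  # True
```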

A Decision Method for the Unambiguity of Sets Defined by Number Systems

We show that it is decidable, given a number system N, whether or not there is an unambiguous number system equivalent to N.

Juha Honkala

A Method for Proving Theorems in Differential Geometry and Mechanics

A zero decomposition algorithm is presented and used to devise a method for proving theorems automatically in differential geometry and mechanics. The method has been implemented and its practical efficiency is demonstrated by several non-trivial examples including Bertrand’s theorem, Schell’s theorem and Kepler-Newton’s laws.

Dongming Wang

Issue 10

Managing Editor’s Column

Vol. 1, No. 10; October 28, 1995
Hermann Maurer, Cristian Calude, Arto Salomaa

An aperiodic set of Wang cubes

We introduce Wang cubes with colored faces that are a generalization of Wang tiles with colored edges. We show that there exists an aperiodic set of 21 Wang cubes, that is, a set for which there exists a tiling of the whole space with matching unit cubes but there exists no periodic tiling. We use the aperiodic set of 13 Wang tiles recently obtained by the first author using the new method developed by the second. Our method can be used to construct an aperiodic set of n-dimensional cubes for any n ≥ 3.

Karel Culik, Jarkko Kari
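
The matching rule underlying such tilings can be sketched for the two-dimensional case. The checker below is illustrative only (it says nothing about aperiodicity or the 13-tile set): it validates a periodic tiling by Wang tiles given as (N, E, S, W) colour tuples, with wrap-around adjacency:

```python
def tiling_is_valid(grid, tiles):
    """grid[i][j] indexes into tiles; each tile is an (N, E, S, W) colour
    tuple.  Adjacency wraps around, so a valid grid is a periodic tiling."""
    rows, cols = len(grid), len(grid[0])
    for i in range(rows):
        for j in range(cols):
            _, e, s, _ = tiles[grid[i][j]]
            if e != tiles[grid[i][(j + 1) % cols]][3]:   # east vs west
                return False
            if s != tiles[grid[(i + 1) % rows][j]][0]:   # south vs north
                return False
    return True

# two tiles whose N/S colours alternate 0/2, so they stack vertically
tiles = [(0, 1, 2, 1), (2, 1, 0, 1)]
print(tiling_is_valid([[0], [1]], tiles))   # True: a periodic tiling
print(tiling_is_valid([[0], [0]], tiles))   # False: south 2 meets north 0
```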

Contained Hypermedia

We propose a new hypermedia data model, called CHM for Contained HyperMedia. Our model is based on set-oriented data structuring, with a strong emphasis on automatic maintenance of link integrity. In this paper, the CHM model is presented in detail: data structuring, navigational facilities, and authoring support are all covered. We also explain how we have integrated support for the CHM model into Home, our Hypermedia Object Management Environment, publicly accessible through the World-Wide Web.

Erik Duval, Henk Olivié, Nick Scherbakov
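
The idea of automatic link integrity under set-oriented structuring can be sketched as follows (hypothetical classes, not the CHM or Home API): links exist only as container membership, so deleting a node necessarily removes every reference to it:

```python
class Store:
    """Toy set-oriented hypermedia store.  Links live only inside
    containers, so a deleted node can never leave a dangling link
    (a hypothetical sketch, not the CHM implementation)."""
    def __init__(self):
        self.containers = {}          # container name -> set of node ids

    def add(self, container, node):
        self.containers.setdefault(container, set()).add(node)

    def delete_node(self, node):
        # automatic link integrity: drop the node from every container
        for members in self.containers.values():
            members.discard(node)

store = Store()
store.add("course", "intro.html")
store.add("course", "ch1.html")
store.add("index", "intro.html")
store.delete_node("intro.html")
print(store.containers)   # {'course': {'ch1.html'}, 'index': set()}
```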

Authoring on the fly

We report on a new way of producing hypermedia documents to support teaching at universities. A computer-held lecture is automatically converted into the core of a multimedia document and linked together with papers, textbooks, animations and simulations. As an electronic substitute for the blackboard we have used the whiteboard wb of the Mbone toolset, and have also transmitted the lecture to remote locations. Our experiments demonstrate that classroom lecturing, distance teaching, and the production of educational hypermedia can be successfully integrated.

Th. Ottmann, Ch. Bacher

Issue 11

Managing Editor’s Column

Vol. 1, No. 11; November 28, 1995
Hermann Maurer, Cristian Calude, Arto Salomaa

Digital Libraries as Learning and Teaching Support

For 30 years repeated attempts have been made to use computers to support the teaching and learning process, albeit with only moderate success. Whenever major attempts failed, seemingly convincing reasons were presented for the less than satisfactory results. In the early days, cost or even the lack of suitable equipment was blamed; after colour graphics computers became widespread, the production costs of interactive and graphically appealing material were considered the main culprit; when modern multimedia authoring techniques did not change the situation either, the lack of personalized feedback, the lack of network support, and the difficulty of producing high-quality simulations were seen as the main obstacles. With networks now offering excellent multimedia presentation and communication facilities, the final breakthrough of computers as the ultimate teaching and learning tool is (once more) predicted. And once more results will be disappointing if one crucial component is again overlooked: good courseware must both guide students and provide a rich variety of background material whenever it is needed. It is the main claim of this paper that the advent of sizeable digital libraries provides one of the most significant chances ever for computer-based training. We argue that such libraries not only allow the efficient production of courseware but also provide the extensive background reservoir of material needed in many situations.

Hermann Maurer, Jennifer Lennon

Testing a High-Speed Data Path: The Design of the RSAβ Crypto Chip

High-speed devices for public-key cryptography are of emerging interest. For this reason, the RSAα crypto chip was designed: an architecture capable of performing fast RSA encryption and other cryptographic algorithms based on modulo multiplication. Besides the modulo multiplication algorithm called FastMM, the reasons for its high computation speed are the As Parallel As Possible (APAP) architecture as well as the high operation frequency. The RSAα crypto chip also contains on-chip RAM and special-purpose control logic, enabling special features like encrypted key loading. However, this control mechanism influences to some extent the testability of the MM data path, which is the heart of the chip. For this reason, the RSAβ crypto chip was designed to make it possible to evaluate the behaviour of the pure MM data path. In the following, we describe the strategies used with the RSAβ crypto chip for testing the MM data path under realistic conditions. In this context, analyzing control signal flow turns out to be the key action.

Wolfgang Mayerwieser, Karl C. Posch, Reinhard Posch, Volker Schindler
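
The FastMM algorithm itself is not reproduced here, but the reason a fast modular multiplier dominates RSA throughput can be sketched with the textbook square-and-multiply loop, using the well-known toy parameters n = 3233 = 61 · 53, e = 17, d = 2753:

```python
def mod_exp(base, exp, mod):
    """Left-to-right square-and-multiply: one modular squaring per
    exponent bit plus one modular multiply per set bit, so RSA speed is
    dominated by the modular multiplication primitive."""
    result = 1
    for bit in bin(exp)[2:]:
        result = (result * result) % mod      # square for every bit
        if bit == '1':
            result = (result * base) % mod    # multiply on set bits
    return result

cipher = mod_exp(65, 17, 3233)
print(cipher, mod_exp(cipher, 2753, 3233))    # decryption recovers 65
```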

A Comparison of WWW and Hyper-G

In this paper we attempt to compare features of WWW and Hyper-G, the first fully operable networked multimedia system that goes much beyond WWW and incorporates many features first proposed in Xanadu and later partially tested in systems such as Intermedia.

Andrew Pam, Arnould Vermeer

Issue 12

Managing Editor’s Column

Vol. 1, No. 12; December 28, 1995
Hermann Maurer, Cristian Calude, Arto Salomaa

A Novel Type of Skeleton for Polygons

A new internal structure for simple polygons, the straight skeleton, is introduced and discussed. It is composed of pieces of angular bisectors which partition the interior of a given n-gon P in a tree-like fashion into n monotone polygons. Its straight-line structure and its lower combinatorial complexity may make the straight skeleton preferable to the widely used medial axis of a polygon. As a seemingly unrelated application, the straight skeleton provides a canonical way of constructing a polygonal roof above a general layout of ground walls.

Oswin Aichholzer, Franz Aurenhammer, David Alberts, Bernd Gärtner
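
For an axis-aligned rectangle the straight skeleton is known in closed form, which makes a compact illustration of the partition into n monotone polygons (here n = 4): four 45-degree corner bisectors meet a horizontal ridge, the "roof" of the rectangle. The function below is a hand-coded special case, not the general algorithm:

```python
def rectangle_straight_skeleton(w, h):
    """Straight skeleton of the rectangle [0, w] x [0, h] with w >= h:
    four 45-degree corner bisectors meeting a horizontal ridge at
    height h/2, partitioning the interior into 4 monotone polygons."""
    assert w >= h > 0
    m = h / 2.0
    ridge_left, ridge_right = (m, m), (w - m, m)
    segments = [
        ((0, 0), ridge_left), ((0, h), ridge_left),      # left corner bisectors
        ((w, 0), ridge_right), ((w, h), ridge_right),    # right corner bisectors
    ]
    if w > h:
        segments.append((ridge_left, ridge_right))       # the ridge
    return segments

print(rectangle_straight_skeleton(4, 2))
```

For a square (w = h) the ridge degenerates to a single point and only the four bisectors remain.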

Constraint Agents for the Information Age

We propose constraints as the appropriate computational constructs for the design of agents with the task of selecting, merging and managing electronic information coming from such services as Internet access, digital libraries, e-mail, or on-line information repositories. Specifically, we introduce the framework of Constraint-Based Knowledge Brokers, which are concurrent agents that use so-called signed feature constraints to represent partially specified information and can flexibly cooperate in the management of distributed knowledge. We illustrate our approach by several examples, and we define application scenarios based on related technology such as Telescript and workflow management systems.

Jean-Marc Andreoli, Uwe M. Borghoff, Remo Pareschi, Johann H. Schlichter

Parikh prime words and GO-like territories

An n-dimensional vector of natural numbers is said to be prime if the greatest common divisor of its components is one. A word is said to be Parikh prime if its Parikh vector is prime. The languages of Parikh prime and of Parikh non-prime words are investigated (they are neither semilinear nor slender, hence are not context-free or D0L languages; both of them can be generated by matrix grammars with appearance checking). Marking in the plane the points identified by prime (2-dimensional) vectors, interesting patterns of non-marked (“free”) points appear (they are similar to the territories in the game of GO). The shape of such possible territories is investigated (with an exhaustive analysis of tro-, tetro-, pento-, and hexominoes). Some open problems are formulated (both concerning the mentioned languages and the “GO territories theory”).

A. Mateescu, Gh. Păun, G. Rozenberg, A. Salomaa
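
The definition translates directly into code; a minimal sketch (function names are ours):

```python
from math import gcd
from functools import reduce

def parikh_vector(word, alphabet):
    """Count the occurrences of each letter, in alphabet order."""
    return tuple(word.count(a) for a in alphabet)

def is_parikh_prime(word, alphabet):
    """A word is Parikh prime iff the gcd of its letter counts is 1."""
    return reduce(gcd, parikh_vector(word, alphabet)) == 1

print(parikh_vector("aabab", "ab"))     # (3, 2)
print(is_parikh_prime("aabab", "ab"))   # True: gcd(3, 2) = 1
print(is_parikh_prime("aabb", "ab"))    # False: gcd(2, 2) = 2
```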

Exploiting Parallelism in Constraint Satisfaction for Qualitative Simulation

Constraint satisfaction is very common in many artificial intelligence applications. This paper presents results from parallelizing constraint satisfaction in a special application: the algorithm for qualitative simulation QSim [Kuipers 94]. A parallel-agent-based strategy (PAB) is used to solve the constraint satisfaction problem (CSP). Two essential steps of PAB are studied in more detail to achieve good performance of the parallel algorithm. Partitioning heuristics to generate independent parts of the overall search space are investigated. Sequential CSP algorithms are compared in order to reveal the most efficient one for QSim. The evaluation of these heuristics and algorithms is based on runtime measurements using CSPs traced from QSim. These runtimes allow a best- and worst-case estimation of the expected speedup of the parallel algorithms. The comparison of sequential CSP algorithms leads to the following strategy for solving partitioned problems: less complex problems are solved with simple backtracking, and more complex models are solved with graph-directed backjumping (GBJ).

Marco Platzner, Bernhard Rinner, Reinhold Weiss
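
Simple backtracking, the strategy assigned above to less complex problems, can be sketched as follows (graph-directed backjumping is not reproduced, and the 3-colouring instance is our own illustration, not a QSim constraint network):

```python
def backtrack(domains, constraints, assignment=None):
    """Plain chronological backtracking: extend the assignment one
    variable at a time, checking every constraint whose variables are
    all assigned.  constraints: list of (variable_tuple, predicate)."""
    if assignment is None:
        assignment = {}
    unassigned = [v for v in domains if v not in assignment]
    if not unassigned:
        return assignment
    var = unassigned[0]
    for value in domains[var]:
        assignment[var] = value
        ok = all(pred(*(assignment[u] for u in vs))
                 for vs, pred in constraints
                 if all(u in assignment for u in vs))
        if ok:
            result = backtrack(domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]             # undo and try the next value
    return None

# toy CSP: 3-colour a triangle graph
constraints = [(("a", "b"), lambda x, y: x != y),
               (("b", "c"), lambda x, y: x != y),
               (("a", "c"), lambda x, y: x != y)]
print(backtrack({v: [0, 1, 2] for v in "abc"}, constraints))
```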

A Markov Process for Sequential Allocation

We describe a Markov process which models sequential allocation for two adjacent tables coexisting in memory by growing towards each other. The tables are expected to fill at the same rate; random deletions and insertions are allowed.

Cătălina Ştefănescu
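
The allocation scheme being modelled can be illustrated by simulation (an assumption-laden toy, not the paper's Markov analysis): two tables share one memory block and grow toward each other under random insertions and deletions until they collide:

```python
import random

def steps_until_collision(memory_size, p_insert=0.6, seed=1):
    """Toy simulation of the two-table scheme: table A grows up from the
    bottom of memory, table B grows down from the top, and a random walk
    of insertions and deletions runs until the tables meet."""
    random.seed(seed)
    a, b = 0, 0                         # current sizes of the two tables
    steps = 0
    while a + b < memory_size:
        table = random.choice("AB")     # both tables fill at the same rate
        if random.random() < p_insert:
            if table == "A": a += 1
            else: b += 1
        else:                           # deletion, only if non-empty
            if table == "A": a = max(0, a - 1)
            else: b = max(0, b - 1)
        steps += 1
    return steps

print(steps_until_collision(100))
```

With p_insert closer to 1/2 the walk drifts more slowly, so collision takes far longer for the same memory size.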

Backmatter
