
1994 | Book

Machine Learning: ECML-94

European Conference on Machine Learning, Catania, Italy, April 6–8, 1994, Proceedings

Edited by: Francesco Bergadano, Luc De Raedt

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

This volume contains the proceedings of the European Conference on Machine Learning 1994, which continues the tradition of earlier meetings and which is a major forum for the presentation of the latest and most significant results in machine learning.
Machine learning is one of the most important subfields of artificial intelligence and computer science, as it is concerned with the automation of learning processes.
This volume contains two invited papers, 19 regular papers, and 25 short papers, carefully reviewed and selected from a total of 88 submissions.
The papers describe techniques, algorithms, implementations, and experiments in the area of machine learning.

Table of Contents

Frontmatter
Industrial applications of ML: Illustrations for the KAML dilemma and the CBR dream

This paper presents several industrial applications of ML in the context of efforts to solve the “KAML problem”, i.e., the problem of merging knowledge acquisition and machine learning techniques. Case-based reasoning is a possible alternative to the problem of acquiring highly compiled expert knowledge, but it also raises many new problems that must be solved before really efficient implementations are available.

Y. Kodratoff
Knowledge representation in machine learning

This paper investigates the influence of knowledge representation languages on the complexity of the learning process. The aim of the paper is not to give a state-of-the-art account of the issues involved, but to survey the underlying ideas. References are therefore provided only occasionally, and the specific quantitative results are left to the presentation. Finally, the paper is intentionally unbalanced, in that more space is given to those issues that are more novel or less investigated in the literature.

Filippo Neri, Lorenza Saitta
Inverting implication with small training sets

We present an algorithm for inducing recursive clauses using inverse implication (rather than inverse resolution) as the underlying generalization method. Our approach applies to a class of logic programs similar to the class of primitive recursive functions. Induction is performed using a small number of positive examples that need not be along the same resolution path. Our algorithm, implemented in a system named CRUSTACEAN, locates matched lists of generating terms that determine the pattern of decomposition exhibited in the (target) recursive clause. Our theoretical analysis defines the class of logic programs for which our approach is complete, described in terms characteristic of other ILP approaches. Our current implementation is considerably faster than previously reported. We present evidence demonstrating that, given randomly selected inputs, increasing the number of positive examples increases accuracy and reduces the number of outputs. We relate our approach to similar recent work on inducing recursive clauses.

David W. Aha, Stephane Lapointe, Charles X. Ling, Stan Matwin
A context similarity measure

This paper concentrates upon similarity between objects described by vectors of nominal features. It proposes non-metric measures for evaluating the similarity between: (a) two identical values in a feature, (b) two different values in a feature, (c) two objects. The paper suggests that similarity is dependent upon the context: it is influenced by the given set of objects and the concept under discussion. The proposed Context-Similarity measure was tested, and the paper presents comparisons with other measures. The comparisons suggest that the Context-Similarity measure suits natural concepts best.

Yoram Biberman
Incremental learning of control knowledge for nonlinear problem solving

In this paper we advocate a learning method where a deductive and an inductive strategy are combined to efficiently learn control knowledge. The approach consists of initially bounding the explanation to a predetermined set of problem solving features. Since there is no proof that this set is sufficient to capture the correct and complete explanation for the decisions, the control rules acquired are then refined, if and when applied incorrectly to new examples. The method is especially significant as it applies directly to nonlinear problem solving, where the search space is complete. We present HAMLET, a system in which we implemented this learning method within the context of the PRODIGY architecture. HAMLET learns control rules for individual decisions corresponding to new learning opportunities offered by the nonlinear problem solver that go beyond the linear one. These opportunities involve, among other issues, completeness, quality of plans, and opportunistic decision making. Finally, we show empirical results illustrating HAMLET's learning performance.

Daniel Borrajo, Manuela Veloso
Characterizing the applicability of classification algorithms using meta-level learning

This paper is concerned with a comparative study of different machine learning, statistical and neural algorithms and an automatic analysis of the test results. It is shown that machine learning methods themselves can be used in organizing this knowledge. Various datasets can be characterized using different statistical and information-theoretic measures. These, together with the test results, can be used by an ML system to generate a set of rules which could also be altered or edited by the user. The system can be applied to a new dataset to provide the user with a set of recommendations concerning the suitability of different algorithms, graded by an appropriate information score. The experiments with the implemented system indicate that the method is viable and useful.

Pavel Brazdil, João Gama, Bob Henery
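
A hedged sketch of the meta-learning loop the abstract describes: characterize each dataset with simple statistical and information-theoretic measures, record which algorithm performed best on it, and induce a classifier over these meta-features. The particular measures and the use of scikit-learn's DecisionTreeClassifier are illustrative assumptions, not the paper's actual system.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def meta_features(X, y):
    """Simple dataset characterization: size, dimensionality, class entropy."""
    n, d = X.shape
    _, counts = np.unique(y, return_counts=True)
    p = counts / n
    class_entropy = -(p * np.log2(p)).sum()
    return [n, d, len(counts), class_entropy]

def fit_recommender(datasets, best_algorithm_labels):
    """datasets: list of (X, y); labels: name of the best algorithm per dataset.
    Returns a meta-level model mapping dataset measures to a recommendation."""
    M = np.array([meta_features(X, y) for X, y in datasets])
    return DecisionTreeClassifier(max_depth=3).fit(M, best_algorithm_labels)
```
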
Inductive learning of characteristic concept descriptions from small sets of classified examples

This paper presents a novel idea to the problem of learning concept descriptions from examples. Whereas most existing approaches rely on a large number of classified examples, the approach presented in the paper is aimed at being applicable when only a few examples are classified as positive (and negative) instances of a concept. The approach tries to take advantage of the information which can be induced from descriptions of unclassified objects using a conceptual clustering algorithm. The system Cola is described and results of applying Cola in two real-world domains are presented.

Werner Emde
FOSSIL: A robust relational learner

The research reported in this paper describes Fossil, an ILP system that uses a search heuristic based on statistical correlation. This algorithm implements a new method for learning useful concepts in the presence of noise. In contrast to Foil's stopping criterion, which allows theories to grow in complexity as the size of the training sets increases, we propose a new stopping criterion that is independent of the number of training examples. Instead, Fossil's stopping criterion depends on a search heuristic that estimates the utility of literals on a uniform scale. In addition we outline how this feature can be used for top-down pruning and present some preliminary results.

Johannes Fürnkranz
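
A minimal sketch of the correlation idea sketched above, under the simplifying assumption that a candidate literal is scored by the Pearson correlation between the set of examples it covers and their labels; Fossil's actual definition over tuples in the ILP setting is more involved. A fixed cutoff on this uniform score then gives a stopping criterion that does not grow with training-set size; the value 0.3 below is a hypothetical parameter.

```python
import numpy as np

def correlation_score(covered, positive):
    """covered, positive: boolean arrays over the training examples."""
    c = covered.astype(float)
    p = positive.astype(float)
    if c.std() == 0 or p.std() == 0:       # degenerate literal or labels
        return 0.0
    return float(np.corrcoef(c, p)[0, 1])

def acceptable(covered, positive, cutoff=0.3):
    """Stopping criterion: keep refining only while some literal passes."""
    return correlation_score(covered, positive) >= cutoff
```
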
A multistrategy learning system and its integration into an interactive floorplanning tool

The presented system COSIMA learns floorplanning rules from structural descriptions incrementally, using a number of cooperating machine learning strategies: Selective inductive generalization generates most specific generalizations using predicate weights to select the best one heuristically. The predicate weights are adjusted statistically. Inductive specialization eliminates overgeneralizations. Constructive induction improves the learning process in several ways. The system is organized as a learning apprentice system. It provides an interactive design tool and can automate single floorplanning steps.

Jürgen Herrmann, Reiner Ackermann, Jörg Peters, Detlef Reipa
Bottom-up induction of oblivious read-once decision graphs

We investigate the use of oblivious, read-once decision graphs as structures for representing concepts over discrete domains, and present a bottom-up, hill-climbing algorithm for inferring these structures from labelled instances. The algorithm is robust with respect to irrelevant attributes, and experimental results show that it performs well on problems considered difficult for symbolic induction methods, such as the Monk's problems and parity.

Ron Kohavi
Estimating attributes: Analysis and extensions of RELIEF

In the context of machine learning from examples, this paper deals with the problem of estimating the quality of attributes with and without dependencies among them. Kira and Rendell (1992a,b) developed an algorithm called RELIEF, which was shown to be very efficient in estimating attributes. The original RELIEF can deal with discrete and continuous attributes but is limited to two-class problems. In this paper RELIEF is analysed and extended to deal with noisy, incomplete, and multi-class data sets. The extensions are verified on various artificial problems and one well-known real-world problem.

Igor Kononenko
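
A minimal sketch of the original two-class RELIEF idea of Kira and Rendell, assuming numeric features scaled to [0, 1]; the noise-tolerant, multi-class extensions the paper develops add k nearest hits and misses and probability-weighted misses per class, which are omitted here.

```python
import numpy as np

def relief(X, y, n_iter=100, rng=None):
    """Estimate attribute qualities for a two-class problem."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)    # L1 distance to all instances
        dist[i] = np.inf                       # exclude the instance itself
        same = (y == y[i])
        same[i] = False
        hit = np.argmin(np.where(same, dist, np.inf))    # nearest same-class
        miss = np.argmin(np.where(~same, dist, np.inf))  # nearest other-class
        w -= np.abs(X[i] - X[hit]) / n_iter    # penalize separation from hits
        w += np.abs(X[i] - X[miss]) / n_iter   # reward separation from misses
    return w
```
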
BMWk revisited: Generalization and formalization of an algorithm for detecting recursive relations in term sequences

As several works in Machine Learning (particularly in Inductive Logic Programming) have focused on building recursive definitions from examples, this paper presents a formalization and a generalization of the BMWk methodology, which stems from work on program synthesis from examples ten years ago. The framework of the proposed formalization is term rewriting. It allows us to state some theoretical results on the qualities and limitations of the method.

Guillaume Le Blanc
An analytic and empirical comparison of two methods for discovering probabilistic causal relationships

The discovery of causal relationships from empirical data is an important problem in machine learning. In this paper the attention is focused on the inference of probabilistic causal relationships, for which two different approaches have been proposed: Glymour et al.'s approach based on constraints on correlations, and Pearl and Verma's approach based on conditional independencies. These methods differ both in the kind of constraints they consider while selecting a causal model and in the way they search for the model that best fits the sample data. Preliminary experiments show that they are complementary in several aspects. Moreover, the method of conditional independence can be easily extended to the case in which variables have a nominal or ordinal domain. In this case, symbolic learning algorithms can be exploited in order to derive the causal law from the causal model.

Donato Malerba, Giovanni Semeraro, Floriana Esposito
Sample PAC-learnability in model inference

In this article, PAC-learning theory is applied to model inference, which concerns the problem of inferring theories from facts in first order logic. It is argued that uniform sample PAC-learnability cannot be expected for most of the ‘interesting’ model classes. Polynomial sample learnability can only be accomplished in classes of programs having a fixed maximum number of clauses. We prove that the class of context-free programs with a fixed maximum number of clauses and a fixed maximum number of literals is learnable from a polynomial number of examples. This is also proved for a more general class of programs.

S. H. Nienhuys-Cheng, M. Polman
Averaging over decision stumps

In this paper, we examine a minimum encoding approach to the inference of decision stumps. We then examine averaging over decision stumps as a method of generating probability estimates at the leaves of decision trees.

Jonathan J. Oliver, David Hand
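
A hedged sketch of averaging over decision stumps to obtain class-probability estimates for a single feature. The weighting scheme here, posterior weight proportional to exp(-data code length) with Laplace leaf estimates and a constant model cost that cancels under normalization, is an assumption in the spirit of a minimum-encoding approach, not the authors' exact coding scheme.

```python
import numpy as np

def stump_probs(x_train, y_train, x_query):
    """Average P(class=1 | query) over all threshold stumps on one feature.
    y_train is a 0/1 array; x_train and x_query are numeric."""
    thresholds = np.unique(x_train)
    log_weights, preds = [], []
    for t in thresholds:
        left = x_train <= t
        probs = []
        for side in (left, ~left):
            k, n = y_train[side].sum(), side.sum()
            probs.append((k + 1) / (n + 2))          # Laplace leaf estimate
        # data code length (nats) of the labels under the stump's leaves
        p_inst = np.where(left, probs[0], probs[1])
        code = -(y_train * np.log(p_inst)
                 + (1 - y_train) * np.log(1 - p_inst)).sum()
        log_weights.append(-code)                    # shorter code, higher weight
        preds.append(probs[0] if x_query <= t else probs[1])
    lw = np.array(log_weights)
    w = np.exp(lw - lw.max())                        # stable normalization
    return float(np.dot(w, preds) / w.sum())
```
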
Controlling constructive induction in CIPF: An MDL approach

We describe the propositional learning system CiPF, which tightly couples a simple concept learner with a sophisticated constructive induction component. It is described in terms of a generic architecture for constructive induction. We focus on the problem of controlling the abundance of opportunities for constructively adding new attributes. In CiPF the so-called Minimum Description Length (MDL) principle acts as a powerful control heuristic. This is also confirmed in the experiments reported.

Bernhard Pfahringer
Using constraints to build version spaces

Our concern is building the set G of maximally general terms covering positive examples and rejecting negative examples in propositional logic. Negative examples are represented as constraints on the search space. This representation allows for defining a partial order on the negative examples and on attributes too. It is shown that only minimal negative examples and minimal attributes need to be considered when building the set G. These results hold in the case of a non-convergent data set. Constraints can be directly used for a polynomial characterization of G. They also allow for detecting erroneous examples in a data set.

Michèle Sebag
On the utility of predicate invention in inductive logic programming

The task of predicate invention in ILP is to extend the hypothesis language with new predicates when the vocabulary given initially is insufficient for the learning task. However, whether predicate invention really helps to make learning succeed in the extended language depends on the bias currently employed. In this paper we investigate for which commonly employed language biases predicate invention is an appropriate shift operation. We prove that for some restricted languages predicate invention does not help when the learning task fails, and characterize the languages for which predicate invention is useful as a bias shift operation.

Irene Stahl
Learning problem-solving concepts by reflecting on problem solving

Learning and problem solving are intimately related: problem solving determines the knowledge requirements of the reasoner which learning must fulfill, and learning enables improved problem-solving performance. Different models of problem solving, however, recognize different knowledge needs, and, as a result, set up different learning tasks. Some recent models analyze problem solving in terms of generic tasks, methods, and subtasks. These models require the learning of problem-solving concepts such as new tasks and new task decompositions. We view reflection as a core process for learning these problem-solving concepts. In this paper, we identify the learning issues raised by the task-structure framework of problem solving. We view the problem solver as an abstract device, and represent how it works in terms of a structure-behavior-function model which specifies how the knowledge and reasoning of the problem solver results in the accomplishment of its tasks. We describe how this model enables reflection, and how model-based reflection enables the reasoner to adapt its task structure to produce solutions of better quality. The Autognostic system illustrates this reflection process.

Eleni Stroulia, Ashok K. Goel
Existence and nonexistence of complete refinement operators

Inductive Logic Programming is a subfield of Machine Learning concerned with the induction of logic programs. In Shapiro's Model Inference System — a system that infers theories from examples — the use of downward refinement operators was introduced to walk through an ordered search space of clauses. Downward and upward refinement operators compute specializations and generalizations of clauses respectively. In this article we present the results of our study of completeness and properness of refinement operators for an unrestricted search space of clauses ordered by θ-subsumption. We prove that locally finite downward and upward refinement operators that are both complete and proper for unrestricted search spaces ordered by θ-subsumption do not exist. We also present a complete but improper upward refinement operator. This operator forms a counterpart to Laird's downward refinement operator with the same properties.

Patrick R. J. van der Laag, Shan-Hwei Nienhuys-Cheng
A hybrid nearest-neighbor and nearest-hyperrectangle algorithm

Algorithms based on Nested Generalized Exemplar (NGE) theory [10] classify new data points by computing their distance to the nearest “generalized exemplar” (i.e., an axis-parallel multidimensional rectangle). An improved version of NGE, called BNGE, was previously shown to perform comparably to the Nearest Neighbor algorithm. Advantages of the NGE approach include compact representation of the training data and fast training and classification. A hybrid method that combines BNGE and the k-Nearest Neighbor algorithm, called KBNGE, is introduced for improved classification accuracy. Results from eleven domains show that KBNGE achieves generalization accuracies similar to the k-Nearest Neighbor algorithm at improved classification speed. KBNGE is a fast, easy-to-use inductive learning algorithm that gives very accurate predictions in a variety of domains and represents the learned knowledge in a manner that can be easily interpreted by the user.

Dietrich Wettschereck
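
A minimal sketch of the nearest-hyperrectangle distance at the core of NGE and BNGE: the distance from a point to an axis-parallel rectangle is zero inside it and otherwise the Euclidean norm of the per-dimension overshoot. The hybrid KBNGE scheme, which falls back to k-NN where the rectangles are ambiguous, is only indicated here.

```python
import numpy as np

def rect_distance(x, lower, upper):
    """Distance from point x to the axis-parallel box [lower, upper]."""
    overshoot = np.maximum(0.0, np.maximum(lower - x, x - upper))
    return np.linalg.norm(overshoot)

def classify(x, rectangles):
    """rectangles: list of (lower, upper, label); the nearest box wins.
    A KBNGE-style hybrid would defer ties/ambiguities to plain k-NN."""
    dists = [(rect_distance(x, lo, hi), label) for lo, hi, label in rectangles]
    return min(dists)[1]
```
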
Automated knowledge acquisition for Prospector-like expert systems

A method for automatic knowledge acquisition from categorical data is explained. Empirical implications are generated from data according to their frequencies. Only those implications are inserted into the created knowledge base whose validity in the data differs statistically significantly from the weight composed by the Prospector-like inference mechanism from the weights of the implications already present in the base. A comparison with classical machine learning algorithms is discussed. The method is implemented as part of the Knowledge EXplorer system.

Petr Berka, Jiří Ivánek
On the role of machine learning in knowledge-based control

Knowledge-based methods are gaining importance in automation systems. But many real applications are too complex, or too little understanding is available, to acquire useful knowledge. Machine learning techniques, like the directed self-learning used here, may therefore help to bridge this gap. To point out the advantages of machine learning in process automation, we applied the directed self-learning method to the control of an inverted pendulum. Through a comparison between a knowledge-based and a machine-learning version of the controller, both based on the knowledge of the same expert, results were achieved that demonstrate the usefulness of machine learning in control applications.

Werner Brockmann
Discovering dynamics with genetic programming

This paper describes an application of the genetic programming paradigm to the problem of structure identification of dynamical systems. The approach is experimentally evaluated by reconstructing the models of several dynamical systems from simulated behaviors.

Sašo Džeroski, Igor Petrovski
A geometric approach to feature selection

We propose a new method for selecting features, or deciding on splitting points, in inductive learning. Its main innovation is to take the positions of examples into account instead of just considering the numbers of examples from different classes that fall on different sides of a splitting rule. The method gives rise to a family of feature selection techniques. We demonstrate the promise of the developed method with initial empirical experiments in connection with top-down induction of decision trees.

Tapio Elomaa, Esko Ukkonen
Identifying unrecognizable regular languages by queries

We describe a new technique for identifying a subclass of regular trace languages (defined on a free partially commutative monoid). We extend an algorithm defined by Dana Angluin in 1987 for DFAs, which uses equivalence and membership queries. In trace languages the words are equivalence classes of strings, and we show how to extract, from a given class, a string that can drive the original learning algorithm. In this way we can identify a class of regular trace languages that includes languages not recognizable by any automaton.

Claudio Ferretti, Giancarlo Mauri
Intensional learning of logic programs

In this paper we investigate the possibility of learning logic programs by using an intensional evaluation of clauses. Unlike learning methods based on extensionality, by adopting an intensional evaluation of clauses the learning algorithm presented in this paper is correct and sufficient and does not depend on the kind of examples provided. Since searching a space of possible programs (instead of a space of independent clauses) is unfeasible, only partial programs containing clauses successfully used to derive at least one positive example are taken into consideration. Since clauses are not learned independently of each other, backtracking may be required.

D. Gunetti, U. Trinchero
Partially isomorphic generalization and analogical reasoning

Analogical reasoning is carried out based on an analogy which gives a similarity between a base domain and a target domain. Thus the analogy plays an important role in analogical reasoning. However, computing such an analogy leads to a combinatorial explosion. This paper introduces a notion of partially isomorphic generalizations of atoms and rules which makes it possible to carry out analogical reasoning without computing the analogy, and gives a relationship between our generalization and the analogy. Then we give a procedure which produces such a generalization in polynomial time.

Eizyu Hirowatari, Setsuo Arikawa
Learning from recursive, tree structured examples

In this paper, we propose an example representation system that combines greater expressive richness than the Boolean framework with a comparable treatment complexity. The model we have chosen is algebraic and has been used up to now to cope with program semantics [4]. The examples are represented by labelled, recursive, typed trees. A signature enables us to define the set of all allowed (partial or complete) representations. This model properly contains Boolean representations. We show that in the PAC framework defined by Valiant [10], the extensions to this model of two Boolean formula classes, k-DNF and k-DL, remain polynomially learnable.

P. Jappy, M. C. Daniel-Vatonne, O. Gascuel, C. de la Higuera
Concept formation in complex domains

Most empirical learning algorithms describe objects as a list of attribute-value pairs. A flat attribute-value representation fails, however, to capture the internal structure of real objects. Mechanisms are therefore needed to represent the different levels of detail at which an object can be seen. A common structuring method is reviewed, and new principles of evaluation are proposed. As another way of enriching the representation language, a formalism is also proposed for multi-valued attributes, allowing the representation of sets of objects.

A. Ketterlin, J. J. Korczak
An algorithm for learning hierarchical classifiers
Jyrki Kivinen, Heikki Mannila, Esko Ukkonen, Jaak Vilo
Learning belief network structure from data under causal insufficiency

Hidden variables are well-known sources of disturbance when recovering belief networks from data based only on measurable variables. Hence, models assuming the existence of hidden variables are under development. This paper presents a new algorithm exploiting the results of the known CI algorithm of Spirtes, Glymour and Scheines [4]. The CI algorithm produces a partial causal structure from data, indicating common unmeasured causes for some variables. We claim that there exist belief network models which (1) have connections identical with those of the CI output, (2) have edge orientations identical with CI, (3) have no other latent variables than those indicated by CI, and (4) at the same time fit the data. We present a non-deterministic algorithm generating the whole family of such belief networks.

Mieczyslaw A. Klopotek
Cost-sensitive pruning of decision trees

The pruning of decision trees often relies on the classification accuracy of the decision tree. In this paper, we show how the misclassification costs, a related criterion applied if errors vary in their costs, can be integrated in several well-known pruning techniques.

Ulrich Knoll, Gholamreza Nakhaeizadeh, Birgit Tausend
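
A hedged sketch of folding a misclassification-cost matrix into a pruning decision: a subtree is replaced by a leaf whenever the leaf's expected cost does not exceed the subtree's. How this plugs into specific pruning techniques varies; only the shared cost computation is shown, and the dict-based tree format is hypothetical.

```python
import numpy as np

def leaf_cost(class_counts, cost):
    """Expected cost of the best single-class leaf.

    class_counts[c] = number of examples of class c reaching the node (array);
    cost[i, j]      = cost of predicting class j when the truth is class i.
    """
    expected = class_counts @ cost      # total cost of predicting each class j
    return expected.min()               # best prediction for this node

def prune(node, cost):
    """Bottom-up cost-based pruning of a tree given as nested dicts
    {'counts': np.array, 'children': [...]}; leaves have empty children."""
    if not node.get('children'):
        return leaf_cost(node['counts'], cost)
    subtree = sum(prune(ch, cost) for ch in node['children'])
    as_leaf = leaf_cost(node['counts'], cost)
    if as_leaf <= subtree:              # pruning does not raise expected cost
        node['children'] = []           # replace the subtree by a leaf
        return as_leaf
    return subtree
```
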
An instance-based learning method for databases: An information theoretic approach

A new method of instance-based learning for databases is proposed. We improve current similarity measures in several ways using information theory. Similarity is defined for every possible attribute type in a database, and the weight of each attribute is calculated automatically by the system. In addition, our nearest neighbor algorithm assigns different weights to the selected instances. Our system is implemented and tested on a typical machine learning database.

Changhwan Lee
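
A hedged sketch of the two weighting ideas in the abstract for nominal data: attribute weights derived from information theory (here, mutual information between each attribute and the class, an assumption about the exact measure) and distance-based weights on the selected neighbours (here, inverse distance, likewise an assumption).

```python
import numpy as np
from collections import Counter

def mutual_information(col, y):
    """I(attribute; class) from co-occurrence counts over nominal values."""
    n = len(y)
    joint = Counter(zip(col, y))
    px, py = Counter(col), Counter(y)
    return sum(c / n * np.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in joint.items())

def predict(X, y, query, k=5):
    """Weighted-overlap k-NN on nominal attributes with weighted votes."""
    weights = np.array([mutual_information(X[:, j], y)
                        for j in range(X.shape[1])])
    dists = (weights * (X != query)).sum(axis=1)   # weighted overlap metric
    nearest = np.argsort(dists)[:k]
    votes = Counter()
    for i in nearest:
        votes[y[i]] += 1.0 / (1.0 + dists[i])      # closer neighbours count more
    return votes.most_common(1)[0][0]
```
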
Early screening for gastric cancer using machine learning techniques

The feasibility of using machine-learning techniques to screen dyspeptic patients for those at high risk of gastric cancer was demonstrated in this study. Data on 1401 dyspeptic patients over the age of 40 consisted of 85 epidemiological and clinical variables and a gold-standard diagnosis made by upper gastrointestinal endoscopy. The diagnoses were grouped into two classes: those at high risk of having (or developing) gastric cancer and those at low risk. A machine-learning approach was used to generate a cross-validated sensitivity-specificity curve in order to assess the power of the discrimination between the two groups.

W. Z. Liu, A. P. White, M. T. Hallissey
DP1: Supervised and unsupervised clustering

This paper presents Dp1, an incremental clustering algorithm that accepts a description of the expected performance task — the goal of learning — and uses that description to alter its learning bias. With different goals Dp1 addresses a wide range of empirical learning tasks from supervised to unsupervised learning. At one extreme, Dp1 performs the same task as does ID3, and at the other, it performs the same task as does Cobweb.

Joel D. Martin
Using machine learning techniques to interpret results from discrete event simulation

This paper describes an approach to interpreting discrete event simulation results using machine learning techniques. The results of two simulators were processed as machine learning problems. The interpretation obtained by the regression tree learning system Retis was intuitive but expressed in a complicated way. To enable a more powerful knowledge representation, the Inductive Logic Programming (ILP) system Markus was used, which also highlighted some attribute combinations. These attribute combinations were used as new attributes in further experiments with Retis and Assistant. Some interesting regularities were thereby discovered automatically.

Dunja Mladenić, Ivan Bratko, Ray J. Paul, Marko Grobelnik
Flexible integration of multiple learning methods into a problem solving architecture

One of the key issues in so-called multi-strategy learning systems is the degree of freedom and flexibility with which different learning and inference components can be combined. Most multi-strategy systems only support a fixed, tailored integration of the different modules for a specific domain of problems. We report here on our current research on the Massive Memory Architecture (MMA), an attempt to provide a uniform representation framework for inference and learning components that supports flexible, multiple combinations of these components. Rather than a specific combination of learning methods, we are interested in an architecture adaptable to different domains, where multiple learning strategies (combinations of learning methods) can be programmed or even learned.

Enric Plaza, Josep Lluís Arcos
Concept sublattices

We consider the following problem: given a “universe” of primitive and composed entities, where non-primitive entities may contain other ones, how should we represent these entities such that their containment relation is decidable? As an answer to this problem we propose a representation based on a Galois connection. An application of this idea to modelling human memory is given as well.

Janos Sarbo, József Farkas
The piecewise linear classifier DIPOL92

This paper presents a learning algorithm which constructs an optimised piecewise linear classifier for n-class problems. In the first step of the algorithm, initial positions of the discriminating hyperplanes are determined by linear regression for each pair of classes. To optimise these positions with respect to the misclassified patterns, an error criterion function is defined. This function is minimised by a gradient descent procedure for each hyperplane separately. As an option, in the case of non-convex classes, a clustering procedure decomposing the classes into appropriate subclasses can be applied. The classification of patterns is defined on a symbolic level on the basis of the signs of the discriminating hyperplanes.

Barbara Schulmeister, Fritz Wysotzki
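
A minimal sketch of the initialization step described above: one discriminating hyperplane per pair of classes, placed by least-squares regression on ±1 targets, with classification by the signs of the pairwise discriminants. The paper's gradient-descent refinement of the error criterion and the clustering option for non-convex classes are omitted.

```python
import numpy as np
from itertools import combinations

def fit_pairwise_hyperplanes(X, y):
    """One least-squares hyperplane per pair of classes (the initial step)."""
    planes = {}
    for a, b in combinations(np.unique(y), 2):
        mask = (y == a) | (y == b)
        Xa = np.hstack([X[mask], np.ones((mask.sum(), 1))])  # affine term
        t = np.where(y[mask] == a, 1.0, -1.0)                # signed targets
        w, *_ = np.linalg.lstsq(Xa, t, rcond=None)
        planes[(a, b)] = w
    return planes

def classify(planes, x):
    """Vote on the sign of each pairwise discriminant."""
    votes = {}
    xa = np.append(x, 1.0)
    for (a, b), w in planes.items():
        winner = a if xa @ w > 0 else b
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)
```
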
Complexity of computing generalized VC-dimensions

In the PAC-learning model, the Vapnik-Chervonenkis (VC) dimension plays the key role in estimating the polynomial-sample learnability of a class of binary functions. For a class of {0,..., m}-valued functions, the notion has been generalized in various ways. This paper investigates the complexity of computing some of the generalized VC-dimensions: the VC*-dimension, Ψ*-dimension, and ΨG-dimension. For each dimension, we consider the decision problem of determining, for a given matrix representing a class F of functions and an integer K, whether the dimension of F is greater than K. We prove that the VC*-dimension problem is polynomial-time reducible to the satisfiability problem of length J with O(log² J) variables, which includes the original VC-dimension problem as a special case. We also show that the ΨG-dimension problem is likewise reducible to the satisfiability problem of length J with O(log² J) variables, while the Ψ*-dimension problem becomes NP-complete.

Ayumi Shinohara
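
A hedged illustration of the underlying decision problem for the binary-valued special case: given a finite class F presented as a matrix (rows are functions, columns are domain points), decide whether some K-point set is shattered. The exhaustive search below is exponential in K, which is consistent with the paper's reductions to satisfiability rather than an outright polynomial algorithm.

```python
import numpy as np
from itertools import combinations

def shatters(F, cols):
    """True if the restriction of F to these columns realizes all patterns."""
    patterns = {tuple(row) for row in F[:, cols]}
    return len(patterns) == 2 ** len(cols)        # binary-valued case

def vc_dimension_at_least(F, K):
    """Decision problem: is the VC dimension of F at least K?"""
    return any(shatters(F, list(c))
               for c in combinations(range(F.shape[1]), K))
```
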
Learning relations without closing the world

This paper describes Link, a heuristically guided learner that combines aspects of three major approaches to ILP — LGG, search heuristic and (declarative) structural bias. In the manner of LGG algorithms, Link generates sets of candidate premise literals by collecting facts about the terms that appear in goal examples. It uses a linked-enough heuristic to select amongst these candidates to form hypothesis clauses (conjunctions of literals), and uses structural criteria to select among possible hypotheses in the manner of declarative bias-based systems. This combination — together with a parametrized hypothesis evaluation function — allows Link to learn in realistic situations where many FOL learners have problems because they are forced to make assumptions about the data: when there are no negative examples, when information is sparse, and when the closed-world assumption cannot or should not be made on examples and/or background.

Edgar Sommer
Properties of Inductive Logic Programming in function-free Horn logic

Inductive Logic Programming (ILP) deals with inductive inference in first order Horn logic. A commonly employed restriction on the hypothesis space in ILP is the restriction to function-free programs. It yields a more tractable hypothesis space and simplifies induction. This paper investigates basic properties of ILP in function-free languages.

Irene Stahl
Representing biases for Inductive Logic Programming

Each of the four main approaches to declarative bias representation in Inductive Logic Programming (ILP), namely representation by parameterized languages, by clause sets, by grammars, and by schemes, fails to represent all language biases in ILP systems. We therefore present MILES-CTL, a unifying representation language for these biases that extends the scheme-based approach.

Birgit Tausend
Biases and their effects in Inductive Logic Programming

The shift from attribute-value based hypothesis languages to Horn clause logic as in Inductive Logic Programming (ILP) results in a very complex hypothesis space. In this paper, we study how the basic constituents of biases reduce the size of the hypothesis space in ILP.

Birgit Tausend
Inductive learning of normal clauses

In this paper, we are interested in the induction of normal clauses. We consider the well-founded semantics, based on a three-valued logic. The classical constraint, that the learned program must cover the positive examples and reject the negative ones, can be too strong; we have therefore defined a weaker criterion: we require that a positive (resp. negative) example is not considered False (resp. True) by the learned program. This study has been applied in a framework similar in many points to the FOIL system.

Christel Vrain, Lionel Martin
Backmatter
Metadata
Title
Machine Learning: ECML-94
Edited by
Francesco Bergadano
Luc De Raedt
Copyright Year
1994
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-48365-6
Print ISBN
978-3-540-57868-0
DOI
https://doi.org/10.1007/3-540-57868-4