About this Book

In the past decades several researchers have developed statistical models for the prediction of corporate bankruptcy, e.g. Altman (1968) and Bilderbeek (1983). A model for predicting corporate bankruptcy aims to describe the relation between bankruptcy and a number of explanatory financial ratios. These ratios can be calculated from the information contained in a company's annual report. The ultimate purpose is to obtain a method for the timely prediction of bankruptcy, a so-called "early warning" system. More recently, this subject has attracted the attention of researchers in the area of machine learning, e.g. Shaw and Gentry (1990), Fletcher and Goss (1993), and Tam and Kiang (1992). This research is usually directed at the comparison of machine learning methods, such as induction of classification trees and neural networks, with the "standard" statistical methods of linear discriminant analysis and logistic regression. In earlier research, Feelders et al. (1994) performed a similar comparative analysis. The methods used were linear discriminant analysis, decision trees and neural networks. We used a data set containing 139 annual reports of Dutch industrial and trading companies. The experiments showed that the estimated prediction error of both the decision tree and the neural network was below the estimated error of the linear discriminant. Thus it seems that we can gain by replacing the "traditionally" used linear discriminant with a more flexible classification method to predict corporate bankruptcy. However, the data set used in these experiments was very small.
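A minimal sketch of this kind of comparison, on synthetic data rather than the Dutch annual reports: it fits scikit-learn's linear discriminant, a decision tree and a small neural network to invented "financial ratio" features and reports a held-out estimate of the prediction error for each (all names and settings below are illustrative assumptions).

```python
# Hypothetical sketch (synthetic data, not the study's annual reports):
# compare a linear discriminant, a decision tree and a small neural network
# and estimate each model's prediction error on a held-out test split.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for annual-report ratios (e.g. liquidity, solvency).
X, y = make_classification(n_samples=1000, n_features=6, n_informative=4,
                           weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "linear discriminant": LinearDiscriminantAnalysis(),
    "decision tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                    random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    error = 1.0 - model.score(X_test, y_test)
    print(f"{name}: estimated prediction error = {error:.3f}")
```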

Table of Contents

Frontmatter

Artificial Intelligence Techniques

Frontmatter

Using Machine Learning, Neural Networks and Statistics to Predict Corporate Bankruptcy: A Comparative Study

Abstract
Recent literature strongly suggests that machine learning approaches to classification outperform “classical” statistical methods. We make a comparison between the performance of linear discriminant analysis, classification trees and neural networks in predicting corporate bankruptcy. Linear discriminant analysis represents the “classical” statistical approach to classification, whereas classification trees and neural networks represent artificial intelligence approaches. A proper statistical design is used to be able to test whether observed differences in predictive performance are statistically significant. The dataset consists of two large collections of annual reports from Belgian companies. The first collection contains the reports of 994 industrial companies and the second collection contains the reports of 576 construction companies. We use stratified 10-fold cross-validation on the training set to choose “good” parameter values for the different learning methods.
P. P. M. Pompe, A. J. Feelders
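As a rough illustration of the parameter-selection step described above, the hypothetical sketch below uses stratified 10-fold cross-validation on a synthetic training set (not the Belgian annual reports) to pick a decision-tree depth; scikit-learn and all parameter values are assumptions made for illustration.

```python
# Hypothetical sketch: choose a "good" parameter value (here, tree depth)
# by stratified 10-fold cross-validation on the training set only.
# The data below are synthetic, not the Belgian company reports.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X_train, y_train = make_classification(n_samples=994, n_features=10,
                                        weights=[0.7, 0.3], random_state=1)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)

scores = {}
for depth in (2, 3, 4, 5, 6, 8):
    clf = DecisionTreeClassifier(max_depth=depth, random_state=1)
    scores[depth] = cross_val_score(clf, X_train, y_train, cv=cv).mean()

best_depth = max(scores, key=scores.get)
print("cross-validated accuracy per depth:", scores)
print("selected max_depth:", best_depth)
```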

Prolog Business Objects in a Three-Tier Architecture

Abstract
Prolog is generally not the language of choice of business application developers, despite the fact that Prolog presents clear advantages in the declarative representation of business logic. One of the primary reasons for this is that the representational advantages of Prolog are perceived as being outweighed by the disadvantages of implementing integrated user interfaces and database access. With the advent of three-tiered client/server architectures, these disadvantages have been removed, and Prolog now presents a highly attractive platform for the implementation of business logic. This paper addresses two issues related to the use of Prolog Business Logic Objects. The first is the integration of Prolog-based Business Logic Objects within the three tiers. The second is structuring the individual Business Logic Object in such a way that it can take advantage of the meta-level characteristics offered by Prolog.
David G. Schwartz

The Effect of Training Data Set Size and the Complexity of the Separation Function on Neural Network Classification Capability: The Two-Group Case

Abstract
Classification among groups is a crucial problem in managerial decision making. Classification techniques are used in: identifying stressed firms, classifying among consumer types, rating of firms’ bonds, etc. Neural networks are recognized as important and emerging methodologies in the area of classification. In this paper, we study the effect of training sample size and the neural network topology on the classification capability of neural networks. We also compare neural network capabilities with those of commonly used statistical methodologies. Experiments were designed and carried out on two-group classification problems to find answers to these questions.
Moshe Leshno, Yishay Spector
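A hedged sketch of the kind of experiment the abstract describes, on synthetic two-group data rather than the paper's designs: vary the training-set size and the network topology and compare test accuracy against linear discriminant analysis (all sizes, topologies and settings below are illustrative assumptions).

```python
# Illustrative experiment sketch: effect of training sample size and network
# topology on two-group classification, compared with LDA. Synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=4000, n_features=5, n_informative=3,
                           random_state=2)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=1000,
                                                  random_state=2)

for n in (50, 200, 1000):                      # training sample sizes
    X_tr, y_tr = X_pool[:n], y_pool[:n]
    lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
    print(f"n={n:5d}  LDA acc={lda.score(X_test, y_test):.3f}", end="")
    for hidden in ((4,), (16,), (16, 8)):      # network topologies
        net = MLPClassifier(hidden_layer_sizes=hidden, max_iter=3000,
                            random_state=2).fit(X_tr, y_tr)
        print(f"  MLP{hidden} acc={net.score(X_test, y_test):.3f}", end="")
    print()
```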

Imaginal Agents

Abstract
There is a need for an integrative approach to the design of agent architectures that considers both issues of individual agency and agent interaction. Image Theory, a well-established framework for analyzing and understanding the activities of decision-makers (DM), is applied to provide conceptual guidelines for establishing inter-agent communication, independent agent deliberation, and the evolution or modification of individual agent behavior. This paper presents Image Theory and examines its implications for the design of individual agents and societies of agents. After presenting the relevant aspects of Image Theory, we suggest a number of agent design principles derived from the theory, as well as some practical implications of these principles.
David G. Schwartz, Dov Te’eni

Financial Applications

Frontmatter

Financial Product Representation and Development Using a Rule-Based System

Abstract
Increased competitiveness in the financial services industry forces financial institutions to develop, in less time, new products that fit the individual needs of their customers. As a consequence, it is necessary to develop suitable information systems that support the process of product development and product management. More complex products may be built from simpler basic products. In this paper a general methodology for the abstract representation of bank products is presented; a rule-based system is currently under development. Common properties of products from different lines of business are identified, which allows the reuse of business processes concerned with the development, sales and administration of bank products. Finally, a concept for risk and yield management that fits into the parameter and process model discussed is presented.
Anja Lange, Juergen Seitz, Eberhard Stickel
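Purely as an illustration of the idea of composing complex bank products from simpler basic ones, the toy sketch below uses invented product names, parameters and a single composition rule; it is not the paper's rule-based system.

```python
# Toy illustration (invented names and rule, not the paper's system):
# basic bank products are parameterized records, and a simple rule
# composes a more complex product from them.
from dataclasses import dataclass, field

@dataclass
class Product:
    name: str
    parameters: dict
    components: list = field(default_factory=list)

def compose(name, components, **extra_params):
    """Rule: a composite product keeps its components and adds
    product-level parameters such as an overall fee or term."""
    return Product(name=name, parameters=dict(extra_params),
                   components=list(components))

savings = Product("savings account", {"interest_rate": 0.02})
loan = Product("consumer loan", {"interest_rate": 0.07, "term_months": 48})

# A composite package built from the two basic products.
package = compose("building-finance package", [savings, loan],
                  arrangement_fee=150.0)
print(package.name, "->", [c.name for c in package.components])
```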

Applications of Artificial Intelligence and Cognitive Science Techniques in Banking

Abstract
In this paper we propose to develop some cognitive science techniques that could be useful in several domains of banking. One of our main topics is decision support systems and knowledge-based decision support systems. We therefore have to consider the knowledge acquisition stage, which is known as the bottleneck in the construction of these systems. We present a model for learning the strategies established by a decision maker for a task of categorical judgment of objects described by several attributes. It has been improved in several domains, and we show an application for understanding how individuals categorize savings plans. This knowledge will be useful to bank consultants, enabling them to advise exactly what their clientele wants. Finally, some applications in the credit field, training systems, and portfolio management are briefly discussed.
Philippe Lenca

Business Applications

Frontmatter

AI-Supported Quality Function Deployment

Abstract
Manual Quality Function Deployment (QFD) tools are limited in their use and their reuse. Computational tools can alleviate these limitations. In addition, Artificial Intelligence (AI) tools can further enhance the functionality of QFD tools. A graph-based information representation is proposed as the basis for integrating various QFD and AI tools. An architecture of a computational QFD (CQFD) tool based on the graph-based modeling environment n-dim is briefly discussed. The ideas are illustrated through the design of a cork remover.
Yoram Reich
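As a rough, hypothetical illustration of a graph-based QFD representation (not the n-dim environment itself), the sketch below links invented customer requirements for a cork remover to engineering characteristics via weighted edges and computes characteristic importance as in a standard house-of-quality calculation.

```python
# Hypothetical sketch: a graph-based QFD representation for a cork remover.
# Edges connect customer requirements to engineering characteristics with
# relationship strengths; characteristic importance is the weighted sum.
requirement_weights = {"easy to grip": 5, "low pulling force": 4,
                       "does not break cork": 3}

# (requirement, characteristic) -> relationship strength (1, 3 or 9)
edges = {
    ("easy to grip", "handle diameter"): 9,
    ("low pulling force", "lever ratio"): 9,
    ("low pulling force", "screw pitch"): 3,
    ("does not break cork", "screw pitch"): 9,
    ("does not break cork", "screw tip sharpness"): 3,
}

importance = {}
for (req, char), strength in edges.items():
    importance[char] = importance.get(char, 0) + requirement_weights[req] * strength

for char, score in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{char}: {score}")
```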

Knowledge Reuse in Mass Customization of Knowledge-Intensive Services

Abstract
Mass customization is bound to become a strategic necessity for companies in most industries, as it enables a firm to operate at mass production levels and cater to the needs of individual customers. When the products being produced are intangible and knowledge-intensive, knowledge-based systems (KBSs) are critical enablers of mass customization. Recognizing that many firms offering such products already have in place various KBSs that support the “production” of standard products, this paper investigates the extent to which eight knowledge reuse approaches can be applied so as to “adapt” existing KBSs for (re)use in the context of mass customization.
Michel Benaroch

Harvest Optimization of Citrus Crop Using Genetic Algorithms

Abstract
The harvest optimization problem (HOP) of the citrus crop is concerned with finding the picking schedule of the orange plots (or “blocks”) that maximizes the total net revenue. In its most simplified form the HOP is an integer-programming (IP) problem where the decision is which block to pick in which week. Since the number of blocks is several hundred and the picking season extends over six months, the resulting IP problem is very large, which makes it hard to solve analytically. Consequently, we pursue a heuristic approach to the HOP involving genetic algorithms (GA). The GA approach is demonstrated by means of a prototype problem that is somewhat simplified, yet captures many of the components and characteristics of the real full-scale HOP. To study the sensitivity and stability of the approach, the GA model was solved on a variety of demand and supply scenarios and was compared with a linear-programming (LP) based solution.
Nissan Levin, Jacob Zahavi
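The toy sketch below illustrates the general GA idea for a much-simplified HOP, with invented block yields, week-dependent prices and a single picking-capacity constraint; it is not the authors' model or data.

```python
# Illustrative GA sketch for a toy harvest-scheduling problem (invented data):
# a chromosome assigns each block a picking week; fitness is net revenue
# minus a penalty for exceeding the weekly picking capacity.
import random
random.seed(0)

N_BLOCKS, N_WEEKS, CAPACITY = 30, 26, 60.0      # tonnes/week (assumed)
yields = [random.uniform(10, 30) for _ in range(N_BLOCKS)]      # tonnes
prices = [[random.uniform(0.8, 1.2) for _ in range(N_WEEKS)]    # price varies
          for _ in range(N_BLOCKS)]                             # by week

def fitness(schedule):
    revenue = sum(yields[b] * prices[b][w] for b, w in enumerate(schedule))
    weekly = [0.0] * N_WEEKS
    for b, w in enumerate(schedule):
        weekly[w] += yields[b]
    penalty = sum(max(0.0, load - CAPACITY) for load in weekly)
    return revenue - 10.0 * penalty             # penalize capacity violations

def crossover(a, b):
    cut = random.randrange(1, N_BLOCKS)
    return a[:cut] + b[cut:]

def mutate(s, rate=0.05):
    return [random.randrange(N_WEEKS) if random.random() < rate else w for w in s]

population = [[random.randrange(N_WEEKS) for _ in range(N_BLOCKS)]
              for _ in range(80)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:20]                   # truncation selection
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(60)]

best = max(population, key=fitness)
print("best net revenue (toy units):", round(fitness(best), 1))
```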

“CORPUS,” An Approach to Capitalizing Company Knowledge

Abstract
“CORPUS” is a practical approach, now under development, that provides guidelines for “capitalizing company knowledge.”
Michel Grundstein

Economic Applications

Frontmatter

Fuzzy Approach in Economic Modelling of Economics of Growth

Abstract
The present paper describes an elaborated expert system fuzzy knowledge base for economic development analysis.
V. Deinichenko, G. Bikesheva, A. Borisov

Computer Based Analysis of an Economy in Transition to Steady State Equilibrium

Abstract
We present a simple disequilibrium model of an economy in transition, which enables analysis of dynamic interrelations between interest rates, inflation, money and output. We also study effects of a change in nominal and real money on output.
Krzysztof Cichocki, Tomasz Szapiro

A Multistrategy Conceptual Analysis of Economic Data

Abstract
The goal of the multistrategy tool, INLEN, is to serve as an intelligent assistant for discovering knowledge in large databases. INLEN has been applied to, and is well suited for, the exploration of databases consisting of economic and demographic facts and statistics. Preliminary experiments on several data sets have focused on discerning and comparing various patterns in the status and development of countries in different regions of the world. These experiments have provided some interesting and often unexpected results, and serve as an example of one way in which such data can be explored. This paper describes in brief the INLEN methodology, presents examples of its learning and discovery operators, and demonstrates its application to economic domains.
Kenneth A. Kaufman, Ryszard S. Michalski

The Credible Modelling of Economic Agents with Limited Rationality

Abstract
A model of bounded rationality suitable for modelling economic agents is proposed. Agents’ beliefs are modeled as descriptions of tentative models. These models are inductively found by a limited incremental search based on the current model and by combinations of past models from a space restricted by the imposition of a priori beliefs. The action of the agent is then decided by a search of the goal space where it intersects the model space (if any).
Bruce Edmonds, Scott Moss
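A loose, illustrative toy of this style of bounded search (not the authors' formalism): the agent's belief is a tentative linear model of a price series, updated each period by a small local search around the current model plus one combination with a past model, within a-priori coefficient bounds, after which an action is chosen from the best model's prediction.

```python
# Illustrative toy only: limited incremental model search for a boundedly
# rational agent. All numbers and the environment are invented.
import random
random.seed(3)

prices = [100.0]
for _ in range(60):                                    # toy price process
    prices.append(prices[-1] * (1 + random.gauss(0.002, 0.01)))

BOUNDS = (-2.0, 2.0)                                   # a-priori beliefs
def clip(x):
    return max(BOUNDS[0], min(BOUNDS[1], x))

def error(model, t):                                   # recent fit only
    a, b = model
    return sum((a + b * prices[i] - prices[i + 1]) ** 2
               for i in range(max(0, t - 10), t))

current, past = (0.0, 1.0), (0.0, 1.0)
for t in range(11, len(prices) - 1):
    candidates = [current]
    for _ in range(5):                                 # limited local search
        a, b = current
        candidates.append((clip(a + random.gauss(0, 0.1)),
                           clip(b + random.gauss(0, 0.01))))
    candidates.append(((current[0] + past[0]) / 2,     # combine with a past model
                       (current[1] + past[1]) / 2))
    past, current = current, min(candidates, key=lambda m: error(m, t))
    a, b = current
    action = "buy" if a + b * prices[t] > prices[t] else "sell"

print("final model:", current, "last action:", action)
```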

Logic, Reasoning and a Programming Language for Simulating Economic and Business Processes with Artificially Intelligent Agents

Abstract
The merits of modelling within a logical, as opposed to a Bayesian, framework are discussed. It is claimed that a logical formalism is more appropriate for modelling qualitative decisions and that this framework makes the unfolding of processes more apparent. This difference in approach leads to adopting a declarative rather than an imperative programming paradigm. This approach also enables the credible modelling of agents with limited information-processing capacities. An agent-oriented and strictly declarative computer modelling language, SDML, is presented, which has been specifically developed to support such a style of modelling.
Bruce Edmonds, Scott Moss, Steve Wallis

Qualitative and Cognitive Research

Frontmatter

Information Processing, Motivation and Decision Making

Abstract
In Botelho and Coelho (1995), the authors presented a model of memory for autonomous artificial agents (SALT: Schema-Associative Long-Term memory). The main feature of the SALT model is that it allows agents to exhibit context-dependent cognition. This is an important feature since it enables us to gain a better understanding of the reasons why someone may produce different decisions about a given problem in different situations. In our research we focus on decisions regarding personnel selection. Namely, we are interested in addressing situations in which a manager with a particular task to be done has to decide which of his or her subordinates will be assigned to its execution. In this paper we present the COMINT model, a COgnition and Motivation INTegration model of decision making. The COMINT model extends the original SALT model to explain the influence of motivation on the information-processing mechanism of the decision maker.
Luis Miguel Botelho, Helder Coelho

A Practical Tool for Explanation of Quantitative Model Behaviour

Abstract
For the class of linear discrete-time structural quantitative economic models, we develop a procedure which can clarify the core properties of such models by way of a qualitative causal explanation. From the equation representation of the model and knowledge about the causality within a single equation, we derive in an automatic way a representation of the economic model using signed directed graphs. Causal explanation of a quantitative simulation with the model can then be generated by traversing the signed directed graph. The procedures have been implemented in a Prolog program.
Ron Berndsen
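As a hypothetical illustration of the idea (the paper's implementation is in Prolog; the tiny model below is invented), the sketch derives a signed directed graph from the signs of a linear model's coefficients and explains an effect by multiplying edge signs along causal paths.

```python
# Hypothetical sketch: build a signed directed graph from coefficient signs
# of a toy linear model and explain an effect by traversing causal paths.
# Toy model:  consumption = 0.8*income ;  income = -0.5*tax + 1.2*output
coefficients = {                       # (cause, effect) -> coefficient
    ("income", "consumption"): 0.8,
    ("tax", "income"): -0.5,
    ("output", "income"): 1.2,
}
graph = {}                             # cause -> [(effect, sign)]
for (cause, effect), coef in coefficients.items():
    graph.setdefault(cause, []).append((effect, 1 if coef > 0 else -1))

def explain(cause, target, sign=1, path=None):
    """Enumerate causal paths from cause to target with their overall sign."""
    path = path or [cause]
    if cause == target:
        word = "increases" if sign > 0 else "decreases"
        print(" -> ".join(path), f": {path[0]} {word} {target}")
        return
    for nxt, edge_sign in graph.get(cause, []):
        if nxt not in path:            # avoid cycles
            explain(nxt, target, sign * edge_sign, path + [nxt])

explain("tax", "consumption")          # tax -> income -> consumption : tax decreases consumption
```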

Practical Application of Artificial Intelligence in Education and Training

Abstract
A critical review of the rationale and potential of artificial intelligence (AI) technology for education and training is given, from both a theoretical and a practical perspective. AI can be regarded as a multi-disciplinary technology that lends itself to collaborative research and participation in the design, development and evaluation of AI systems. The model suggested is one that encourages collaboration between industry, academia and users. It therefore has the potential to assist in the creation of a supportive network for enhancing the skills and opportunities of people in the community. This could be especially helpful in the environment in which the Technikon Northern Transvaal finds itself: a community in need of training within a country in need of trained people. A strong critical challenge, however, is posed to the short-term, market-oriented approach of AI research, with its focus on automation and especially on the production of marketable products that have no direct social benefit or human purpose.
Louis Dannhauser