
2007 | Book

New Trends in Applied Artificial Intelligence

20th International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2007, Kyoto, Japan, June 26-29, 2007. Proceedings

Edited by: Hiroshi G. Okuno, Moonis Ali

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this Book

“The true sign of intelligence is not knowledge but imagination.” Albert Einstein. Applied artificial intelligence researchers have been focusing on developing and employing methods and systems to solve real-life problems in all areas, including engineering, science, industry, automation & robotics, business & finance, cyberspace, and man–machine interactions. The 20th International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems (IEA/AIE-2007), held in Kyoto, Japan, presented such work performed by many scientists worldwide. The previous IEA/AIE conference held in Japan was the Ninth International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems (IEA/AIE-1996) in Fukuoka in 1996. The 11 years between the two conferences saw drastic changes in applied artificial intelligence research. The main causes are the rapid expansion of the Internet and the deluge of electronic and on-line text data. The Program Committee focused on Asian-originating technologies, such as active mining, integration of the Internet and broadcasting, chance discovery, real-world interactions, and fuzzy logic applications. The first four are from Japan and the last one is from Taiwan and China. We received 462 papers from all parts of the world. Each paper was sent to at least three Program Committee members for review. Only 116 papers were selected for presentation and publication in the proceedings. We would like to express our sincere thanks to the Program Committee and all the reviewers for their hard work.

Table of Contents

Frontmatter

Keynotes

Towards New Content Services by Fusion of Web and Broadcasting Contents

We describe the research on fusion of Web and TV broadcasting contents conducted by the Kyoto University 21st COE program and the NICT communication/broadcasting contents fusion project. Despite much talk about the fusion of broadcasting and the Internet, no technology has been established for fusing Web and TV program content. We proposed several ways to acquire information from diverse information sources of different media types, especially from Web and TV broadcasting. A notable difference between Web contents and TV program contents is that the former is a document-based information medium and the latter is a time-based continuous information medium, which leads to a difference in information access methods. Conventional “Web browsing” is an active manner of accessing information, whereas conventional “TV watching” is a passive one. In order to search, integrate and view the information of Web and TV, we explored (1) media conversion between Web and TV contents, (2) watching TV with live chats, (3) dynamic TV-content augmentation by the Web, and (4) searching for TV contents with the Web.

Katsumi Tanaka
Pattern Discovery from Graph-Structured Data - A Data Mining Perspective

Mining from graph-structured data has its roots in concept formation. Recent advances in data mining techniques have broadened its applicability. Graph mining faces subgraph isomorphism, which is known to be NP-complete. Two contrasting approaches of our work on extracting frequent subgraphs are revisited, one using complete search (AGM) and the other using heuristic search (GBI). Both use canonical labelling to deal with subgraph isomorphism. AGM represents a graph by its adjacency matrix and employs an Apriori-like bottom-up search algorithm using the anti-monotonicity of frequency. It can handle both connected and disconnected graphs, and has been extended to handle tree data and sequential data by incorporating a different bias into each join operator. It has also been extended to incorporate taxonomy in labels to extract generalized subgraphs. GBI employs a notion of chunking, which recursively chunks two adjoining nodes, thus generating fairly large subgraphs at an early stage of search. The recent improved version extends it to employ pseudo-chunking, called chunkingless chunking, enabling the extraction of overlapping subgraphs. It can impose two kinds of constraints to accelerate search, one to include one or more of the designated subgraphs and the other to exclude all of the designated subgraphs. It has been extended to extract paths and trees from graph data by placing a restriction on pseudo-chunking operations. GBI can further be used as a feature constructor in decision tree building. The paper explains how both GBI and AGM, with their extended versions, can be applied to solve various data mining problems that are difficult to solve by other methods.

Hiroshi Motoda

Text Processing

A Collocation-Based WSD Model: RFR-SUM

In this paper, the concept of Relative Frequency Ratio (RFR) is presented to evaluate the strength of collocations. Based on RFR, a WSD model, RFR-SUM, is put forward to disambiguate the senses of polysemous Chinese words. We select 9 frequently used polysemous words as examples and achieve an average precision of up to 92.50% in open tests. We compare the model with the Naïve Bayesian Model and the Maximum Entropy Model. The results show that the precision of the RFR-SUM model is 5.95% and 4.48% higher than that of the Naïve Bayesian Model and the Maximum Entropy Model, respectively. We also try pruning the RFR lists. The results reveal that keeping only the 5% most important collocation information maintains almost the same precision, while the speed is about 20 times faster.

Weiguang Qu, Zhifang Sui, Genlin Ji, Shiwen Yu, Junsheng Zhou
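As a rough illustration of the scoring scheme described above, the sketch below builds a relative-frequency-ratio table per sense and sums the ratios of the collocates found in a new context. The exact RFR definition, the function names and the data layout are assumptions made for illustration, not the authors' implementation.

```python
from collections import Counter

def rfr_table(sense_contexts, background_counts, background_total):
    """Assumed RFR form: a collocate's relative frequency in one sense's contexts
    divided by its relative frequency in a background corpus."""
    sense_counts = Counter(w for ctx in sense_contexts for w in ctx)
    sense_total = sum(sense_counts.values())
    table = {}
    for word, count in sense_counts.items():
        p_sense = count / sense_total
        p_back = background_counts.get(word, 0) / background_total
        table[word] = p_sense / p_back if p_back else 0.0
    return table

def rfr_sum_disambiguate(context_words, rfr_tables):
    """RFR-SUM idea: pick the sense whose collocates in the context give the largest RFR sum."""
    scores = {sense: sum(table.get(w, 0.0) for w in context_words)
              for sense, table in rfr_tables.items()}
    return max(scores, key=scores.get)
```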
A Simple Probability Based Term Weighting Scheme for Automated Text Classification

In automated text classification, tfidf is often considered the default term weighting scheme and has been widely reported in the literature. However, tfidf does not directly reflect a term's category membership. Inspired by the analysis of various feature selection methods, we propose a simple probability based term weighting scheme which directly utilizes two critical information ratios, i.e., relevance indicators. These relevance indicators are nicely supported by probability estimates which embody the category membership. Our experimental study based on two data sets, including Reuters-21578, demonstrates that the proposed probability based term weighting scheme significantly outperforms tfidf using the Bayesian classifier and Support Vector Machines (SVM).

Ying Liu, Han Tong Loh
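The abstract does not spell out the two information ratios, so the snippet below only contrasts the tfidf baseline with one plausible probability-based weight built from per-category document counts; the ratio definitions are assumptions for illustration, not the authors' scheme.

```python
import math

def tfidf(tf, df, n_docs):
    """The tf-idf baseline the abstract compares against."""
    return tf * math.log(n_docs / df) if df else 0.0

def probability_based_weight(tp, fp, fn, tn):
    """Hypothetical relevance-indicator weight from a term's document counts:
    tp/fn count category documents with/without the term, fp/tn count
    documents outside the category with/without the term."""
    p_t_c = tp / (tp + fn) if (tp + fn) else 0.0         # P(term | category)
    p_t_not_c = fp / (fp + tn) if (fp + tn) else 0.0     # P(term | other categories)
    ratio_between = p_t_c / p_t_not_c if p_t_not_c else p_t_c          # cross-category ratio
    ratio_within = p_t_c / (1.0 - p_t_c) if p_t_c < 1.0 else float("inf")  # within-category odds
    return ratio_between * ratio_within
```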
Text Classification for Healthcare Information Support

Healthcare information support (HIS) is essential in managing, gathering, and disseminating information for healthcare decision support through the Internet. To support HIS, text classification (TC) is a key kernel. Upon receiving a text of healthcare need (e.g., a symptom description from patients) or healthcare information (e.g., information from medical literature and news), a text classifier may determine its corresponding categories (e.g., diseases), and hence subsequent HIS tasks (e.g., online healthcare consultancy and information recommendation) may be conducted. The key challenge lies in high-quality TC, which aims to classify most texts into suitable categories (i.e., recall is very high) while at the same time avoiding misclassification of most texts (precision is very high). High-quality TC is particularly essential, since healthcare is a domain where an error may incur higher cost and/or serious problems. Unfortunately, high-quality TC was seldom achieved in previous studies. In this paper, we present a case study in which a high-quality classifier is built to support HIS for Chinese disease-related information, including the cause, symptom, curing, side-effect, and prevention of cancer. The results show that, without relying on domain knowledge and complicated processing, cancer information may be classified into suitable categories with a controlled amount of confirmation.

Rey-Long Liu

[Special] Fuzzy System Applications I

Nurse Scheduling Using Fuzzy Multiple Objective Programming

Nurse scheduling is a complex scheduling problem that involves generating a schedule for each nurse consisting of shift duties and days off within a short-term planning period. The problem involves multiple conflicting objectives, such as satisfying demand coverage requirements and maximizing nurses' preferences, subject to a variety of constraints imposed by legal regulations, personnel policies and many other hospital-specific requirements. The inherent nature of the nurse scheduling problem (NSP) bears vagueness of information on the target values of hospital objectives and on personal preferences. The ambiguity of the constraints is a further source of uncertainty that needs to be treated in providing a high-quality schedule. Taking these facts into account, this paper presents the application of Fuzzy Set Theory (FST) within the context of the NSP and proposes a fuzzy goal programming model. To explore the viability of the proposed model, computational experiments are presented on a real-world case problem.

Seyda Topaloglu, Hasan Selim
Fuzzy Adaptive Threshold Determining in the Key Inheritance Based Sensor Networks

Sensor networks are often deployed in unattended environments, leaving these networks vulnerable to false data injection attacks. False data injection attacks not only cause false alarms that waste real-world response efforts, but also drain the finite amount of energy in a battery-powered network. The key inheritance based filtering scheme can detect a false report at the very next node after the compromised node that injected it, before it consumes a significant amount of energy. The choice of a security threshold value in this scheme represents a trade-off between security and overhead. In this paper, we propose a fuzzy adaptive threshold determining method for the key inheritance based filtering scheme. A fuzzy rule-based system is exploited to determine the security threshold value by considering the average energy level of all the nodes along the path from the base station to a cluster, the number of nodes in that cluster, and the number of compromised nodes. We also introduce a modified version of the scheme to reduce the overhead of changing the threshold value. The proposed method can conserve energy while providing sufficient resilience.

Hae Young Lee, Tae Ho Cho
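As a rough illustration of the fuzzy rule-based threshold determination described above, the sketch below maps the three inputs named in the abstract to a security threshold with a tiny weighted-average rule base. The membership functions, rules and output values are illustrative assumptions, not the authors' actual fuzzy system.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def security_threshold(avg_energy, cluster_size, compromised):
    """avg_energy in %, cluster_size in nodes, compromised in nodes (assumed ranges).
    Shoulders are approximated by triangles extended slightly past the range."""
    low_e, high_e = tri(avg_energy, -1, 0, 60), tri(avg_energy, 40, 100, 101)
    few_c, many_c = tri(compromised, -1, 0, 5), tri(compromised, 3, 10, 11)
    small, large = tri(cluster_size, -1, 0, 20), tri(cluster_size, 10, 40, 41)

    # (firing strength, suggested threshold) pairs; strength = min of the antecedents.
    rules = [
        (min(many_c, high_e), 8),        # many compromised nodes, ample energy: be strict
        (min(many_c, low_e), 5),         # many compromised nodes, little energy: compromise
        (min(few_c, high_e, large), 4),
        (min(few_c, low_e, small), 2),   # quiet, energy-poor, small cluster: keep overhead low
    ]
    num = sum(w * t for w, t in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 4.0     # weighted-average defuzzification

print(security_threshold(avg_energy=70, cluster_size=25, compromised=6))
```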
A New Approach for Evaluating Students’ Answerscripts Based on Interval-Valued Fuzzy Sets

In this paper, we present a new approach for evaluating students' answerscripts based on the similarity measure between interval-valued fuzzy sets. The marks awarded to the answers in the students' answerscripts are represented by interval-valued fuzzy sets, where each element in the universe of discourse belonging to an interval-valued fuzzy set is represented by an interval between zero and one. An index of optimism λ determined by the evaluator is used to indicate the degree of optimism of the evaluator, where λ ∈ [0, 1]. The proposed approach using interval-valued fuzzy sets can evaluate students' answerscripts in a more flexible and more intelligent manner.

Hui-Yu Wang, Shyi-Ming Chen
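A minimal sketch of how an index of optimism can collapse an interval-valued mark into a crisp score, assuming a Hurwicz-style combination of the interval endpoints; the paper's similarity-based evaluation of whole answerscripts is richer than this.

```python
def crisp_score(lower, upper, lam):
    """Collapse an interval mark [lower, upper] into a crisp score using the
    evaluator's index of optimism lam in [0, 1] (assumed combination rule)."""
    assert 0.0 <= lam <= 1.0
    return (1.0 - lam) * lower + lam * upper

# Example: an interval mark [0.6, 0.9] under a moderately optimistic evaluator.
print(crisp_score(0.6, 0.9, lam=0.7))   # -> 0.81
```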

Vision I

An Intelligent Multimedia E-Learning System for Pronunciations

The proposed system is an interactive scoring system for learning a language, in which a means such as a web camera is used to capture the learner's lip movements; a score is then given by making a comparison with images stored in the database. The images stored in the database are those previously recorded by a teacher. By means of the scoring system, the learner can identify and rectify pronunciation problems concerning the lips and tongue. The system also records sounds as well as images from the student, and processes this data with multimedia processing techniques. With regard to interactivity, a user-friendly visual interface was constructed to help learners use the system. The learners can choose the words they want to practice by capturing their lip image sequences and speech. The lip-region image sequences are extracted automatically as visual feature parameters. Combining the visual and voice parameters, the proposed system calculates the similarity between a learner's and a teacher's pronunciation. An evaluation score is suggested by the proposed system through this similarity computation. Through this learning process, learners can see the corresponding lip movements of both themselves and the teacher, and correct their pronunciation accordingly. The learners can use the proposed system to practice their pronunciation as many times as they like, without troubling the human teacher, and thus they are able to take more control of improving their pronunciation.

Wen-Chen Huang, Tsai-Lu Chang-Chien, Hsiu-Pi Lin
Phase-Based Feature Matching Under Illumination Variances

The problem of matching feature points in multiple images is difficult to solve when their appearance changes due to illumination variance, caused either by lighting or by object motion. In this paper we tackle this ill-posed problem by using the difference of local phase, which is known to be stable to a certain extent even under illumination variances. In order to realize precise matching, we compute the local phase by convolutions with Gabor filters designed at multiple scales. We then evaluate the stability of local phase against lighting changes. Through experiments using both CG and real images with illumination variance, we show the relevancy of our theoretical investigations.

Masaaki Nishino, Atsuto Maki, Takashi Matsuyama
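A minimal sketch of computing local phase with a Gabor filter, the quantity the matching above is built on; the filter parameters and the single-scale setup are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=21, wavelength=8.0, sigma=4.0, theta=0.0):
    """Complex Gabor kernel; the real and imaginary parts form a quadrature pair."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    gauss = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return gauss * np.exp(1j * 2.0 * np.pi * xr / wavelength)

def local_phase(image, **kwargs):
    """Local phase: the argument of the complex Gabor response. Phase is largely
    insensitive to smooth brightness changes, which is the property relied on here."""
    response = convolve2d(image, gabor_kernel(**kwargs), mode="same", boundary="symm")
    return np.angle(response)

# Matching sketch: compare phases at candidate feature points across two images,
# e.g. via the wrapped difference np.angle(np.exp(1j * (phi1 - phi2))).
```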
Text Extraction for Spam-Mail Image Filtering Using a Text Color Estimation Technique

In this paper, we propose an algorithm for extracting text regions from images in spam mails. The Color Layer-Based Text Extraction (CLTE) algorithm divides the input image into eight planes as color layers. It extracts connected components on the eight planes and then classifies them into text regions or non-text regions. We also propose an algorithm to recover damaged text strokes in Korean text images. There are two types of damaged strokes: (1) middle strokes such as ‘⌉’ or ‘—’ are deleted, and (2) the first and last strokes such as ‘∘’ or ‘□’ are filled with black pixels. An experiment with 200 spam-mail images shows that the proposed approach is more accurate than conventional methods by over 10%.

Ji-Soo Kim, S. H. Kim, H. J. Yang, H. J. Son, W. P. Kim

[Special] Real World Interaction

Intention Through Interaction: Toward Mutual Intention in Real World Interactions

Human-artifact interaction in real-world situations is currently an active area of research due to the importance foreseen for the social capabilities of near-future robots and other intelligent artifacts in integrating them into human society. In this paper a new paradigm for mutual intention in human-artifact interactions, based on the embodied computing paradigm and called Intention through Interaction, is introduced with a theoretical analysis of its relation to the embodiment framework. As examples of the practical use of the framework to replace traditional symbol-based intention understanding systems, the authors' preliminary work on a real-world agent architecture (IECA) and a natural drawing environment (NaturalDraw) is briefly described.

Yasser F. O. Mohammad, Toyoaki Nishida
An Interactive Framework for Document Retrieval and Presentation with Question-Answering Function in Restricted Domain

We propose a speech-based interactive guidance system based on document retrieval and presentation. In conventional audio guidance systems, such as those deployed in museums, the information flow is one-way and the content is fixed. To make the guidance interactive, we prepare two modes, a user-initiative retrieval/QA mode (pull-mode) and a system-initiative recommendation mode (push-mode), and switch between them according to the user’s state. In the user-initiative retrieval/QA mode, the user can ask questions about specific facts in the documents in addition to general queries. In the system-initiative recommendation mode, the system actively provides the information the user would be interested in. We implemented a navigation system containing Kyoto city information. The effectiveness of the proposed techniques was confirmed through a field trial by a number of real novice users.

Teruhisa Misu, Tatsuya Kawahara
Generating Cartoon-Style Summary of Daily Life with Multimedia Mobile Devices

Mobile devices are treasure boxes of personal information containing the user's context, personal schedule, diary, short messages, photos, and videos. The user's usage information on a smartphone can also be recorded on the device and used as a useful source for high-level inference. Furthermore, stored multimedia contents can be regarded as relevant evidence for inferring the user's daily life. Without the user's conscious effort, the device continuously collects information and can be used as an extended memory of its human user. However, the amount of information collected is extremely large, and it is difficult to extract useful information manually from the raw data. In this paper, AniDiary (Anywhere Diary) is proposed to summarize a user's daily life in the form of a cartoon-style diary. Because it is not efficient to show all events in a day, selected landmark events (memorable events) are automatically converted into cartoon images. The identification of landmark events is done by modeling cause-effect relationships among various events with a number of Bayesian networks. Experimental results on synthetic data showed that the proposed system provides an efficient and user-friendly way to summarize a user's daily life.

Sung-Bae Cho, Kyung-Joong Kim, Keum-Sung Hwang

[Special] Fuzzy System Applications II

Economic Turning Point Forecasting Using Neural Network with Weighted Fuzzy Membership Functions

This paper proposes a new forecasting model based on a neural network with weighted fuzzy membership functions (NEWFM), concerning the forecasting of turning points in the business cycle using the composite index. NEWFM is a new neural network model that improves forecasting accuracy by using self-adaptive weighted fuzzy membership functions. The locations and weights of the membership functions are adaptively trained, and the fuzzy membership functions are then combined by the bounded sum. The implementation of NEWFM demonstrates an excellent capability in the field of business cycle analysis.

Soo H. Chai, Joon S. Lim
A New Approach for Automatically Constructing Concept Maps Based on Fuzzy Rules

In recent years, some methods have been presented for dealing with the concept map construction to provide adaptive learning guidance to students. In this paper, we present a new method to automatically construct concept maps based on fuzzy rules and students’ testing records. We apply the fuzzy set theory and fuzzy reasoning techniques to automatically construct concept maps and evaluate the relevance degrees between concepts. The proposed method provides a useful way to automatically construct concept maps in adaptive learning systems.

Shih-Ming Bai, Shyi-Ming Chen
Application of Fuzzy Logic for Adaptive Interference Canceller in CDMA Systems

In this paper, the performance of the proposed fuzzy logic parallel interference cancellation (FLPIC) multiuser detector is evaluated for frequency-selective fading channels in wireless CDMA communication systems. A modified fuzzy logic system (FLS) with an adequate scaling factor (SF) is proposed to infer adequate partial factors (PFs) for the PIC scheme. Simulation results show that the proposed FLS can adapt to the large variations of users’ fading effects. Therefore, the FLPIC outperforms the conventional PIC (CPIC) and constant weight PIC (CWPIC) over two-path and three-path time-varying frequency-selective fading channels especially at heavy system load in DS-CDMA systems.

Yung-Fa Huang, Ping-Ho Ting, Tan-Hsu Tan

Vision II

Robust Multi-scale Full-Band Image Watermarking for Copyright Protection

With the exponential growth of digital materials in this age, the protection of Intellectual Property Rights (IPR) has become an important and urgent topic. In this paper, we propose a novel digital watermarking scheme based on the Singular Value Decomposition (SVD) method and the Distributed Discrete Wavelet Transformation (DDWT) method. Our scheme transforms original image data from the spatial domain into the frequency domain by using the multi-scale DDWT technique, then applies the SVD technique by modifying the singular values of two sub-bands with watermark data, and applies the DDWT watermark embedding process on the other two sub-bands. Thus, watermark information is embedded into the four sub-bands of the last scale. We exploit the advantages of both the DDWT method, which is robust against one kind of geometric attack (the cropping attack), and the SVD method, which is robust against other geometric attacks and non-geometric attacks. Experimental results show that the quality of the stego-image is superior and the embedded watermark has high resistance against a variety of common geometric and non-geometric attacks.

Jung-Chun Liu, Chu-Hsing Lin, Li-Ching Kuo, Jen-Chieh Chang
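A minimal sketch of the SVD embedding step on one wavelet sub-band, assuming a simple additive modification of the singular values; the scaling factor, array shapes and the surrounding DDWT stage are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def embed_watermark_svd(subband, watermark, alpha=0.05):
    """Perturb the singular values of a sub-band with the watermark and reconstruct."""
    U, S, Vt = np.linalg.svd(subband, full_matrices=False)
    S_marked = S + alpha * watermark[: S.size]     # additive modification of singular values
    return U @ np.diag(S_marked) @ Vt

# Illustrative usage on random data; the remaining watermark bits would be embedded
# on the other two sub-bands with the DDWT procedure.
rng = np.random.default_rng(0)
band = rng.random((8, 8))
wm = rng.integers(0, 2, size=8).astype(float)
marked = embed_watermark_svd(band, wm)
```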
Perimeter Intercepted Length and Color t-Value as Features for Nature-Image Retrieval

This paper proposes a content-based image retrieval system based on the color, area, and perimeter intercepted lengths of segmented objects in an image. It characterizes the shape of an object by its area and the intercepted lengths obtained by intercepting the object perimeter with eight lines of different orientations passing through the object center, and the object color by its mean and standard deviation (STD). Recently, we reported that the color-shape based method (CSBM) is better than the conventional color histogram (CCH) and fuzzy color histogram (FCH) in retrieving computer-generated images. However, its performance is only fair in the retrieval of natural images. For CSBM, object color is treated as uniform by reducing the number of colors in an image to only 27 colors. In this paper, we improve the performance by representing the color features of an object with its mean and STD. During the image retrieval stage, a t-value is calculated based on the color features of two images, one in the query and the other in the database. The results show that the proposed method achieves better performance in retrieving natural images compared to CCH, FCH, and CSBM. In the future, the proposed technique will be applied to the retrieval of digitized museum artifacts.

Yung-Fu Chen, Meng-Hsiun Tsai, Chung-Chuan Cheng, Po-Chou Chan, Yuan-Heng Zhong
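The abstract does not define the t-value precisely; the sketch below uses a standard Welch-style two-sample t-statistic over one color channel's mean and STD as one plausible reading of the comparison step.

```python
import math

def color_t_value(mean1, std1, n1, mean2, std2, n2):
    """Two-sample t-value comparing one color channel of two objects from their
    means, standard deviations and pixel counts (assumed Welch form)."""
    se = math.sqrt(std1**2 / n1 + std2**2 / n2)
    return abs(mean1 - mean2) / se if se else 0.0

# Small t-values across channels suggest similar object colors, so the database
# image would be ranked as a closer match to the query.
print(color_t_value(120.0, 15.0, 400, 124.0, 18.0, 380))
```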
Selecting an Appropriate Segmentation Method Automatically Using ANN Classifier

In general, we can easily determine which manufacturing step does not function properly by referring to the flaw type. However, successful segmentation of flaws is the prerequisite for the success of the subsequent flaw classification. It is worth noticing that different segmentation methods are needed for different types of images. In this study, a mechanism capable of choosing a proper segmentation method automatically is proposed. The mechanism employs artificial neural networks to select a suitable segmentation method from three candidates, i.e., Otsu, HV standard deviation, and Gradient Otsu. The selection is based on four features extracted from an image: the standard deviation of the background image, the variance coefficient, and the ratios of width to height of the foreground and background histograms. The results show the success of the proposed mechanism. The high segmentation rate reflects the fact that the four carefully selected features are adequate.

Yih-Chih Chiou, Meng-Ru Tsai

Genetic Algorithm

Efficient Reinforcement Hybrid Evolutionary Learning for Recurrent Wavelet-Based Neuro-fuzzy Systems

This paper proposes a recurrent wavelet-based neuro-fuzzy system (RWNFS) with a reinforcement hybrid evolutionary learning algorithm (R-HELA) for solving various control problems. The proposed R-HELA combines the compact genetic algorithm (CGA) and the modified variable-length genetic algorithm (MVGA), and performs structure/parameter learning for dynamically constructing the RWNFS. That is, both the number of rules and the adjustment of parameters in the RWNFS are determined concurrently by the R-HELA. In the R-HELA, individuals of the same length constitute the same group, and there are multiple groups in a population. The evolution of a population consists of three major operations: group reproduction using the compact genetic algorithm, variable two-part crossover, and variable two-part mutation. An illustrative example is presented to show the performance and applicability of the proposed R-HELA method.

Cheng-Hung Chen, Cheng-Jian Lin, Chi-Yung Lee
A Relation-Based Genetic Algorithm for Partitioning Problems with Applications

This paper proposes a new relation-based genetic algorithm named relational genetic algorithm (RGA) for solving partitioning problems. In our RGA, a relation-oriented representation (or relational encoding) is adopted and corresponding genetic operators are redesigned. The relational encoding is represented by the equivalence relation matrix which has a 1-1 and onto correspondence with the class of all possible partitions. It eliminates the redundancy of previous GA representations and improves the performance of genetic search. The generalized problem-independent operators we redesigned manipulate the genes without requiring specific heuristics in the process of evolution. In addition, our RGA also supports a variable number of subsets. It works without requiring a fixed number of subsets in advance. Experiments for solving some well-known classic partitioning problems by RGA and GGA with and without heuristics are performed. Experimental results show that our RGA is significantly better than GGA in all cases with larger problem sizes.

Jiah-Shing Chen, Yao-Tang Lin, Liang-Yu Chen
Constrained Optimization of a Newsboy Problem with Return Policy Using KKT Conditions and GA

A newsboy problem model in which a vendor has limited resources is developed. It is assumed that the supplier will either sell the items to the vendor outright or offer the items to the vendor with a return policy. In the latter case, the supplier buys back from the vendor, at a certain percentage of the original cost, the items unsold at the end of the selling season. The purpose of this study is to investigate how the vendor should replenish items under a return policy, constrained resources and a changing procurement price. Three numerical examples are provided. In one example with two constrained variables, an optimal solution is derived by using the KKT (Karush-Kuhn-Tucker) conditions. The other two multiple-variable examples, with a minimum service level or a limited budget, are solved using a GA (genetic algorithm).

P. C. Yang, H. M. Wee, S. L. Chung, S. H. Kang
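For orientation, a standard unconstrained single-period newsvendor with a buy-back rate r per unsold unit (selling price p, unit cost c, demand D with cdf F, order quantity Q) gives the base model sketched below; the paper adds the resource constraint, handled via KKT conditions or a GA, on top of such a formulation.

```latex
\[
\begin{aligned}
E[\pi(Q)] &= p\,E[\min(D,Q)] \;+\; r\,E[(Q-D)^{+}] \;-\; cQ, \\
Q^{*}     &= F^{-1}\!\left(\frac{p-c}{p-r}\right).
\end{aligned}
\]
```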

[Special] Fuzzy System Applications III

Fuzzy Interpolative Reasoning Via Cutting and Transformations Techniques

Fuzzy interpolative reasoning techniques can reduce the complexity of a sparse fuzzy rule-based system. In this paper, we present a new fuzzy interpolative reasoning method via cutting and transformations techniques for sparse fuzzy rule-based systems. It produces more reasonable reasoning consequences than the ones presented in [1] and [3]. The proposed method provides a useful way to deal with fuzzy interpolative reasoning in sparse fuzzy rule-based systems.

Yaun-Kai Ko, Shyi-Ming Chen
Using Fuzzy Theory for Packaging Attribute Deployment for New Notebook Computer Introduction

The purpose of this study is to focus on the packaging issues that enterprises encounter at the new product introduction (NPI) stage of notebook computers when practicing global logistics. It acquires the weights of product design attributes by quality function deployment in two phases: package design and product design. This study uses the product's attributes and their weights as the measurement index for TOPSIS to evaluate the risk priority number of FMEA. Design suggestions generated from reverse feedback can increase logistics efficiency and enable designers to design out logistics inefficiency caused by product design at an early stage. With effective cooperation, we can learn the critical attributes in NPI when considering logistics factors and assist designers in resolving design inefficiency. As a result, we can establish a mechanism for preventing inefficiency in advance, decrease engineering changes, and lower costs to speed up NPI and increase enterprises' competitiveness.

Hsin Rau, Chien-Ping Liao, Wei-Jung Shiang, Chiu-Hsiang Lin
Fuzzy System Model to Assist with Real Estate Appraisals

Real estate appraisal requires expert knowledge and should be performed by licensed professionals. Prior to the evaluation, the appraiser must conduct a thorough study of the appraised property, i.e., a land parcel and/or a building. Despite the fact that he sometimes uses the expertise of the surveyor, the builder, the economist or the mortgage lender, his estimations are usually subjective and based on his experience and intuition. The primary goal of the paper is to present the concept of a fuzzy rule-based system to assist with real estate appraisals. The input variables of the system comprise seven attributes of a property, and as the output the system proposes the property's value. For the appraisal area, a so-called representative property is determined, and the deviations of property attribute values from the representative ones are in fact the input to the fuzzy system. The ratio of the representative property price to the value of the property being assessed is produced as the output of the system. The experts built the Mamdani model of the system; however, they were not able to construct the rule base. Therefore an evolutionary algorithm, following the Pittsburgh approach, was employed to generate the rule base. The learning process was conducted using training and testing sets prepared on the basis of 150 sales transactions from one city.

Dariusz Król, Tadeusz Lasota, Wojciech Nalepa, Bogdan Trawiński
Application of Fuzzy System on a Server-Dependent Queue Modeled with Empirical Bayesian Estimation

This study presents a fuzzy system, built by collecting membership functions and rules based on a decision model that uses empirical Bayesian estimation, to construct a server-dependent M/M/2/L queue. A Markovian queue of finite capacity in which the number of servers depends upon the queue length is considered. First, data on the interarrival times and service times are collected by observing a queuing system, and the empirical Bayesian method is adopted to estimate its server utilization. Second, costs are associated with the operation of the second server and the waiting of customers, to establish a cost-minimization model that determines the optimal number of customers in the system to activate the second server (N) and the optimal number of customers in the system to deactivate the second server (Q). The decision model provides the knowledge used to construct the rules of a fuzzy inference system. The MATLAB Fuzzy Inference Toolbox is used to construct a fuzzy system to aid management in determining when to initiate the second server and when to turn it off, according to specific parameters.

Pei-Chun Lin, Jenhung Wang

Robot

Real-Time Auditory and Visual Talker Tracking Through Integrating EM Algorithm and Particle Filter

This paper presents techniques that enable talker tracking for effective human-robot interaction. We propose a new way of integrating an EM algorithm and a particle filter to select an appropriate path for tracking the talker. Our system can easily adapt to new kinds of information for tracking the talker, because it estimates the position of the desired talker through means, variances, and weights calculated from EM training, regardless of the number or kinds of information sources. In addition, to enhance the robot's ability to track a talker in real-world environments, we apply the particle filter to talker tracking after executing the EM algorithm. We also integrate a variety of auditory and visual information regarding sound localization, face localization, and the detection of lip movement. Moreover, we apply a sound classification function that allows our system to distinguish between voice, music, and noise, and we developed a vision module that can locate moving objects.

Hyun-Don Kim, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno
Self-organizing Multiple Models for Imitation: Teaching a Robot to Dance the YMCA

The traditional approach to implementing motor behaviour in a robot requires a programmer to carefully decide the joint velocities at each timestep. By using the principle of learning by imitation, the robot can instead be taught simply by showing it what to do. This paper investigates the self-organization of a connectionist modular architecture for motor learning and control that is used to imitate human dancing. We have observed that the internal representation of a motion behaviour tends to be captured by more than one module. This supports the hypothesis that a modular architecture for motor learning is capable of self-organizing the decomposition of a movement.

Axel Tidemann, Pinar Öztürk
An Efficient Flow-Shop Scheduling Algorithm Based on a Hybrid Particle Swarm Optimization Model

In this paper, a new hybrid particle swarm optimization model named HPSO that combines random-key (RK) encoding scheme, individual enhancement (IE) scheme, and particle swarm optimization (PSO) is presented and used to solve the flow-shop scheduling problem (FSSP). The objective of FSSP is to find an appropriate sequence of jobs in order to minimize makespan. Makespan means the maximum completion time of a sequence of jobs running on the same machines in flow-shops. By the RK encoding scheme, we can exploit the global search ability of PSO thoroughly. By the IE scheme, we can enhance the local search ability of particles. The experimental results show that the solution quality of FSSP based on the proposed HPSO is far better than those based on GA [1] and NPSO [1], respectively.

I-Hong Kuo, Shi-Jinn Horng, Tzong-Wann Kao, Tsung-Lieh Lin, Pingzhi Fan
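A minimal sketch of two ingredients named in the abstract above: random-key decoding of a particle position into a job permutation, and the makespan objective for a permutation flow shop. The data layout and names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def decode_random_keys(keys):
    """Random-key (RK) decoding: sort particle positions to obtain a job permutation."""
    return list(np.argsort(keys))

def makespan(sequence, proc_times):
    """Makespan of a permutation flow shop: proc_times[j][m] is the processing time
    of job j on machine m; all jobs visit the machines in the same order."""
    n_machines = len(proc_times[0])
    completion = [0.0] * n_machines
    for job in sequence:
        for m in range(n_machines):
            start = max(completion[m], completion[m - 1] if m > 0 else 0.0)
            completion[m] = start + proc_times[job][m]
    return completion[-1]

# Example: three jobs on two machines, one candidate particle position.
times = [[3, 2], [1, 4], [2, 2]]
print(makespan(decode_random_keys([0.7, 0.1, 0.4]), times))
```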
Towards the Automated Design of Phased Array Ultrasonic Transducers – Using Particle Swarms to Find “Smart” Start Points

Continuum Probe Designer™ by Acoustic Ideas Inc. is a tool that can help design the “best” phased array ultrasonic transducer for a given inspection task. Given a specific surface geometry for the ultrasonic transducer, one component of Continuum Probe Designer™ can determine the number of elements, and the required size and shape of each element, to meet a list of ultrasonic inspection goals. Using the number of elements as a cost function, an optimization problem to find the best surface geometry for the transducer is created. Previous work has demonstrated that a (1+λ)-evolution strategy (ES) can be a very effective search technique for this problem. The performance of this ES was improved by starting it from “smart” (i.e., better than random) start points. Particle swarm optimization (PSO) can be used to improve the “smart” start points, and the overall PSO-ES hybrid is capable of finding feasible transducer designs from all of the start points in a benchmark test suite. This level of performance is an important step towards the use of Continuum Probe Designer™ as a fully automated tool for the design of phased array ultrasonic transducers.

Stephen Chen, Sarah Razzaqi, Vincent Lupien

Poster

Solution of the Perspective-Three-Point Problem: Calculation from Video Image by Using Inclinometers Attached to the Camera

In this paper, we describe a method for finding the pose of an object from a single image. We assume that we can detect and match in the image three feature points of the object, and that we know their relative geometry on the object. First, we present the exact pose calculation with an existing method and emphasize its limitations. Then we introduce a new method which consists of adding an inclinometer to the camera so that we reduce the number of unknown parameters and are thus able to compute the pose efficiently by using a classical iterative optimization method.

Loic Merckel, Toyoaki Nishida
A Decision Support System for Underground Mining Method Selection

Underground mining method selection (UMMS) is one of the most important decisions to be made by mining engineers. Choosing a suitable underground mining method to carry out extraction from a mineral deposit is very important in terms of the economics, safety and productivity of mining operations. In reality, UMMS is a Multiple Criteria Decision Making (MCDM) problem, and decision makers have difficulties in making the right decision in a multiple-criteria environment. In this paper, a decision support system for underground mining method selection (UMMS-DSS) has been designed and developed in order to take into account all related problem criteria, investigate the effects of different scenarios over all available criteria, and carry out sensitivity analysis when needed. UMMS-DSS uses the Analytic Hierarchy Process (AHP), one of the MCDM methods, to manage these tasks and to produce acceptable solution alternatives.

Serafettin Alpay, Mahmut Yavuz
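A minimal sketch of the core AHP step such a DSS relies on: deriving criterion weights from a reciprocal pairwise-comparison matrix via its principal eigenvector. The example judgments are made up for illustration.

```python
import numpy as np

def ahp_priorities(pairwise):
    """Criterion weights from a reciprocal pairwise-comparison matrix
    (principal right eigenvector, normalized to sum to one)."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    principal = eigvecs[:, np.argmax(eigvals.real)].real
    weights = principal / principal.sum()
    # A consistency check (CI = (lambda_max - n) / (n - 1)) would normally follow here.
    return weights

# Example: three selection criteria compared pairwise (illustrative judgments).
print(ahp_priorities([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]]))
```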
Efficient Modified Bidirectional A* Algorithm for Optimal Route-Finding

The A* algorithm, a kind of informed search, is widely used for finding an optimal car route, because the locations of the starting and ending points are known beforehand. Unidirectional A* guarantees an optimal route but requires considerable search time. On the other hand, bidirectional A*, usually known to be faster than unidirectional A*, does not guarantee that the route found is optimal if the search ends when the forward and backward searches meet in the middle; it may even take longer than unidirectional search to find an optimal route. In this paper, a new modified bidirectional A* algorithm which takes less search time and guarantees an optimal route is proposed. To evaluate the efficiency of the proposed algorithm, several experiments are conducted in a real urban road environment, and the results show that the algorithm is very effective in terms of finding an optimal route and search time.

Taeg-Keun Whangbo
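The optimality issue mentioned above shows up in the stopping rule: meeting in the middle is not enough. A conservative rule that does preserve optimality with admissible heuristics keeps expanding until neither frontier can still contain a path cheaper than the best forward/backward connection found so far. The sketch below shows that test only; it is not the paper's specific modification.

```python
def can_stop(open_fwd, open_bwd, best_connection_cost):
    """open_fwd / open_bwd: open lists of (f, node) kept as min-heaps or sorted lists.
    best_connection_cost: cost of the best complete route found where the searches
    have met (infinity until they meet). Stopping only when both frontiers' minimum
    f-values reach that cost preserves optimality with admissible heuristics."""
    f_fwd = open_fwd[0][0] if open_fwd else float("inf")
    f_bwd = open_bwd[0][0] if open_bwd else float("inf")
    return min(f_fwd, f_bwd) >= best_connection_cost
```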
Toward a Large Scale E-Market: A Greedy and Local Search Based Winner Determination

Combinatorial auctions are one of the most popular market mechanisms and have a huge effect on electronic markets and political strategies. In large-scale e-markets, we need a good approximation algorithm for winner determination that is robust to changes in the distribution and the number of bids in an auction. We previously proposed approximate algorithms for combinatorial auctions with a massively large number of bids (more than 100,000). In this paper, we show the robustness of our winner determination algorithms for combinatorial auctions with a large number of bids. Experimental results demonstrate that our proposed algorithms are robust to changes in the distribution and the number of bids in an auction. Finally, we briefly describe a theoretical limitation of our algorithms that concerns guaranteeing the truthfulness of the auction mechanism.

Naoki Fukuta, Takayuki Ito
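The abstract does not give the greedy criterion, so the sketch below uses the common price / sqrt(bundle size) ranking followed by conflict-free allocation as an assumed baseline; a local-search phase would then try reinserting losing bids to improve revenue.

```python
import math

def greedy_winners(bids):
    """Greedy winner determination: rank bids by price / sqrt(bundle size) and
    allocate bids whose bundles do not overlap already-sold items."""
    ranked = sorted(bids, key=lambda b: b["price"] / math.sqrt(len(b["items"])), reverse=True)
    sold, winners = set(), []
    for bid in ranked:
        if not sold & bid["items"]:
            winners.append(bid)
            sold |= bid["items"]
    return winners

bids = [{"price": 10, "items": {"a", "b"}},
        {"price": 7, "items": {"b"}},
        {"price": 6, "items": {"c"}}]
print([b["price"] for b in greedy_winners(bids)])
```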
Agent Based Dynamic Job Shop Simulation System

Although most real manufacturing systems have dynamic job shop structures, no general analytic method has yet been found for analyzing them, and computer simulation is still an outstanding tool. One of the most difficult problems in a dynamic job shop environment is assigning optimal due dates. Due date assignment is an important task in shop-floor control, affecting both timely delivery and customer satisfaction. The ability to meet due dates, however, depends not only on the reasonableness of the due dates but also on the scheduling or dispatching procedures. In this paper, an agent-based dynamic job shop simulation system is designed and developed to help decision makers who mainly have to solve the problems of selecting correct due date assignment models and dispatching rules, depending on selected performance criteria, in their multi-machine dynamic stochastic job shop environments.

Şerafettin Alpay
A Manufacturing-Environmental Model Using Bayesian Belief Networks for Assembly Design Decision Support

Assembly design decision making provides a solution for a currently violating design by evaluating assembly design alternatives in consideration of the assembly design decision (ADD) criteria and of their causal interactions with manufacturing-environmental factors. Even though existing assembly design support systems have a systematic mechanism for determining decision-criterion weights, they are still limited in capturing the interactions between manufacturing-environmental factors and ADD criteria. Thus, in this paper we introduce Bayesian belief networks (BBN) for the representation of, and reasoning with, manufacturing-environmental knowledge. BBN has a sound mathematical foundation and reasoning capability, an efficient evidence propagation mechanism, and a proven track record in industry-scale applications. However, it is less friendly and flexible when used for knowledge acquisition. In this paper, we propose a methodology for indirect knowledge acquisition, using fuzzy cognitive maps, and for the conversion of the representation into a BBN.

Wooi Ping Cheah, Kyoung-Yun Kim, Hyung-Jeong Yang, Sook-Young Choi, Hyung-Jae Lee
Evaluation of Two Simultaneous Continuous Speech Recognition with ICA BSS and MFT-Based ASR

An adaptation of independent component analysis (ICA) and missing feature theory (MFT)-based ASR for the recognition of two simultaneous continuous speech signals is described. We have previously reported on the utility of the system for isolated word recognition, but the performance of MFT-based ASR is affected by the configuration, such as the acoustic model, so the system needs to be evaluated under more general conditions. It first separates the sound sources using ICA. Then, spectral distortion in the separated sounds is estimated to generate missing feature masks (MFMs). Finally, the separated sounds are recognized by MFT-based ASR. We estimate spectral distortion in the temporal-frequency domain in terms of feature vectors, and we generate MFMs. We tested isolated word and continuous speech recognition with cepstral and spectral features. The resulting system outperformed the baseline robot audition system by 13 and 6 points, respectively, on the spectral features.

Ryu Takeda, Shun’ichi Yamamoto, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno
Knowledge Based Discovery in Systems Biology Using CF-Induction

The cell is an entity composed of several thousand types of interacting proteins. Our goal is to comprehend the biological system using only the relevant information, which means being able to reduce, or at least indicate, the main metabolites that need to be measured. In this paper, it is shown how an Artificial Intelligence description method based on Inductive Logic Programming can be used successfully to describe essential aspects of cellular regulation. The results obtained show that the ILP tool CF-induction discovers the activities of enzymes on the glycolysis metabolic pathway when only partial information about it has been used. The procedure is based on filtering the high-level processes to reduce the search space.

Andrei Doncescu, Katsumi Inoue, Yoshitaka Yamamoto
Environment Recognition System for Biped Walking Robot Using Vision Based Sensor Fusion

This paper addresses a method of environment recognition specialized for biped walking robots. A biped walking robot should have the ability to autonomously recognize its surrounding environment and make right decisions corresponding to its situation. In realizing the vision system for a biped walking robot, two algorithms are suggested: an object detection system for unknown objects, and an obstacle recognition system. By using these techniques, a biped walking robot becomes able to autonomously move and execute various user-assigned tasks in an unknown environment. The experimental results show that the proposed environment recognition system is highly applicable to biped walking robots walking and operating in the real world.

Tae-Koo Kang, Heejun Song, Dongwon Kim, Gwi-Tae Park
Design of a SOA-Oriented E-Diagnostics System for Hydroelectric Generating Sets

In order to resolve existing problems such as low efficiency, high cost and lack of technical resources in current maintenance, it is necessary to realize remote diagnosis for hydroelectric generating sets (HGSs). In this work, based on the Service-Oriented Architecture (SOA) and Web Services technology, a SOA-oriented E-diagnostics system for HGSs (HGS-SES) is proposed, the framework of HGS-SES is constructed, a layout of the system's hardware settings is described, and the key modules of the system and a specific diagnostic procedure are given. HGS-SES enables rapid and convenient information transmission for service functions and diagnosis decision-making, develops a dynamic network diagnostic platform for HGSs, and has broad prospects for further research.

Liangliang Zhan, Yongchuan Zhang, Jianzhong Zhou, Yucheng Peng, Zheng Li

Genetic Algorithm II

A Systematic Layout Planning of Visualizing Devices on a Non-rectangular Plane by Genetic Heuristics

As the new era of RFID (Radio Frequency Identification) has come, it makes the visualizing of a plane possible. The proposed research focuses on how to plan the locations of RFID readers in a non-rectangular plane such that regions with or without forbidden blocks can be fully monitored by a well-developed RFID system at minimum cost. An algorithm consisting of three phases is used to obtain the optimal device layout. In the beginning, we scrutinize a non-rectangular plane and its forbidden blocks so that a general grid scheme can be applied. Secondly, the linear programming (LP) approach is applied to decide the number of RFID readers needed. Finally, a hybrid genetic algorithm (GA) is used to find the appropriate locations for the designated number of RFID readers. The overall cost of deploying RFID readers and the total monitored region of the proposed RFID system are recorded; they are the two key performance indexes used to evaluate the efficiency of the proposed method. Simulation results show that the proposed method has high efficiency in dealing with RFID reader planning problems for visualizing a non-rectangular plane.

Chir-Ho Chang, Jin-Ling Lin
Promising Search Regions of Crossover Operators for Function Optimization

The performance of a genetic algorithm for function optimization, which often appears in real-world applications, depends strongly on its crossover operator. Existing crossover operators are designed for intensive search in certain promising regions. This paper first discusses where the promising search regions are, on the basis of some assumptions about the fitness landscapes of objective functions and about the state of a population; this discussion reveals that existing crossover operators intensively search some of the promising regions but not all of them. Then, this paper designs a new crossover operator for searching all of the promising regions. To utilize the advantageous features of this crossover operator, a new selection model considering characteristic preservation is also introduced. Several experiments have shown that the proposed method works effectively on various test functions.

Hiroshi Someya
Automatic Fingerprints Image Generation Using Evolutionary Algorithm

Constructing a fingerprint database is important for evaluating the performance of automatic fingerprint recognition systems. Because of the difficulty of collecting fingerprint samples, there are only a few benchmark databases available. Moreover, various types of fingerprints are required to measure how robust a system is in various environments. This paper presents a novel method that generates various fingerprint images automatically from only a few training samples by using a genetic algorithm. Fingerprint images generated by the proposed method have characteristics similar to those collected from a corresponding real environment. Experiments with real fingerprints verify the usefulness of the proposed method.

Ung-Keun Cho, Jin-Hyuk Hong, Sung-Bae Cho
A Hybrid Genetic Algorithm for the Cut Order Planning Problem

This paper proposes a new hybrid heuristic for a difficult but frequently occurring problem in the apparel industry: cut order planning (COP). This problem consists of finding the combination of ordered sizes on the material layers that minimizes total material utilization. Current practice in industry solves COP independently from the two-dimensional layout (TDL) problem; i.e., COP estimates the length of the layout required to cut a particular combination of sizes instead of packing the pieces on the fabric and determining the actual length used. Evidently, this results in a build-up of estimation errors and thus increased waste. Herein, COP and TDL are combined into a single problem, CT. The resulting problem is modeled and solved using a hybrid heuristic which combines the advantages of population-based approaches (genetic algorithms) with those of local search (simulated annealing). The experimental results show the validity of the proposed model, and the sizeable savings it yields when solved using the proposed hybrid heuristic.

Ahlem Bouziri, Rym M’hallah

Fuzzy Logic I

Supervised Adaptive Control of Unknown Nonlinear Systems Using Fuzzily Blended Time-Varying Canonical Model

In spite of the prosperous literature on adaptive control, application of this promising control strategy has been restricted by the lack of assurance of closed-loop stability. This paper proposes an adaptive control architecture, augmented by a supervising controller, to enhance the robustness of an adaptive PID control system in the face of exaggerated variations in system parameters, disturbances, or parameter drift in the adaptation law. Importantly, the supervising controller is designed based on an on-line identified model in a fuzzily blended time-varying canonical form. This model largely simplifies the identification process and the design of both the supervising controller and the adaptation law. Numerical studies of the tracking control of an uncertain Duffing–Holmes system demonstrate the effectiveness of the proposed control strategy.

Yau-Zen Chang, Zhi-Ren Tsai
Multi-agent System with Hybrid Intelligence Using Neural Network and Fuzzy Inference Techniques

In this paper, a novel multi-agent control system incorporating hybrid intelligence and its physical testbed are presented. The physical testbed is equipped with a large number of embedded devices interconnected by three types of physical networks. It mimics a ubiquitous intelligent environment and allows real-time data collection and online system evaluation. Human control behaviours for different physical devices are analysed and classified into three categories. Physical devices are grouped based on their relevance and each group is assigned to a particular behaviour category. Each device group is independently modelled by either fuzzy inference or neural network agents according to the behaviour category. Comparative analysis shows that the proposed multi-agent control system with hybrid intelligence achieves significant improvement in control accuracy compared to other offline control systems.

Kevin I-Kai Wang, Waleed H. Abdulla, Zoran Salcic
Analysis of Log Files Applying Mining Techniques and Fuzzy Logic

With the explosive growth of data available on the Internet, a recent area of investigation called Web Mining has arisen. In this paper, we study general aspects of this area, principally the process of Web Usage Mining, where log files are analyzed. These files register the activity of the user when interacting with the Web. In Web Usage Mining, different mining techniques can be applied to discover usage patterns from web data. We also study applications of Fuzzy Logic in this area. In particular, we analyze fuzzy techniques such as fuzzy association rules and fuzzy clustering, featuring their functionality and advantages when examining a data set of logs from a web server. Finally, we give initial indications about the application of Fuzzy Logic to personalization and user profile construction.

Víctor H. Escobar-Jeria, María J. Martín-Bautista, Daniel Sánchez, María-Amparo Vila
Stability Analysis for Nonlinear Systems Subjected to External Force

This paper considers a fuzzy Lyapunov method for the stability analysis of nonlinear systems subjected to external forces. The nonlinear systems under external forces can be represented by a Takagi-Sugeno (T-S) fuzzy model. In order to design a nonlinear fuzzy controller to stabilize this nonlinear system, the parallel distributed compensation (PDC) scheme is used to construct a global fuzzy logic controller. We then propose a robustness design to ensure that the modeling error is bounded, and some stability conditions are derived for the controlled systems. Based on the stability criterion, the nonlinear systems with external forces are guaranteed to be stable. This control problem can be reformulated as a linear matrix inequality (LMI) problem.

Ken Yeh, Cheng-Wu Chen, Shu-Hao Lin, Chen-Yuan Chen, Chung-Hung Tsai, Jine-Lih Shen
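For reference, the standard T-S/PDC machinery this kind of analysis builds on can be written as below (fuzzy plant with membership functions h_i, disturbance φ, PDC state-feedback controller, and a basic common-P quadratic-stability LMI); the paper's conditions additionally handle the modeling-error bound and the external force, so this is only the baseline sketch.

```latex
\[
\begin{aligned}
\dot{x}(t) &= \sum_{i=1}^{r} h_i(z(t))\,\bigl(A_i x(t) + B_i u(t)\bigr) + \phi(t), \qquad
u(t) = -\sum_{j=1}^{r} h_j(z(t))\,F_j x(t), \\
&\exists\, P \succ 0:\quad (A_i - B_i F_j)^{\top} P + P\,(A_i - B_i F_j) \prec 0 \quad \text{for all } i, j .
\end{aligned}
\]
```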

Manufacturing

Integrated Framework for Reverse Logistics

Although reverse logistics has been disregarded for many years, pressures from both environmental awareness and business sustainability have risen. Reverse logistics activities include returning, repairing and recycling products. Traditionally, since the information transparency of the entire supply chain is restricted, businesses find it difficult to predict and prepare for these reverse activities. This study presents an agent-based framework to increase the degree of information transparency. The cooperation between sensor and disposal agents helps to predict reverse activities, avoid returns, speed up repairs and prepare for recycling behaviors.

Heng-Li Yang, Chen-Shu Wang
Screening Paper Formation Variations on Production Line

This paper is concerned with a multi-resolution tool for screening paper formation variations in various frequency regions on a production line. A paper web is illuminated by two red diode lasers, and the reflected light, recorded as two time series of high-resolution measurements, constitutes the input signal to the papermaking process monitoring system. The time series are divided into blocks and each block is analyzed separately. The task is treated as kernel-based novelty detection applied to a multi-resolution time series representation obtained from band-pass filtering of the Fourier power spectrum of the series. The frequency content of each frequency region is characterized by a feature vector, which is transformed using canonical correlation analysis and then categorized into the inlier or outlier class by the novelty detector. A ratio of outlying data points significantly exceeding the predetermined value indicates abnormalities in the paper formation. The tools developed are used for online paper formation monitoring in a paper mill.

Marcus Ejnarsson, Carl Magnus Nilsson, Antanas Verikas
Multi-modal Data Integration Using Graph for Collaborative Assembly Design Information Sharing and Reuse

Collaborative design has been recognized as an alternative environment for product design, in which multidisciplinary participants are naturally involved. Reuse of product design information has long been recognized as one of the core requirements for efficient collaborative product design. This paper addresses the integration of multi-modal data using a graph for assembly design information sharing and reuse in a collaborative environment. In the system, assembly product images obtained from multi-modal devices are utilized to share and reuse design information. The proposed system conducts the segmentation of an assembly product image by using a labeling method and generates an attribute relation graph (ARG) that represents the properties of segmented regions and their relationships. The generated ARG is extended by integrating corresponding part/assembly information. In this manner, the integration of multi-modal data is realized to retrieve assembly design information using a product image.

Hyung-Jae Lee, Kyoung-Yun Kim, Hyung-Jeong Yang, Soo-Hyung Kim, Sook-Young Choi
Enhanced Probabilistic Filtering for Improving the Efficiency of Local Searches

The probabilistic filtering method filters out unpromising candidate solutions by conducting a simple preliminary evaluation before the complete evaluation, in order to improve the efficiency of a local search. In this paper, we improve probabilistic filtering so that it can be applied in general to large-scale optimization problems. Compared to the previous probabilistic filtering method, our enhanced version includes a scaling and truncation function that increases the discriminating power of probabilistic filtering and repairs some defects of the previous bias function in adjusting the level of greediness. Experiments show that our method is more effective in improving the performance of a local search than the previous method. They also show that probabilistic filtering can be effective even when the preliminary evaluation heuristic is somewhat inaccurate, and that the lower the cost of the preliminary evaluation, the greater its effectiveness.

Byoungho Kang, Kwang Ryel Ryu
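
The following generic sketch illustrates the filtering idea: a cheap preliminary evaluation decides, probabilistically, whether a candidate is worth the full evaluation. The paper's scaling, truncation, and bias functions are not reproduced; a simple exponential bias with a greediness parameter k is assumed instead.

```python
# Generic sketch of probabilistic filtering in a local search step: a cheap
# preliminary score decides (probabilistically) whether to pay for the full
# evaluation.  The paper's scaling/truncation and bias functions are not
# reproduced; a simple exponential bias with greediness parameter k is assumed.
import math
import random

def filtered_local_search_step(current, neighbors, cheap_eval, full_eval, k=5.0):
    """Return the best fully-evaluated neighbor among those passing the filter."""
    current_cheap = cheap_eval(current)
    best, best_cost = current, full_eval(current)
    for cand in neighbors(current):
        est_gain = current_cheap - cheap_eval(cand)        # > 0 means promising
        accept_prob = 1.0 / (1.0 + math.exp(-k * est_gain))
        if random.random() < accept_prob:                  # only then pay the full cost
            cost = full_eval(cand)
            if cost < best_cost:
                best, best_cost = cand, cost
    return best, best_cost
```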

Data Mining I

A Weighted Feature C-Means Clustering Algorithm for Case Indexing and Retrieval in Case-Based Reasoning

A successful Case-Based Reasoning (CBR) system depends highly on the design of an accurate and efficient case retrieval mechanism. In this research we propose a Weighted Feature C-means clustering algorithm (WF-C-means) to group all prior cases in the case base into several clusters. In WF-C-means, the weight of each feature is automatically adjusted based on the importance of the feature to clustering quality. After WF-C-means is executed, the dissimilarity definition adopted by the K-Nearest Neighbor (KNN) search method, used to retrieve similar prior cases for a new case, becomes refined and objective because the feature weights adjusted by WF-C-means can be incorporated into it. In addition, based on the clustering result of WF-C-means, this research proposes a cluster-based case indexing scheme and a corresponding case retrieval strategy to help KNN retrieve similar prior cases efficiently. Our experiments show that the proposed methods are useful for real-world CBR systems.

Chuang-Cheng Chiu, Chieh-Yuan Tsai
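
A compact sketch of a feature-weighted c-means loop is shown below. The exact weight-update rule of WF-C-means is not reproduced; weights are instead set inversely to each feature's within-cluster dispersion, a common heuristic, and the learned weights could then be reused in a weighted KNN dissimilarity as the abstract describes.

```python
# Compact sketch of a feature-weighted c-means loop.  The exact weight-update
# rule of WF-C-means is not reproduced; here weights are set inversely to each
# feature's within-cluster dispersion (a common heuristic) and normalized.
import numpy as np

def weighted_c_means(X, c, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    centers = X[rng.choice(n, c, replace=False)]
    weights = np.full(d, 1.0 / d)
    for _ in range(n_iter):
        # assignment step: weighted squared Euclidean distance
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2 * weights).sum(axis=2)
        labels = dists.argmin(axis=1)
        # update cluster centers
        for j in range(c):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
        # update feature weights from per-feature within-cluster dispersion
        disp = np.array([((X[labels == j] - centers[j]) ** 2).sum(axis=0)
                         for j in range(c)]).sum(axis=0)
        inv = 1.0 / (disp + 1e-12)
        weights = inv / inv.sum()
    return labels, centers, weights

# The learned weights can then be plugged into a weighted KNN dissimilarity,
# e.g. dist(x, y) = sum_k w_k * (x_k - y_k)**2, for case retrieval.
```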
Neural Networks for Inflow Forecasting Using Precipitation Information

This work presents forecast models, based on artificial neural networks, for the natural inflow in the Iguaçu River basin that incorporate rainfall information. Two types of rainfall data are available: measurements taken from stations distributed along the basin, and ten-day rainfall forecasts from the ETA model developed by CPTEC (the Brazilian Weather Forecasting Center). The neural network model also employs observed inflows measured by stations along the Iguaçu River, as well as historical data on the natural inflows to be predicted. Initially, we applied preprocessing methods to the various series, filling in missing data and correcting outliers. This was followed by methods for selecting the most relevant variables for the forecast model. The results obtained demonstrate the potential of artificial neural networks for this highly non-linear and very complex problem, providing forecasts with good accuracy that can be used in planning the hydroelectric operation of the basin.

Karla Figueiredo, Carlos R. Hall Barbosa, André V. A. Da Cruz, Marley Vellasco, Marco Aurélio C. Pacheco, Roxana J. Conteras
A Gradational Reduction Approach for Mining Sequential Patterns

Data mining has become increasingly important in recent years and is widely applied to commercial forecasting and decision support. Sequential pattern mining algorithms play an important role in this field. Many sequential pattern mining algorithms have been proposed to improve mining efficiency or memory utilization, and our study likewise aims to improve the efficiency of sequential pattern mining.

We propose a new algorithm, GRS (a Gradational Reduction approach for mining Sequential patterns), which mines sequential patterns efficiently. GRS uses a gradational reduction mechanism to shorten transactions and a GraDec function to avoid generating large numbers of infrequent sequential patterns, and it is particularly suitable for mining transaction databases whose records are very long. GRS generates only sequences that are very likely to be frequent, so it eliminates a large number of infrequent sequences and improves memory utilization.

Jen-Peng Huang, Guo-Cheng Lan, Huang-Cheng Kuo
A Kernel Method for Measuring Structural Similarity Between XML Documents

Measuring structural similarity between XML documents has become a key component of various applications, including XML data mining, schema matching, and web service discovery. This paper presents a novel structural similarity measure between XML documents using kernel methods. Preliminary simulation results show that the proposed measure outperforms conventional ones.

Buhwan Jeong, Daewon Lee, Hyunbo Cho, Boonserm Kulvatunyou
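
Since the abstract does not spell out the kernel, the sketch below shows one simple structural kernel for XML as a stand-in: documents are represented as bags of root-to-node tag paths and compared by a normalized dot product.

```python
# Illustrative structural kernel for XML: represent each document as a bag of
# root-to-node tag paths and take a normalized dot product of the two bags.
# This is a simple stand-in, not the kernel proposed in the paper.
from collections import Counter
import math
import xml.etree.ElementTree as ET

def tag_paths(xml_string):
    root = ET.fromstring(xml_string)
    paths = Counter()
    def walk(node, prefix):
        path = prefix + "/" + node.tag
        paths[path] += 1
        for child in node:
            walk(child, path)
    walk(root, "")
    return paths

def path_kernel(doc_a, doc_b):
    pa, pb = tag_paths(doc_a), tag_paths(doc_b)
    k = sum(pa[p] * pb[p] for p in pa if p in pb)
    norm = math.sqrt(sum(v * v for v in pa.values()) * sum(v * v for v in pb.values()))
    return k / norm if norm else 0.0

print(path_kernel("<a><b/><c><b/></c></a>", "<a><c><b/></c></a>"))
```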

Neural Network I

A Neural Network Based Data Least Squares Algorithm for Channel Equalization

Using the neural network model for oriented principal component analysis (OPCA), we propose a solution to the data least squares (DLS) problem, in which the error is assumed to lie in the data matrix only. In this paper, we apply this neural network model to channel equalization. Simulations show that DLS outperforms ordinary least squares in channel equalization problems.

Jun-Seok Lim
Novelty Detection in Large-Vehicle Turbocharger Operation

We develop novelty detection techniques for the analysis of data from a large-vehicle engine turbocharger in order to illustrate how abnormal events of operational significance may be identified with respect to a model of normality. Results are validated using polynomial function modelling and reduced dimensionality visualisation techniques to show that system operation can be automatically classified into one of three distinct state spaces, each corresponding to a unique set of running conditions.

This classification is used to develop a regression algorithm that is able to predict the dynamical operating parameters of the turbocharger and allow the automatic detection of periods of abnormal operation. Visualisations of system trajectories in high-dimensional space are communicated to the user using parameterised projection techniques, allowing ease of interpretation of changes in system behaviour.

David A. Clifton, Peter R. Bannister, Lionel Tarassenko
Equalization of 16 QAM Signals with Reduced BiLinear Recurrent Neural Network

A novel equalization scheme for 16 QAM signals transmitted over a wireless ATM communication channel, using a Reduced-Complexity Bilinear Recurrent Neural Network (R-CBLRNN), is proposed in this paper. The 16 QAM signals from a wireless ATM channel suffer severe nonlinearity and intersymbol interference due to multiple propagation paths in the channel. The R-CBLRNN equalizer is compared with conventional equalizers, including a Volterra filter equalizer, a decision feedback equalizer (DFE), and a multilayer perceptron neural network (MLPNN) equalizer. The results show that the R-CBLRNN equalizer for 16 QAM signals is very favorable compared with the conventional equalizers in terms of both Mean Square Error (MSE) and Symbol Error Rate (SER).

Dong-Chul Park, Yunsik Lee
Enhanced Neural Filter Design and Its Application to the Active Control of Nonlinear Noise

A novel neural filter and its application to the active control of nonlinear noise are proposed in this paper. The method helps avoid premature saturation of the backpropagation algorithm while guaranteeing system convergence through the proposed self-tuning scheme. A comparison between the conventional filtered-X least-mean-square (FXLMS) algorithm and the proposed method for nonlinear broadband noise in an active noise cancellation (ANC) system is also made. The proposed design method is easy to implement and versatile enough for other applications. Several simulation results show that the proposed method can effectively cancel narrowband and nonlinear broadband noise in a duct.

Cheng-Yuan Chang, I-Ling Chung, Chang-Min Chou, Fuh-Hsin Hwang

Constraint Satisfaction

Case Analysis of Criminal Behaviour

In this paper, it is shown how behavioural properties can be specified for three types of violent criminals. It is also shown how empirical material in the form of informal descriptions of traces of crime-related events can be formalised, and how these formalised traces and behavioural properties can be used in automated analysis, for example to determine which type of criminal could have committed such a crime. Finally, an underlying dynamical model is presented that shows the causal mechanisms behind each of the behaviours and their dependence on the characteristics of the type of criminal and on inputs in terms of stimuli from the environment.

Tibor Bosse, Charlotte Gerritsen, Jan Treur
Diagnosing Dependent Failures in the Hardware and Software of Mobile Autonomous Robots

Previous works have proposed to apply model-based diagnosis (MBD) techniques to detect and locate faults in the control software of mobile autonomous robots at runtime. The localization of faults at the level of software components enables the autonomous repair of the system by restarting failed components. Unfortunately, classical MBD approaches assume that components fail independently. In this paper we show that dependent failures are very common in this application domain and we propose the concept of diagnosis environments (DEs) in order to tackle the arising problems. We provide an algorithm for the computation of DEs and present the results of case studies.

Jörg Weber, Franz Wotawa
PrDLs: A New Kind of Probabilistic Description Logics About Belief

It is generally accepted that knowledge based systems would be smarter if they can deal with uncertainty. Some research has been done to extend Description Logics (DLs) towards the management of uncertainty, most of which concerned statistical information such as “The probability that a randomly chosen bird flies is greater than 0.9”. In this paper, we present a new kind of extended DLs to describe degrees of belief such as “The probability that all plastic objects float is 0.3”. We also introduce the extended tableau algorithm for Pr$\mathcal {A}\mathcal {L}\mathcal {C}$ as an example to compute the probability of the implicit knowledge.

Jia Tao, Zhao Wen, Wang Hanpin, Wang Lifu
Multi-constraint System Scheduling Using Dynamic and Delay Ant Colony System

This study presents and evaluates a modified ant colony optimization (ACO) approach for precedence- and resource-constrained multiprocessor scheduling problems. A modified ant colony system with two designed rules, called the dynamic and delay ant colony system, is proposed to solve these scheduling problems. The dynamic rule modifies the latest starting time of jobs and hence the heuristic function. A delayed solution generation rule is used when exploring the solution space in order to escape local optima. Simulation results demonstrate that the proposed modified ant colony system provides an effective and efficient approach for solving multiprocessor scheduling problems with precedence and resource constraints.

Shih-Tang Lo, Ruey-Maw Chen, Yueh-Min Huang

Data Mining II

Constructing Domain Ontology Using Structural and Semantic Characteristics of Web-Table Head

This study concerns the construction of domain ontology from web tables in a specific domain. An ontology defines the common terms and their meanings (concepts) within a context, so only meaningful tables are of concern here. A meaningful table is composed of a head and a body, which are formatted in rows and columns; the head abstracts the meaning expressed in the body. Thus, in order to obtain a table-information-extraction framework, this study extracts, as prerequisite work, the structural semantics, that is, the domain ontology that frames web-table information, from the head. We suggest a method for automatically extracting domain ontology using the structural and semantic characteristics of the web-table head. The construction of domain ontology proceeds in two steps: (a) extracting a table schema as a pseudo-ontology from each table in the same domain, and (b) constructing the domain ontology by combining the extracted table schemata. The combination of schemata proceeds through splitting and clustering using (a) statistical information and (b) heuristics based on the structural and semantic characteristics of the web-table head.

Sung-won Jung, Mi-young Kang, Hyuk-chul Kwon
Maintenance of Fast Updated Frequent Trees for Record Deletion Based on Prelarge Concepts

The frequent pattern tree (FP-tree) is an efficient data structure for association-rule mining without the generation of candidate itemsets. It, however, needs to process all transactions in a batch manner. In the past, we proposed the Fast Updated FP-tree (FUFP-tree) structure to efficiently handle newly inserted transactions in incremental mining. In this paper, we modify FUFP-tree maintenance based on the concept of pre-large itemsets to efficiently handle the deletion of records. Pre-large itemsets are defined by a lower support threshold and an upper support threshold. The proposed approach can thus achieve a good execution time for tree maintenance, especially when only a small number of records are deleted each time. Experimental results also show that the proposed Pre-FUFP deletion algorithm performs well for incrementally handling deleted records.

Chun-Wei Lin, Tzung-Pei Hong, Wen-Hsiang Lu, Chih-Hung Wu
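
The sketch below illustrates only the two-threshold bookkeeping behind pre-large itemsets when records are deleted: counts are decremented and itemsets are reclassified as large, pre-large, or small. The FUFP-tree restructuring and the exact safety bound that triggers a full database rescan follow the authors' formulas and are deliberately omitted.

```python
# Hedged sketch of the two-threshold (pre-large) bookkeeping under deletion.
# Only the count update and large/pre-large/small classification are shown;
# the FUFP-tree restructuring and the safety bound that triggers a full
# database rescan follow the authors' formulas and are omitted here.
from collections import Counter

S_LOWER, S_UPPER = 0.3, 0.5          # lower and upper support thresholds

def classify(support):
    if support >= S_UPPER:
        return "large"
    if support >= S_LOWER:
        return "pre-large"
    return "small"

def update_on_deletion(item_counts, db_size, deleted_records):
    """Adjust 1-itemset counts after deleting records and reclassify them."""
    for record in deleted_records:
        for item in set(record):
            item_counts[item] -= 1
    new_size = db_size - len(deleted_records)
    status = {item: classify(cnt / new_size) for item, cnt in item_counts.items()}
    return item_counts, new_size, status

counts = Counter({"A": 6, "B": 4, "C": 2})
counts, size, status = update_on_deletion(counts, 10, [["A", "C"], ["B"]])
print(size, status)
```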
Solving a Committee Formation and Scheduling Problem by Frequent Itemset Mining

Selecting faculty members to form a mission committee and simultaneously scheduling the corresponding committee meetings is a tough decision problem frequently encountered in every academic department. In this paper, we present a formal model of the problem. We also show that, with a simple database construction, the problem can be transformed into a constrained itemset mining problem, an important branch of frequent itemset mining, and can therefore be solved exactly by techniques for constrained itemset mining. For high efficiency, we provide a method to convert some of the problem constraints into an anti-monotone constraint, which can easily be embedded into the framework of frequent itemset mining and is considered very effective for search space pruning. Experiments were performed and the results show that our approach offers very high performance.

Chienwen Wu

Neural Network II

Dual Gradient Descent Algorithm on Two-Layered Feed-Forward Artificial Neural Networks

The learning algorithms of multilayered feed-forward networks can be classified into two categories: gradient and non-gradient methods. Gradient descent algorithms such as backpropagation (BP) and its variations are widely used in many application areas because of their convenience. However, the most serious problem associated with BP is the local minima problem. We propose an improved gradient descent algorithm intended to weaken the local minima problem without harming the simplicity of the gradient descent method. This algorithm, called the dual gradient learning algorithm, evaluates and trains the upper connections (hidden-to-output) and the lower connections (input-to-hidden) separately. To do so, target values for the hidden layer units are introduced and used as evaluation criteria for the lower connections. Simulations on some benchmark problems and a real classification task have been performed to demonstrate the validity of the proposed method.

Bumghi Choi, Ju-Hong Lee, Tae-Su Park
A New Hybrid Learning Algorithm for Drifting Environments

An adaptive algorithm for drifting environments is proposed and tested in simulated environments. Two powerful problem-solving technologies, neural networks and genetic algorithms, are combined to produce intelligent agents that can adapt to changing environments. Online learning enables the intelligent agents to capture the dynamics of changing environments efficiently. The algorithm's efficiency is demonstrated using a mine sweeper application. The results demonstrate that online learning within the evolutionary process is the most significant factor for adaptation and is far superior to evolutionary algorithms alone. Evolution and learning work cooperatively to produce the best results in a short time. It is also demonstrated that online learning is self-sufficient and can achieve results without any pre-training stage. When mine sweepers are able to learn online, their performance in the drifting environment is significantly improved. Offline learning is observed to increase the average fitness of the whole population.

Khosrow Kaikhah
Solving Inequality Constraints Job Scheduling Problem by Slack Competitive Neural Scheme

A competitive neural network provides a highly effective means of attaining a sound solution and of reducing network complexity, and a competitive approach is well suited to fully utilized scheduling problems. This investigation employs a slack competitive Hopfield neural network (SCHNN) to resolve non-fully and fully utilized identical-machine scheduling problems with multiple constraints: real-time constraints (execution times and deadlines) and resource constraints. To facilitate solving the scheduling problems, extra slack neurons are added to the neural networks to represent pseudo-jobs. This study presents an energy function corresponding to a neural network containing slack neurons. Simulation results demonstrate that the proposed energy function, integrating a competitive neural network with slack neurons, can solve fully and non-fully utilized real-time scheduling problems.

Ruey-Maw Chen, Shih-Tang Lo, Yueh-Min Huang

Fuzzy Logic II

Intelligent OS Process Scheduling Using Fuzzy Inference with User Models

Process scheduling aims to allocate CPU time among multiple processes so as to provide users with more efficient throughput. Apart from the process class set by the user, conventional operating systems apply the same scheduling policy to every process. Moreover, once the scheduling policy is determined, it cannot be changed without resetting the operating system, which takes much time. In this paper, we propose an intelligent CPU process scheduling algorithm that uses fuzzy inference with user models. It classifies processes into three classes (batch, interactive, and real-time) and models the user's preference for each process class. Finally, it assigns the priority of each process according to the class of the process and the user's preference through fuzzy inference. Experimental results show that the proposed method can adapt to users and apply different scheduling policies for different users.

Sungsoo Lim, Sung-Bae Cho
Cardinality-Based Fuzzy Time Series for Forecasting Enrollments

Forecasting activities are frequent and widespread in our lives. Since Song and Chissom proposed fuzzy time series in 1993, many studies have proposed variant fuzzy time series models to deal with uncertain and vague data. A drawback of these models is that they do not appropriately consider the weights of fuzzy relations. This paper proposes a new method that builds weighted fuzzy rules by computing the cardinality of each fuzzy relation to solve this problem. The proposed method builds the weighted fuzzy rules based on the concept of large itemsets from Apriori. Yearly enrollment data from the University of Alabama are adopted to verify and evaluate the performance of the proposed method. The forecasting accuracy of the proposed method is better than that of other methods.

Jing-Rong Chang, Ya-Ting Lee, Shu-Ying Liao, Ching-Hsue Cheng
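
A minimal sketch of cardinality-weighted fuzzy time series forecasting is given below: values are fuzzified into intervals, each fuzzy relation is weighted by its number of occurrences, and the forecast is the weight-averaged midpoint of the consequent intervals. The data are placeholders and the paper's Apriori-style rule construction is simplified.

```python
# Minimal sketch of cardinality-weighted fuzzy time series forecasting:
# fuzzify values into intervals, weight each fuzzy relation A_i -> A_j by its
# number of occurrences, and forecast as the weighted average of consequent
# interval midpoints.  The values below are placeholders, not the Alabama data.
import numpy as np
from collections import defaultdict

values = [13055, 13563, 13867, 14696, 15460, 15311, 15603, 15861, 16807, 16919]

n_intervals = 5
lo, hi = min(values) - 100, max(values) + 100
edges = np.linspace(lo, hi, n_intervals + 1)
mids = (edges[:-1] + edges[1:]) / 2

def fuzzify(v):
    return int(np.clip(np.searchsorted(edges, v, side="right") - 1, 0, n_intervals - 1))

states = [fuzzify(v) for v in values]

# weighted fuzzy logical relationships: weight = number of occurrences
relations = defaultdict(lambda: defaultdict(int))
for a, b in zip(states[:-1], states[1:]):
    relations[a][b] += 1

def forecast(current_value):
    a = fuzzify(current_value)
    cons = relations.get(a)
    if not cons:                     # unseen state: fall back to its own midpoint
        return mids[a]
    total = sum(cons.values())
    return sum(w * mids[b] for b, w in cons.items()) / total

print(round(forecast(values[-1])))
```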
A New Fuzzy Interpolative Reasoning Method for Sparse Fuzzy Rule-Based Systems

Fuzzy interpolative reasoning is an important research topic for sparse fuzzy rule-based systems. In recent years, several methods have been presented for fuzzy interpolative reasoning. However, the fuzzy sets appearing in the antecedents of fuzzy rules in the existing methods must be normal and non-overlapping, and the reasoning conclusions of the existing methods sometimes become abnormal fuzzy sets. In this paper, in order to overcome these drawbacks, we present a new fuzzy interpolative reasoning method for sparse fuzzy rule-based systems. The proposed method can handle non-normal and overlapping fuzzy sets appearing in the antecedents of fuzzy rules, and thus overcomes the drawbacks of the existing fuzzy interpolative reasoning methods.

Li-Wei Lee, Shyi-Ming Chen

Machine Learning I

A New Multi-class Support Vector Machine with Multi-sphere in the Feature Space

The support vector machine (SVM) is a very promising classification technique developed by Vapnik. However, there are still some shortcomings in the original SVM approach. First, SVM was originally designed for binary classification; how to extend it effectively to multi-class classification is still an ongoing research issue. Second, SVM does not consider the distribution of each class. In this paper, we propose an extension of the SVM method that solves the multi-class problem in one formal step. In contrast to previous multi-class SVMs, our approach considers the distribution of each class. Experimental results show that the proposed method is more suitable for practical use than other multi-class SVMs, especially for unbalanced datasets.

Pei-Yi Hao, Yen-Hsiu Lin
Construction of Prediction Module for Successful Ventilator Weaning

Ventilator weaning is the process of discontinuing mechanical ventilation for patients with respiratory failure. Previous investigations reported that 39%-40% of intensive care unit (ICU) patients need mechanical ventilation to sustain their lives. Among them, 90% can be weaned from the ventilator within several days, while the remaining 5%-15% need longer ventilator support. Modern mechanical ventilators are invaluable tools for stabilizing the condition of patients in respiratory failure. However, ventilator support should be withdrawn promptly when no longer necessary so as to reduce the likelihood of known nosocomial complications and costs. Although successful ventilator weaning of ICU patients has been widely studied, indicators for accurate prediction are still under investigation, and the prediction rate of successful weaning is only 35-60% in previous studies. It is desirable to have objective measurements and predictors of weaning that decrease dependence on the wisdom and skill of an individual physician; indeed, one study showed that clinicians were often wrong when predicting weaning outcome. In this study, 189 patients who had been supported by mechanical ventilation for longer than 21 days and were clinically stable were recruited from our all-purpose ICUs. Twenty-seven variables in total were recorded, of which only the 8 variables that reached a significant level in logistic regression analysis were used for support vector machine (SVM) classification. The results show that the prediction rate of successful weaning reaches 81.5%, which outperforms a recently published predictor (78.6%) that uses a combination of the sample entropy of three variables: inspiratory tidal volume, expiratory tidal volume, and respiration rate.

Jiin-Chyr Hsu, Yung-Fu Chen, Hsuan-Hung Lin, Chi-Hsiang Li, Xiaoyi Jiang
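
The two-stage procedure can be sketched roughly as follows on synthetic placeholder data (not the clinical dataset): variables are screened by a univariate logistic-regression p-value test, one plausible reading of the abstract's selection step, and an SVM is then trained on the retained variables.

```python
# Sketch of the two-stage procedure on synthetic placeholder data (not the
# clinical dataset): each variable is screened by a univariate logistic
# regression p-value, and an SVM is trained on the variables that pass.
import numpy as np
import statsmodels.api as sm
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n, p = 189, 27                                  # sizes mirror the abstract
X = rng.normal(size=(n, p))
y = (X[:, :4].sum(axis=1) + rng.normal(scale=1.0, size=n) > 0).astype(int)

# Stage 1: univariate logistic screening (keep variables with p < 0.05).
selected = []
for j in range(p):
    fit = sm.Logit(y, sm.add_constant(X[:, j])).fit(disp=0)
    if fit.pvalues[1] < 0.05:                   # p-value of the slope coefficient
        selected.append(j)
print("selected variables:", selected)

# Stage 2: SVM classifier on the retained variables only.
X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print("test accuracy:", round(clf.score(X_te, y_te), 3))
```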
Extension of ICF Classifiers to Real World Data Sets

A classification problem asks us to construct a classifier from a given data set, where the classifier is required to capture the hidden oracle of the data space. Recently, we introduced a new class of classifiers, ICF, which is based on iteratively composed features on {0,1,∗}-valued data sets. We proposed an algorithm ALG-ICF* to construct an ICF classifier and showed its high performance. In this paper, we extend ICF so that it can also process real-world data sets consisting of numerical and/or categorical attributes. For this purpose, we incorporate a discretization scheme into ALG-ICF* as its preprocessor, by which an input real-world data set is transformed into a {0,1,∗}-valued one. Based on experimental studies of conventional discretization schemes, we propose a new discretization scheme, integrated construction (IC). Our computational experiments reveal that ALG-ICF* equipped with IC outperforms the decision tree constructor C4.5 in many cases.

Kazuya Haraguchi, Hiroshi Nagamochi

[Special] Chance Discovery and Social Network I

Hierarchical Visualization for Chance Discovery

Chance discovery has achieved much success in discovering events that, though rare, are important to human decision making. Since humans are able to efficiently interact with graphical representations of data, it is useful to use visualizations for chance discovery. KeyGraph enables efficient visualization of data for chance discovery, but does not have provisions for adding domain-specific constraints. This contribution extends the concepts of KeyGraph to a visualization method based on the target sociogram. As the target sociogram is hierarchical in nature, it allows hierarchical constraints to be embedded in visualizations for chance discovery. The details of the hierarchical visualization method are presented and a class of problems is defined for its use. An example from software requirements engineering illustrates the efficacy of our approach.

Brett Bojduj, Clark S. Turner
Episodic Memory for Ubiquitous Multimedia Contents Management System

Recently, mobile devices have come to be regarded as content storage, with functions such as camera, camcorder, and music player. They create massive amounts of new data and download content from desktops or the wireless internet. Because of the massive amount of digital content on mobile devices, users find it difficult to recall or find information in their personal storage. If the storage could be organized in the style of human memory management, it would reduce the user's effort in content management. Based on the evidence that human memory is organized in an episodic style, we propose a KeyGraph-based reorganization method for mobile device storage that provides better accessibility to the data. It helps the user not only find useful information in the storage but also expand his or her memory by adding contexts such as location, SMS, calls, and device status, so that the user can recall memories from the contents and contexts. KeyGraph finds rare but relevant events that can be used as memory landmarks in the episodic memory. Using artificially generated logs from a pre-defined scenario, the proposed method is tested and analyzed to check its feasibility.

Kyung-Joong Kim, Myung-Chul Jung, Sung-Bae Cho
Catalyst Personality for Fostering Communication Among Groups with Opposing Preference

The activity of an organization is stimulated by introducing new people. Understanding such a catalyst personality is an important basis for fostering communication among groups with opposing preferences. In the prior understanding, the groups are independent segments; the cognition that results from seeing the overlaps revealed between the segments differs from this prior understanding, and this gap provides a clue. We demonstrate an experiment using a questionnaire on preferences for art pieces.

Yoshiharu Maeno, Yukio Ohsawa, Takaichi Ito

Education

On Using Learning Automata to Model a Student’s Behavior in a Tutorial-like System

This paper presents a new philosophy for modeling the behavior of a Student in a Tutorial-like system using Learning Automata (LA). The model of the Student in our system is inferred using a higher level of LA, referred to as Meta-LA, which attempt to characterize the learning model of the Students (or Student Simulators) while the latter use the Tutorial-like system. To our knowledge, this is the first published result that attempts to infer the learning model of an LA when it is treated externally as a black box, whose outputs are the only observable quantities. Additionally, our paper presents a new class of Multi-Automata systems, in which the Meta-LA communicate with the Students, also modelled using LA, in a synchronous scheme of interconnecting automata.

Khaled Hashem, B. John Oommen
Test-Sheet Composition Using Immune Algorithm for E-Learning Application

In this paper, a novel approach, the Immune Algorithm (IA), is applied to improve the efficiency of composing near-optimal test sheets from item banks to meet multiple assessment criteria. We compare the results of the immune and Genetic Algorithm (GA) approaches for composing test sheets under multiple assessment criteria. The experimental results show that the IA approach is desirable for composing near-optimal test sheets from large item banks, and that the objective conceptual vector (OCV) and the objective number of test items per sheet (M) can be effectively achieved. Hence it can support the need to evaluate students' learning status precisely. We successfully extend the application of artificial intelligence, in the form of immune algorithms, to educational measurement.

Chin-Ling Lee, Chih-Hui Huang, Cheng-Jian Lin
PDA Plant Search System Based on the Characteristics of Leaves Using Fuzzy Function

Since most people do not know the names of the plants that can be seen everywhere, a plant search system with artificial intelligence on PDA devices is developed. In this study, users input the classified characteristics of leaves obtained by observing them. After calculating the centroid-contour distance (CCD) and the fuzzy function for all characteristics, the search results are displayed and arranged according to their ranks, so there is no need to consult various reference books about plants. With the help of the plant search system on PDA devices, users can broaden their knowledge through mobile learning and the observation of plants.

Shu-Chen Cheng, Jhen-Jie Jhou, Bing-Hong Liou

Machine Learning II

Stochastic Point Location in Non-stationary Environments and Its Applications

This paper reports the first known solution to the Stochastic Point Location (SPL) problem when the Environment is non-stationary. The SPL problem [12,13,14] involves a general learning problem in which the learning mechanism attempts to learn a “parameter”, say λ*, within a closed interval. However, unlike the earlier reported results, we consider the scenario when the learning is to be done in a non-stationary setting. The Environment communicates with an intermediate entity (referred to as the Teacher) about the point itself, advising it where it should go. The mechanism searching for the point, in turn, receives responses from the Teacher, directing it how it should move. Therefore, the point itself, in the overall setting, is moving, delivering possibly incorrect information about its location to the Teacher. This, in turn, means that the “Environment” is itself non-stationary, implying that the advice of the Teacher is both uncertain and changing with time, rendering the problem extremely fascinating. The heart of the strategy we propose involves discretizing the space and performing a controlled random walk on this space. Apart from deriving some analytic results about our solution, we also report simulation results which demonstrate the power of the scheme.

B. John Oommen, Sang-Woon Kim, Mathew Samuel, Ole-Christoffer Granmo
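
A toy simulation of the discretized random-walk strategy is sketched below: the unknown point drifts over time, the Teacher's directional advice is correct only with probability p, and the learner moves one discretized step per response. Parameters are illustrative and none of the paper's analysis is reproduced.

```python
# Toy simulation of the discretized random-walk idea: the unknown point drifts
# (non-stationary environment), the Teacher's "go left/right" advice is correct
# only with probability p, and the learner moves one discretized step at a time.
# Parameters are illustrative only; the paper's analysis is not reproduced.
import random

N = 100                 # resolution of the discretized interval [0, 1]
p = 0.8                 # probability that the Teacher's advice is correct
steps = 5000
drift = 0.0005          # slow drift of the true point lambda*

lam = 0.3               # true (moving) point
est = 0.5               # learner's current estimate
err = 0.0

for t in range(steps):
    lam = min(1.0, max(0.0, lam + drift))        # environment changes over time
    correct = "right" if lam > est else "left"
    wrong = "left" if correct == "right" else "right"
    advice = correct if random.random() < p else wrong
    est += (1.0 / N) if advice == "right" else -(1.0 / N)
    est = min(1.0, max(0.0, est))
    err += abs(lam - est)

print("mean tracking error:", round(err / steps, 4))
```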
Quick Adaptation to Changing Concepts by Sensitive Detection

In mining data streams, one of the most challenging tasks is adapting to concept change, that is, change over time of the underlying concept in the data. In this paper, we propose a novel ensemble framework for mining concept-changing data streams. This algorithm, called QACC (Quick Adaptation to Changing Concepts), realizes quick adaptation to changing concepts using an ensemble of classifiers. For quick adaptation, QACC sensitively detects concept changes in noisy streaming data. Empirical studies show that the QACC algorithm is efficient for various concept changes.

Yoshiaki Yasumura, Naho Kitani, Kuniaki Uehara
ACIK : Association Classifier Based on Itemset Kernel

Considering the interpretability of association classifiers and the high classification accuracy of SVM, in this paper we propose ACIK, an association classifier built with the help of SVM, so that the classifier has an interpretable classification model and excellent classification accuracy. We also present a novel family of Boolean kernels, namely itemset kernels. ACIK, which takes SVM as its learning engine, mines interesting association rules to construct itemset kernels, and then mines the classification weights of these rules from the classification hyperplane constructed by SVM. Experimental results on UCI datasets show that ACIK outperforms state-of-the-art classifiers such as CMAR, CPAR, L3, DeEPs, and linear SVM.

Yang Zhang, Yongge Liu, Xu Jing, Jianfeng Yan

[Special] Chance Discovery and Social Network II

Risk Discovery Based on Recommendation Flow Analysis on Social Networks

Social networks have been working as a medium for cooperative interactions between people. However, as some users take malicious actions, a social network potentially contains risks (e.g., information distortion). In this paper, we propose a robust information diffusion (or propagation) model to detect malicious peers in a social network. In particular, we apply statistical sequence analysis to discover peculiar patterns in recommendation flows. Through two experiments, we evaluated the performance of risk discovery in social networks.

Jason J. Jung, Geun-Sik Jo
Using Conceptual Scenario Diagrams and Integrated Scenario Map to Detect the Financial Trend

In order to visualise the decision-making process, a data association diagram is prepared to show the relationships or scenarios extracted from the data and to provide a way of designing or discovering alternatives. However, it is not easy for managers to design alternatives when the collected data are large and complex. This study therefore provides an approach for extracting concepts from association diagrams to create conceptual scenario diagrams. Afterward, variant diagrams are generated from the conceptual scenario diagrams to easily visualise and explain the variation of financial status within a firm. Finally, an integrated scenario map is produced for managers to understand the financial manipulations of the firm.

Chao-Fu Hong, Tzu-Fu Chiu, Yu-Ting Chiu, Mu-Hua Lin
Chance Discovery in Credit Risk Management
Estimation of Chain Reaction Bankruptcy Structure by Directed KeyGraph

Credit risk management based on portfolio theory has become popular in the Japanese financial industry in recent years. However, the consideration and modeling of chain-reaction bankruptcy effects in credit portfolio analysis leave much room for improvement, mainly because methods for grasping relations among companies with limited data are underdeveloped. In this article, the chance discovery method with directed KeyGraph is applied to estimate industrial relations that include the inter-company relations that transmit chain reactions of bankruptcy. The steps of the data analysis are introduced, and the results of an example analysis using default data from Kyushu, Japan, in 2005 are presented.

Shinichi Goda, Yukio Ohsawa

Speech

The Design of Phoneme Grouping for Coarse Phoneme Recognition

Automatic speech recognition for real-world applications such as robots should deal with speech in noisy environments. This paper presents coarse phoneme recognition, which uses a phoneme group instead of a phoneme as the unit of speech recognition for such real-world applications. In coarse phoneme recognition, the design of the phoneme groups is crucial. We therefore introduce two types of phoneme groups, exclusive and overlapping phoneme groups, and evaluate coarse phoneme recognition with these two phoneme grouping methods under various kinds of noise conditions. The experimental results show that our proposed overlapping phoneme grouping improves the correct phoneme inclusion rate by 20 points on average.

Kazuhiro Nakadai, Ryota Sumiya, Mikio Nakano, Koichi Ichige, Yasuo Hirose, Hiroshi Tsujino
An Improved Voice Activity Detection Algorithm for GSM Adaptive Multi-Rate Speech Codec Based on Wavelet and Support Vector Machine

This paper proposes an improved voice activity detection (VAD) algorithm for controlling discontinuous transmission (DTX) in the GSM adaptive multi-rate (AMR) speech codec. First, based on the wavelet transform, the original IIR filter bank and the open-loop pitch detector are implemented via a wavelet filter bank and a wavelet-based pitch detection algorithm, respectively. The proposed wavelet filter bank divides the input speech signal into 9 frequency bands so that the signal level in each sub-band can be calculated. In addition, the background noise in each sub-band can be estimated using the wavelet de-noising method. The wavelet filter bank is also designed to detect correlated complex signals such as music. A support vector machine (SVM) is then applied to train an optimized non-linear VAD decision rule involving the sub-band power, noise level, pitch period, tone flag, and complex-signal warning flag of the input speech signal. Using the trained SVM, the proposed VAD algorithm produces more accurate detection results. Experimental results on the Aurora speech database show that the proposed algorithm gives VAD performance considerably superior to AMR VAD Option 1 and comparable to AMR VAD Option 2.

Shi-Huang Chen, Yaotsu Chang, T. K. Truong
The PICA Framework for Performance Analysis of Pattern Recognition Systems and Its Application in Broadcast News Segmentation

In this paper, the performance influencing class analysis (PICA) framework is proposed for the performance analysis of pattern recognition systems dealing with data of great variety and diversity. Through the PICA procedure, the data population is divided, by statistical methods, into subsets on which the system achieves different performances. On the basis of this division, performance assessment and analysis are conducted to estimate the system performance on the whole data population. The PICA framework can predict true performance in real applications and facilitates the comparison of different systems without a common test set. The PICA framework is applied to the analysis of a broadcast news segmentation system; the procedure is presented and experimental results are given, which verify the effectiveness of PICA.

Xiangdong Wang, Meiyin Li, Shouxun Lin, Yueliang Qian, Qun Liu

[Special] E-commerce I

The Theory of Maximal Social Welfare Feasible Coalition

This paper proposes a new theory for forming a maximum-value-cooperation coalition, known as the Maximal Social Welfare Feasible Coalition. The theory can provide such a solution because it does not assume that each player requesting to join a coalition knows the information of the other players; instead, the private information of all players requesting to join the coalition is known by an honest coordinator. This allows the coordinator to select a coalition structure with the maximal value of cooperation among successful players such that each receives at least its required minimum value. Not only is this maximal value shown to be equal to or larger than the value of a core coalition, but the value allocation is also shown to be Pareto optimal.

Laor Boongasame, Veera Boonjing, Ho-fung Leung
Recommender Agent Based on Social Network

Conventional collaborative recommendation approaches neglect weak relationships even when they provide important information. This study applies the concepts of chance discovery and small worlds to recommendation systems. The trust (direct or indirect) relationships and product relationships among customers are used to find candidates for collaboration, and the purchasing quantities and feedback of customers are taken into account. Overall similarities are calculated based on the model, brand, and type of the purchased products.

Heng-Li Yang, Hsiao-Fang Yang
A New Pooled Buying Method Based on Risk Management

In this paper, we handle a negotiation method in which a main negotiation consists of multiple sub-negotiations. In item allocation for commerce, there are risks in trading because the market balance is determined by supply and demand. The result of the main negotiation is also determined by the order of the sub-negotiations and the agents' behaviors, since agents' budgets are limited in actual commercial trading. However, it is difficult to decide the order of negotiations, for example between simultaneous decisions and rotations. In this paper, we give a trading model for such cases, in which agents purchase items by pooled buying. In actual pooled buying, items are sold with volume discounts. Concretely, we discuss joint-stock companies and private limited partnerships on the web. In the negotiation phase, an agent proposes pooled buying based on the number of items and their prices, considering the agents' budgets. The degree of risk is calculated, and all agents can see the risk associated with each item. Agents cooperate with the proposing agent based on the degree of risk. We give two scenarios for trading: one avoids free riders who obtain surplus without taking risks, and the other promotes agents' participation in order to increase social surplus. For risk aversion and to promote cooperation, we employ a side-payment policy, that is, cooperative agents' risks are kept to a minimum. Further, we discuss cases where agents must pay negotiation costs and storage charges.

Tokuro Matsuo
The Representation of e-Contracts as Default Theories

It is widely acknowledged that a temporal representation of e-contracts is essential in order to support e-contract execution and performance monitoring. One possibility that has been explored by many researchers is to represent e-contracts in Event Calculus. Although such representations are intuitive and facilitate temporal reasoning about actions/events and their factual and normative effects, they fall short in situations where domain knowledge cannot be assumed to be complete. Moreover, it is not clear how dynamic normative conflict resolution can be achieved, without resorting to unintuitive representations for conflict resolution strategies. In order to maintain the benefits of an underlying Event Calculus representation, and incorporate assumption-based reasoning and dynamic conflict management capability, we propose a representation of e-contracts as Default Theories, which are constructed by translating Event Calculus representations dynamically. Finally, we discuss how the resulting Default Theory representation enables a software agent to address various reasoning problems.

Georgios K. Giannikis, Aspassia Daskalopulu

Heuristic Search I

Competitive Ant Colony Optimisation

The usual assumptions of the ant colony meta-heuristic are that each ant constructs its own complete solution and then operates relatively independently of the rest of the colony (with only loose communication via the pheromone structure). However, a more aggressive approach is to allow some measure of competition amongst the ants. Two ways in which this can be done are to allow ants to take components from other ants, or to limit the number of ants that can make a particular component assignment. Both methods involve a number of competitions so that the probabilistically best assignment of a component to an ant can be made. Both forms of competitive ant colony optimisation outperform a standard implementation on a benchmark set of an assignment-type problem, generalised assignment.

Marcus Randall
Optimization of Dynamic Combinatorial Optimization Problems Through Truth Maintenance

Combinatorial optimization problems are embedded in dynamic environments, spanning many domains. As these problem environments may change repeatedly, agents that attempt to solve problems in such environments must be able to adapt to each change that occurs. We present a technique for a Tabu Search-based meta-heuristic agent that collaborates with a truth maintenance agent to maximize reuse of generated solutions that may become partially inconsistent when a change occurs in the problem space. By allowing the truth maintenance agent to perform partial plan repairs, we hope to mitigate the effect that a change has on the performance of the planning agent. Such a system is discussed in a global logistics scheduling program. The performance of our approach is analyzed with respect to a dynamic constrained vehicle routing problem. Our results show that partial plan repairs increase the stability of solutions in dynamic domains.

Brett Bojduj, Dennis Taylor, Franz Kurfess
A Microcanonical Optimization Algorithm for BDD Minimization Problem

Reduced ordered binary decision diagrams (ROBDDs) are widely used in CAD applications such as logic synthesis and formal verification. The size of the ROBDD for a Boolean function is very sensitive to the ordering of the input variables, and the problem of finding a minimum-size variable ordering is known to be NP-complete. In this paper, we propose a new ROBDD minimization algorithm based on microcanonical optimization (MO). MO iteratively applies an initialization phase and a sampling phase to combinatorial optimization problems. In the proposed MO-based algorithm, the initialization phase is replaced with the existing Sifting algorithm, known to be a very fast local search algorithm for finding a minimum-size ROBDD. We derived the parameter settings of the proposed MO-based algorithm empirically. The algorithm has been tested on well-known benchmark circuits, and the experiments show that, even while producing slightly better solutions, its run time is on average 24% and 48% of that of the genetic and SA algorithms, respectively. The proposed MO-based algorithm is a good candidate for large problems that cannot be attacked by exact algorithms when a near-optimal solution is required.

Sang-Young Cho, Minna Lee, Yoojin Chung
A Lot Size Model for Deteriorating Inventory with Back-Order Cancellation

This paper investigates a production planning problem in which the inventory deteriorates at a constant rate, demand is price-dependent, and back-ordering is allowed. Most previous works assume that back-ordering customers may not cancel their orders; in reality, however, many customers may withdraw or cancel their orders before receiving them. Taking this cancellation phenomenon into account, this paper develops a continuous-time model to simultaneously determine the production decision and the selling price. A simple algorithm is used to obtain the optimal solutions. Numerical examples are also used to illustrate the solution-searching procedure and the characteristics of the optimal decisions.

Peng-Sheng You, Yi-Chih Hsieh

Application System

An Intermodal Transport Network Planning Algorithm Using Dynamic Programming

This paper presents a dynamic programming algorithm to derive optimal intermodal freight routes for the international logistics of container cargo for export and import. The study examines the characteristics of intermodal transport using two or more modes, and presents a Weighted Constrained Shortest Path Problem (WCSPP) model. Pareto optimal solutions that simultaneously satisfy two objective functions are derived by applying the Label Setting algorithm, a dynamic programming algorithm, after setting a feasible area using the objective function values obtained from the model. To improve the algorithm's performance, pruning rules are also presented. The algorithm is applied to real transport paths from Busan to Rotterdam, and the savings in transport cost and time are quantitatively measured by comparing single transport modes with existing intermodal transport paths. Lastly, the multiple Pareto optimal solutions obtained are applied to a mathematical model and an MADM model, and the two evaluation methods are compared as a means of evaluating the solutions.

Jae Hyung Cho, Hyun Soo Kim, Hyung Rim Choi, Nam Kyu Park, Moo Hong Kang
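
A compact label-setting sketch for a bi-objective (cost, time) path search with Pareto-dominance pruning is shown below on a tiny hypothetical network. The paper's full WCSPP formulation, additional pruning rules, and the real Busan-Rotterdam data are not reproduced.

```python
# Compact label-setting sketch for a bi-objective (cost, time) path search with
# Pareto-dominance pruning on a tiny hypothetical network.  The paper's WCSPP
# formulation, extra pruning rules, and real route data are not reproduced.
import heapq

# graph[node] = list of (next_node, cost, time); a toy intermodal network
graph = {
    "origin": [("port", 2, 5), ("rail_hub", 4, 3)],
    "port": [("dest", 6, 10)],
    "rail_hub": [("dest", 5, 4)],
    "dest": [],
}

def pareto_labels(graph, source, target):
    labels = {v: [] for v in graph}        # non-dominated (cost, time) labels per node
    heap = [(0, 0, source, [source])]
    results = []
    while heap:
        cost, time, node, path = heapq.heappop(heap)
        # discard if dominated by an existing label at this node
        if any(c <= cost and t <= time for c, t in labels[node]):
            continue
        labels[node] = [(c, t) for c, t in labels[node] if not (cost <= c and time <= t)]
        labels[node].append((cost, time))
        if node == target:
            results.append((cost, time, path))
            continue
        for nxt, c, t in graph[node]:
            heapq.heappush(heap, (cost + c, time + t, nxt, path + [nxt]))
    return results

for cost, time, path in pareto_labels(graph, "origin", "dest"):
    print(f"cost={cost}, time={time}, path={' -> '.join(path)}")
```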
Immune Inspired Optimizer of Combustion Process in Power Boiler

The article presents an optimization method for the combustion process in a power boiler. The solution is based on artificial immune system theory; a layered optimization system is used to minimize CO and NOx emissions. The immune-inspired optimizer SILO has been implemented in each of three units of the Ostroleka Power Plant (Poland). The results from this implementation are presented, and they confirm that the presented solution is effective and usable in practice.

Konrad Świrski, Konrad Wojdan
Dynamic Search Spaces for Coordinated Autonomous Marine Search and Tracking

This paper presents a technique for dynamically determining search spaces in order to enable sensor exploration during autonomous search and tracking (SAT) missions. In particular, marine search and rescue scenarios are considered, highlighting the need for exploration during SAT. A comprehensive method which is independent of search space representation is introduced, based on exploration frontiers and reachable set analysis. The advantage of the technique is that recursive Bayesian estimation can be performed indefinitely, without loss of information. Numerical results involving multiple search vehicles and multiple targets demonstrate the efficacy of the approach for coordinated SAT. These examples also highlight the added benefit for human mission planners resulting from the technique’s simplification of the search space allocation task.

Benjamin Lavis, Tomonari Furukawa
Composite Endoscope Images from Massive Inner Intestine Photos

This paper presents an image reconstruction method for a capsule endoscope. The proposed method constructs a 3-D model of the intestine using the massive number of images obtained from the capsule endoscope: it merges all images and yields a complete 3-D model of the intestine. This 3-D model is then reformed into a 2-D plane image showing the inner surface of the entire intestine. The proposed image composition has been evaluated using an OpenGL 3-D simulator. The composite image provides an easy-to-understand view for examining the intestine and, in addition, enables fast track-and-check diagnosis using the 3-D model implementation.

Eunjung Kim, Kwan-Hee Yoo, Je-Hoon Lee, Yong-Dae Kim, Younggap You

[Special] E-commerce II

Using Trust in Collaborative Filtering Recommendation

The collaborative filtering (CF) technique has been widely used to recommend items of interest to users based on social relationships. The notion of trust is emerging as an important facet of relationships in social networks. In this paper, we present an improvement to standard CF techniques by incorporating trust into the CF recommendation process. We derive the trust score directly from the user rating data and exploit trust propagation in the trust web. The overall performance of our trust-based recommender system is presented and compares favorably to other approaches.

Chein-Shung Hwang, Yu-Pin Chen
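
The sketch below is illustrative only: pairwise trust is derived from rating agreement, propagated one hop through the trust web by multiplication, and used as the weight in a trust-weighted rating prediction. The paper's exact trust derivation and propagation may differ.

```python
# Illustrative trust-based CF sketch: derive pairwise trust from rating
# agreement, propagate it one hop by multiplication, and predict ratings as a
# trust-weighted average.  The paper's exact derivation/propagation may differ.
ratings = {                                   # user -> {item: rating}, toy data
    "u1": {"i1": 5, "i2": 3, "i3": 4},
    "u2": {"i1": 4, "i2": 3},
    "u3": {"i2": 5, "i3": 5, "i4": 4},
}

def direct_trust(a, b):
    """Trust of a in b = share of co-rated items whose ratings differ by <= 1."""
    common = set(ratings[a]) & set(ratings[b])
    if not common:
        return 0.0
    agree = sum(1 for i in common if abs(ratings[a][i] - ratings[b][i]) <= 1)
    return agree / len(common)

def trust(a, b):
    """Direct trust, or the best one-hop propagated trust t(a,m) * t(m,b)."""
    d = direct_trust(a, b)
    hops = [direct_trust(a, m) * direct_trust(m, b)
            for m in ratings if m not in (a, b)]
    return max([d] + hops)

def predict(user, item):
    neighbors = [(trust(user, v), ratings[v][item])
                 for v in ratings if v != user and item in ratings[v]]
    total = sum(t for t, _ in neighbors)
    return sum(t * r for t, r in neighbors) / total if total else None

print(predict("u2", "i3"))   # predicted rating for an unseen item
```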
AdaptRank: A Hybrid Method for Improving Recommendation Recall

A hybrid recommendation method is presented in this paper. Its main goal is to improve recommendation recall while maintaining high recommendation precision and adaptive ability. A formal model is used to define the method and to analyze how measures known from traditional Information Retrieval may be adapted to recommendation. The presented theorems show that the method is able to adapt to changing user needs and achieves maximal effectiveness if the component methods work properly.

Maciej Kiewra, Ngoc Thanh Nguyen
Combinatorial Auction with Minimal Resource Requirements

Although combinatorial auction has been studied extensively, it is difficult to apply the existing results to a problem with minimal resource requirements. In this paper, we consider a combinatorial auction problem in which an auctioneer wants to acquire resources from a set of bidders to process the tasks on hand. Each task requires a minimal set of resources for executing the operations. Each bidder owns a set of resources to bid for the tasks. The problem is to determine the resource assignment to minimize the total cost to perform the tasks. The main results include: (1) a problem formulation for combinatorial auction with minimal resource requirements; (2) a solution methodology based on Lagrangian relaxation; (3) an economic interpretation and a proposed structure for implementing our solution algorithms.

Fu-Shiung Hsieh

Agent-Based System

Effectiveness of Autonomous Network Monitoring Based on Intelligent-Agent-Mediated Status Information

The growing complexity of communication networks and their associated information overhead have made network management considerably difficult. This paper presents a novel network management scheme based on the concept of Active Information Resources (AIRs). Many types of information are distributed in a complex network, and they change dynamically. Under the AIR scheme, each piece of information in a network is activated as an intelligent agent: an I-AIR. An I-AIR has knowledge and functionality related to its information. Through their cooperation, the I-AIRs autonomously detect run-time operational obstacles occurring in the network system and identify the causes of failures for the network administrator, thereby supporting several network management tasks. The proposed prototype system (AIR-NMS) was implemented, and experimental results indicate that it markedly reduces the network administrator's workload compared to conventional network management methods.

Susumu Konno, Sameer Abar, Yukio Iwaya, Tetsuo Kinoshita
Design and Implementation of Interactive Design Environment of Agent System

Agent-based systems have been designed and developed using the latest agent technologies. However, the design and debugging of these systems involve difficult problems due to the situational and nondeterministic nature of agents, and effective design-support technologies have not yet been proposed. In order to make the design process of agent systems efficient, we propose an interactive development method based on an agent-repository-based multiagent framework, which focuses on an essential feature of agent system design: the reuse of existing agents stored in the agent repository. In this paper, we propose an Interactive Design Environment of Agent system, called IDEA, and demonstrate the effectiveness of the proposed method.

Takahiro Uchiya, Takahide Maemura, Xiaolu Li, Tetsuo Kinoshita
An Agent-Based Approach to Knapsack Optimization Problems

This paper proposes a new artificial intelligence framework that solves knapsack-related problems, a class of complex combinatorial optimization problems. The framework, which is pseudo-parallel and stochastic, uses an agent-based system to approximately solve the optimization problem. The system consists of active agents interacting in real time. They mimic the behavior of the parameters of the optimization problem while being individually driven by their own parameters, decision processes, and fitness assessments. The application of the framework to the two-dimensional guillotine bin packing problem demonstrates its effectiveness in terms of both solution quality and run time.

Sergey Polyakovsky, Rym M’Hallah

Heuristic Search II

Constraint-Based Approach for Steelmaking–Continuous Casting Rescheduling

The steelmaking-continuous casting rescheduling problem is NP-hard. In this paper, we present a constraint-based approach for the steelmaking-continuous casting rescheduling problem in an integrated steel production environment, treating it as a dynamic constraint satisfaction problem. To maintain production stability and material flow continuity after rescheduling, the perturbation measure of the model is defined as the sum of the absolute deviations between the new and original start times of operations in the continuous casting stage, and variable selection and value selection heuristics are designed based on resource slackness to minimize the extent of change in the rescheduling solution. The resource conflicts in the original schedule are incrementally repaired by these heuristics. The validity of the proposed model and approach is demonstrated by computational examples.

Tieke Li, Dongfen Guo
Study on Loop Problem in Opening Database for Chinese Chess Programs

A Chinese chess program systematically constructs a large tree-based opening database by collecting knowledge from chess masters, books, and game records of the opening phase. However, games containing loops are not managed properly in such a database: perpetual situations are not recorded correctly, so the program may play for a draw from an advantageous position or lose from a drawn position. This study describes a solution to the loop problem in the opening database.
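
The paper's own fix is not detailed in the abstract; the sketch below merely illustrates the underlying issue, namely that a repeated position turns the opening "tree" into a graph, and shows one common guard based on hashing positions along the inserted line. All names are illustrative.

```python
# Illustrative guard against loops when inserting an opening line into a book
# keyed by position (e.g. by a Zobrist hash). Not the paper's actual solution.

def insert_line(book, start_board, moves, position_key, apply_move):
    """book: dict mapping position keys to candidate moves (a graph, not a tree).
    position_key: board -> hashable key.  apply_move: (board, move) -> new board."""
    board = start_board
    seen = {position_key(board)}
    for move in moves:
        book.setdefault(position_key(board), []).append(move)
        board = apply_move(board, move)
        key = position_key(board)
        if key in seen:
            # Loop detected: this line repeats an earlier position, so stop
            # expanding here and let the engine apply the repetition (draw) rule.
            break
        seen.add(key)
    return book
```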

Shi-Jim Yen, Tai-Ning Yang, Jr-Chang Chen, Shun-Chin Hsu
Job Shop Scheduling Optimization Using Multi-modal Immune Algorithm

A multi-modal immune algorithm that emulates the features of a biological immune system is used to find optimal solutions to the job-shop scheduling problem. Inter-relationships within the algorithm resemble antibody molecular structure, antibody–antigen specificity, clonal proliferation, the germinal center, and the memory characteristics of adaptive immune responses. In addition, gene-fragment recombination and several antibody diversification schemes are incorporated into the algorithm to improve the balance between exploitation and exploration. Moreover, a niching scheme is employed to discover multi-modal solutions. Numerous well-studied benchmark examples are used to evaluate the effectiveness of the proposed approach.
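
For readers unfamiliar with immune algorithms, the following generic clonal-selection step illustrates the kind of operators mentioned above (affinity-proportional cloning, hypermutation); the job-shop encoding and the paper's specific operators are not reproduced here.

```python
import random

# Generic clonal-selection step: better antibodies are cloned more, worse ones
# are mutated more, and the best of old and new survive. Purely illustrative.

def clonal_selection_step(population, affinity, mutate, clone_factor=3, rng=None):
    rng = rng or random.Random(0)
    ranked = sorted(population, key=affinity, reverse=True)
    next_pop = []
    for rank, antibody in enumerate(ranked, start=1):
        n_clones = max(1, clone_factor // rank)   # affinity-proportional cloning
        for _ in range(n_clones):
            rate = rank / len(ranked)             # worse antibodies mutate more
            next_pop.append(mutate(antibody, rate, rng))
    # elitism: keep the best individuals, trimmed to the original population size
    return sorted(population + next_pop, key=affinity, reverse=True)[:len(population)]
```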

Guan-Chun Luh, Chung-Huei Chueh
Simulated Annealing Algorithm for Solving Network Expanded Problem in Wireless ATM Network

In this paper, the network expanded problem (NEP), which optimally assigns newly added and split cells in a PCS (Personal Communication Service) network to switches in an ATM (Asynchronous Transfer Mode) network, is studied. In NEP, the locations of cells (or base stations, BSs) in the PCS network are fixed and known, but new switches must be installed in the ATM network, and the topology of the backbone network may change. Given some potential sites for new switches, the problem is to determine how many switches should be added to the backbone network, the locations of the new switches, the topology of the new backbone network, and the assignments of the newly added and split cells in the PCS network to switches of the new ATM backbone network in an optimal manner. The goal is to perform the expansion so as to minimize the total communication cost under budget and capacity constraints. The NEP is modelled as a complex integer programming problem. Since finding an optimal solution to this problem is impractical, a simulated annealing (SA) algorithm is proposed to solve it. In the proposed SA, several heuristics are encoded into the perturbation operator to generate good solutions. Experimental results indicate that the proposed simulated annealing algorithm achieves better performance than a heuristic algorithm.
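
The SA skeleton underlying such an approach is standard; the sketch below shows it with the NEP-specific cost function and heuristic perturbation left as placeholders, since the abstract does not specify them.

```python
import math
import random

# Standard simulated-annealing skeleton; cost() and perturb() stand in for the
# NEP-specific objective and the heuristic-guided moves mentioned in the abstract.

def simulated_annealing(initial, cost, perturb, t0=1000.0, cooling=0.95,
                        steps_per_t=100, t_min=1e-3, seed=0):
    rng = random.Random(seed)
    current, best = initial, initial
    t = t0
    while t > t_min:
        for _ in range(steps_per_t):
            candidate = perturb(current, rng)        # heuristic-guided move
            delta = cost(candidate) - cost(current)
            if delta < 0 or rng.random() < math.exp(-delta / t):
                current = candidate
                if cost(current) < cost(best):
                    best = current
        t *= cooling                                 # geometric cooling schedule
    return best
```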

Der-Rong Din

Other Applications

An Intrusion Detection Based on Support Vector Machines with a Voting Weight Schema

Though IDSs (Intrusion Detection Systems) have been used for many years, the large number of returned alert messages leads to management inefficiencies. In this paper, we propose a novel method based on SVMs (Support Vector Machines) with a voting weight schema to detect intrusions. First, TF (Term Frequency), TF-IDF (Term Frequency–Inverse Document Frequency), and entropy features are extracted from processes. Next, these three features are fed to the SVM model for training and then for testing. We then use a general voting schema and a voting weight schema to evaluate the attack detection rate, false positive rate, and accuracy. Preliminary results show that the SVM with a voting weight schema combines a low false positive rate with high accuracy.
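
As a rough sketch of the pipeline described above, the snippet below computes TF, TF-IDF, and entropy features from a token sequence and combines three per-feature classifier outputs with a weighted vote; the paper's actual voting weight schema is not given in the abstract, so the combination rule here is illustrative only.

```python
import math
from collections import Counter

# Per-process features over a sequence of tokens (e.g. system-call names),
# plus an illustrative weighted vote over three per-feature classifiers.

def tf(tokens):
    counts = Counter(tokens)
    total = len(tokens)
    return {t: c / total for t, c in counts.items()}

def tf_idf(tokens, doc_freq, n_docs):
    return {t: f * math.log(n_docs / (1 + doc_freq.get(t, 0)))
            for t, f in tf(tokens).items()}

def entropy(tokens):
    return -sum(p * math.log2(p) for p in tf(tokens).values())

def weighted_vote(predictions, weights):
    """predictions: {feature_name: +1 (attack) or -1 (normal)}."""
    score = sum(weights[name] * label for name, label in predictions.items())
    return "attack" if score > 0 else "normal"
```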

Rung-Ching Chen, Su-Ping Chen
An Ontology-Supported and Fully-Automatic Annotation Technology for Semantic Portals

We employ ontology and linguistic techniques to develop a fully automatic annotation technique which, when coupled with an automatic ontology construction method, can play a key role in the development of semantic portals. Based on this technique, we also demonstrate a semantic-portal prototype that defines how a semantic portal interacts with the user by providing five types of interaction patterns: keyword search, synonym search, POS (part-of-speech)-constrained keyword search, natural language query, and semantic index search. Our preliminary demonstrations show that it can indeed retrieve better semantically directed information to meet user requests.
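
Two of the five interaction patterns (keyword search and synonym-expanded search) can be illustrated with a toy example; everything below, including the document set and synonym table, is invented for illustration and is unrelated to the prototype's actual interfaces.

```python
# Toy illustration of keyword search vs. synonym-expanded search.

DOCS = {
    "d1": "the portal annotates pages with ontology concepts",
    "d2": "semantic search uses synonyms and part-of-speech tags",
}
SYNONYMS = {"annotate": {"annotate", "tag", "label"}}

def keyword_search(term):
    return [d for d, text in DOCS.items() if term in text]

def synonym_search(term):
    expansions = SYNONYMS.get(term, {term})
    return sorted({d for t in expansions for d in keyword_search(t)})

print(keyword_search("ontology"))   # ['d1']
print(synonym_search("annotate"))   # ['d1', 'd2'] -- 'tag' also matches 'tags'
```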

Sheng-Yuan Yang
Development and Evaluation of an Intelligent Colour Planning Support System for Townscapes

The aesthetics of townscapes have become a main factor in urban development. This paper introduces IroKage, an intelligent colour planning support system. The system offers improved colour schemes for existing townscapes based on three elements: colour harmony, impressions of the townscape, and the cost of changing colours. The system is built with an evolutionary algorithm and the Kansei engineering approach. After construction, a system evaluation was conducted: subjects evaluated fifteen colour schemes output by the system in terms of colour harmony and the ideal impressions for the townscapes, using the semantic differential method introduced by Osgood et al. The results of the evaluation demonstrate that the system is able to propose appropriate colour schemes for the ideal town impressions.
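
The abstract does not give IroKage's objective function; the sketch below only illustrates how an evolutionary fitness might combine the three elements listed above, with placeholder weights and scoring functions.

```python
# Illustrative fitness for an evolutionary colour planner: reward harmony,
# penalise distance from the target impression and the repainting cost.
# Weights and scoring functions are placeholders, not the IroKage formulas.

def fitness(scheme, current_scheme, target_impression,
            harmony, impression_distance, repaint_cost,
            w_harmony=1.0, w_impression=1.0, w_cost=0.5):
    """scheme: candidate list of colours for the townscape's surfaces."""
    return (w_harmony * harmony(scheme)
            - w_impression * impression_distance(scheme, target_impression)
            - w_cost * repaint_cost(scheme, current_scheme))
```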

Yuichiro Kinoshita, Eric W. Cooper, Katsuari Kamei
A Testing Device for the Human Ability to Utilise Beneficial Environmental Features

In this paper, we propose a device for testing the human ability to utilise, even unconsciously, beneficial environmental features. An important characteristic of the proposed testing device is that additional stimuli, correlated with the correct task outcome, are introduced during the test without informing the subject. The device tests whether the subject notices the subtly provided stimuli and uses them, consciously or not, to improve task performance. The use of these additional stimuli is confirmed by observing a sudden drop in the subject's performance (a slump-like effect) when the additional stimuli are reversed, i.e., made to correlate with the wrong task outcome. We present several findings suggesting that the ability tested by the proposed device supports human skill acquisition. An illustrative example of a specific implementation of the device, based on a visual alternative-choice task with additional audio stimuli, is presented to explain the testing process.
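
The protocol can be summarised schematically as three phases of an alternative-choice task; the phase lengths and the audio-cue encoding below are arbitrary placeholders, not the authors' experimental parameters.

```python
import random

# Schematic of the testing protocol: a baseline phase without the extra
# stimulus, a phase where the audio cue correlates with the correct answer,
# and a final phase where the correlation is reversed.

def make_trials(n_baseline=40, n_correlated=40, n_reversed=20, seed=0):
    rng = random.Random(seed)
    trials = []
    for phase, n, agreement in [("baseline", n_baseline, None),
                                ("correlated", n_correlated, 1.0),
                                ("reversed", n_reversed, 0.0)]:
        for _ in range(n):
            correct = rng.choice(["left", "right"])
            if agreement is None:
                extra = None                          # no additional stimulus
            else:
                match = rng.random() < agreement
                extra = correct if match else ("left" if correct == "right" else "right")
            trials.append({"phase": phase, "correct": correct, "audio_cue": extra})
    return trials
```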

Blagovest Vladimirov, Hiromi Mochiyama, Hideo Fujimoto
Backmatter
Metadata
Title
New Trends in Applied Artificial Intelligence
Edited by
Hiroshi G. Okuno
Moonis Ali
Copyright year
2007
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-73325-6
Print ISBN
978-3-540-73322-5
DOI
https://doi.org/10.1007/978-3-540-73325-6
