About This Book

This volume constitutes the refereed proceedings of the 6th Multi-disciplinary International Workshop on Artificial Intelligence, MIWAI 2012, held in Ho Chi Minh City, Vietnam, in December 2012. The 29 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers are organized in topical sections on AI-GIS for climate change, computer vision, decision theory, e-commerce and AI, multiagent planning and learning, game theory, industrial applications of AI, multiagent systems and evolving intelligence, robotics, and Web services.



An Approach for Tuning the Parametric Water Flow Algorithm Based on ANN

This paper proposes an approach for optimizing the parametric water flow algorithm, which uses a water flow function as the basis for the text-line segmentation process. The function is defined as a power function with two parameters: the water flow angle α and the exponent n. An artificial neural network is used to tune these parameters. The results are encouraging, showing improved text-line segmentation for handwritten text.
Darko Brodić, Zoran N. Milivojević, Dejan Tanikić, Dragan R. Milivojević

Mining Frequent Common Families in Trees

This paper is concerned with mining frequent common families from a leaf-labeled tree database, in which support for common families is established not only by exact family subtrees but also by extended family subtrees. It proposes an algorithm that determines frequent common families while controlling the coverage of extended family subtrees. The suggested method has been tested on both several synthetic data sets and a real data set.
Kyung Mi Lee, Chan Hee Lee, Keon Myung Lee

A Structure Based Approach for Mathematical Expression Retrieval

The mathematical expression (ME) retrieval problem has recently received much attention due to the wide availability of MEs on the World Wide Web. As MEs are two-dimensional in nature, the traditional text retrieval techniques used in natural language processing are not sufficient for their retrieval. In this paper, we propose a novel structure-based approach to the ME retrieval problem. In our approach, a query given in \(\mbox{\LaTeX}\) format is preprocessed to eliminate extraneous keywords (like \displaystyle, \begin{array} etc.) while retaining structure information such as superscript and subscript relationships. MEs in the database are preprocessed and stored in the same manner. We have created a database of 829 MEs in \(\mbox{\LaTeX}\) form that covers various branches of mathematics such as Algebra, Trigonometry, and Calculus. The preprocessed query is matched against the database of preprocessed MEs using the Longest Common Subsequence (LCS) algorithm, which, unlike the bag-of-words approach of traditional text retrieval techniques, preserves the order of keywords in the preprocessed MEs. We incorporate structure information into the LCS algorithm and propose a measure, based on the modified algorithm, for ranking MEs in the database. As the proposed approach exploits structure information, it is closer to human intuition. Retrieval performance has been evaluated using the standard precision measure.
P. Pavan Kumar, Arun Agarwal, Chakravarthy Bhagvati
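The LCS-based ranking described in the abstract can be illustrated with a short sketch. This is an illustration only, not the authors' implementation: the token sequences, the toy database, and the normalisation by the longer sequence length are all assumptions.

```python
def lcs_length(a, b):
    # Classic dynamic-programming LCS over token sequences.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def similarity(query_tokens, db_tokens):
    # Normalised LCS score in [0, 1]; order of keywords is preserved,
    # unlike a bag-of-words match.
    if not query_tokens or not db_tokens:
        return 0.0
    return lcs_length(query_tokens, db_tokens) / max(len(query_tokens), len(db_tokens))

# Toy database of preprocessed LaTeX token sequences (hypothetical examples).
db = {
    "quadratic": ["x", "^", "2", "+", "b", "x", "+", "c"],
    "sine":      ["\\sin", "(", "x", ")", "+", "\\cos", "(", "x", ")"],
}
query = ["x", "^", "2", "+", "c"]
ranked = sorted(db, key=lambda k: similarity(query, db[k]), reverse=True)
print(ranked[0])  # → quadratic
```

Because the subsequence match respects token order, `x ^ 2` in the query cannot match a `2 ^ x` in the database, which is the property the abstract highlights over bag-of-words retrieval.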

Toward Composite Object Classification Using a Probabilistic Inference Engine

This paper describes a new framework for classifying objects in still images using Description Logic reasoners. Unlike in classical knowledge representation, features extracted from visual images are not always certain but rather ambiguous and probabilistic. To handle such uncertainty at the reasoning level, we employ a probabilistic inference engine alongside a classical reasoner, and design an image object ontology accordingly. The ontology defines composite objects in terms of basic objects, and basic objects in terms of visual features like shapes and colors. The proposed framework aims at improving on existing works in terms of both scalability and reusability. We demonstrate the performance of our object classification framework on a collection of car side images and compare it to other approaches. Not only does our method show distinctly better accuracy, but each object classification result is also equipped with a probability range.
Suwan Tongphu, Boontawee Suntisrivaraporn

Relocation Action Planning in Electric Vehicle Sharing Systems

This paper presents the design of a relocation planner for electric vehicle sharing systems, which periodically redistributes vehicles over multiple stations for better serviceability. For the relocation vector, i.e. the target vehicle distribution given by a relocation strategy, the proposed planner builds two preference lists, one for vehicles in overflow stations and the other for underflow stations. Then, the matching procedure assigns each electric vehicle to a station in such a way as to minimize the relocation distance and time by means of a modified stable marriage problem solver. The performance measurement is conducted with a prototype implementation on top of the previously developed analysis framework and real-life trip records in the Jeju City area. The morning-focused relocation strategy benefits most from the proposed relocation planner in terms of both relocation distance and the number of moves, mainly due to symmetric traffic patterns in the morning and in the evening.
Junghoon Lee, Hye-Jin Kim, Gyung-Leen Park
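The abstract does not specify the modified stable-marriage solver, but the matching step can be illustrated with the classic Gale-Shapley procedure, here with vehicles proposing to stations that hold a limited number of slots. All names, preference lists, and capacities below are hypothetical.

```python
from collections import deque

def stable_match(vehicle_prefs, station_prefs, capacity):
    # Gale-Shapley with vehicles proposing; stations rank vehicles
    # (e.g. by travel distance) and may hold several slots.
    rank = {s: {v: i for i, v in enumerate(p)} for s, p in station_prefs.items()}
    held = {s: [] for s in station_prefs}
    free = deque(vehicle_prefs)
    next_choice = {v: 0 for v in vehicle_prefs}
    while free:
        v = free.popleft()
        s = vehicle_prefs[v][next_choice[v]]   # propose to next preferred station
        next_choice[v] += 1
        held[s].append(v)
        if len(held[s]) > capacity[s]:
            # Station evicts its least-preferred holder, who becomes free again.
            held[s].sort(key=lambda x: rank[s][x])
            free.append(held[s].pop())
    return {v: s for s, vs in held.items() for v in vs}

# Hypothetical instance: both vehicles prefer station A, which has one slot.
vehicle_prefs = {"EV1": ["A", "B"], "EV2": ["A", "B"]}
station_prefs = {"A": ["EV2", "EV1"], "B": ["EV1", "EV2"]}
capacity = {"A": 1, "B": 1}
match = stable_match(vehicle_prefs, station_prefs, capacity)
print(match)
```

With station A preferring EV2, the procedure evicts EV1 from A and sends it to B, yielding a stable assignment with no vehicle-station pair that would rather be matched to each other.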

A Guide to Portfolio-Based Planning

In recent years the field of automated planning has advanced significantly, and several powerful domain-independent planners have been developed. However, none of these systems clearly outperforms all the others in every known benchmark domain. This observation motivated the idea of configuring and exploiting a portfolio of planners to achieve better performance than any individual planner: some recent planning systems based on this idea have achieved notably good results in experimental analyses and International Planning Competitions. Such results suggest that future challenges of the Automated Planning community will converge on designing different approaches for combining existing planning algorithms.
This paper reviews existing techniques and provides an exhaustive guide to portfolio-based planning. In addition, the paper outlines open issues of existing approaches and highlights possible future evolution of these techniques.
Mauro Vallati

Color and Texture Image Segmentation

For applications such as image recognition or scene understanding, processing the whole image directly is inefficient and impractical. Segmentation is therefore a necessary step to reduce the complexity of image recognition. Image segmentation divides an image into several parts (regions) according to some local homogeneous features of the image. For this purpose, understanding the features of the image is important; features such as color, texture, and patterns are considered for segmentation. The thrust of our work is thus the extraction of color-texture features from images. Color is measured in Gaussian color space and texture features are extracted with Gabor filters. The paper proposes image segmentation based on a recursive splitting k-means method, and the experiments focus on natural color images taken from the Berkeley Segmentation Dataset (BSD).
Chitti Kokil Kumar, Arun Agarwal, Raghavendra Rao Chillarige
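The recursive splitting idea can be sketched as follows. This is an illustration only: the actual method works on Gaussian color and Gabor texture features, while this toy splits scalar feature values with plain 2-means until each region's spread is small.

```python
def kmeans2(values, iters=20):
    # Plain 2-means on scalar features (standing in for a color/texture response).
    c0, c1 = min(values), max(values)
    for _ in range(iters):
        a = [v for v in values if abs(v - c0) <= abs(v - c1)]
        b = [v for v in values if abs(v - c0) > abs(v - c1)]
        if a:
            c0 = sum(a) / len(a)
        if b:
            c1 = sum(b) / len(b)
    return a, b

def recursive_split(values, spread=10.0):
    # Split a region in two and recurse while its feature spread is large;
    # each returned list stands in for one segmented region.
    if max(values) - min(values) <= spread:
        return [values]
    a, b = kmeans2(values)
    if not a or not b:
        return [values]
    return recursive_split(a, spread) + recursive_split(b, spread)

regions = recursive_split([1, 2, 3, 50, 51, 52, 100, 101])
print(len(regions))  # → 3
```

The stopping criterion (here, a fixed spread threshold) is an assumption; a real segmenter would test homogeneity of the color-texture feature distribution instead.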

Evolutionary Multi-objective Optimization Based Proportional Integral Controller Design for Induction Motor Drive

This paper proposes a new proportional-integral (PI) controller optimization methodology based on a multi-objective genetic algorithm for an indirect field-oriented controlled induction motor drive. The GA-PI controller combines the mathematical precision of the PI algorithm with the adaptability and flexibility of a genetic algorithm. The approach is independent of the system parameters and of the mathematical model, can handle system nonlinearity, and reduces or eliminates overshoot, rise time, settling time, and load-disturbance effects while achieving near-zero steady-state error. The validity of the proposed methods is confirmed by simulation results.
Moulay Rachid Douiri, Mohamed Cherkaoui

A Multi-agent Scheduling Model for Maximizing Agent Satisfaction

This paper presents a multi-agent scheduling model for selecting ecology-conservation activities in a large-scale ecological system. The overall goal is to maximize the total satisfaction of the multiple agents (stakeholders). The problem is motivated by the need for sustainable development in the Dead Sea Basin in the Middle East. A new FPTAS for solving the scheduling problem is developed.
Eugene Levner, Amir Elalouf, Huajun Tang

Enhancing Pixel Oriented Visualization by Merging Circle View and Circle Segment Visualization Techniques

Analyzing large datasets is a difficult task, and many techniques have been proposed to display data in ways that ease analysis. Data visualization techniques include geometric, icon-based, hierarchical, graph-based (line graph), and pixel-oriented techniques. In a pixel-oriented technique, each data value is represented by a single pixel whose color depends on the value's scale: high values are shown with light colors and low values with dark colors. Visualizing a large amount of data on screen is a challenge, because large data sets cannot be displayed on a single screen at a time. Pixel-oriented techniques are either query dependent or query independent; query-dependent examples include the circle segment, spiral, and generalized spiral techniques, while the hierarchical circle view technique resembles a pie chart. For better visualization, a new technique is proposed that combines the ideas of the circle segment and circle view techniques. It displays the data in circular form, with the circle divided into segments and sub-segments, and renders time series data in a pixel-oriented visualization for better analysis.
Zainab Aftab, Huma Tuaseef

Time Series Prediction Using Motif Information

Time series prediction has recently received much research attention, and several time series data mining approaches have been exploited for it. In this paper, we propose a new prediction method based on the concept of time series motifs, i.e. patterns that appear frequently in a time series. In the proposed approach, we first search for time series motifs using the EP-C algorithm and then exploit the motif information for forecasting in combination with a neural network model. Experimental results demonstrate that our proposed method performs better than an artificial neural network (ANN) in terms of prediction accuracy and time efficiency. Moreover, our method is more robust to noise than the ANN.
Cao Duy Truong, Duong Tuan Anh
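The abstract does not detail how motif information feeds the forecast, but the underlying idea of exploiting recurring patterns can be sketched as a nearest-window predictor. This is an illustration only, not the EP-C algorithm or the authors' neural combination.

```python
def motif_forecast(series, w=3):
    # Find the past window most similar (squared Euclidean distance)
    # to the latest one and predict the value that followed that match.
    recent = series[-w:]
    best_d, best_next = float("inf"), None
    for i in range(len(series) - w):       # candidate windows fully in the past
        window = series[i:i + w]
        d = sum((a - b) ** 2 for a, b in zip(window, recent))
        if d < best_d:
            best_d, best_next = d, series[i + w]
    return best_next

# A repeating pattern: the window [1, 2, 3] was always followed by 4.
print(motif_forecast([1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3]))  # → 4
```

The window length `w` and the distance measure are assumptions; the key point is that a frequently recurring shape carries predictive information about what comes next.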

A New Approach for Measuring Semantic Similarity in Ontology and Its Application in Information Retrieval

Word similarity assessment is one of the most important elements in Natural Language Processing (NLP) and information retrieval. Evaluating the semantic similarity of concepts has been extensively investigated in areas such as artificial intelligence, cognitive science, databases, and software engineering. Semantic similarity concerns computing the similarity between conceptually similar but not necessarily lexically similar terms. Its importance is growing in settings such as digital libraries, heterogeneous databases, and in particular the Semantic Web. In this paper, the authors present a search engine framework, using the Google API, that expands the user query based on the similarity scores of each of its terms. The semantic similarity of nouns is calculated with WordNet to obtain concepts related to the search query, and the user's query is replaced with the concepts discovered from the similarity measures. The authors also present a new approach to computing the semantic similarity between words. A common data set of word pairs is used to evaluate the proposed approach: the semantic similarities of 30 word pairs are first calculated, and then the correlation coefficients between human judgement and three computational measures are computed. The experimental results show that the new approach outperforms other existing computational models.
Kishor Wagh, Satish Kolhe

Local Stereo Matching by Joining Shiftable Window and Non-parametric Transform

In this paper, we propose a block-matching approach to the correspondence problem in stereo matching. In block-matching methods, the correspondence or difference between pixels of a stereo pair is measured over a local window. Although several area-based stereo matching algorithms have been developed and work well in regions such as textureless or object-boundary regions, their performance can degrade under certain radiometric conditions. Our proposed algorithm, which applies a non-parametric transform in the pre-processing step and an edge-preserving filter in the post-processing step, improves on a shiftable-window method and works robustly under various radiometric conditions. Input images are first pre-processed with the census transform, which makes our method robust when the image pair is captured under different light sources or camera settings. The window cost in our approach is calculated from the transformed images using the Hamming distance, and the final match is selected by a Winner-Takes-All strategy. Experimental results on the Middlebury images show that our proposed algorithm outperforms the tested local stereo algorithms on radiometrically dissimilar images.
Hong Phuc Nguyen, Thi Dinh Tran, Quang Vinh Dinh
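The census transform and Hamming-distance cost mentioned above can be sketched in a few lines. This is a minimal illustration; window shifting, edge-preserving filtering, and the WTA disparity search are omitted.

```python
def census_transform(img, r=1):
    # Encode each interior pixel as a bit string: bit 1 where a neighbour
    # is darker than the centre pixel. The code depends only on intensity
    # orderings, so it is invariant to monotonic brightness changes --
    # the source of robustness to radiometric differences.
    h, w = len(img), len(img[0])
    out = {}
    for y in range(r, h - r):
        for x in range(r, w - r):
            bits = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dy == 0 and dx == 0:
                        continue
                    bits = (bits << 1) | (img[y + dy][x + dx] < img[y][x])
            out[(y, x)] = bits
    return out

def hamming(a, b):
    # Matching cost between two census codes.
    return bin(a ^ b).count("1")

# A brightness-rescaled copy of the image produces identical census codes.
img = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
bright = [[2 * v + 5 for v in row] for row in img]
print(census_transform(img)[(1, 1)] == census_transform(bright)[(1, 1)])  # → True
```

Summing `hamming` over all pixels of a support window gives the window cost that the abstract's WTA step would minimize over candidate disparities.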

A Comprehensive Analysis and Study in Intrusion Detection System Using k-NN Algorithm

The security of computer networks has been a focus of research for years. Organizations have realized that network security technology is very important in protecting their information. Any attempt, successful or not, to compromise the confidentiality, integrity, or availability of an information resource, or of the information itself, is considered a security threat or intrusion. Industries face new kinds of threats every day, and one remedy to this problem is the Intrusion Detection System (IDS), whose main function is distinguishing and predicting normal and abnormal behaviors. This paper presents a new implementation strategy for an intrusion detection system that gives better results by improving classification accuracy. The approach is based on defining addition and deletion rules and an updating policy for intrusion detection. Experimental results on the KDD99 dataset show that this new approach outperforms several state-of-the-art methods, particularly in detecting rare attack types.
Sharmila Wagh, Gaurav Neelwarna, Satish Kolhe
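The abstract leaves the rule and policy details to the paper, but the k-NN classification core named in the title can be sketched as follows. The toy connection features below are hypothetical, not actual KDD99 fields.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    # train: list of (feature_vector, label) pairs. Majority vote among
    # the k nearest neighbours by Euclidean distance.
    neighbours = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Toy connection records (hypothetical normalised features,
# e.g. duration and bytes ratio).
train = [
    ((0.10, 0.20), "normal"), ((0.20, 0.10), "normal"), ((0.15, 0.25), "normal"),
    ((0.90, 0.80), "attack"), ((0.85, 0.90), "attack"),
]
print(knn_classify(train, (0.88, 0.85)))  # → attack
```

The paper's addition/deletion rules would then decide which labeled records stay in `train` as the reference set evolves, which is where its accuracy improvement over plain k-NN would come from.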

AI-Based Support for Experimentation in an Environmental Biotechnological Process

This paper presents an AI-based system that supports experimentation and control in the domain of environmental biotechnology. The objective of the experiments is to verify hypotheses on biostimulation of an activated sludge by sustaining oscillations in its metabolism, so as to improve degradation of organic waste that is hard to remove in wastewater treatment plants. The presented system incorporates a multi-agent system (MAS), which uses ontologies and rules, and a smart image processing method. One of the main tasks of the MAS is to support the analysis of off-line microscopic measurements, based both on rules describing the trends of analytical measurements and on quantitative on-line microscopic observations. Finally, the proposed MAS can compare results provided by the experts with results obtained from the rules. As a result, appropriate biostimulation control may help prevent or reduce climate change.
Dariusz Choinski, Mieczyslaw Metzger, Witold Nocon, Grzegorz Polakow, Piotr Skupin

Stereo Matching by Fusion of Local Methods and Spatial Weighted Window

In this paper, we propose two window-based methods for the correspondence problem in stereo matching: a spatial-weight shiftable window and a spatial-weight multiple window. The former improves on a shiftable-window method, while the latter enhances a multiple-window method. Both combine a spatially weighted window with each support window, and hence work well in regions of disparity discontinuity or object boundaries. The window costs in our approaches are calculated by applying the spatial weights to each support window, and the final match is selected by a Winner-Takes-All strategy. Experimental results on the Middlebury images illustrate that the proposed algorithms outperform the tested local stereo algorithms.
Thi Dinh Tran, Hong Phuc Nguyen, Quang Vinh Dinh

Efficient Handling of 2D Image Queries Using VPC+-tree

Handling queries over images is an interesting issue that has recently emerged in information systems. One of the most challenging problems in this work is how to handle image rotation efficiently, since the query image and those stored in the database are typically not taken from the same angle. In this paper, an approach that employs a time series representation of images is introduced. The Fourier Transform can then be applied to achieve rotation invariance between images. Moreover, the data can be compressed efficiently in this representation when working with huge amounts of data.
The major contribution of this work is the proposal of the VPC+-tree, an extension of the VPC-tree, a well-known structure supporting the indexing and retrieval of compressed objects. The VPC+-tree not only supports faster and more accurate retrieval but also achieves a nearly ideal disk-access ratio. It is a notable contribution to the field of time series data processing.
Tran Cong Doi, Quan Thanh Tho, Duong Tuan Anh

Novel Granular Framework for Attribute Reduction in Incomplete Decision Systems

Incomplete decision systems containing missing attribute values occur frequently in real-world applications. This paper proposes the IQRAIG_incomplete algorithm for reduct computation in incomplete decision systems using a novel granular framework. The proposed framework enables the computation of similarity classes for a set of objects simultaneously, which increases the efficiency of computing the positive region. The merits of the proposed algorithm over the IFSPA-IPR algorithm have been demonstrated empirically.
Sai Prasad P.S.V.S., Raghavendra Rao Chillarige

Agent-Based Control System for Sustainable Wastewater Treatment Process

Biotechnological processes are difficult to control; many different state trajectories can be obtained from the same starting conditions. A well-known industrial process of this class is wastewater treatment with activated sludge. In this case, the quality of process control has a strong direct impact on the natural environment. Moreover, the crucial components of the process are living organisms, which require appropriate actions to ensure their sustainability. This paper describes an agent-based approach to the operating control tasks for the process. The implemented control system is described; it is based on a real-time agent communication protocol implementing a blackboard knowledge system. Additional functionalities of the control system include support for cooperation between multiple experimenters and on-line real-time modelling of the system, providing aid in decision making.
Grzegorz Polaków, Mieczyslaw Metzger

Tuning the Optimization Parameter Set for Code Size

Determining nearly optimal optimization options for modern compilers is a combinatorial problem. Fine-tuning the parameter set used by the various optimization passes for a specific application, platform, and optimization objective increases the complexity further. In this paper we propose a greedy iterative approach and investigate the impact of fine-tuning the parameter set on code size. The effectiveness of our approach is demonstrated on several benchmark programs from the SPEC2006 benchmark suite, showing that tuning the parameter values has a significant impact on code size.
N. A. B. Sankar Chebolu, Rajeev Wankar, Raghavendra Rao Chillarige
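A greedy iterative pass over compiler parameters can be sketched as below. The cost function here is a mock stand-in for compiling the program and measuring its binary size, and the parameter names are merely illustrative (they resemble GCC `--param` knobs, but the paper's actual parameter set is not specified).

```python
def greedy_tune(params, candidates, cost):
    # One greedy pass: for each parameter in turn, keep the candidate
    # value that minimises cost() with all other parameters held fixed.
    best = dict(params)
    for name in params:
        for value in candidates[name]:
            trial = dict(best, **{name: value})
            if cost(trial) < cost(best):
                best = trial
    return best

# Mock code-size model standing in for "compile and measure" (a real
# driver would invoke the compiler with the trial parameter values).
def mock_code_size(p):
    return abs(p["inline-unit-growth"] - 30) + abs(p["max-unroll-times"] - 4)

start = {"inline-unit-growth": 60, "max-unroll-times": 8}
cands = {"inline-unit-growth": [20, 30, 40, 60], "max-unroll-times": [2, 4, 8]}
print(greedy_tune(start, cands, mock_code_size))
```

An iterative version would repeat the pass until no parameter change reduces the cost, which is how a greedy search copes with the combinatorial space without exhaustively enumerating it.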

Mining Weighted Frequent Sub-graphs with Weight and Support Affinities

Mining weighted frequent sub-graphs in graph databases can yield more complex and varied patterns than finding patterns in transactional databases, and the resulting sub-graphs reflect objects' characteristics in the real world thanks to the weight conditions. However, not all of these patterns are really valid. Even if a sub-graph is frequent, the supports or weights of its constituent elements can differ sharply, in which case the sub-graph is more likely to be a meaningless pattern. To solve this problem, we propose novel techniques for mining only meaningful sub-graphs by applying both weight and support affinities to graph mining, together with a corresponding algorithm, MWSA. MWSA effectively eliminates invalid patterns with large gaps among their elements. It not only obtains valid sub-graphs but also improves mining efficiency, in both runtime and memory usage, by pruning needless patterns. These advantages are demonstrated through various experiments.
Gangin Lee, Unil Yun

Simple Spatial Clustering Algorithm Based on R-tree

In this article, we present an algorithm based on the R-tree structure to solve a clustering task in spatial data mining. The algorithm can cluster not only point objects but also extended spatial objects such as lines and polygons. The experimental results show that our algorithm yields the same results as other algorithms and is well suited to clustering tasks in spatial databases.
Nam Nguyen Vinh, Bac Le

Minimal Generalization for Conjunctive Queries

Relaxation is a cooperative method that provides informative answers to a user's failing queries. Combining deductive generalization operators in a certain order can avoid the unnecessary generation of duplicate queries. However, returning all generalized queries to the user is not desirable, because the theoretical search space grows exponentially with the number of literals in the original query. This paper identifies the minimal generalization in the relaxation of conjunctive queries and analyses its properties. The minimal generalization improves the cooperative behavior of a query answering system, finding information related to the user's request without relaxing all generalized queries. The generalization operators are ordered in relaxation based on these properties. Moreover, the approach provides a solution to the problem of over-generalization, which leads to queries far from the user's original intent. An algorithm for finding all minimal generalized queries is proposed, based on keeping a fixed order when applying the generalization operators.
Thu-Le Pham, Katsumi Inoue

Interruptibility and Its Negative Impact on Graph Exploration Missions by a Team of Robots

Exploring an unknown graph has several fields of application, such as search and rescue operations. A team of robots can speed up these exploration missions, provided that the robots synchronize and coordinate their activities. Several conditions (e.g. high temperature) may limit the communication capabilities of robots, which are crucial for coordination. Therefore, periodic rendezvous sessions are needed as a workaround, letting the robots overlap their communication ranges. Attending these sessions requires the robots to interrupt their exploration progress periodically and traverse back to the rendezvous points (interruptibility). During their trips to these points, the robots cross the already explored part of the graph, so they gain no new knowledge about it, and the required exploration time increases. Evaluating the impact of these interruptions on the exploration process, through several experiments under different exploration strategies, is the scope of this paper.
Hamido Hourani, Eckart Hauck, Sabina Jeschke

Monte-Carlo Search for Snakes and Coils

The “Snake-In-The-Box” problem is a hard combinatorial search problem, first described more than 50 years ago, which consists in finding longest induced paths in hypercube graphs. Solutions to the problem have diverse and sometimes quite surprising practical applications, but optimal solutions are known only for problems of small dimension, as the search space grows super-exponentially with the hypercube dimension. Incomplete search algorithms based on Evolutionary Computation techniques have been considered the state of the art for finding near-optimal solutions, and until recently held most of the significant records. This study presents the latest results of a new technique based on Monte-Carlo search, which finds significantly improved solutions compared to prior techniques, is considerably faster, and, unlike EC techniques, requires no tuning.
David Kinny

Algorithms for Filtration of Unordered Sets of Regression Rules

This paper presents six filtration algorithms for pruning unordered sets of regression rules. Three of these algorithms aim at eliminating rules that cover similar subsets of examples, whereas the other three aim at optimizing the rule sets with respect to prediction accuracy. The effectiveness of the filtration algorithms was empirically tested for 5 different rule learning heuristics on 35 benchmark datasets. The results show that, depending on the filtration algorithm, the reduction of the number of rules fluctuates on average between 10% and 50%, and in most cases it does not cause statistically significant degradation in prediction accuracy.
Łukasz Wróbel, Marek Sikora, Adam Skowron

Evaluation of Jamendo Database as Training Set for Automatic Genre Recognition

Research on automatic music classification has gained significance in recent years due to the significant increase in the size of music collections. Music is very easily available through mobile and internet channels, so there is a need to manage it by categorizing it for search and discovery. This paper focuses on music classification by genre, a supervised learning problem: to build a classifier model it is necessary to train it on annotated data. Researchers have to build their own training sets or rely on others' sets, which are usually limited in size or by copyright restrictions. The approach described in this paper is to use the public Jamendo database for training the chosen classifier for the genre recognition task.
Mariusz Kleć

An Integrated Model for Financial Data Mining

Nowadays, financial data analysis is becoming increasingly important in the business market. As companies collect more and more data from daily operations, they expect to extract useful knowledge from the collected data to help make sound decisions on new customer requests, e.g. user credit category, churn analysis, real estate analysis, etc. Financial institutions have applied different data mining techniques to enhance their business performance, but a simplistic application of these techniques can raise performance issues. Besides, there are very few general models for both understanding and forecasting across different financial fields. We present in this paper an integrated model for analyzing financial data, and we evaluate it on different real-world data to show its performance.
Fan Cai, N-A. LeKhac, M-Tahar Kechadi

Correlation Based Feature Selection Using Quantum Bio Inspired Estimation of Distribution Algorithm

Correlation-based Feature Selection (CFS) evaluates different subsets based on pairwise feature correlations and feature-class correlations. Machine learning techniques are applied to CFS to help discover the most promising combinations of features, especially in large feature spaces. This paper introduces a quantum bio-inspired estimation of distribution algorithm (EDA) for CFS. The proposed algorithm integrates quantum computing concepts and the vaccination process with immune clonal selection (QVICA) and EDA. It is employed as a search technique for CFS to find the optimal feature subset in the feature space. It is implemented and evaluated on the benchmark KDD-cup99 dataset and compared with a GA. The obtained results show the ability of QVICA with EDA to obtain better feature subsets of smaller length, with higher fitness values, in reduced computation time.
Omar S. Soliman, Aliaa Rassem

