
2014 | Book

PRICAI 2014: Trends in Artificial Intelligence

13th Pacific Rim International Conference on Artificial Intelligence, Gold Coast, QLD, Australia, December 1-5, 2014. Proceedings

Edited by: Duc-Nghia Pham, Seong-Bae Park

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 13th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2014, held in Gold Coast, Queensland, Australia, in December 2014. The 74 full papers and 20 short papers presented in this volume were carefully reviewed and selected from 203 submissions. The topics include inference; reasoning; robotics; social intelligence; AI foundations; applications of AI; agents; Bayesian networks; neural networks; Markov networks; bioinformatics; cognitive systems; constraint satisfaction; data mining and knowledge discovery; decision theory; evolutionary computation; games and interactive entertainment; heuristics; knowledge acquisition and ontology; knowledge representation; machine learning; multimodal interaction; natural language processing; planning and scheduling; probabilistic.

Table of Contents

Frontmatter

PRICAI 2014 Main Track

Subpopulation Diversity Based Setting Success Rate of Migration for Distributed Evolutionary Algorithms

In most distributed evolutionary algorithms (DEAs), a migration interval is used to decide the frequency of migration. Nevertheless, a predetermined interval cannot match the dynamic situation of evolution. Consequently, migration may happen at a wrong moment and exert only a negative influence on evolution. In this paper, a scheme of setting the success rate of migration based on subpopulation diversity is proposed. In the scheme, migration still happens at intervals, but the probability of immigrants entering the target subpopulation is decided by the diversity of this subpopulation. An analysis shows that the extra time consumption of our scheme in a DEA is acceptable. In our experiments, outcomes of the DEA based on the proposed scheme are compared with those of a traditional DEA on six benchmark instances of the Traveling Salesman Problem. The results show that the former performs better than its peer. Moreover, the DEA based on our scheme shows an advantage in stability.

Chengjun Li, Zhe Chen, Shuhua Gu, Muqing Li, Hongyuan Shan, Guangdao Hu
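The core idea above, gating immigrants by the target subpopulation's diversity, can be sketched as follows. This is an illustrative stand-in, not the authors' exact scheme: `diversity` here is simply the fraction of distinct individuals, and the acceptance probability is assumed to rise as diversity falls.

```python
import random

def diversity(subpop):
    """Fraction of distinct individuals in a subpopulation; a
    simple stand-in for the paper's diversity measure."""
    return len({tuple(ind) for ind in subpop}) / len(subpop)

def accept_immigrant(subpop, rng=random):
    """Let an immigrant enter with probability that rises as
    diversity falls, so migration mainly takes effect when the
    target subpopulation has converged and needs fresh material."""
    return rng.random() < 1.0 - diversity(subpop)
```

A fully diverse subpopulation thus rejects all immigrants, while a fully converged one accepts almost all of them.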
A More Expressive Behavioral Logic for Decision-Theoretic Planning

We examine the problem of compactly expressing models of non-Markovian reward decision processes (NMRDPs). In the field of decision-theoretic planning, NMRDPs are used whenever the agent’s reward is determined by the history of visited states. Two different propositional linear temporal logics can be used to describe execution histories that are rewarding. Called PLTL and $FLTL, they are backward- and forward-looking logics respectively. In this paper we find both to be expressively weak and propose a change to $FLTL resulting in a much more expressive logic that we have called $*FLTL. The time complexities of $*FLTL and $FLTL related model checking operations performed in planning are the same.

Charles Gretton
Detecting Keyphrases in Micro-blogging with Graph Modeling of Information Diffusion

The rapidly increasing popularity of micro-blogging has made it an important information seeking channel. Keyphrase extraction is an effective way of summarizing and analyzing micro-blogging content, which can help users gain insights into internet hotspots. Existing methods for keyphrase extraction usually unilaterally consider phrase frequency or user retweet count as key factors. However, those methods may neglect the relationships between different phrases and the importance of user influence for further information diffusion. Generally, phrases shown in influential users’ micro-blogs are more likely to attract other users’ interest, making them more likely to be diffused in the near future. Besides, phrases may have relations with each other, and some phrases usually have similar diffusion paths and attract the attention of the same population. In this paper, we propose a novel model that comprehensively considers all the above-mentioned factors to detect micro-blogging keyphrases. The proposed model first detects high-frequency terms from abundant micro-blogs as candidate keyphrases, then constructs a relation graph over them using user interests and the user-following network. Finally, we rank those candidates with graph models to realize keyphrase detection. Experiments show this model is very effective for micro-blogging keyphrase extraction.

Shuangyong Song, Yao Meng, Jun Sun
Fast BMU Search in SOMs Using Random Hyperplane Trees

One of the most prominent Neural Networks (NNs) reported in the literature is Kohonen’s Self-Organizing Map (SOM). In spite of all its desirable capabilities and the scores of reported applications, it unfortunately possesses some fundamental drawbacks. Two of these handicaps are the quality of the map learned and the time required to train it. The most demanding phase of the algorithm involves determining the so-called Best Matching Unit (BMU), which requires time proportional to the number of neurons in the NN. The focus of this paper is to reduce the time needed for this tedious task, and to attempt to obtain an approximation of the BMU in as little as logarithmic time. To achieve this, we depend heavily on the work of [3,6], where the authors focused on how to accurately learn the data distribution connecting the neurons on a self-organizing tree, and how the learning algorithm, called the Tree-based Topology-Oriented SOM (TTOSOM), can be useful for data clustering [3,6] and classification [5]. We briefly state how we intend to reduce the training time for identifying the BMU efficiently. First, we show how a novel hyperplane-based partitioning scheme can be used to accelerate the task. Unlike the existing hyperplane-based partitioning methods reported in the literature, our algorithm can avoid ill-conditioned scenarios. It is also capable of handling data points that are dynamic. We demonstrate how these hyperplanes can be recursively defined, represented and computed, so as to recursively divide the hyper-space into two halves. As far as we know, the use of random hyperplanes to identify the BMU is both pioneering and novel.

César A. Astudillo, B. John Oommen
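For context, the baseline cost the paper attacks is the exhaustive BMU scan; a minimal sketch (generic SOM pseudocode turned into Python, not the authors' tree-based method):

```python
import math

def find_bmu(neurons, x):
    """Exhaustive Best Matching Unit search: scan every neuron's
    weight vector and return the index of the one closest to the
    input x. This is the O(n) step that the random hyperplane
    trees aim to cut to roughly logarithmic time."""
    return min(range(len(neurons)),
               key=lambda i: math.dist(neurons[i], x))
```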
Robust Abrupt Motion Tracking via Adaptive Hamiltonian Monte Carlo Sampling

In this paper, we propose an adaptive Hamiltonian Monte Carlo sampling based tracking scheme within the Bayesian filtering framework. At the proposal step, the ordered over-relaxation method is used to draw the momentum item for the joint state variable, which can suppress random-walk behavior. In addition, we design an adaptive step-size scheme to simulate the Hamiltonian dynamics in order to reduce the simulation error. The proposed method is compared with several state-of-the-art tracking algorithms. Extensive experimental results show its superiority in handling various types of abrupt motions compared to traditional tracking algorithms.

Fasheng Wang, Xucheng Li, Mingyu Lu, Zhibo Xiao
Integrating Case-Based Reasoning with Reinforcement Learning for Real-Time Strategy Game Micromanagement

This paper describes the conception of a hybrid Reinforcement Learning (RL) and Case-Based Reasoning (CBR) approach to managing combat units in strategy games. Both methods are combined into an AI agent that is evaluated by using the real-time strategy (RTS) computer game StarCraft as a test bed. The eventual aim of this approach is an AI agent that has the same actions and information at its disposal as a human player. As part of an experimental evaluation, the agent is tested in different scenarios using optimized algorithm parameters. The integration of CBR for memory management is shown to improve the speed of convergence to an optimal policy, while also enabling the agent to address a larger variety of problems when compared to simple RL. The agent manages to beat the built-in game AI and also outperforms a simple RL-only agent. An analysis of the evolution of the case-base shows how scenarios and algorithmic parameters influence agent performance and will serve as a foundation for future improvement to the hybrid CBR/RL approach.

Stefan Wender, Ian Watson
A Topological Characterisation of Belief Revision over Infinite Propositional Languages

Belief revision mainly concerns how an agent updates her belief with new evidence. The AGM framework of belief revision models belief revision as revising theories by propositions. To characterise AGM-style belief revision operators, Grove proposed in 1988 a representation model using systems of spheres. This ‘spheres’ model is very influential and has been extended to characterise multiple belief revision operators. Several fundamental problems remain unsettled regarding this ‘spheres’ model. In this paper we introduce a topology on the set of all worlds of an infinite propositional language and use this topology to characterise systems of spheres. For each AGM operator ∘, we show that, among all systems of spheres deriving ∘, there is a minimal one which is contained in every other system. We give a topological characterisation of these minimal systems. Furthermore, we propose a method for extending an AGM operator to a multiple revision operator and show by an example that the extension is not unique. This negatively answers an open problem raised by Peppas.

Hua Meng, Sanjiang Li
Enhancing Binary Relevance for Multi-label Learning with Controlled Label Correlations Exploitation

Binary relevance (BR) is regarded as the most intuitive solution to learn from multi-label data, which decomposes the multi-label learning task into a number of independent binary learning tasks (one per class label). To amend its potential weakness of ignoring label correlations, many correlation-enabling extensions to BR have been proposed based on two major strategies, i.e. assuming random correlations with a chaining structure or taking full-order correlations with a stacking structure. However, in both strategies label correlations are only exploited in an uncontrolled manner, which may be problematic when error-prone and uncorrelated class labels arise. In this paper, to fulfill controlled label correlations exploitation, a novel enhancement to BR is proposed based on a two-stage filtering procedure. In the first stage, error-prone class labels are pruned from the label space based on holdout validation. In the second stage, closely-related class labels are identified based on supervised feature selection by viewing unpruned labels as features. Extensive experiments across fourteen multi-label data sets confirm the superiority of controlled label correlations exploitation, especially when a large number of class labels exist in the label space.

Yu-Kun Li, Min-Ling Zhang
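The BR decomposition itself can be sketched in a few lines. In this deliberately simplified stand-in, each per-label "learner" is only a majority-class stub; the point is the one-binary-problem-per-label structure, and the fact that correlations between labels are ignored entirely, which is the weakness the paper's filtering procedure targets.

```python
def majority(bits):
    """Majority class of one binary label column."""
    return int(sum(bits) * 2 >= len(bits))

class BinaryRelevance:
    """Binary relevance in miniature: split the multi-label task
    into one independent binary problem per label. Each per-label
    model here is just a majority-class stub (a deliberate
    simplification, not a real classifier)."""
    def fit(self, Y):
        # one independent binary model per label column
        self.models = [majority(col) for col in zip(*Y)]
        return self

    def predict(self):
        return list(self.models)
```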
Detection of Rain in Acoustic Recordings of the Environment

Environmental monitoring has become increasingly important due to the significant impact of human activities and climate change on biodiversity. Environmental sound sources such as rain and insect vocalizations are a rich and underexploited source of information in environmental audio recordings. This paper is concerned with the classification of rain within acoustic sensor recordings. We present the novel application of a set of features for classifying environmental acoustics: acoustic entropy, the acoustic complexity index, spectral cover, and background noise. In order to improve the performance of the rain classification system we automatically classify segments of environmental recordings into the classes of heavy rain or non-rain. A decision tree classifier is experimentally compared with other classifiers. The experimental results show that our system is effective in classifying segments of environmental audio recordings with an accuracy of 93% for the binary classification of heavy rain / non-rain.

Meriem Ferroudj, Anthony Truskinger, Michael Towsey, Liang Zhang, Jinglan Zhang, Paul Roe
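One feature family in this spirit is spectral entropy; a generic sketch (not the authors' exact acoustic-entropy definition). Broadband noise such as heavy rain tends to flatten the power spectrum, pushing the normalised entropy towards 1, while tonal sounds push it towards 0.

```python
import math

def spectral_entropy(power):
    """Normalised Shannon entropy of a power spectrum:
    1.0 for a perfectly flat (noise-like) spectrum,
    0.0 when all energy sits in a single bin."""
    total = sum(power)
    probs = [p / total for p in power if p > 0]
    h = -sum(p * math.log2(p) for p in probs)
    return h / math.log2(len(power))
```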
Improved Feature Transformations for Classification Using Density Estimation

Density based logistic regression (DLR) is a recently introduced classification technique that performs a one-to-one non-linear transformation of the original feature space to another feature space based on density estimations. This new feature space is particularly well suited for learning a logistic regression model. Whilst performance gains, good interpretability and time efficiency make DLR attractive, there exist some limitations to its formulation. In this paper, we tackle these limitations and propose several new extensions: 1) a more robust methodology for performing density estimations, 2) a method that can transform two or more features into a single target feature, based on the use of higher-order kernel density estimation, 3) an analysis of the utility of DLR for transfer learning scenarios. We evaluate our extensions using several synthetic and publicly available datasets, demonstrating that higher-order transformations have the potential to boost prediction performance and that DLR is a promising method for transfer learning.

Yamuna Kankanige, James Bailey
Exploiting Description Knowledge for Keyphrase Extraction

Keyphrase extraction is essential for many IR and NLP tasks. Existing methods usually use the phrases of the document separately without distinguishing the potential semantic correlations among them, or other statistical features from knowledge bases such as WordNet and Wikipedia. However, the mutual semantic information between phrases is also important, and exploiting their correlations may potentially help us more effectively extract the keyphrases. Generally, phrases in the title are more likely to be keyphrases reflecting the document topics, and phrases in the body are usually used to describe the document topics. We regard the relation between the title phrase and body phrase as a description relation. To this end, this paper proposes a novel keyphrase extraction approach by exploiting massive description relations. To make use of the semantic information provided by the description relations, we organize the phrases of a document as a description graph, and employ various graph-based ranking algorithms to rank the candidates. Experimental results on the real dataset demonstrate the effectiveness of the proposed approach in keyphrase extraction.

Fang Wang, Zhongyuan Wang, Senzhang Wang, Zhoujun Li
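The graph-based ranking step can be illustrated with plain PageRank over a phrase graph, a generic member of the ranking family the paper applies to its description graph (the node names below are hypothetical, and this is not the authors' exact ranker):

```python
def pagerank(graph, damping=0.85, iters=50):
    """Plain PageRank over a directed graph {node: [successors]}.
    Dangling nodes spread their rank uniformly."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n, succs in graph.items():
            targets = succs if succs else nodes  # dangling node
            share = damping * rank[n] / len(targets)
            for t in targets:
                new[t] += share
        rank = new
    return rank
```

A title phrase pointed to by several body phrases then outranks the phrases describing it.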
Amino Acids Pattern-Biased Spiral Search for Protein Structure Prediction

Proteins are essentially sequences of amino acids. They adopt specific folded 3-dimensional structures to perform specific tasks. The formation of 3-dimensional structures is largely guided by the constituent amino acids. Therefore, the positional presence of amino acids in a sequence might play important roles during the protein folding process. In this paper, we present a new heuristic derived from the positional patterns of amino acids in a sequence. With the help of a biased tabu tenure, we apply this heuristic within a spiral search algorithm. Spiral search is an efficient algorithm that develops a hydrophobic core in a protein structure by pulling hydrophobic amino acids towards the core centre in a spiral fashion. On a set of standard benchmark proteins, we experimentally show that applying our new heuristic consistently improves the performance of a spiral search algorithm.

Mahmood A. Rashid, Md. Masbaul Alam Polash, M. A. Hakim Newton, Md. Tamjidul Hoque, Abdul Sattar
A HMM POS Tagger for Micro-blogging Type Texts

The high volume of communication via micro-blogging type messages has created an increased demand for text processing tools customised to the unstructured text genre. The available text processing tools developed on structured texts have been shown to deteriorate significantly when used on unstructured, micro-blogging type texts. In this paper, we present the results of testing an HMM (Hidden Markov Model) based POS (Part-Of-Speech) tagging model customized for unstructured texts. We also evaluated the tagger against published CRF based state-of-the-art POS tagging models customized for Tweet messages using three publicly available Tweet corpora. Finally, we did cross-validation tests with both taggers by training them on one Tweet corpus and testing them on another one.

The results show that the CRF-based POS tagger from GATE performed approximately 8% better than the HMM model at token level; however, at the sentence level the performances were approximately the same. The cross-validation experiments showed that both taggers’ results deteriorated by approximately 25% at the token level and a massive 80% at the sentence level. A detailed analysis of this deterioration is presented, and the HMM trained model including the data has also been made available for research purposes.

Since HMM training is orders of magnitude faster compared to CRF training, we conclude that the HMM model, despite trailing by about 8% for token accuracy, is still a viable alternative for real time applications which demand rapid as well as progressive learning.

Parma Nand, Rivindu Perera, Ramesh Lal
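Decoding in an HMM tagger of the kind tested above rests on the standard Viterbi algorithm; a minimal sketch with a hypothetical two-tag model (the tag set, probabilities, and unknown-word floor below are illustrative, not the paper's trained model):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Standard Viterbi decoding: return the most probable state
    (tag) sequence for the observed tokens. Unknown words get a
    small floor probability -- a common, simplified smoothing."""
    V = [{s: (start_p[s] * emit_p[s].get(obs[0], 1e-9), [s])
          for s in states}]
    for o in obs[1:]:
        layer = {}
        for s in states:
            prob, path = max(
                (V[-1][ps][0] * trans_p[ps][s] * emit_p[s].get(o, 1e-9),
                 V[-1][ps][1] + [s])
                for ps in states)
            layer[s] = (prob, path)
        V.append(layer)
    return max(V[-1].values())[1]
```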
An Eye-Tracking Study of User Behavior in Web Image Search

Studies of Web search have mostly examined user behavior in text search. Recent studies have begun to explore users’ Web image search behavior through surveys, questionnaires, interviews, etc. This study investigates user behavior on a Web image search engine using eye-tracking. The goal is to gain insight into how users view search results and whether search task type and results presentation order influence their behavior. We found that search results at certain locations, e.g., the top-center area in a grid layout, were more attractive than others. The search task type significantly influenced user behavior while the results presentation order did not. In addition, we looked at the question of why a particular search result was selected, which revealed a variety of reasons. User behavior researchers and search engine developers can take advantage of these findings to create better search experiences for users.

Wanxuan Lu, Yunde Jia
Privacy Preserving in Location Data Release: A Differential Privacy Approach

Communication devices with GPS chips allow people to generate large volumes of location data. However, location datasets have been confronted with serious privacy concerns. Recently, several privacy techniques have been proposed, but most of them lack a strict privacy notion and can hardly resist the range of possible attacks. This paper proposes a private release algorithm to randomize location datasets under a strict privacy notion, differential privacy. This algorithm includes three privacy-preserving operations: Private Location Clustering shrinks the randomized domain, Cluster Weight Perturbation hides the weights of locations, and Private Location Selection hides the exact locations of a user. Theoretical analysis of utility confirms an improved trade-off between the privacy and utility of released location data. The experimental results further suggest this private release algorithm can successfully retain the utility of the datasets while preserving users’ privacy.

Ping Xiong, Tianqing Zhu, Lei Pan, Wenjia Niu, Gang Li
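A weight-perturbation step in the standard differential-privacy style can be sketched with the Laplace mechanism. This is a generic ε-differential-privacy sketch under an assumed sensitivity of 1, not the paper's exact algorithm:

```python
import math
import random

def laplace_noise(scale, rng):
    """Laplace(0, scale) sample via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def perturb_weights(weights, epsilon, sensitivity=1.0, rng=random):
    """Cluster Weight Perturbation, Laplace-mechanism style:
    add Laplace(sensitivity/epsilon) noise to each cluster's
    weight, so individual contributions are masked."""
    scale = sensitivity / epsilon
    return [w + laplace_noise(scale, rng) for w in weights]
```

Smaller ε means larger noise and stronger privacy, at the cost of utility.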
DRWS: A Model for Learning Distributed Representations for Words and Sentences

Vector-space distributed representations of words can capture syntactic and semantic regularities in language and help learning algorithms to achieve better performance in natural language processing tasks by grouping similar words. With the progress of machine learning techniques in recent years, much attention has been paid to this field. However, many NLP tasks such as text summarization and sentence matching treat sentences as atomic units. In this paper, we introduce a new model called DRWS which can learn distributed representations for words and variable-length sentences. Feature vectors for words and sentences are learned based on their probability of co-occurrence between words and sentences using a neural network. To evaluate the feature vectors learned by our model, we applied our model to the tasks of detecting word similarity and text summarization. Extensive experiments demonstrate the effectiveness of our proposed model in learning vector representations for words and sentences.

Chunwei Yan, Fan Zhang, Lian’en Huang
Quantum Computing for Pattern Classification

It is well known that for certain tasks, quantum computing outperforms classical computing. A growing number of contributions try to use this advantage to improve or extend classical machine learning algorithms by methods of quantum information theory. This paper gives a brief introduction to quantum machine learning using the example of pattern classification. We introduce a quantum pattern classification algorithm that draws on Trugenberger’s proposal for measuring the Hamming distance on a quantum computer [C. A. Trugenberger, Phys. Rev. Lett. 87, 2001] and discuss its advantages using handwritten digit recognition with data from the MNIST database.

Maria Schuld, Ilya Sinayskiy, Francesco Petruccione
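A classical emulation of the Hamming-distance idea can convey the flavour: each stored pattern votes for its class with a weight that decays with its Hamming distance from the query, roughly in the spirit of the probability weighting in Trugenberger's measurement scheme. The cos² weighting and the voting rule below are an illustrative assumption, not the paper's quantum algorithm:

```python
import math

def hamming(a, b):
    """Number of positions at which two bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def classify(patterns, labels, query):
    """Each stored pattern votes with weight cos^2(pi*d / (2n)),
    where d is its Hamming distance to the n-bit query;
    the class with the largest total vote wins."""
    n = len(query)
    votes = {}
    for p, lab in zip(patterns, labels):
        w = math.cos(math.pi * hamming(p, query) / (2 * n)) ** 2
        votes[lab] = votes.get(lab, 0.0) + w
    return max(votes, key=votes.get)
```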
Pivot-Based Bilingual Dictionary Extraction from Multiple Dictionary Resources

High quality bilingual dictionaries are rarely available for lower-density language pairs, especially for those that are closely related. Using a third language as a pivot to link two other languages is a well-known solution, and usually requires only two input bilingual dictionaries to automatically induce the new one. This approach, however, produces many incorrect translation pairs because the dictionary entries are normally not transitive due to polysemy and ambiguous words in the pivot language. Utilizing the complete structures of the input bilingual dictionaries positively influences the result, since dropped meanings can be countered. Moreover, an additional input dictionary may provide more complete information for calculating the semantic distance between word senses, which is key to suppressing wrong sense matches. This paper proposes an extended constraint optimization model for inducing new dictionaries of closely related languages from multiple input dictionaries, and its formalization based on Integer Linear Programming. Evaluations indicate that the proposal not only outperforms the baseline method, but also shows improvements in performance and scalability as more dictionaries are utilized.

Mairidan Wushouer, Donghui Lin, Toru Ishida, Katsutoshi Hirayama
Efficient Probabilistic Frequent Itemset Mining in Big Sparse Uncertain Data

Probabilistic frequent itemset (PFI) mining in uncertain data has been drawing increasing attention from data mining communities recently. However, data generated in network environments, such as machine logs and retail transactions, tends to be big, sparse and uncertain due to the influence of random factors including unavoidable network latency, unfaithful collection and unreliable transmission. Therefore, most available PFI mining algorithms are not adequately effective in dealing with uncertain data that is very big and extremely sparse. To address this issue, we propose a novel tree structure, the ApproxFP-Tree, and a parallelized ApproxFP algorithm based on the MapReduce platform, aiming to mine all PFIs in big, sparse and uncertain data efficiently. Experimental results on real-world and synthetic databases are illustrated and analyzed to show that our approach is significantly more efficient than the state-of-the-art algorithms.

Jing Xu, Ning Li, Xiao-Jiao Mao, Yu-Bin Yang
Reliable Fault Diagnosis of Low-Speed Bearing Defects Using a Genetic Algorithm

This paper proposes a genetic algorithm-based dimensionality reduction approach for reliable low-speed rolling element bearing fault diagnosis by exploiting both inter-class separability and intra-class compactness. In this study, multiple bearing defects under different load conditions are used to validate the effectiveness of the proposed dimensionality reduction methodology. In addition, the classification accuracy of the proposed approach is compared with that using two conventional dimensionality reduction techniques and the experimental results show that the proposed approach outperforms these methods, achieving an average classification accuracy of 94.8%.

Phuong Nguyen, Myeongsu Kang, Jaeyoung Kim, Jong-Myon Kim
Intrinsic Learning of Dynamic Bayesian Networks

Programs that learn Bayesian networks normally learn directed acyclic graphs (DAGs) of arbitrary structure, including those with repeating structures, such as dynamic Bayesian networks (DBNs). Perhaps for that reason there is relatively little literature on learning DBNs specifically, and more focus on applying general learners to the task. Here we modify a general causal discovery program to search specifically for dynamic Bayesian networks, and we identify the benefits in the quality of the models discovered and the time taken to discover them.

Alex Black, Kevin B. Korb, Ann E. Nicholson
A Game Model with Private Goal and Belief

In real life, it is difficult to uphold some assumptions of classic game theory. For example, when a player of a game chooses a strategy, it should consider not only the payoff from taking the strategy and others’ best responses, but also its goal and beliefs about others, which are normally private in real life. However, in game theory these are assumed to be “common knowledge” among players. To address the problem, this paper proposes a game model that allows for the private goals and beliefs of players, which are represented in propositional formulae in order to specify the reasoning procedures by which the players choose their strategies in a game. Moreover, we reveal how players’ private goals and beliefs influence the outcomes of their game. Finally, we exemplify how our model can be used to effectively explain an interesting game.

Guihua Wu, Xudong Luo, Qiaoting Zhong
On Efficient Evolving Multi-Context Systems

Managed Multi-Context Systems (mMCSs) provide a general framework for integrating knowledge represented in heterogeneous KR formalisms. Recently, evolving Multi-Context Systems (eMCSs) have been introduced as an extension of mMCSs that adds the ability to both react to, and reason in the presence of, commonly temporary, dynamic observations, and to evolve by incorporating new knowledge. However, the general complexity of such an expressive formalism may simply be too high in cases where huge amounts of information have to be processed within a short amount of time, or even instantaneously. In this paper, we investigate under which conditions eMCSs may scale in such situations and we show that such polynomial eMCSs can be applied in a practical use case.

Matthias Knorr, Ricardo Gonçalves, João Leite
Analyzing Mediator-Activity Effects for Trust-Network Evolution in Social Media

We analyze the evolution of trust networks in social media sites from the perspective of mediators. To this end, we propose two stochastic models that simulate the dynamics of creating a trust link under the presence of mediators, the A-ME and A-MAE models, where the A-ME model analyzes mediator effects for trust-network evolution in terms of mediator types, and the A-MAE model, an extension of the A-ME model, analyzes mediator-activity effects for trust-network evolution. We present an efficient method of inferring the values of model parameters from an observed sequence of trust links and user activities. Using real data from Epinions, we experimentally show that the A-MAE model significantly outperforms the A-ME model for predicting trust links in the near future under the presence of mediators, and demonstrate the effectiveness of mediator-activity information for trust-network evolution. We further clarify, by using the A-ME and A-MAE models, several characteristic properties of trust-link creation probability in the Epinions data.

Keito Hatta, Masahito Kumano, Masahiro Kimura, Kazumi Saito, Kouzou Ohara, Hiroshi Motoda
A Weighted Minimum Distance Using Hybridization of Particle Swarm Optimization and Bacterial Foraging

In a previous work we used a popular bio-inspired algorithm, particle swarm optimization (PSO), to improve the performance of a well-known representation method of time series data, the symbolic aggregate approximation (SAX). There, PSO was used to propose a new weighted minimum distance (WMD) for SAX to recover some of the information loss resulting from the original minimum distance MINDIST on which SAX is based. WMD assigns different weights to different segments of the time series according to their information content, where these weights are determined using PSO. We showed how SAX in conjunction with WMD can give better results in time series classification than the original SAX, which uses MINDIST. In this paper we revisit this problem and propose optimizing WMD by using a hybrid of PSO and another bio-inspired optimization method, Bacterial Foraging (BF), an effective bio-inspired optimization algorithm for solving difficult optimization problems. We show experimentally how, by using this hybrid to set the weights of WMD, we can obtain better classification results than those obtained when using PSO to set these weights.

Muhammad Marwan Muhammad Fuad
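The shape of a weighted minimum distance can be sketched as follows. This is a simplified illustration: the per-segment symbol distance is abstracted as a caller-supplied `cell_dist` (standing in for SAX's breakpoint lookup table), and the sqrt(n/w) compression factor of the original MINDIST is omitted; the weights are the ones the paper tunes with the PSO/BF hybrid.

```python
import math

def wmd(word_a, word_b, weights, cell_dist):
    """Weighted minimum distance between two SAX words:
    each segment's symbol-to-symbol distance is scaled by a
    per-segment weight before the usual root-of-sum-of-squares."""
    return math.sqrt(sum(
        wt * cell_dist(a, b) ** 2
        for a, b, wt in zip(word_a, word_b, weights)))
```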
Discriminative Metric Learning for Shape Variation Object Tracking

It is a challenging task to track a shape variation object. In this paper, a novel discriminative metric learning approach based on a multi-feature appearance model is proposed for shape variation object tracking. Initially, we exploit shape invariant properties and form a multi-feature appearance model, which consists of hue features, the center-symmetric local binary pattern (CSLBP) at multiple scales, and orientation features. With the obtained multi-feature appearance descriptor, we propose an improved bias discriminative component analysis (BDCA) classifier to distinguish the target object and background. In addition, a novel Mahalanobis distance metric is learned by the BDCA classifier, which projects the original space into a new space. Furthermore, based on the learned distance metric, the tracked object can be located in the new transformed feature space by matching the candidate image regions with templates in the library. Compared with several other tracking algorithms, the experimental results demonstrate that the proposed algorithm is able to track an object accurately, especially under object pose change, rotation and occlusion.

Liujun Zhao, Qingjie Zhao, Wei Guo, Yuxia Wang
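The learned distance underlying this kind of method is the standard (squared) Mahalanobis form (x−y)ᵀM(x−y); a minimal sketch, where the metric matrix M is an arbitrary illustrative matrix rather than one learned by BDCA:

```python
def mahalanobis2(x, y, M):
    """Squared Mahalanobis distance (x-y)^T M (x-y) under a
    metric matrix M. With M = identity this reduces to the
    squared Euclidean distance."""
    d = [xi - yi for xi, yi in zip(x, y)]
    Md = [sum(M[i][j] * d[j] for j in range(len(d)))
          for i in range(len(d))]
    return sum(di * mdi for di, mdi in zip(d, Md))
```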
Constraint-Based Evolutionary Local Search for Protein Structures with Secondary Motifs

On-lattice protein structure prediction with empirical energy minimisation has drawn significant research effort. However, energy minimisation with free modelling does not necessarily lead to structures that are similar to the native structure of the given protein. In this paper, we show that energy minimisation has a positive correlation with structural similarity measures if we consider secondary motifs. We then present a constraint-based evolutionary local search framework for on-lattice protein structure prediction using secondary structural information. We approximate secondary motifs such as α-helices and β-strands on the lattice and propose a set of neighbourhood generation operators that respect those motifs. Our experimental results show significant improvement over the state-of-the-art methods in terms of similarity with the native structures determined by laboratory methods.

Swakkhar Shatabda, M. A. Hakim Newton, Abdul Sattar
A Fast and Robust Multi-color Object Detection Method with Application to Color Chart Detection

In this paper, we focus on robust multi-color object detection against cluttered backgrounds and variable illumination, with color chart detection as the target application. The task is characterized by a wide range of color variation combined with complex backgrounds; arbitrary placement of the chart in the scene further complicates detection. Conventional methods for this problem normally give only an approximate bounding box, lacking an internal geometrical representation. Our method adopts a coarse-to-fine strategy to predict the chart location and recover its accurate topological structure, e.g. the position and boundary of each constituent color area. With this fine detection result, color deviation in the input image can easily be corrected using off-the-shelf software such as Photoshop. Experimental results on a public dataset demonstrate that our system works effectively in real time and gives a detection rate superior to the state of the art. The robustness of this method to large color distortion makes it equally applicable to the detection of general multi-color objects such as address plates and traffic signs.

Song Wang, Akihiro Minagawa, Wei Fan, Jun Sun, Liang Xu
Region-Based Object Categorisation Using Relational Learning

Inductive Logic Programming (ILP) is used to learn classifiers for generic object recognition from range images and 3D point clouds. The point cloud is segmented into primitive regions, followed by labelling subsets of regions representing an object. Predicates describing those regions and their relationships are constructed and used for learning. Previous work examined using planar regions as the only primitive shape. We extend this by adding two more primitives: cylinders and spheres. We compare the performance of learning against the planar-only method using some common household objects. The results show that the additional primitives reduce the number of features required to describe an instance and also significantly reduce the learning time without loss of accuracy.

Reza Farid, Claude Sammut
Competitive Learning with Pairwise Constraints for Text

Text clustering and constrained clustering have both been important areas of research over the years. The commonly used vector space representation of text data involves high-dimensional sparse matrices. We present algorithms that improve on two existing algorithms, Online Linear Constrained Vector Quantization Error (O-LCVQE) and Constrained Rival Penalized Competitive Learning (C-RPCL), by replacing Euclidean-distance-based traditional k-means with spherical k-means. Several experiments demonstrate that the proposed algorithms work better for high-dimensional text data in terms of normalized mutual information. We further show that k-means with rival penalized competitive learning is a much better alternative than plain k-means when applied to text data. The performances of k-means and spherical k-means come closer when the distance function is weighted by the neuron winning frequency.
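As a rough illustration of the underlying idea (not the authors' implementation), spherical k-means normalises all vectors to unit length and assigns each point to the centroid of maximum cosine similarity rather than minimum Euclidean distance, which suits sparse high-dimensional text vectors:

```python
import math

def normalize(v):
    """Project a vector onto the unit sphere (L2 norm 1)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def assign_spherical(points, centroids):
    """Spherical k-means assignment step: pick, for each point,
    the centroid with the largest cosine similarity (dot product
    of unit vectors) instead of the smallest Euclidean distance."""
    points = [normalize(p) for p in points]
    centroids = [normalize(c) for c in centroids]
    labels = []
    for p in points:
        sims = [sum(a * b for a, b in zip(p, c)) for c in centroids]
        labels.append(sims.index(max(sims)))
    return labels
```

For unit vectors the two criteria agree in ranking, but on raw term-frequency vectors cosine ignores document length, which is why it is preferred for text.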

Muktamala Chakrabarti, Asim Kumar Pal
Hierarchical Meta-Rules for Scalable Meta-Learning

The Pairwise Meta-Rules (PMR) method proposed in [18] has been shown to improve the predictive performance of several meta-learning algorithms for the algorithm ranking problem. Given m target objects (e.g., algorithms), the training complexity of the PMR method with respect to m is quadratic: $\binom{m}{2} = m \times (m - 1) / 2$. This is usually not a problem when m is moderate, such as when ranking 20 different learning algorithms. However, for problems with a much larger m, such as the meta-learning-based parameter ranking problem, where m can be 100+, the PMR method is less efficient. In this paper, we propose a novel method named Hierarchical Meta-Rules (HMR), based on the theory of orthogonal contrasts. The proposed HMR method has a linear training complexity with respect to m, providing a way of dealing with a large number of objects that the PMR method cannot handle efficiently. Our experimental results demonstrate the benefit of the new method in the context of meta-learning.
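The quadratic blow-up is easy to see by counting the unordered pairs PMR must train one meta-rule for (a generic illustration, not code from the paper):

```python
from itertools import combinations

def pmr_meta_rule_count(m):
    """PMR trains one binary meta-rule per unordered pair of the
    m target objects, i.e. C(m, 2) = m * (m - 1) / 2 rules."""
    return sum(1 for _ in combinations(range(m), 2))
```

Ranking 20 algorithms needs 190 meta-rules, but ranking 100 parameter settings already needs 4,950, whereas a linear scheme such as HMR needs on the order of m.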

Quan Sun, Bernhard Pfahringer
Combining Career Progression and Profile Matching in a Job Recommender System

In this paper we consider the problem of job recommendation, suggesting suitable jobs to users based on their profiles. We compare a baseline method treating users and jobs as documents, where suitability is measured using cosine similarity, with a model that incorporates job transitions trained on the career progressions of a set of users. We show that the job transition model outperforms cosine similarity. Furthermore, a cascaded system combining career transitions with cosine similarity generates more recommendations of a similar quality. The analysis is conducted by examining data from 2,400 LinkedIn users, and evaluated by determining how well the methods predict users’ current positions from their profiles and previous position history.

Bradford Heap, Alfred Krzywicki, Wayne Wobcke, Mike Bain, Paul Compton
IR Stereo Kinect: Improving Depth Images by Combining Structured Light with IR Stereo

RGB-D sensors such as the Microsoft Kinect or the Asus Xtion are inexpensive 3D sensors. A depth image is computed by calculating the distortion of a known infrared (IR) light pattern projected into the scene. While these sensors are great devices, they have some limitations: the distance they can measure is limited, and they suffer from reflection problems on transparent, shiny, or very matte and absorbing objects. If more than one RGB-D camera is used, the IR patterns interfere with each other, resulting in a massive loss of depth information. In this paper, we present a simple and powerful method to overcome these problems: a stereo RGB-D camera system that combines the advantages of RGB-D cameras with those of stereo camera systems. The idea is to use the IR images of two sensors as a stereo pair to generate a depth map. The IR patterns emitted by the projectors are exploited to enhance dense stereo matching even when the observed objects or surfaces are texture-less or transparent. The resulting disparity map is then fused with the depth map provided by the RGB-D sensor to fill the regions and holes that appear because of interference, or due to transparent or reflective objects. Our results show that the density of depth information is increased, especially for transparent, shiny or matte objects.
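Schematically, the final fusion step amounts to a per-pixel fallback: keep the structured-light depth where it is valid and fill its holes from the IR-stereo depth map. This sketch assumes the two depth maps are already registered to the same pixel grid (a real system would also need calibration and confidence weighting):

```python
def fuse_depth(kinect_depth, stereo_depth, invalid=0):
    """Per-pixel fusion of two registered depth maps: take the
    structured-light value where valid, otherwise fall back to the
    IR-stereo value to fill holes caused by interference,
    transparency or reflections."""
    return [[k if k != invalid else s
             for k, s in zip(krow, srow)]
            for krow, srow in zip(kinect_depth, stereo_depth)]
```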

Faraj Alhwarin, Alexander Ferrein, Ingrid Scholl
Polynomially Bounded Forgetting

Forgetting is one of the most important concepts in logic-based problem solving, from both a theoretical and a practical point of view. However, the size of the forgetting result is exponential in the worst case. To address this issue, we consider the problem of polynomially bounded forgetting, i.e., when the forgetting result can be expressed in polynomial size. We coin the notion of polynomially bounded forgetting and distinguish several different levels. We then show that forgetting a set of variables under a polynomial bound can be reduced to forgetting a single one. However, checking variable polynomially bounded forgetting is $\Sigma_2^P$-complete. Hence, we identify some sufficient conditions for this problem. Finally, we consider polynomially bounded forgetting in CNF formulas.

Yi Zhou
Similarity Search by Generating Pivots Based on Manhattan Distance

We address the problem of improving the search efficiency of range queries based on Manhattan distance. To this end, we propose a new pivot generation method (the PGM method) formulated as an iterative algorithm whose convergence is guaranteed within a finite number of iterations. In experiments using three databases of hand-written characters, newspaper articles and book reviews, we confirmed that the proposed method outperforms a representative conventional method (the BNC method), whose pivots are limited to objects in the datasets, in terms of objective function values, computation times of pivot selection or generation, range query performance with arbitrary range settings, and qualitative comparison of visualization results. Moreover, we experimentally show that the PGM method works much better than the BNC method for sparse high-dimensional objects, rather than dense low-dimensional ones.
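The pivot generation itself is the paper's contribution; what any pivot (generated or selected) buys at query time is triangle-inequality pruning. A minimal sketch, assuming distances from every object to every pivot were precomputed at index time:

```python
def l1(a, b):
    """Manhattan (L1) distance."""
    return sum(abs(x - y) for x, y in zip(a, b))

def range_query(query, objects, pivots, pivot_dists, r):
    """Answer a range query of radius r using precomputed
    object-to-pivot L1 distances. By the triangle inequality,
        |d(q, p) - d(p, x)| <= d(q, x),
    so object x can be skipped without computing d(q, x)
    whenever this lower bound already exceeds r."""
    q_to_p = [l1(query, p) for p in pivots]
    results = []
    for i, x in enumerate(objects):
        lower = max(abs(qp - pivot_dists[i][j])
                    for j, qp in enumerate(q_to_p))
        if lower > r:
            continue  # pruned: x cannot be within range r
        if l1(query, x) <= r:
            results.append(i)
    return results
```

Better pivots make the lower bound tighter, so more objects are pruned; PGM's advantage is that its pivots need not be objects from the dataset.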

Eri Kobayashi, Takayasu Fushimi, Kazumi Saito, Tetsuo Ikeda
MDSR: An Eigenvector Approach to Core Analysis of Multiple Directed Graphs

In this paper, we address the problem of extracting core portions of a network represented as a multiple directed graph. For this purpose, we propose a new core extraction method called MDSR (Multiple-Directed-Spectral-Relaxation) based on an eigenvector approach. The MDSR method extracts a user-defined number of core portions by repeating three steps: 1) calculating the left- and right-eigenvectors of an adjacency matrix of the network; 2) quantizing the elements of these eigenvectors to binary values as indicators for extracting a core portion; and 3) removing the links of the extracted portion. The left- and right-eigenvectors at the first iteration correspond to the hub and authority vectors of the HITS algorithm, respectively. In experiments using a reply network on Twitter constructed as a multiple directed graph, we demonstrate that the MDSR method is able to uncover communities such as groups of similar account names and users who like to send messages to ‘null’, and we show that some of these communities overlap. Furthermore, we confirm that such communities are hard to find automatically with two methods constructed by straightforwardly extending the conventional k-core method.
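The HITS building block the abstract refers to can be sketched as a power iteration on a directed edge list (a generic illustration of hub/authority scores, not the MDSR method itself):

```python
def hits(edges, n, iters=50):
    """HITS by power iteration on n nodes with directed edges (i, j):
    repeatedly multiplying by the adjacency matrix and its transpose
    converges to the principal right- and left-eigenvector directions,
    i.e. the authority and hub score vectors."""
    hub = [1.0] * n
    auth = [1.0] * n
    for _ in range(iters):
        auth = [sum(hub[i] for i, j in edges if j == v) for v in range(n)]
        hub = [sum(auth[j] for i, j in edges if i == v) for v in range(n)]
        norm_a = sum(x * x for x in auth) ** 0.5 or 1.0
        norm_h = sum(x * x for x in hub) ** 0.5 or 1.0
        auth = [x / norm_a for x in auth]
        hub = [x / norm_h for x in hub]
    return hub, auth
```

Nodes that many others point to get high authority scores; nodes that point to many authorities get high hub scores.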

Shoko Kato, Kazumi Saito, Kazuhiro Kazama, Tetsuji Satoh
BEST: An Efficient Algorithm for Mining Frequent Unordered Embedded Subtrees

This paper presents an algorithm for mining unordered embedded subtrees using the balanced-optimal-search canonical form (BOCF). A tree-structure-guided enumeration approach is defined using BOCF for systematically enumerating only the valid subtrees. Based on this canonical form and enumeration technique, the balanced optimal search embedded subtree mining algorithm (BEST) is introduced for mining embedded subtrees from a database of labelled rooted unordered trees. Extensive experiments on both synthetic and real datasets demonstrate the efficiency of BEST over the two state-of-the-art algorithms for mining embedded unordered subtrees, SLEUTH and U3.

Israt Jahan Chowdhury, Richi Nayak
Arduface: An Embedded System Analysis Tool

An embedded system combines many hardware and software components, and the more components are used, the more complicated the relationships among them become. Understanding an embedded system properly therefore requires high-level expertise in both hardware and software; however, not all developers are experts in both. This research aims to solve the hardware-software component mapping problem in embedded systems. A software model graph and a hardware model graph are extracted from the source code and the device configuration respectively. Using a customised graph matching technique, our method automatically identifies the code block corresponding to any selected hardware component. Our experimental results show that our method exhibits high precision for most hardware components but generally low recall. We discuss the reasons and suggest possible extensions.

Wanli Xue, Hyunsuk Chung, Soyeon Caren Han, Yangsok Kim, Byeong Ho Kang
K-means Pattern Learning for Move Evaluation in the Game of Go

The game of Go is one of the biggest challenges in the field of computer games: the large board makes Go very complex and hard to evaluate. In this paper, we propose a method that reduces the complexity of Go by learning and extracting patterns from game records, and which is more efficient and stronger than our chosen baseline. The method has two major components: a) a K-means-based pattern learning method that learns and extracts patterns from game records, and b) a perceptron that learns the win rates of Go situations. We build an agent to evaluate the performance of our method and obtain at least a 20% performance improvement, or a 25% saving in computing power, in most circumstances.

Yunzhao Liang, Shuoying Chen
Semantic Interpretation of Requirements through Cognitive Grammar and Configuration

Many attempts have been made to apply Natural Language Processing to requirements specifications. However, typical approaches rely on shallow parsing to identify object-oriented elements of the specifications (e.g. classes, attributes, and methods). As a result, the models produced are often incomplete and imprecise, and require manual revision and validation. In contrast, we propose a deep Natural Language Understanding approach to create complete and precise formal models of requirements specifications. We combine three main elements to achieve this: (1) acquisition of a lexicon from a user-supplied glossary, requiring little specialised prior knowledge; (2) flexible syntactic analysis based purely on word order; and (3) Knowledge-based Configuration, which unifies several semantic analysis tasks and allows the handling of ambiguities and errors. Moreover, we provide feedback to the user, allowing the refinement of specifications into a precise and unambiguous form. We demonstrate the benefits of our approach on an example from the PROMISE requirements corpus.

Matt Selway, Wolfgang Mayer, Markus Stumptner
Rotation-Based Learning: A Novel Extension of Opposition-Based Learning

The opposition-based learning (OBL) scheme is an effective mechanism for enhancing soft computing techniques, but it also has some limitations. To extend OBL, this paper proposes a novel rotation-based learning (RBL) mechanism, in which a rotated number is obtained by applying a specified rotation angle to the original number along a specific circle in two-dimensional space. By assigning different angles, RBL can reach any point in the search space, and can therefore be more flexible than OBL in finding promising candidate solutions in complex search spaces. To verify its effectiveness, the RBL mechanism is embedded into differential evolution (DE), yielding the rotation-based differential evolution (RDE) algorithm. Experimental studies are conducted on a set of widely used benchmark functions. Simulation results demonstrate the effectiveness of the RBL mechanism, and the proposed RDE algorithm performs significantly better than, or at least comparably to, several state-of-the-art DE variants.
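The 2D rotation the abstract describes can be sketched as follows (a generic illustration; the paper's exact construction of the circle and center may differ). Note that rotating by π about the center recovers the opposite point, which is how OBL appears as a special case:

```python
import math

def rotate(point, center, theta):
    """Rotation-based candidate generation: rotate a 2D point about
    a center by angle theta along a circle. Different angles reach
    different points on the circle; theta = pi yields the opposite
    point, i.e. the opposition-based learning candidate 2c - x."""
    x, y = point[0] - center[0], point[1] - center[1]
    xr = x * math.cos(theta) - y * math.sin(theta)
    yr = x * math.sin(theta) + y * math.cos(theta)
    return (center[0] + xr, center[1] + yr)
```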

Huichao Liu, Zhijian Wu, Huanzhe Li, Hui Wang, Shahryar Rahnamayan, Changshou Deng
Complexity of Exploiting Privacy Violations in Strategic Argumentation

Recently, Governatori et al. (2014) formulated a notion of strategic argumentation in the context of a dialogue game with incomplete knowledge, where arguments are constructed in a defeasible logic. Such a framework reflects aspects of the practice of legal argumentation. They show that a certain element of reasoning within strategic argumentation is NP-complete. In this paper we establish several related complexity results. To begin with, we present a much simpler proof of this result. Then the result is extended to allow the players the flexibility to have a wider variety of aims, and to address reasoning in a broad range of defeasible logics. Finally, we introduce some computational problems arising from violation of the privacy of a player in a strategic argumentation game, and establish their complexity.

Michael J. Maher
Efficient Vehicle Localization Based on Road-Boundary Maps

Localization is a critical task for autonomous vehicles and provides a foundation for the planning and perception modules. In this paper, we propose a novel vehicle localization method based on road-boundary maps. First, a fast road boundary detection method based on random forests is presented. Second, global and local road-boundary maps are built from the boundary detection results. Finally, an efficient localization algorithm using the road-boundary maps in a Bayesian framework is implemented. Our method is evaluated on data collected from an urban environment, and the results show that it can be used for efficient road boundary detection and accurate vehicle localization.

Dawei Zhao, Tao Wu, Yuqiang Fang, Ruili Wang, Jing Dai, Bin Dai
An Assessment of Online Semantic Annotators for the Keyword Extraction Task

The task of keyword extraction aims at capturing the expressions (or entities) that best represent the main topics of a document. Given the rapid adoption of online semantic annotators and their contribution to the growth of the Semantic Web, one important task is to assess their quality. This article presents an evaluation of the quality and stability of semantic annotators on domain-specific and open-domain corpora. We evaluate five semantic annotators and compare them to two state-of-the-art keyword extractors, namely KP-miner and Maui. Our evaluation demonstrates that semantic annotators are not able to outperform keyword extractors and that annotators perform best on domains with a high keyword density.

Ludovic Jean-Louis, Amal Zouaq, Michel Gagnon, Faezeh Ensan
Evaluation of Terminological Schema Matching and Its Implications for Schema Mapping

Large amounts of schema data describing the data structure of various domains, such as purchase orders, health, publications, geography, agriculture, environment and music, have recently become available over the Web. Schema mapping aims to solve the schema heterogeneity problem in such data. This research thoroughly examines how string similarity metrics and text processing techniques affect the performance of terminological schema mapping, and highlights their limitations. Our experimental study demonstrates that the performance of terminological schema matching is significantly improved by using text processing techniques. However, the improvement varies between datasets because of their different characteristics, and even with all text processing techniques applied, some datasets still exhibit low performance. Our research supports the claim that a system able to manage the context-dependent characteristics of terminological schema matching is essential for better schema mapping algorithms.

Sarawat Anam, Yang Sok Kim, Byeong Ho Kang, Qing Liu
The Role of Linked Data in Content Selection

This paper explores the appropriateness of utilizing Linked Data as a knowledge source for content selection. Content selection, a crucial subtask in Natural Language Generation, determines the relevance of content from a knowledge source to a given communicative goal. The recent online era has enabled us to accumulate extensive amounts of generic knowledge, some of which has been made available as structured knowledge sources for computational natural language processing. This paper proposes a model for content selection utilizing a generic structured knowledge source, DBpedia, the structured counterpart of Wikipedia. The proposed model uses log likelihood to rank contents from DBpedia Linked Data by relevance to a communicative goal. We performed experiments using DBpedia as the Linked Data resource with two keyword datasets as communicative goals: keywords extracted from the QALD-2 training dataset were used to optimize parameters, and the QALD-2 test dataset was used for testing. The results were evaluated against a verbatim-based selection strategy and show that our model performs 18.03% better than verbatim selection.

Rivindu Perera, Parma Nand
On Adding Inverse Features to the Description Logic $\mathcal{CFD}^{\forall}_{nc}$

We consider how inverse features can be added to the description logic $\mathcal{CFD}^{\forall}_{nc}$, a feature-based dialect with PTIME algorithms for various reasoning tasks over $\mathcal{CFD}^{\forall}_{nc}$ knowledge bases. We show how a straightforward addition of unqualified inverse features makes the tasks of reasoning about logical consequences and about knowledge base consistency intractable. We then present syntactic restrictions on $\mathcal{CFD}^{\forall}_{nc}$ knowledge bases that relate to combinations of value restrictions and inverses and to combinations of value restrictions and path functional dependencies, and show how such restrictions lead to PTIME algorithms for both tasks. Finally, we show how the resulting dialect, called $\mathcal{CFDI}^{\forall-}_{nc}$, can be used to address performance issues relating to relational data sources as well as RDF data sources conforming to DL-Lite$^{\mathcal F}_{\mathrm{core}}$, a description logic dialect of relevance to the W3C OWL 2 QL profile.

David Toman, Grant Weddell
Tracking Perceptually Indistinguishable Objects Using Spatial Reasoning

Intelligent agents perceive the world mainly through images captured at different time points, and the ability to track objects from one image to the next is fundamental for understanding changes in the world. Tracking becomes challenging when there are multiple perceptually indistinguishable objects (PIOs), i.e., objects that have the same appearance and cannot be visually distinguished; all PIOs must then be re-identified whenever a new observation is made. In this paper we consider the case where changes in the world are caused by a single physical event and where matches between PIOs of subsequent observations must be consistent with the effects of that event.

We present a solution to this problem based on qualitative spatial representation and reasoning. It can improve tracking accuracy significantly by qualitatively predicting possible motions of objects and discarding matches that violate spatial and physical constraints. We evaluate our solution in a real video gaming scenario.

Xiaoyu Ge, Jochen Renz
A Personalized Gesture Interaction System with User Identification Using Kinect

In this paper, we present a Kinect-based real-time personalized gesture interaction system with user identification, targeting tiled-display environments. By applying an HMM-GSS model for user identification and a DTW algorithm for personalized gesture recognition, the system offers a more intuitive and user-friendly experience. Our experiments show that the HMM-GSS model achieves a nearly 13.95% accuracy improvement over a conventional HMM-based classifier, and with feature selection and a comparison of classification strategies, the DTW classifier obtains over 95.7% accuracy. Finally, the prototype system demonstrates high gesture recognition accuracy in both user identification and real-time interaction with a tiled-display visualization application.

Haikuo Zhang, Wenjun Wu, Yihua Lou
POPVI: A Probability-Based Optimal Policy Value Iteration Algorithm

Point-based value iteration methods are a family of effective algorithms for solving POMDP models, and their performance depends mainly on how the search space is explored. Although global optimization can be achieved by algorithms such as HSVI and GapMin, their exploration of the optimal action is overly optimistic, which slows them down. In this paper, we propose a novel heuristic search method, POPVI (Probability-based Optimal Policy Value Iteration), which explores the optimal action based on probability. In its depth-first heuristic exploration, the algorithm uses a Monte-Carlo method to estimate the probability that each action is optimal according to the distribution of the actions' Q-value functions, applies the action with the maximum probability, and greedily explores the subsequent belief point of greatest uncertainty. Experimental results show that POPVI outperforms HSVI, and by a large margin as the scale of the POMDP increases.

Feng Liu, Bin Luo
Bidding with Fees and Setting Effective Fees in a Double Auction Marketplace

A double auction marketplace usually charges fees to traders in order to make a profit. Although bidding strategies in double auctions have been widely studied, little work has been done on analysing how market fees affect bidding strategies and how the marketplace selects appropriate fees to make a profit. In this paper, we investigate these two problems. Specifically, we choose four typical types of fees and use a computational learning approach to analyse the Nash equilibrium bidding strategies of traders when different types of fees are charged. In doing so, we draw insights about how different types of market fees affect the Nash equilibrium bidding strategies. Furthermore, we investigate which type of market fee is most effective in maximising market profits while keeping traders in the marketplace.

Bing Shi
Grounding Epistemic Modality in Speakers’ Judgments

As an alternative to subjective and informal expert opinions, we propose to ground natural language expressions for epistemic modality, such as “possibly” or “might indicate”, in empirical data solicited from members of the speakers’ community in epistemic judgment tasks. In an online questionnaire study, we asked 86 subjects to rate the degree of certainty of propositions relative to 27 linguistic expressions. Based on a thorough statistical analysis of the responses we received, we are able to quantify the likelihood as well as ranges of certainty associated with these propositions. Moreover, we generalize these data in terms of an empirically motivated, tri-partite category system of expression sets for propositional certainty which can be reused, e.g. for standardizing annotation guidelines.

Udo Hahn, Christine Engelmann
Exploring Review Content for Recommendation via Latent Factor Model

Recommender systems have been widely studied and applied in many real applications such as e-commerce sites, product review sites, and mobile app stores. In these applications, users provide feedback on items in the form of ratings, usually accompanied by a few words (i.e., review content) justifying the rating. Such review content may contain rich information about user tastes and item characteristics. However, existing recommendation methods (e.g., collaborative filtering) mainly make use of historical ratings while ignoring the content information. In this paper, we propose to explore review content for better recommendation via a latent factor model. In particular, we propose two strategies for leveraging review content: the first incorporates review content as a guidance term to guide the learnt latent factors of user preferences; the second formulates a regularization term to constrain the preference differences between similar users. Experimental evaluations on two real data sets demonstrate the usefulness of review content and the effectiveness of the proposed method.

Xiaoyu Chen, Yuan Yao, Feng Xu, Jian Lu
Question Classification Based on Fine-Grained PoS Annotation of Nouns and Interrogative Pronouns

Question classification is one of the key components of an open-domain question answering system and has become a research focus in natural language processing. Its task is to assign a class label to each question according to the semantic type of the expected answer. Since classification precision is affected by the coarse annotation granularity of syntactic features and by noise in lexical features, we propose new classification features based on fine-grained part-of-speech (PoS) annotation of nouns and interrogative pronouns. We first refine the annotation granularity of syntactic features, and then extract head words with high occurrence frequency together with their fine-grained PoS tags to produce new features that reduce the noise in lexical features. A new feature extraction algorithm based on fine-grained PoS annotation is applied to improve the precision of feature extraction. The experimental results demonstrate the effectiveness of the proposed method for both Chinese and English question classification.

Juan Le, ZhenDong Niu, Chunxia Zhang
Probabilistic Belief Revision via Imaging

While Bayesian conditioning fits in nicely with probabilistic belief expansion, its use is problematic in the context of non-trivial belief revision. Lewis’ use of imaging, based on closeness between possible worlds, offers a way to overcome this limitation in the context of belief update (in a dynamic environment). In this paper, we explore the use of imaging as a means to construct probabilistic belief revision. Specifically, we present explicit constructions of three candidate strategies, dubbed Naive, Gullible and Cunning, that are based on imaging, and investigate their properties.

Kinzang Chhogyal, Abhaya Nayak, Rolf Schwitter, Abdul Sattar
Classification with Sign Random Projections

Sign random projections (SRP) transform a multi-dimensional vector into a binary string storing only the signs of the random projection values. Previous work showed that the obtained binary strings can be used to estimate the angle between vectors, which can speed up nearest neighbour search. In this paper, we investigate their application to the classification problem. We introduce an SRP classifier that works directly on these binary strings. The training procedure of this new classifier is very simple, yet it produces correct classification results with high probability if the two classes are linearly separable.
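The angle-estimation property SRP relies on is easy to sketch: each random hyperplane separates two vectors with probability θ/π, so the Hamming distance between their sign signatures estimates the angle θ between them (a generic illustration, not the paper's classifier):

```python
import math

def srp_signature(vec, hyperplanes):
    """Sign random projection: keep only the sign bit of each
    random projection, giving a compact binary string."""
    return [1 if sum(h_i * v_i for h_i, v_i in zip(h, vec)) >= 0 else 0
            for h in hyperplanes]

def estimate_angle(sig_a, sig_b):
    """The probability that one bit differs equals angle / pi,
    so the normalised Hamming distance between two signatures
    estimates the angle between the original vectors."""
    hamming = sum(a != b for a, b in zip(sig_a, sig_b))
    return math.pi * hamming / len(sig_a)
```

With enough random hyperplanes the estimate concentrates around the true angle; e.g. orthogonal vectors give signatures disagreeing on about half their bits.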

Sanparith Marukatat
Cortically-Inspired Overcomplete Feature Learning for Colour Images

The Hierarchical Temporal Memory (HTM) framework is a deep learning system inspired by the functioning of the human neocortex. In this paper we investigate the feasibility of this framework by evaluating the performance of one component, the spatial pooler. Using a recently developed implementation, the augmented spatial pooler (ASP), as a single layer feature detector, we test its performance using a standard image classification pipeline. The main contributions of the paper are the implementation and evaluation of modifications to ASP that enable it to form overcomplete representations of the input and to form connections with multiple data channels. Our results show that these modifications significantly improve the utility of ASP, making its performance competitive with more traditional feature detectors such as sparse restricted Boltzmann machines and sparse auto-encoders.

Benjamin Cowley, Adam Kneller, John Thornton
GDL Meets ATL: A Logic for Game Description and Strategic Reasoning

This paper presents a logical framework that extends the Game Description Language with coalition operators from Alternating-time Temporal Logic and prioritised strategy connectives. Our semantics is built upon the standard state transition model. The new framework allows us to formalise van Benthem’s game-oriented principles in multi-player games, and to formally derive Weak Determinacy and Zermelo’s Theorem for two-player games. We demonstrate with a real-world game how to use our language to specify a game and design a strategy, and how to use our framework to verify a winning/no-losing strategy. Finally, we show that the model-checking problem of our logic is in 2EXPTIME with respect to the size of the game structure and the length of the formula, which is no worse than the model-checking problem for ATL⋆.

Guifei Jiang, Dongmo Zhang, Laurent Perrussel
Towards Optimal Lifetime in Wireless Sensor Networks for QoS Guaranteed Service Selection

Due to its efficiency and practicability, workflow has been successfully used in service-oriented Wireless Sensor Networks (WSNs). In general, Quality of Service (QoS) can be utilized to select the optimal service. However, WSNs are resource constrained, especially in energy. If we ignore the issue of limited energy, services with the best QoS will consume their energy heavily and be disabled earlier, which will shorten the network lifetime. Hence, in this paper, we propose an energy-efficient and QoS-guaranteed service selection approach for WSNs. By decomposing the global QoS constraints into a set of local QoS constraints, we can obtain a group of QoS-guaranteed candidate services. Furthermore, considering the service profile (i.e., running status, energy, and QoS), we adopt a fuzzy logic technique to rank the candidate services and then select the optimal one. Experimental evaluations demonstrate the capability of the proposed approach.

Endong Tong, Lan Chen, Ying Li
Predicting Stock Market Trends by Recurrent Deep Neural Networks

Investors make decisions based on various factors, including the consumer price index, the price-earnings ratio, and miscellaneous events reported by newspapers. To assist their decisions in a timely manner, many studies over the last decades have sought to analyze these information sources automatically. However, the majority of these efforts were devoted to numerical information, partly due to the difficulty of processing natural language texts and of making sense of their temporal properties. This study sheds light on this problem by using deep learning, which has been attracting much attention in various areas of research, including pattern mining and machine learning, for its ability to automatically construct useful features from a large amount of data. Specifically, this study proposes an approach to market trend prediction based on a recurrent deep neural network that models the temporal effects of past events. The validity of the proposed approach is demonstrated on real-world data for ten Nikkei companies.

Akira Yoshihara, Kazuki Fujikawa, Kazuhiro Seki, Kuniaki Uehara
Constructing Consumer-Oriented Medical Terminology from the Web: A Supervised Classifier Ensemble Approach

Increasingly, people turn to the Web for health-related questions and even medical advice. Despite this rising demand, it remains non-trivial to access reliable consumer-oriented medical information for self-diagnosis, especially when presenting with multiple symptoms. In this project, we apply information extraction techniques to build a relational graph database of clinical entities, visualising and retrieving the relationships between symptoms and conditions. Since there are no readily available taxonomies of consumer-oriented medical terminology, accuracy of the classification is of paramount importance. To ensure medical terms on the Internet can be reliably classified into proper semantic categories, we develop a method to identify the best-performing classifiers across multiple feature sets, and assess the effectiveness of combining these features using ensemble learning techniques. Experimental results confirm that the classifier ensemble, when intelligently configured, can provide significant increases in performance. An interactive web-based graph interface and a mobile app were developed to demonstrate the potential use of this consumer-oriented terminology.

Wei Liu, Harrison J. Sweeney, Bo Chung, David G. Glance
An AdaBoost for Efficient Use of Confidences of Weak Hypotheses on Text Categorization

We propose a boosting algorithm based on AdaBoost for using real-valued weak hypotheses that return the confidences of their classifications as real numbers, together with an approximated upper bound on the training error. The approximated upper bound is derived with Bernoulli’s inequality, and it enables us to analytically calculate a confidence value that guarantees a reduction in the original upper bound. Experimental results on the Reuters-21578 data set and an Amazon review dataset show that our boosting algorithm with the perceptron attains better accuracy than Support Vector Machines, decision-stump-based boosting algorithms, and a perceptron.
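For context, a related classic confidence-rated update (Schapire–Singer style) can be sketched as follows; the paper derives a different confidence value via Bernoulli's inequality, and the toy data and the single fixed weak hypothesis below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy 1-D data: the label is the sign of x, with some noise.
X = rng.uniform(-1, 1, 200)
y = np.sign(X + 0.2 * rng.standard_normal(200))

w = np.full(200, 1 / 200)          # example weights
H = np.zeros(200)                  # running combined score

for _ in range(20):
    # Weak hypothesis returning a confidence in [-1, 1] (a fixed,
    # illustrative choice, not a trained weak learner).
    h = np.clip(X, -1, 1)
    r = np.sum(w * y * h)          # weighted edge in (-1, 1)
    alpha = 0.5 * np.log((1 + r) / (1 - r))   # confidence value for this round
    H += alpha * h
    w *= np.exp(-alpha * y * h)    # exponential-loss weight update
    w /= w.sum()

acc = np.mean(np.sign(H) == y)
print(acc)  # high training accuracy on this nearly separable toy set
```

The key point the abstract refers to is the choice of `alpha`: for real-valued confidences there is no closed-form minimizer of the exponential loss, so a tractable upper bound (here the classic one, in the paper a Bernoulli-based one) is minimized instead.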

Tomoya Iwakura, Takahiro Saitou, Seishi Okamoto
Reasoning about Constraint Models

We propose a simple but powerful framework for reasoning about properties of models specified in languages like AMPL, OPL, Zinc or Essence. Using this framework, we prove that reasoning problems like detecting symmetries, redundant constraints or dualities between models are undecidable even for a very limited modelling language that only generates simple problem instances. To provide tools to help the human modeller (for example, to identify when a model has a particular symmetry), it would nevertheless be useful to automate many of these reasoning tasks. To explore the possibility of doing this, we describe two case-studies. The first uses the ACL2 inductive prover to prove inductively that a model contains a symmetry. The second identifies a tractable fragment of MiniZinc and uses a decision procedure to prove that a model implies a parameterized constraint.

Christian Bessiere, Emmanuel Hebrard, George Katsirelos, Zeynep Kiziltan, Nina Narodytska, Toby Walsh
A Multi-objective Genetic Algorithm for Model Selection for Support Vector Machines

Selecting the proper kernel function in SVMs, and the specific parameters for that kernel, is an important step in achieving a high-performance learning machine. The objective of this research is to optimize SVM parameters using different kernel functions. We cast this problem as a multi-objective optimization problem, where the classification accuracy, the number of support vectors, and the margin define our objective functions. We introduce a method based on the multi-objective evolutionary algorithm NSGA-II to solve this problem, along with a multi-criteria selection operator for our NSGA-II. The proposed method is applied to several benchmark datasets, and the experimental results show its efficiency.
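The multi-objective view can be made concrete with a plain Pareto-dominance check over candidate SVM configurations (a generic non-dominated filter, not NSGA-II itself; the objective values below are invented for illustration):

```python
def pareto_front(points):
    """Return indices of non-dominated points (all objectives maximised)."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(qk >= pk for qk, pk in zip(q, p)) and q != p
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# (accuracy, -n_support_vectors, margin) for hypothetical SVM configurations;
# the support-vector count is negated so every objective is maximised.
cands = [(0.90, -120, 0.8), (0.92, -200, 0.5), (0.88, -80, 0.9), (0.85, -150, 0.4)]
print(pareto_front(cands))  # -> [0, 1, 2]
```

NSGA-II builds on exactly this dominance relation, adding fast non-dominated sorting and crowding-distance selection to evolve the whole front at once.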

Amal Bouraoui, Yassine Ben Ayed, Salma Jamoussi
Fast Learning of Deep Neural Networks via Singular Value Decomposition

In this paper, we propose a new fast training methodology for learning Deep Neural Networks (DNNs) via Singular Value Decomposition (SVD). The methodology uses a supervised pre-adjusting process to roughly adjust the parameters of the weight matrices of DNNs and change the distributions of their singular values. SVD is then applied to the pre-adjusted DNNs, reducing the number of parameters. An unconventional Back Propagation (BP) algorithm, which has lower time complexity than the conventional BP algorithm, is used to train the models restructured by SVD. Experimental results on Large Vocabulary Continuous Speech Recognition (LVCSR) tasks indicate that with the fast training methodology, the unconventional BP algorithm achieves almost a 2-fold speed-up without any loss of recognition performance, and almost a 4-fold speed-up with only a tiny loss of recognition performance.
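The parameter saving from SVD restructuring can be sketched as follows (a generic truncated-SVD layer factorisation; the layer size, rank, and the simulated near-low-rank weight matrix are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
# A hypothetical 1024x1024 hidden-layer weight matrix, simulated here as a
# rank-60 matrix plus small noise (standing in for the effect of the
# pre-adjusting step, which concentrates the singular values).
W = rng.standard_normal((1024, 60)) @ rng.standard_normal((60, 1024)) / 60
W += 0.01 * rng.standard_normal((1024, 1024))

# Truncated SVD: keep the k largest singular values, replacing the single
# 1024x1024 layer with two thinner layers of shapes (1024, k) and (k, 1024).
U, s, Vt = np.linalg.svd(W, full_matrices=False)
k = 60
A = U[:, :k] * s[:k]        # first thin layer
B = Vt[:k, :]               # second thin layer

orig_params = W.size
new_params = A.size + B.size
rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(new_params / orig_params, rel_err)  # ~12% of the parameters, small error
```

Each forward pass through the factored layer then costs two thin matrix multiplications instead of one large one, which is where the training speed-up comes from.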

Chenghao Cai, Dengfeng Ke, Yanyan Xu, Kaile Su
A Simple Approach to Solving Cooperative Path-Finding as Propositional Satisfiability Works Well

This paper addresses makespan-optimal solving of the cooperative path-finding problem (CPF) by translating it to propositional satisfiability (SAT). A novel, very simple SAT encoding of CPF is proposed and compared with existing elaborate encodings. The experimental evaluation showed that the simple design of the encoding allows CPF instances to be solved faster than with existing encodings, particularly in cases with a higher density of agents.

Pavel Surynek
Gene Selection Based on Supervised Vector Representation of Genes

Gene ranking is widely employed in gene selection. Most criteria adopt a single quantitative value to rank genes, and thus hardly provide comprehensive discriminative information about them. In this paper, a supervised vector representation is proposed: a supervised vector reflecting the spatial distribution in the space spanned by the gene is used as the gene representation. The possible problems of “bias” and “cumulative error”, which may be induced by criteria based on a single value, are avoided by the proposed criterion. An algorithm for gene selection based on the supervised vector representation is also proposed to select gene subsets with the new criterion, and its performance is compared with that of several existing gene selection algorithms. Experimental results demonstrate that the proposed algorithm generates final gene subsets with higher predictive capability in most cases.

Tian Yu, Fei Gao, Han Jin, JinMao Wei
Wallace: Incorporating Search into Chatting

Chatbots are a well-established technology; however, the conversational ability of the typical chatbot is greatly restricted. This paper investigates how the performance of a chatbot can be improved by connecting it with a knowledge source that it can draw on during its interactions with users. A new chatbot, Wallace, was created by extending Alice to incorporate knowledge from Wikipedia into its conversations. Mechanisms were designed and developed to retrieve Wikipedia pages, parse them, and select suitable sentences for the conversation. A user evaluation of the prototype showed that Wallace was generally more effective than Alice at providing factual answers to questions denoting an informational need. Participants also found Wallace more specific and more entertaining than Alice.

Alexandre Sawczuk da Silva, Xiaoying Gao, Peter Andreae
Symbiotic Evolution to Generate Chord Progression Consisting of Four Parts for a Music Composition System

Automatic systems that compose music adapted to personal sensibility have been proposed. These systems induce a personal sensibility model by using a listener’s emotional impressions of music and compose music based on the model using evolutionary computation algorithms. The objective of this study was to compose music consisting of four parts: introduction, development, turn, and conclusion. We propose a method for generating chord progressions based on a symbiotic evolution algorithm that is characterized by parallel evolutions of both partial and whole solutions. Our experimental results show that the proposed method effectively composes musical pieces having the target structure.

Noriko Otani, Shoko Shirakawa, Masayuki Numao
Dialogue Management in Spoken Dialogue System with Visual Feedback

Dialogue Management (DM) is an essential issue in Spoken Dialogue Systems (SDS). Most previous studies on DM do not consider visual feedback from the machine to the user, which could accelerate the dialogue process dramatically. Thus, in this paper, we first model the DM problem in SDS with visual feedback as a Partially Observable Markov Decision Process (POMDP). A Reinforcement Learning (RL) approach is then utilized to solve this problem, yielding the Vision- and Audition-based DM (VADM) scheme. Finally, extensive experimental results illustrate the performance improvements of the proposed VADM scheme over the existing scheme in different scenarios.

Wendong Ge, Bo Xu
Content-Based Readability Assessment: A Study Using A Syllabic Alphabetic Language (Thai)

Text readability is typically defined in terms of “grade level”: the expected educational level of the reader at which the text is directed. Mechanisms for measuring readability in English documents are well established; however, this is not the case in many other languages, such as syllabic alphabetic languages. In this paper, seven different mechanisms for assessing the readability of syllabic alphabetic language texts are proposed and compared. The mechanisms are grouped under three headings: (i) graph ranking, (ii) document ranking, and (iii) hybrid. The presented comparison was conducted on the Thai language, with respect to the reading ages associated with secondary school, high school, and undergraduate students, in the context of scientific abstracts.

Nattapong Tongtep, Frans Coenen, Thanaruk Theeramunkong
Quantified Coalition Logic for BDI-Agents: Completeness and Complexity

This paper introduces a multi-dimensional modal logic of Quantified Coalition Logic of Beliefs, Desires and Intentions (QCLBDI) to reason about how agents’ mental attitudes evolve by cooperation in game-like multi-agent systems. We present a complete axiomatic system of QCLBDI with the formal proof and show that the satisfiability for QCLBDI is PSPACE-complete, computationally no harder than that of CL.

Qingliang Chen, Qun Li, Kaile Su, Xiangyu Luo
Using Asymmetric Associations for Commonsense Causality Detection

Human actions in this world are based on exploiting knowledge of causality. Humans find it easy to connect a cause to the subsequent effect but formal reasoning about causality has proved to be a difficult task in automated NLP applications because it requires rich knowledge of all the relevant events and circumstances. Automated approaches to detecting causal connections attempt to partially capture this knowledge using commonsense reasoning based on lexical and semantics constraints. However, their performance is limited by the lack of sufficient breadth of commonsense knowledge to draw causal inferences. This paper presents a commonsense causality detection system using a new semantic measure based on asymmetric associations on the Choice Of Plausible Alternatives (COPA) task. When evaluated on three COPA benchmark datasets, the causality detection system using asymmetric association based measures demonstrates a superior performance to other symmetric measures.

Shahida Jabeen, Xiaoying Gao, Peter Andreae
A Community-Based Collaborative Filtering System Dealing with Sparsity Problem and Data Imperfections

In this paper, we develop a collaborative filtering system for not only tackling the sparsity problem by exploiting community context information but for also dealing with data imperfections by means of Dempster-Shafer theory. The experimental results show that the proposed system achieves better performance when comparing it with a similar system, CoFiDS.

Van-Doan Nguyen, Van-Nam Huynh
Effect of Weighting Factors and Unit-Selection Factors on Text Summarization

In abstraction-based summarization, term weighting and unit selection play an important role in determining summary quality. Towards the implementation of Thai news summarization, we propose unit segmentation using Thai elementary discourse units (TEDUs) and common phrases (COMPs), unit-graph formation using iterative unit weighting and cosine similarity, and unit selection using multiple criteria: PageRank scores, node weight, query relevance, centroid-based selection, and redundancy removal. To examine the performance of the proposed methods, a number of experiments are conducted using fifty sets of Thai news articles with manually constructed reference summaries. Under four common evaluation measures (ROUGE-1, ROUGE-2, ROUGE-S, and ROUGE-SU4), the results show that (1) our iterative weighting is superior to traditional TF-IDF, (2) including node weight, query relevance, centroid-based selection, and unit redundancy consideration helps improve summary quality, and (3) our summarization method outperforms the baselines.
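The PageRank-style ranking over a unit graph can be sketched with a plain power iteration (generic PageRank on a made-up unit-similarity matrix; the paper's iterative unit weighting adds further criteria on top of this):

```python
import numpy as np

# A tiny unit-similarity graph: cosine similarities between 4 text units
# (the values are invented for illustration).
S = np.array([[0.0, 0.8, 0.1, 0.3],
              [0.8, 0.0, 0.2, 0.4],
              [0.1, 0.2, 0.0, 0.6],
              [0.3, 0.4, 0.6, 0.0]])

d = 0.85                      # standard damping factor
n = S.shape[0]
M = S / S.sum(axis=0)         # column-normalise similarities into transition probs

r = np.full(n, 1.0 / n)       # uniform initial ranks
for _ in range(100):          # power iteration to (approximate) convergence
    r = (1 - d) / n + d * (M @ r)
print(np.round(r, 3))         # higher rank = unit more central to the set
```

Selecting the top-ranked units, then filtering for query relevance and redundancy, yields a basic extractive pipeline of the kind the abstract extends.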

Nongnuch Ketui, Thanaruk Theeramunkong
Domain Adaptive Neural Networks for Object Recognition

We propose a simple neural network model to deal with the domain adaptation problem in object recognition. Our model incorporates the Maximum Mean Discrepancy (MMD) measure as a regularization in the supervised learning to reduce the distribution mismatch between the source and target domains in the latent space. From experiments, we demonstrate that the MMD regularization is an effective tool to provide good domain adaptation models on both SURF features and raw image pixels of a particular image data set.
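The MMD measure used as the regularizer can be computed, in its basic biased form, as follows (a standard RBF-kernel MMD estimate on synthetic source/target samples; not the paper's network code, and the bandwidth and data are arbitrary):

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between samples X and Y, RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(2)
src = rng.normal(0.0, 1.0, (200, 5))        # "source domain" sample
tgt_near = rng.normal(0.0, 1.0, (200, 5))   # same distribution
tgt_far = rng.normal(2.0, 1.0, (200, 5))    # shifted distribution

print(rbf_mmd2(src, tgt_near), rbf_mmd2(src, tgt_far))
# the mismatched pair yields a clearly larger MMD
```

Adding this quantity, computed between source and target hidden representations, to the supervised loss is what pushes the two domains together in the latent space.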

Muhammad Ghifary, W. Bastiaan Kleijn, Mengjie Zhang
A Correlation Based Imputation Method for Incomplete Traffic Accident Data

Death, injury and disability from road traffic crashes continue to be a major global public health problem. Recent data suggest that the number of fatalities from traffic crashes is in excess of 1.25 million people each year, with non-fatal injuries affecting a further 20-50 million people. It is predicted that by 2030 road traffic accidents will have progressed to be the 5th leading cause of death, and that the number of people who die annually in traffic accidents will have doubled from current levels. Therefore, methods to reduce accident severity are of great interest to traffic agencies and the public at large. The road accident fatality rate depends on many factors, and it is a very challenging task to investigate the dependencies between the attributes because of the many environmental and road accident factors. Any missing data in the database could obscure the discovery of important factors and lead to invalid conclusions. In order to make traffic accident datasets useful for analysis, they should be preprocessed properly. In this paper, we present a novel method based on sampling of distributions obtained from correlation measures for the imputation of missing values, to improve the quality of traffic accident data. We evaluated our algorithm using two publicly available traffic accident databases of the United States (explore.data.gov, data.opencolorado.org). Our results indicate that the proposed method performs significantly better than three existing algorithms.
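The idea of imputing from distributions conditioned on correlated attributes can be sketched as follows (a heavily simplified, hypothetical scheme: the column names, binning, and data are invented for illustration and do not reproduce the paper's algorithm):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
# Toy accident-like table: 'speed' correlates with 'severity'; some
# severity values are missing.
n = 500
speed = rng.integers(20, 120, n)
severity = (speed // 30 + rng.integers(0, 2, n)).clip(0, 4)
df = pd.DataFrame({"speed": speed, "severity": severity.astype(float)})
df.loc[rng.choice(n, 50, replace=False), "severity"] = np.nan

# Bin the correlated attribute ('speed' here, trivially the most correlated
# one), then sample imputed values from the empirical severity distribution
# observed within the same bin.
df["speed_bin"] = pd.cut(df["speed"], bins=4)
for b, grp in df.groupby("speed_bin", observed=True):
    vals = grp["severity"].dropna().to_numpy()
    miss = grp.index[grp["severity"].isna()]
    if len(vals) and len(miss):
        df.loc[miss, "severity"] = rng.choice(vals, size=len(miss))
print(df["severity"].isna().sum())
```

Sampling from a conditional empirical distribution, rather than filling in a single mean or mode, preserves the variability of the imputed attribute.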

Rupam Deb, Alan Wee-Chung Liew, Erwin Oh
A Method to Divide Stream Data of Scores over Review Sites

Word-of-mouth information on review sites affects various person-to-person activities. On large-scale review sites, the evaluation tendency of a product can change substantially because of only a few reviews rated and posted by certain users. Thus, it is very important in social media analysis to be able to detect such influential reviews. We propose an algorithm that can efficiently divide stream data of review scores by maximizing the likelihood of generating the observed sequence data. We assume that the user’s fundamental scoring behavior follows a multinomial distribution model and formulate a division problem.

Yuki Yamagishi, Seiya Okubo, Kazumi Saito, Kouzou Ohara, Masahiro Kimura, Hiroshi Motoda
Shift from Forward to Backward Deliberation in Search of Reconciliation

Desire conflicts arise in several real-world contexts. In this paper we propose a mixed deliberation dialogue for reconciliation. A mixed deliberation dialogue is defined as a combination of forward and backward deliberation dialogues, whose goals are subordinate and superordinate desires of a given desire, respectively. This research and the introduction of the mixed deliberation dialogue were motivated by Kowalski and Toni’s reconciliatory scenario: indeed, we show that an instantiation of a mixed deliberation dialogue implements key parts of Kowalski and Toni’s reconciliatory solution. We also prove the correctness of mixed deliberation dialogues.

Hiroyuki Kido, Federico Cerutti
Cost Sensitive Decision Forest and Voting for Software Defect Prediction

While traditional classification algorithms optimize for accuracy, cost-sensitive classification methods attempt to make predictions that incur the lowest classification cost. In this paper we propose a cost-sensitive classification technique called CSForest, an ensemble of decision trees, together with a cost-sensitive voting technique called CSVoting. The proposed techniques are empirically evaluated against five classifier algorithms on six publicly available clean datasets that are commonly used in research on software defect prediction. Our initial experimental results indicate a clear superiority of the proposed techniques over the existing ones.

Michael J. Siers, Md Zahidul Islam
A Randomized Game-Tree Search Algorithm for Shogi Based on Bayesian Approach

We propose a new randomized game-tree search algorithm based on a Bayesian approach. It consists of two main concepts: (1) using multiple game-tree searches with a randomized evaluation function as simulations, and (2) treating evaluated values as probability distributions and propagating them through the game tree in a Bayesian manner. The proposed method focuses on tactical games such as Shogi, in which MCTS is not currently effective. We apply the method to Shogi using a top-level computer player built with many domain-specific search techniques. Through a large number of self-play evaluations, we conclude that our method can achieve a good win ratio against an ordinary game-tree-search-based player when enough computing resources are available. We also examine the performance behavior of the method in detail and outline design directions.

Daisaku Yokoyama, Masaru Kitsuregawa

Special Track: Commonsense Cognitive Robotics

Online Agent Logic Programming with oClingo

The online answer set solver oClingo offers a powerful new technique for uniting the speed of Answer Set Programming (ASP) with dynamic events. The price of this power is paid by increased constraints on the construction of a ‘safe’ program—one that satisfies an arcane modularity condition. We provide an alternative in the form of so-called Agent Logic Programs—a concise declarative language for describing agent control strategies. Specifically, we take an ASP-compatible subset of Agent Logic Programs, extend it with exogenous actions, argue this translation is faithful to the original definition, and prove that it guarantees oClingo’s modularity condition. The result is a safe, clean input language for oClingo and a new implementation for Agent Logic Programs.

Timothy Cerexhe, Martin Gebser, Michael Thielscher
Grounding Dynamic Spatial Relations for Embodied (Robot) Interaction

This paper presents a computational model of the processing of dynamic spatial relations occurring in an embodied robotic interaction setup. A complete system is introduced that allows autonomous robots to produce and interpret dynamic spatial phrases (in English) given an environment of moving objects. The model unites two separate research strands: computational cognitive semantics, and commonsense spatial representation and reasoning. The model demonstrates, for the first time, an integration of these different strands.

Michael Spranger, Jakob Suchan, Mehul Bhatt, Manfred Eppe

Special Track: Intelligent Health Services

Load Balancing for Imbalanced Data Sets: Classifying Scientific Artefacts for Evidence Based Medicine

Data skewness is a challenge encountered, in particular, when applying supervised machine learning approaches in various domains, such as in healthcare and biomedical information engineering. Evidence Based Medicine (EBM) is a clinical strategy for prescribing treatment based on the current best evidence for individual patients. Clinicians need to query publication repositories in order to find the best evidence to support their decision-making processes. This sophisticated information is materialised in the form of scientific artefacts in scholarly publications, and the automatic extraction of these artefacts is a technical challenge for current generic search engines. Many classification approaches have been proposed for identifying key scientific artefacts in EBM; however, their performance is affected by the imbalanced characteristic of data in this domain. In this paper, we present four data balancing approaches applied in a binary ensemble classifier framework for classifying scientific artefacts in the EBM domain. Our balancing approaches improve the ensemble classifier’s F-score by up to 15% for classes of scientific artefacts with extremely low coverage in the domain. In addition, we propose a classifier selection method for choosing the best classifier based on the distributional feature of classes. The resulting classifiers show improved classification performance when compared to state-of-the-art approaches.

Hamed Hassanzadeh, Tudor Groza, Anthony Nguyen, Jane Hunter
Modeling the Tail of a Hyperexponential Distribution to Detect Abnormal Periods of Inactivity in Older Adults

The number of elderly people requiring different levels of care in their homes has increased in recent times, with further increases expected. User studies show that the main concern of elderly people and their families is “fall detection and safe movement in the home”. We view abnormally long periods of inactivity as indicators of unsafe situations, and present a new method that models the tail of a hyperexponential distribution in order to reliably identify such inactivity periods from unintrusive sensor observations. The performance of our method was evaluated on two real-life datasets, and compared with that of a state-of-the-art technique, with our method outperforming this technique.

Masud Moshtaghi, Ingrid Zukerman
Predicting Procedure Duration to Improve Scheduling of Elective Surgery

The accuracy of surgery schedules depends on precise estimation of surgery duration. Current approaches employed by hospitals include historical averages and surgical team estimates which are not accurate enough. The inherent complexity of surgery duration estimation contributes significantly to increased procedure cancellations and reduced utilisation of already encumbered resources. In this study we employ administrative and perioperative data from a large metropolitan hospital to investigate the performance of different machine learning approaches for improving procedure duration estimation. The predictive modelling approaches applied include linear regression (LR), multivariate adaptive regression splines (MARS), and random forests (RF). Cross validation results reveal that the random forest model outperforms other methods, reducing mean absolute percentage error by 28% when compared to current hospital estimation approaches.

Zahra ShahabiKargar, Sankalp Khanna, Norm Good, Abdul Sattar, James Lind, John O’Dwyer
Relational Agents to Promote eHealth Advice Adherence

Adherence to medical treatment advice is necessary to gain the intended improvement in patient outcomes. This paper looks at how empathic dialogues delivered by a relational agent can build an ongoing therapeutic alliance with the patient with the intention of improving access and adherence to the treatment advice. We present a case study and design currently under development for the domain of pediatric incontinence.

Scott Baker, Deborah Richards, Patrina Caldwell
Predicting Consumer Familiarity with Health Topics by Query Formulation and Search Result Interaction

Searching for understandable health information on the Internet remains difficult for most consumers. Every consumer has different health topic familiarity. This diversity may cause misunderstanding because the information presented during health information searches may not fit the consumer’s understanding. This study aimed to develop health topic familiarity prediction models based on the consumer’s searching behavior, how the consumers formulate the query and how they interact with the search results. The experimental results show that Naïve Bayes and Sequential Minimal Optimization classifiers achieved high accuracy on the combination of query formulation and search result interaction feature sets in predicting consumer’s health topic familiarity. This finding suggests that health topic familiarity identification based on the query formulation and the search result interaction is feasible and effective.

Ira Puspitasari, Ken-ichi Fukui, Koichi Moriyama, Masayuki Numao

Special Track: Smart Modelling and Simulation

Task-Based Wireless Mobile Agents Search and Deployment for Ad Hoc Network Establishment in Disaster Environments

In disaster environments, due to the destruction of local communication infrastructures, wireless mobile agents (robots) are employed to search and deploy to establish ad hoc networks. With the guidance of the network, first responders can efficiently perform tasks in disaster environments. However, due to the uncertainties and complexities of disaster environments and the limited capabilities of wireless mobile agents, it is challenging to apply wireless mobile agents to disaster environments in both theory and practice. To this end, a task-based wireless mobile agents search and deployment approach is proposed for ad hoc network establishment in disaster environments. The proposed approach consists of a search module and a deployment module. The search module enables wireless mobile agents to efficiently move and collect information in an unknown and complex disaster environment. The deployment module enables wireless mobile agents to find suitable deployment locations based on the collected information. The ad hoc networks established by the proposed approach can guarantee the communications of wireless mobile agents in ad hoc networks. In addition, it can cover the maximum number of tasks and maximum size of area in the disaster environment. The experimental results demonstrate the advantages of the proposed approach in terms of wireless mobile agents search and deployment for ad hoc network establishment in disaster environments.

Xing Su, Minjie Zhang, Quan Bai
From a Local to a Global Perspective of Community Detection in Networks

We propose a novel, distributed approach for analyzing communities in social networks. In this approach, we define communities from two perspectives: local and global. Firstly, the local communities are identified by each node in a self-centred manner. Then, the global communities are captured using the notion of tendency among local communities. Our approach is especially suitable for decentralised and dynamic networks. We present formal definitions and experimentally verify our model on both static and dynamic networks.

Jiamou Liu, Ziheng Wei
Managing Parking Fees Based on Massive Parking Accounting Data

As parking accounting data from automatic payment systems accumulate, managing parking fees in accordance with the characteristics of parking utilization becomes feasible. The purpose of this paper is to analyze the characteristics of parking utilization from big data and to propose a procedure for parking fee management by developing a simple simulator from a history of parking utilization. In concrete terms, we classify 1,050 parking lots by cluster analysis and analyze the influence of a charge revision on parking time by survival analysis, using 22.5 million parking accounting records from the past year. Further, we consider the appropriateness of the modified fee by estimating parking time with a hazard-based duration model.

Yuichi Enoki, Ryo Kanamori, Takayuki Ito
Accountable Individual Trust from Group Reputations in Multi-agent Systems

Studies of reputation and trust in multi-agent systems have so far concentrated on individual reputation. In many applications, however, agents need to form groups to provide services. The reputations of agent groups and of individuals are mutually influential, since agents can use either their group’s reputation or their individual reputation to establish interactions. In multi-agent systems research, most reputation models have been constructed to capture individual reputation based on direct evidence. This paper proposes a computational model for inferring the reputations and contributions of members of agent groups. We argue that the proposed model can be used to estimate individual reputation when such information is not available, and that it is suitable for distributed environments.

Tung Doan Nguyen, Quan Bai
An Innovative Approach for Predicting Both Negotiation Deadline and Utility in Multi-issue Negotiation

In agent negotiation, agents usually need to know their opponents' negotiation parameters (i.e., preference, deadline, and reservation utility) to adjust their negotiation strategies effectively so that an agreement can be reached. However, in a competitive negotiation environment, agents may be unwilling to reveal their negotiation parameters, which makes reaching an agreement harder. To solve this problem, agents need the ability to learn to predict their opponents' negotiation parameters. In this paper, a Bayesian-based prediction approach is proposed to help an agent predict its opponent's negotiation deadline and reservation utility in bilateral multi-issue negotiation. In addition, a concession strategy adjustment algorithm is integrated into the proposed prediction approach to improve the negotiation result. The experimental results indicate that the proposed approach can increase both the profit and the success rate of bilateral multi-issue negotiation.

Jihang Zhang, Fenghui Ren, Minjie Zhang
LAKUBE: An Improved Multi-Armed Bandit Algorithm for Strongly Budget-Constrained Conditions on Collecting Large-Scale Sensor Network Data

Continuously gathering data from a wireless sensor network is a crucial issue. In this problem, there are multiple sensors whose data could be transmitted, but their data distributions are unknown at the start; learning those distributions requires gathering data from them, and the resources available for doing so are limited. The problem is often called the Budget-Limited Multi-Armed Bandit (BLMAB) problem, and several approaches have been proposed. However, a wireless sensor network often has so many nodes that, under very limited budgets (i.e., limited electric power or limited network bandwidth), it is infeasible to try all nodes to assess their potential. In this paper, we present an improved BLMAB algorithm that is more suitable for highly budget-constrained situations. The proposed approach can effectively limit the set of nodes to be retrieved when a relatively hard budget limitation is applied. We conduct experiments in a simulation environment to evaluate the potential performance of the approach.

Yoshiaki Kadono, Naoki Fukuta
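To make the budget-limited bandit setting concrete (this is a generic baseline sketch with assumed names and parameters, not the LAKUBE algorithm itself), the policy below explores each affordable arm a few times and then exploits the best empirical reward-per-cost ratio until the budget is exhausted:

```python
import random

def budget_limited_bandit(reward_fns, costs, budget, explore_rounds=3):
    """Greedy reward-per-cost policy for a budget-limited bandit.

    reward_fns: list of zero-argument reward samplers, one per arm (sensor).
    costs: cost charged each time the corresponding arm is pulled.
    budget: total budget; pulling stops when no arm is affordable.
    """
    n = len(reward_fns)
    counts = [0] * n
    means = [0.0] * n
    total_reward = 0.0
    while True:
        affordable = [i for i in range(n) if costs[i] <= budget]
        if not affordable:
            break
        # Explore each affordable arm explore_rounds times, then exploit
        # the arm with the best empirical reward-per-cost ratio.
        unexplored = [i for i in affordable if counts[i] < explore_rounds]
        if unexplored:
            arm = random.choice(unexplored)
        else:
            arm = max(affordable, key=lambda i: means[i] / costs[i])
        r = reward_fns[arm]()
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # running mean
        budget -= costs[arm]
        total_reward += r
    return total_reward, counts
```

The hard-budget regime the paper targets is exactly the case where even one `explore_rounds` pass over all nodes is unaffordable, which is why pruning the candidate set matters.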

Erratum

Erratum: Reliable Fault Diagnosis of Low-Speed Bearing Defects Using a Genetic Algorithm

Andy C.C. Tan and Eric Y.H. Kim should have been included as authors of the paper starting on page 248 of this volume, because the data and images (Figs. 1 and 2) presented originated from a low-speed machinery fault simulator, developed under the leadership of Andy Tan at CRC IEAM, Queensland University of Technology, Australia. The header should have read as follows:

Reliable Fault Diagnosis of Low-Speed Bearing Defects Using a Genetic Algorithm

Phuong Nguyen¹, Myeongsu Kang¹, Jaeyoung Kim¹, Jong-Myon Kim¹·*, Andy C.C. Tan², and Eric Y. Kim³

¹ School of Electrical, Electronics, and Computer Engineering, University of Ulsan, Ulsan, South Korea
² School of Chemistry, Physics and Mechanical Engineering, Science and Engineering Faculty, Queensland University of Technology, Australia
³ Asset Management Department, CMOC Northparkes Mine, Australia

{phuongnguyen.cse,ilmareboy,kjy7097,jongmyon.kim,yonghan.kim}@gmail.com, a.tan@qut.edu.au

The first paragraph of Section 3 should have been written as follows:

“In this study, data obtained from a low-speed machinery fault simulator developed by CRC-IEAM, Queensland University of Technology (QUT) was used, as shown in Fig. 1(a) [13, 14, 15]. At the drive end of the test rig, the shaft is connected to a reduction gear box (10.1:1) through a coupling. The fault simulator allows a constant radial load to be applied to the driven-end support and the load is measured by a load cell. To capture AE signals, a wideband AE sensor (type R3α from Physical Acoustic Corporations) was attached on top of the bearing housing as depicted in Fig. 1(b).”

The caption underneath Fig. 1 should have been written as follows:

“Fig. 1. (a) QUT test rig for experiments [13] and (b) location of AE sensor to record continuous AE signals [14, 15]”

The second paragraph of Section 3 should have been written as follows:

“Cylindrical roller bearings (i.e., SKF NF307) were used in the test rig. In order to diagnose multiple bearing defects, five different bearing fault types were developed by QUT which include inner-race crack (IRC), inner-race spall (IRS), outer-race crack (ORC), outer-race spall (ORS) and roller medium spall (RMS) as illustrated in Fig. 2. The AE signals were acquired from the test bearing rotating at a low speed of 20 RPM with 500-N and 2-kN load conditions. To produce crack and spall on the surface of a bearing, they used a diamond cutter bit and air-speed grinding tool. In the tests, 90 AE signals with 1.5-seconds-long of data were obtained for each bearing defect and sampled at 500 kHz in this study.”

The caption underneath Fig. 2 should have been written as follows:

“Fig. 2. Seeded bearing defects [14, 15]: (a) crack on inner-race (IRC, 0.1mm in width), (b) spall on inner-race (IRS, 0.6mm), (c) crack on outer-race (ORC, 0.1mm), (d) spall on outer-race (ORS, 0.7mm), (e) spall on roller (RMS, 1.6mm)”

The following three references should have been included in the References section:

13. Kosse, V., Tan, A.C.C.: Development of Test Facilities for Verification of Machine Condition Monitoring Methods for Low Speed Machinery. In: Proc. World Cong. Eng. Asset. Manag., pp. 192–197, Gold Coast, Australia (2006)

14. Kim, Y.-H., Tan, A.C.C., Mathew, J., Kosse, V., Yang, B.S.: A Comparative Study on the Application of Acoustic Emission Technique and Acceleration Measurements for Low Speed Condition Monitoring. In: Proc. Asia-Pacific Vib. Conf., Sapporo, Japan (2007)

15. Kim, E.Y., Tan, A.C.C., Yang, B.S., Kosse, V.: Experimental Study on Condition Monitoring of Low Speed Bearings: Time Domain Analysis. In: Proc. Australasian Cong. Appl. Mechatronics, Brisbane, Australia (2007)

Phuong Nguyen, Myeongsu Kang, Jaeyoung Kim, Jong-Myon Kim, Andy C. C. Tan, Eric Y. Kim
Backmatter
Metadata
Title
PRICAI 2014: Trends in Artificial Intelligence
edited by
Duc-Nghia Pham
Seong-Bae Park
Copyright year
2014
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-13560-1
Print ISBN
978-3-319-13559-5
DOI
https://doi.org/10.1007/978-3-319-13560-1