About this book

This book constitutes the refereed proceedings of the 28th Australasian Joint Conference on Artificial Intelligence, AI 2015, held in Canberra, Australia, in November/December 2015.

The 39 full papers and 18 short papers presented were carefully reviewed and selected from 102 submissions.

Table of Contents

Frontmatter

Exploiting the Beta Distribution-Based Reputation Model in Recommender System

Reputation systems are employed to measure the quality of items on the Web. Because recommenders are otherwise agnostic to reputation, incorporating accurate reputation scores into recommender systems helps provide more accurate recommendations. The ratings aggregation process is a vital component of a reputation system. Available reputation models do not consider statistical data in the rating aggregation process, a limitation that can reduce the accuracy of the generated reputation scores. In this paper, we propose a new reputation model that takes this previously ignored statistical data into account. We compare our proposed model against state-of-the-art models using a top-N recommender system experiment.

Ahmad Abdel-Hafez, Yue Xu
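
For orientation, a minimal sketch of the classic beta-distribution reputation score (the mean of Beta(r+1, s+1) over r positive and s negative ratings) is shown below; the authors' model extends this style of aggregation with further rating statistics, which the sketch does not attempt to reproduce.

```python
# Classic beta reputation score: the expected value of Beta(r+1, s+1)
# given r positive and s negative ratings. This is only the well-known
# baseline, not the extended model proposed in the paper.

def beta_reputation(positive: int, negative: int) -> float:
    """Expected quality score under a uniform Beta(1, 1) prior."""
    return (positive + 1) / (positive + negative + 2)

# A sparsely rated item stays close to the neutral prior of 0.5:
print(beta_reputation(2, 0))    # 0.75
print(beta_reputation(80, 20))  # ~0.794
```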

A Heuristic Search Approach to Find Contrail Avoidance Flight Routes

Contrails, or “condensation trails,” are line-shaped clouds composed of ice particles, visible behind jet aircraft engines. Contrails can affect the formation of clouds, thereby affecting climate change. This paper proposes an integrated model of atmosphere, airspace and flight routing, with a gradient-descent-based heuristic search algorithm, to find contrail-avoidance trajectories with climb, descent and vector maneuvers. A trade-off analysis of contrail avoidance against fuel burn/CO2 and distance flown is also presented.

Rubai Amin, Sameer Alam

Temporal Conjunctive Queries in Expressive Description Logics with Transitive Roles

In Ontology-Based Data Access (OBDA), user queries are evaluated over a set of facts under the open world assumption, while taking into account background knowledge given in the form of a Description Logic (DL) ontology. In order to deal with dynamically changing data sources, temporal conjunctive queries (TCQs) have recently been proposed as a useful extension of OBDA to support the processing of temporal information. We extend the existing complexity analysis of TCQ entailment to very expressive DLs underlying the OWL 2 standard, and in contrast to previous work also allow for queries containing transitive roles.

Franz Baader, Stefan Borgwardt, Marcel Lippmann

Scaling up Multi-island Competitive Cooperative Coevolution for Real Parameter Global Optimisation

A major challenge in using cooperative coevolution (CC) for global optimisation is the decomposition of a given problem into subcomponents. Variable interaction is a major constraint that determines the decomposition strategy of a problem. Hence, finding an optimal decomposition strategy becomes a burdensome task as inter-dependencies between decision variables are unknown for these problems. In recent related work, a multi-island competitive cooperative coevolution (MICCC) algorithm was introduced which featured competition and collaboration of several different decomposition strategies. MICCC used five different uniform problem decomposition strategies that were implemented as independent islands. This paper presents an analysis of the MICCC algorithm and also extends it to more than five islands. We incorporate arbitrary (non-uniform) problem decomposition strategies as additional islands in MICCC and monitor how each different problem decomposition strategy contributes towards the global fitness over different stages of optimisation.

Kavitesh K. Bali, Rohitash Chandra

An Evolutionary Algorithm with Classifier Guided Constraint Evaluation Strategy for Computationally Expensive Optimization Problems

Practical optimization problems often involve objective and constraint functions evaluated using computationally expensive numerical simulations, e.g. computational fluid dynamics (CFD) or finite element methods (FEM). In order to deal with such problems, existing methods based on surrogates/approximations typically use cheaper and less accurate models of the objective and constraint functions during the search. Promising solutions identified using approximations or surrogates are only then evaluated using the computationally expensive analysis. In the event the constraints and objectives are evaluated using independent computationally expensive analyses (e.g. multi-disciplinary optimization), there exists an opportunity to evaluate only those constraints and/or objectives that are necessary to ascertain the utility of such solutions. In this paper, we introduce an efficient evolutionary algorithm for the solution of computationally expensive single-objective constrained optimization problems. The algorithm is embedded with selective evaluation strategies guided by Support Vector Machine (SVM) models. Identification of promising individuals, and of the relevant constraints corresponding to each individual, is based on SVM classifiers, while partially evaluated solutions are ranked using SVM ranking models. The performance of the approach has been evaluated using a number of constrained optimization benchmarks and engineering design optimization problems with a limited computational budget. The results have been compared with a number of established approaches based on full and partial evaluation strategies. We hope this study will prompt further efforts in the direction of selective evaluation, which has so far attracted little attention.

Kalyan Shankar Bhattacharjee, Tapabrata Ray

Cost to Evaluate Versus Cost to Learn? Performance of Selective Evaluation Strategies in Multiobjective Optimization

Population-based stochastic algorithms have long been used for the solution of multiobjective optimization problems. In the event the problem involves computationally expensive analysis, the existing practice is to use some form of surrogates or approximations. Surrogates are either used to screen promising solutions or to approximate the objective functions of the solutions. In this paper, we investigate the effects of selective evaluation of promising solutions and try to derive answers to the following questions: (a) should we discard the solution right away, relying on a classifier, without any further evaluation? (b) should we evaluate its first objective function and then decide to select or discard it? (c) should we evaluate its second objective function instead and then decide its fate, or (d) should we evaluate both objective functions before selecting or discarding it? The last form is typically an optimization algorithm in its native form. While evaluation of solutions generates information that can potentially be learned by the optimization algorithm, it comes with a computational cost, which might still be insignificant compared with the cost of the actual computationally expensive analysis. In this paper, a simple scheme, referred to as the Combined Classifier Based Approach (CCBA), is proposed. The performance of CCBA along with other strategies has been evaluated using five well-studied unconstrained bi-objective optimization problems (DTLZ1-DTLZ5) with a limited computational budget. The aspect of selective evaluation has rarely been investigated in the literature, and we hope that this study will prompt the design of efficient algorithms that selectively evaluate solutions on the fly, i.e. based on the trade-off between the need to learn/evaluate and the cost to learn.

Kalyan Shankar Bhattacharjee, Tapabrata Ray

A Propositional Plausible Logic

A new non-monotonic propositional logic called PPL — for Propositional Plausible Logic — is defined. An example is worked and four theorems about PPL are stated.

David Billington

Monte Carlo Analysis of a Puzzle Game

When a puzzle game is created, its design parameters must be chosen to allow solvable and interesting challenges to be created for the player. We investigate the use of random sampling as a computationally inexpensive means of automated game analysis, to evaluate the BoxOff family of puzzle games. This analysis reveals useful insights into the game, such as the surprising fact that almost 100 % of randomly generated challenges have a solution, but less than 10 % will be solved using strictly random play, validating the inventor’s design choices. We show the 1D game to be trivial and the 3D game to be viable.

Cameron Browne, Frederic Maire
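
The methodology above is easy to reproduce on a toy scale. A minimal sketch follows, using an invented 1D pair-removal game in place of the real BoxOff rules (2D/3D boards and box-clearing conditions are not reproduced); it estimates both statistics from the abstract: the fraction of challenges with a solution and the chance that strictly random play finds one.

```python
import random

def random_challenge(n_pairs=8, colours="RGB"):
    # Each sampled colour appears an even number of times.
    pieces = [random.choice(colours) for _ in range(n_pairs)] * 2
    random.shuffle(pieces)
    return pieces

def legal_moves(state):
    # Toy rule: any two same-coloured pieces with nothing left
    # between them (i.e. adjacent after removals) may be removed.
    return [i for i in range(len(state) - 1) if state[i] == state[i + 1]]

def random_playout(state):
    state = list(state)               # non-destructive copy
    while (moves := legal_moves(state)):
        i = random.choice(moves)
        del state[i:i + 2]            # remove the matched pair
    return not state                  # solved iff the board was cleared

def estimate(n_challenges=500, n_playouts=50):
    solvable = playout_wins = 0
    for _ in range(n_challenges):
        challenge = random_challenge()
        wins = sum(random_playout(challenge) for _ in range(n_playouts))
        solvable += wins > 0
        playout_wins += wins
    # First number lower-bounds the fraction of solvable challenges;
    # second is the success rate of strictly random play.
    return solvable / n_challenges, playout_wins / (n_challenges * n_playouts)

print(estimate())
```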

Tracking Drift Severity in Data Streams

The evolution of data, or concept drift, is a common phenomenon in data streams. Currently, most drift detection methods are able to locate the point of drift, but are unable to provide important information on the characteristics of change, such as the magnitude of change, which we refer to as drift severity. Monitoring drift severity provides crucial information to users, allowing them to formulate a more adaptive response. In this paper, we propose a drift detector, MagSeed, which is capable of tracking drift severity with a high rate of true positives and a low rate of false positives. We evaluate MagSeed on synthetic and real-world data, and compare it to the state-of-the-art drift detectors ADWIN2 and DDM.

Kylie Chen, Yun Sing Koh, Patricia Riddle
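
For reference, a compact sketch of the DDM baseline mentioned above (the drift detection method of Gama et al., 2004) is given below; unlike MagSeed, it flags drift points but says nothing about drift severity.

```python
import math

class DDM:
    """Classic DDM: track the classifier's running error rate p and its
    std s; warn at p + s >= p_min + 2*s_min, signal drift at + 3*s_min."""

    def __init__(self):
        self.n, self.p = 1, 1.0                      # sample count, error rate
        self.p_min, self.s_min = float("inf"), float("inf")

    def update(self, error: bool) -> str:
        self.p += (error - self.p) / self.n          # running mean of errors
        s = math.sqrt(self.p * (1 - self.p) / self.n)
        self.n += 1
        if self.n > 30 and self.p + s < self.p_min + self.s_min:
            self.p_min, self.s_min = self.p, s       # best level seen so far
        if self.p + s >= self.p_min + 3 * self.s_min:
            return "drift"
        if self.p + s >= self.p_min + 2 * self.s_min:
            return "warning"
        return "stable"
```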

Probabilistic Belief Contraction: Considerations on Epistemic Entrenchment, Probability Mixtures and KL Divergence

Probabilistic belief contraction is an operation that takes a probability distribution P representing a belief state along with an input sentence a representing some information to be removed from this belief state, and outputs a new probability distribution $$P^-_a$$. The contracted belief state $$P^-_a$$ can be represented as a mixture of two states: the original belief state P, and the resultant state $$P^*_{\lnot a}$$ of revising P by $$\lnot a$$. Crucial to this mixture is the mixing factor $$\epsilon $$ which determines the proportion of P and $$P^*_{\lnot a}$$ that are used in this process in a uniform manner. Ideas from information theory such as the principle of minimum cross-entropy have previously been used to motivate the choice of the probabilistic contraction operation. Central to this principle is the Kullback-Leibler (KL) divergence. In an earlier work we had shown that the KL divergence of $$P^-_a$$ from P is fully determined by a function whose only argument is the mixing factor $$\epsilon $$. In this paper we provide a way of interpreting $$\epsilon $$ in terms of a belief ranking mechanism such as epistemic entrenchment that is in consonance with this result. We also provide a much needed justification for why the mixing factor $$\epsilon $$ must be used in a uniform fashion by showing that the minimal divergence of $$P^-_{a}$$ from P is achieved only when uniformity is respected.

Kinzang Chhogyal, Abhaya Nayak, Abdul Sattar
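
The dependence of the divergence on $$\epsilon $$ can be illustrated numerically. Below is a toy computation over four possible worlds, reading “the KL divergence of $$P^-_a$$ from P” as $$D_{KL}(P^-_a \parallel P)$$ and taking the revision $$P^*_{\lnot a}$$ to be Bayesian conditioning on the $$\lnot a$$-worlds; both readings are assumptions for illustration only.

```python
import numpy as np

def kl(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

P     = np.array([0.4, 0.3, 0.2, 0.1])      # current belief state
P_rev = np.array([0.0, 0.0, 2/3, 1/3])      # P conditioned on the not-a worlds

for eps in (0.2, 0.5, 0.8):
    mix = eps * P + (1 - eps) * P_rev        # the contraction P_a^-
    print(eps, kl(mix, P))                   # divergence shrinks as eps -> 1
```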

DMAPP: A Distributed Multi-agent Path Planning Algorithm

Multi-agent path planning is a very challenging problem with several applications, and it has received a lot of attention in the last decade. Multi-agent optimal path planning is computationally intractable. Some algorithms have been suggested that may not return optimal plans but are useful in practice. These works mostly use centralized algorithms to compute plans. However, in a multi-agent setting it would be more appropriate for the agents, each with limited information, to compute the plans themselves. In this paper, we suggest a distributed multi-agent path planning algorithm, DMAPP, in which all phases are distributed. We have implemented DMAPP and compared its performance with some existing algorithms. The results show the effectiveness of our approach.

Satyendra Singh Chouhan, Rajdeep Niyogi

Graph-Based Collaborative Filtering Using Rating Nodes: A Solution to the High Ratings/Low Ratings Problem

Graph-based random walk models have recently become a popular approach to collaborative filtering recommendation systems. Under the conventional graph-based approach, a user node and an item node are connected if the user has rated the item, and the value of the rating is represented as the weight of the connection. Commencing from some target user, a random walk is performed on the graph, and the results are used to perform useful tasks such as ranking items in order of their importance to that user. Because a random walk favors heavily weighted connections, the walk is more likely to proceed through two users who share a high rating for some item than through users who share a low rating. This is a problem because similarity relations implicit in the data are not captured under this representation. We refer to this as the ‘High Ratings/Low Ratings’ problem. This paper proposes a novel graph representation scheme in which item ratings are represented using multiple nodes, allowing the flow of information through both low-rating and high-rating connections. Empirical results on the MovieLens dataset show that recommendation rankings made using the proposed scheme are much better correlated with the results in the test ratings, and that under a top-k evaluation there is an improvement of up to 15 % in precision and recall. An attractive feature of the approach is that it also associates a confidence value with each recommendation.

Alphan Culha, Andrew Skabar
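
One plausible reading of the multiple-node scheme sketched above is to give each (item, rating-value) pair its own node, so that shared low ratings and shared high ratings create equally walkable paths between users; the paper's exact wiring may differ. A minimal sketch under that assumption:

```python
import networkx as nx

# Toy ratings: u2 and u3 share only a LOW rating of item i2.
ratings = [("u1", "i1", 5), ("u2", "i1", 5), ("u2", "i2", 1), ("u3", "i2", 1)]

G = nx.Graph()
for user, item, r in ratings:
    rating_node = (item, r)          # e.g. ("i1", 5): one node per rating value
    G.add_edge(user, rating_node)    # user <-> (item, rating) node
    G.add_edge(rating_node, item)    # (item, rating) node <-> item node

# Personalised PageRank from a target user ranks items for recommendation.
scores = nx.pagerank(G, personalization={"u3": 1.0})
items = {"i1", "i2"}
print(sorted(((n, s) for n, s in scores.items() if n in items),
             key=lambda t: -t[1]))
```

Here u2 and u3 become linked through their shared low rating of i2, which is exactly the similarity information that a single weighted edge would suppress.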

A Comparative Study on Vector Similarity Methods for Offer Generation in Multi-attribute Negotiation

Offer generation is an important mechanism in automated negotiation, in which a negotiating agent needs to select bids close to the opponent preference to increase their chance of being accepted. The existing offer generation approaches are either random, require partial knowledge of opponent preference or are domain-dependent. In this paper, we investigate and compare two vector similarity functions for generating offer vectors close to opponent preference. Vector similarities are not domain-specific, do not require different similarity functions for each negotiation domain and can be computed in incomplete-information negotiation. We evaluate negotiation outcomes by the joint gain obtained by the agents and by their closeness to Pareto-optimal solutions.

Aodah Diamah, Michael Wagner, Menkes van den Briel

Analytical Results on the BFS vs. DFS Algorithm Selection Problem. Part I: Tree Search

Breadth-first search (BFS) and depth-first search (DFS) are the two most fundamental search algorithms. We derive approximations of their expected runtimes in complete trees, as a function of tree depth and probabilistic goal distribution. We also demonstrate that the analytical approximations are close to the empirical averages for most parameter settings, and that the results can be used to predict the best algorithm given the relevant problem features.

Tom Everitt, Marcus Hutter
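
The empirical averages mentioned above are easy to reproduce in simulation. Below is a small sketch assuming a complete binary tree with a single goal placed uniformly at a fixed depth; the paper's analytical model handles more general probabilistic goal distributions.

```python
import random

D = 12        # depth of the complete binary tree
d = 8         # depth at which the single goal sits
TRIALS = 200

def bfs_cost(goal):
    # Level-order BFS visits every node above the goal level, plus the
    # goal's (random) left-to-right position within its own level.
    rank = int("".join(map(str, goal)), 2)
    return (2 ** d - 1) + rank + 1

def dfs_cost(goal):
    # Depth-first search with random child order on the implicit tree.
    count, stack = 0, [()]                   # paths encoded as bit tuples
    while stack:
        path = stack.pop()
        count += 1
        if len(path) == d and list(path) == goal:
            return count
        if len(path) < D:
            kids = [path + (0,), path + (1,)]
            random.shuffle(kids)
            stack.extend(kids)
    return count

for algo in (bfs_cost, dfs_cost):
    costs = [algo([random.randint(0, 1) for _ in range(d)]) for _ in range(TRIALS)]
    print(algo.__name__, sum(costs) / TRIALS)
```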

Analytical Results on the BFS vs. DFS Algorithm Selection Problem: Part II: Graph Search

The algorithm selection problem asks to select the best algorithm for a given problem. In the companion paper (Everitt and Hutter 2015b), expected runtime was approximated as a function of search depth and probabilistic goal distribution for tree search versions of breadth-first search (BFS) and depth-first search (DFS). Here we provide an analogous analysis of BFS and DFS graph search, deriving expected runtime as a function of graph structure and goal distribution. The applicability of the method is demonstrated through analysis of two different grammar problems. The approximations come surprisingly close to empirical reality.

Tom Everitt, Marcus Hutter

Region-Growing Planar Segmentation for Robot Action Planning

Object detection, classification and manipulation are some of the capabilities required by autonomous robots. The main steps in object classification are segmentation, feature extraction, object representation and learning. To address the problem of learning object classification from multi-view range data, we use a relational approach. The first step of our object classification method is to decompose a scene into shape primitives such as planes, followed by extracting a set of higher-level, relational features from the segmented regions. In this paper, we compare our plane segmentation algorithm with publicly available state-of-the-art plane segmentation algorithms. We show that our segmentation visually outperforms them and also produces better results for robot action planning.

Reza Farid

A Differentially Private Random Decision Forest Using Reliable Signal-to-Noise Ratios

When dealing with personal data, it is important for data miners to have algorithms available for discovering trends and patterns in the data without exposing people’s private information. Differential privacy offers an enforceable definition of privacy that can provide each individual in a dataset a guarantee that their personal information is no more at risk than it would be if their data was not in the dataset at all. By using mechanisms that achieve differential privacy, we propose a decision forest algorithm that uses the theory of Signal-to-Noise Ratios to automatically tune the algorithm’s parameters, and to make sure that any differentially private noise added to the results does not outweigh the true results. Our experiments demonstrate that our differentially private algorithm can achieve high prediction accuracy.

Sam Fletcher, Md Zahidul Islam
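
The core primitive here is standard: a count query answered with Laplace noise scaled to sensitivity/epsilon, and a signal-to-noise ratio comparing the true count to the noise. A minimal sketch follows; the paper's actual SNR thresholds and parameter-tuning rules are not reproduced.

```python
import numpy as np

def noisy_count(true_count: int, epsilon: float, sensitivity: float = 1.0):
    scale = sensitivity / epsilon              # Laplace scale b
    noise_std = np.sqrt(2) * scale             # std of Laplace(0, b)
    snr = true_count / noise_std               # signal-to-noise ratio
    return true_count + np.random.laplace(0.0, scale), snr

value, snr = noisy_count(true_count=500, epsilon=0.1)
print(value, snr)   # with snr >> 1 the noise is unlikely to swamp the count
```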

Learning from Demonstration Using GMM, CHMM and DHMM: A Comparison

Production and safety in the mining industry can be enhanced by the use of automated vehicles. This paper presents results from applying Learning from Demonstration (LfD) to a laboratory semi-automated mine inspection robot following a path through a simulated mine. Three methods, the Gaussian Mixture Model (GMM), the Continuous Hidden Markov Model (CHMM), and the Discrete Hidden Markov Model (DHMM), were used to implement the LfD, and a comparison of the implementation results is presented. The results from the different models were then used to implement a novel, optimised path decomposition technique that may be suitable for robot use within an underground mine.

Fenglu Ge, Wayne Moore, Michael Antolovich

Implementing Modal Tableaux Using Sentential Decision Diagrams

A Sentential Decision Diagram (SDD) is a novel representation of a Boolean function; SDDs contain Binary Decision Diagrams (BDDs) as a subclass. Previous research suggests that BDDs are effective in implementing tableaux-based automated theorem provers. We investigate whether SDDs can offer improved efficiency when used in the same capacity. Preliminarily, we found that SDDs compile faster than BDDs only on large CNF formulae. In general, we found that the BDD-based modal theorem prover still outperforms our SDD-based modal theorem prover. However, the SDD-based approach excels over the BDD-based approach on a select subset of benchmarks that have large sizes and modalities that are less nested or fewer in number.

Rajeev Goré, Jason Jingshi Li, Thomas Pagram

Decision Making Strategy Based on Time Series Data of Voting Behavior

In gambling such as horse racing, we can sometimes observe peculiar voting (betting) behavior by a punter holding advantageous information closely related to the results. Such a punter is often referred to as an insider. In this study, our goal is to propose a reasonable investment strategy by peeping at insiders' decision-making, based on time series odds data from horse racing events held by the JRA. We have found conditions under which the rate of return exceeds 642 % for races whose winner's prize money is 20 million yen or more. This suggests the possibility of Knowledge Peeping.

Shogo Higuchi, Ryohei Orihara, Yuichi Sei, Yasuyuki Tahara, Akihiko Ohsuga

Finding Within-Organisation Spatial Information on the Web

Information available on company websites can help people navigate to the offices of groups and individuals within the company. Automatically retrieving this within-organisation spatial information is a challenging AI problem. This paper introduces a novel unsupervised pattern-based method to extract within-organisation spatial information by taking advantage of HTML structure patterns, together with a novel Conditional Random Fields (CRF) based method to identify different categories of within-organisation spatial information. The results show that the proposed method can achieve high performance in terms of F-Score, indicating that this purely syntactic method based on web search and an analysis of HTML structure is well suited to retrieving within-organisation spatial information.

Jun Hou, Ruth Schulz, Gordon Wyeth, Richi Nayak

Knowledge Sharing in Coalitions

The aim of this paper is to investigate the interplay between the knowledge shared by a group of agents and its coalition ability. We characterize this relation in the standard context of concurrent games with imperfect information. We assume that whenever a set of agents forms a coalition to achieve a goal, they share their knowledge before acting. Based on this assumption, we propose a new semantics for alternating-time temporal logic with imperfect information and perfect recall. It turns out that this semantics is sufficient to preserve all the desirable properties of coalition ability in traditional coalitional logics. Meanwhile, we also show that the fixed-point characterisations of coalition operators, which normally fail in the context of imperfect information, can be recovered through the interplay of epistemic and coalition modalities. This work provides a partial answer to the question of which kind of group knowledge is required for a group to achieve its goals in the context of imperfect information.

Guifei Jiang, Dongmo Zhang, Laurent Perrussel

Possibilistic Inferences in Answer Set Programming

Answer set programming (ASP) has been extended to possibilistic ASP (PASP), in which the notion of possibilistic stable models is defined for possibilistic logic programs. However, possibilistic inferences that correspond to the three inferences in ordinary possibilistic logic have not yet been explored in PASP. In this paper, based on the skeptical reasoning determined by possibilistic stable models, we define three inference relations for PASP, provide their equivalent characterisations in terms of possibility distributions, and develop algorithms for these possibilistic inferences. Our algorithms are achieved by generalising some important concepts (Clark's completion, loop formulas, and guarded resolution) and properties from ASP to PASP. Besides their theoretical importance, these results can be used to develop efficient implementations for possibilistic reasoning in ASP.

Yifan Jin, Kewen Wang, Zhe Wang

A Two Tiered Finite Mixture Modelling Framework to Cluster Customers on EFTPOS Network

This paper proposes a framework to build a clustering model of customers of the retailers on the EFTPOS network of a major bank in Australia. The framework consists of two clustering tiers using Finite Mixture Modelling (FMM) that segments customers based on their probabilities of generating transactions of different categories. The first tier generates the transaction categories and the second tier segments the customers, each with a vector of the fractions of their transaction categories as parameters. For each tier, we determine the optimal number of clusters based on the Minimum Message Length (MML) criterion. With the premise that the most valuable customer segment is one that is most likely to generate the most valuable transaction category, we rank the customer segments based on their respective joint probabilities with the most valuable transaction category. By doing so, we are able to reveal the relative value of each customer segment.

Yuan Jin, Grace Rumantir

A Dual Network for Transfer Learning with Spike Train Data

A massive amount of data is being produced in a continual stream, in real time, which renders traditional batch-processing-based data mining and neural network techniques incapable. In this paper we focus on transfer learning from spike train data, for which traditional techniques often require tasks to be distinctively identified during the training phase. We propose a novel dual network model that demonstrates transfer learning from spike train data without explicit task specification. An implementation of the proposed approach was tested experimentally to evaluate its ability to use previously learned knowledge to improve the learning of new tasks.

Keith Johnson, Wei Liu

Stable Feature Selection with Support Vector Machines

The support vector machine (SVM) is a popular method for classification, well known for finding the maximum-margin hyperplane. Combining the SVM with an $$l_{1}$$-norm penalty further enables it to simultaneously perform feature selection and margin maximization within a single framework. However, the $$l_{1}$$-norm SVM shows instability in selecting features in the presence of correlated features. We propose a new method to increase the stability of the $$l_{1}$$-norm SVM by encouraging similarities between feature weights based on feature correlations, which are captured via a feature covariance matrix. Our proposed method can capture both positive and negative correlations between features. We formulate the model as a convex optimization problem and propose a solution based on alternating minimization. Using both synthetic and real-world datasets, we show that our model achieves better stability and classification accuracy compared to several state-of-the-art regularized classification methods.

Iman Kamkar, Sunil Kumar Gupta, Dinh Phung, Svetha Venkatesh
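
One plausible way to write such a covariance-guided penalty (the authors' exact formulation may differ) is

$$\min_{w,b}\; \sum_{i=1}^{n} \max\bigl(0,\, 1 - y_i(w^\top x_i + b)\bigr) \;+\; \lambda_1 \lVert w \rVert_1 \;+\; \lambda_2 \sum_{j<k} \lvert \rho_{jk} \rvert \bigl(w_j - \mathrm{sign}(\rho_{jk})\, w_k\bigr)^2$$

where $$\rho_{jk}$$ denotes the (j, k) entry of the feature covariance matrix: positively correlated features are pushed toward equal weights, negatively correlated features toward opposite weights, and uncorrelated features are left unconstrained, which matches the stated ability to handle both correlation signs.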

Gene Transfer: A Novel Genetic Operator for Discovering Diverse-Frequent Patterns

Genetic algorithms (GAs), based on evolutionary principles, have found a strong base in pattern set mining, and have proved to generate more accurate results than other formal methods in past years. In this paper, we present a new genetic operator called gene transfer within the GA framework. Genes are parts of the genetic code that represent partial solutions to the problem. The proposed GA operator creates a pool of genes, or partial solutions, that are of good health, and tries to replace genes of similar size in the population to improve the fitness of the individuals. Applied along with traditional genetic operators such as crossover and mutation, this operation speeds up convergence and finds individuals with better fitness. On a set of standard benchmark datasets, we show experimentally that our new genetic operator consistently improves the performance of a genetic algorithm.

Shanjida Khatun, Hasib Ul Alam, Mahmood A. Rasid, Swakkhar Shatabda

Task Allocation Using Particle Swarm Optimisation and Anomaly Detection to Generate a Dynamic Fitness Function

In task allocation a group of agents perform search and discovery of tasks, then allocate themselves to complete those tasks. Tasks are assumed to have a strong signature by which they can be identified. This paper considers task allocation in environments where the definition of a task is weak and can change over time. Specifically, we define tasks as environmental anomalies and present a new optimisation-based task allocation algorithm using anomaly detection to generate a dynamic fitness function. We present experiments in a simulated environment to show that agents using this algorithm can generate a dynamic fitness function using anomaly detection. They can then converge on optima in this function using particle swarm optimisation. The demonstration is conducted in a workplace hazard identification simulation.

Adam Klyne, Kathryn Merrick

Evolving High Fidelity Low Complexity Sheepdog Herding Simulations Using a Machine Learner Fitness Function Surrogate for Human Judgement

Multi-agent simulations facilitate modelling complex behaviours through simple rules codified within the agents. However, manual exploration of effective rules is generally prohibitive, or at least sub-optimal, in these types of simulations. Evolutionary techniques can effectively explore the rule and parameter space for high fidelity simulations; however, pairing these two techniques is challenging in classes of problems where the evaluation of simulation fidelity relies on human judgement. In this work we present a machine learning approach to evolve high fidelity, low complexity sheepdog herding simulations. A multi-objective evolutionary algorithm is applied to evolve simulations with high fidelity and low complexity as the two objectives to be optimised. Fidelity is measured via a machine learning system trained using a small set of training samples of human judgements, whereas complexity is measured using the number of rules and parameters used to codify the agents in the system. The experimental results demonstrate the effectiveness of the approach in evolving high fidelity, low complexity multi-agent simulations when fidelity cannot be measured objectively.

Erandi Lakshika, Michael Barlow, Adam Easton

An Episodic Memory Retrieval Algorithm for the Soar Cognitive Architecture

Episodic memory in cognitive architectures allows intelligent agents to query their past experiences to influence decision making. Episodic memory in the Soar cognitive architecture must efficiently encode, store, search and reconstruct episodes as snapshots of short term working memory. The performance of the current search algorithm can be improved by doing a structural match phase first on all stored data and enforcing arc consistency in the first stage of the structural match. We demonstrate experimentally the performance of the improved search algorithm in two Soar environments.

Francis Li, Jesse Frost, Braden J. Phillips

Finding the k in K-means Clustering: A Comparative Analysis Approach

This paper explores the application of inequality indices, a concept successfully applied in comparative software analysis among many other application domains, to find the optimal value k for k-means when clustering road traffic data. We demonstrate that traditional methods for identifying the optimal value of k (such as the gap statistic and Pham et al.'s method) are unable to produce meaningful values of k when applied to a real-world road traffic dataset. On the other hand, a method based on inequality indices shows significant promise in producing much more sensible values for the number k of clusters to be used in k-means clustering of the same road network traffic dataset.

Markus Lumpe, Quoc Bao Vo
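
A minimal sketch of the idea follows, using the Gini coefficient applied to cluster sizes as the inequality index; the paper's concrete choice of index, measured quantity, and decision rule may differ, and X below is a random stand-in for the traffic feature matrix.

```python
import numpy as np
from sklearn.cluster import KMeans

def gini(x):
    """Gini coefficient of a non-negative sample (0 = perfect equality)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * np.sum(x)) - (n + 1) / n

def score_k(X, ks=range(2, 15)):
    for k in ks:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        sizes = np.bincount(labels, minlength=k)
        print(k, gini(sizes))   # low inequality => evenly populated clusters

X = np.random.rand(500, 4)      # placeholder for the road traffic dataset
score_k(X)
```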

Discovering Causal Structures from Time Series Data via Enhanced Granger Causality

Granger causality has been applied to explore predictive causal relations among multiple time series in various fields. However, the existence of non-stationary distributional changes among the time series variables poses significant challenges. By analyzing a real dataset, we observe that factors such as noise, distribution changes and shifts increase the complexity of the modelling, and large errors often occur when the underlying distribution shifts with time. Motivated by this challenge, we propose a new regression model for causal structure discovery, a Linear Model with Weighted Distribution Shift (linear WDS), which improves the prediction accuracy of the Granger causality model by taking into account the weights of the distribution-shift samples and by optimizing a quadratic-mean-based objective function. The linear WDS is integrated into the Granger causality model to improve the inference of the predictive causal structure. The performance of the enhanced Granger causality model is evaluated on synthetic datasets and real traffic datasets, and the proposed model is compared with three different regression-based Granger causality models (standard linear regression, robust regression and quadratic-mean-based regression). The results show that the enhanced Granger causality model outperforms the other models, especially when there are distribution shifts in the data.

Ling Luo, Wei Liu, Irena Koprinska, Fang Chen
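
For context, the plain (unweighted) linear Granger test that the enhanced model builds on compares the residuals of an autoregressive model for y with and without lagged x terms; a minimal sketch is below. The WDS sample weighting and quadratic-mean objective are not shown.

```python
import numpy as np

def lagged(v, p):
    """Matrix whose row t holds v[t-1], ..., v[t-p] aligned with v[p:]."""
    return np.column_stack([v[p - i - 1 : len(v) - i - 1] for i in range(p)])

def granger_gain(y, x, p=2):
    Y = y[p:]
    restricted = np.column_stack([np.ones(len(Y)), lagged(y, p)])
    full = np.column_stack([restricted, lagged(x, p)])
    rss = lambda A: np.sum((Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0]) ** 2)
    # Relative drop in residual sum of squares when lagged x is added;
    # a clearly positive value suggests x helps predict y.
    return (rss(restricted) - rss(full)) / rss(full)

x = np.random.randn(400)
y = np.roll(x, 2) + 0.5 * np.random.randn(400)   # y driven by x with lag 2
print(granger_gain(y, x, p=3))
```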

Automating Marine Mammal Detection in Aerial Images Captured During Wildlife Surveys: A Deep Learning Approach

Aerial surveys conducted using manned or unmanned aircraft with customized camera payloads can generate a large number of images. Manual review of these images to extract data is prohibitive in terms of time and financial resources, thus providing strong incentive to automate this process using computer vision systems. There are potential applications for these automated systems in areas such as surveillance and monitoring, precision agriculture, law enforcement, asset inspection, and wildlife assessment. In this paper, we present an efficient machine learning system for automating the detection of marine species in aerial imagery. The effectiveness of our approach can be credited to the combination of a well-suited region proposal method and the use of Deep Convolutional Neural Networks (DCNNs). In comparison to previous algorithms designed for the same purpose, we have been able to dramatically improve recall to more than 80 % and improve precision to 27 % by using DCNNs as the core approach.

Frederic Maire, Luis Mejias Alvarez, Amanda Hodgson

Path Algebra for Mobile Robots

In this paper, we introduce a path algebra well suited to navigation in environments that can be abstracted as topological graphs. From this path algebra, we derive algorithms to reduce routes in such environments. The routes are reduced in the sense that they are shorter (contain fewer edges) but still connect the endpoints of the initial routes. Contrary to planning methods descended from Dijkstra's Shortest Path Algorithm, like $$D^\star $$, the navigation methods derived from our path algebra do not require any graph representation. We prove that the reduced routes are optimal when the graphs are without cycles. In the case of graphs with cycles, we prove that whatever the length of the initial route, the length of the reduced route is bounded by a constant that depends only on the structure of the environment.

Frederic Maire, Gavin Suddrey
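
The simplest route reduction consistent with the stated goal is loop erasure: whenever the route revisits a vertex, cut out the intervening loop. A minimal sketch follows; the paper's path algebra supports richer rewriting than this.

```python
def reduce_route(route):
    """Erase loops from a route (sequence of vertex labels); the result is
    no longer than the input and has the same endpoints."""
    out, last_seen = [], {}
    for v in route:
        if v in last_seen:                 # revisiting v: cut the loop
            keep = last_seen[v] + 1
            for u in out[keep:]:
                del last_seen[u]
            out = out[:keep]
        else:
            last_seen[v] = len(out)
            out.append(v)
    return out

print(reduce_route(["a", "b", "c", "b", "d", "a", "e"]))  # ['a', 'e']
```

Note that the reduced route can never be longer than the number of vertices in the environment, which is in keeping with the constant bound stated above.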

Abduction in PDT Logic

Probabilistic Doxastic Temporal (PDT) Logic is a formalism to represent and reason about belief evolutions in multi-agent systems. In this work we develop a theory of abduction for PDT Logic. This enables novel reasoning capabilities by determining which epistemic actions can be taken in order to induce an evolution of probabilistic beliefs into a desired goal state. Next to providing a formal account of abduction in PDT Logic, we identify pruning strategies for the solution space, and give a sound and complete algorithm to find minimal solutions to the abduction problem.

Karsten Martiny, Ralf Möller

Exploiting Innocuousness in Bayesian Networks

Boolean combination functions in Bayesian networks, such as noisy-or, are often credited with a property stating that inactive dependencies (e.g., observed to be false) do not “cause any harm”, so that the corresponding arc becomes vacuous and could have been left out. However, in classic Bayesian networks we are not able to express this property in local CPDs. Using novel ADBNs, we formalize the innocuousness property in CPDs and extend previous work on context-specific independencies. With an explicit representation of innocuousness in local CPDs, we provide higher causal accuracy for CPD specifications and open new ways for more efficient and less restricted reasoning in (A)DBNs.

Alexander Motzek, Ralf Möller

A Tweet Classification Model Based on Dynamic and Static Component Topic Vectors

This paper presents an unsupervised architecture for retrieving and ranking conceptually related tweets, which can be used in real time. We present a model for ranking tweets with respect to topic relevance in order to improve the accuracy of information extraction. The proposed architecture uses concept enrichment from a knowledge source in order to expand the concept beyond the search keywords. The enriched concept is used to determine similarity levels between tweets and the given concept, followed by a ranking of those tweets based on the different similarity values. Tweets above a certain similarity threshold are considered useful for providing relevant information (this is not part of this paper). We obtained precision values up to 0.81 and F values up to 0.61 for a corpus of 2400 tweets on topics related to the 2014 NZ general elections.

Parma Nand, Rivindu Perera, Gisela Klette
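
The similarity-and-threshold step described above can be sketched with a plain bag-of-words stand-in, shown below; the paper's architecture additionally enriches the concept from a knowledge source and combines dynamic and static topic vectors, which is not reproduced, and the threshold value here is hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

concept = "NZ general election 2014 voting parties polls"   # enriched query
tweets = [
    "Polls open tomorrow for the NZ general election",
    "My cat discovered the laser pointer today",
    "Party leaders debate ahead of the 2014 vote",
]

vec = TfidfVectorizer().fit(tweets + [concept])
sims = cosine_similarity(vec.transform([concept]), vec.transform(tweets))[0]

threshold = 0.1   # hypothetical cut-off for "relevant"
for s, t in sorted(zip(sims, tweets), reverse=True):
    print(f"{s:.2f} {'KEEP' if s >= threshold else 'drop'} {t}")
```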

Understanding Toxicities and Complications of Cancer Treatment: A Data Mining Approach

Cancer remains a major challenge in modern medicine. The increasing prevalence of cancer, particularly in developing countries, demands a better understanding of the effectiveness and adverse consequences of different cancer treatment regimes in real patient populations. Current understanding of cancer treatment toxicities is often derived from either “clean” patient cohorts or coarse population statistics, and it is difficult to get up-to-date, local assessments of treatment toxicities for specific cancer centres. In this paper, we applied an Apriori-based method for discovering toxicity progression patterns in the form of temporal association rules. Our experiments show the effectiveness of the proposed method in discovering major toxicity patterns in comparison with pairwise association analysis. Our method is applicable to most cancer centres with even rudimentary electronic medical records and has the potential to provide real-time surveillance and quality assurance in cancer care.

Dang Nguyen, Wei Luo, Dinh Phung, Svetha Venkatesh

A Representation Theorem for Spatial Relations

Spatial relations have been investigated in various inter-related areas such as qualitative spatial reasoning (for agents moving in an environment), geographic information science, general topology, and others. Most of the results are specific constructions of spatial relations that fulfill some required properties. Results on setting up axioms that capture the desired properties of the relations are rare. And results that characterize spatial relations in the sense that they give a complete set of axioms for the intended spatial relations still have to be presented. This paper aims at filling the gap by providing a representation theorem: It shows that there is a finite set of axioms that are fulfilled by a binary relation if and only if it can be constructed as a binary spatial relation based on a nested partition chain.

Özgür Lütfü Özçep

Stream-Query Compilation with Ontologies

Rational agents perceiving data from a dynamic environment and acting in it have to be equipped with capabilities such as decision making, planning etc. We assume that these capabilities are based on query answering with respect to (high-level) streams of symbolic descriptions, which are grounded in (low-level) data streams. Queries need to be answered w.r.t. an ontology. The central idea is to compile ontology-based stream queries (continuous or historical) to relational data processing technology, for which efficient implementations are available. We motivate our query language STARQL (Streaming and Temporal ontology Access with a Reasoning-Based Query Language) with a sensor data processing scenario, and compare the approach realized in the STARQL framework with related approaches regarding expressivity.

Özgür Lütfü Özçep, Ralf Möller, Christian Neuenstadt

Vote Counting as Mathematical Proof

Trust in the correctness of an election outcome requires proof of the correctness of vote counting. By formalising particular voting protocols as rules, correctness of vote counting amounts to verifying that all rules have been applied correctly. A proof of the outcome of any particular election then consists of a sequence (or tree) of rule applications and provides an independently checkable certificate of the validity of the result. This reduces the need to trust, or otherwise verify, the correctness of the vote counting software once the certificate has been validated. Using a rule-based formalisation of voting protocols inside a theorem prover, we synthesise vote counting programs that are not only provably correct, but also produce independently verifiable certificates. These programs are generated from a (formal) proof that every initial set of ballots allows to decide the election winner according to a set of given rules.

Dirk Pattinson, Carsten Schürmann

Answer Presentation with Contextual Information: A Case Study Using Syntactic and Semantic Models

Answer presentation is a subtask of Question Answering that investigates ways of presenting an acquired answer to the user in a format that is close to a human-generated answer. In this research we explore models to retrieve additional, relevant contextual information corresponding to a question and present an enriched answer by integrating the additional information as natural language. We investigate the role of Bag of Words (BoW) and Bag of Concepts (BoC) models in retrieving the relevant contextual information. The information source utilized is a Linked Data resource, DBpedia, which encodes large amounts of knowledge corresponding to Wikipedia in structured form as triples. The experiments utilize the QALD question sets, consisting of training and testing sets each containing 100 questions. The results from these experiments show that pragmatic aspects, which are often neglected by BoW (syntactic) and BoC (semantic) models, form a critical part of contextual information selection.

Rivindu Perera, Parma Nand

A Multi-Agent Approach for Decentralized Voltage Regulation by Considering Distributed Generators

Distributed generators (DGs) have come to be considered significant components of modern micro grids in recent years because they can provide instant and renewable electric power to consumers without using transmission networks. However, the use of DGs may affect the use of voltage regulators in a micro grid because the DGs are usually privately owned and cannot be centrally managed. In this paper, an innovative multi-agent approach is proposed to perform automatic and decentralized control of distributed electric components in micro grids. Autonomous software agents are employed to make locally optimal decisions on voltage regulation by considering multiple objectives and local information, and agent-based communication and collaboration are employed to work toward global voltage regulation through dynamic task allocation. The proposed approach contains three layers, representing the micro grid, the multi-agent system and the human-computer interface, and is implemented using three Java-based packages: InterPSS, JADE and JUNG, respectively.

Fenghui Ren, Minjie Zhang, Chao Yu

Turning Gaming EEG Peripherals into Trainable Brain Computer Interfaces

Companies such as NeuroSky and Emotiv Systems are selling non-medical EEG devices for human-computer interaction. These devices are significantly more affordable than their medical counterparts, and are mainly used to measure levels of engagement, focus, relaxation and stress. This information is sought after for marketing research and games. However, these EEG devices have the potential to enable users to interact with their surrounding environment using thoughts only, without activating any muscles. In this paper, we present preliminary results demonstrating that, despite reduced voltage and time sensitivity compared to medical-grade EEG systems, the signal quality of the Emotiv EPOC neuroheadset is sufficiently good to allow discrimination between imaging events. We collected streams of raw EEG data and trained different types of classifiers to discriminate between three states (rest and two imaging events). We achieved a generalisation error of less than 2 % for two types of non-linear classifiers.

Manisha Senadeera, Frederic Maire, Andry Rakotonirainy

Event Classification Using Adaptive Cluster-Based Ensemble Learning of Streaming Sensor Data

Sensor data stream mining methods have recently attracted significant attention in smart-home research. Through the use of sliding windows on the streaming sensor data, activities can be recognized from the sensor events. However, it remains a challenge to attain real-time activity recognition from online streaming sensor data. This paper proposes a new event classification method called Adaptive Cluster-Based Ensemble Learning of Streaming sensor data (ACBEstreaming). It contains desirable features such as adaptively windowing sensor events, detecting relevant sensor events using a time decay function, preserving past sensor information in the current window, and forming online clusters of streaming sensor data. The proposed approach improves the representation of streaming sensor events, and learns and recognizes activities in an online fashion. Experiments conducted using a real-world smart home dataset for activity recognition achieved better results than current approaches.

Ahmad Shahi, Brendon J. Woodford, Jeremiah D. Deng

Standoff-Balancing: A Novel Class Imbalance Treatment Method Inspired by Military Strategy

A class-imbalanced dataset contains a disproportionate number of records of a certain class compared to the other classes. Classifiers built from class-imbalanced datasets are biased and thus under-perform on the minority class. Treatment methods such as sampling and cost-sensitivity can be used to negate the bias induced by class imbalance. In this study, we present an analogy between class imbalance and war. By creating this analogy, we make it possible for military strategies to be applied to class-imbalanced datasets. We propose a novel class imbalance treatment method, Standoff-Balancing, which uses a well-known mathematical law from the military strategy literature. We compare the proposed technique with four existing techniques on five real-world datasets. Our experiments show that the proposed technique may provide a higher AUC than existing techniques.

Michael J. Siers, Md Zahidul Islam

Wisdom of Crowds: An Empirical Study of Ensemble-Based Feature Selection Strategies

The accuracy of feature selection methods is affected both by the nature of the underlying datasets and by the actual machine learning algorithms they are combined with. The role these factors play in the final accuracy of the classifiers is generally unknown in advance. This paper presents an ensemble-based feature selection approach that addresses this uncertainty and mitigates the variability in the generalisation of the classifiers. The study conducts extensive experiments with combinations of three feature selection methods on nine datasets, which are trained with eight different types of machine learning algorithms. The results confirm that the ensemble-based approaches to feature selection tend to produce classifiers with higher accuracies, and are more reliable due to decreased variance and thus more generalisable.

Teo Susnjak, David Kerry, Andre Barczak, Napoleon Reyes, Yaniv Gal

Modeling Ice Storm Climatology

Extreme weather events such as ice storms cause significant damage to life and property. Accurately forecasting ice storms sufficiently in advance to offset their impacts is very challenging because they are driven by atmospheric processes that are complex and not completely defined. Furthermore, such forecasting has to consider the influence of a changing climate on relevant atmospheric variables, but it is difficult to generalise existing expertise in the absence of observed data, making the underlying computational challenge all the more formidable. This paper describes a novel computational framework to model ice storm climatology. The framework is based on an objective identification of ice storm events by key variables derived from vertical profiles of temperature, humidity, and geopotential height (a measure of pressure). Historical ice storm records are used to identify days with synoptic-scale upper air and surface conditions consistent with an ice storm. Sophisticated classification algorithms and feature selection algorithms provide a computational representation of the behavior of the relevant physical climate variables during ice storms. We evaluate the proposed framework using reanalysis data of climate variables and historical ice storm records corresponding to the northeastern USA, demonstrating the effectiveness of the climatology models and providing insights into the relationships between the relevant climate variables.

Ranjini Swaminathan, Mohan Sridharan, Gillian Dobbie, Katharine Hayhoe

Investigating Japanese Ijime (Bullying) Behavior Using Agent-Based and System Dynamics Models

In recent years, Ijime (Japanese bullying) among Japanese students has been a growing concern. To understand the effects of this behavior at a detailed level, we build an Agent-Based Model (ABM) and conduct a set of experiments. In the model, interactions between victim and bully occur as a Prisoner's Dilemma game. A System Dynamics model is also built and simulated to verify robustness and examine the effects of the assumptions in the ABM. The results indicate that students can attain higher social standing by being uncooperative. As such, if one is a victim, it is recommended not to comply with the bully; in the case of being a bully, one needs to take the role seriously. Thus, by encouraging victims to be aggressive toward bullies, the effect of Ijime can be alleviated.

Chaiwat Thawiworadilok, Mohsen Jafari Songhori, Takao Terano

On the Krom Extension of $$\mathcal {CFDI}^{\forall -}_{nc}$$

We consider the consequences on basic reasoning problems of the Krom extension to the description logic dialect $$\mathcal {CFDI}^{\forall -}_{nc}$$, that is, of allowing negated primitive concepts on left-hand-sides of inclusion dependencies. Specifically, we show that TBox consistency and concept satisfiability remain in PTIME, but that this extension leads to intractability for both knowledge base consistency and instance retrieval. We then trace the roots of intractability by presenting tight conditions that recover PTIME complexity for both of these problems. The conditions relate to the structure of functional constraints in $$\mathcal {CFDI}^{\forall -}_{nc}$$ and to the unique name assumption.

David Toman, Grant Weddell

Understanding People Relationship: Analysis of Digitised Historical Newspaper Articles

The study of historical persons and their relationships gives an insight into the lives of people and the way society functioned in earlier times. Such information concerning Australian history can be gleaned from Trove's digitised collection of historical newspapers (1803–1954). This research aims to mine Trove's articles using closed and maximal association rule mining, along with visualization tools, to discover, conceptualize and understand the type, size and complexity of the notable relationships that existed between persons in historical Australia. Before the articles could be mined, they needed vigorous cleaning: given the data's source, type and extraction methods, estimated word-error rates were at 50–75 %. Pre-processing efforts were aimed at reducing errors originating from optical character recognition (OCR), natural language processing and co-referencing both within and between articles. Only after cleaning were the datasets able to return interesting associations at higher support thresholds.

Sharon Torao-Pingi, Richi Nayak

Towards the Elimination of the Miscommunication Between Users in Twitter

Tweet Classification Based on Expected Responses by User

In recent years, Twitter responses from users who do not share the intentions and expectations of the original poster have caused discomfort and stress, a social phenomenon known as SNS fatigue. For example, a user may receive answers that differ from her/his expectations after posting a question on the timeline. Behind such responses lies a miscommunication between users. To resolve this problem, it is important to know what the original poster expected as responses to her/his tweet. In this paper, we propose a method for classifying tweets according to the responses that users expect, and evaluate it experimentally. As a result, we show that tweets for which the poster does not expect any replies can be classified with an average precision of 76.2 %.

Tomoaki Ueda, Ryohei Orihara, Yuichi Sei, Yasuyuki Tahara, Akihiko Ohsuga

Reinforcement Learning of Pareto-Optimal Multiobjective Policies Using Steering

There has been little research into multiobjective reinforcement learning (MORL) algorithms using stochastic or non-stationary policies, even though such policies may Pareto-dominate deterministic stationary policies. One approach is steering which forms a non-stationary combination of deterministic stationary base policies. This paper presents two new steering algorithms designed for the task of learning Pareto-optimal policies. The first algorithm (w-steering) is a direct adaptation of previous approaches to steering, and therefore requires prior knowledge of recurrent states which are guaranteed to be revisited. The second algorithm (Q-steering) eliminates this requirement. Empirical results show that both algorithms perform well when given knowledge of recurrent states, but that Q-steering provides substantial performance improvements over w-steering when this knowledge is not available.

Peter Vamplew, Rustam Issabekov, Richard Dazeley, Cameron Foale

Absorption for ABoxes and TBoxes with General Value Restrictions

We consider the instance checking problem for $$\mathcal {SHIQ(\mathbf {D})}$$ knowledge bases. In particular, we present a procedure that significantly reduces the number of ABox individuals that need to be examined for a given instance checking problem over a consistent $$\mathcal {SHIQ(\mathbf {D})}$$ knowledge base that contains arbitrary occurrences of value restrictions. The procedure extends earlier work that assumed value restrictions were predominantly used to establish global domain and range restrictions, and, consequently, in which other applications of value restrictions had a significant risk of requiring an infeasible number of individuals to be examined for a given problem. Finally, experimental results are given that validate the effectiveness of the procedure.

Jiewen Wu, Taras Kinash, David Toman, Grant Weddell

Optimal Hyper-Parameter Search in Support Vector Machines Using Bézier Surfaces

We consider the problem of finding the optimal specification of hyper-parameters in Support Vector Machines (SVMs). We sample the hyper-parameter space and then use Bézier curves to approximate the performance surface. This geometrical approach allows us to use the information provided by the surface to find an optimal specification of the hyper-parameters. Our results show that in most cases the specification found by the proposed algorithm is very close to the actual optimal point(s). The results suggest that our algorithm can serve as a framework for hyper-parameter search that is precise and automatic.

Shinichi Yamada, Kourosh Neshatian, Raazesh Sainudiin
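
The geometric idea can be sketched as follows: sample performance on a grid over the (normalised) hyper-parameter space, smooth it with a tensor-product Bézier surface, and take the surface maximum. Treating the sampled scores directly as control points, and the toy random scores standing in for cross-validated SVM accuracies, are simplifications of the paper's fitting step.

```python
import numpy as np
from scipy.special import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bezier_surface(P, u, v):
    """Evaluate the Bezier surface with control-point grid P at (u, v) in [0,1]^2."""
    n, m = P.shape[0] - 1, P.shape[1] - 1
    Bu = np.array([bernstein(n, i, u) for i in range(n + 1)])
    Bv = np.array([bernstein(m, j, v) for j in range(m + 1)])
    return Bu @ P @ Bv

# P[i, j] = performance measured at grid point (i, j), e.g. over (log C, log gamma).
P = np.random.rand(5, 5)
grid = np.linspace(0, 1, 101)
u_best, v_best = max(((bezier_surface(P, u, v), u, v)
                      for u in grid for v in grid))[1:]
print("best (u, v) on the smoothed surface:", u_best, v_best)
```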

Hierarchical Learning for Emergence of Social Norms in Networked Multiagent Systems

In this paper, a hierarchical learning framework is proposed for emergence of social norms in networked multiagent systems. This framework features a bottom level of agents and several levels of supervisors. Agents in the bottom level interact with each other using reinforcement learning methods, and report their information to their supervisors after each interaction. Supervisors then aggregate the reported information and produce guide policies by exchanging information with other supervisors. The guide policies are then passed down to the subordinate agents in order to adjust their learning behaviors heuristically. Experiments are carried out to explore the efficiency of norm emergence under the proposed framework, and results verify that learning from local interactions integrating hierarchical supervision can be an effective mechanism for emergence of social norms.

Chao Yu, Hongtao Lv, Fenghui Ren, Honglin Bao, Jianye Hao

Information Extraction to Improve Standard Compliance

The Case of Clinical Handover

Clinical handover refers to healthcare workers transferring responsibility and accountability for patient care, e.g., between shifts or wards. Safety and quality health standards call for this process to be systematically structured across the organisation and synchronous with its documentation. This paper evaluates information extraction as a way to help comply with these standards. It implements a handover process of first specifying a structured handover form, whose hierarchy of headings guides the handover narrative, followed by the technology filling the form out objectively and almost instantly for proofing and sign-off. We trained a conditional random field with 8 feature types on 101 expert-annotated documents to perform 36-class classification. This resulted in good generalisation to an independent set of 50 validation and 50 test documents, which we now release to support further research: 77.9 % F1 in filtering out irrelevant information, up to 98.4 % F1 for the 35 classes of relevant information, and 52.9 % F1 after macro-averaging over these 35 classes; the corresponding percentages were 86.2, 100.0, and 70.2 for leave-one-document-out cross-validation across the first set of 101 documents.

Liyuan Zhou, Hanna Suominen
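
A minimal sketch of this kind of sequence labeller, using the sklearn-crfsuite library, is shown below; the toy tokens, labels and features are hypothetical stand-ins for the paper's 8 feature types and 36 handover headings.

```python
import sklearn_crfsuite
from sklearn_crfsuite import metrics

def word_features(sent, i):
    w = sent[i]
    return {
        "lower": w.lower(),
        "is_title": w.istitle(),
        "is_digit": w.isdigit(),
        "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }

# Tiny illustrative corpus: tokens labelled with handover-form headings.
sents = [["Patient", "stable", "overnight"], ["BP", "120", "over", "80"]]
labels = [["PATIENT", "OBSERVATION", "OBSERVATION"],
          ["OBSERVATION", "OBSERVATION", "OBSERVATION", "OBSERVATION"]]

X = [[word_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X, labels)
print(metrics.flat_f1_score(labels, crf.predict(X), average="macro"))
```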

Backmatter
