
2019 | Book

PRICAI 2019: Trends in Artificial Intelligence

16th Pacific Rim International Conference on Artificial Intelligence, Cuvu, Yanuca Island, Fiji, August 26–30, 2019, Proceedings, Part I


About this book

This three-volume set, LNAI 11670, LNAI 11671, and LNAI 11672, constitutes the thoroughly refereed proceedings of the 16th Pacific Rim Conference on Artificial Intelligence, PRICAI 2019, held in Cuvu, Yanuca Island, Fiji, in August 2019.

The 111 full papers and 13 short papers presented in these volumes were carefully reviewed and selected from 265 submissions. PRICAI covers a wide range of topics such as AI theories, technologies and their applications in areas of social and economic importance for countries in the Pacific Rim.

Table of Contents

Frontmatter

Learning

Frontmatter
Explaining Black-Box Models Using Interpretable Surrogates

Explaining black-box machine learning models is important for their successful application to many real-world problems. Existing approaches to model explanation either focus on explaining a particular decision instance or are applicable only to specific models. In this paper, we address these limitations by proposing a new model-agnostic mechanism for black-box model explainability. Our approach can be used to explain the predictions of any black-box machine learning model. Our work uses interpretable surrogate models (e.g. a decision tree) to extract global rules that describe the predictions of a model. We develop an optimization procedure that helps a decision tree mimic a black-box model by efficiently retraining the decision tree in a sequential manner, using the data labeled by the black-box model. We demonstrate the usefulness of our proposed framework using three applications: two classification models, one built using the iris dataset and the other using a synthetic dataset, and a regression model built for a bike sharing dataset.

Deepthi Praveenlal Kuttichira, Sunil Gupta, Cheng Li, Santu Rana, Svetha Venkatesh
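As a rough illustration of the surrogate idea described in the abstract above (fitting an interpretable tree to data relabeled by a black box and reading off global rules), here is a minimal sketch. It is not the authors' sequential retraining procedure; the random forest merely stands in for an arbitrary black-box model.

```python
# Minimal sketch (not the authors' code): a decision tree surrogate is fit on
# data relabeled by a "black-box" model, and its rules serve as a global explanation.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# Stand-in black-box model.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Relabel the data with the black-box predictions and fit an interpretable surrogate.
y_bb = black_box.predict(X)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_bb)

# Fidelity: how often the surrogate reproduces the black-box decision.
fidelity = (surrogate.predict(X) == y_bb).mean()
print(f"surrogate fidelity to black box: {fidelity:.3f}")

# Global rules extracted from the surrogate.
print(export_text(surrogate, feature_names=load_iris().feature_names))
```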
Classifier Learning from Imbalanced Corpus by Autoencoded Over-Sampling

Class imbalance is a common problem in classifier learning, but it is difficult to solve. Textual data are ubiquitous and their analytics have great potential in many applications. In this paper, we propose a solution for building accurate sentiment classifiers from imbalanced textual data. We first establish topic vectors to capture local and global patterns from a corpus. The synthetic minority over-sampling technique is then used to balance the data while avoiding overfitting. However, we found that residual overfitting is still prominent. To address this problem, we propose an autoencoded over-sampling framework to reconstruct balanced datasets. Our extensive experiments on different datasets with various imbalance ratios and numbers of classes show that our approach is sound and effective.

Eunkyung Park, Raymond K. Wong, Victor W. Chu
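The following is a minimal sketch of the over-sampling pipeline the abstract above alludes to, under loose assumptions: synthetic topic-vector data stand in for corpus topic vectors, imbalanced-learn's SMOTE balances the classes, and a small MLP trained to reproduce its input stands in for the paper's autoencoder.

```python
# Minimal sketch (assumptions, not the paper's pipeline): SMOTE over-sampling of an
# imbalanced "topic vector" dataset, followed by an autoencoder-style reconstruction.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPRegressor
from imblearn.over_sampling import SMOTE

# Imbalanced stand-in for topic vectors extracted from a corpus.
X, y = make_classification(n_samples=2000, n_features=50, weights=[0.95, 0.05], random_state=0)
print("class counts before:", np.bincount(y))

# SMOTE balances the classes by interpolating minority samples.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
print("class counts after: ", np.bincount(y_bal))

# Autoencoder-style reconstruction of the balanced set (an MLP trained to reproduce
# its input), intended to smooth the synthetic samples back onto the data manifold.
ae = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X_bal, X_bal)
X_recon = ae.predict(X_bal)
print("reconstruction error:", np.mean((X_recon - X_bal) ** 2))
```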
Semisupervised Cross-Media Retrieval by Distance-Preserving Correlation Learning and Multi-modal Manifold Regularization

Due to the heterogeneous representation and incongruous distribution of cross-media data, such as text, image, audio, video, and 3D models, capturing the correlations of heterogeneous data for cross-media retrieval is a challenging problem. In order to handle multiple media types, this paper proposes a novel distance-preserving correlation learning and multi-modal manifold regularization (DCLMM) approach to exploit the common representation of heterogeneous data. The method mines distance-preserving correlations by minimizing (maximizing) the distances between media samples with positive (negative) semantic correlations, while most existing methods focus only on positive correlations of pairwise media types. DCLMM also utilizes an intrinsic multi-modal manifold to describe the geometric distribution of both labeled and unlabeled heterogeneous cross-media data. Moreover, DCLMM incorporates the distance-preserving correlation and multi-modal manifold into a kernel-based regularization framework to explore richer complementary information from the high-dimensional space. Extensive experimental results on two widely used cross-media datasets with up to five media types demonstrate the effectiveness of DCLMM for cross-media retrieval, compared with state-of-the-art methods.

Ting Wang, Hong Zhang, Bo Li, Xin Xu
Explaining Deep Learning Models with Constrained Adversarial Examples

Machine learning algorithms generally suffer from a problem of explainability. Given a classification result from a model, it is typically hard to determine what caused the decision to be made and to give an informative explanation. We explore a new method of generating counterfactual explanations which, instead of explaining why a particular classification was made, explains how a different outcome can be achieved. This gives the recipients of the explanation a better way to understand the outcome and provides an actionable suggestion. We show that the introduced method of Constrained Adversarial Examples (CADEX) can be used in real-world applications, and yields explanations which incorporate business or domain constraints such as handling categorical attributes and range constraints.

Jonathan Moore, Nils Hammerla, Chris Watkins
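To make the counterfactual idea in the abstract above concrete, here is a minimal sketch of gradient-based counterfactual search with simple domain constraints (feature ranges and an immutable feature). It is an assumed setup, not the CADEX implementation, and the tiny classifier is only a stand-in.

```python
# Minimal sketch (assumed setup, not the CADEX implementation): gradient steps move an
# input toward a target class while range constraints are enforced by clamping and
# immutable features are masked out of the update.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))  # stand-in for a trained classifier

x_orig = torch.tensor([[0.2, 0.7, 0.1, 0.5]])
target = torch.tensor([1])                       # desired (counterfactual) outcome
mutable = torch.tensor([[1.0, 1.0, 0.0, 1.0]])   # third feature may not change (domain constraint)

x = x_orig.clone().requires_grad_(True)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    loss = loss_fn(model(x), target)
    grad, = torch.autograd.grad(loss, x)
    with torch.no_grad():
        x -= 0.05 * grad * mutable               # update only mutable features
        x.clamp_(0.0, 1.0)                       # keep features within their allowed range
    if model(x).argmax(dim=1).item() == target.item():
        break

print("original:      ", x_orig.numpy())
print("counterfactual:", x.detach().numpy())
```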
Time-Guided High-Order Attention Model of Longitudinal Heterogeneous Healthcare Data

Due to potential applications in chronic disease management and personalized healthcare, EHR data analysis has attracted much attention from both researchers and practitioners. There are three main challenges in modeling longitudinal and heterogeneous EHR data: heterogeneity, irregular temporality and interpretability. A series of deep learning methods have made remarkable progress in resolving these challenges. Nevertheless, most existing attention models only capture 1-order temporal dependencies or 2-order multimodal relationships among feature elements. In this paper, we propose a time-guided high-order attention (TGHOA) model. The proposed method has three major advantages. (1) It can model longitudinal heterogeneous EHR data by capturing the 3-order correlations of different modalities and the irregular temporal impact of historical events. (2) It can be used to identify the influential medical features, explaining the reasoning process of the healthcare model. (3) It can be easily extended to cases with more modalities and flexibly applied to different prediction tasks. We evaluate the proposed method on two tasks, mortality prediction and disease ranking, on two real-world EHR datasets. Extensive experimental results show the effectiveness of the proposed model.

Yi Huang, Xiaoshan Yang, Changsheng Xu
Towards Understanding Classification and Identification

The paper focuses on two pivotal cognitive functions of both natural and AI agents, namely classification and identification. Inspired by the theory of teleosemantics, itself based on neuroscientific results, we show that these two functions are complementary and rely on distinct forms of knowledge representation. We provide a new perspective on well-known AI techniques by categorising them as either classificational or identificational. Our proposed Teleo-KR architecture provides a high-level framework for combining the two functions within a single AI system. As validation and demonstration on a concrete application, we provide experiments on the large-scale reuse of classificational (ontological) knowledge for the purposes of learning-based schema identification.

Mattia Fumagalli, Gábor Bella, Fausto Giunchiglia
What Prize Is Right? How to Learn the Optimal Structure for Crowdsourcing Contests

In crowdsourcing, one effective method for encouraging participants to perform tasks is to run contests where participants compete against each other for rewards. However, there are numerous ways to implement such contests in specific projects. They could vary in their structure (e.g., performance evaluation and the number of prizes) and parameters (e.g., the maximum number of participants and the amount of prize money). Additionally, with a given budget and a time limit, choosing incentives (i.e., contest structures with specific parameter values) that maximise the overall utility is not trivial, as their respective effectiveness in a specific project is usually unknown a priori. Thus, in this paper, we propose a novel algorithm, BOIS (Bayesian-optimisation-based incentive selection), to learn the optimal structure and tune its parameters effectively. In detail, the learning and tuning problems are solved simultaneously by using online learning in combination with Bayesian optimisation. The results of our extensive simulations show that the performance of our algorithm is up to 85% of the optimal and up to 63% better than state-of-the-art benchmarks.

Nhat Van-Quoc Truong, Sebastian Stein, Long Tran-Thanh, Nicholas R. Jennings
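For readers unfamiliar with Bayesian optimisation in this setting, the sketch below shows a generic loop over a single contest parameter (prize amount). It is not the BOIS algorithm; the `run_contest` simulator and the upper-confidence-bound rule are assumptions for illustration only.

```python
# Minimal sketch (a generic Bayesian-optimisation loop, not BOIS): a Gaussian process
# models contest utility as a function of prize amount, and a UCB acquisition rule
# picks the next prize value to try. `run_contest` is a hypothetical simulator.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def run_contest(prize):
    # Hypothetical noisy utility of running a contest with this prize value.
    return -(prize - 3.7) ** 2 + 10 + rng.normal(scale=0.3)

candidates = np.linspace(0.0, 10.0, 200).reshape(-1, 1)
X_tried = [[1.0], [9.0]]                     # initial designs
y_seen = [run_contest(x[0]) for x in X_tried]

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_tried, y_seen)
    mu, sigma = gp.predict(candidates, return_std=True)
    ucb = mu + 2.0 * sigma                   # upper confidence bound acquisition
    x_next = candidates[int(np.argmax(ucb))]
    X_tried.append(list(x_next))
    y_seen.append(run_contest(x_next[0]))

best = X_tried[int(np.argmax(y_seen))]
print("best prize found:", best, "utility:", max(y_seen))
```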
Simple Is Better: A Global Semantic Consistency Based End-to-End Framework for Effective Zero-Shot Learning

In image recognition, there are many cases where training samples cannot cover all target classes. Zero-shot learning (ZSL) addresses such cases by classifying samples of unseen categories, which have no corresponding samples in the training set, via class semantic information. In this paper, we propose a novel and simple end-to-end framework, called the Global Semantic Consistency Network (GSC-Net for short), which makes complete use of the semantic information of both seen and unseen classes to support effective zero-shot learning. We also employ a soft label embedding loss to further exploit the semantic relationships among classes, and use a seen-class weight regularization to balance attribute learning. Moreover, to adapt GSC-Net to the setting of Generalized Zero-Shot Learning (GZSL), we introduce a parametric novelty detection mechanism. Experiments on all three widely used ZSL datasets show that GSC-Net performs better than most existing methods under both the ZSL and GZSL settings. In particular, GSC-Net achieves state-of-the-art performance on two datasets (AWA2 and CUB). We explain the effectiveness of GSC-Net from the perspectives of class attribute learning and visual feature learning, and find that the validation accuracy on seen classes can serve as an indicator of ZSL performance.

Fan Wu, Shuigeng Zhou, Kang Wang, Yi Xu, Jihong Guan, Jun Huan
A Reinforcement Learning Approach to Gaining Social Capital with Partial Observation

Social capital brings individuals benefits and advantages in societies. In this paper, we formalize two types of social capital: bonding capital refers to links to neighbours, while bridging capital refers to brokerages between others. We ask the question: how would a marginal individual gain social capital with imperfect information about the society? We formalize this issue as the partially observable network building problem and propose two reinforcement learning algorithms: one guarantees convergence to optimal values in theory, while the other is efficient in practice. We conduct simulations over a real-world dataset, and the experimental results coincide with our theoretical analysis.

He Zhao, Hongyi Su, Yang Chen, Jiamou Liu, Hong Zheng, Bo Yan

Knowledge Handling

Frontmatter
Emotion Recognition from Music Enhanced by Domain Knowledge

Music elements have been widely used to influence audiences' emotional experience through music grammar. However, this domain knowledge has not been thoroughly explored for music emotion analysis in previous work. In this paper, we propose a novel method to analyze music emotion by utilizing the domain knowledge of music elements. Specifically, we first summarize the domain knowledge of music elements and infer probabilistic dependencies between the main musical elements and emotions from the summarized music theory. Then, we translate the domain knowledge into constraints and formulate affective music analysis as a constrained optimization problem. Experimental results on the Music in 2015 database and the AMG1608 database demonstrate that the proposed music content analysis method outperforms state-of-the-art prediction methods.

Yangyang Shu, Guandong Xu
A Better Understanding of the Interaction Between Users and Items by Knowledge Graph Learning for Temporal Recommendation

Recently, knowledge graphs (KGs) have been widely used as extra auxiliary information to improve recommendation. Existing methods usually treat knowledge representations as characteristic information for addressing data sparsity and cold-start issues. However, they ignore the implicit and explicit interactions between users and items, which may be obtained by relation extraction and knowledge reasoning, leading to suboptimal performance. Thus, we believe that it is crucial to incorporate both relations and attributes of users and items into the recommender system, which can better capture the extent to which a user prefers an item. In this paper, we propose a novel knowledge graph-based temporal recommendation (KGTR) model. Firstly, we design a lightweight KG on the basis of a single independent domain's knowledge without extra supplements. We define three relationships to express interactions within and between users and items: the interaction of a user browsing an item, the social relation of two users browsing the same item, and the behavior of a user browsing several items at the same time. Different from previous knowledge-translation-based recommendation methods, we embed interactions by adding them to the transformation from one entity to another in the KG. Extensive experiments on a real-world dataset show that our KGTR outperforms several state-of-the-art recommendation methods.

Chunjing Xiao, Cong Xie, Shuyan Cao, Yuxiang Zhang, Wei Fan, Hongjun Heng
Knowledge-Aware and Retrieval-Based Models for Distantly Supervised Relation Extraction

Distantly supervised relation extraction (RE) has been an effective way to find novel relational facts from text without a large amount of well-labeled training data. However, distant supervision always suffers from the wrong labelling problem. Many neural approaches have been proposed to alleviate this problem recently, but none of them can make use of the rich semantic knowledge in knowledge bases (KBs). In this paper, we propose a knowledge-aware attention model, which can leverage the semantic knowledge in the KB to select valid sentences. Furthermore, based on knowledge representation learning (KRL), we formalize distantly supervised RE as relation retrieval instead of relation classification to leverage the semantic knowledge further. Experimental results on widely used datasets show that our approaches significantly outperform the popular benchmark methods.

Xuemiao Zhang, Kejun Deng, Leilei Zhang, Zhouxing Tan, Junfei Liu
Two-Stage Entity Alignment: Combining Hybrid Knowledge Graph Embedding with Similarity-Based Relation Alignment

Entity alignment aims to automatically determine whether an entity pair in different knowledge graphs refers to the same real-world entity. Existing entity alignment methods can be classified into two categories: string-similarity-based methods and embedding-based methods. String-similarity-based methods have higher accuracy; however, they may have difficulty dealing with literal heterogeneity, i.e., an entity pair in diverse forms. Although embedding-based entity alignment can deal with literal heterogeneity, it suffers from higher time complexity and lower accuracy. Moreover, limitations and challenges remain because existing embedding methods use only the structure information of triples. Therefore, in this study, we propose a two-stage entity alignment framework that combines the advantages of both methods. In addition, to enhance the embedding performance, a hybrid knowledge graph embedding model with both fact triples and logical rules is introduced for entity alignment. Experimental results on two real-world datasets show that the proposed method is significantly better than state-of-the-art embedding-based entity alignment methods.

Tingting Jiang, Chenyang Bu, Yi Zhu, Xindong Wu
A Neural User Preference Modeling Framework for Recommendation Based on Knowledge Graph

To address the data sparsity and cold-start problems of traditional recommender systems, many researchers aim to incorporate knowledge graphs (KGs) into recommender systems to enhance recommendation performance. However, existing efforts mainly rely on hand-engineered features from the KG (e.g., meta paths), which require domain knowledge. Moreover, as relations are usually excluded from meta paths, they hardly capture the holistic semantics of paths. To address the limitations of existing methods, we propose an end-to-end neural user preference modeling framework (UPM) that incorporates the entity and relation features of a KG into the representations of users and items, so as to learn user latent interests precisely. Specifically, UPM first propagates a user's interests along links between entities in the KG iteratively to learn the user's potential preferences for an item. Furthermore, these preference features change dynamically during the propagation process; that is, their importance for characterizing the user differs. Therefore, an attention network is used in UPM to calculate the influence of preference features at different propagation stages, and the final preference vector of the user is computed from the preference features and the corresponding weights. Lastly, the final prediction probability of user-item interaction is obtained by an inner product between the embeddings of the item and the user. Extensive experiments on two real-world datasets demonstrate significant performance improvements over state-of-the-art methods.

Guiming Zhu, Chenzhong Bin, Tianlong Gu, Liang Chang, Yanpeng Sun, Wei Chen, Zhonghao Jia
Jointing Knowledge Graph and Neural Network for Top-N Recommendation

Currently, neural networks attract much attention and show great potential in recommendation systems. Existing works mainly aim at leveraging neural networks to model the nonlinear representations of users and items. However, they only use the historical user-item interaction sequence to learn the latent features of users and items, while ignoring the rich self-attributes of items. Recent methods utilize knowledge graphs as auxiliary information to learn the latent features of users and items, but they fail to represent the relevance and similarity of attributes among items. Based on this observation, we propose a novel model named JKN that incorporates a knowledge graph and a neural network for item recommendation. The key point of JKN is to learn accurate latent representations of item attributes through the knowledge graph, and then to integrate them into a feedforward neural network to model user-item interactions nonlinearly. Empirical results on a real-world dataset demonstrate the superior performance of our model in the top-N recommendation task.

Wei Chen, Liang Chang, Chenzhong Bin, Tianlong Gu, Zhonghao Jia
A Novel Genetic Programming Algorithm with Knowledge Transfer for Uncertain Capacitated Arc Routing Problem

The Uncertain Capacitated Arc Routing Problem (UCARP) is a challenging optimization problem. Genetic Programming (GP) has been successfully applied to train routing policies (heuristics that make decisions in real time rather than producing a fixed solution) to respond to uncertain environments effectively. However, the effectiveness of a routing policy is scenario dependent, and it takes time to train a new routing policy for each scenario. In this paper, we investigate GP with knowledge transfer to improve training efficiency by reusing useful knowledge from previously solved related scenarios. We propose a novel knowledge transfer approach, and our experimental results show that it obtains significantly higher training efficiency than existing GP knowledge transfer methods and the vanilla training process without knowledge transfer.

Mazhar Ansari Ardeh, Yi Mei, Mengjie Zhang

Image Recognition and Manipulation

Frontmatter
Multi-label Recognition of Paintings with Cascaded Attention Network

Convolutional neural networks (CNNs) have demonstrated advanced performance on image multi-label classification. However, recognizing the labels of paintings is still a challenging problem due to the huge collection and labeling cost of painting training sets. Inspired by the similarity between natural images and painting images, we propose an approach based on progressive learning to solve this issue using only a few labeled paintings. In addition, we set up an effective framework built upon visual cascaded attention for multi-label image classification. Different from existing approaches, the proposed model extracts and integrates multi-scale features to learn discriminative feature representations, which are then fed to a class-wise attention module with a simple scheme. Experimental results on the challenging benchmark MS-COCO dataset show that our proposed model achieves the best performance compared to state-of-the-art models. We also demonstrate the effectiveness of the model on our constructed painting testing datasets (the datasets will be made publicly available soon).

Yue Li, Tingting Wang, Guangwei Huang, Xiaojun Tang
Stacked Mixed-Scale Networks for Human Pose Estimation

Human pose estimation is an important problem in computer vision, which has been dominated by deep learning techniques in recent years. In this paper, we propose a novel model, named the Mixed-Scale Dense Block, that exploits dilated convolution layers and dense concatenation connections to maximise the information flow through the block. Consequently, it captures feature representations at different scales more effectively and efficiently. Compared with the baseline Hourglass models, our model employs fewer learnable parameters. Nevertheless, experiments demonstrate that the proposed model produces more accurate predictions. Meanwhile, our method achieves accuracy comparable to state-of-the-art techniques and performs better on some indicators. In addition, this model is easy to implement and could be improved by most existing techniques that have been adopted to enhance hourglass models.

Xuan Wang, Zhi Li, Yanan Chen, Peilin Jiang, Fei Wang
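The block described above combines dilated convolutions with dense concatenation; the sketch below is one assumed reading of that idea (parallel dilated branches whose outputs are concatenated with the input), not the authors' exact architecture.

```python
# Minimal sketch (an assumed reading of the abstract, not the authors' architecture):
# parallel dilated convolutions capture several scales, and their outputs are densely
# concatenated with the block input.
import torch
import torch.nn as nn

class MixedScaleDenseBlock(nn.Module):
    def __init__(self, in_channels, growth=16, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, growth, kernel_size=3, padding=d, dilation=d),
                nn.BatchNorm2d(growth),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])

    def forward(self, x):
        # Dense concatenation: the input plus every dilated branch, preserving spatial size.
        return torch.cat([x] + [branch(x) for branch in self.branches], dim=1)

block = MixedScaleDenseBlock(in_channels=64)
feats = block(torch.randn(1, 64, 64, 64))
print(feats.shape)  # torch.Size([1, 64 + 3 * 16, 64, 64])
```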
MIDCN: A Multiple Instance Deep Convolutional Network for Image Classification

In image classification, images collected in the wild usually contain multiple objects instead of a single dominant one. Besides, the image label is not explicitly associated with object regions, i.e., it is weakly annotated. In this paper, we propose a novel deep convolutional network for image classification under a weakly supervised condition. The proposed method, namely MIDCN, formulates the problem as Multiple Instance Learning (MIL), where each image is a bag containing multiple instances (objects). Different from previous deep MIL methods, which predict the label of each bag (i.e., image) by simply performing a pooling/voting strategy over their instance (i.e., region) predictions, MIDCN directly predicts the label of a bag via bag features learned by measuring the similarities between instance features and a set of learned informative prototypes. Specifically, the prototypes are obtained by a newly proposed Global Contrast Pooling (GCP) layer which leverages instances coming not only from the current bag but also from the other bags. Thus the learned bag features also contain global information about all the training bags, which is more robust and noise free. We conduct extensive experiments on two real-world image datasets, a natural image dataset (PASCAL VOC 07) and a pathological lung cancer image dataset, and show that the proposed MIDCN consistently outperforms state-of-the-art methods.

Kelei He, Jing Huo, Yinghuan Shi, Yang Gao, Dinggang Shen
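The abstract above builds bag features from similarities between instance features and learned prototypes. Below is a generic prototype-similarity pooling layer in that spirit; it is not the paper's GCP layer (which also draws prototypes from other bags), and all dimensions are illustrative.

```python
# Minimal sketch (generic prototype-similarity pooling, not the paper's GCP layer): a bag
# feature is built from cosine similarities between instance features and a set of
# learnable prototypes, max-pooled over the instances in the bag.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeBagPooling(nn.Module):
    def __init__(self, feat_dim=128, n_prototypes=8, n_classes=2):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, feat_dim))
        self.classifier = nn.Linear(n_prototypes, n_classes)

    def forward(self, instances):                    # instances: (n_instances, feat_dim)
        sims = F.normalize(instances, dim=1) @ F.normalize(self.prototypes, dim=1).T
        bag_feature = sims.max(dim=0).values         # (n_prototypes,) pooled over instances
        return self.classifier(bag_feature)          # bag-level logits

bag = torch.randn(12, 128)                           # one bag with 12 instance features
print(PrototypeBagPooling()(bag))
```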
Discriminative Deep Attention-Aware Hashing for Face Image Retrieval

Although the power of hashing methods has been proven in image retrieval, they cannot effectively extract discriminative features for face image retrieval, as the discriminative differences in face regions are subtle and the background information interferes with the feature expression. To solve this problem, we propose an end-to-end deep hashing method with attention mechanisms to learn discriminative hash codes. Specifically, a face spatial network is designed to enhance the discrimination of face features from the spatial aspect. With a specially designed face spatial loss, it can automatically mine differentiated facial regions and reduce the interference of background information. Furthermore, an attention-aware hash network, in which facial features are enhanced by a fusing strategy and a channel attention module, is designed to learn compact and discriminative hash codes. Experimental results on two widely used datasets demonstrate inspiring performance compared with several state-of-the-art hashing methods.

Zhi Xiong, Bo Li, Xiaoyan Gu, Wen Gu, Weiping Wang
Texture Deformation Based Generative Adversarial Networks for Multi-domain Face Editing

Despite the significant success in image-to-image translation and latent-representation-based facial attribute editing and expression synthesis, existing approaches still have limitations in preserving identity and the sharpness of details, and in generating distinct image translations. To address these issues, we propose a Texture Deformation Based GAN, namely TDB-GAN, to disentangle texture from the original image. The disentangled texture is used to transfer facial attributes and expressions before deformation to the target shape and pose. Sharper details and more distinct visual effects are observed in the synthesized faces. In addition, it brings faster convergence during training. In extensive ablation studies, we also evaluate our method qualitatively and quantitatively on facial attribute and expression synthesis. The results on both the CelebA and RaFD datasets suggest that TDB-GAN achieves better performance.

Wenting Chen, Xinpeng Xie, Xi Jia, Linlin Shen
Towards Generating Stylized Image Captions via Adversarial Training

While most image captioning aims to generate objective descriptions of images, the last few years have seen work on generating visually grounded image captions which have a specific style (e.g., incorporating positive or negative sentiment). However, because the stylistic component is typically the last part of training, current models usually pay more attention to the style at the expense of accurate content description. In addition, there is a lack of variability in terms of the stylistic aspects. To address these issues, we propose an image captioning model called ATTEND-GAN which has two core components: first, an attention-based caption generator to strongly correlate different parts of an image with different parts of a caption; and second, an adversarial training mechanism to assist the caption generator to add diverse stylistic components to the generated captions. Because of these components, ATTEND-GAN can generate correlated captions as well as more human-like variability of stylistic patterns. Our system outperforms the state-of-the-art as well as a collection of our baseline models. A linguistic analysis of the generated captions demonstrates that captions generated using ATTEND-GAN have a wider range of stylistic adjectives and adjective-noun pairs.

Omid Mohamad Nezami, Mark Dras, Stephen Wan, Cécile Paris, Len Hamey

Language and Speech

Frontmatter
Keywords-Based Auxiliary Information Network for Abstractive Summarization

Automatic text summarization is an important research task in the field of natural language processing (NLP). The abstractive approach to automatic text summarization produces a condensed version of the source text by generating new words and phrases. Recently, attentional sequence-to-sequence models have shown good ability in abstractive text summarization. Nevertheless, these neural network models still struggle to cover most key points of the source text and may produce unfactual details. To address these issues, we propose a keywords-based auxiliary information model to guide the encoding and decoding process. First, we propose an auxiliary information network based on the keywords of the document, which aims to generate a modified encoded representation. In addition, we design a novel selective beam search mechanism to keep more keywords and reduce redundancy in the decoded summaries. We evaluate our model on different datasets including the benchmark CNN/Daily Mail dataset. The experimental results show that our model leads to significant improvements compared with abstractive baseline models.

Haihan Wang, Jinlong Li, Xuewen Chen
Training with Additional Semantic Constraints for Enhancing Neural Machine Translation

Replacing the traditional cross-entropy loss with BLEU as the optimization objective is a successful application of reinforcement learning (RL) in neural machine translation (NMT). However, a considerable weakness of the approach is that the monotonic optimization of BLEU during training ignores the semantic fluency of the translation; one symptom is an incomprehensible translation accompanied by an ideal BLEU score. In addition, sampling inefficiency, a common shortcoming of RL, is more prominent in NMT. In this study, we address these issues in two ways. (1) We use an annealing schedule algorithm to add semantic evaluation to reinforcement training as part of the training objective. (2) We further attach a value iteration network to RL to transform the reward into a decision value, thereby making model training highly targeted and efficient. We apply our approach to three representative machine translation tasks: low-resource Mongolian-Chinese, agglutinative Japanese-English, and the common English-Chinese task. Experiments show that our approach achieves significant improvements over strong baselines and also saves nearly one-third of the training time on different tasks.

Yatu Ji, Hongxu Hou, Junjie Chen, Nier Wu
Automatic Acrostic Couplet Generation with Three-Stage Neural Network Pipelines

As one of the quintessences of Chinese traditional culture, a couplet comprises two syntactically symmetric clauses equal in length, namely an antecedent and a subsequent clause. Moreover, corresponding characters and phrases at the same positions of the two clauses are paired with each other under certain constraints of semantic and/or syntactic relatedness. Automatic couplet generation is recognized as a challenging problem even in the Artificial Intelligence field. In this paper, we comprehensively study the automatic generation of acrostic couplets with the first characters defined by users. The complete couplet generation is mainly divided into three stages: an antecedent clause generation pipeline, a subsequent clause generation pipeline and a clause re-ranker. To realize semantic and/or syntactic relatedness between the two clauses, an attention-based Sequence-to-Sequence (S2S) neural network is employed. Moreover, to provide diverse couplet candidates for re-ranking, a cluster-based beam search approach is incorporated into the S2S network. Both BLEU metrics and human judgments demonstrate the effectiveness of our proposed method. Eventually, a mini-program based on this generation system was developed and deployed on WeChat for real users.

Haoshen Fan, Jie Wang, Bojin Zhuang, Shaojun Wang, Jing Xiao
Named Entity Recognition with Homophones-Noisy Data

General named entity recognition systems focus exclusively on higher accuracy and disregard dirty data. However, raw source data pose serious challenges, especially data originating from automatic speech recognition systems. In this paper, we propose a Pinyin Hierarchical Attention Encoder-Decoder network and a Character Alternate Network to overcome the problems caused by Chinese homophones, which frequently frustrate researchers in subsequent Natural Language Understanding (NLU). (Pinyin is the official romanization system for Standard Chinese; each Chinese character has its own pinyin sequence composed of the Latin alphabet.) Our models use a word-segmentation-free structure to effectively avoid secondary data corruption and adequately extract words' internal features. Besides, corrupted sequences can be revised by the character-level network. Evaluation demonstrates that our proposed method achieves a 93.73% F1 score, higher than the 90.97% F1 score of baseline models on a homophone-noisy dataset. Additional experiments show equivalent results on a universal dataset.

Zhicheng Liu, Gang Wu
Robust Sentence Classification by Solving Out-of-Vocabulary Problem with Auxiliary Word Predictor

In recent years, deep learning methods have achieved outstanding performance in sentence classification. However, many sentence classification models do not consider the out-of-vocabulary (OOV) problem, which generally appears in sentence classification tasks. Input units smaller than words, such as characters or subword units, have been considered as the basic unit for sentence classification to cope with the OOV problem. Although this approach naturally solves the OOV problem, it has obvious performance limitations because a character by itself has no meaning, whereas a word has a definite meaning. In this paper, we propose a neural sentence classification model that is robust to the OOV problem even though it utilizes words as the basic unit. To this end, we introduce the unknown word prediction (UWP) task as an auxiliary task to train the proposed model. Owing to joint training of the proposed model with the objectives of classification and UWP, the proposed model can represent the meanings of entire sentences robustly even if a sentence includes a number of unseen words. To demonstrate the effectiveness of the proposed model, a number of experiments are conducted on several sentence classification benchmarks. The proposed model consistently outperforms two baselines on all four benchmark datasets in terms of classification accuracy.

Sang-Seok Park, Yunseok Noh, Seyoung Park, Seong-Bae Park
Effective Representation for Easy-First Dependency Parsing

Easy-first parsing relies on subtree re-ranking to build the complete parse tree. Since the intermediate state of the parsing process is represented by various subtrees, whose internal structural information is a key cue for later parsing decisions, we explore a better representation for such subtrees. In detail, this work introduces a bottom-up subtree encoding method based on the child-sum tree-LSTM. Starting from an easy-first dependency parser without other handcrafted features, we show that the effective subtree encoder does promote the parsing process, and can make a greedy-search easy-first parser achieve promising results on benchmark treebanks compared to state-of-the-art baselines. Furthermore, with the help of a current pre-trained language model, we further improve the state-of-the-art results of the easy-first approach.

Zuchao Li, Jiaxun Cai, Hai Zhao
HRCR: Hidden Markov-Based Reinforcement to Reduce Churn in Question Answering Forums

The high rate of churning users who abandon Community Question Answering forums (CQAs) is one of the crucial issues that hinder their development. More personalized question recommendation might help to manage this problem better. In this paper, we propose a new algorithm (named HRCR) that recommends questions to users so as to reduce their churning probability. Our algorithm has a two-fold structure: first, we use Hidden Markov Models (HMMs) to uncover users' engagement states inside a CQA; second, we apply a Reinforcement Learning (RL) model to recommend to users the questions that better match their engagement mood and thus help them move into a better engagement state (the one with the lowest churning probability). Experiments on a large-scale offline dataset from Stack Overflow show a meaningful reduction in the churning probability of users who comply with HRCR's question recommendations.

Reza Hadi Mogavi, Sujit Gujar, Xiaojuan Ma, Pan Hui
Boosting Variational Generative Model via Condition Enhancing and Lexical-Editing

The Conditional Variational Autoencoder (CVAE) has shown promising performance in text generation. However, CVAE is inadequate for generating sentences that are highly coherent with their condition, due to error accumulation in decoding and the KL-vanishing problem. In this paper, we propose an Edit-CVAE (ECVAE) in which we exploit information-related data to address this problem by (1) explicitly editing the generated sentence and (2) enriching the latent representation. Experimental results on dialogue and Chinese poetry generation show that our method substantially increases generative coherence while maintaining diversity and information consistency.

Zhengwei Tao, Waiman Si, Juntao Li, Dongyan Zhao, Rui Yan
Noise-Based Adversarial Training for Enhancing Agglutinative Neural Machine Translation

This study addresses the unknown (UNK) word problem in machine translation of agglutinative languages in two ways. (1) A multi-granularity preprocessing based on morphological segmentation is used for the input of a generative adversarial net. (2) A filtering mechanism is further used to identify the most suitable granularity for the current input sequence. The experimental results show that our approach achieves significant improvement on two representative agglutinative-language machine translation tasks, Mongolian→Chinese and Japanese→English.

Yatu Ji, Hongxu Hou, Junjie Chen, Nier Wu
Concept Mining in Online Forums Using Self-corpus-Based Augmented Text Clustering

This paper proposes a self-corpus-based text augmentation technique with clustering for concept mining in discussion forums. Sparseness in text data, which challenges the distance and density measures used to determine the concepts in a corpus, is handled through self-corpus-based document expansion via matrix factorization. Experiments with a real-world dataset show that the proposed method is able to infer useful concepts.

Wathsala Anupama Mohotti, Darren Christopher Lukas, Richi Nayak
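One plausible reading of "document expansion via matrix factorization" from the abstract above is sketched below: each sparse document vector is blended with its NMF reconstruction from the same corpus before clustering. This is an illustrative interpretation, not the authors' exact method.

```python
# Minimal sketch (one plausible reading, not the authors' method): NMF-based document
# expansion over the corpus itself, followed by clustering to obtain concepts.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

posts = fetch_20newsgroups(subset="train",
                           categories=["sci.space", "rec.sport.hockey", "talk.politics.misc"],
                           remove=("headers", "footers", "quotes")).data

tfidf = TfidfVectorizer(max_features=3000, stop_words="english")
X = tfidf.fit_transform(posts)

# Matrix-factorization-based expansion: reconstruct each document from corpus topics
# and blend the reconstruction back into the sparse representation.
nmf = NMF(n_components=20, random_state=0)
W = nmf.fit_transform(X)
X_aug = 0.5 * X.toarray() + 0.5 * (W @ nmf.components_)

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_aug)
print(clusters[:20])
```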

Knowledge Representation and Reasoning

Frontmatter
Rational Inference Patterns

Understanding, formalizing and modelling human reasoning is a core topic of artificial intelligence. In psychology, numerous fallacies and paradoxes have shown that classical logic is not a suitable logical framework for this. In a recent paper, Eichhorn, Kern-Isberner, and Ragni succeeded in resolving paradoxes and modelling human reasoning consistently in a non-monotonic resp. conditional logic environment with so-called inference patterns. For further studies using inference patterns, however, it is mandatory to understand better how inference patterns are triggered by the characteristics of the specific examples used in empirical tests. The goal of this paper is to categorize empirical tasks by formal inference patterns and then find crucial features of the corresponding reasoning tasks in such a way that they can be used to predict the reasoning of human subjects on the task. To this end, a large number of psychological studies on human reasoning from the literature were investigated and classified according to the observed inference patterns. From this classification, we learnt a decision tree revealing which features of empirical tasks lead to which inference pattern in most cases. These results provide insights into the reasoning modes of humans, which is important for choosing the right formal model, and help set up proper tasks for testing inference patterns.

Lars-Phillip Spiegel, Gabriele Kern-Isberner, Marco Ragni
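The classification-then-decision-tree step described above can be sketched generically as follows. The task features and pattern labels here are entirely hypothetical placeholders (they are not the study's coding scheme or data); the sketch only shows the shape of the learning step.

```python
# Minimal sketch (hypothetical features and labels, not the study's data): a decision tree
# is learned from coded task characteristics to predict which inference pattern human
# subjects tend to follow.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row codes one reasoning task: [abstract_material, negation_present, causal_conditional]
tasks = [
    [1, 0, 0], [1, 1, 0], [0, 0, 1], [0, 1, 1],
    [1, 0, 1], [0, 0, 0], [0, 1, 0], [1, 1, 1],
]
patterns = ["MP", "MP", "DA", "AC", "MP", "MT", "DA", "AC"]  # assumed dominant inference patterns

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(tasks, patterns)
print(export_text(tree, feature_names=["abstract_material", "negation_present", "causal_conditional"]))
```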
Harnessing Higher-Order (Meta-)Logic to Represent and Reason with Complex Ethical Theories

The computer-mechanization of an ambitious explicit ethical theory, Gewirth’s Principle of Generic Consistency, is used to showcase an approach for representing and reasoning with ethical theories exhibiting complex logical features like alethic and deontic modalities, indexicals, higher-order quantification, among others. Harnessing the high expressive power of Church’s type theory as a meta-logic to semantically embed a combination of quantified non-classical logics, our work pushes existing boundaries in knowledge representation and reasoning. We demonstrate that intuitive encodings of complex ethical theories and their automation on the computer are no longer antipodes.

David Fuenmayor, Christoph Benzmüller
Aleatoric Dynamic Epistemic Logic for Learning Agents

We propose a generalisation of dynamic epistemic logic, where propositions are aleatoric: that is, rather than having true/false values, propositions have odds of being true. Agents in such a system suppose a probability distribution of possible worlds, and based on observations are able to refine this probability distribution to match their observations. We demonstrate this logic with respect to some games of chance.

Tim French, Andrew Gozzard, Mark Reynolds
Mastering Uncertainty: Towards Robust Multistage Optimization with Decision Dependent Uncertainty

We investigate, as a special case of robust optimization, integer linear programs with variables being either existentially or universally quantified. They can be interpreted as two-person zero-sum games between an existential and a universal player. In this setting the existential player must ensure the fulfillment of a system of linear constraints, while the universal variables can range within given intervals, trying to make the fulfillment impossible. We extend this approach by adding a linear constraint system the universal player must obey. Consequently, existential and universal variable assignments in early decision stages now can restrain possible universal variable assignments later on and vice versa resulting in a multistage optimization problem with decision dependent uncertainty. We present novel insights in structure and complexity.

Michael Hartisch, Ulf Lorenz
Belief Change Properties of Forgetting Operations over Ranking Functions

Intentional forgetting means deliberately giving up information; it is a crucial part of change or consolidation processes and of making knowledge more compact. Two well-known forgetting operations are contraction in the AGM theory of belief change, and various types of variable elimination in logic programming. While previous work dealt with postulates inspired by logic programming, in this paper we focus on evaluating forgetting in epistemic states according to postulates coming from AGM belief change theory. We consider different forms of contraction, marginalization, and conditionalization as major representatives of forgetting operators to be evaluated. We use Spohn's ranking functions as a common semantic base to show that all operations can be realized in one logical framework, thereby exploring the richness of forgetting operations in a comparable way.

Gabriele Kern-Isberner, Tanja Bock, Kai Sauerwald, Christoph Beierle
Identity Resolution in Ontology Based Data Access to Structured Data Sources

Earlier work has proposed a notion of referring expressions and types in first order knowledge bases as a way of more effectively answering conjunctive queries in ontology based data access (OBDA). We consider how PTIME description logics can be combined with referring expressions to provide a more effective virtual front-end to nested relational data sources via OBDA. In particular, we consider replacing the standard notion of an assertion box, or ABox, with a more general notion of a concept box, or CBox, and show how this can serve as a front-end to such data sources.

David Toman, Grant Weddell
SPARQL Queries over Ontologies Under the Fixed-Domain Semantics

Fixed-domain reasoning over OWL ontologies is adequate in certain closed-world scenarios and has been shown to be both useful and feasible in practice. However, the reasoning modes hitherto supported by available tools do not include querying. We provide the formal foundations of querying under the fixed domain semantics, based on the principle of certain answers, and show how fixed-domain querying can be incorporated in existing reasoning methods using answer set programming (ASP).

Sebastian Rudolph, Lukas Schweizer, Zhihao Yao
Augmenting an Answer Set Based Controlled Natural Language with Temporal Expressions

In this paper we discuss how we can augment an existing controlled natural language in a systematic way with temporal expressions in order to write high-level temporal specifications which require reasoning about action and change. We show that domain-dependent axioms which are necessary to specify time-varying properties, deal with the commonsense law of inertia, and with continuous change can be expressed directly and in a transparent way on the level of the controlled natural language. The resulting temporal specification including the corresponding axioms and the required terminological knowledge can be translated automatically into an executable answer set program and then be used by a linguistically motivated version of the Event Calculus, implemented as an answer set program, for temporal reasoning and question answering.

Rolf Schwitter
Predictive Systems: The Game Rock-Paper-Scissors as an Example

In simple decision-making scenarios such as repeated two-person games, human behavior is to some extent predictable. To investigate this research question, we focused on developing a system for the Rock-Paper-Scissors (RPS) game. Our approach included three steps: (i) generating a large database of experimental data, (ii) analyzing the data to detect systematic patterns and deviations from rational behavior among the test persons, and (iii) employing methods from machine learning to identify patterns and predict the next throw of the opponent. We identified as the best current approach a Gated Recurrent Unit using user statistics, which is able to predict the next throw and hence win in about 50% of the cases, beating state-of-the-art approaches. Potentials and limitations of our approach are discussed.

Mathias Zink, Paulina Friemann, Marco Ragni
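As a toy illustration of the prediction step named in the abstract above, the sketch below trains a plain GRU to predict the next throw from a window of past throws. It is not the paper's model (no user statistics), and the "opponent" is a synthetic habit-based player.

```python
# Minimal sketch (a plain GRU next-throw classifier on synthetic data, not the paper's
# model): given a window of past throws, predict the opponent's next throw.
import torch
import torch.nn as nn

torch.manual_seed(0)
ROCK, PAPER, SCISSORS = 0, 1, 2

class ThrowPredictor(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(3, 8)
        self.gru = nn.GRU(8, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 3)

    def forward(self, seq):                 # seq: (batch, window) of throw ids
        _, h = self.gru(self.embed(seq))
        return self.out(h[-1])              # logits over the next throw

# Synthetic opponent with a simple habit: tends to repeat the previous throw.
def synth_games(n, window=5):
    seqs = torch.randint(0, 3, (n, window))
    nxt = seqs[:, -1].clone()               # "habitual" next throw
    return seqs, nxt

model = ThrowPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for step in range(300):
    x, y = synth_games(64)
    loss = loss_fn(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    x, y = synth_games(1000)
    acc = (model(x).argmax(dim=1) == y).float().mean()
print(f"next-throw accuracy on the synthetic opponent: {acc:.2f}")
```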
Strategy-Proofness, Envy-Freeness and Pareto Efficiency in Online Fair Division with Additive Utilities

We consider fair division problems where indivisible items arrive one by one in an online fashion and are allocated immediately to agents who have additive utilities over these items. Many existing offline mechanisms do not work in this online setting. In addition, many existing axiomatic results often do not transfer from the offline to the online setting. For this reason, we propose here three new online mechanisms, as well as consider the axiomatic properties of three previously proposed online mechanisms. In this paper, we use these mechanisms and characterize classes of online mechanisms that are strategy-proof, and return envy-free and Pareto efficient allocations, as well as combinations of these properties. Finally, we identify an important impossibility result.

Martin Aleksandrov, Toby Walsh
Knowledge Enhanced Neural Networks

We propose Knowledge Enhanced Neural Networks (KENN), an architecture for injecting prior knowledge, codified by a set of logical clauses, into a neural network. In KENN, clauses are directly incorporated into the structure of the neural network as a new layer that includes a set of additional learnable parameters, called clause weights. As a consequence, KENN can learn the level of satisfiability to impose in the final classification. When training data contradict a constraint, KENN learns to ignore it, making the system robust to the presence of wrong knowledge. Moreover, the method returns learned clause weights, which give us information about the influence of each constraint on the final predictions, increasing the interpretability of the model. We evaluated KENN on two standard datasets for multi-label classification, showing that the injection of clauses automatically extracted from the training data sensibly improves the performance. Furthermore, we apply KENN to the problem of finding relationships between detected objects in images by adopting manually curated clauses. The evaluation shows that KENN outperforms state-of-the-art methods on this task.

Alessandro Daniele, Luciano Serafini
Encoding Epistemic Strategies for General Game Playing

We propose a general approach for encoding epistemic strategies for playing incomplete information games. A game strategy involves selecting actions in order to maximise an outcome (e.g., winning the game). In an epistemic strategy the selection of actions is based on reasoning about the knowledge of other players. We show how epistemic strategies can be encoded by supplementing a GDL-II game description with a set of epistemic rules to produce a GDL-III game that an appropriate reasoner can use to play the original GDL-II game. We prove the formal correctness of this approach and provide a practical evaluation to show its efficacy for playing the co-operative multi-player game of Hanabi. It was found that the encoded epistemic rules were able to provide players with a strategy that allowed them to play Hanabi near optimally.

Shawn Manuel, David Rajaratnam, Michael Thielscher
Non-zero-sum Stackelberg Budget Allocation Game for Computational Advertising

Computational advertising has been studied to design efficient marketing strategies that maximize the number of acquired customers. In an increased competitive market, however, a market leader (a leader) requires the acquisition of new customers as well as the retention of her loyal customers because there often exists a competitor (a follower) who tries to attract customers away from the market leader. In this paper, we formalize a new model called the Stackelberg budget allocation game with a bipartite influence model by extending a budget allocation problem over a bipartite graph to a Stackelberg game. To find a strong Stackelberg equilibrium, a solution concept of the Stackelberg game, we propose two algorithms: an approximation algorithm with provable guarantees and an efficient heuristic algorithm. In addition, for a special case where customers are disjoint, we propose an exact algorithm based on linear programming. Our experiments using real-world datasets demonstrate that our algorithms outperform a baseline algorithm even when the follower is a powerful competitor.

Daisuke Hatano, Yuko Kuroki, Yasushi Kawase, Hanna Sumita, Naonori Kakimura, Ken-ichi Kawarabayashi
Game Equivalence and Bisimulation for Game Description Language

This paper investigates the equivalence between games represented by state transition models and its applications. We first define a notion of bisimulation equivalence between state transition models and prove that it can be logically characterized by Game Description Language (GDL). Then we introduce a concept of quotient state transition model. As the minimum equivalent of the original model, it allows us to improve the efficiency of model checking for GDL. Finally, we demonstrate with real games that bisimulation equivalence can be generalized to characterize more general game equivalence.

Guifei Jiang, Laurent Perrussel, Dongmo Zhang, Heng Zhang, Yuzhi Zhang
Characterizing the Expressivity of Game Description Languages

Bisimulations are a key notion to study the expressive power of a modal language. This paper studies the expressiveness of Game Description Language (GDL) and its epistemic extension EGDL through a bisimulation approach. We first define a notion of bisimulation for GDL and prove that it coincides with the indistinguishability of GDL-formulas. Based on it, we establish a characterization of the definability of GDL in terms of k-bisimulations. Then we define a novel notion of bisimulation for EGDL, and obtain a characterization of the expressive power of EGDL. In particular, we show that a special case of the bisimulation for EGDL can be used to characterize the expressivity of GDL. These characterizations not only justify that the notions of bisimulation are appropriate for game description languages, but also provide a powerful tool to identify their expressive power.

Guifei Jiang, Laurent Perrussel, Dongmo Zhang, Heng Zhang, Yuzhi Zhang
A Strategy-Proof Model-Based Online Auction for Ad Reservation

The ad reservation market is an important part of the Internet advertising industry. Advertisers expect to reserve ad slots in advance, while auctioneers need a mechanism for allocating ad slots and maximizing profits. We propose SMAR, a Strategy-proof Model-based online Auction for ad Reservation, to meet their needs. SMAR allows a cancellation policy, meaning auctioneers can revoke a reservation and resell ad slots to advertisers with higher bids. SMAR achieves both incentive compatibility and individual rationality. We implement SMAR and compare it with offline VCG and other related works. The results show that SMAR performs better in terms of both social welfare and revenue.

Qinya Li, Fan Wu, Guihai Chen
Maximum Satisfiability Formulation for Optimal Scheduling in Overloaded Real-Time Systems

In real-time systems where tasks have timing requirements, deadlines may be missed once the workload exceeds the system's capacity and the system becomes overloaded. Finding optimal schedules in overloaded real-time systems is critical in both theory and practice. To this end, existing works have encoded scheduling problems as a set of first-order formulas that can be tackled by a Satisfiability Modulo Theories (SMT) solver. In this paper, we move one step forward by formulating the scheduling dilemma in overloaded real-time systems as a Maximum Satisfiability (MaxSAT) problem. In the MaxSAT formulation, scheduling features are encoded as hard constraints and the task deadlines are treated as soft constraints. An off-the-shelf MaxSAT solver is employed to satisfy as many deadlines as possible, provided that all the hard constraints are met. Our experimental results show that our proposed MaxSAT-based method finds optimal schedules significantly more efficiently than previous works.

Xiaojuan Liao, Hui Zhang, Miyuki Koshimura, Rong Huang, Wenxin Yu
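The hard/soft split described in the abstract above can be illustrated with a toy weighted-CNF encoding using the python-sat package; this is not the paper's full formulation, just the general MaxSAT pattern (hard clauses for scheduling feasibility, weighted soft clauses for deadlines).

```python
# Minimal sketch (a toy encoding with python-sat, not the paper's formulation): hard clauses
# forbid the two jobs from sharing the single time slot, and soft clauses (one per deadline)
# ask that each job be scheduled; RC2 satisfies as many weighted soft clauses as possible.
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

# Variables: 1 = "job A runs in the slot", 2 = "job B runs in the slot".
wcnf = WCNF()
wcnf.append([-1, -2])          # hard: the single slot cannot hold both jobs
wcnf.append([1], weight=3)     # soft: job A meets its deadline (weight = importance)
wcnf.append([2], weight=1)     # soft: job B meets its deadline

with RC2(wcnf) as solver:
    model = solver.compute()
    print("assignment:", model, "cost (weight of missed deadlines):", solver.cost)
```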
A Cognitive Model of Human Bias in Matching

The schema matching problem is at the basis of integrating structured and semi-structured data. Although it has been investigated in the fields of databases, AI, the Semantic Web and data mining for many years, the core challenge still remains the ability to create quality matchers, automatic tools for identifying correspondences among data concepts (e.g., database attributes). In this work, we investigate human matchers' behavior using a new concept termed match consistency and introduce a novel use of cognitive models to explain human matcher performance. Using empirical evidence, we further show that humans suffer from predictable biases when matching schemata, which prevent them from providing consistent matchings.

Rakefet Ackerman, Avigdor Gal, Tomer Sagi, Roee Shraga

Multi-Agent Systems

Frontmatter
Adaptive Incentive Allocation for Influence-Aware Proactive Recommendation

Most recommendation systems are designed to discover users' demands and preferences, but are unable to affect users' decisions so as to realize system-level objectives. In this light, we propose a generic concept named 'proactive recommendation', which focuses not only on maintaining users' satisfaction but also on realizing system-level objectives. In this paper, we argue that proactive recommendation is crucial for scenarios where system-level objectives must be realized. To realize proactive recommendation, we aim to affect users' decision-making by providing incentives and utilizing social influence between users. We design an approach for discovering the influential users in an unknown network, and a dynamic game-based mechanism that allocates incentives to users dynamically. The preliminary experimental results show the effectiveness of the proposed approach.

Shiqing Wu, Quan Bai, Byeong Ho Kang
Incentivizing Long-Term Engagement Under Limited Budget

In recent years, more and more systems have been designed to influence users' decisions in order to realize certain system goals. However, most of these systems focus only on affecting users' short-term or one-off behaviors, while ignoring the maintenance of users' long-term engagement. In this light, we design a novel approach that focuses on incentivizing users' long-term engagement. Inspired by the Markov Decision Process (MDP), we first formally model a user's decision-making under long-term incentives. We then propose the MDP-based Incentive Estimation (MDP-IE) approach for determining the value of an incentive and the requirement for obtaining it. Experimental results demonstrate that the proposed approach can effectively sustain users' long-term engagement. Furthermore, the experiments also demonstrate that incentivizing users' long-term engagement is more beneficial than one-off or short-term approaches.
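To make the MDP framing concrete, here is a minimal value-iteration sketch over a toy engagement model in which states are engagement levels and actions are incentive levels. All states, transition probabilities, and rewards are invented for illustration; this is not the MDP-IE model from the paper.

```python
# Toy MDP over user-engagement levels; actions are incentive levels offered by
# the system. All numbers are invented and are not the paper's MDP-IE model.
states = ["low", "medium", "high"]
actions = ["no_incentive", "small", "large"]

# transition[s][a] = list of (next_state, probability)
transition = {
    "low":    {"no_incentive": [("low", 0.9), ("medium", 0.1)],
               "small":        [("low", 0.5), ("medium", 0.5)],
               "large":        [("medium", 0.7), ("high", 0.3)]},
    "medium": {"no_incentive": [("low", 0.4), ("medium", 0.6)],
               "small":        [("medium", 0.6), ("high", 0.4)],
               "large":        [("high", 0.8), ("medium", 0.2)]},
    "high":   {"no_incentive": [("medium", 0.5), ("high", 0.5)],
               "small":        [("high", 0.8), ("medium", 0.2)],
               "large":        [("high", 0.95), ("medium", 0.05)]},
}
# Reward = engagement value minus incentive cost (illustrative numbers).
engagement_value = {"low": 0.0, "medium": 1.0, "high": 2.0}
incentive_cost = {"no_incentive": 0.0, "small": 0.3, "large": 0.8}

gamma = 0.9
V = {s: 0.0 for s in states}
for _ in range(200):                      # value iteration
    V = {s: max(engagement_value[s] - incentive_cost[a]
                + gamma * sum(p * V[s2] for s2, p in transition[s][a])
                for a in actions)
         for s in states}

policy = {s: max(actions,
                 key=lambda a: engagement_value[s] - incentive_cost[a]
                 + gamma * sum(p * V[s2] for s2, p in transition[s][a]))
          for s in states}
print(policy)   # which incentive level to offer in each engagement state
```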

Shiqing Wu, Quan Bai
A Compromising Strategy Based on Constraint Relaxation for Automated Negotiating Agents

This paper presents a compromising strategy based on constraint relaxation for automated negotiating agents. Automated negotiating agents have been studied widely and are one of the key technologies for a future society in which multiple heterogeneous agents act collaboratively and competitively to help humans perform daily activities. For example, driverless cars will be common in the near future, and such autonomous cars will need to cooperate and also compete with each other in traffic situations. Many studies, including international competitions, have addressed negotiating agents. A principal issue is that most of the proposed agents employ an ad hoc conceding process in which they essentially adjust a threshold for accepting their opponents' offers. Because only a threshold is adjusted, it is very difficult to explain how and what the agent conceded, even after an agreement has been reached. To address this issue, we propose an explainable concession process based on constraint relaxation, in which an agent revises its beliefs by giving up a certain constraint so that it can accept its opponent's offer. We also describe three types of compromising strategies. Experimental results demonstrate that these strategies are efficient.
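A minimal sketch of a concession-by-relaxation loop follows: the agent drops its least-important violated constraint until the opponent's offer becomes acceptable, and records what was dropped so the concession can be explained. The constraints, weights, and relaxation order are illustrative assumptions, not the three strategies proposed in the paper.

```python
# Illustrative concession by constraint relaxation. Constraints, weights and
# the relaxation order are toy assumptions, not the paper's strategies.

# Each constraint: (name, weight/importance, predicate over an offer)
constraints = [
    ("price <= 100",       5, lambda o: o["price"] <= 100),
    ("delivery <= 7 days", 3, lambda o: o["delivery"] <= 7),
    ("warranty >= 2 yrs",  1, lambda o: o["warranty"] >= 2),
]

def negotiate(offer, constraints):
    """Relax (drop) the least important violated constraint until the offer is
    acceptable. Returns (accepted?, relaxed constraint names) for explainability."""
    active = sorted(constraints, key=lambda c: c[1])   # least important first
    relaxed = []
    while True:
        violated = [c for c in active if not c[2](offer)]
        if not violated:
            return True, relaxed
        name, weight, _ = violated[0]                  # least important violated constraint
        relaxed.append(name)
        active = [c for c in active if c[0] != name]

opponent_offer = {"price": 95, "delivery": 10, "warranty": 1}
accepted, relaxed = negotiate(opponent_offer, constraints)
print(accepted, relaxed)   # True, ['warranty >= 2 yrs', 'delivery <= 7 days']
```

The `relaxed` list is what makes the concession explainable: the agent can report exactly which beliefs it gave up to reach agreement.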

Shun Okuhara, Takayuki Ito
Semantics of Opinion Transitions in Multi-Agent Forum Argumentation

There are online forums, such as changemyview, where a user may submit his/her views on a subject matter and other users argue to try to change his/her opinions. To measure the quality of such a discussion, one useful criterion is how influential a given topic is on participating users' opinion changes, as may be measured by the change (if any) in the proportion of supporting, objecting, and mixed opinions among users. In this work, we incorporate the notion of agency into QuAD, a previously proposed argumentation framework for issue-based information systems, and formulate the semantics of opinion transitions by newly considering agent-wise evaluation of QuAD initial scores.
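The measurement hinted at above can be sketched in a few lines: compare the distribution of supporting, objecting, and mixed opinions before and after a discussion. The opinion data below are invented, and this is only the proportion-shift criterion, not the QuAD-based semantics developed in the paper.

```python
from collections import Counter

# Invented opinion labels per user before and after a forum discussion.
before = ["support", "support", "object", "mixed", "object", "support"]
after  = ["support", "object",  "object", "object", "mixed",  "support"]

def proportions(opinions):
    counts = Counter(opinions)
    total = len(opinions)
    return {label: counts.get(label, 0) / total for label in ("support", "object", "mixed")}

p_before, p_after = proportions(before), proportions(after)
shift = {label: round(p_after[label] - p_before[label], 3) for label in p_before}
print(shift)   # e.g. {'support': -0.167, 'object': 0.167, 'mixed': 0.0}
```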

Ryuta Arisaka, Takayuki Ito
Learning Individual and Group Preferences in Abstract Argumentation

In Abstract Argumentation, rational agents given the same AA framework accept the same arguments unless they reason under different AA semantics. Real agents may not do so in such situations, and in this paper we assume that this is because they have different preferences over the conflicting arguments. Hence, by reconstructing their reasoning processes, we can learn their hidden preferences, which then allow us to predict what else they must accept. Concretely, we formalize and develop algorithms for problems such as learning the hidden preference relation of an agent from his expressed opinion, by which we mean a subset of arguments or attacks he accepted, and learning the collective preferences of a group from a dataset of individual opinions. A major challenge we address in this endeavor is representing and reasoning with "answer sets" of preference relations, which are generally exponential in number or even infinite.
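As a toy illustration of how preferences can be read off an expressed opinion, the sketch below applies one simple inference rule in a preference-based setting: if an agent accepts both x and y even though x attacks y, the attack must have been cancelled, so the agent presumably prefers y to x. The framework and opinion are invented, and this single rule is not the learning algorithm proposed in the paper.

```python
# Simplistic preference inference in a preference-based argumentation setting.
# This illustrates only one inference rule, not the paper's algorithms.

attacks = {("a", "b"), ("b", "c"), ("c", "a"), ("d", "b")}  # toy attack relation
opinion = {"a", "b"}   # arguments the agent says it accepts

def infer_preferences(attacks, accepted):
    """If x attacks y but both are accepted, the attack must have failed,
    so the agent presumably prefers y over x."""
    return {(y, x) for (x, y) in attacks if x in accepted and y in accepted}

print(infer_preferences(attacks, opinion))   # {('b', 'a')}
```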

Nguyen Duy Hung, Van-Nam Huynh
Epistemic Argumentation Framework

The paper introduces the notion of an epistemic argumentation framework (EAF) as a means to integrate the beliefs of a reasoner with argumentation. Intuitively, an EAF encodes the beliefs of an agent who reasons about arguments. Formally, an EAF is a pair of an argumentation framework and an epistemic constraint. The semantics of an EAF is defined by the notion of an ω-epistemic labelling set, where ω is complete, stable, grounded, or preferred: a set of ω-labellings that collectively satisfies the epistemic constraint of the EAF. The paper shows how an EAF can represent different views of reasoners on the same argumentation framework, and how preferences and multi-agent argumentation can be represented in EAF. Finally, the paper discusses the complexity of deciding whether or not an ω-epistemic labelling set exists.
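A brute-force sketch of the underlying idea: enumerate the complete labellings of a tiny argumentation framework and keep only those compatible with an epistemic constraint. The framework and the constraint below are invented, and the filtering step is only a simplified stand-in for the ω-epistemic labelling sets defined in the paper.

```python
from itertools import product

# Toy argumentation framework; the framework and the epistemic constraint
# below are invented for illustration.
arguments = ["a", "b", "c"]
attacks = {("a", "b"), ("b", "a"), ("b", "c")}

def attackers(x):
    return [y for (y, z) in attacks if z == x]

def is_complete(labelling):
    """Standard complete-labelling conditions on {in, out, undec}."""
    for x in arguments:
        atts = [labelling[y] for y in attackers(x)]
        if labelling[x] == "in" and not all(l == "out" for l in atts):
            return False
        if labelling[x] == "out" and not any(l == "in" for l in atts):
            return False
        if labelling[x] == "undec" and (all(l == "out" for l in atts) or any(l == "in" for l in atts)):
            return False
    return True

complete = [dict(zip(arguments, labels))
            for labels in product(["in", "out", "undec"], repeat=len(arguments))
            if is_complete(dict(zip(arguments, labels)))]

# A toy epistemic constraint: "the agent believes c is not accepted".
believed = [L for L in complete if L["c"] != "in"]
print(complete)   # all complete labellings
print(believed)   # those compatible with the agent's beliefs
```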

Chiaki Sakama, Tran Cao Son
Modeling Convention Emergence by Observation with Memorization

Convention emergence studies how a global convention arises from local interactions among agents. Traditionally, studies on convention emergence are conducted by means of agent-based simulations, whereas very few are based on model-based approaches. In this paper, we employ a model-based approach to study convention emergence by observation with memorization in a large population under social learning. In particular, we derive the recurrence equations of the population dynamic, i.e., the evolution of the action distribution over time, under the external majority (EM) strategy. The recurrence equations precisely predict the behaviour of the multi-agent system at any time point, which we verify with agent-based simulations. Based on the recurrence equations, we prove the convergence behaviour under various situations and work out the optimal memory length for different numbers of actions. Finally, we show that the EM strategy outperforms other popular strategies, such as Q-learning and Highest Cumulative Reward (HCR), in convergence speed under social learning, even in very large convention spaces.
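To give a feel for a recurrence-style analysis, the sketch below iterates a simple population-level update for two actions: each step, every agent observes m random peers and adopts the majority action (ties broken uniformly). This generic majority dynamic is only an illustration of the approach, not the exact EM recurrence derived in the paper.

```python
from math import comb

# Population-level dynamic for two actions under a majority-observation rule.
# This is a generic illustration of a recurrence analysis, not the paper's
# exact EM recurrence equations.

def majority_adopt_prob(x, m):
    """Probability that m observations (each = action A with prob. x) contain a
    strict majority of A, plus half the tie probability."""
    p = sum(comb(m, k) * x**k * (1 - x)**(m - k) for k in range(m // 2 + 1, m + 1))
    if m % 2 == 0:                       # break ties uniformly
        p += 0.5 * comb(m, m // 2) * x**(m // 2) * (1 - x)**(m // 2)
    return p

def iterate(x0, m, steps):
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = majority_adopt_prob(x, m)    # the recurrence x_{t+1} = f(x_t)
        trajectory.append(x)
    return trajectory

print(iterate(0.55, m=5, steps=10))      # fraction using action A drifts towards 1
```

Iterating the closed-form recurrence is what allows the population's behaviour to be predicted at any time point without running an agent-based simulation.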

Chin-wing Leung, Shuyue Hu, Ho-fung Leung
Breaking Deadlocks in Multi-agent Reinforcement Learning with Sparse Interaction

Although multi-agent reinforcement learning (MARL) is a promising method for learning a collaborative action policy that enables each agent to accomplish specific tasks, the state-action space increases exponentially with the number of agents. Coordinating Q-learning (CQ-learning) effectively reduces the state-action space by having each agent determine when it should consider the states of other agents, on the basis of a comparison between the immediate rewards in a single-agent environment and those in a multi-agent environment. One way to improve the performance of CQ-learning is to have agents greedily select actions and switch between Q-value update equations in accordance with the state of each agent in the next step. Although this "GPCQ-learning" usually outperforms CQ-learning, a deadlock can occur if there is no difference in the immediate rewards between the single-agent and multi-agent environments. A method has been developed to break such deadlocks by detecting their occurrence and augmenting the state of a deadlocked agent to include the state of the other agent. Evaluation of the method using pursuit games demonstrated that it improves the performance of GPCQ-learning.
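A skeletal sketch of the deadlock-handling idea: if an agent's local state and reward repeat for several steps, treat it as deadlocked and start conditioning its Q-values on the other agent's state as well. The thresholds, Q-update, and environment interface below are placeholders, not the GPCQ-learning extension evaluated in the paper.

```python
import random
from collections import defaultdict

# Skeletal deadlock handling for a tabular two-agent setting. Thresholds, the
# Q-update and the environment interface are placeholders, not the paper's method.

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
DEADLOCK_STEPS = 5                       # unchanged local state/reward this long => deadlock
ACTIONS = ["up", "down", "left", "right"]

Q = defaultdict(float)                   # Q[(state_key, action)]
augmented = set()                        # local states that also include the other agent's state
stuck_counter = defaultdict(int)

def state_key(own_state, other_state):
    """Use only the local state unless it has been marked as deadlocked."""
    return (own_state, other_state) if own_state in augmented else (own_state,)

def choose_action(own_state, other_state):
    key = state_key(own_state, other_state)
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(key, a)])

def update(own_state, other_state, action, reward, next_own, next_other, prev_reward):
    # Deadlock detection: local state and reward repeat too many times in a row.
    if next_own == own_state and reward == prev_reward:
        stuck_counter[own_state] += 1
        if stuck_counter[own_state] >= DEADLOCK_STEPS:
            augmented.add(own_state)     # from now on, consider the other agent's state too
    else:
        stuck_counter[own_state] = 0
    key, next_key = state_key(own_state, other_state), state_key(next_own, next_other)
    best_next = max(Q[(next_key, a)] for a in ACTIONS)
    Q[(key, action)] += ALPHA * (reward + GAMMA * best_next - Q[(key, action)])
```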

Toshihiro Kujirai, Takayoshi Yokota
Backmatter
Metadata
Title
PRICAI 2019: Trends in Artificial Intelligence
Editors
Abhaya C. Nayak
Alok Sharma
Copyright Year
2019
Electronic ISBN
978-3-030-29908-8
Print ISBN
978-3-030-29907-1
DOI
https://doi.org/10.1007/978-3-030-29908-8
