
About This Book

This book constitutes the refereed proceedings of the 10th International Conference on Knowledge Science, Engineering and Management, KSEM 2017, held in Melbourne, Australia, in August 2017.

The 35 revised full papers and 12 short papers presented were carefully reviewed and selected from 134 submissions. The papers are organized in the following topical sections: text mining and document analysis; formal semantics and fuzzy logic; knowledge management; knowledge integration; knowledge retrieval; recommendation algorithms and systems; knowledge engineering; and knowledge representation and reasoning.

Table of Contents

Frontmatter

Text Mining and Document Analysis

Frontmatter

Learning Sparse Overcomplete Word Vectors Without Intermediate Dense Representations

Dense word representation models have attracted much interest for their promising performance in various natural language processing (NLP) tasks. However, dense word vectors are uninterpretable, inseparable, and costly in time and space. We propose a model that learns sparse word representations directly from plain text, unlike most existing methods, which derive sparse vectors from intermediate dense word embeddings. Additionally, we design an efficient algorithm based on noise-contrastive estimation (NCE) to train the model. Moreover, a clustering-based adaptive updating scheme for the noise distribution is introduced for effective learning when NCE is applied. Experimental results show that the resulting sparse word vectors are comparable to dense vectors on word analogy tasks, while our models outperform dense word vectors on word similarity tasks. The sparse word vectors are also much more interpretable, according to the sparse vector visualization and word intruder identification experiments.

Yunchuan Chen, Ge Li, Zhi Jin
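As a rough illustration of the NCE training objective this abstract refers to, the following sketch computes the standard noise-contrastive loss for one observed word against k noise samples. This is a generic NCE formulation, not the authors' implementation; the scores and noise probabilities are placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nce_loss(s_data, pn_data, s_noise, pn_noise, k):
    """Noise-contrastive estimation loss for one observed word.

    s_data   : model score of the observed word
    pn_data  : noise-distribution probability of the observed word
    s_noise  : array of model scores for the k noise samples
    pn_noise : array of noise probabilities for the k noise samples
    """
    # Posterior probability that a word came from the data distribution
    # rather than the noise distribution: sigma(s - log(k * p_noise)).
    p_data = sigmoid(s_data - np.log(k * pn_data))
    p_noise = sigmoid(s_noise - np.log(k * pn_noise))
    return -np.log(p_data) - np.sum(np.log(1.0 - p_noise))
```

Minimizing this loss pushes the model to score observed words above the noise samples; the clustering-based scheme in the paper adapts the noise distribution itself during training.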

A Study of Distributed Semantic Representations for Automated Essay Scoring

Automated essay scoring (AES) applies machine learning and NLP techniques to automatically rate essays written in an educational setting, considerably reducing the workload of human raters. Current AES systems utilize common text features such as essay length, tf-idf weights, and the number of grammar errors to learn a scoring function. Despite the effectiveness of those common features, the semantics of the essay text is not well considered. To this end, this paper presents a study of the usefulness of distributed semantic representations for AES. Novel features based on word or paragraph embeddings are combined with the common text features to improve the effectiveness of AES systems. Evaluation results show that the use of distributed semantic representations is beneficial for the task of AES.

Cancan Jin, Ben He, Jungang Xu
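The feature combination described above can be sketched as follows: a handful of surface features concatenated with a mean word-embedding vector. The specific surface features and the embedding source here are illustrative stand-ins, not the paper's exact feature set.

```python
import numpy as np

def essay_features(tokens, embeddings, dim=50):
    """Combine common surface features with a distributed semantic feature.

    tokens     : list of word tokens in the essay
    embeddings : dict mapping a word to its embedding vector
                 (assumed pre-trained; hypothetical here)
    """
    # Common features: essay length and vocabulary richness, stand-ins for
    # the length/tf-idf/grammar-error features used in typical AES systems.
    length = float(len(tokens))
    richness = len(set(tokens)) / max(length, 1.0)
    # Distributed semantic feature: mean of the word embeddings.
    vecs = [embeddings[w] for w in tokens if w in embeddings]
    semantic = np.mean(vecs, axis=0) if vecs else np.zeros(dim)
    return np.concatenate([[length, richness], semantic])
```

A scoring function is then learned over these combined vectors with any standard regressor.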

Weakly Supervised Feature Compression Based Topic Model for Sentiment Classification

Sentiment classification aims to use automatic tools to extract subjective information, such as opinions and attitudes, from user comments. Most existing methods center on semantic relationships and the extraction of syntactic features, while document topic features are ignored. In this paper, a weakly supervised hierarchical model called external knowledge-based Latent Dirichlet Allocation (ELDA) is proposed to extract document topic features. We first use ELDA to compress the document features and increase the polarity weight of the document topic features, and then train an SVM classifier on the topic features. Experimental results on one English dataset and one Chinese dataset show that our method outperforms state-of-the-art models by at least $$4\%$$ in terms of accuracy.

Yan Hu, Xiaofei Xu, Li Li

An Effective Gated and Attention-Based Neural Network Model for Fine-Grained Financial Target-Dependent Sentiment Analysis

In this work, we propose an effective neural network architecture, GABi-LSTM, to address fine-grained target-dependent sentiment analysis of financial microblogs and news. We first adopt a gated mechanism to adaptively integrate character-level and word-level embeddings for word representation, then present an attention-based Bi-LSTM component to embed target-dependent information into the sentence representation, and finally use a linear regression layer to predict the sentiment score with respect to the target company. Comparative experiments on financial benchmark datasets show that our proposed GABi-LSTM model outperforms baselines and previous top systems by a large margin, achieving state-of-the-art performance.

Mengxiao Jiang, Jianxiang Wang, Man Lan, Yuanbin Wu

A Hidden Astroturfing Detection Approach Base on Emotion Analysis

This paper addresses the detection of hidden astroturfing based on emotion analysis. We propose a hidden astroturfing detection method that combines emotion analysis with unfair rating detection. The approach contains five functional modules: a data crawling module, a pre-processing module, a bag-of-words establishment module, an emotion mining and analysis module, and a matching module. We use the area under the ROC curve (AUC) to evaluate the proposed approach. The results show that the method can detect implicit astroturfing, provided that emotion classification accuracy is improved. Our work discovers and studies a new hidden astroturfing characteristic and manually constructs a corpus for text emotion classification, establishing a basis for our future research.

Tong Chen, Noora Hashim Alallaq, Wenjia Niu, Yingdi Wang, Xiaoxuan Bai, Jingjing Liu, Yingxiao Xiang, Tong Wu, Jiqiang Liu

Leveraging Term Co-occurrence Distance and Strong Classification Features for Short Text Feature Selection

In this paper, a short text feature selection method based on term co-occurrence distance and strong classification features is presented. On the one hand, the co-occurrence distance between terms in each document is used to determine a co-occurrence distance correlation, from which a correlation weight for each term is defined. On the other hand, an improved expected cross entropy is defined to obtain the weight of a term in a particular class with strong class indication. All terms of each class are sorted in descending order of their weights, and the top-k terms are selected as feature terms. Experiments show that our method improves the effectiveness of short text feature selection.

Huifang Ma, Yuying Xing, Shuang Wang, Miao Li
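The distance-based weighting and top-k selection steps can be sketched as follows. The 1/(1+d) decay is an illustrative choice, not necessarily the correlation function used in the paper.

```python
from collections import defaultdict

def cooccurrence_weights(docs):
    """Correlation weight per term from term co-occurrence distances.

    docs : list of token lists. The closer two terms co-occur within a
    document, the more they reinforce each other's weight (a simplified
    reading of the distance-based correlation; the 1/(1+d) decay is an
    assumption).
    """
    weights = defaultdict(float)
    for doc in docs:
        for i in range(len(doc)):
            for j in range(i + 1, len(doc)):
                w = 1.0 / (1.0 + (j - i))  # decays with distance
                weights[doc[i]] += w
                weights[doc[j]] += w
    return dict(weights)

def select_top_k(weights, k):
    """Sort terms by weight in descending order and keep the top k."""
    return [t for t, _ in sorted(weights.items(), key=lambda x: -x[1])[:k]]
```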

Formal Semantics and Fuzzy Logic

Frontmatter

A Fuzzy Logic Based Policy Negotiation Model

Few existing policy generation models reflect the fact that self-interested stakeholders often need to interact when generating a set of policies acceptable to them all. To this end, this paper proposes a negotiation model for policy generation. In this model, each negotiating agent employs a fuzzy logic system to evaluate each policy in a proposal made by others during negotiation, and then uses a uninorm operator to aggregate the evaluations of all the individual policies in the proposal into an overall evaluation of the proposal. Moreover, different negotiating agents can use different fuzzy reasoning systems. Finally, we conduct experiments to reveal insights into our model.

Jieyu Zhan, Xudong Luo, Yuncheng Jiang, Wenjun Ma, Mukun Cao
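The uninorm aggregation step can be illustrated with the cross-ratio uninorm, a standard uninorm on [0,1] with neutral element 0.5. The abstract does not fix a specific uninorm, so this is one plausible choice, not the paper's definition.

```python
def uninorm(x, y):
    """Cross-ratio uninorm on [0,1] with neutral element 0.5.

    Values above 0.5 reinforce each other, values below 0.5 drag the
    aggregate down, and 0.5 acts as a neutral evaluation.
    """
    num = x * y
    den = x * y + (1.0 - x) * (1.0 - y)
    if den == 0.0:  # the (0,1)/(1,0) corner cases are conventionally 0
        return 0.0
    return num / den

def aggregate(evaluations):
    """Aggregate per-policy evaluations of a proposal into one value."""
    result = 0.5  # start from the neutral element
    for e in evaluations:
        result = uninorm(result, e)
    return result
```

Unlike a plain average, the uninorm is compensatory in both directions: two moderately good evaluations combine into a strictly better overall evaluation.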

f-$$\mathcal {ALC}$$(D)-LTL: A Fuzzy Spatio-Temporal Description Logic

To achieve representation and reasoning over fuzzy spatio-temporal knowledge on the Semantic Web, we propose in this paper a fuzzy spatio-temporal description logic f-$$\mathcal {ALC}$$(D)-LTL that extends the spatial fuzzy description logic f-$$\mathcal {ALC}$$(D) with linear temporal logic (LTL). First, we give a formal definition of the syntax and semantics of f-$$\mathcal {ALC}$$(D)-LTL. Then, we propose a tableau algorithm for reasoning over fuzzy spatio-temporal knowledge, i.e., deciding the satisfiability problem of f-$$\mathcal {ALC}$$(D)-LTL formulas. Finally, we show the termination, soundness, and completeness of the tableau algorithm.

Haitao Cheng, Zongmin Ma

R-Calculus for the Primitive Statements in Description Logic

The AGM postulates [1] address belief revision (revision by a single belief), and the DP postulates [14] address iterated revision (revision by a finite sequence of beliefs). Li [4] gave an R-calculus for R-configurations $$\Delta |\Gamma $$, where $$\Delta $$ is a set of atomic formulas or negations of atomic formulas, and $$\Gamma $$ is a finite set of formulas. To remove the requirement that $$\Delta $$ be a set of atoms, we give an R-calculus S$$^\mathrm{DL}$$ (a set of deduction rules) with respect to $$\subseteq $$-minimal change such that for any finite consistent sets $$\Gamma ,\Delta $$ of statements in the description logic $$\mathcal{ALC}$$, there is a consistent subset $$\Theta \subseteq \Gamma $$ of statements such that $$\Delta |\Gamma \Rightarrow \Delta ,\Theta $$ is provable; and we prove that S$$^\mathrm{DL}$$ is sound and complete with respect to $$\subseteq $$-minimal change.

Yuhui Wang, Cungen Cao, Yuefei Sui

A Multi-objective Attribute Reduction Method in Decision-Theoretic Rough Set Model

Many attribute reduction methods have been proposed for the decision-theoretic rough set model based on different definitions of an attribute reduct, where an attribute reduct can be seen as an attribute subset that satisfies specific criteria. Most reducts are defined on the basis of a single criterion, which can make it difficult for users to choose an appropriate reduct and design the corresponding reduction algorithm. To address this problem, we propose a multi-objective attribute reduction method based on NSGA-II for the decision-theoretic rough set model. Three definitions of attribute reduct, based on the positive region, decision cost, and mutual information, are considered and transformed into a multi-objective optimization problem. Experimental results show that the multi-objective reduction method obtains robust and better classification performance.

Lu Wang, Weiwei Li, Xiuyi Jia, Bing Zhou
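At the heart of NSGA-II is non-dominated sorting of candidate solutions, here candidate attribute subsets scored on the three objectives. A minimal sketch of extracting the first Pareto front (generic NSGA-II machinery, not the authors' full algorithm; objectives are assumed to be minimized):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (objectives minimized, e.g. decision cost and
    negated positive-region size)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def first_pareto_front(points):
    """Return indices of non-dominated points (the first NSGA-II front)."""
    front = []
    for i, p in enumerate(points):
        if not any(dominates(q, p) for j, q in enumerate(points) if j != i):
            front.append(i)
    return front
```

NSGA-II iterates this sorting with crowding-distance selection and genetic operators; the first front of the final population is the set of trade-off reducts offered to the user.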

A Behavior-Based Method for Distinction of Flooding DDoS and Flash Crowds

DDoS attacks and flash crowds are notoriously difficult to distinguish. To address this issue, this paper derives a new feature set to profile the behaviors of legitimate users and bots, and proposes an approach that employs Random Forest to distinguish DDoS attacks from flash crowds on two widely used datasets. The results show that the proposed approach achieves a distinguishing accuracy of more than 95%. Compared with the traditional entropy-based method, it still maintains high accuracy.

Degang Sun, Kun Yang, Zhixin Shi, Bin Lv
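The traditional entropy baseline mentioned above measures how concentrated the request traffic is across sources. A minimal sketch (the exact traffic attribute over which entropy is computed varies by method; per-source-IP counts are an illustrative choice):

```python
import math
from collections import Counter

def request_entropy(source_ips):
    """Shannon entropy (bits) of the per-source request distribution.

    Entropy-based detectors flag traffic whose source distribution is
    abnormally concentrated or dispersed; a flash crowd of legitimate
    users typically shows a different entropy profile than a botnet.
    """
    counts = Counter(source_ips)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```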

Knowledge Management

Frontmatter

Analyzing Customer’s Product Preference Using Wireless Signals

A customer's product preference describes how a customer chooses products or prefers one collection over another. Understanding it can provide retail store owners and librarians valuable insight for adjusting products and services. Current solutions offer a certain convenience over common approaches such as questionnaires and interviews; however, they require either video surveillance, which is invasive, or wearable sensors, which depend on additional devices. Recently, researchers have exploited the physical layer information of wireless signals for robust device-free human detection, ever since Channel State Information (CSI) became available on commodity WiFi devices. Despite significant progress, few works have studied customers' product preferences. In this paper, we propose PreFi, a customer product preference analysis system based on Commercial Off-The-Shelf (COTS) WiFi-enabled devices. The key insight of PreFi is to extract variance features from the fine-grained time-series CSI, which is sensitively affected by customer activity, in order to recognize what the customer is doing. First, we apply Principal Component Analysis (PCA) to smooth the preprocessed CSI values, since general denoising methods are insufficient for removing bursty and impulse noises. Second, a sliding-window-based feature extraction method and a majority voting scheme are adopted to compare the distributions of activity profiles and identify different activities. We prototype our system on COTS WiFi-enabled devices and extensively evaluate it in typical indoor scenarios. The results indicate that PreFi can recognize several representative customer activities with satisfactory accuracy and robustness.

Na Pang, Dali Zhu, Kaiwen Xue, Wenjing Rong, Yinlong Liu, Changhai Ou
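The PCA smoothing step described above can be sketched as a projection of the CSI time series onto its top principal components. This is a generic PCA denoiser, not the authors' pipeline; the component count is an illustrative choice.

```python
import numpy as np

def pca_denoise(csi, n_components=3):
    """Project a time-series CSI matrix onto its top principal components.

    csi : (T, S) array of T time samples over S subcarriers. Bursty and
    impulse noise spreads over the low-variance components, so keeping
    only the top few components acts as a denoiser.
    """
    mean = csi.mean(axis=0)
    centered = csi - mean
    # SVD of the centered data: rows of vt are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axes = vt[:n_components]
    return centered @ axes.T @ axes + mean
```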

Improved Knowledge Base Completion by the Path-Augmented TransR Model

Knowledge base completion aims to infer new relations from existing information. In this paper, we propose the path-augmented TransR (PTransR) model to improve the accuracy of link prediction. We build PTransR on TransR, currently the best one-hop model, and regularize it with information from relation paths. In our experiments, we evaluate PTransR on the task of entity prediction. Experimental results show that PTransR outperforms previous models.

Wenhao Huang, Ge Li, Zhi Jin
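The underlying TransR scoring function, on which PTransR builds, is straightforward to write down: project head and tail entities into a relation-specific space and measure how well the relation vector translates one onto the other. A sketch with numpy (the path-augmentation regularizer is not shown):

```python
import numpy as np

def transr_score(h, t, r, M_r):
    """TransR plausibility score for the triple (h, r, t).

    h, t : entity embeddings in entity space
    r    : relation embedding in relation space
    M_r  : relation-specific projection matrix
    The translation M_r h + r ≈ M_r t should hold for true triples;
    lower scores mean more plausible triples.
    """
    h_r = M_r @ h
    t_r = M_r @ t
    return float(np.linalg.norm(h_r + r - t_r, ord=2))
```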

Balancing Between Cognitive and Semantic Acceptability of Arguments

This paper addresses the problem of approximating human cognition and semantic extensions regarding the acceptability status of arguments. We introduce three types of logical equilibria, in terms of satisfiability, entailment, and semantic equivalence, in order to analyze the balance between human cognition and semantic extensions. The generality of our proposal is shown by the existence conditions of equilibrium solutions. Its applicability is demonstrated by the fact that it detects a flaw in an argumentation actually taking place in an online forum and suggests a possible resolution.

Hiroyuki Kido, Keishi Okamoto

Discovery of Jump Breaks in Joint Volatility for Volume and Price of High-Frequency Trading Data in China

Recent years have witnessed increasingly frequent abnormal fluctuations in stock markets, making it important to monitor such fluctuations dynamically in real time. To that end, this paper first proposes a realized trading volatility (RTV) model and analyzes its properties. Next, based on the RTV model, it develops a critical jump point test for the joint volatility of volume and price using matrix singular values. Finally, the proposed models are evaluated on minute-level transaction data of China's Shanghai and Shenzhen A-share stock markets over 2009.01.05–2009.03.31. With the PV, VV, and RTV sequence values extracted from the transaction data, case studies are performed on selected stocks, and empirical suggestions are offered for maintaining the stability of the market index.

Xiao-Wei Ai, Tianming Hu, Gong-Ping Bi, Cheng-Feng Lei, Hui Xiong
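The basic ingredients of the approach, realized volatility from intraday returns and singular values of a stacked volatility matrix, can be sketched as follows. This is a generic illustration; the paper's joint volume-price RTV construction and jump test are more elaborate.

```python
import numpy as np

def realized_volatility(prices):
    """Realized volatility from intraday prices: the sum of squared
    log-returns (the standard building block behind RTV-style models)."""
    log_returns = np.diff(np.log(prices))
    return float(np.sum(log_returns ** 2))

def joint_singular_values(price_vol, volume_vol):
    """Singular values of the stacked price/volume volatility series,
    the quantities inspected for critical jump points."""
    m = np.vstack([price_vol, volume_vol])
    return np.linalg.svd(m, compute_uv=False)
```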

Device-Free Intruder Sensing Leveraging Fine-Grained Physical Layer Signatures

With the development of smart indoor spaces, intruder sensing has attracted great attention in the past decades. Real-time intruder sensing in intelligent video surveillance is challenging due to various covariate factors such as walking surface, clothing, and carrying condition. Gait recognition provides a feasible approach to human identification, but pioneering systems usually rely on computer vision or wearable sensors, which pose unacceptable privacy risks or depend on additional devices. In this paper, we present CareFi, a device-free intruder sensing system that can identify a stranger or burglar based on Commercial Off-The-Shelf (COTS) WiFi-enabled devices. CareFi extracts fine-grained physical layer Channel State Information (CSI) to analyze distinguishing gait characteristics for intruder sensing. CareFi can identify intruders under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions, does not require any dedicated sensors or lighting, and works in the dark just as well as in the light. We prototype CareFi using COTS WiFi devices, and experimental results in typical indoor scenarios show that it achieves a detection rate of more than $$87.2\%$$ for intruder sensing.

Dali Zhu, Na Pang, Weimiao Feng, Muhmmad Al-Khiza’ay, Yuchen Ma

Understanding Knowledge Management in Agile Software Development Practice

Knowledge management in agile software development has typically been treated as a broad topic, resulting in major classifications of its schools and concepts. What knowledge is involved in everyday agile practice, and how agile teams manage it, is not well understood. To address these questions, we performed a systematic literature review of 48 relevant empirical studies selected from reputed databases. Using a thematic analysis approach to the synthesis, we discovered that (a) agile teams use three knowledge management strategies, namely discussions, artifacts, and visualisations; (b) there are three types of software engineering knowledge: team progress as project knowledge, requirements as product knowledge, and coding techniques as process knowledge; and (c) this knowledge is present in several everyday agile practices. A theoretical model describing how knowledge management strategies and knowledge types relate to agile practices is also presented. These results will help agile practitioners become aware of the specific knowledge types and knowledge management strategies and enable them to manage knowledge better in everyday agile practice. Researchers can further investigate and build upon these findings through empirical studies.

Yanti Andriyani, Rashina Hoda, Robert Amor

Knowledge Integration

Frontmatter

Multi-view Unit Intact Space Learning

Multi-view learning is an active research topic in several fields. Recently, a model termed multi-view intact space learning was proposed and has attracted considerable attention. The model aims to find a latent intact representation of the data by integrating information from different views. However, it has two notable shortcomings: it needs to tune two regularization parameters, and its optimization algorithm is time-consuming. Based on a unit intact space assumption, we propose an improved model, termed multi-view unit intact space learning, which introduces no prior parameters. In addition, an efficient algorithm based on a proximal gradient scheme is designed to solve the model. Extensive experiments on four real-world datasets show the effectiveness of our method.

Kun-Yu Lin, Chang-Dong Wang, Yu-Qin Meng, Zhi-Lin Zhao

A Novel Blemish Detection Algorithm for Camera Quality Testing

In camera manufacturing, dust, fingerprints, and water spots may be present on the image sensor and lens. The resulting darker regions, called blemishes, cause a significant reduction in camera quality. The shapes of blemishes are diverse and irregular. The traditional method detects blemishes using image median filtering in a single direction, which leads to false alarms and missed detections for images with high noise levels. We therefore present a novel filtering method for blemish detection that utilizes four directional filters in the 0, 45, 90, and 135 degree directions. Compared to the conventional single-direction filter, the multidirectional filters take more spatial information into account and detect both weak and strong blemishes more accurately. Moreover, the proposed method uses a new adaptive threshold to automatically accommodate different image noise levels. Experimental results on two batches of production samples (600 images) show the effectiveness of the proposed method over the conventional method.

Kun Wang, Kwok-Wai Hung, Jianmin Jiang

Learning to Infer API Mappings from API Documents

To satisfy the business requirements of various platforms and devices, developers often need to migrate software from one platform to another. During this process, a key task is to figure out the API mappings between the API libraries of the source and target platforms. Since doing this manually is time-consuming and error-prone, several code-based approaches have been proposed. However, they often depend on the availability of parallel code bases and incur the time expense of static or dynamic code analysis. In this paper, we present a document-based approach to infer API mappings. We first learn the semantics of API names and descriptions in API documents with a word embedding model. Then we combine the word embeddings with a text similarity algorithm to compute semantic similarities between APIs of the source and target libraries. Finally, we infer API mappings from the ranking of API similarities. Our approach is evaluated on the API documents of JavaSE and .NET. The results outperform the baseline model at precision@k by 41.51% on average. Compared with code-based work, our approach avoids these issues and leverages easily acquired API documents to infer API mappings effectively.

Yangyang Lu, Ge Li, Zelong Zhao, Linfeng Wen, Zhi Jin
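The similarity-and-ranking step can be sketched as follows: represent each API name as the mean of its word embeddings and rank candidate target APIs by cosine similarity. The embeddings here are hypothetical placeholders; in the paper they are learned from the API documents, and the text similarity algorithm is more involved than plain cosine.

```python
import numpy as np

def phrase_vector(words, embeddings, dim):
    """Mean word embedding of an API name split into words."""
    vecs = [embeddings[w] for w in words if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def rank_api_candidates(source_words, candidates, embeddings, dim):
    """Rank target APIs by cosine similarity to the source API name."""
    src = phrase_vector(source_words, embeddings, dim)

    def cosine(a, b):
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        return float(a @ b / (na * nb)) if na > 0 and nb > 0 else 0.0

    scored = [(name, cosine(src, phrase_vector(words, embeddings, dim)))
              for name, words in candidates]
    return sorted(scored, key=lambda x: -x[1])
```

The top-k entries of the returned ranking are the inferred mapping candidates evaluated at precision@k.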

Super-Resolution for Images with Barrel Lens Distortions

Camera lens distortions are widely used in different applications to achieve specific optical effects, such as wide-angle capture. Moreover, images with lens distortion are often limited in resolution due to camera cost, limited bandwidth, etc. In this paper, we present a learning-based image super-resolution method for improving the resolution of images captured by cameras with barrel lens distortions. The key to recovering the resolution lost to lens distortions is to learn a sparse dictionary with a post-processing step. During the training stage, the training images are used to learn the sparse dictionary and projection matrices. During the testing stage, the observed low-resolution image is processed with the projection matrices in a two-step super-resolution reconstruction of the final high-resolution image. Experimental results show that the proposed method outperforms conventional learning-based super-resolution methods in terms of PSNR and SSIM values using the same set of training images.

Mei Su, Kwok-Wai Hung, Jianmin Jiang

Knowledge Retrieval

Frontmatter

Mining Schema Knowledge from Linked Data on the Web

Datasets on the Web of Data (WoD) are often published without a precise schema, which may discourage their reuse. Methods for schema acquisition from linked data have been proposed that mainly exploit regularities in property and/or value distributions in resources to discover potentially useful classes as homogeneous clusters. Yet the crucial task of interpreting and naming the discovered classes is left to the human analyst. We propose a more holistic approach to schema discovery that, besides clustering, assists the analyst by suggesting plausible names for clusters. In doing so, we (1) rely on concept analysis for class discovery from linked data, and (2) exploit known DBpedia types and shared properties to form candidate names. An evaluation of our approach on a dataset from the WoD showed that it performs well.

Razieh Mehri, Petko Valtchev

Inferring User Profiles in Online Social Networks Based on Convolutional Neural Network

We propose a novel method to infer the missing attributes (e.g., occupation, gender, and location) of online social network users, an important problem in social network analysis. Existing works generally utilize classification algorithms or label propagation methods to solve this problem. However, these works have to train a specific model for each kind of missing attribute, and achieve limited precision when inferring multi-value attributes. To address these challenges, we propose a convolutional neural network architecture that infers all of a user's missing attributes with one trained model. A novelty of our method is that the input matrix is built from features of the target user and his or her neighbors, including their explicit attributes and behaviors, which are available in online social networks. In the experiments, we used a real-world large-scale dataset with 220,000 users, and the results demonstrate the effectiveness of our method and the importance of social links in attribute inference. In particular, our work achieved a precision of 76.28% on the occupation inference task, improving upon the state of the art.

Xiaoxue Li, Yanan Cao, Yanmin Shang, Yanbing Liu, Jianlong Tan, Li Guo

Co-saliency Detection Based on Superpixel Clustering

Existing co-saliency detection methods achieve poor performance in computation speed and accuracy. We therefore propose a co-saliency detection method based on superpixel clustering. The proposed method consists of three parts: a multi-scale visual saliency map, a weak co-saliency map, and a fusion stage. The multi-scale visual saliency map is generated by a content-sensitive multi-scale superpixel pyramid. The weak co-saliency map is computed by clustering superpixels in a feature space built from RGB and CIELab color features as well as Gabor texture features, in order to represent global correlations. Finally, a strong co-saliency map is obtained by fusing the multi-scale visual saliency map and the weak co-saliency map based on three metrics (contrast, position, and repetition). Experimental results on public datasets show that the proposed method improves both the computation speed and the performance of co-saliency detection, producing a better and less time-consuming co-saliency map than other state-of-the-art co-saliency detection methods.

Guiqian Zhu, Yi Ji, Xianjin Jiang, Zenan Xu, Chunping Liu

ARMICA-Improved: A New Approach for Association Rule Mining

With the increasing amount of available data, researchers are proposing new approaches for extracting useful knowledge. Association Rule Mining (ARM) is one of the main approaches that has become popular in this field; it can extract frequent rules and patterns from a database. Many approaches have been proposed for mining frequent patterns; heuristic algorithms are among the most promising, and many ARM algorithms are based on them. In this paper, we improve our previous approach, ARMICA, by considering more parameters, such as the number of database scans, the number of generated rules, and the quality of the generated rules. We compare the proposed method with Apriori, ARMICA, and FP-growth, and the experimental results indicate that ARMICA-Improved is faster, produces fewer rules of higher quality, requires fewer database scans, is accurate, and, finally, is automatic and does not need predefined minimum support and confidence values.

Shahpar Yakhchi, Seyed Mohssen Ghafari, Christos Tjortjis, Mahdi Fazeli
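For context, the Apriori baseline that ARMICA-Improved is compared against can be sketched as classic level-wise frequent itemset mining. This is the textbook algorithm (with a naive support count), not the authors' method.

```python
def apriori(transactions, min_support):
    """Frequent itemsets by level-wise candidate generation (classic
    Apriori). Returns a dict mapping each frequent itemset to its
    relative support."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    items = {i for t in transactions for i in t}
    frequent = {}
    level = [frozenset([i]) for i in items
             if support(frozenset([i])) >= min_support]
    k = 1
    while level:
        for s in level:
            frequent[s] = support(s)
        k += 1
        # Join step: size-k candidates from frequent (k-1)-itemsets;
        # infrequent candidates are pruned by the support check below.
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        level = [c for c in candidates if support(c) >= min_support]
    return frequent
```

Each level requires a pass over the transactions, which is exactly the database-scan cost that heuristic approaches like ARMICA aim to reduce.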

Recommendation Algorithms and Systems

Frontmatter

Collaborative Filtering via Different Preference Structures

Recently, social network websites have started to provide third-party sign-in options via the OAuth 2.0 protocol. For example, users can log in to the Netflix website using their Facebook accounts. Through this service, the accounts of the same user are linked together, as is their information. This provides an opportunity to create more complete user profiles, leading to improved recommender systems. However, user opinions distributed over different platforms come in different preference structures, such as ratings, rankings, pairwise comparisons, and votes. As existing collaborative filtering techniques assume a homogeneous preference structure, learning from different preference structures simultaneously remains a challenging task. In this paper, we propose a fuzzy preference relation-based approach to enable collaborative filtering across different preference structures. Experimental results on public datasets demonstrate that our approach can effectively learn from different preference structures, and show strong resistance to the noises and biases introduced by cross-structure preference learning.

Shaowu Liu, Na Pang, Guandong Xu, Huan Liu
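The common currency that makes heterogeneous structures comparable is a fuzzy pairwise preference degree in [0, 1]. A minimal sketch of mapping a rating pair to such a degree (the exact transfer function is an illustrative choice, not the paper's):

```python
def fuzzy_preference(r_i, r_j, r_max=5.0):
    """Degree to which item i is preferred to item j, derived from
    ratings on a [0, r_max] scale. 0.5 means indifference, 1.0 means
    full preference for i. Rankings, votes, and explicit pairwise
    comparisons can be mapped into the same [0, 1] representation,
    which is what lets different preference structures be combined."""
    return 0.5 * (1.0 + (r_i - r_j) / r_max)
```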

A Multifaceted Model for Cross Domain Recommendation Systems

Recommendation systems (RS) play an important role in directing customers to their favorite items. Data sparsity, which usually leads to overfitting, is a major bottleneck for making precise recommendations. Several cross-domain RSs have been proposed in the past decade to reduce sparsity by transferring knowledge. However, existing works focus on either the nearest-neighbor model or the latent factor model for the cross-domain scenario. In this paper, we introduce a Multifaceted Cross-Domain Recommendation System (MCDRS) which incorporates two different types of collaborative filtering for cross-domain RSs. The first part is a latent factor model: to utilize as much knowledge as possible, we propose a unified factorization framework that combines CF and content-based filtering for cross-domain learning. The second part addresses the potential inconsistency between domains: we equip the neighbor model with a selective learning mechanism so that domain-independent items gain more weight in the transfer process. We conduct extensive experiments on two real-world datasets. The results demonstrate that our MCDRS model consistently outperforms several state-of-the-art models.

Jianxun Lian, Fuzheng Zhang, Xing Xie, Guangzhong Sun

Cross Domain Collaborative Filtering by Integrating User Latent Vectors of Auxiliary Domains

Cross-domain collaborative filtering (CDCF) alleviates the sparsity problem by transferring rating knowledge across multiple domains. However, how to transfer knowledge from auxiliary domains is nontrivial. In this paper, we propose a model-based CDCF algorithm that Integrates the User Latent Vectors of auxiliary domains (CDCFIULV) from the perspective of classification. For a user-item interaction in the target domain, we first use the trivial location information as the feature vector and the rating information as the label, thereby converting the recommendation problem into a classification problem. However, such a two-dimensional feature vector is not sufficient to discriminate between the rating classes, so we require additional features derived from the rating information of the auxiliary domains. We assume that the auxiliary domains contain dense rating data and share the same aligned users with the target domain. In this scenario, we employ a UV decomposition model to obtain the user latent vectors from the auxiliary domains and expand the trivial location feature vector in the target domain with the user latent vectors obtained from all auxiliary domains, effectively adding features to the classification problem. Finally, we train a classifier on this problem and predict the missing ratings for the recommender system. The hidden knowledge in the auxiliary domains is thus transferred to the target domain effectively via the user latent vectors. A major advantage of the CDCFIULV model over previous collective matrix factorization or tensor factorization models is that our model can adaptively select significant features during training, whereas those models need to adjust the weights of the auxiliary domains according to their similarities to the target domain. We conduct extensive experiments to show that the proposed algorithm is more effective than many state-of-the-art single-domain and cross-domain CF methods.

Xu Yu, Feng Jiang, Miao Yu, Ying Guo

Collaborative Filtering Based on Pairwise User-Item Blocking Structure (PBCF): A General Framework and Its Implementation

To our knowledge, all existing collaborative filtering techniques need to find neighboring relationships between users or items by using some kind of similarity measurement in the feature space. However, a hypothesis hidden behind most existing work is that the similarity relationship between users remains static over the whole item set, which is not true in reality. Users who share similar opinions on some items may have totally different opinions on other items. Users may form clusters in terms of their opinions on one set of items, but these clusters may collapse, and a new cluster structure may form, in terms of their opinions on a new item set. Analogously, clusters of items formed based on their popularity among one group of users may disintegrate when encountering a new group of users. In a nutshell, the user cluster structure varies across item sets, and vice versa, the item cluster structure varies across user sets. To deal with this collapse problem, we strive to find block structures embedded in the rating matrix, where a block structure characterizes the interaction between users and items. This paper proposes a general framework for collaborative filtering based on pairwise user-item blocking structure, together with its implementation. Finally, existing collaborative filtering algorithms are used to learn the latent factors at the global and block levels and to predict the unknown ratings in the rating matrix. Experimental evidence shows that recommendation performance can be improved by utilizing these block structures.

Fengjuan Zhang, Jianjun Wu, Jianzhao Qin, Xing Liu, Yongqiang Wang
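
A minimal sketch of finding such a pairwise user-item block structure by alternating co-clustering; the data, block counts, and restart scheme are our own illustration, not the paper's framework:

```python
import numpy as np

rng = np.random.default_rng(0)

def coclust(R, n_rb=2, n_cb=2, iters=10, restarts=8):
    """Toy alternating co-clustering: assign users and items to blocks so
    that each (user-block, item-block) cell is well summarised by its mean."""
    m, n = R.shape
    best = None
    for _ in range(restarts):
        rows = rng.integers(0, n_rb, m)
        cols = rng.integers(0, n_cb, n)
        for _ in range(iters):
            # Block means for the current assignment.
            B = np.zeros((n_rb, n_cb))
            for a in range(n_rb):
                for b in range(n_cb):
                    cell = R[np.ix_(rows == a, cols == b)]
                    B[a, b] = cell.mean() if cell.size else R.mean()
            # Reassign each user / item to its best-fitting block.
            rows = np.array([np.argmin([((R[u] - B[a, cols]) ** 2).sum()
                                        for a in range(n_rb)]) for u in range(m)])
            cols = np.array([np.argmin([((R[:, i] - B[rows, b]) ** 2).sum()
                                        for b in range(n_cb)]) for i in range(n)])
        err = ((R - B[rows][:, cols]) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, rows, cols, B)
    return best[1], best[2], best[3]

# Ratings with an obvious 2x2 block structure.
R = np.array([[5, 5, 1, 1],
              [5, 4, 1, 2],
              [1, 1, 5, 5],
              [2, 1, 4, 5]], dtype=float)
rows, cols, B = coclust(R)
```

Each recovered block can then be handed to an ordinary latent factor model, as the framework describes.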

Beyond the Aggregation of Its Members—A Novel Group Recommender System from the Perspective of Preference Distribution

This paper focuses on recommending items to groups of users rather than to individual users. To model a group profile, most existing studies aggregate the preferences of members into a single value, and thus cannot reflect the actual profile of a group with conflicting characteristics. Therefore, we propose a novel group recommender system mechanism. It views the group profile as a preference distribution and models the item recommendation process as a multi-criteria decision-making process in order to obtain better recommendation results. Finally, experiments are conducted to verify the proposed approach.

Zhiwei Guo, Chaowei Tang, Wenjia Niu, Yunqing Fu, Haiyang Xia, Hui Tang

Exploring Latent Bundles from Social Behaviors for Personalized Ranking

Users in social networks usually have different interpersonal relationships and various social roles. It is common for a user to synthesize all of his/her roles before taking any action. Understanding how products relate to each other is crucial in Recommender Systems (RSs), and predicting personalized sequential behaviors, which are influenced by users' various social roles and product bundle relationships, is one of the key tasks for the success of RSs. In this paper, a novel method combining social roles and sequential patterns is proposed to explore the latent bundle dimensions from the perspective of a user's sequential pattern as well as his/her social roles. The extracted vector represents the most distinctive features of interpersonal relationships for users. The proposed method explores the latent bundle relationship by learning personal dynamics influenced by the user's social roles. The method is evaluated on Amazon datasets, and the results demonstrate that our framework outperforms alternative baselines in providing top-k recommendations.

Wenli Yu, Li Li, Fan Li, Jinjing Zhang, Fei Hu

Trust-Aware Recommendation in Social Networks

With the popularity of online social networks, social network information is becoming increasingly important for improving the recommendation effectiveness of existing recommender systems. In this paper, we propose an improved trust-aware recommendation approach, called TRA. TRA constructs a new social trust matrix based on users' trust relationships derived from online social networks to alleviate the problem of data sparsity, and meanwhile naturally fuses users' preferences and their trusted friends' favors together by means of probability matrix factorization. The experimental results show that TRA performs much better than the state-of-the-art recommendation approaches.

Yingyuan Xiao, Zhongjing Bu, Ching-Hsien Hsu, Wenxin Zhu, Yan Shen

Connecting Factorization and Distance Metric Learning for Social Recommendations

Social relations can help to relieve the cold-start and data-sparsity dilemmas of traditional recommender systems. Most existing social recommendation methods are based on matrix factorization, which has been proven effective. In this paper, we introduce a novel social recommender based on the idea that distance reflects likability. By connecting a factorization model with distance metric learning, it aims to make users in recommender systems spatially close to their friends and to the items they like, and far away from the items they dislike. In our method, the positions of users and items are decided jointly by the ratings and the social relations, which helps find appropriate locations for users who have few ratings. Finally, the learnt metric and locations are used to generate understandable and reliable recommendations. Experiments conducted on a real-world dataset show that, compared with methods based only on factorization, our method has advantages in both interpretability and accuracy.

Junliang Yu, Min Gao, Yuqi Song, Zehua Zhao, Wenge Rong, Qingyu Xiong
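
The core idea - fit positions so that a rating is roughly r_max minus the user-item distance, while pulling friends together - can be sketched with a toy gradient descent; all data and hyperparameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ratings (user, item, value in [1, 5]) and one friend pair.
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (2, 1, 5.0)]
friends = [(0, 1)]
n_users, n_items, k, r_max = 3, 2, 2, 5.0

P = rng.normal(scale=0.1, size=(n_users, k))   # user positions
Q = rng.normal(scale=0.1, size=(n_items, k))   # item positions

def loss(beta=0.5):
    l = sum((r - (r_max - np.linalg.norm(P[u] - Q[i]))) ** 2
            for u, i, r in ratings)
    return l + beta * sum(np.linalg.norm(P[u] - P[v]) ** 2 for u, v in friends)

lr, beta = 0.05, 0.5
start = loss(beta)
for _ in range(200):
    for u, i, r in ratings:
        d = np.linalg.norm(P[u] - Q[i]) + 1e-9
        err = (r_max - d) - r                    # predicted minus observed
        grad = -2 * err * (P[u] - Q[i]) / d      # gradient w.r.t. P[u]
        P[u] -= lr * grad
        Q[i] += lr * grad                        # gradient w.r.t. Q[i] is -grad
    for u, v in friends:                         # social term pulls friends together
        P[u] -= lr * 2 * beta * (P[u] - P[v])
        P[v] -= lr * 2 * beta * (P[v] - P[u])
```

After training, a short distance from a user to an unseen item reads directly as a high predicted rating, which is what makes the learnt layout interpretable.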

Knowledge Engineering

Frontmatter

Relevant Fact Selection for QA via Sequence Labeling

Question answering (QA) is a very important, but not yet completely resolved, problem in artificial intelligence. Solving the QA problem consists of two major steps: relevant fact selection and answering the question. Existing methods usually combine the two steps to solve the problem; a major technique is to add a memory component to infer answers from the chaining facts. It is not very clear how irrelevant facts affect the effectiveness of these methods. In this paper, we propose to separate the two steps and consider only the problem of relevant fact selection. We use a graphical probabilistic model, the Conditional Random Field (CRF), to model the interdependent relationships among the chaining facts in order to select the relevant ones. In our experiments on a benchmark dataset, we are able to correctly select all relevant facts in 13 out of 19 tasks (F-scores of the remaining 6 tasks range from 0.8 to 0.97). We also show that using our selector to pre-select relevant facts can substantially improve the accuracy of existing QA systems, e.g. MemN2N (from 88% to 94%) and LSTM (from 66% to 91%), in the 13 tasks with complete information.

Yuzhi Liang, Jia Zhu, Yupeng Li, Min Yang, Siu Ming Yiu
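
The sequence-labeling view can be illustrated with a toy linear-chain Viterbi decoder over facts labeled relevant/irrelevant; the scores below are made up, whereas a real CRF would learn them from features:

```python
import numpy as np

def viterbi(emission, transition):
    """Decode the best label sequence for a chain of facts.
    emission[t, y]: score of giving fact t label y (0 = irrelevant, 1 = relevant);
    transition[y, y2]: score of label y followed by label y2."""
    T, L = emission.shape
    score = np.full((T, L), -np.inf)
    back = np.zeros((T, L), dtype=int)
    score[0] = emission[0]
    for t in range(1, T):
        for y in range(L):
            cand = score[t - 1] + transition[:, y] + emission[t, y]
            back[t, y] = int(np.argmax(cand))
            score[t, y] = cand[back[t, y]]
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy scores: facts 2 and 4 look relevant; staying in the same label is favoured.
emission = np.array([[2.0, 0.0], [0.0, 3.0], [1.0, 0.5], [0.0, 2.0]])
transition = np.array([[0.5, 0.0], [0.0, 0.5]])
labels = viterbi(emission, transition)
```

Note that the third fact's weak individual evidence is overridden by its relevant neighbours through the transition scores - exactly the interdependence among chaining facts that a chain model captures.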

Community Outlier Based Fraudster Detection

Healthcare fraud is causing billions of dollars in losses for public healthcare funds. In existing healthcare fraud cases, the convicted fraudsters are mostly physicians - the healthcare professionals who submit fraudulent bills. Fraudster detection can help us find suspicious physicians and combat healthcare fraud in advance. For the problem of fraudster detection, rule-based fraud detection methods are not applicable because fraudsters will try everything to avoid detection rules. Meanwhile, outlier-based fraud detection approaches primarily aim to find global outliers and cannot find local outliers accurately. Therefore, we propose the Community Outlier Based Fraudster Detection Approach (COBFDA) in this paper. The proposed approach divides the physicians into different communities and looks for community outliers in each community. Extensive experimental results show that COBFDA outperforms the comparison approaches in terms of F-measure by over 20%.

Chenfei Sun, Qingzhong Li, Hui Li, Shidong Zhang, Yongqing Zheng

An Efficient Three-Dimensional Reconstruction Approach for Pose-Invariant Face Recognition Based on a Single View

A three-dimensional (3D) reconstruction approach based on a single view is proposed to solve the problem of the lack of training samples in multi-pose face recognition. First, a planar template is defined based on the geometric information of the segmented faces. Second, 3D faces are resampled according to the geometric relationship between the planar template and the original 3D faces, and a normalized 3D face database is obtained. Third, a 3D sparse morphable model is established based on the normalized 3D face database, and a new 3D face can be reconstructed from a single face image. Lastly, virtual multi-pose face images can be obtained by texture mapping, rotation, and projection of the established 3D face, and the training samples are enriched. Experimental results obtained using the BJUT-3D and CAS-PEAL-R1 face databases show that the recognition rate of the proposed method is 91%, which is better than that of other methods for pose-invariant face recognition based on a single view. This is primarily because the training samples are enriched using the proposed 3D sparse morphable model based on a new dense correspondence method.

Minghua Zhao, Ruiyang Mo, Yonggang Zhao, Zhenghao Shi, Feifei Zhang

MIAC: A Mobility Intention Auto-Completion Model for Location Prediction

Location prediction is essential to many proactive applications, and many research works show that human mobility is highly predictable. However, existing works report limited improvement from generalized spatio-temporal features and unsatisfactory accuracy on complex human mobility. To address these challenges, a Mobility Intention Auto-Completion (MIAC) model is proposed. We extract mobility patterns to capture common spatio-temporal features of all users, and use mobility intentions to characterize these mobility patterns. A new prediction algorithm based on auto-completion is then proposed. The experimental results on real-world datasets demonstrate that the proposed MIAC model can properly capture the regularity in human mobility by simultaneously considering spatial and temporal features. The comparison results also indicate that the MIAC model significantly outperforms state-of-the-art location prediction methods, and can also predict long-range locations.

Feng Yi, Zhi Li, Hongtao Wang, Weimin Zheng, Limin Sun
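
The auto-completion step can be sketched as prefix matching against mined mobility patterns; the locations and support counts below are hypothetical:

```python
from collections import Counter

# Hypothetical mined mobility patterns (sequences of places) with support counts.
patterns = Counter({
    ("home", "cafe", "office"): 30,
    ("home", "gym", "office"): 12,
    ("office", "restaurant", "home"): 25,
})

def complete(prefix):
    """Auto-complete the current trajectory prefix with the most frequent
    matching pattern, returning the predicted remaining locations."""
    best, best_support = None, 0
    for pat, support in patterns.items():
        if pat[:len(prefix)] == tuple(prefix) and support > best_support:
            best, best_support = pat, support
    return list(best[len(prefix):]) if best else []
```

Because the whole tail of the matched pattern is returned, this style of completion naturally yields long-range predictions rather than only the next location.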

Automatically Difficulty Grading Method Based on Knowledge Tree

The aim of the current study is to propose a model that can automatically grade the difficulty of questions from an "instruction system" question bank. The system mainly uses 4 attributes with 26 features, selected via principal component analysis, as the input of the Automatic Difficulty Grading Model (ADGM). A knowledge tree model and a machine learning algorithm serve as the main components of the classification module. The experimental dataset, the "instruction system" question bank, comes from our "Principles of Computer Organization" online education system; the difficulty classification accuracy reaches 77.43%, which is much higher than the 50% accuracy of random guessing.

Jin Zhang, Chengcheng Liu, Haoxiang Yang, Fan Feng, Xiaoli Gong

A Weighted Non-monotonic Averaging Image Reduction Algorithm

Image reduction is commonly used as a data pre-processing method in many image processing fields, and an efficient image reduction operator can underpin many practical applications. Traditional monotonic averaging image reduction operators may lose some detailed features during reduction, yet in certain tasks those small features are highly significant. Therefore, some scholars have proposed non-monotonic averaging image reduction algorithms; recent works focus on integrating the spatial structure information of pixel clusters into the representative-pixel selection process. This has practical significance, but such methods are only suitable for images with specific backgrounds. To fill this gap, we propose a novel sigmoid-function-based weighted image reduction algorithm that can be applied to images with different background colors. Experiments show that the proposed method achieves better image reduction on images with different background colors.

Jiaxin Han, Haiyang Xia
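
One way to realize sigmoid-weighted non-monotonic averaging is to weight each pixel by a sigmoid of its deviation from the block mean, so that detail pixels influence the reduced value more than background pixels; the parameters and data below are illustrative, and the paper's weighting may differ:

```python
import numpy as np

def sigmoid(x, alpha):
    return 1.0 / (1.0 + np.exp(-alpha * x))

def reduce_image(img, block=2, alpha=0.02):
    """Shrink a grayscale image by a factor of `block`, replacing each block
    with a weighted average in which pixels far from the block mean (detail
    pixels) receive higher sigmoid weights than background pixels."""
    h, w = img.shape
    out = np.zeros((h // block, w // block))
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            cell = img[r:r + block, c:c + block].astype(float)
            dev = np.abs(cell - cell.mean())     # deviation from block mean
            wgt = sigmoid(dev, alpha)            # detail pixels get weight > 0.5
            out[r // block, c // block] = (wgt * cell).sum() / wgt.sum()
    return out

# A white background with one dark detail pixel in the top-left block.
img = np.full((4, 4), 200.0)
img[0, 0] = 20.0
small = reduce_image(img)
```

On uniform blocks the operator degenerates to the plain mean, while the block containing the dark detail pixel is pulled below the plain mean, preserving a trace of the small feature.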

Knowledge Representation and Reasoning

Frontmatter

Learning Deep and Shallow Features for Human Activity Recognition

selfBACK is an mHealth decision support system used by patients for the self-management of Lower Back Pain. It uses Human Activity Recognition from wearable sensors to monitor user activity in order to measure their adherence to prescribed physical activity plans. Different feature representation approaches have been proposed for Human Activity Recognition, including shallow, such as with hand-crafted time domain features and frequency transformation features; or, more recently, deep with Convolutional Neural Net approaches. The different approaches have produced mixed results in previous work and a clear winner has not been identified. This is especially the case for wrist mounted accelerometer sensors which are more susceptible to random noise compared to data from sensors mounted at other body locations e.g. thigh, waist or lower back. In this paper, we compare 7 different feature representation approaches on accelerometer data collected from both the wrist and the thigh. In particular, we evaluate a Convolutional Neural Net hybrid approach that has been shown to be effective on image retrieval but not previously applied to Human Activity Recognition. Results show the hybrid approach is effective, producing the best results compared to both hand-crafted and frequency domain feature representations by a margin of over $$1.4\%$$ on the wrist.

Sadiq Sani, Stewart Massie, Nirmalie Wiratunga, Kay Cooper

Transfer Learning with Manifold Regularized Convolutional Neural Network

Deep learning has recently been proposed to learn robust representations for various tasks and has delivered state-of-the-art performance in the past few years. Most researchers attribute such success to the substantially increased depth of deep learning models. However, training a deep model is time-consuming and needs a huge amount of data. Though techniques like fine-tuning can ease those pains, generalization performance drops significantly in transfer learning settings with little or no target-domain data. Since the representations in higher layers must eventually transition from general to specific, generalization performance degrades without integrating sufficient label information of the target domain. To address this problem, we propose a transfer learning framework called manifold regularized convolutional neural networks (MRCNN). Specifically, MRCNN fine-tunes a very deep convolutional neural network on the source domain, while simultaneously trying to preserve the manifold structure of the target domain. Extensive experiments demonstrate the effectiveness of MRCNN compared to several state-of-the-art baselines.

Fuzhen Zhuang, Lang Huang, Jia He, Jixin Ma, Qing He
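
A standard way to express such manifold preservation is the graph-Laplacian regularizer; the small numeric check below verifies its usual identity on made-up features and an assumed k-NN affinity graph (this is the generic regularizer, not necessarily MRCNN's exact formulation):

```python
import numpy as np

# Hypothetical last-hidden-layer features of 4 target-domain samples.
F = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.0, 1.0],
              [0.1, 0.9]])
# Affinity graph over the target samples (e.g. from k-nearest neighbours).
W = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Manifold regularizer: (1/2) * sum_ij W_ij * ||f_i - f_j||^2 ...
pairwise = 0.5 * sum(W[i, j] * np.sum((F[i] - F[j]) ** 2)
                     for i in range(4) for j in range(4))
# ... equals trace(F' L F) with the graph Laplacian L = D - W.
L = np.diag(W.sum(axis=1)) - W
laplacian_form = np.trace(F.T @ L @ F)
```

In MRCNN-style training this term would be weighted and added to the source-domain classification loss, penalizing target samples that are neighbours on the manifold but far apart in feature space.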

Learning Path Generation Method Based on Migration Between Concepts

Learning strategies often have a direct impact on learning effects. Traditionally, learning guidance is provided by teachers or experts, but as knowledge renewal accelerates, this approach can no longer meet learners' needs because of the limits of individual time and energy. To solve this problem, we propose a learning path generation method based on migration between concepts, in which semantic similarity is applied in a novel way to measure the relevance of concepts. Moreover, the concept of jump steps in Wikipedia is introduced to measure the difficulty of different learning orders. Based on the hyperlinks in Wikipedia, we build a graph model over the target concepts and generate multi-target learning paths using a minimum spanning tree algorithm. The test datasets include books on Computer Science from the Wiley database and test sets provided by volunteers. Evaluated by expert scoring and path matching, the experimental results show that more than 59% of the 860 single-target learning paths generated by our algorithm are highly recognized by teachers and students, and more than 60% of the 500 multi-target learning paths match the standard path with a score of 0.7 or above.

Dan Liu, Libo Zhang, Tiejian Luo, Yanjun Wu
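
The multi-target step can be sketched with Kruskal's minimum spanning tree algorithm over a concept graph; the concepts and jump-step weights below are invented for illustration:

```python
def mst(nodes, edges):
    """Kruskal's algorithm. Each edge is (weight, u, v), where the weight
    models the difficulty of migrating between two concepts
    (e.g. a jump-step count derived from Wikipedia hyperlinks)."""
    parent = {n: n for n in nodes}

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):     # cheapest migrations first
        ru, rv = find(u), find(v)
        if ru != rv:                  # keep the edge if it joins two components
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

# Hypothetical concept graph.
concepts = ["variables", "loops", "functions", "recursion"]
edges = [(1, "variables", "loops"), (1, "loops", "functions"),
         (2, "functions", "recursion"), (4, "variables", "recursion"),
         (3, "variables", "functions")]
path_tree = mst(concepts, edges)
```

The resulting tree connects all target concepts with the least total migration difficulty, from which per-learner paths can be read off.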

Representation Learning of Multiword Expressions with Compositionality Constraint

Representations of multiword expressions (MWEs) are currently learned either from context external to the MWE, based on the distributional hypothesis, or from the representations of the component words via some composition function, based on the compositional hypothesis. However, a distributional method treats an MWE as a non-divisible unit without considering its component words, and also suffers from the data sparseness problem, especially for MWEs. On the other hand, a compositional method can fail if an MWE is non-compositional. In this paper, we propose a hybrid method to learn the representation of MWEs from their external context and component words with a compositionality constraint. This method can make use of both the external context and the component words. Instead of simply combining the two kinds of information, we use a compositionality measure from lexical semantics as the constraint. The main idea is to learn MWE representations as a weighted linear combination of both external context and component words, where the weight is based on the compositionality of the MWE. Evaluation on three datasets shows that the performance of this hybrid method is more robust and improves the representations.

Minglei Li, Qin Lu, Yunfei Long
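
The weighted combination can be sketched as follows, assuming simple additive composition of the component words (the paper's composition function and learning procedure are richer than this):

```python
import numpy as np

def mwe_vector(context_vec, word_vecs, compositionality):
    """Combine the MWE's distributional (external-context) vector with a
    composed vector from its component words; the compositionality score
    in [0, 1] decides how much the composed vector is trusted."""
    composed = np.mean(word_vecs, axis=0)   # additive composition
    return (1 - compositionality) * context_vec + compositionality * composed
```

A non-compositional MWE such as "kick the bucket" would get a compositionality score near 0 and thus rely almost entirely on its context vector, while a compositional one like "olive oil" would lean on its component words.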

Linear Algebraic Characterization of Logic Programs

This paper introduces a novel approach for computing logic programming semantics based on multilinear algebra. First, a propositional Herbrand base is represented in a vector space and if-then rules in a program are encoded in a matrix. Then we provide methods of computing the least model of a Horn logic program, minimal models of a disjunctive logic program, and stable models of a normal logic program by algebraic manipulation of higher-order tensors. The result of this paper exploits a new connection between linear algebraic computation and symbolic computation, which has potential to realize logical inference in huge scale of knowledge bases.

Chiaki Sakama, Katsumi Inoue, Taisuke Sato
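
For the definite-program case, the matrix encoding can be sketched as follows: a rule a ← b1,…,bm becomes a row with 1/m at each body atom, and the least model is the fixpoint of multiply-then-threshold. This is a simplified sketch of one encoding; the paper handles disjunctive and normal programs with higher-order tensors:

```python
import numpy as np

# Atoms: index 0 is the constant TRUE; then p, q, r.
atoms = ["TRUE", "p", "q", "r"]
# Program:  p <- TRUE.   q <- p.   r <- p, q.
# Rule "a <- b1,...,bm" becomes row a with 1/m at each body atom.
M = np.zeros((4, 4))
M[0, 0] = 1.0                 # TRUE <- TRUE
M[1, 0] = 1.0                 # p <- TRUE  (a fact)
M[2, 1] = 1.0                 # q <- p
M[3, 1] = M[3, 2] = 0.5       # r <- p, q

v = np.array([1.0, 0.0, 0.0, 0.0])       # only TRUE holds initially
while True:
    # An atom fires iff its row sums to >= 1, i.e. its whole body is true.
    nxt = (M @ v >= 1.0 - 1e-9).astype(float)
    if np.array_equal(nxt, v):
        break
    v = nxt
least_model = [a for a, t in zip(atoms, v) if t and a != "TRUE"]
```

Each iteration of the loop is one application of the immediate-consequence operator, now expressed as a single matrix-vector product, which is what opens the door to large-scale algebraic inference.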

Representation Learning with Entity Topics for Knowledge Graphs

Knowledge representation learning, which represents triples as semantic embeddings, has achieved tremendous success in recent years. Recent work aims at integrating the information of triples with texts, which has shown great advantages in alleviating the data sparsity problem. However, most of these methods are based on word-level information such as co-occurrence in texts, while ignoring the latent semantics of entities. In this paper, we propose an entity topic based representation learning (ETRL) method, which enhances the triple representations with the entity topics learned by a topic model. We evaluate our proposed method on the knowledge graph completion task. The experimental results show that our method outperforms most state-of-the-art methods. Specifically, we achieve a maximum improvement of 7.9% in terms of hits@10.

Xin Ouyang, Yan Yang, Liang He, Qin Chen, Jiacheng Zhang

Robust Mapping Learning for Multi-view Multi-label Classification with Missing Labels

The multi-label classification problem has generated significant interest in recent years. Typical scenarios assume each instance can be assigned to a set of labels. Most previous works regard the original labels as authentic label assignments, ignoring the missing labels of realistic applications. Meanwhile, few studies handle data coming from multiple sources (multiple views) to enhance label correlations. In this paper, we propose a new robust method for the multi-label classification problem. The proposed method incorporates multiple views into a mixed feature matrix, and augments the initial label matrix with a label correlation matrix to estimate authentic label assignments. In addition, a low-rank structure and a manifold regularization are used to further exploit global label correlations and local smoothness. An alternating algorithm is designed to solve the optimization problem. Experiments on three authoritative datasets demonstrate the effectiveness and robustness of our method.

Weijieying Ren, Lei Zhang, Bo Jiang, Zhefeng Wang, Guangming Guo, Guiquan Liu

Fast Subsumption Between Rooted Labeled Trees

This paper presents two data structures designed to efficiently query a set of rooted labeled trees (forest) defined in a language based on a relational vocabulary $$\varSigma $$ and provided with a set-theoretic semantics and a subsumption relation matching the existential conjunctive fragment of the description logic $$\mathcal {ALC}$$. Given a tree query with q nodes and a forest with n nodes, after showing the equivalence between subsumption and homomorphism, an $$O(q \cdot n)$$ algorithm is proposed to compute all homomorphisms/subsumptions from the query to the forest. Then, two search data structures are presented for faster homomorphism/subsumption retrieval. The first one provides a query time of O(q) for a structure size of $$O(2^n)$$; and the second one provides a trade-off between the query time of $$O(k^2 \cdot q)$$ and the structure size of $$O(k^2\cdot |\varSigma | \cdot 2^{\lceil n/k \rceil })$$, for a fixed integer k.

Olivier Carloni
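
The subsumption-as-homomorphism check can be sketched with a naive recursive test; this illustrates the underlying definition only, not the paper's $$O(q \cdot n)$$ algorithm or its search structures:

```python
def can_map(qn, fn):
    """True if the query subtree rooted at qn maps into the forest subtree
    rooted at fn: labels match and every query child maps to some child."""
    if qn["label"] != fn["label"]:
        return False
    return all(any(can_map(qc, fc) for fc in fn["children"])
               for qc in qn["children"])

def subsumers(query, forest):
    """Ids of all forest nodes (searched recursively) the query maps onto."""
    found, stack = [], list(forest)
    while stack:
        fn = stack.pop()
        if can_map(query, fn):
            found.append(fn["id"])
        stack.extend(fn["children"])
    return sorted(found)

def node(i, label, *children):
    return {"id": i, "label": label, "children": list(children)}

# Forest of two trees; the query asks for an "A" node with a "B" child.
forest = [node(1, "A", node(2, "B"), node(3, "C")),
          node(4, "A", node(5, "C"))]
query = node(100, "A", node(101, "B"))
matches = subsumers(query, forest)
```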

Backmatter
