
About this book

This book constitutes the refereed conference proceedings of the 29th International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2016, held in Morioka, Japan, on August 2-4, 2016.

The 80 revised full papers presented were carefully reviewed and selected from 168 submissions. They are organized in the following topical sections: data science; knowledge-based systems; natural language processing and sentiment analysis; semantic web and social networks; computer vision; medical diagnosis systems and bio-informatics; applied neural networks; innovations in intelligent systems and applications; decision support systems; adaptive control; soft computing and multi-agent systems; evolutionary algorithms and heuristic search; and system integration for real-life applications.

Table of contents

Frontmatter

Data Science

Frontmatter

Intelligent Systems in Modeling Phase of Information Mining Development Process

Information Mining Engineering (IME) comprises the processes, methodologies, tasks and techniques used to organize, control and manage the task of finding knowledge patterns in information bases. A relevant task is the selection of the data mining algorithms to use, which is left to the expertise of the information mining engineer and is usually carried out in an unstructured way. In this paper we propose an Information Mining Project Development Process Model (D-MoProPEI), which provides an integrated view of the selection of Information Mining Processes Based on Intelligent Systems (IMPbIS) within the Modeling Phase of the proposed process model through a systematic deriving methodology.

Sebastian Martins, Patricia Pesado, Ramón García-Martínez

Performance Evaluation of Knowledge Extraction Methods

This paper reports the precision, recall and F-measure of three knowledge extraction methods under the Open Information Extraction paradigm: ReVerb, OLLIE and ClausIE. To obtain these three measures, a subset of 55 newswires was used, taken from the Reuters-21578 text categorization test collection. A handmade relation extraction was produced for each of these newswires.

Juan M. Rodríguez, Hernán D. Merlino, Patricia Pesado, Ramón García-Martínez
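
The three measures can be computed from the overlap between machine-extracted and hand-made relation triples. A minimal sketch (the example triples below are invented for illustration, not taken from the Reuters corpus):

```python
def prf(extracted, gold):
    """Precision, recall and F-measure for extracted relation triples
    against a hand-made gold standard (both sets of (arg1, rel, arg2))."""
    tp = len(extracted & gold)                       # true positives
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

# Hypothetical triples, for illustration only
gold = {("OPEC", "cut", "output"), ("Reuters", "reported", "earnings")}
extracted = {("OPEC", "cut", "output"), ("OPEC", "raised", "prices")}
print(prf(extracted, gold))  # → (0.5, 0.5, 0.5)
```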

Various Classifiers to Investigate the Relationship Between CSR Activities and Corporate Value

The relationship between corporate social responsibility (CSR) and financial performance is complex and nuanced. Studies have reported positive, negative, and neutral impacts of CSR on financial performance; this inconsistency is due to differences in methodologies, approaches, and selection of variables. Rather than focusing on specific variables, the present study aims to classify as many CSR variables as possible according to whether they contribute to shaping corporate value. Since many previous studies use only a market-based approach, we calculate corporate value using the Ohlson model, which is based on income. We chose several common classifiers appropriate for the nature of our data. After evaluating the performance of each classifier, we found that the Decision Tree is the best classifier for analyzing the relationship between CSR activities and corporate value. Based on the tree, companies with high or medium corporate values seek to enhance their CSR activities or to empower secondary stakeholders (e.g., communities, societies), as indicated by cooperation with NPOs/NGOs. In contrast, companies with low corporate values still focus their CSR activities on primary stakeholders (e.g., customers, employees).

Ratna Hidayati, Katsutoshi Kanamori, Ling Feng, Hayato Ohwada

Matching Rule Discovery Using Classification for Product-Service Design

Product-service design plays an important role in offering an optimal mix of product and accompanying service for the best customer experience and satisfaction. A number of design methods in the literature focus on proposing the best combination of product and service package with respect to customer requirements. However, the relationship between product and service elements from the perspective of customer demographics is less emphasized. In this study, we propose a methodology to discover the matching relationship between product and service from the perspective of customer demographics. We detail how a survey can be designed and conducted using openly available product and service information. A classification algorithm, C4.5, is applied to discover possible product-service relationships. To showcase our approach, a case study of mobile phone choices and telecommunication services is presented. We also discuss our results with some indications for future work.

A. F. Zakaria, S. C. J. Lim
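
C4.5 selects split attributes by gain ratio, i.e. information gain normalised by split information. A minimal sketch of that criterion on categorical survey data (the toy rows and labels are invented for illustration):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, attr):
    """C4.5 split criterion: information gain divided by split information."""
    n = len(rows)
    parts = {}
    for row, lab in zip(rows, labels):
        parts.setdefault(row[attr], []).append(lab)
    gain = entropy(labels) - sum(len(p) / n * entropy(p) for p in parts.values())
    split_info = entropy([row[attr] for row in rows])
    return gain / split_info if split_info else 0.0

# Toy survey: (age group, budget) -> chosen service bundle
rows = [("young", "low"), ("young", "high"), ("senior", "low"), ("senior", "high")]
labels = ["basic", "premium", "basic", "premium"]
print(gain_ratio(rows, labels, 1) > gain_ratio(rows, labels, 0))  # → True
```

Here budget (attribute 1) perfectly predicts the chosen bundle, so it gets the higher gain ratio and would be picked as the split attribute.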

Rare Event-Prediction with a Hybrid Algorithm Under Power-Law Assumption

We present an algorithm for predicting both common and rare events. Statistics show that occurrences of rare events are usually associated with common events; therefore, we argue that predicting common events correctly is an important step toward correctly predicting rare events. The new algorithm assumes that the frequencies of events exhibit a power-law distribution. The algorithm consists of components for detecting rare and common event types while minimizing computational overhead. For experiments, we attempt to predict various fault types that can occur in distributed systems. A simulation study driven by system failure data collected at the Pacific Northwest National Laboratory (PNNL) shows that fault mitigation based on the new prediction mechanism provides 15 % better system availability than existing prediction methods. Furthermore, it incurs only 10 % of the possible system loss caused by rare faults in the simulation data.

Mina Jung, Jae C. Oh
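
Under the power-law assumption, event-type frequencies fall off roughly as freq ∝ rank^(−α). A minimal sketch that estimates α by a log-log least-squares fit (the frequencies below are synthetic Zipf-like values, not the PNNL failure data):

```python
import math

def power_law_exponent(frequencies):
    """Fit log f = log c - alpha * log rank by least squares; return alpha."""
    freqs = sorted(frequencies, reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope

# Synthetic frequencies with exponent 1: a few common event types and a long
# tail of rare ones, the regime the paper's hybrid algorithm targets
freqs = [1000 / r for r in range(1, 51)]
print(round(power_law_exponent(freqs), 3))  # → 1.0
```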

“Anti-Bayesian” Flat and Hierarchical Clustering Using Symmetric Quantiloids

A myriad of works has been published on achieving data clustering based on the Bayesian paradigm, where the clustering sometimes resorts to Naïve-Bayes decisions. Within the domain of clustering, the Bayesian principle corresponds to assigning the unlabelled samples to the cluster whose mean (or centroid) is the closest. Recently, Oommen and his co-authors have proposed a novel, counter-intuitive and pioneering PR scheme that is radically opposed to the Bayesian principle. The rationale for this paradigm, referred to as the “Anti-Bayesian” (AB) paradigm, involves classification based on the non-central quantiles of the distributions. The first-reported work to achieve clustering using the AB paradigm was [1], where we proposed a flat clustering method that assigned unlabelled points to clusters based on the AB paradigm, and where the distances to the respective learned clusters were based on their quantiles rather than the clusters’ centroids, for uni-dimensional and two-dimensional data. This paper extends the results of [1] in many directions. Firstly, we generalize our previous AB clustering [1], initially proposed for handling uni-dimensional and two-dimensional spaces, to arbitrary d-dimensional spaces using their so-called “quantiloids”. Secondly, we extend the AB paradigm to consider how the clustering can be achieved hierarchically, where we analyze both the Top-Down and the Bottom-Up clustering options. Extensive experimentation demonstrates that our clustering achieves results competitive with the state-of-the-art flat, Top-Down and Bottom-Up clustering approaches, demonstrating the power of the AB paradigm.

Anis Yazidi, Hugo Lewi Hammer, B. John Oommen

On the Online Classification of Data Streams Using Weak Estimators

In this paper, we propose a novel online classifier for complex data streams generated from non-stationary stochastic processes. Instead of using a single training model and counters to keep important data statistics, the introduced online classifier scheme provides a real-time self-adjusting learning model. The learning model utilizes the multiplication-based update algorithm of the Stochastic Learning Weak Estimator (SLWE) at each time instant, as each new labeled instance arrives. In this way, the data statistics are updated every time a new element is inserted, without requiring us to rebuild the model when changes occur in the data distributions. Finally, and most importantly, the model operates with the understanding that the correct classes of previously-classified patterns become available at a later juncture, thus requiring us to update the training set and the training model. The results obtained from a rigorous empirical analysis on multinomial distributions are remarkable: they demonstrate the applicability of our method on synthetic datasets and prove the advantages of the introduced scheme.

Hanane Tavasoli, B. John Oommen, Anis Yazidi
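
The SLWE update for a Bernoulli parameter scales the old estimate multiplicatively, which is what lets it keep tracking non-stationary streams instead of converging once and for all. A minimal sketch of the binomial case (λ = 0.9 is an illustrative choice of the user-defined parameter):

```python
def slwe_update(p, x, lam=0.9):
    """Stochastic Learning Weak Estimator update for a Bernoulli parameter.
    The old estimate is scaled by lam, so the estimator never converges
    with probability 1 and keeps tracking distribution switches."""
    return lam * p + (1 - lam) * x

# The environment switches from all-0 to all-1; the estimate follows quickly.
p = 0.5
for x in [0] * 30 + [1] * 30:
    p = slwe_update(p, x)
print(p > 0.9)  # → True
```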

Explicit Contrast Patterns Versus Minimal Jumping Emerging Patterns for Lazy Classification in High Dimensional Data

Minimal jumping emerging patterns have proved very useful for classification purposes. Nevertheless, determining minimal jumping emerging patterns may require the evaluation of candidate patterns whose number can be exponential in the dimensionality of a data set. This property may preclude classification by means of minimal jumping emerging patterns in the case of high dimensional data. In this paper, we derive an upper bound on the lengths of minimal jumping emerging patterns and an upper bound on their number. We also propose an alternative approach to lazy classification which uses explicit contrast patterns instead of minimal jumping emerging patterns, but produces the same classification quality as a lazy classifier based on minimal jumping emerging patterns. We argue that our approach, unlike the approach based on minimal jumping emerging patterns, can be applied in the case of high dimensional data.

Marzena Kryszkiewicz, Przemyslaw Podsiadly

An Evaluation on KNN-SVM Algorithm for Detection and Prediction of DDoS Attack

Damage caused by DDoS attacks has been increasing year by year. Along with the advancement of communication technology, this kind of attack has also evolved and become more complicated and harder to detect, using flash crowd agents, slow-rate attacks and amplification attacks that exploit vulnerabilities in DNS servers. Fast detection of a DDoS attack, quick response mechanisms and proper mitigation are a must for an organization. We investigate DDoS attacks and analyze the details of their phases using machine learning techniques to classify the network status. In this paper, we propose a hybrid KNN-SVM method for classifying, detecting and predicting DDoS attacks. The simulation results show that each phase of the attack scenario is partitioned well and that we can detect precursors of a DDoS attack as well as the attack itself.

Ahmad Riza’ain Yusof, Nur Izura Udzir, Ali Selamat

Reliable Clustering Indexes

This paper deals with a major challenge in clustering: optimal model selection. It presents new, efficient clustering quality indexes relying on feature maximization, an alternative to the usual distributional measures relying on entropy or the Chi-square metric, and to vector-based measures such as Euclidean distance or correlation distance. Experiments compare the behavior of these new indexes with usual cluster quality indexes based on Euclidean distance, on different kinds of test datasets for which ground truth is available. This comparison clearly highlights the superior accuracy and stability of the new method, its efficiency from the low- to the high-dimensional range, and its tolerance to noise.

Jean-Charles Lamirel

FHM+: Faster High-Utility Itemset Mining Using Length Upper-Bound Reduction

High-utility itemset (HUI) mining is a popular data mining task, consisting of enumerating all groups of items that yield a high profit in a customer transaction database. An important issue with traditional HUI mining algorithms, however, is that they tend to find itemsets having many items. Those itemsets are often rare, and thus may be less interesting to users than smaller itemsets. In this paper, we address this issue by presenting a novel algorithm named FHM+ for mining HUIs under length constraints. To discover HUIs efficiently with length constraints, FHM+ introduces the concept of Length Upper-Bound Reduction (LUR) and two novel upper bounds on the utility of itemsets. An extensive experimental evaluation shows that length constraints are effective at reducing the number of patterns, and that the novel upper bounds can greatly decrease the execution time and memory usage of HUI mining.

Philippe Fournier-Viger, Jerry Chun-Wei Lin, Quang-Huy Duong, Thu-Lan Dam
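
The underlying task can be stated compactly: an itemset is high-utility if the summed utilities of its items, over the transactions containing it, reach a threshold, and the length constraint bounds the itemset's size. A brute-force sketch of this task definition (FHM+ itself prunes the search space with upper bounds rather than enumerating; the tiny database is invented):

```python
from itertools import combinations

def high_utility_itemsets(db, min_util, max_len):
    """db: list of transactions, each a dict {item: utility in that transaction}.
    Return itemsets of length <= max_len whose total utility >= min_util."""
    items = sorted({i for t in db for i in t})
    found = {}
    for k in range(1, max_len + 1):
        for iset in combinations(items, k):
            util = sum(sum(t[i] for i in iset)
                       for t in db if all(i in t for i in iset))
            if util >= min_util:
                found[iset] = util
    return found

db = [{"a": 5, "b": 2}, {"a": 5, "c": 1}, {"b": 2, "c": 1}]
print(high_utility_itemsets(db, min_util=6, max_len=2))
# → {('a',): 10, ('a', 'b'): 7, ('a', 'c'): 6}
```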

: An Integrated Toolbox for Change, Causality and Motif Discovery

Time series are being generated continuously from all kinds of human endeavors. The ubiquity of time-series data creates a need for data mining and pattern discovery algorithms targeting this data format, which is becoming of ever increasing importance. Three basic problems in mining time-series data are change point discovery, causality discovery and motif discovery. This paper presents an integrated toolbox that can be used to perform any of these tasks on multidimensional real-valued time-series using state-of-the-art algorithms. The proposed toolbox provides practitioners in time-series analysis and data mining with several tools useful for data generation, preprocessing, modeling, evaluation and mining of long sequences. The paper also reports real-world applications that use the toolbox in human-robot interaction, physiological signal processing, and human behavior modeling and understanding.

Yasser Mohammad, Toyoaki Nishida

Knowledge Based Systems

Frontmatter

Hidden Frequency Feature in Electronic Signatures

Forensics is a science discipline that deals with collecting evidence in crime scene investigation. When dealing with signatures, however, the crime scene is the signed paper itself. Therefore, for any kind of investigation, there should be a sample and a master signature against which to benchmark similarities and differences. The characteristics of a master signature can easily be identified by forensic techniques, yet this is still infeasible for electronic signatures due to the ease of copy-pasting. Through emerging touchscreen technologies, the features of a signature can be stealthily extracted and stored while the user is signing. Given these facts, the novelty we put forward in this paper is a feature extraction method using short-time Fourier transforms to identify the frequencies of a simple master signature. We subsequently present a spectrogram analysis revealing the differences between original and fake signatures. Finally, a validation method for the analysis of the spectrograms is introduced, which yields a significant gap between real and forged signatures for various window sizes.

Orcan Alpar, Ondrej Krejcar
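
The short-time Fourier transform behind such a spectrogram analysis can be sketched with a naive DFT per window. The synthetic signal below is a stand-in for real signing dynamics (e.g. pen speed), chosen so its dominant frequency bin is known:

```python
import math, cmath

def stft_magnitudes(signal, win, hop):
    """Magnitude spectrum of each length-`win` frame, `hop` samples apart."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = signal[start:start + win]
        frames.append([abs(sum(frame[n] * cmath.exp(-2j * math.pi * k * n / win)
                               for n in range(win)))
                       for k in range(win // 2 + 1)])
    return frames

# Synthetic stand-in: an oscillation at exactly 4 cycles per 32-sample window
signal = [math.sin(2 * math.pi * 4 * n / 32) for n in range(64)]
spec = stft_magnitudes(signal, win=32, hop=16)
print(max(range(len(spec[0])), key=spec[0].__getitem__))  # dominant bin → 4
```

In practice one would use an optimized FFT-based spectrogram with windowing; the naive form just makes the per-frame frequency analysis explicit.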

A Multimodal Approach to Relevance and Pertinence of Documents

Automated document classification extracts information through a systematic analysis of document content. This is an active research field of growing importance due to the large number of electronic documents produced on the world wide web and made readily available by widespread technologies, including mobile ones. Several application areas benefit from automated document classification, including document archiving, invoice processing in business environments, press releases and search engines. Current tools classify or “tag” either text or images separately. In this paper we show how, by linking image and text-based contents together, a technology can improve fundamental document management tasks such as retrieving information from a database or automatically routing documents. We present a formal definition of the concepts of pertinence and relevance, which apply to the document types we call “multimodal”. These are based on a model of conceptual spaces that we believe is essential for investigating documents using the joint information sources coming from the text and images that form complex documents.

Matteo Cristani, Claudio Tomazzoli

Fuzzy-Syllogistic Systems: A Generic Model for Approximate Reasoning

The well-known Aristotelian syllogistic system $\mathbb{S}$ consists of 256 moods. We found earlier that 136 moods are distinct in terms of equal truth ratios, which range in τ ∈ [0,1]. The truth ratio of a particular mood is calculated by relating the number of true and false syllogistic cases that the mood matches. The introduction of (n − 1) fuzzy existential quantifiers extends the system to the fuzzy-syllogistic systems $^{n}\mathbb{S}$, 1 < n, of which every fuzzy-syllogistic mood can be interpreted as a vague inference with a generic truth ratio that is determined by its syllogistic structure. Here we introduce two new concepts: the relative truth ratio rτ ∈ [0,1], which is calculated from the cardinalities of the syllogistic cases of the mood, and the fuzzy-syllogistic ontology (FSO). We experimentally apply the fuzzy-syllogistic systems $^{2}\mathbb{S}$ and $^{6}\mathbb{S}$ as the underlying logic of an FSO reasoner (FSR) and discuss sample cases of approximate reasoning.

Bora İ. Kumova

Towards a Knowledge Based Environment for the Cognitive Understanding and Creation of Immersive Visualization of Expressive Human Movement Data

This paper considers how the classification of expressive movement data, and the use of such classifications, may inform the development of a knowledge-based system of movement analysis. We present an example that adopts a practice-led design approach to the creation of immersive animation environments. It features a method of indexing dance movement, used by artists and designers to characterize this movement through a “narrative grammar” based on pathemic, kinesthetic, cinematographic and aesthetic criteria, referred to as a “movement index”. The utility of the system described in this experiment was tested principally in art and design, but its use can be generalized to inform the development of cognitive medical applications for analyzing the irregular movement patterns or unusual motion behavior often characteristic of the ill or the elderly.

Christopher Bowman, Hamido Fujita, Gavin Perin

Bibliometric Tools for Discovering Information in Database

In bibliometrics, there are two main procedures to explore a research field: performance analysis and science mapping. Performance analysis aims at evaluating groups of scientific actors (countries, universities, departments, researchers) and the impact of their activity on the basis of bibliographic data. Science mapping aims at displaying the structural and dynamic aspects of scientific research, delimiting a research field, and quantifying and visualizing the detected sub-fields by means of co-word analysis or document co-citation analysis. In this paper we present two bibliometric tools developed in our research laboratory SECABA: (i) H-Classics, for performance analysis based on Highly Cited Papers, and (ii) SciMAT, for science mapping guided by performance bibliometric indicators.

Enrique Herrera-Viedma, M. Angeles Martinez, Manuel Herrera
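
H-Classics builds on the h-index to identify Highly Cited Papers. As a reminder of the underlying computation, a minimal sketch (the citation counts are invented):

```python
def h_index(citations):
    """h is the largest number such that h papers have at least h citations."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
    return h

print(h_index([10, 8, 5, 4, 3]))  # → 4: four papers have >= 4 citations each
```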

Natural Language Processing and Sentiment Analysis

Frontmatter

The Statistical Approach to Biological Event Extraction Using Markov’s Method

A Gene Regulation Network (GRN) is a graphical representation of the relationships among a collection of regulators that interact with each other and with other substances in the cell to govern the gene expression levels of mRNA and proteins. In this study, we examine the extraction of GRNs from the literature using a statistical method. Markovian logic has been used extensively in the natural language processing domain, for example in speech recognition. This paper presents an event extraction approach using Markov's method and logical predicates. The event extraction task is modeled as a Markov model using logical predicates and a set of weighted first-order formulae that defines a distribution of events over a set of ground atoms of the predicates, specified using the training and development data. The experimental results achieve a state-of-the-art F-score comparable to the 2013 BioNLP shared task and 81 % precision in forming the gene regulation network, showing good performance on this problem.

Wen-Juan Hou, Bamfa Ceesay

Citation-Based Extraction of Core Contents from Biomedical Articles

Retrieval of biomedical articles about specific research issues (e.g., gene-disease associations) is an essential and routine job for biomedical researchers. An article a can be said to be about a research issue r only if its core content (the goal, background, and conclusion of a) focuses on r. In this paper, we present a technique, CoreCE (Core Content Extractor), that, given a biomedical article a, extracts the textual core content of a. The core contents extracted from biomedical articles can be used to index the articles so that articles about specific research issues can be retrieved by search engines more accurately. Development of CoreCE is challenging because the core content of an article a may be expressed in different ways and scattered throughout a. We tackle the challenge by considering the titles of the references cited by a, as well as the passages in a used to explain why the references are cited (i.e., the citation passages). Empirical evaluation shows that, by representing biomedical articles with the core contents extracted by CoreCE, retrieval of articles judged by biomedical experts to be about specific gene-disease associations can be significantly improved. CoreCE can thus serve as a front-end processor for search engines, preprocessing biomedical scholarly articles for subsequent indexing and retrieval. The contribution is of technical significance for the retrieval and mining of evidence already published in the biomedical literature.

Rey-Long Liu

Event Extraction and Classification by Neural Network Model

To understand and automatically extract information about events presented in a text, semantically meaningful units expressing these events are important. Extracting events and classifying them into event types and subtypes using Natural Language Processing techniques poses a challenging research problem. There are no clear-cut definitions of what an event in a text is, or of the optimal representation of semantic units within a given text. In addition, events in a text can be classified into types and subtypes, and a single event can have multiple mentions in a given sentence. In this paper, we propose a model to detect events within a given text and classify them into event types or subtypes and REALIS, using distributional semantic role labeling and neural embedding techniques. For the event nugget detection task, we trained a three-layer network to detect event mentions in texts, achieving F1-scores of 77.37 % for the macro average and 71.10 % for the micro average.

Bamfa Ceesay, Wen-Juan Hou

A Hybrid Approach to Sentiment Analysis with Benchmarking Results

The objective of this article is two-fold. Firstly, a hybrid approach to Sentiment Analysis is described, encompassing the use of semantic rules, fuzzy sets and an enriched sentiment lexicon, improved with the support of SentiWordNet. Secondly, the proposed hybrid method is compared against two well-established Supervised Learning techniques, Naïve Bayes and Maximum Entropy. Using the well-known and publicly available Movie Review Dataset, the proposed hybrid system achieved higher accuracy and precision than Naïve Bayes (NB) and Maximum Entropy (ME).

Orestes Appel, Francisco Chiclana, Jenny Carter, Hamido Fujita
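
One semantic rule typically layered on top of a sentiment lexicon in hybrid systems of this kind is negation handling. A minimal sketch (the tiny lexicon and single rule are illustrative stand-ins, not the paper's actual resources):

```python
# Illustrative mini-lexicon; real systems use resources like SentiWordNet
LEXICON = {"good": 1.0, "great": 1.5, "bad": -1.0, "boring": -1.2}
NEGATORS = {"not", "never"}

def sentence_polarity(tokens):
    """Sum lexicon scores; a negator flips the next sentiment-bearing word."""
    score, negated = 0.0, False
    for tok in tokens:
        if tok in NEGATORS:
            negated = True
        elif tok in LEXICON:
            score += -LEXICON[tok] if negated else LEXICON[tok]
            negated = False
    return score

print(sentence_polarity("a great movie".split()))     # → 1.5
print(sentence_polarity("not a good movie".split()))  # → -1.0
```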

Mixture of Language Models Utilization in Score-Based Sentiment Classification on Clinical Narratives

Sentiment classification of clinical narratives is groundwork for analyzing a patient's health status, medical condition and treatment. The work poses challenges due to the shortness and implicit sentiment of clinical text. This paper shows that the sentiment score of a sentence depends simultaneously on the scores of its terms, including words, phrases and sequences of non-adjacent words. We therefore propose a linear combination that incorporates the scores of terms extracted by various language models, with corresponding coefficients, to estimate a sentence's score. Using this linear combination, we derive a novel vector representation of a sentence, called the language-model-based representation, based on the average scores of each kind of term in the sentence, to help supervised classifiers work more effectively on clinical narratives.

Tran-Thai Dang, Tu-Bao Ho

Twitter Feature Selection and Classification Using Support Vector Machine for Aspect-Based Sentiment Analysis

With regard to the accuracy problem in aspect-based sentiment classification, we propose a Principal Component Analysis (PCA) feature selection method that determines the most relevant set of features for aspect-based sentiment classification of tweets. Feature selection helps to reduce redundant features and remove irrelevant features that affect classifier accuracy. PCA is combined with a SentiWordNet lexicon-based method, incorporated into a Support Vector Machine (SVM) learning framework to perform the classification. Experiments on our own Hate Crime Twitter Sentiment (HCTS) dataset and the benchmark Stanford Twitter Sentiment (STS) dataset yield accuracies of 94.53 % and 97.93 %, respectively. Comparisons with other statistical feature selection methods show that our proposed approach is promising for improving aspect-based sentiment classification performance.

Nurulhuda Zainuddin, Ali Selamat, Roliana Ibrahim
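
PCA-based feature selection keeps the features with the largest loadings on the leading principal components. A minimal sketch computing the first component by power iteration on toy numeric vectors (real tweets would first be mapped to numeric features; the data here is invented so that feature 0 dominates):

```python
def leading_component(X, iters=200):
    """Power iteration for the first principal component of mean-centred X."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - means[j] for j in range(d)] for row in X]
    v = [1.0] * d
    for _ in range(iters):
        proj = [sum(x[j] * v[j] for j in range(d)) for x in Xc]          # Xc v
        w = [sum(p * x[j] for p, x in zip(proj, Xc)) for j in range(d)]  # Xc^T Xc v
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v

# Feature 0 varies much more than feature 1, so it dominates the component
X = [[10.0, 1.0], [-10.0, -1.0], [8.0, 0.5], [-8.0, -0.5]]
v = leading_component(X)
print(abs(v[0]) > abs(v[1]))  # → True: feature 0 would be kept
```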

Semantic Web and Social Networks

Frontmatter

The Effectiveness of Gene Ontology in Assessing Functionally Coherent Groups of Genes: A Case Study

In recent years, ontologies have been extensively used in many biological fields to support a variety of applications. A well-known example is the Gene Ontology (GO), which organizes a vocabulary of terms about gene products and functions. GO offers effective support for evaluating the similarity between two genes by measuring the distance between their respective GO terms. The advent of high-throughput technologies, and the consequent production of lists of genes associated with specific conditions, stresses the need to recognize groups of genes that cooperate within a specific biological event. This paper compares six popular similarity measures on GO in order to evaluate their effectiveness in discovering functionally coherent genes from an assigned list of genes. The aim is to discover which measure performs best. We also investigate the potential of GO for evaluating the similarity of a set of genes according to the set's cardinality and the characteristics of the similarity measures. Experiments take into consideration: (a) 84 groups of genes sharing similar molecular functions through the production of enzymes within the human organism; (b) 150 groups of randomly selected genes. The paper demonstrates the effective support of GO in detecting functionally related groups of genes, although GO's hierarchical structure limits the representation of richer forms of knowledge.

Nicoletta Dessì, Barbara Pes

Social Network Clustering by Using Genetic Algorithm: A Case Study

With the rapid growth of large-scale social networks, the analysis of social network data has become an extremely challenging computational issue. To meet the challenge, it is possible to significantly reduce the complexity of the problem by properly clustering a large social network into groups, and then analyzing data within each group or studying the relationships among groups. Hence, social network clustering can be regarded as one of the essential problems in social network analysis. To address the issue, we propose an evolutionary computation approach to social network clustering. We first formulate social network clustering as an optimization problem and then develop a genetic algorithm to solve it. We also applied the proposed approach to a case study based on data from some Facebook users.

Ming-Feng Tsai, Chun-Yi Lu, Churn-Jung Liau, Tuan-Fang Fan
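
The optimization formulation can be sketched with a correlation-clustering-style objective (reward linked pairs placed together and unlinked pairs kept apart) optimised by a small genetic algorithm. This toy version on a six-node network is an illustrative simplification, not the paper's exact objective or operators:

```python
import random
from itertools import combinations

def fitness(chrom, nodes, edges):
    """Reward linked pairs in the same cluster and unlinked pairs split apart."""
    edge_set = {frozenset(e) for e in edges}
    return sum(1 for a, b in combinations(nodes, 2)
               if (frozenset((a, b)) in edge_set) == (chrom[a] == chrom[b]))

def ga_cluster(nodes, edges, k=2, pop=20, gens=120, seed=0):
    rng = random.Random(seed)
    population = [[rng.randrange(k) for _ in nodes] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: fitness(c, nodes, edges), reverse=True)
        parents = population[:pop // 2]              # elitist selection
        children = []
        while len(parents) + len(children) < pop:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(nodes))       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:                   # point mutation
                child[rng.randrange(len(nodes))] = rng.randrange(k)
            children.append(child)
        population = parents + children
    return max(population, key=lambda c: fitness(c, nodes, edges))

# Two triangles joined by a single bridge edge: the natural split is 0-2 vs 3-5
nodes = list(range(6))
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
best = ga_cluster(nodes, edges)
print(best[0] == best[1] == best[2], best[3] == best[4] == best[5])
```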

S-Rank: A Supervised Ranking Framework for Relationship Prediction in Heterogeneous Information Networks

The most crucial part of relationship prediction in heterogeneous information networks (HIN) is how to effectively represent and utilize the information hidden in the creation of relationships. Three kinds of information need to be considered, namely local structure information (Local-info), global structure information (Global-info) and attribute information (Attr-info). They influence relationship creation in different but complementary ways: Local-info is limited to the topologies around certain nodes and thus ignores the global position of a node; methods using Global-info are biased toward highly visible objects; and Attr-info can capture features related to objects and relations in networks. Therefore, it is essential to combine all three kinds of information. However, existing approaches utilize them separately or in a partially combined way, since effectively encoding all the information together is not an easy task. In this paper, a novel three-phase Supervised Ranking framework (S-Rank) is proposed to tackle this issue. To the best of our knowledge, our work is the first to completely combine Global-info, Local-info and Attr-info. Firstly, a Supervised PageRank strategy (SPR) is proposed to capture Global-info and Attr-info. Secondly, we propose a Meta Path-based Ranking method (MPR) to obtain Local-info in HIN. Finally, they are integrated into the final ranking result. Experiments on DBLP data demonstrate that the proposed S-Rank framework can effectively take advantage of all three kinds of information for predicting citation relations and outperforms other well-known baseline approaches.

Wenxin Liang, Xiaosong He, Dongdong Tang, Xianchao Zhang
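
The Global-info ingredient builds on PageRank, which can be sketched in its plain unsupervised form (SPR additionally learns parameters from labelled data, which is beyond this sketch; the three-node graph is invented):

```python
def pagerank(graph, damping=0.85, iters=100):
    """graph: {node: list of out-neighbours}; every node must have out-links."""
    n = len(graph)
    rank = {v: 1.0 / n for v in graph}
    for _ in range(iters):
        rank = {v: (1 - damping) / n
                   + damping * sum(rank[u] / len(graph[u])
                                   for u in graph if v in graph[u])
                for v in graph}
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
print(round(sum(ranks.values()), 6), ranks["c"] > ranks["b"])  # → 1.0 True
```

The bias toward highly visible nodes mentioned in the abstract is visible even here: "c", with two in-links, outranks "b" with one.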

Discovering Common Semantic Trajectories from Geo-tagged Social Media

Massive amounts of social media data are being created and uploaded online nowadays. These media data, associated with geographical information, reflect people's movement footprints. This study investigates the extraction of people's common semantic trajectories from geo-referenced social media data, using geo-tagged images. We first convert geo-tagged photographs into semantic trajectories based on regions of interest, and then apply density-based clustering with a similarity measure designed for multi-dimensional semantic trajectories. Using real geo-tagged photographs, we find interesting common semantic mobility patterns. These semantic behaviors demonstrate the effectiveness of our approach.

Guochen Cai, Kyungmi Lee, Ickjai Lee
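
Density-based clustering with a pluggable similarity can be sketched as a plain DBSCAN whose distance function is a parameter; here a 1-D absolute difference stands in for the paper's multi-dimensional semantic-trajectory measure:

```python
def dbscan(points, eps, min_pts, dist):
    """DBSCAN with a user-supplied distance function.
    Returns {point index: cluster id}, with -1 marking noise."""
    labels = {}
    def neighbours(i):
        return [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]
    cluster = 0
    for i in range(len(points)):
        if i in labels:
            continue
        nbrs = neighbours(i)
        if len(nbrs) < min_pts:
            labels[i] = -1            # provisionally noise; may join a cluster later
            continue
        labels[i] = cluster
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels.get(j, -1) == -1:        # unvisited, or noise -> border point
                newly_seen = j not in labels
                labels[j] = cluster
                if newly_seen:
                    jn = neighbours(j)
                    if len(jn) >= min_pts:     # j is a core point: keep expanding
                        queue.extend(x for x in jn if x not in labels)
        cluster += 1
    return labels

points = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
labels = dbscan(points, eps=0.5, min_pts=2, dist=lambda a, b: abs(a - b))
print([labels[i] for i in range(len(points))])  # → [0, 0, 0, 1, 1, 1]
```

Swapping the `dist` lambda for a semantic-trajectory similarity is all that is needed to move from this toy to trajectory data.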

Analysis of Social Networks Using Pseudo Cliques and Averaging

In order to analyze social networks, an improved version of the Clique Percolation Method (CPM) is proposed. Using this method, called pseudo Alternative CPM (ACPM), a network analysis of friendship networks on SNS sites for college students is carried out. As the number of nodes that may be lacking when fusing two cliques increases, it is confirmed that small communities inside large communities can be detected. The differences between two SNS sites, arising from their systems (registration versus invitation), are also clarified. Moreover, the change in the average degree of nodes is observed and its behavior discussed.

Atsushi Tanaka

Exposing Open Street Map in the Linked Data Cloud

After the mobile revolution, geographical knowledge has gained more and more importance in many location-aware application scenarios. Its popularity has also influenced the production and publication of dedicated datasets in the Linked Data (LD) cloud. In fact, its most recent representation shows Geonames competing with DBpedia as the largest and most linked knowledge graph available on the Web. Among the various projects related to the collection and publication of geographical information, as of today, Open Street Map (OSM) is surely one of the most complete and mature, exposing a huge amount of data which is continually updated in a crowdsourced fashion. In order to make all this knowledge available as Linked Data, we developed LOSM: a SPARQL endpoint able to query the data available in OSM through an on-line translation from the query syntax to a sequence of calls to the OSM overpass API. The endpoint also makes possible an on-the-fly integration between Open Street Map information and that contained in external knowledge graphs such as DBpedia, Freebase or Wikidata.

Vito Walter Anelli, Andrea Calì, Tommaso Di Noia, Matteo Palmonari, Azzurra Ragone

A MCDM Methods Based TAM for Deriving Influences of Privacy Paradox on User’s Trust on Social Networks

Social network (SN) sites (SNSs) have recently surged all over the world and have become new platforms for intimate communication. As the functionality of SNs has been enhanced, users’ own information can be collected, stored, and manipulated much more easily. Privacy has thus become the issue of greatest concern to both users and SN service providers. The service providers intend to maximize profits and need to consider how users’ confidential information can be fully utilized in marketing and operations. At the same time, users usually worry about the misuse of private information by website operators at the moment they disclose individual details on SNSs. Apparently, a significant gap exists between the website operators’ intention to fully utilize private information and the users’ privacy concerns about disclosing information on the SNSs. Such a cognition gap, or “privacy paradox”, directly influences users’ trust in a specific SNS and further influences users’ acceptance and continuous usage of the site. In this study, the Technology Acceptance Model (TAM) was introduced as the theoretical basis, applying users’ private disclosure behavior, disclosure risk perception, and the extent of privacy settings in the SNSs as the main variables. In addition, past works have found that perceived usefulness, perceived ease of use and interaction strength for modern technology services or products influence use intention, so these factors were also added as research variables in the analytic model. The Decision Making Trial and Evaluation Laboratory Based Network Process (DNP) was introduced to construct the influence relationships between the variables. The weights associated with the variables can be derived accordingly. Using the analytic model, the variables through which the privacy paradox influences users’ trust in SNs can be derived. Such variables and influence relationships can be used in developing the security policies of the SNSs.

Chi-Yo Huang, Hsin-Hung Wu, Hsueh-Hsin Lu

Algorithms for Quantitative-Based Possibilistic Lightweight Ontologies

This paper proposes approximate algorithms for computing the inconsistency degree and answering instance checking queries in the framework of uncertain lightweight ontologies. We focus on an extension of lightweight ontologies, encoded here in DL-Lite languages, to the product-based possibility theory framework. We provide an encoding of the problem of computing the inconsistency degree in product-based possibilistic DL-Lite as a weighted set cover problem, and we use a greedy algorithm to compute an approximate value of the inconsistency degree. We also propose an approximate algorithm for answering instance checking queries in product-based possibilistic DL-Lite. Experimental studies show the good results obtained by both algorithms.
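The reduction above ends in a greedy weighted set cover step. As a generic sketch of that greedy step only (the encoding of DL-Lite conflicts into sets and weights is the paper's contribution and is not reproduced here), the classic cost-per-newly-covered-element heuristic looks like:

```python
def greedy_weighted_set_cover(universe, subsets, weights):
    """Greedy approximation for weighted set cover: repeatedly pick the subset
    minimizing weight per newly covered element. Assumes the universe is coverable."""
    uncovered = set(universe)
    chosen = []
    total = 0.0
    while uncovered:
        best = min(
            (i for i in range(len(subsets)) if subsets[i] & uncovered),
            key=lambda i: weights[i] / len(subsets[i] & uncovered),
        )
        chosen.append(best)
        total += weights[best]
        uncovered -= subsets[best]
    return chosen, total
```

For instance, with universe {1, 2, 3, 4}, subsets {1, 2}, {3, 4}, {1, 2, 3, 4} and weights 1, 1, 3, the greedy picks the two cheap subsets (total weight 2) rather than the single expensive one.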

Salem Benferhat, Khaoula Boutouhami, Faiza Khellaf, Farid Nouioua

A Survey on Ontologies and Ontology Alignment Approaches in Healthcare

In the era of the Internet, high connectivity and openness have introduced an opportunity for a new kind of approach to healthcare information system integration. Such an approach may utilize semantic-based technologies to represent and communicate knowledge between these systems. The Resource Description Framework (RDF) in conjunction with the Web Ontology Language (OWL) can be considered a de facto standard when it comes to semantic web and linked data technologies, and represents a foundation for defining healthcare ontologies. The goal of this paper is to provide an overview and critical review of existing healthcare ontologies and approaches to healthcare IS integration, focusing on OWL/RDF based solutions. With this review we want to show that although a lot of work has been done in this area, no universal or omnipresent solution has surfaced to allow automatic or at least semi-automatic integration of healthcare ISs. As there is a large number of established and emerging ontologies covering this subject, our review does not provide an exhaustive collection of all the references in the area, but presents the most notable standards, ontologies, taxonomies, and integration approaches.

Vladimir Dimitrieski, Gajo Petrović, Aleksandar Kovačević, Ivan Luković, Hamido Fujita

Computer Vision

Frontmatter

The Research of Chinese License Plates Recognition Based on CNN and Length_Feature

Although license plate recognition systems have been widely used, their location and recognition rates are still affected by image clarity and illumination conditions. A license plate locating (LPL) method and a license plate character recognition (LPCR) method, based respectively on a convolutional neural network (CNN) and a Length_Feature (LF), are proposed in this paper. Firstly, this paper changes the activation function of the CNN and extracts local features to train the network. Through this change, network convergence is sped up, location accuracy is improved, and the problems of wrong location and long processing time, caused by complicating factors such as lighting conditions, blurred images, tilt, complex backgrounds and so on, are resolved. Secondly, the LF proposed in this paper is easier to understand, requires less computation, and is faster than transform-domain features; it also recognizes blurred and sloping characters more accurately than traditional geometric features.

Saina He, Chunsheng Yang, Jeng-Shyang Pan

View-Invariant Gait Recognition Using a Joint-DLDA Framework

In this paper, we propose a new view-invariant framework for gait analysis. The framework profits from the dimensionality reduction advantages of Direct Linear Discriminant Analysis (DLDA) to build a unique view-invariant model. Among these advantages is the capability to tackle the under-sampling problem (USP), which commonly occurs when the number of dimensions of the feature space is much larger than the number of training samples. Our framework employs Gait Energy Images (GEIs) as features to create a single joint model suitable for classification of various angles with high accuracy. Performance evaluations show the advantages of our framework, in terms of computational time and recognition accuracy, compared to state-of-the-art view-invariant methods.

Jose Portillo, Roberto Leyva, Victor Sanchez, Gabriel Sanchez, Hector Perez-Meana, Jesus Olivares, Karina Toscano, Mariko Nakano

Copyright Protection in Video Distribution Systems by Using a Fast and Robust Watermarking Scheme

In this paper we propose a watermarking scheme to ensure copyright protection in video distribution systems. This research is focused on providing a low computational cost solution that allows embedding a robust and imperceptible watermark signal in the video content, such that the original video transmission rate is not affected. Our proposal accomplishes this challenge by introducing a fast and effective method based on spatio-temporal Human Visual System properties that improves imperceptibility along the video sequence and, at the same time, enhances robustness. Extensive experiments were performed to prove that the proposed algorithm satisfies quickness, invisibility and robustness against several real-world video processing issues. The proposed scheme has the advantages of robustness, simplicity and flexibility, allowing it to be considered a good solution in practical situations, since it can be adapted to different video distribution systems.

Antonio Cedillo-Hernandez, Manuel Cedillo-Hernandez, Francisco Garcia-Ugalde, Mariko Nakano-Miyatake, Hector Perez-Meana

Facial Expression Recognition Adaptive to Face Pose Using RGB-D Camera

In this paper, we propose a facial expression recognition method for non-frontal faces using an RGB-D camera. The method uses the depth information of the RGB-D camera to calculate the face pose, modeled using a cylinder. Feature points obtained by the RGB-D camera, corrected for the face pose, are compared with Action Units of the Facial Action Coding System for recognition of facial expressions. Experiments were conducted using facial images at three types of angles and four expressions: anger, sadness, happiness, and surprise. Results of the experiments show that the method is more robust to roll rotations than to yaw and pitch rotations.

Yuta Inoue, Shun Nishide, Fuji Ren

Visible Spectrum Eye Tracking for Safety Driving Assistance

Although many studies have proposed eye tracking methods for driving assistance systems, these concepts have not been put into practical use. The most critical issue in tracking drivers’ eyes is the effect of the ambient infrared spectrum of sunlight, which makes the centers of the pupils and the corneal reflections from the tracker’s infrared light source undetectable. In this study, we propose visible spectrum eye tracking to calculate the driver’s gaze points whenever gaze detection from infrared spectrum eye tracking fails. The proposed eye tracking uses automated facial landmark detection to calculate the head pose, which enables head movement compensation, and a learning-based calibration model to eliminate the calibration process.

Takashi Imabuchi, Oky Dicky Ardiansyah Prima, Hisayoshi Ito

Medical Diagnosis System and Bio-informatics

Frontmatter

3D Protein Structure Prediction with BSA-TS Algorithm

Three-dimensional protein structure prediction from the amino acid sequence can be converted into a global optimization problem of a multi-variable and multimodal function. This article uses an improved hybrid optimization algorithm named BSA-TS, which combines the Backtracking Search Optimization Algorithm (BSA) with the Tabu Search (TS) algorithm, to predict the structure of proteins based on the three-dimensional AB off-lattice model. It combines the advantages of BSA, which has a simple and efficient algorithm framework, few control parameters and low sensitivity to their initial values, with the advantages of TS, which has a strong ability for global neighborhood search, and can thus better overcome the shortcomings of traditional algorithms, namely slow convergence and a tendency to fall into local optima. Finally, we run experiments on some Fibonacci sequences and real protein sequences which are widely used in protein structure prediction, and the experimental results show that the hybrid algorithm has good performance and accuracy.

Yan Xu, Changjun Zhou, Qiang Zhang, Bin Wang

Training ROI Selection Based on MILBoost for Liver Cirrhosis Classification Using Ultrasound Images

Ultrasound images are widely used for the diagnosis of liver cirrhosis. In most liver ultrasound image analyses, regions of interest (ROIs) are selected carefully for use in feature extraction and classification. It is difficult to select ROIs exactly for training classifiers because of the low SN ratio of ultrasound images. In these analyses, training sample selection is an important issue for improving classification performance. In this article, we propose training ROI selection using MILBoost for liver cirrhosis classification. In our experiments, the proposed method was evaluated using manually selected ROIs. Experimental results show that the proposed method improves classification performance compared to the previous method when the quality of the class labels of the training samples is low.

Yusuke Fujita, Yoshihiro Mitani, Yoshihiko Hamamoto, Makoto Segawa, Shuji Terai, Isao Sakaida

Sleep Pattern Discovery via Visualizing Cluster Dynamics of Sound Data

The quality of sleep is important for a healthy life. Recently, several sleep analysis products have emerged on the market; however, many of them require additional hardware or lack scientific evidence regarding their clinical efficacy. This paper proposes a novel method for discovering sleep patterns via clustering of sound events. Sleep-related sound clips are extracted from sound recordings obtained during sleep. Then, various self-organizing map algorithms are applied to the extracted sound data. We demonstrate the superiority of the Kullback-Leibler divergence and obtain cluster maps that visualize the distribution and changing patterns of sleep-related events during sleep. We also perform a comparative interpretation between sleep stage sequences and the obtained cluster maps. The proposed method requires little additional hardware, and its consistency with medical evidence supports its reliability.

Hongle Wu, Takafumi Kato, Tomomi Yamada, Masayuki Numao, Ken-ichi Fukui

A Conformational Epitope Prediction System Based on Sequence and Structural Characteristics

An epitope is composed of several amino acids located on the structural surface of an antigen. These gathered amino acids can be specifically recognized by antibodies, B cells, or T cells through immune responses. Precise recognition of epitopes plays an important role in immunoinformatics for vaccine design applications. Conformational epitopes (CEs) are the major type of epitope in vertebrate organisms, but neither regular combinatorial patterns nor fixed geometric features are known for a CE. In this paper, a novel CE prediction system was established based on physico-chemical propensities of sequence contents, spatial geometrical conformations, and surface rates of amino acids. In addition, a support vector machine technique was applied to train on the appearance frequencies of combined neighboring surface residues of known CEs, and it was used to classify the best predicted CE candidates. In order to evaluate the prediction performance of the proposed system, an integrated dataset was constructed by removing redundant protein structures from current literature reports, and three testing datasets from three different systems were collected for validation and comparison. The results show that our proposed system improves on both specificity and accuracy measurements, achieving an average sensitivity of 36 %, average specificity of 92 %, average accuracy of 89 %, and average positive predictive value of 25 %.

Wan-Li Chang, Ying-Tsang Lo, Tun-Wen Pai

The Factors Affecting Partnership Quality of Hospital Information Systems Outsourcing of PACS

The purpose of this study is to investigate the determinants of the outsourcing partnership between clients and providers in hospital outsourcing projects; it also explores the relationship between the quality of the outsourcing partnership and the success of outsourcing. Subjects of the survey were the medical centers involved in PACS projects in Taiwan. A total of 97 valid questionnaires were analyzed in this study using partial least squares to test the related hypotheses. The results indicated positive correlations between the following variables: (1) shared knowledge and mutual benefits, (2) mutual dependency and mutual benefits, (3) mutual dependency and commitment, (4) mutual dependency and predisposition, (5) organizational linkage and commitment, (6) organizational linkage and predisposition, and (7) commitment and outsourcing success.

Yi-Horng Lai

A Recent Study on Hardware Accelerated Monte Carlo Modeling of Light Propagation in Biological Tissues

The Monte Carlo (MC) method is the gold standard for simulating photon migration through 3D media with spatially varying optical properties. MC offers excellent accuracy, ease of programming and straightforward parallelization. In this study we summarize recent advances in accelerating simulations of light propagation in biological tissues. A systematic literature review method was used to select the relevant studies. With this approach, research questions regarding the acceleration techniques are formulated and additional selection criteria are applied. The selected studies are analyzed and the research questions are answered. We discovered that there are several possibilities for accelerating MC code and that the CUDA platform is used in more than 60 % of all studies. We also found that the trend towards GPU acceleration with CUDA has continued over the last two years.
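None of the reviewed GPU codes are reproduced here, but the core MC loop they all accelerate can be hedged into a toy one-dimensional sketch: photons take exponentially distributed steps with mean free path 1/μt and are absorbed at each interaction with probability μa/μt. Real codes also sample scattering directions (e.g. via a phase function), which this forward-only sketch omits; in this simplified model the absorbed fraction reduces to the Beer-Lambert value 1 − exp(−μa·depth), which serves as a sanity check:

```python
import math
import random

def mc_absorbed_fraction(n_photons, mu_a, mu_s, depth=1.0, seed=1):
    """Toy 1-D photon transport: fraction of photons absorbed inside a slab
    of the given depth; the remainder are counted as transmitted."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s           # total interaction coefficient
    p_abs = mu_a / mu_t          # absorption probability per interaction
    absorbed = 0
    for _ in range(n_photons):
        z = 0.0
        while True:
            # Exponentially distributed free path, mean 1 / mu_t
            z += -math.log(1.0 - rng.random()) / mu_t
            if z >= depth:
                break            # photon leaves the slab (transmitted)
            if rng.random() < p_abs:
                absorbed += 1    # interaction ends in absorption
                break
    return absorbed / n_photons
```

Each photon history is independent, which is exactly why the reviewed studies map one history per GPU thread.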

Jakub Mesicek, Ondrej Krejcar, Ali Selamat, Kamil Kuca

Clustering Analysis of Vital Signs Measured During Kidney Dialysis

An analysis of vital data of kidney dialysis patients is presented. The analysis is based on the vital signs of pulse rate (PR), respiration rate (RR) and body movement (BM), which were obtained by a sleep monitoring system. In a series of experiments, eight patients of different genders and ages were involved. For the analysis, a hierarchical clustering method was applied with a multi-dimensional dynamic time warping distance to analyze the similarity between the vital signs. The hierarchical clustering uses Ward’s method to calculate the distance between two clusters. The analysis results show that each daily vital sign exhibits a feature related to one of the clusters, and that physiological rhythms based on a series of these features vary depending on the season. Based on this hypothesis, irregular vital signs which deviate from the physiological rhythms can be detected to predict abnormal health conditions and discomfort in the patients.
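The similarity measure used above is dynamic time warping (DTW). As a point of reference, a minimal one-dimensional DTW is sketched below; the paper applies a multi-dimensional variant combined with Ward-linkage hierarchical clustering, neither of which is reproduced here:

```python
def dtw_distance(a, b):
    """Classic O(n*m) dynamic time warping distance between two sequences,
    with absolute difference as the local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of: insertion, deletion, match along the warping path
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Unlike the Euclidean distance, DTW tolerates local stretching in time, so a signal and a slightly slowed-down copy of it have distance zero.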

Kazuki Yamamoto, Yutaka Watanobe, Wenxi Chen

A Smart Arduino Alarm Clock Using Hypnagogia Detection During Night

This project describes the hardware design and implementation of a low-cost smart alarm clock based on the Arduino platform, which uses a passive infrared (PIR) sensor to detect the sleep states of users. Sleep is not just a passive process: people pass through different states during the night, known as Hypnagogia (the transition from wakefulness to sleep), NREM (non-rapid eye movement), REM (rapid eye movement), Hypnexagogium (the awakening state) and dreaming. The main goal of this smart alarm clock is to detect these states and adjust the alarm time to the best possible moment, when people are in the awakening state or in light sleep. Awakening in these states is considerably better, and people feel much more refreshed. The hardware of this alarm clock is composed of an LCD LED display, a real-time clock (RTC) unit, a temperature and humidity sensor, a photosensitive module for daytime detection, a touch sensor and a WiFi module for time synchronization from NTP (Network Time Protocol) servers. This smart alarm clock can be used for better and more effective awakening of users.

Adam Drabek, Ondrej Krejcar, Ali Selamat, Kamil Kuca

Flow Visualization Techniques: A Review

Flow visualization is an approach that focuses on methods to extract information from flow field datasets in either two or three dimensions. Researchers have presented many visualization techniques with a similar goal, which is to enrich the information provided by the visualization. The differences between techniques can confuse new researchers regarding their implementation, advantages, and limitations. This paper reviews and discusses some of the available techniques by classifying them based on the dimension and type of the flow dataset. The type of flow is either steady or unsteady, and the dimensions are 2, 2.5, and 3. The classification assists readers in identifying and choosing the appropriate method based on their requirements. This paper also highlights important information related to the history and foundations of flow visualization before reviewing the methods in detail. A discussion and tables are included to enhance readers’ understanding of the differences between the reviewed methods.

Yusman Azimi Yusoff, Farhan Mohamad, Mohd Shahrizal Sunar, Ali Selamat

Applied Neural Networks

Frontmatter

Hardware/Software Co-design for a Gender Recognition Embedded System

Gender recognition has applications in human-computer interaction, biometric authentication, and targeted marketing. This paper presents an implementation of an algorithm for binary male/female gender recognition from face images based on a shunting inhibitory convolutional neural network, which has a reported accuracy on the FERET database of 97.2 %. The proposed hardware/software co-design approach using an ARM processor and FPGA can be used as an embedded system for a targeted marketing application to allow real-time processing. A threefold speedup is achieved in the presented approach compared to a software implementation on the ARM processor alone.

Andrew Tzer-Yeu Chen, Morteza Biglari-Abhari, Kevin I-Kai Wang, Abdesselam Bouzerdoum, Fok Hing Chi Tivive

Style-Me – An Experimental AI Fashion Stylist

“Style endures as it is renewed and evolved” believed French fashion designer Gabrielle Chanel [1]. In this study, we propose an AI based system called “Style-Me” as our answer to the question “Can an AI machine be a fashion stylist?” Style-Me is a machine learning application that recommends fashion looks. More specifically, Style-Me learns user preferences through the use of Artificial Neural Networks (ANN). The system scores user’s customized style looks based on fashion trends and users’ personal style history. Although much remains to be done, our implementation shows that an AI machine can be a fashion stylist.

Haosha Wang, Joshua De Haan, Khaled Rasheed

Reduction of Computational Cost Using Two-Stage Deep Neural Network for Training for Denoising and Sound Source Identification

This paper addresses the reduction of computational cost in training a Deep Neural Network (DNN), in particular for sound identification using highly noise-contaminated sound recorded with a microphone array embedded in an Unmanned Aerial Vehicle (UAV), aiming at detecting people’s voices quickly and over a wide area in a disaster situation. It is known that the DNN training method called end-to-end training shows high performance, since it uses a huge neural network with high non-linearity which is trained with a large amount of raw input signals without preprocessing. Its computational cost is, however, expensive due to the high complexity of the neural network. Therefore, we propose two-stage DNN training using two separately-trained networks: one for denoising of sound sources and one for sound source identification. Since the huge network is divided into two smaller networks, the complexity of the networks is expected to decrease, and each of them can model denoising or identification specifically. This results in faster convergence and reduced computational cost in DNN training. Preliminary results showed that only 71 % of the training time was necessary with the proposed two-stage network, while maintaining the accuracy of sound source identification, compared to end-to-end training using noisy acoustic signals recorded with an 8 ch circular microphone array embedded in a UAV.

Takayuki Morito, Osamu Sugiyama, Satoshi Uemura, Ryosuke Kojima, Kazuhiro Nakadai

KANSEI (Emotional) Information Classifications of Music Scores Using Self Organizing Map

We classified KANSEI (emotional) information for musical compositions using only the notes in the music score. This is in contrast to the classification of music using audio files, which are taken from a performance in which the emotional information has been processed by the instrumentalists. Two classification tasks are considered. The first is classification into one of two classes, duple meter or irregular meter. The second is classification into one of two classes, slow vs. fast (threshold tempo: ♩ = 110). The classification of the musical meter is based on identifying the meter indicated in the score. For tempo classification, we generally used the tempo indication in the score, but we regard classifications that include tempo revisions reflecting a subject’s emotional response as accurate. We achieved recognition rates above 70 % for both the meter and tempo classifications by using self-organizing maps with unsupervised online training. In particular, in the tempo classification, the computer successfully processed the emotional information as directed.

Satoshi Kawamura, Hitoaki Yoshida

Origin of Randomness on Chaos Neural Network

We have proposed a hypothesis on the origin of randomness in the chaotic time series of a chaos neural network (CNN), based on empirical results. An improved pseudo-random number generator (PRNG) is proposed on the basis of the hypothesis and contamination mechanisms. The PRNG has also been implemented with fixed-point arithmetic (Q5.26). The result is expected to apply to embedded systems, for example protecting personal information in smartphones and other mobile devices.
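The proposed generator is built from a chaos neural network, which is not reproduced here. Purely to illustrate how bits can be drawn from a chaotic orbit, here is a logistic-map thresholding sketch; this is an illustrative toy, not the paper's PRNG, and a bare chaotic map like this is not cryptographically secure on its own:

```python
def logistic_bits(n, x0=0.123456789, r=3.99):
    """Extract n pseudo-random bits from the chaotic logistic map
    x <- r * x * (1 - x) by thresholding the orbit at 0.5."""
    x, bits = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits
```

A fixed-point implementation such as the paper's Q5.26 variant would replace the floating-point update with integer arithmetic, which is where finite precision starts to interact with the chaotic dynamics.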

Hitoaki Yoshida, Takeshi Murakami, Taiki Inao, Satoshi Kawamura

Artificial Neural Network Application for Parameter Prediction of Heat Induced Distortion

Heat-induced distortion has been widely studied over the years; in order to provide reliable results, thermal elastic-plastic FEM analyses have been used to estimate the distortions produced by the heat source. However, this type of analysis often involves long computational times and requires a high degree of technical knowledge from the user; moreover, it is mainly applied to specific regions, which limits the scope of the analysis. In order to provide a tool for predicting the line heating phenomenon, an artificial neural network (ANN) is used. An ANN is a powerful tool for predicting complex phenomena and is, in addition, very attractive because of its relatively modest hardware requirements and fast computational time. In this paper, parameter prediction for heat-induced distortion as an inverse problem is performed by an ANN, using the inherent deformation from a gas heating FEM analysis and the corresponding heating conditions as the training data. Exploratory analyses of the data and the model were performed to accurately predict the heating conditions. The prediction of the heating conditions necessary to generate an arbitrary deformation in the plate is a step forward in the automation of the line heating forming process. The possibility of predicting an arbitrary heat-induced distortion problem with an ANN model is shown.

Cesar Pinzon, Kazuhiko Hasewaga, Hidekazu Murakawa

The Optimization of a Lathing Process Based on Neural Network and Factorial Design Method

The capability to optimize surface roughness is critical to the surface quality of manufactured work pieces. If the performance of the available CNC machine is correctly characterized, or the relationship between inputs and output is clearly identified, the operators on the shop floor will be able to operate their machine at the highest efficiency. To achieve this objective, this research is based on an empirical study in which optimization methods are utilized to analyze the empirical data. The process in focus is the lathing process, with three input factors, spindle speed, feed rate and depth of cut, while the corresponding output is surface roughness. Two methods, namely an artificial neural network (ANN) and a 2^k factorial design, are used to construct mathematical models exploring the relationship between inputs and output. The performance of each method is compared by considering the forecasting errors after fitting the models to the empirical data. The results of this study signify that there is no significant difference between the performance of these two optimization methods.

Karin Kandananond

FPGA Implementation of Neuron Model Using Piecewise Nonlinear Function on Double-Precision Floating-Point Format

An artificial neuron model has been implemented in a field programmable gate array (FPGA). The neuron model can be applied to the learning and training of neural networks; all data types are 64 bits, and first- and second-order functions are employed to approximate the sigmoid function. The constant values of the model are tuned to provide a sigmoid-like approximate function which is both continuous and continuously differentiable. All data types of the neuron correspond to double precision in the C language. The neuron implementation uses a 48-stage pipeline. Assessment with an Altera Cyclone IV predicts an operating speed of 85 MHz. Simulation of a 4-neuron neural network on the FPGA exhibited chaotic behavior. The FPGA output chaos is influenced by the calculation precision and the characteristics of the output function. It is estimated that more than 1,000 neurons can be implemented in an Altera Cyclone IV. Obtaining chaotic behavior, where nonlinearity has a great influence, shows the effectiveness of this FPGA model; therefore, the model shows wide applicability.
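The piecewise second-order idea can be hedged into a software sketch. The breakpoints and coefficients below are illustrative choices (a common quadratic scheme whose pieces meet with matching value and slope at x = 0 and |x| = 4); they are not the tuned constants of the FPGA implementation:

```python
def sigmoid_approx(x):
    """Piecewise second-order approximation of the logistic sigmoid.
    Continuous and continuously differentiable at the breakpoints;
    maximum absolute error roughly 0.02 against the true sigmoid."""
    if x >= 4.0:
        return 1.0
    if x <= -4.0:
        return 0.0
    if x >= 0.0:
        return 1.0 - 0.5 * (1.0 - x / 4.0) ** 2
    return 0.5 * (1.0 + x / 4.0) ** 2
```

Because each branch needs only one multiply-add beyond the squaring, an approximation of this shape maps naturally onto fixed-function pipeline stages in hardware.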

Satoshi Kawamura, Masato Saito, Hitoaki Yoshida

Innovations in Intelligent Systems and Applications

Frontmatter

Video Inpainting in Spatial-Temporal Domain Based on Adaptive Background and Color Variance

Video inpainting repairs damaged regions in video. Nowadays, video cameras are commonly used to record visual memories of our lives. When people record a video, unwanted scenes or objects are sometimes present, and for various reasons the video cannot be recorded again. To solve this problem, in this paper we propose a video inpainting method to effectively repair damaged regions based on the relationships between frames in the temporal sequence and color variability in the spatial domain. The procedure of the proposed method includes adaptive background construction, removal of the unwanted objects, and repair of the damaged regions in the temporal and spatial domains. Experimental results verify that our proposed method obtains good structural properties and greatly reduces the computational time of inpainting.

Hui-Yu Huang, Chih-Hung Lin

Multi-core Accelerated Discriminant Feature Selection for Real-Time Bearing Fault Diagnosis

This paper presents a real-time and reliable bearing fault diagnosis scheme for induction motors, with discriminant feature selection based on an analysis of the optimal fault feature distribution. Sequential forward selection (SFS) with the proposed feature evaluation function is used to select the discriminative feature vector. Then, the k-nearest neighbor (k-NN) classifier is employed to diagnose unknown fault signals and validate the effectiveness of the proposed feature selection and fault diagnosis model. However, the process of feature vector evaluation for feature selection is computationally expensive. This paper therefore presents a parallel implementation of feature selection with a feature evaluation algorithm on a multi-core architecture to accelerate the algorithm. The optimal organization of processing elements (PEs) and the proper distribution of feature data into the memory of each PE improve diagnosis performance and reduce computational time to meet real-time fault diagnosis requirements.
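The wrapper loop being parallelized can be sketched as a plain sequential forward selection driven by leave-one-out 1-NN accuracy. The paper's actual feature evaluation function and its multi-core mapping are not reproduced; this single-threaded sketch only shows the repeated, independent candidate evaluations that make the algorithm expensive and easy to parallelize:

```python
import math

def loo_1nn_accuracy(X, y, feats):
    """Leave-one-out 1-NN accuracy using only the feature indices in feats."""
    correct = 0
    for i in range(len(X)):
        best_j, best_d = None, math.inf
        for j in range(len(X)):
            if j == i:
                continue
            d = sum((X[i][f] - X[j][f]) ** 2 for f in feats)
            if d < best_d:
                best_j, best_d = j, d
        correct += int(y[best_j] == y[i])
    return correct / len(X)

def sequential_forward_selection(X, y, k):
    """Greedily add, one at a time, the feature that most improves the score."""
    selected, remaining = [], list(range(len(X[0])))
    while remaining and len(selected) < k:
        # Candidate evaluations are independent of one another,
        # so they can be distributed across processing elements.
        best = max(remaining, key=lambda f: loo_1nn_accuracy(X, y, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

Each outer iteration scores every remaining feature independently, which is the work the paper distributes over PEs.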

Md. Rashedul Islam, Md. Sharif Uddin, Sheraz Khan, Jong-Myon Kim, Cheol-Hong Kim
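As a hedged illustration (not the authors' implementation), the sequential forward selection loop described above can be sketched in Python; here a simple leave-one-out 1-NN accuracy stands in for the paper's proposed feature evaluation function, which is not specified in the abstract:

```python
# Sketch of sequential forward selection (SFS). The evaluation function
# below (leave-one-out 1-NN accuracy) is a hypothetical stand-in for the
# paper's proposed feature evaluation function.

def knn_loo_accuracy(samples, labels, feats):
    """Leave-one-out 1-NN accuracy using only the selected features."""
    correct = 0
    for i, x in enumerate(samples):
        best_d, best_lab = float("inf"), None
        for j, y in enumerate(samples):
            if i == j:
                continue
            d = sum((x[f] - y[f]) ** 2 for f in feats)
            if d < best_d:
                best_d, best_lab = d, labels[j]
        correct += best_lab == labels[i]
    return correct / len(samples)

def sfs(samples, labels, n_features, k_select):
    """Greedily add the feature that most improves the evaluation score."""
    selected = []
    while len(selected) < k_select:
        scored = [(knn_loo_accuracy(samples, labels, selected + [f]), f)
                  for f in range(n_features) if f not in selected]
        selected.append(max(scored)[1])
    return selected
```

The parallelization in the paper distributes exactly this inner evaluation loop, the dominant cost, across processing elements.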

QUasi-Affine TRansformation Evolution (QUATRE) Algorithm: A New Simple and Accurate Structure for Global Optimization

The QUasi-Affine TRansformation Evolution (QUATRE) algorithm is a simple but powerful structure for global optimization. Six different evolution schemes derived from this structure are discussed in this paper. There is a close relationship between the proposed structure and the Differential Evolution (DE) structure; DE can be considered a special case of the proposed QUATRE algorithm. The performance of DE usually depends on parameter control and mutation strategy: DE has three control parameters and several mutation strategies, which makes it somewhat complicated. The proposed QUATRE is simpler than DE, as it has only one control parameter, and it is powerful from a mathematical perspective. We use the COCO framework under the BBOB benchmarks, as well as CEC Competition benchmarks, to verify the proposed QUATRE algorithm. Experimental results show that although QUATRE is simpler than DE, it is more powerful not only on unimodal but also on multimodal optimization problems.

Jeng-Shyang Pan, Zhenyu Meng, Huarong Xu, Xiaoqing Li
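For context, the DE structure that QUATRE generalizes uses three control parameters (population size NP, scale factor F, and crossover rate CR). A minimal DE/rand/1/bin minimizer on the sphere function can be sketched as follows; this is an illustrative DE baseline, not the QUATRE algorithm itself:

```python
import random

def de_optimize(f, dim, bounds, np_=20, F=0.5, CR=0.9, gens=200, seed=1):
    """Minimal DE/rand/1/bin minimizer. NP, F and CR are the three
    control parameters that QUATRE's single-parameter structure avoids."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            # Mutation: v = x_a + F * (x_b - x_c), with a, b, c distinct from i.
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    return min(fit)

sphere = lambda x: sum(v * v for v in x)
```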

Decision Support Systems

Frontmatter

Mobile Gaming Trends and Revenue Models

The study aims to identify the most important features in building games, based on grossing. It is limited to fifty iPhone games that achieved top grossing in the USA. The game features were extracted from a previous study [1] and classified through the ARM funnel into five groups (“A”, “R”, “M”, “AR”, and “RM”). The paper follows the CRISP-DM approach under SPSS Modeler: business and data understanding, data preparation, model building, and evaluation. A decision tree model is used, since the features take closed values (Yes/No) against the grossing weight. The study identified the 10 most important features out of 31; these features are important for building successful mobile games. The study emphasizes the availability of acquisition, retention, and monetization elements in every successful game, and that missing any of them leads to the failure of the game.

Khaled Mohammad Alomari, Tariq Rahim Soomro, Khaled Shaalan

Decision Making Based on Different Dimension Direction

“Dimension direction” means the evaluation direction in a decision making problem. Decision making sometimes must consider multiple criteria, multiple periods, and multiple places; criteria, period, and place are different dimension directions for making a decision. In special situations, the decision maker must consider more than three dimension directions. However, past research has considered at most two dimension directions. The goal of this research is to develop an execution process for decision making problems with multiple dimension directions. The proposed execution process can be applied flexibly to various kinds of multi-dimension-direction problems. A numerical example is implemented, some analysis and comparison are carried out, and conclusions and future research directions are discussed.

Yung-Lin Chen

Non-Conformity Detection in High-Dimensional Time Series of Stock Market Data

It is desirable to extract important information on-line from high-dimensional time series, because such data is difficult to reflect fully in decision making. Previous work with econometric techniques primarily focuses on price prediction. To make decisions in business practice, it is important to focus on human-machine interaction based on chance discovery. Here we propose Non-Conformity Detection as a method for aiding humans in discovering chances. Non-Conformity Detection is designed to detect noteworthy points that behave exceptionally compared with surrounding points in a time series. In the experiment, the method is applied to the return time series of 29 stocks in the electrical machinery industry. Four of the top five detected non-conformity dates coincide with dates that professional analysts judged important for making investment decisions. These results suggest that Non-Conformity Detection supports the discovery of chances for decision making.

Akira Kasuga, Yukio Ohsawa, Takaaki Yoshino, Shunichi Ashida

Recent Study on the Application of Hybrid Rough Set and Soft Set Theories in Decision Analysis Process

Many approaches and methods have been proposed and applied in the decision analysis process. One of the most frequently investigated is the parameterization method, which helps decision makers simplify complex data sets. The purpose of this study is to highlight the roles and implementations of hybrid rough set and soft set theories in decision making, especially in the parameter reduction process. Rough set theory and soft set theory are two powerful mathematical tools that many research works have proven to be good parameterization methods; both can handle data uncertainty and data complexity problems. Recent studies have also shown that rough set and soft set theories can be integrated to solve different problems, producing a variety of algorithms and formulations. However, most existing works validated performance only on small data sets. To test the hypothesis that hybridizing rough set and soft set theories produces good results in classification, a new alternative hybrid parameterization method is proposed as the outcome of this study. The results show that the proposed method achieves significant performance in solving the classification problem compared with other existing hybrid parameter reduction methods.

Masurah Mohamad, Ali Selamat

Multivariate Higher Order Information for Emergency Management Based on Tourism Trajectory Datasets

Higher order information from trajectory datasets for “what-if” analysis is becoming more popular for better informed decision making. However, existing research has studied higher order trajectory information based on univariate datasets only; there is as yet no research on higher order information of multivariate datasets for trajectory analysis. This paper introduces a unified data structure that supports multivariate datasets for trajectory analysis, together with approaches for analysing multivariate higher order information to make decisions related to emergency management. Interactive visualisation tools are implemented to facilitate the analysis. A tourism trajectory dataset is used to demonstrate the proposed approach.

Ye Wang, Kyungmi Lee, Ickjai Lee

Hourly Solar Radiation Forecasting Through Model Averaged Neural Networks and Alternating Model Trees

The objective of the current study was to develop a solar radiation forecasting model capable of determining the specific times during a given day that solar panels could be relied upon to produce energy in sufficient quantities to meet the demand of the energy provider, Southern Company. Model averaged neural networks (MANN) and alternating model trees (AMT) were constructed to forecast solar radiation an hour into the future, given 2003–2012 solar radiation data from the Griffin, GA weather station for training and 2013 data for testing. Generalized linear models (GLM), random forests, and multilayer perceptron (MLP) were developed, in order to assess the relative performance improvement attained by the MANN and AMT models. In addition, a literature review of the most prominent hourly solar radiation models was performed and normalized root mean square error was calculated for each, for comparison with the MANN and AMT models. The results demonstrate that MANN and AMT models outperform or parallel the highest performing forecasting models within the literature. MANN and AMT are thus promising time series forecasting models that may be further improved by combining these models into an ensemble.

Cameron R. Hamilton, Frederick Maier, Walter D. Potter
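The normalized root mean square error used above for cross-model comparison can be sketched as follows; this normalizes by the range of the observed series, one common convention, since the abstract does not state which normalization the paper uses:

```python
import math

def nrmse(observed, predicted):
    """Root mean square error normalized by the range of the observed series.
    Other conventions normalize by the mean instead of the range."""
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return rmse / (max(observed) - min(observed))
```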

Adaptive Control

Frontmatter

Fully Automated Learning for Position and Contact Force of Manipulated Object with Wired Flexible Finger Joints

We discuss modeling technology for object manipulation by a robot arm equipped with flexible finger joints. In recent years, flexible robot fingers have attracted attention because of their handling capability and safety. However, the position and contact force of a manipulated object are subject to considerable non-linear uncertainty arising from this flexibility. In this paper, we propose a modeling framework for the position and contact force of the manipulated object. The proposed framework is an online learning method composed of motor babbling, a dynamics learning tree (DLT), and the $$\epsilon $$-greedy method. In the experiments, the effectiveness of the DLT was compared with a neural network (NN), and the effectiveness of the proposed framework was validated using a drawing task with a humanoid robot equipped with flexible finger joints. The proposed framework realized fully automated incremental manipulation learning.

Kanta Watanabe, Shun Nishide, Manabu Gouko, Chyon Hae Kim

Vehicle Dynamics Modeling Using FAD Learning

Highly precise vehicle dynamics modeling is indispensable for self-driving technology. We propose a model learning framework that utilizes FAD learning (FAD abbreviates free dynamics, actuator, and disturbance), motor babbling, and a dynamics learning tree. In the proposed framework, modeling error was decreased compared with a conventional neural network approach, and the framework is applicable to online learning. In experiments, FAD learning and the dynamics learning tree decreased learning error, and the dynamics of a simulated car was learned using motor babbling. The proposed framework is applicable to a variety of mechanical systems.

Keigo Eto, Yuichi Kobayashi, Chyon Hae Kim

Adaptive Model for Traffic Congestion Prediction

Traffic congestion is influenced by various factors such as the weather, the physical condition of the road, and the traffic routing. Since such factors vary with the type of road network, restricting a traffic prediction model to pre-decided congestion factors could compromise prediction accuracy. In this paper, we propose a traffic prediction model that can adapt to the road network and appropriately weigh the contribution of each congestion-causing or congestion-reflecting factor. Our model is based on a multiple-symbol Hidden Markov Model, wherein the correlation among all the symbols (congestion-causing factors) is built using bivariate analysis. The traffic congestion state is finally deduced from the influence of all the factors. Our prediction model was evaluated by comparing two different cases of traffic flow: models built for uninterrupted (without traffic signals) and interrupted (with traffic signals) traffic flow. The resulting prediction accuracy is 79 % for uninterrupted and 88 % for interrupted traffic flow.

Pankaj Mishra, Rafik Hadfi, Takayuki Ito

Soft Computing and Multi-agent Systems

Frontmatter

Intellectual Processing of Human-Computer Interruptions in Solving the Project Tasks

The paper focuses on an approach that improves management processes in solving project tasks. The improvement is achieved by combining Agile managerial mechanisms with the management of human-computer interruptions when tasks are reflected onto programmable queues. The proposed combination opens the possibility of planned and situational interactions between the designer and queues of tasks at interruption points, taking evolving multitasking into account. Furthermore, the approach supports intelligent processing of the reasons for interruptions, for their more effective use.

P. Sosnin

How Strategy Continuity Influences the Evolution of Cooperation in the Spatial Prisoner’s Dilemma Game with Interaction Stochasticity

The evolution of cooperation among selfish individuals is a fundamental issue in artificial intelligence. Recent work has revealed that interaction stochasticity can promote cooperation in evolutionary spatial prisoner’s dilemma games. Since players’ strategies in previous works are discrete (either cooperation or defection), we focus on the evolutionary spatial prisoner’s dilemma game with continuous strategies under the interaction stochasticity mechanism. In this paper, we find that strategy continuity does not enhance the cooperation level. The simulation results show that the cooperation level is lower with continuous strategies when the interaction rate is low; with a higher interaction rate, the cooperation levels of the continuous-strategy and discrete-strategy systems are very close. The reason behind these phenomena is also given. Our results may shed some light on the role of continuous strategies and interaction stochasticity in the emergence and persistence of cooperation in spatial networks.

Xiaowei Zhao, Xiujuan Xu, Wangpeng Liu, Yixuan Huang, Zhenzhen Xu

$$\pi $$-SROIQ(D): Possibilistic Description Logic for Uncertain Geographic Information

The central question addressed in this paper is how to represent the uncertain knowledge present in many applications in the geographic domain. The resulting proposition is a possibilistic extension of the very expressive Description Logic $$\mathcal {SROIQ(D)}$$, the underlying logic of OWL2, called $$\pi $$-$$\mathcal {SROIQ(D)}$$. It is a solution for handling uncertainty and dealing with inconsistency in geographic applications. Both the syntax and the semantics of the proposed description logic are considered. In addition, we provide a tableau algorithm for reasoning with the possibilistic DL $$\pi $$-$$\mathcal {SROIQ(D)}$$.

Safia Bal-Bourai, Aicha Mokhtari

Smart Solution of Alternative Energy Source for Smart Houses

This paper describes the design and implementation of a smart photovoltaic power source. It describes the principle of transforming solar irradiation into energy, the influences on its effectiveness, and ways to achieve higher energy gain. Based on the control algorithm, the microcontroller-controlled electronics can optimally adjust the front surface of the panel towards the Sun and measure the energy balance of the whole device. All electronic devices built as products of this project directly use or are based on the Arduino development kit. This work also designs and implements solutions for visualization and storage of the measured data. The benefit of the smart power source in practical operation is evaluated by comparison with measured data obtained from a photovoltaic panel in a fixed position.

Jakub Vit, Ondrej Krejcar

An Assembly Sequence Planning Approach with a Multi-state Particle Swarm Optimization

Assembly sequence planning (ASP) has become one of the major challenges in product design and manufacturing. A good assembly sequence reduces the cost and time of the manufacturing process. However, assembly sequence planning is a classical hard combinatorial optimization problem, and it becomes more difficult to solve as the number of product components grows. In this paper, an approach based on a new variant of the Particle Swarm Optimization (PSO) algorithm, called multi-state Particle Swarm Optimization (MSPSO), is used to solve the assembly sequence planning problem. Like PSO, MSPSO incorporates the swarming behaviour of animals and human social behaviour, the best previous experience of each individual member of the swarm, and the best previous experience of all other members; in addition, a rule ensures that each assembly component occurs exactly once in each candidate solution, subject to precedence constraints, so that the best feasible assembly sequence can be determined. To verify the feasibility and performance of the proposed approach, a case study was performed and comparisons were conducted against three other approaches based on Simulated Annealing (SA), Genetic Algorithm (GA), and Binary Particle Swarm Optimization (BPSO). The experimental results show that the proposed approach achieves significant improvement.

Ismail Ibrahim, Zuwairie Ibrahim, Hamzah Ahmad, Zulkifli Md. Yusof
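The feasibility rule described above can be sketched as a check that a candidate sequence visits each component exactly once and respects the precedence constraints; the dictionary encoding of the constraints below is a hypothetical illustration, not the paper's representation:

```python
def is_feasible(sequence, precedence):
    """Check that every component appears exactly once and that no
    component is assembled before all of its required predecessors.
    `precedence` maps a component to the set of components that must
    come before it (illustrative encoding of the precedence constraints)."""
    if len(set(sequence)) != len(sequence):
        return False  # a component occurs more than once
    placed = set()
    for comp in sequence:
        if not precedence.get(comp, set()) <= placed:
            return False  # a predecessor has not been assembled yet
        placed.add(comp)
    return True
```

A metaheuristic such as MSPSO would use a check like this (or a repair operator enforcing the same property) to keep candidate sequences feasible.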

Evolutionary Algorithms and Heuristic Search

Frontmatter

A Black Hole Algorithm for Solving the Set Covering Problem

The set covering problem is a classical optimization benchmark with many industrial applications, such as production planning, assembly line balancing, and crew scheduling, among several others. In this work, we solve this problem by employing a recent nature-inspired metaheuristic based on the black hole phenomenon. The core of the metaheuristic is enhanced with transfer functions and discretization methods to handle the binary nature of the problem. We report encouraging experimental results, in which the proposed approach reaches various global optima for a well-known instance set from Beasley’s OR-Library.

Ricardo Soto, Broderick Crawford, Ignacio Figueroa, Stefanie Niklander, Eduardo Olguín
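For reference, the SCP asks for a minimum-cost collection of subsets whose union covers a universe of elements. A classical greedy baseline (shown here purely for illustration, not the black hole metaheuristic of the paper) repeatedly picks the subset with the best cost-per-newly-covered-element ratio:

```python
def greedy_set_cover(universe, subsets, costs):
    """Greedy SCP heuristic: repeatedly choose the subset minimizing
    cost / (number of still-uncovered elements it covers).
    Assumes the subsets can jointly cover the universe."""
    uncovered = set(universe)
    chosen, total = [], 0.0
    while uncovered:
        best = min(
            (i for i in range(len(subsets)) if subsets[i] & uncovered),
            key=lambda i: costs[i] / len(subsets[i] & uncovered),
        )
        chosen.append(best)
        total += costs[best]
        uncovered -= subsets[best]
    return chosen, total
```

Metaheuristics such as the binary black hole approach aim to beat this kind of baseline on the OR-Library instances by exploring the binary solution space globally.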

Challenging Established Move Ordering Strategies with Adaptive Data Structures

The field of game playing is a particularly well-studied area within the context of AI, leading to the development of powerful techniques, such as the alpha-beta search, capable of achieving competitive game play against an intelligent opponent. It is well known that tree pruning strategies, such as alpha-beta, benefit strongly from proper move ordering, that is, searching the best element first. Inspired by the formerly unrelated field of Adaptive Data Structures (ADSs), we have previously introduced the History-ADS technique, which employs an adaptive list to achieve effective and dynamic move ordering, in a domain independent fashion, and found that it performs well in a wide range of cases. However, previous work did not compare the performance of the History-ADS heuristic to any established move ordering strategy. In an attempt to address this problem, we present here a comparison to two well-known, acclaimed strategies, which operate on a similar philosophy to the History-ADS, the History Heuristic, and the Killer Moves technique. We find that, in a wide range of two-player and multi-player games, at various points in the game’s progression, the History-ADS performs at least as well as these strategies, and, in fact, outperforms them in the majority of cases.

Spencer Polk, B. John Oommen

A Binary Black Hole Algorithm to Solve the Set Covering Problem

The set covering problem (SCP) is one of the most representative combinatorial optimization problems, with multiple applications in engineering, the sciences, and other disciplines. It aims to find a set of solutions that meets the needs defined in the constraints at the lowest possible cost. In this paper we use an existing binary algorithm inspired by Binary Black Holes (BBH) to solve multiple instances of the problem with known benchmarks obtained from the OR-Library. The presented method emulates the behavior of these celestial bodies, using a rotation operator to generate good solutions. After testing this algorithm, we implemented improvements in certain operators and added others, also inspired by the physical behavior of black holes, to improve search, exploration, and the resulting solutions.

Álvaro Gómez Rubio, Broderick Crawford, Ricardo Soto, Adrián Jaramillo, Sebastián Mansilla Villablanca, Juan Salas, Eduardo Olguín

Solving the Set Covering Problem with the Soccer League Competition Algorithm

The Soccer League Competition (SLC) algorithm is a new metaheuristic approach intended to solve complex optimization problems. It is based on the interaction model present in soccer teams and the goal of winning every match, becoming the best team and league of players. This paper presents adaptations of the initial SLC model for application to the Set Covering Problem (SCP), with a Python implementation.

Adrián Jaramillo, Broderick Crawford, Ricardo Soto, Sebastián Mansilla Villablanca, Álvaro Gómez Rubio, Juan Salas, Eduardo Olguín

An Artificial Fish Swarm Optimization Algorithm to Solve Set Covering Problem

The Set Covering Problem (SCP) consists of finding a set of solutions that covers a set of needs at the lowest possible cost. The problem has many applications, such as rolling production lines or the placement of services like hospitals. SCP has previously been solved with different algorithms, such as genetic algorithms, cultural algorithms, and firefly algorithms, among others. The objective of this paper is to show the performance of an Artificial Fish Swarm Algorithm (AFSA) in solving the SCP. This algorithm simulates the behavior of a fish shoal in water, using a population of points in space to represent the positions of fish in the shoal. We present a simplified version of AFSA in a binary domain, with modifications for the SCP. The method was tested on SCP benchmark instances from the OR-Library website.

Broderick Crawford, Ricardo Soto, Eduardo Olguín, Sebastián Mansilla Villablanca, Álvaro Gómez Rubio, Adrián Jaramillo, Juan Salas

The Impact of Using Different Choice Functions When Solving CSPs with Autonomous Search

Constraint programming is a powerful technology for the efficient solving of optimization and constraint satisfaction problems (CSPs). A main concern with this technology is that efficient problem resolution usually relies on the solving strategy employed. Unfortunately, selecting the proper one is known to be complex, as the behavior of strategies is commonly unpredictable. Recently, Autonomous Search appeared as a new technique to tackle this concern: the idea is to let the solver adapt its strategy during solving time in order to improve performance. This task is controlled by a choice function which decides, based on performance information, how the strategy must be updated. However, choice functions can be constructed in several manners, varying the information used to take decisions, and such variations may lead to very different resolution processes. In this paper, we study the impact on the solving phase of 16 different carefully constructed choice functions. As a test bed we employ a set of well-known benchmarks that collect general features present in most CSPs. Interesting experimental results identify the best-performing choice functions for solving CSPs.

Ricardo Soto, Broderick Crawford, Rodrigo Olivares, Stefanie Niklander, Eduardo Olguín

Binary Harmony Search Algorithm for Solving Set-Covering Problem

This paper generates solutions to the Set Covering Problem (SCP) through the use of a metaheuristic. The results were obtained using a variation of Harmony Search called the Binary Global-Best Harmony Search Algorithm. To measure the effectiveness of the technique against other metaheuristics, the Beasley benchmark was used.

Juan Salas, Broderick Crawford, Ricardo Soto, Álvaro Gómez Rubio, Adrián Jaramillo, Sebastián Mansilla Villablanca, Eduardo Olguín

Differential Evolution for Multi-objective Robust Engineering Design

Various algorithms based on Differential Evolution (DE) have been demonstrated to optimize various engineering problems. While there has been work on using variations of DE for solving multi-objective problems, little work has applied this methodology to tolerance design. The ability to handle tolerances is extremely important in robust design, where a product or process is to be optimized while minimizing its response to uncontrollable manufacturing and/or operational variations. This paper demonstrates how Multi-Objective Differential Evolution can be applied to multiple-objective tolerance design. The algorithm’s utility is highlighted with two canonical engineering design optimization problems.

Andrew Linton, Babak Forouraghi

Multiple Objectives Reconfiguration in Distribution System Using Non-Dominated Sorting Charged System Search

Distribution system reconfiguration is achieved by changing the statuses of switches, and a number of distribution system operation targets can be achieved through feeder reconfiguration. In general, constructing a feeder reconfiguration must consider several factors, for example minimizing primary feeder losses, reducing the number of switch actions, and the voltage profile. The weighted sum method is commonly used for such multiple-objective problems; however, it yields only one solution, which is not preferred by distribution system operators. The multi-objective approach is one way to provide multiple compromise solutions. In order to provide operators with different compromise solutions, a Non-Dominated Sorting Charged System Search (NDSCSS) is proposed to solve the multi-objective problems of distribution systems. Because different topologies yield different values of these factors, diverse topologies lead to different solutions. To generate legal topologies, the Zone Real Number Strings (ZRNS) encoding/decoding scheme is used. The method is implemented on the 33-bus system, and the performance of Non-Dominated Sorting Evolutionary Programming (NSEP), Multi-Objective Particle Swarm Optimization (MOPSO), and NDSCSS is compared. The results indicate that NDSCSS finds the best solutions among the three considered algorithms for distribution system reconfiguration problems.

Cheng-Chieh Chu, Men-Shen Tsai

System Integration for Real-Life Applications

Frontmatter

Design of a Communication System that Can Predict Situations of an Absentee Using Its Behavior Log

We designed a communication system that can predict the current situation of an absentee on the basis of their behavior log and provide a visitor with communication tools suitable for the situation. An operative evaluation revealed that the proposed system can predict user situations with high accuracy, and that it can output expected values in a short time by feeding values back to the possibility table of our Bayesian network. Furthermore, the proposed system realizes communication suitable for the situation and for information sharing, including prediction, using current information and communication technology.

Hironori Hiraishi

Autonomic Smart Home Operations Management Using CWMP: A Task-Centric View

Despite the maturity of the smart living space research field, the Smart Home is still more a luxury product than a daily necessity for most families. Operations management issues are essential for a new technology to be accepted by the mass consumer market; however, only few attempts have been made in this direction. In this paper, we present the design and implementation of a CWMP-based platform that supports autonomic operations management. The experimental results show that the proposed approach is stable and is able to drive operations tasks smoothly. We also demonstrate the feasibility of the platform by realizing two application scenarios on a prototype of the proposed approach.

Chun-Feng Liao, Shih-Ting Huang, Yi-Ching Wang

An Event-Driven Adaptive Cruise Controller

The proposed Event-Driven Adaptive Cruise Controller (EDACC) serves as a longitudinal driver assistant, accelerating, decelerating, and stopping the host vehicle given the readings of various sensors. EDACC uses a simplified non-linear longitudinal vehicle model and a hierarchical control structure of PI and PID controllers integrated with embedded application-specific logic. In addition to the adaptability of the ACC, events can be added, such as the host vehicle entering a speed-limit zone and/or having a punctured tire. The proposed EDACC is designed and implemented using Matlab and Simulink to simulate multiple scenarios.

Jessica Jreijiry, Mohamad Khaldi

Design and Implementation of a Smartphone-Based Positioning System

In this research study, we investigate the feasibility of a smartphone-based positioning system. The system allows users to get their current location via their smartphones. The client software required on a user’s smartphone can be distributed and installed easily as a standard mobile app. Once the app is installed, the user simply takes one or two pictures of target objects nearby and uploads them to the positioning system over an Internet connection; the positioning system then identifies the location based on the uploaded pictures and other related information. We conducted a field test to identify locations within a building complex on a campus. We collected thousands of images of the outward appearance of several campus buildings on different days, classified into 17 classes according to the location where each picture was taken. The experimental results show that, with good control of image quality for both training and testing images, the correct classification rate can be as high as 98.3 %. Even with a large scope-of-view mismatch between the training images and the tested images, the proposed scheme still achieves a good correct classification rate (86.7 % and 77.3 % for the covering and covered cases, respectively) compared with a random guess (1/17 $$=$$ 5.88 %).

Chun-Chao Yeh, Yu-Ching Lo, Chin-Chun Chang

Prototypical Design and Implementation of an Intelligent Network Data Analysis Tool Collaborating with Active Information Resource

Data analysis methodologies such as data mining, statistical analysis, and machine learning are important notions in network management for dealing with complex network structures and growing network threats. Although a number of analytics tools are available today, it is not easy to apply them to network management, because ordinary administrators do not have professional knowledge of mathematics and statistics. To address this knowledge problem in network management, we propose an agent-based analysis support system for network data and a mechanism for its collaboration with an autonomic network management system. An evaluation experiment using a prototype system shows that the system can reduce the intellectual load of the users’ analysis tasks.

Kazuto Sasai, Hideyuki Takahashi, Gen Kitagata, Tetsuo Kinoshita

Backmatter
