
About this Book

The two-volume set LNCS 7653 and 7654 constitutes the refereed proceedings of the 4th International Conference on Computational Collective Intelligence (ICCCI), held in Ho Chi Minh City, Vietnam, in November 2012. The 113 revised full papers presented were carefully reviewed and selected from 397 submissions. The papers are organized in topical sections on (Part I) knowledge integration; data mining for collective processing; fuzzy, modal, and collective systems; nature inspired systems; language processing systems; social networks and semantic web; agent and multi-agent systems; classification and clustering methods; modeling and optimization techniques for business intelligence; (Part II) multi-dimensional data processing; web systems; intelligent decision making; methods for scheduling; collective intelligence in web systems – web systems analysis; advanced data mining techniques and applications; cooperative problem solving; computational swarm intelligence; and semantic methods for knowledge discovery and communication.



Knowledge Integration

Comparison of One-Level and Two-Level Consensuses Satisfying the 2-Optimality Criterion

This paper examines the results of methods determining one-level and two-level consensuses fulfilling the 2-optimality criterion, with reference to the optimal solution. The 2-optimality criterion requires the sum of the squared distances between a consensus and the profile's elements to be minimal. This problem is NP-complete, so heuristic approaches to solving it are presented. The research demonstrates that the one-level consensus always gives a better solution: in comparison to the optimal solution, the two-level algorithm gives results about 5% worse and the one-level method about 1% worse. Additionally, the author considers how many units are required to determine a reasonable consensus, i.e., when a profile is susceptible to consensus. The analyses presented in this paper show that increasing the cardinality of a profile increases the probability of its being susceptible to consensus; for the assumed study, a profile cardinality greater than 384 gives a good result.

Adrianna Kozierkiewicz-Hetmańska
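The 2-optimality criterion from the abstract above can be sketched concretely. Assuming a profile of binary vectors under Hamming distance (an illustrative choice; the paper's profiles and distance function may differ), a brute-force consensus search looks like this:

```python
from itertools import product

def hamming(a, b):
    # Hamming distance between two equal-length binary tuples
    return sum(x != y for x, y in zip(a, b))

def cost_2opt(candidate, profile):
    # 2-optimality: sum of SQUARED distances to the profile's elements
    return sum(hamming(candidate, p) ** 2 for p in profile)

def consensus_2opt(profile):
    # Exhaustive search over all binary vectors; exponential in length,
    # which is why the paper resorts to heuristics for the general problem
    n = len(profile[0])
    return min(product((0, 1), repeat=n), key=lambda c: cost_2opt(c, profile))

profile = [(0, 0, 1), (0, 1, 1), (1, 1, 1)]
best = consensus_2opt(profile)
```

Squaring the distances makes the consensus favor candidates that are moderately close to every profile element over ones that match some elements exactly but sit far from the rest.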

A Heuristic Method for Collaborative Recommendation Using Hierarchical User Profiles

Document recommendation in information retrieval is a well-known problem. Recommending a profile in order to personalize document search is a less common approach. In this paper a specific solution to profile recommendation is proposed, using knowledge integration methods. A hierarchical user profile is defined to represent the user. Each new user joining an information retrieval system is assigned a prepared non-empty profile based on other, similar users. To create such a profile, knowledge integration methods are used. A set of postulates is proposed to describe such a representative profile, and criteria measures are used to determine whether the solution of a specific algorithm satisfies these postulates. Three integration algorithms are proposed and evaluated, including a heuristic algorithm. In future research, these algorithms will be used in a practical system.

Marcin Maleszka, Bernadetta Mianowska, Ngoc-Thanh Nguyen

Solving Conflict on Collaborative Knowledge via Social Networking Using Consensus Choice

While knowledge exchange among users is rapidly increasing in social networking environments, collaborative knowledge in social networking is becoming more and more essential for knowledge management systems. In this work, a method for solving conflict on collaborative knowledge via social networking using consensus choice is presented. A knowledge base can be considered as a pair KB = (O, I), where O is an ontology and I is a set of instances of concepts belonging to the ontology O. The main issue presented here is how to organize a collaboration process and resolve conflicts in collaborative knowledge creation via social networking. In this work, an ontology is considered as a sharing mechanism for social collaboration. The structure of a collaborative group is distinguished into three types: centralized, decentralized, and distributed groups. For each group type, we propose a corresponding algorithm for conflict resolution using consensus choice.

Quoc Uy Nguyen, Trong Hai Duong, Sanggil Kang

Integrating Multiple Experts for Correction Process in Interactive Recommendation Systems

To improve the performance of the recommendation process, most recommendation systems (RecSys) should collect better ratings from users. In particular, the rating process is an important task in interactive RecSys, which can ask users to correct their own ratings. However, in the real world there are many inconsistencies (e.g., mistakes and missing values) or incorrect entries in the user ratings. Thereby, an expert-based recommendation framework has been studied to select the most relevant experts on a certain item attribute (or value). This kind of RecSys can (i) discover user preferences and (ii) determine a set of experts based on the attributes and values of items. In this paper, we propose a consensual recommendation framework integrating multiple experts to conduct the correction process. Since the ratings from experts are assumed to be reliable and correct, we first analyze the user profile to determine the preference and find a set of experts. Next, we measure a minimal inconsistency interval (MinIncInt) that might contain incorrect ratings. Finally, we propose solutions to correct the incorrect ratings based on ratings from multiple experts.

Xuan Hau Pham, Jason J. Jung, Ngoc-Thanh Nguyen

Modeling Collaborative Knowledge of Publishing Activities for Research Recommendation

We have applied a social network analysis (SNA) approach in our current research on recommender systems in the field of scientific research. One of the challenges for SNA-based methods is how to identify and quantify relationships of actors in a specified social community; in this context, the question is how to extract and organize a social structure from a collection of scientific articles. To do so, we proposed and developed a collaborative knowledge model of researchers based on their publishing activities. The collaborative knowledge model (CKM) forms a collaborative network that is used to represent and qualify collaborative relationships. The proposed model is based on the combination of graph theory and probability theory and consists of three key components: CoNet (a scientific collaborative network), M (measures), and R (rules). The model aims to support recommendations for researchers, such as research paper recommendation, collaboration recommendation, expert recommendation, and publication venue recommendation, on which we have been working.

Tin Huynh, Kiem Hoang

Data Mining for Collective Processing

A New Approach for Problem of Sequential Pattern Mining

Frequent Pattern Mining is an important data mining task and has been a focal theme in data mining research. One of its main subproblems is Sequential Pattern Mining, which retrieves the relationships among objects in sequential datasets. AprioriAll is a typical algorithm for Sequential Pattern Mining, but its complexity is high and it is difficult to apply to large datasets. Recently, to overcome this difficulty, there has been much research on new approaches, such as custom-built Apriori algorithms, modified Apriori algorithms, the Frequent Pattern tree and its developments, and the integration of Genetic Algorithms, Rough Set Theory, or Dynamic Functions. However, challenging research issues remain: time consumption is still a hard problem in Sequential Pattern Mining. This paper introduces a new approach with a model presented through definitions and operations. The proposed algorithm based on this model finds the sequential patterns in quadratic time and significantly improves the speed of calculation and data analysis.

Thanh-Trung Nguyen, Phi-Khu Nguyen
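As a minimal sketch of the support counting that underlies Apriori-style sequential pattern mining (the toy database and pattern below are illustrative, not from the paper):

```python
def is_subsequence(pattern, sequence):
    # True if pattern's items appear in sequence in order,
    # not necessarily contiguously ("x in iterator" consumes the iterator)
    it = iter(sequence)
    return all(item in it for item in pattern)

def support(pattern, database):
    # Fraction of sequences in the database that contain the pattern;
    # a pattern is frequent when its support meets a minimum threshold
    return sum(is_subsequence(pattern, s) for s in database) / len(database)

db = [list("abcd"), list("acbd"), list("abd"), list("bcd")]
```

An Apriori-style miner would grow candidate patterns level by level, pruning any candidate whose support falls below the threshold, since no super-pattern of an infrequent pattern can be frequent.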

Robust Human Detection Using Multiple Scale of Cell Based Histogram of Oriented Gradients and AdaBoost Learning

Human detection is an important task in many applications such as intelligent transport systems, surveillance systems, automatic human assistance systems, image retrieval, and so on. This paper proposes a multiple-scale-of-cell based Histogram of Oriented Gradients (HOG) feature description for a human detection system. Using these proposed feature descriptors, a robust system is developed based on a decision tree structure of a boosting algorithm. In this system, the integral image based method is utilized to compute feature descriptors rapidly, and cascade classifiers are then used to reduce computational cost. The experiments were performed on INRIA's database and our own database, which includes samples in several different sizes. The experimental results showed that our proposed method produces high performance, with a lower false positive rate and higher recall than the standard HOG feature description. The method is also efficient across different resolutions and gesture poses under a variety of backgrounds and lighting conditions, as well as for individual humans in crowds and under partial occlusion.

Van-Dung Hoang, My-Ha Le, Kang-Hyun Jo
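The integral-image trick mentioned in the abstract, which lets rectangular cell sums (as needed for HOG features) be computed in constant time, can be sketched as follows (a generic summed-area table, not the authors' implementation):

```python
def integral_image(img):
    # Summed-area table: ii[y][x] = sum of img[0..y][0..x]
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    # Sum over the inclusive rectangle [y0..y1] x [x0..x1] in O(1),
    # using at most four table lookups
    total = ii[y1][x1]
    if y0 > 0:
        total -= ii[y0 - 1][x1]
    if x0 > 0:
        total -= ii[y1][x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1][x0 - 1]
    return total

img = [[1, 2], [3, 4]]
ii = integral_image(img)
```

Building one table per gradient-orientation bin makes every cell histogram a handful of lookups, regardless of cell size, which is what makes multiple cell scales affordable.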

Discovering Time Series Motifs Based on Multidimensional Index and Early Abandoning

Time series motifs are pairs of previously unknown sequences in a time series database, or subsequences of a longer time series, which are very similar to each other. Since their formalization in 2002, motif discovery has been used to solve problems in several application areas. In this paper, we propose a novel approach for discovering approximate motifs in time series, based on the R*-tree and the idea of early abandoning. Our method is time and space efficient because it only keeps Minimum Bounding Rectangles (MBRs) of the data in memory and needs a single scan over the entire time series database, plus a few reads of the original disk data to validate the results. The experimental results showed that our proposed algorithm outperforms the popular Random Projection method in efficiency.

Nguyen Thanh Son, Duong Tuan Anh
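The early-abandoning idea can be illustrated with a naive motif search: the distance computation stops as soon as the partial sum exceeds the best distance found so far (the paper's R*-tree/MBR pruning is omitted in this sketch, and the trivial-match exclusion rule is a simplified assumption):

```python
def early_abandon_dist(a, b, best_so_far):
    # Squared Euclidean distance with early abandoning: stop as soon
    # as the running sum exceeds the best distance found so far
    total = 0.0
    for x, y in zip(a, b):
        total += (x - y) ** 2
        if total > best_so_far:
            return None  # abandoned: cannot beat the current best
    return total

def best_motif_pair(series, m):
    # Naive motif discovery over all non-overlapping subsequence
    # pairs of length m, pruned only by early abandoning
    best, pair = float("inf"), None
    subs = [series[i:i + m] for i in range(len(series) - m + 1)]
    for i in range(len(subs)):
        for j in range(i + m, len(subs)):  # skip trivial (overlapping) matches
            d = early_abandon_dist(subs[i], subs[j], best)
            if d is not None and d < best:
                best, pair = d, (i, j)
    return pair, best
```

As the best-so-far distance shrinks, abandoning kicks in earlier and earlier, which is where most of the speed-up comes from.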

A Hybrid Approach of Pattern Extraction and Semi-supervised Learning for Vietnamese Named Entity Recognition

Supervised learning for contemporary Vietnamese Named Entity Recognition requires a large hand-annotated corpus, which is challenging to obtain. We therefore propose a hybrid approach of pattern extraction and semi-supervised learning. An applied rule-based method helps generate patterns automatically. Part-of-speech tagging, lexical diversity, and chunking are explored to define rules in pattern extraction, which are used for identifying potential named entities. Semi-supervised learning trains on a small amount of seed named entities to categorize named entities in the extracted patterns. In experiments, our approach shows a good increase in system accuracy compared with others for Vietnamese.

Duc-Thuan Vo, Cheol-Young Ock

Information Extraction from Geographical Overview Maps

The paper presents a method of information extraction from overview maps. The idea is based on recognizing text located on the map and on finding locations corresponding to the extracted text labels using the GeoNames ontology. The method consists of three phases. The first one performs map image processing in order to recognize text labels. The next phase verifies these labels and marks them as being sure or unsure locations. In the third phase the map is interpreted based on the locations found. The second and third phases make use of the ontology. The preliminary results are promising for further development of the method.

Roman Pawlikowski, Krzysztof Ociepa, Urszula Markowska-Kaczmar, Pawel B. Myszkowski

Pixel-Based Object Detection and Tracking with Ensemble of Support Vector Machines and Extended Structural Tensor

In this paper we propose a system for visual object detection and tracking based on the extended structural tensor and an ensemble of one-class support vector machines. First, the input color image is transformed with an anisotropic process into the extended structural tensor. Then the tensor space is clustered into a number of partitions, which are used to train a corresponding number of one-class support vector machines composing an ensemble of classifiers. At run-time the ensemble classifies the input video stream into object and background. Thanks to the highly discriminative properties of the extended structural tensor and to the diversity of the ensemble of classifiers, the method performs very well, as demonstrated by experiments on real video sequences.

Bogusław Cyganek, Michał Woźniak

A Tree-Based Approach for Mining Frequent Weighted Utility Itemsets

In this paper, we propose a method for mining Frequent Weighted Utility Itemsets (FWUIs) from quantitative databases. Firstly, we adopt the WIT (Weighted Itemset-Tidset) tree data structure, introduced for mining high utility itemsets in the work of Le et al. (2009), and modify it into the MWIT tree (M stands for Modification) for mining FWUIs. Next, we propose an algorithm for mining FWUIs using the MWIT-tree. We test the proposed algorithm on many databases and show that it is very efficient.

Bay Vo, Bac Le, Jason J. Jung

A Novel Trajectory Privacy-Preserving Future Time Index Structure in Moving Object Databases

The next generation of location-based services is predicted to achieve superior development over the coming years. Keeping pace with this growth are new trends of predictive applications emerging to meet the demands of end-users. The violation of users' private information through position disclosure, however, undermines their trust when they enjoy such services. In this paper, therefore, we propose a novel index structure known as the PP-tree, which is able to deal with predictive and aggregate queries and is aware of trajectory privacy protection for the future positions of moving objects. Moreover, a prediction model and related strategies are also introduced to support location-based applications while user privacy is preserved. Last but not least, privacy analyses and performance experiments show how well the proposed method works.

Trong Nhan Phan, Tran Khanh Dang

Fuzzy, Modal and Collective Systems

Summarizing Knowledge Base with Modal Conditionals

This work proposes and evaluates a method of summarizing a transaction base with conditional formulas and their modal extensions. Modality expresses certainty levels using the phrases 'It is possible that' and 'I believe that'. The method is based on previous work on the grounding of natural language statements. The semantics of the conditionals, as opposed to association rules, are consistent with their conventional natural language meaning. A software simulation is described and implemented. Some interesting results are discussed, showing the method's pros and cons.

Grzegorz Skorupa, Radosław Katarzyniak

Modeling PVT Properties of Crude Oil Systems Based on Type-2 Fuzzy Logic Approach and Sensitivity Based Linear Learning Method

In this paper, we study a prediction model of Pressure-Volume-Temperature (PVT) properties of crude oil systems using a hybrid of a type-2 fuzzy logic system (type-2 FLS) and the sensitivity based linear learning method (SBLLM). PVT properties are very important in reservoir engineering computations, whereby an accurate determination of PVT properties is important for the subsequent development of an oil field. In the formulation used, for the type-2 FLS the value of a membership function corresponding to a particular PVT property value is no longer crisp; rather, it is associated with a range of values that can be characterized by a function reflecting the level of uncertainty. In the case of SBLLM, sensitivity analysis coupled with a linear training algorithm for each of the two layers is employed, which ensures that the learning curve stabilizes soon and behaves homogeneously throughout the entire process. Results indicated that the type-2 FLS performed better for the dataset with many data points (782 points), while SBLLM performed better for the small dataset (160 points).

Ali Selamat, S. O. Olatunji, Abdul Azeez Abdul Raheem
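The key difference between a type-1 and an interval type-2 membership grade can be sketched as follows (the triangular function and the fixed uncertainty width `blur` are illustrative assumptions, not the paper's model):

```python
def tri(x, a, b, c):
    # Type-1 triangular membership function with feet a, c and peak b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def interval_type2_membership(x, a, b, c, blur=0.1):
    # Interval type-2 membership: instead of one crisp grade, return a
    # [lower, upper] band (the "footprint of uncertainty"); here the
    # band is just the type-1 grade widened by a fixed amount
    mu = tri(x, a, b, c)
    return max(0.0, mu - blur), min(1.0, mu + blur)
```

The width of the band is what lets a type-2 FLS encode how uncertain the membership assignment itself is, which is the property the abstract highlights for noisy PVT measurements.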

On Structuring of the Space of Needs in the Framework of Fuzzy Sets Theory

The article focuses on modeling consumers' needs. The authors develop and describe a theoretical model based on Maslow's hierarchy of needs. The presented approach allows comparing consumers represented by vectors of needs. Consumers' preferences are described in the framework of fuzzy sets theory. The authors apply a measure of consumers' dissimilarity and suggest how large groups of individuals can be compared and how such a space can be structured. The goal of this paper is to present our current research directions, with special interest paid to issues that emerged while we were discussing the concept of a similarity relation between consumers. The formal description utilizes standard and well-known mathematical operators. The greatest attention is paid to the application of our model. The originality of our idea lies in the interpretation and formal description of human needs, treated as the groundwork of the decision-making process. In contrast to existing theories and models, we believe that explaining human behavior must consider the most basic level, which is needs.

Agnieszka Jastrzebska, Wladyslaw Homenda

Comparison of Fuzzy Combiner Training Methods

Recently, neural network techniques and fuzzy logic inference systems have been receiving increasing attention. At the same time, methods of establishing a decision by a group of classifiers are regarded as a general problem in various application areas of pattern recognition. The fuzzy combiner proposed by the authors harnesses the support values from classifiers to provide the final response, imposing no other restrictions on their structure. This paper extends the work on generalizing two-class classification into multiclass classification by means of a fuzzy inference system. Different methods of fuzzy combiner training are investigated, and the results of computer experiments carried out on UCI benchmark datasets in the Matlab environment are presented.

Tomasz Wilk, Michał Woźniak

An Axiomatic Model for Merging Stratified Belief Bases by Negotiation

This paper presents an axiomatic model for merging stratified belief bases by negotiation. We introduce the concept of a mapping solution, which maps the preferences of agents into layers, as a vehicle to represent the belief states of agents and their attitudes towards the negotiation situations. The belief merging process in our model is divided into two stages: in the first stage, the agents' stratified belief bases are mapped to their preferences, and in the second stage a negotiation between the agents is carried out based on these preferences. In this paper, a set of rational axioms for negotiation-based belief merging is proposed and a negotiation solution satisfying the proposed axioms is introduced. Finally, the logical properties of a family of merging-by-negotiation operators are discussed.

Trong Hieu Tran, Quoc Bao Vo

From Fuzzy Cognitive Maps to Granular Cognitive Maps

In this study, we introduce a concept of a granular fuzzy cognitive map. The generic maps are regarded as graph-oriented models describing relationships among a collection of concepts (represented by nodes of the graph). The generalization of the map comes in the form of its granular connections whose design dwells upon a principle of Granular Computing such as an optimal allocation (distribution) of information granularity being viewed as an essential modeling asset. Some underlying ideas of Granular Computing are briefly revisited.

Witold Pedrycz, Wladyslaw Homenda

Bayesian Vote Weighting in Crowdsourcing Systems

In social collaborative crowdsourcing platforms, the votes which people give on the content generated by others are a very important component of a system that seeks to find the best content through collaborative action. In a crowdsourced innovation platform, people vote on innovations/ideas generated by others, which enables the system to synthesize the view of the crowd about an idea. However, in many such systems vote manipulation, or vote spamming as it is commonly known, is prevalent. In this paper we present a Bayesian mechanism for weighting the actual vote given by a user to compute an effective vote, which incorporates the voter's history of voting and also what the crowd is thinking about the value of the innovation. The model yields some interesting insights about social voting systems and new avenues for gamification.

Manas S. Hardas, Lisa Purvis
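A Bayesian-style vote weighting of the kind described above might be sketched like this (the Beta-posterior reliability model and the linear blend are illustrative assumptions, not necessarily the paper's exact mechanism):

```python
def effective_vote(actual_vote, crowd_mean, agreements, disagreements):
    # Voter reliability as the posterior mean of a Beta distribution:
    # Beta(agreements + 1, disagreements + 1), where agreements count
    # past votes that tracked the crowd's eventual consensus
    reliability = (agreements + 1) / (agreements + disagreements + 2)
    # Effective vote: blend the voter's actual vote with the crowd's
    # current estimate, trusting reliable voters more
    return reliability * actual_vote + (1 - reliability) * crowd_mean
```

A spammer with a history of disagreeing with the crowd ends up contributing an effective vote close to the crowd mean, which dampens vote spamming without discarding anyone outright.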

Recognition Task with Feature Selection and Weighted Majority Voting Based on Interval-Valued Fuzzy Sets

This paper presents a recognition algorithm with random selection of features. In the proposed classification procedure, the choice of weights is one of the main problems. We propose a weighted majority vote rule in which the weights are represented by interval-valued fuzzy sets (IVFSs); in our approach each weight has a lower and an upper membership function. The described algorithm was tested on a data set from the UCI repository. The obtained results are compared with the most popular majority vote and the weighted majority vote rule.

Robert Burduk
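A weighted majority vote with interval-valued weights can be sketched as follows (defuzzifying each interval to its midpoint is one simple choice made here for illustration; the paper's rule may differ):

```python
def interval_weighted_vote(predictions, weights):
    # predictions: class labels from the base classifiers
    # weights: per-classifier (lower, upper) interval-valued weights;
    # each classifier contributes its interval's midpoint to its label
    scores = {}
    for label, (low, high) in zip(predictions, weights):
        scores[label] = scores.get(label, 0.0) + (low + high) / 2
    # The label with the highest accumulated score wins
    return max(scores, key=scores.get)
```

Keeping the lower and upper bounds separate until the final aggregation is what lets the interval representation carry uncertainty about each classifier's competence.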

On Quadrotor Navigation Using Fuzzy Logic Regulators

In this paper a cascaded fuzzy controller system for quadrotor steering and stabilization is considered. The mathematical model of the quadrotor and its cascaded fuzzy controller were simulated using Matlab Simulink software. The fuzzy system was divided into three subsystems: for controlling the position and speed of the quadrotor, and for steering the rotation speed of the propellers. In the article a square trajectory of quadrotor flight is presented as a system test.

Boguslaw Szlachetko, Michal Lower

An Analysis of Change Trends by Predicting from a Data Stream Using Genetic Fuzzy Systems

A method to predict from a data stream of real estate sales transactions based on ensembles of genetic fuzzy systems is proposed. The approach consists in incrementally expanding an ensemble with models built over successive chunks of a data stream. The predicted prices of residential premises computed by aged component models for current data are updated according to a trend function reflecting the changes of the market. The impact of different trend functions on the accuracy of single and ensemble fuzzy models was investigated in the paper. The results proved the usefulness of the ensemble approach incorporating the correction of individual component model outputs.

Bogdan Trawiński, Tadeusz Lasota, Magdalena Smętek, Grzegorz Trawiński
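The trend-based correction of aged component models can be sketched as follows (the linear trend function and the 2% growth rate are illustrative; the paper compares several trend functions):

```python
def trend_corrected_prediction(model_output, model_age, trend):
    # Scale the output of an aged component model by a trend function
    # reflecting how much the market has moved since it was trained
    return model_output * trend(model_age)

def linear_trend(age):
    # Illustrative linear trend: e.g. 2% market growth per data chunk
    return 1.0 + 0.02 * age
```

An ensemble prediction would then average the trend-corrected outputs of all component models, so old models remain useful instead of dragging the estimate toward stale price levels.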

On C-Learnability in Description Logics

We prove that any concept in any description logic that extends ALC with some features amongst inverse roles, quantified number restrictions with numbers bounded by a constant, and local reflexivity of a role can be learnt if the training information system is good enough. That is, there exists a learning algorithm such that, for every concept C of those logics, there exists a training information system consistent with C such that applying the learning algorithm to the system results in a concept equivalent to C.
Ali Rezaei Divroodi, Quang-Thuy Ha, Linh Anh Nguyen, Hung Son Nguyen

Query-Subquery Nets

We formulate query-subquery nets and use them to create the first framework for developing algorithms for evaluating queries to Horn knowledge bases with the properties that: the approach is goal-directed; each subquery is processed only once and each supplement tuple, if desired, is transferred only once; operations are done set-at-a-time; and any control strategy can be used. Our intention is to increase efficiency of query processing by eliminating redundant computation, increasing flexibility and reducing the number of accesses to the secondary storage. The framework forms a generic evaluation method called QSQN. To deal with function symbols, we use a term-depth bound for atoms and substitutions occurring in the computation and propose to use iterative deepening search which iteratively increases the term-depth bound. In the long version [6] of the current paper we prove soundness and completeness of our generic evaluation method and show that, when the term-depth bound is fixed, the method has PTIME data complexity. In [6] we also propose two exemplary control strategies: one is to reduce the number of accesses to the secondary storage, while the other is depth-first search.

Linh Anh Nguyen, Son Thanh Cao

An Approach to Extraction of Linguistic Recommendation Rules – Application of Modal Conditionals Grounding

An approach to linguistic summarization of distributed databases is considered. It is assumed that summarizations are produced for the case of incomplete access to existing data. To cope with the problem the stored data are processed partially (sampled). In consequence summarizations become equivalent to the natural language modal conditionals with modal operators of knowledge, belief and possibility. To capture this case of knowledge processing an original theory for grounding of modal languages is applied. Simple implementation scenarios and related computational techniques are suggested to illustrate a possible utilization of this model of linguistic summarization.

Radosław P. Katarzyniak, Dominik Więcek

Nature Inspired Systems

Paraconsistent Artificial Neural Networks and AD Analysis – Improvements

This work is a sequel to our study of Alzheimer's Disease (AD) auxiliary diagnosis through EEG findings with the aid of Paraconsistent Artificial Neural Networks (PANN) [3], [6], [7], testing a new PANN architecture whose expert systems are based on the profile of the EEG examination. This profile consists of the quantification of the waves grouped into clinically normal frequency bands (delta, theta, alpha, and beta) plus the alpha/theta ratio.

Jair Minoro Abe, Helder Frederico S. Lopes, Kazumi Nakamatsu

Classification of Tuberculosis Digital Images Using Hybrid Evolutionary Extreme Learning Machines

In this work, classification of Tuberculosis (TB) digital images has been attempted using the active contour method and Differential Evolution based Extreme Learning Machines (DE-ELM). The sputum smear positive and negative images (N=100), recorded under a standard image acquisition protocol, are subjected to segmentation using the level set formulation of the active contour method. Moment features are extracted from the segmented images using Hu's and Zernike's methods. Further, the most significant moment features, derived using Principal Component Analysis and Kernel Principal Component Analysis (KPCA), are subjected to classification using DE-ELM. Results show that the segmentation method identifies the bacilli, retaining their shape in spite of artifacts present in the images. It is also observed that with the KPCA-derived significant features, DE-ELM performed with higher accuracy and faster learning speed in classifying the images.

Ebenezer Priya, Subramanian Srinivasan, Swaminathan Ramakrishnan

Comparison of Nature Inspired Algorithms Applied in Student Courses Recommendation

In this paper we present a comparison of several nature-inspired algorithms applied to the recommendation of student courses. Nature-inspired algorithms have proved to be very effective in solving many optimization problems; here we show that these techniques can also deliver good solutions to the problem of predicting the final grades students receive on completing university courses. However, to apply these algorithms we need a special representation of the problem, appropriate for each algorithm.

Janusz Sobecki

Ten Years of Weakly Universal Cellular Automata in the Hyperbolic Plane

In this paper, we outline the progression over ten years of the study of weakly universal cellular automata in the hyperbolic plane. This research reached its ultimate limit with a weakly universal cellular automaton with two states which is rotation invariant and also actually planar, a new result.

Maurice Margenstern

Optimizing Communication Costs in ACODA Using Simulated Annealing: Initial Experiments

ACODA is a distributed framework for Ant Colony Optimization based on multi-agent middleware. ACODA makes heavy use of agent message passing, so communication costs are quite high. In this paper we formulate the minimization of communication costs in ACODA as a mathematical optimization problem and propose a solution using Simulated Annealing. The paper contains our initial experimental results and conclusions.

Costin Bădică, Sorin Ilie, Mirjana Ivanović

Language Processing Systems

Robust Plagiary Detection Using Semantic Compression Augmented SHAPD

This work presents results of ongoing novel research in the areas of semantic networks, plagiarism detection, and general natural language processing. The results presented here demonstrate that semantic compression is a valuable addition to the existing methods used in plagiarism detection. The application of semantic compression boosts the efficiency of the Sentence Hashing Algorithm for Plagiarism Detection (SHAPD) and of the authors' implementation of the w-shingling algorithm. There were also tests using the traditional Vector Space Model, which demonstrated that this technique is not well suited for plagiarism detection, contrary to general beliefs. All the experiments were performed on a generally available corpus, built so that the analysis is comparable to the efforts of other research teams.

Dariusz Ceglarek, Konstanty Haniewicz, Wojciech Rutkowski

Words Context Analysis for Improvement of Information Retrieval

In this article we present an approach to improving information retrieval from large text collections using word context vectors. The vectors have been created by analyzing the English Wikipedia with the Hyperspace Analogue to Language model of word similarity. For test phrases we evaluate retrieval with direct user queries as well as retrieval with the context vectors of these queries. The results indicate that the proposed method cannot replace retrieval based on direct user queries, but it can be used for refining the search results.

Julian Szymański
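A Hyperspace Analogue to Language (HAL) style word context vector can be built from windowed co-occurrence counts, with closer neighbours weighted more heavily (this is a generic sketch of the HAL idea, not the authors' exact weighting):

```python
from collections import Counter

def context_vectors(tokens, window=2):
    # HAL-style co-occurrence: each word's vector counts its neighbours
    # within a sliding window, weighted by closeness (window - distance + 1)
    vectors = {}
    for i, word in enumerate(tokens):
        vec = vectors.setdefault(word, Counter())
        for d in range(1, window + 1):
            weight = window - d + 1
            if i - d >= 0:
                vec[tokens[i - d]] += weight
            if i + d < len(tokens):
                vec[tokens[i + d]] += weight
    return vectors
```

Query refinement then amounts to comparing the query's context vector with document word vectors (e.g. by cosine similarity) rather than matching the query terms literally.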

Mediating Accesses to Multiple Information Sources in a Multi-lingual Application

This paper describes an approach to mediating accesses to multiple information sources in a multi-lingual application. There are many information sources available on the Internet in different languages, and machine translation services are also available to allow multi-lingual access to information sources. Domain-dependent translation dictionaries are often used to make translation more appropriate. In the proposed approach, the domain-dependent translation dictionaries are represented as linked data. Using the data available from the translation dictionaries, accesses to the information sources that are represented as linked data can be customized. By applying the linked data concept, a multi-lingual application can be constructed in a flexible way.

Kazuhiro Kuwabara, Shingo Kinomura

Classification of Speech Signals through Ant Based Clustering of Time Series

Classification of speech signals in the time domain can be made through a clustering process over the time windows into which the examined speech signals are divided. Disturbances in the speech signals of patients having problems with the voice organ cause difficulties in the formation of coherent clusters of similar time windows. The quality of a clustering result can therefore be used as an indicator of non-natural disturbances in the articulation of selected phonemes by patients. In the paper, we describe a procedure based on this fact. A special ant-based algorithm is used to cluster time windows treated as time series. In this algorithm, a new local function, formulas for picking and dropping decisions, as well as some additional operations are implemented to adjust the clustering process for classification ability.

Krzysztof Pancerz, Arkadiusz Lewicki, Ryszard Tadeusiewicz, Jarosław Szkoła
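The local function and decision formulas above are the paper's own contribution; as a rough illustration of what ant-based clustering builds on, the classic Lumer–Faieta style picking/dropping probabilities can be sketched as follows (the constants `k1`, `k2` and the similarity scale `alpha` are illustrative, not the paper's values):

```python
def neighborhood_similarity(item, neighbors, alpha=0.5):
    """Local density f(i): average similarity of `item` to the items in its
    neighborhood, scaled by alpha (Lumer-Faieta style); clipped at zero."""
    if not neighbors:
        return 0.0
    s = sum(1.0 - abs(item - n) / alpha for n in neighbors) / len(neighbors)
    return max(s, 0.0)

def pick_probability(f, k1=0.1):
    # An ant is likely to pick up an item lying in a dissimilar neighborhood (low f).
    return (k1 / (k1 + f)) ** 2

def drop_probability(f, k2=0.15):
    # An ant is likely to drop an item into a similar neighborhood (high f).
    return (f / (k2 + f)) ** 2
```

Over many pick/drop steps, similar time windows accumulate in the same spatial regions, and a coherent grid layout signals undisturbed articulation.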

A Neuronal Approach to the Statistical Image Reconstruction from Projections Problem

The image reconstruction from projections problem is still the primary challenge for designers of computed tomography devices. This paper describes a new approach to the reconstruction problem that takes into consideration the statistical conditions of the signals measured in real tomographic scanners. The reconstruction problem is reformulated as an optimization problem, and a new form of the optimization loss function is proposed. The optimization process is performed using a recurrent neural network. Experimental results show that an appropriately designed neural network is able to reconstruct an image with better quality than conventional algorithms.

Robert Cierniak, Anna Lorent

Ripple Down Rules for Vietnamese Named Entity Recognition

One of the biggest problems with rule-based systems is how to avoid conflicts between rules when a new rule is added. Ripple Down Rules (RDR) is considered a good systematic approach to addressing this for classification problems. In this paper, we present a system using RDR to build a set of rules for Vietnamese Named Entity Recognition, which is important for many natural language processing tasks. Experimental results comparing the proposed approach with a standard method, in which rules are added in an ad-hoc manner, are very promising.

Dat Ba Nguyen, Son Bao Pham
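The conflict-avoidance property of RDR comes from its tree of exceptions: a new rule is attached as an exception to the rule that produced the last wrong conclusion, so existing rules are never edited. A minimal sketch of such a node (the conditions and labels here are hypothetical, not the paper's Vietnamese NER rules):

```python
class RDRNode:
    """A minimal Ripple Down Rules node. Each rule has an 'except' branch
    (consulted when the rule fires, possibly refining its conclusion) and an
    'else' branch (consulted when it does not fire)."""
    def __init__(self, cond, conclusion, on_true=None, on_false=None):
        self.cond, self.conclusion = cond, conclusion
        self.on_true, self.on_false = on_true, on_false

    def classify(self, case):
        if self.cond(case):
            # An exception rule, if it fires, overrides this rule's conclusion.
            refined = self.on_true.classify(case) if self.on_true else None
            return refined if refined is not None else self.conclusion
        return self.on_false.classify(case) if self.on_false else None
```

For example, a base rule "capitalized token → PER" can later gain the exception "followed by a city-indicating word → LOC" without touching the base rule.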

Induction of Dependency Structures Based on Weighted Projection

This paper describes a novel weighted projection method of inducing grammatical dependency structures for Polish. Using a parallel English-Polish corpus, the English side is automatically annotated with a syntactic parser and the resulting annotations are projected to Polish via word alignment links. Projected arcs are weighted according to the certainty of word alignment links used in the projection, where arcs projected via intersection links are weighted with the lowest value (corresponding to the highest certainty). Minimum spanning trees induced from such graphs are used to train a parsing model with a publicly available parser-generation system.

Alina Wróblewska, Adam Przepiórkowski

Smart Access to Big Data Storage – Android Multi-language Offline Dictionary Application

Some current mobile language-dictionary applications with large built-in dictionary data are available in offline mode. Unfortunately, they offer limited possibilities to the user, and they need an optimized solution for accessing the dictionary files at reasonable speed. Very few of them support a multi-language option in offline mode. In this paper, we present an application design suitable for big-data dictionary storage. We worked through several approaches, testing the performance of each, and finally implemented the most suitable one, also applying smart design principles. Our experimental results show that the performance of the final solution is substantially improved and that it is better suited for further development.

Erkhembayar Gantulga, Ondrej Krejcar

Social Networks and Semantic Web

STARS: Ad-Hoc Peer-to-Peer Online Social Network

Online social networks allow people to exchange information in their social lives. The most famous online social networks today, such as Facebook, Twitter, or LinkedIn, are centralized. Users do not have much control over their personal data, and every transaction around users requires the involvement of their service providers. This paper introduces a decentralized approach to building a social network based on an ad-hoc Peer-to-Peer architecture, in order to remove the constraints of a centralized provider and to provide a flexible way of connecting people. The proposed system focuses on bringing the ownership of user data back to the owner and allows users to build interest-based networks with high responsiveness and direct data exchange. The working implementation targets mobile platforms such as Android- and iOS-based smartphones and incorporates security techniques.

Quang Long Trieu, Tran Vu Pham

Social Filtering Using Social Relationship for Movie Recommendation

Traditional recommendation systems provide appropriate information to a target user after analyzing user preferences based on user profiles and rating histories. However, most people also consider their friends' opinions when they purchase products or watch movies. As social network services have recently become popular, many users obtain and exchange opinions on social networks. This information is reliable because the users have close relationships and trust each other, and most users are satisfied with it. In this paper, we propose a recommendation system based on advanced user modeling that uses the social relationships of users. For the user modeling, both direct and indirect relations are considered, and the relation weight between users is calculated using the "six degrees of Kevin Bacon" principle. The experimental results show that our social filtering method achieves better performance than a traditional user-based collaborative filtering method.

Inay Ha, Kyeong-Jin Oh, Myung-Duk Hong, Geun-Sik Jo
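The "six degrees" idea above amounts to weighting a relation by its hop distance in the friendship graph. A hypothetical weighting of this kind (the exact formula the paper uses is not given here, so the linear decay below is an assumption for illustration) can be sketched as:

```python
from collections import deque

def relation_weight(graph, u, v, max_deg=6):
    """Hypothetical 'six degrees' relation weight: BFS hop distance between
    users u and v in an adjacency-dict friendship graph; a direct friend gets
    weight 1.0, and the weight decays linearly to 0 beyond max_deg hops."""
    if u == v:
        return 1.0
    seen, frontier = {u}, deque([(u, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d >= max_deg:
            continue
        for nb in graph.get(node, ()):
            if nb == v:
                return (max_deg - d) / max_deg  # reached v in d + 1 hops
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return 0.0
```

Indirect relations thus contribute to the user model with progressively smaller weights, which matches the direct/indirect distinction in the abstract.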

An Intelligent RDF Management System with Hybrid Querying Approach

Managing large-scale RDF data is a challenging problem in the Semantic Web research domain, since it is hard to achieve efficiency and scalability while keeping the system intelligent. Various approaches, including indexing and keyword querying, have been applied to manage RDF successfully. However, none of them addresses the problem at a higher level and supports massive-scale RDF together with large-scale user requests. In this paper, we present a hybrid approach with caching and ranking to achieve efficiency, scalability, and intelligence. In our approach, a query can be answered quickly from a cache that holds the results of previous queries. The entity-based cache structure is distributed in order to serve large-scale user requests. A ranking system is added to improve the accuracy of the results returned from the cache. We present empirical evaluations of our approach.

Jangsu Kihm, Minho Bae, Sanggil Kang, Sangyoon Oh

Agent and Multi-agent Systems

Cross-Organisational Decision Support: An Agent-Enabled Approach

Decision support systems (DSS) are typically built for efficiency but fail to address flexibility in support for decision makers in modern organisations. They provide insufficient resources and inadequate support for decision makers to manage the processes for solving complex problems. They are not adaptive to user requirements. The above situation worsens when cooperation or collaboration is needed among decision makers across organisations. As technology progresses, there is an increasing opportunity for addressing these deficiencies. To address these problems we leverage the power of component independence and modelling concepts for designing a DSS framework to support this process. We develop a flexible distributed DSS architecture to realise the proposed framework using an agent paradigm. The architecture is then implemented, refined and validated with a prototype.

Ching-Shen Dong, Gabrielle Peko, David Sundaram

The Semantics of Norms Mining in Multi-agent Systems

In this paper, we present the semantics of our proposed norms mining technique for an agent to detect the norms of a group of agents in order to comply with the group’s normative protocol. We define the semantics of entities and processes that are involved in the norms mining technique. The technique entails an agent exploiting the resources of the host system, implementing data formatting and filtering, and identifying the norms’ components that contribute to the strength of the norms to identify the group’s potential norms. We then present an algorithm of the norms mining operation and demonstrate the success of the technique in detecting the potential norms.

Moamin A. Mahmoud, Mohd Sharifuddin Ahmad, Azhana Ahmad, Mohd Zaliman Mohd Yusoff, Aida Mustapha

MAScloud: A Framework Based on Multi-Agent Systems for Optimizing Cost in Cloud Computing

In this paper we propose MAScloud, a framework for optimizing both cost and performance in cloud computing systems. The framework is based on multi-agent systems (MAS) and simulation, with agents divided into two groups. The first type of agents is in charge of managing the configuration and deployment of different cloud models. The second type is in charge of generating and launching simulated cloud environments obtained from a specific cloud model. The basic idea is that a set of agents collaborates, by performing simulations, to obtain the configuration that minimizes the cost of executing a given application in a cloud computing system. The simulation of cloud computing systems has been performed using the iCanCloud simulation platform. Finally, we present performance experiments in which the proposed framework models actual cloud computing systems.

Alberto Núñez, César Andrés, Mercedes G. Merayo

A Computational Trust Model with Trustworthiness against Liars in Multiagent Systems

Trust is considered a crucial factor for agents in open distributed multiagent systems when deciding which partners to interact with. Most current trust models combine experience trust and reference trust, and use a propagation mechanism that enables agents to share their final trust with partners. These models are based on the assumption that all agents are reliable when they share their trust with others. However, they are no longer appropriate for applications of multiagent systems in which concurrent agents may not be ready to share their information, or may share wrong data by lying to their partners. In this paper, we introduce a computational model of trust that combines experience trust and reference trust. Furthermore, our model offers a mechanism that enables agents to judge the trustworthiness of referees when obtaining reference trust from various partners that may be liars.

Manh Hung Nguyen, Dinh Que Tran

Classification and Clustering Methods

Color Image Segmentation Based on the Block Homogeneity

We propose a segmentation method, based on block homogeneity, for color images with irregular texture. Segmentation is often used for object-based image retrieval, where approximating the regions is more important than determining precise region boundaries. Our approach subdivides an image into salient regions. First, the color image is divided into blocks, and the homogeneity of each block is computed using a modified color-histogram intersection. The block homogeneity is designed to take higher values in the center of a region with homogeneous color or texture, and lower values near region boundaries. The image represented by block homogeneity is a gray-scale image, and a watershed transform is used to generate a closed boundary for each region. As the watershed transform generally results in over-segmentation, region merging based on common boundary strength follows. The proposed method is applicable to segmentation in object-based image retrieval.

Chang Min Park
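The building block here, color-histogram intersection, is standard; the way it is aggregated into a per-block homogeneity score below is a simplified assumption (mean intersection with neighboring blocks), not the paper's exact modification:

```python
def histogram_intersection(h1, h2):
    """Normalized color-histogram intersection: 1.0 for identical
    distributions, 0.0 for disjoint ones."""
    denom = min(sum(h1), sum(h2))
    return sum(min(a, b) for a, b in zip(h1, h2)) / denom if denom else 0.0

def block_homogeneity(block_hist, neighbor_hists):
    """Hypothetical block-homogeneity score: mean histogram intersection of a
    block with its neighbors. High inside uniform regions (similar neighbor
    histograms), low across region boundaries (dissimilar neighbors)."""
    if not neighbor_hists:
        return 0.0
    return sum(histogram_intersection(block_hist, h)
               for h in neighbor_hists) / len(neighbor_hists)
```

The resulting per-block scores form the gray-scale image to which the watershed transform is applied.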

Finite Automata with Imperfect Information as Classification Tools

The concepts of finite automata with imperfect information and of finite automata as classification tools are the main objectives of this study. These concepts, stemming from the theory of finite automata, are introduced here to support original ideas for intelligent data processing. The new definition of finite automata with imperfect information generalizes the classical definitions of finite automata (deterministic and nondeterministic) and of fuzzy automata. It leads to models with different kinds of information imperfectness. The idea of using finite automata in classification problems stems from the way they accept input data. The idea is powerful when tied to finite automata with imperfect information, which is outlined in a case study of the classification of a certain type of time series. The case study also identifies some interesting directions for further development of the newly introduced concepts.

Wladyslaw Homenda, Witold Pedrycz

Adaptive Splitting and Selection Algorithm for Classification of Breast Cytology Images

The article presents an application of Adaptive Splitting and Selection (AdaSS) classifier in the medical decision support system for breast cancer diagnosis. Apart from the canonical malignant versus non-malignant problem we introduced a third class - fibroadenoma, which is a benign tumor of the breast often occurring in women. Medical images are delivered by the Regional Hospital in Zielona Góra, Poland. For the process of segmentation and feature extraction a mixture of Gaussians is used. AdaSS is a combined classifier, based on an evolutionary splitting of feature space into clusters. To increase the overall accuracy of the classification we propose to add a feature selection step to the optimization criterion of the native AdaSS algorithm. Experimental investigation proves that the introduced method is more accurate than previously used classification approaches.

Bartosz Krawczyk, Paweł Filipczuk, Michał Woźniak

An Approach to Determine the Number of Clusters for Clustering Algorithms

When clustering a dataset, the right number k of clusters is often not obvious, and choosing k automatically is a complex problem. This paper first reviews existing methods for selecting the number of clusters. Then, an improved algorithm for learning k while clustering is presented. The algorithm is based on coefficients α and β that affect this selection. Meanwhile, a new measure is suggested to confirm the membership of clusters. Finally, we evaluate the computational complexity of the algorithm and apply it to real datasets; the results show its efficiency.

Dinh Thuan Nguyen, Huan Doan
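The paper's α and β coefficients are not specified in the abstract, so the sketch below uses a generic elbow heuristic instead: run k-means for increasing k and stop once adding a cluster no longer reduces the total within-cluster error by a meaningful fraction. It is an illustration of the problem, not the authors' algorithm:

```python
import random

def kmeans(points, k, iters=30, seed=0):
    """Plain 1-D k-means; returns the sum of squared errors (SSE)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: (p - centers[i]) ** 2)].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sum(min((p - c) ** 2 for c in centers) for p in points)

def choose_k(points, k_max=6, alpha=0.05):
    """Elbow heuristic (alpha here is a generic improvement threshold, not the
    paper's coefficient): stop at the first k whose extra cluster fails to
    reduce SSE by at least alpha of the k=1 baseline."""
    sse = [kmeans(points, k) for k in range(1, k_max + 1)]
    if sse[0] == 0:
        return 1  # all points identical
    for k in range(1, k_max):
        if (sse[k - 1] - sse[k]) / sse[0] < alpha:
            return k
    return k_max
```

On data with two well-separated groups, the SSE drops sharply from k=1 to k=2 and then flattens, so the heuristic returns 2.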

Fuzzy Classification Method in Credit Risk

The paper presents FCMCR, a fuzzy classification method for credit risk in the banking system. Our implementation uses fuzzy rules to evaluate the similarity between objects, as well as membership degrees of features with respect to each class. The method is inspired by fuzzy classification methods and was tested using loan data from a large bank. Our results show that the proposed method is competitive with other approaches reported in the literature.

Hossein Yazdani, Halina Kwasnicka

Preventing Attacks by Classifying User Models in a Collaborative Scenario

There are several methods to assess the capability of an organization to prevent attacks in a potentially compromised collaborative scenario.

In this paper we explore a methodology based on probabilistic information. We assume that we are provided with a probabilistic user model, that is, a model denoting the probability that the entity interacting with the system takes each available choice.

We show how to build these models using log files. Moreover, we define the meaning of good, bad, and suspicious behavior. Finally, we present a mechanism to share the information present in each node of the collaborative system.

César Andrés, Alberto Núñez, Manuel Núñez

Hierarchical Clustering through Bayesian Inference

Inheritance, Retention, Variance Hierarchical Clustering, based on the Tree-Structured Stick Breaking for Hierarchical Data method, which uses nested stick-breaking processes to allow for trees of unbounded width and depth, is proposed. The stress is put on three requirements, the most important being that groups located lower in the hierarchy are more specific. The paper contains a description of the method and an experimental comparison of both methods.

Michal Spytkowski, Halina Kwasnicka
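The stick-breaking construction underlying the method above can be sketched in isolation: each weight takes a Beta-distributed fraction of the stick that remains. The tree-structured version applies this both across a node's children (width) and along root-to-leaf paths (depth); the flat version below, with assumed Beta(1, 1) draws, only illustrates the core mechanism:

```python
import random

def stick_breaking_weights(n, a=1.0, b=1.0, seed=42):
    """First n weights of a stick-breaking process:
    pi_i = v_i * prod_{j<i} (1 - v_j), with v_j ~ Beta(a, b).
    The weights are nonnegative and sum to less than 1, with the
    remainder shared among the infinitely many unseen components."""
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(n):
        v = rng.betavariate(a, b)
        weights.append(v * remaining)
        remaining *= 1.0 - v
    return weights
```

Because the remaining stick shrinks geometrically, a moderate n already captures almost all of the probability mass, which is what makes trees of unbounded width tractable.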

An Approach to Improving Quality of Crawlers Using Naïve Bayes for Classifier and Hyperlink Filter

Nowadays, most search engines rely on keywords provided by users. However, keywords may not be sufficiently representative of the main topic of a web page. When searching for a topic, users express their desired topic in terms of keywords, and keyword-based search engines return pages that contain the keywords even when these pages are not about the topic. This limits the efficiency of these engines, as they may return undesirable results.

In this paper, we present an approach to improving the quality of search engines by focusing on web pages related to specific topics. Our system includes three main components: a crawler for gathering web pages, a classifier for classifying web pages by topic, and a hyperlink filter (or distiller) for filtering hyperlinks. We propose Naïve Bayes algorithms for the classifier and the distiller to enhance the accuracy of the system. We also implement and examine the efficiency of our system by gathering web pages on two topics: Artificial Intelligence and Motorcycles. The results show that our crawler achieves efficiency improvements over crawlers that search by keywords.

Huu-Thien-Tan Nguyen, Duy-Khanh Le
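The classifier component can be illustrated with a minimal multinomial Naïve Bayes with Laplace smoothing, the kind of model a focused crawler consults before following a fetched page's links. This is a textbook sketch, not the authors' implementation; the tiny training corpus is invented for the example:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Minimal multinomial Naive Bayes text classifier with Laplace smoothing."""

    def fit(self, docs, labels):
        self.counts = defaultdict(Counter)   # per-class word counts
        self.class_totals = Counter(labels)  # class priors (as counts)
        for doc, y in zip(docs, labels):
            self.counts[y].update(doc.lower().split())
        self.vocab = {w for c in self.counts.values() for w in c}
        return self

    def predict(self, doc):
        n = sum(self.class_totals.values())

        def log_post(y):
            # log prior + smoothed log likelihoods of the document's words
            lp = math.log(self.class_totals[y] / n)
            total = sum(self.counts[y].values()) + len(self.vocab)
            for w in doc.lower().split():
                lp += math.log((self.counts[y][w] + 1) / total)
            return lp

        return max(self.class_totals, key=log_post)
```

A crawler would only enqueue the out-links of pages predicted to be on-topic, which is how focused crawling avoids the keyword-matching failure mode described above.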

Modelling and Optimization Techniques for Business Intelligence

Network Intrusion Detection Based on Multi-Class Support Vector Machine

In the field of network security, Intrusion Detection Systems (IDSs) continually require more research to improve system performance. Multi-Class Support Vector Machines (MSVM) have been widely used for network intrusion detection to perform multi-class classification of intrusions. In this paper, we first consider the MSVM model introduced by J. Weston and C. Watkins, which differs from classical approaches to MSVM. Further, as an alternative approach, we use a pseudo-norm proposed by Y. Guermeur instead of the norm in the previous model. Both models are applied to IDSs and tested on the KDD Cup 1999 dataset, a benchmark in network intrusion detection research. Computational results show the efficiency of both models for IDSs, in particular the alternative model with the pseudo-norm.
Anh Vu Le, Hoai An Le Thi, Manh Cuong Nguyen, Ahmed Zidna

Solving Nurse Rostering Problems by a Multiobjective Programming Approach

In this paper, we present a multiobjective programming approach for solving nurse rostering problems. We first formulate the nurse rostering problem as a multiobjective mixed 0-1 linear program, and then prove that finding an efficient solution of this program reduces to solving a single mixed 0-1 linear problem. Two benchmark problems are considered and computational experiments are presented.

Viet Nga Pham, Hoai An Le Thi, Tao Pham Dinh

Conditional Parameter Identification with Asymmetrical Losses of Estimation Errors

In many scientific and practical tasks, the classical concepts for parameter identification are satisfactory and generally applied with success, although many specialized problems necessitate methods created under specifically defined assumptions and conditions. This paper investigates a method of parameter identification for the case where losses resulting from estimation errors can be described in polynomial form, with an additional asymmetry representing the different consequences of under- and overestimation. Most importantly, the method presented here considers the conditionality of this parameter, which in practice means its significant dependence on other quantities whose values can be obtained metrologically. To solve the problem in this form, the Bayes approach was used, allowing a minimum expected value of losses to be achieved. The methodology is based on the nonparametric technique of statistical kernel estimators, which frees the resulting procedure from assumptions on the forms of the probability distributions characterizing both the parameter under investigation and the conditioning quantities. As a result, a ready-to-use algorithm is presented here.

Piotr Kulczycki, Malgorzata Charytanowicz
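The effect of asymmetric losses on a Bayes point estimate can be shown with a small numerical sketch. The quadratic loss with sign-dependent weights below is one instance of the polynomial asymmetric losses the abstract mentions, and the grid search stands in for the paper's kernel-estimator machinery:

```python
def asymmetric_loss(err, under=1.0, over=3.0):
    """Quadratic loss with asymmetry: an overestimate (err > 0) costs
    `over` per squared unit, an underestimate costs `under`."""
    return (over if err > 0 else under) * err * err

def bayes_estimate(samples, under=1.0, over=3.0, grid=2000):
    """Numerical sketch of the Bayes-optimal point estimate: the value
    minimizing the mean asymmetric loss over posterior samples. With
    over > under, the estimate is pulled below the sample mean, since
    overestimation is penalized more heavily."""
    lo, hi = min(samples), max(samples)
    cand = [lo + (hi - lo) * i / grid for i in range(grid + 1)]
    return min(cand,
               key=lambda t: sum(asymmetric_loss(t - x, under, over)
                                 for x in samples))
```

For samples {0, 1, 2} the symmetric case recovers the mean 1.0, while tripling the overestimation cost shifts the optimum down to 0.6 (where 10t − 6 = 0 on the middle interval).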

