
About this book

These transactions publish research in computer-based methods of computational collective intelligence (CCI) and their applications in a wide range of fields such as the semantic web, social networks, and multi-agent systems. TCCI strives to cover new methodological, theoretical and practical aspects of CCI understood as the form of intelligence that emerges from the collaboration and competition of many individuals (artificial and/or natural). The application of multiple computational intelligence technologies, such as fuzzy systems, evolutionary computation, neural systems, consensus theory, etc., aims to support human and other collective intelligence and to create new forms of CCI in natural and/or artificial systems. This 14th issue contains 9 carefully selected and thoroughly revised contributions.

Table of Contents


A Two-Armed Bandit Collective for Hierarchical Examplar Based Mining of Frequent Itemsets with Applications to Intrusion Detection

Over the last decades, frequent itemset mining has become a major area of research, with applications including indexing and similarity search, as well as mining of data streams, web, and software bugs. Although several efficient techniques for generating frequent itemsets with a minimum frequency have been proposed, the number of itemsets produced is in many cases too large for effective usage in real-life applications. Indeed, the problem of deriving frequent itemsets that are both compact and of high quality, remains to a large degree open.
In this paper we address the above problem by posing frequent itemset mining as a collection of interrelated two-armed bandit problems. We seek to find itemsets that frequently appear as subsets in a stream of itemsets, with the frequency being constrained to support granularity requirements. Starting from a randomly or manually selected examplar itemset, a collective of Tsetlin automata based two-armed bandit players – one automaton for each item in the examplar – learns which items should be included in the mined frequent itemset. A novel reinforcement scheme allows the bandit players to learn this in a decentralized and on-line manner by observing one itemset at a time. By invoking the latter procedure recursively, a progressively more fine granular summary of the itemset stream is produced, represented as a hierarchy of frequent itemsets.
The proposed scheme is extensively evaluated using both artificial data as well as data from a real-world network intrusion detection application. The results are conclusive, demonstrating an excellent ability to find frequent itemsets. Also, computational complexity grows merely linearly with the cardinality of the examplar itemset. Finally, the hierarchical collections of frequent itemsets produced for network intrusion detection are compact, yet accurately describe the different types of network traffic present.
Vegard Haugland, Marius Kjølleberg, Svein-Erik Larsen, Ole-Christoffer Granmo
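
The abstract does not spell out the reinforcement scheme, so the following is a minimal, hypothetical sketch: each item of the exemplar gets a classical two-action Tsetlin automaton that is rewarded for including items present in the observed itemset and for excluding items that are absent. The names `TsetlinAutomaton` and `mine`, the state depth, and the simplified reward rule are illustrative assumptions, not the paper's actual scheme.

```python
import random

class TsetlinAutomaton:
    """Two-action Tsetlin automaton: states 1..n select 'exclude',
    states n+1..2n select 'include'."""
    def __init__(self, n=8):
        self.n = n
        self.state = random.randint(1, 2 * n)  # random initial state

    def action(self):
        return "include" if self.state > self.n else "exclude"

    def reward(self):
        # move deeper into the current action's half of the state space
        self.state = min(self.state + 1, 2 * self.n) if self.state > self.n \
            else max(self.state - 1, 1)

    def penalize(self):
        # move one state toward the opposite action
        self.state += -1 if self.state > self.n else 1

def mine(exemplar, stream, rounds=200):
    """Learn which items of the exemplar form a frequent itemset by
    observing one itemset from the stream at a time."""
    automata = {item: TsetlinAutomaton() for item in exemplar}
    for _ in range(rounds):
        observed = random.choice(stream)
        for item, automaton in automata.items():
            present = item in observed
            if automaton.action() == "include":
                automaton.reward() if present else automaton.penalize()
            else:
                automaton.reward() if not present else automaton.penalize()
    return {item for item, a in automata.items() if a.action() == "include"}
```

With enough observations the automata collectively settle on the items that co-occur frequently; the paper's hierarchical variant would invoke such a procedure recursively on the mined itemset.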

Semantic Compression for Text Document Processing

Ongoing research on novel methods and tools for Natural Language Processing tasks has resulted in the design of a semantic compression mechanism. Semantic compression is a technique that allows for the correct generalization of terms in a given context; thanks to this generalization, a common thought can be detected. The rules governing the generalization process are based on a data structure referred to as a domain frequency dictionary. Once the domain of a given text fragment has been established, disambiguating among possibly many hypernyms becomes a feasible task. Semantic compression, being an informed generalization, is made possible by the use of semantic networks as a knowledge representation structure. It is worth noting that semantic compression allows for a number of improvements over already established Natural Language Processing techniques. These improvements, along with a detailed discussion of the elements of the algorithms and data structures necessary to make semantic compression a viable solution, are the core of this work. Semantic compression can be applied in a variety of scenarios, e.g. in the detection of plagiarism. With increasing effort being spent on developing semantic compression, new domains of application have been discovered. What is more, semantic compression itself has evolved and has been refined through the introduction of new solutions that boost disambiguation efficiency. Thanks to the remodeling of existing data sources to suit the algorithms enabling semantic compression, it has become possible to use semantic compression as a basis for automata that, by exploring hypernym-hyponym and synonym relations, discover new concepts to be included in the knowledge representation structures.
Dariusz Ceglarek
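
The core generalization step can be illustrated with a toy sketch: a term that is rare in the domain frequency dictionary is replaced by its most specific hypernym that is frequent enough in that domain. The hypernym relation, frequencies, and the `min_freq` threshold below are invented illustrative data, not the paper's dictionaries.

```python
# Toy hypernym relation and a toy domain frequency dictionary (illustrative).
HYPERNYMS = {"beagle": "dog", "poodle": "dog", "dog": "animal", "sparrow": "bird"}

DOMAIN_FREQ = {
    "dog": 120, "animal": 300, "bird": 80,
    "beagle": 3, "poodle": 2, "sparrow": 1,
}

def compress(tokens, min_freq=10):
    """Generalize each rare term to its most specific sufficiently-frequent
    hypernym; terms without a hypernym chain are left unchanged."""
    out = []
    for tok in tokens:
        term = tok
        # climb the hypernym chain until the term is frequent in the domain
        while DOMAIN_FREQ.get(term, 0) < min_freq and term in HYPERNYMS:
            term = HYPERNYMS[term]
        out.append(term)
    return out
```

For example, `compress(["my", "beagle", "and", "poodle"])` maps both rare breed names to the shared, frequent hypernym "dog", which is exactly the generalization that lets a common thought be detected across differently worded texts.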

On Stigmergically Controlling a Population of Heterogeneous Mobile Agents Using Cloning Resource

Cloning can greatly enhance the performance of networked systems that make use of mobile agents to patrol or service the nodes within. Uncontrolled cloning can however lead to generation of a large number of such agents which may affect the network performance adversely. Several attempts to control a population of homogeneous agents and their clones have been made. This paper describes an on-demand population control mechanism for a heterogeneous set of mobile agents along with an underlying application for their deployment as service providers in a networked robotic system. The mobile agents stigmergically sense and estimate the network conditions from within a node and control their own cloning rates. These agents also use a novel concept called the Cloning Resource which controls their cloning behaviour. The results, obtained from both simulation and emulation presented herein, portray the effectiveness of deploying this mechanism in both static and dynamic networks.
W. Wilfred Godfrey, Shashi Shekhar Jha, Shivashankar B. Nair
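
The population control idea can be sketched with a toy model: an agent stigmergically senses demand at its current node and spends a finite cloning resource, halved between parent and clone, so that a lineage's total cloning capacity stays bounded. The names, the demand estimate, and the thresholds below are illustrative assumptions, not the paper's actual mechanism.

```python
def should_clone(pending_requests, agents_seen, resource, threshold=2.0):
    """Clone only when the sensed demand per agent exceeds the threshold
    and the agent still holds at least one unit of cloning resource."""
    demand = pending_requests / max(agents_seen, 1)
    return demand > threshold and resource >= 1.0

class Agent:
    def __init__(self, resource=4.0):
        self.resource = resource  # cloning resource carried by this agent

    def maybe_clone(self, pending_requests, agents_seen):
        """Stigmergic decision: sense local conditions, clone if warranted.
        Halving the resource on each clone caps the lineage's growth."""
        if should_clone(pending_requests, agents_seen, self.resource):
            self.resource /= 2
            return Agent(self.resource)
        return None
```

Because the resource only ever shrinks, uncontrolled population explosions of the kind the abstract warns about are ruled out by construction, while high local demand still triggers on-demand cloning.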

On the Existence and Heuristic Computation of the Solution for the Commons Game

It is well known that Game Theory can be used to capture and model economic strategies, psychological and social dilemmas, and the exploitation of the environment by human beings. Many artificial games studied in Game Theory can be used to understand the main aspects of humans using or misusing the environment. They can serve as tools by which we can define the aggregate behavior of humans, which, in turn, is often driven by “short-term”, perceived costs and benefits. The Commons Game is a simple and concise game that elegantly formulates the different behaviors of humans toward the exploitation of resources (also known as “commons”) from a game-theoretic perspective. The game is intrinsically hard because it is non-zero-sum and involves multiple players, each of whom can use any one of a set of strategies; it can also involve potential competitive and cooperative strategies. In the Commons Game, an ensemble of approaches towards the exploitation of the commons is modeled by colored cards. This paper shows, in a pioneering manner, the existence of an optimal solution to the Commons Game, and demonstrates a heuristic computation of this solution. To do this, we consider the cases when, with some probability, a player is aware of the approach (color) which the other players will use in the exploitation of the commons. We then investigate the problem of determining the best probability with which a specific player should play each color in order to maximize his ultimate score. Our solution to this problem is a heuristic algorithm which determines (locates in the corresponding space) feasible probability values to be used so as to obtain the maximum average score. This project has also involved the corresponding implementation of the game, whose output enables the user to visualize the details of the new algorithm.
Rokhsareh Sakhravi, Masoud T. Omran, B. John Oommen
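
As a hedged illustration of searching probability values over colors, the sketch below hill-climbs a mixed strategy on a toy two-parameter payoff in which playing "red" (exploitation) pays more but degrades the commons in proportion to how often it is played. The payoff function, color set, and step sizes are invented for illustration and are not the game's actual scoring rules.

```python
import random

def expected_score(p_red, p_green):
    """Toy payoff: exploitation (red) pays well but spoils the commons as
    its frequency grows; cooperation (green) pays a steady smaller amount.
    The remaining probability mass (a passive color) pays nothing."""
    return 10 * p_red * (1 - p_red) + 4 * p_green

def hill_climb(steps=5000, step_size=0.02, seed=0):
    """Locate feasible probability values maximizing the average score by
    random local perturbations, keeping only strict improvements."""
    rng = random.Random(seed)
    p_red, p_green = 1 / 3, 1 / 3          # start from a uniform mixed strategy
    best = expected_score(p_red, p_green)
    for _ in range(steps):
        nr = min(max(p_red + rng.uniform(-step_size, step_size), 0.0), 1.0)
        ng = min(max(p_green + rng.uniform(-step_size, step_size), 0.0),
                 1.0 - nr)                  # keep the probabilities feasible
        score = expected_score(nr, ng)
        if score > best:
            best, p_red, p_green = score, nr, ng
    return p_red, p_green, best
```

For this toy payoff the analytic optimum is p_red = 0.3 with all remaining mass on green, giving an expected score of 4.9, so the climb can be checked against a known target.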

Method of Constructing the Cognitive State for Context-Dependent Utterances in the Form of Conditionals

During the last few years we have been working on a theory for the grounding of semiotic symbols in artificial agents. So far the theory allows for the grounding of statements that summarize a speaker’s knowledge in general, without limiting the temporal or spatial denotations of the utterance. The aim of this work is to propose an exemplary method for the grounding of context-dependent utterances. The implementation of this method should allow for the grounding of conditional statements describing the current (most recent) environmental observation. To solve this task, a method for constructing a context-dependent model of a cognitive state is proposed. An agent’s knowledge is partitioned into a few disjoint subsets; this division is the result of a classification of past environmental observations. The classification process utilizes known data exploration and feature selection methods, which pick out the environmental observations that seem relevant to the described situation.
Grzegorz Skorupa
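
The partitioning step can be sketched very simply: once feature selection has judged some features relevant, past observations are split into disjoint subsets keyed by the values of those features. The function name and the dictionary-based observation format are illustrative assumptions, not the paper's representation.

```python
def partition_observations(observations, relevant_features):
    """Split past observations (dicts of feature -> value) into disjoint
    subsets keyed by the values of the features judged relevant."""
    subsets = {}
    for obs in observations:
        key = tuple(obs[f] for f in relevant_features)
        subsets.setdefault(key, []).append(obs)
    return subsets
```

Each resulting subset then serves as the context against which a context-dependent utterance, such as a conditional about the current observation, would be grounded.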

Conflict Compensation, Redundancy and Similarity in DataBases Federation

Integration of several databases is a complex process in which one must determine whether two databases, with their entities and relationships, are similar, strongly equivalent, or weakly equivalent. Databases that are strongly equivalent become identical under a suitable renaming of entities, without any change to the relationships. In this situation, a federation of databases may contain many strongly redundant databases that can be collapsed into a single prototype. Weakly equivalent, or weakly redundant, databases are more difficult to discover because both the entities and the relationships change across the federation. For example, images on a sphere are weakly equivalent to their projected images, which are local distortions of the originals. In this paper we give an algorithm to discover weakly redundant databases, and show how to create local compensations that transform all the different databases into a single prototype. This is a useful method for resolving conflicts among agents viewed as databases.
Germano Resconi

Extended Learning Method for Designation of Co-operation

The aim of the paper is to present a new machine learning method for determining intelligent co-operation during project realization. The method uses a local optimization task of a special form and is based on a learning idea. Additionally, the information gathered during the search process is used to prune non-perspective solutions. The paper presents a formal approach to the creation of constructive algorithms that use sophisticated local optimization and are based on a formal definition of a multistage decision process. It also proposes a general conception of creating local optimization tasks for different problems, as well as a conception of modifying the local optimization task on the basis of acquired information. To illustrate these conceptions, a learning algorithm for an NP-hard scheduling problem is presented, together with the results of computer experiments.
Edyta Kucharska, Ewa Dudek-Dyduch
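
The pruning idea (discarding partial solutions that the information gathered so far shows cannot improve on the best complete solution) can be sketched for a toy single-machine scheduling problem minimizing total weighted completion time. This illustrates the general multistage-decision principle, not the authors' algorithm; the task format is an invented example.

```python
def schedule(tasks):
    """Constructive multistage search: extend a partial schedule one task at
    a time; prune any branch whose accumulated cost already reaches the best
    complete schedule found so far (a 'non-perspective' solution).
    tasks: dict mapping task name -> (duration, weight)."""
    best = {"cost": float("inf"), "order": None}

    def extend(remaining, order, time, cost):
        if cost >= best["cost"]:
            return                      # prune: this branch cannot improve
        if not remaining:
            best["cost"], best["order"] = cost, order
            return
        for t in list(remaining):
            dur, weight = tasks[t]
            extend(remaining - {t}, order + [t],
                   time + dur, cost + weight * (time + dur))

    extend(frozenset(tasks), [], 0, 0)
    return best["order"], best["cost"]
```

Because weighted completion cost only grows as a schedule is extended, the partial cost is a valid bound, so pruning never discards the optimum; the acquired information (the best cost so far) simply shrinks the search tree.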

Methods of Prediction Improvement in Efficient MPC Algorithms Based on Fuzzy Hammerstein Models

Two methods of prediction improvement in Model Predictive Control (MPC) algorithms utilizing fuzzy Hammerstein models are proposed in the paper. The first consists in the iterative adjustment of the prediction; the second in the utilization of disturbance measurements. Though the methods can significantly improve control system operation, they modify the prediction in such a way that it is still described by relatively simple analytical formulas. Thus, the prediction has a form such that the MPC algorithms using it can be formulated as numerically efficient quadratic optimization problems. The efficiency of the MPC algorithms based on the prediction utilizing the proposed improvements is demonstrated in an example control system of a nonlinear plant with a significant time delay.
Piotr M. Marusak
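
As a rough illustration of why Hammerstein-model MPC can remain a simple quadratic problem, the sketch below linearizes an assumed static input nonlinearity around the previous control value, so that a one-step quadratic cost has a closed-form minimizer. The plant, the nonlinearity, and the tuning values are invented; the paper's fuzzy models, horizons, and prediction-improvement methods are considerably more elaborate.

```python
def static_nonlinearity(u):
    """Assumed Hammerstein input nonlinearity: a smooth saturation."""
    return u / (1 + abs(u))

def mpc_step(y, y_sp, u_prev, a=0.8, b=0.5, lam=0.1):
    """One step of MPC for the scalar Hammerstein model y+ = a*y + b*f(u).
    Linearizing f around u_prev makes the cost
        J = (y+ - y_sp)**2 + lam * (u - u_prev)**2
    quadratic in u, with a closed-form minimizer."""
    eps = 1e-6
    # local gain of the nonlinearity at the previous control value
    k = (static_nonlinearity(u_prev + eps)
         - static_nonlinearity(u_prev - eps)) / (2 * eps)
    g = b * k
    # predicted tracking error if the control were left unchanged
    c = a * y + b * static_nonlinearity(u_prev) - y_sp
    # dJ/d(delta_u) = 0  =>  delta_u = -g*c / (g**2 + lam)
    return u_prev - g * c / (g * g + lam)
```

Iterating `mpc_step` against the model drives the output toward the setpoint; the key point is that each step solves only a quadratic problem, which is what keeps such algorithms numerically efficient.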

Visualization of Semantic Data Based on Selected Predicates

Due to the spread of semantic technologies, the volume of datasets described in the Resource Description Framework (RDF) is growing dynamically. The RDF framework is suitable for integrating data from heterogeneous sources; however, the resulting datasets can be larger and more complex than before, and new tools are needed to analyze them. In this paper, we present a method which aims to help in understanding the structure of semantic datasets. It can reduce the size and complexity of a dataset while preserving selected parts of it. The method consists of a filtering phase and a compaction phase, implemented according to the MapReduce distributed programming model in order to handle large volumes of data. The result of the method can be visualized as a labeled directed graph that gives an overview of the structure of the dataset. It may reveal hidden connections or various problems related to the completeness and correctness of the data.
Gábor Rácz, Gergő Gombos, Attila Kiss
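
The filtering and compaction phases map naturally onto MapReduce. Below is a minimal single-process stand-in (hypothetical function names, not the authors' implementation): the map step emits only triples whose predicate the user selected, and the reduce step collapses each predicate's edges into one summary edge between node groups, labeled with its multiplicity.

```python
def map_filter(triple, selected_predicates):
    """Map: emit only triples whose predicate was selected by the user."""
    s, p, o = triple
    if p in selected_predicates:
        yield (p, (s, o))

def reduce_compact(predicate, edges):
    """Reduce: collapse all edges sharing a predicate into one summary edge
    between the group of sources and the group of targets."""
    sources = frozenset(s for s, _ in edges)
    targets = frozenset(o for _, o in edges)
    return (sources, predicate, targets, len(edges))

def run(triples, selected):
    """A minimal, single-process stand-in for the MapReduce job."""
    groups = {}
    for t in triples:
        for key, value in map_filter(t, selected):
            groups.setdefault(key, []).append(value)
    return [reduce_compact(p, edges) for p, edges in groups.items()]
```

The summary edges form exactly the kind of small labeled directed graph the abstract describes: far fewer nodes and edges than the original dataset, yet enough structure to spot unexpected connections.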

