## About This Book

These transactions publish research in computer-based methods of computational collective intelligence (CCI) and their applications in a wide range of fields such as the Semantic Web, social networks, and multi-agent systems. TCCI strives to cover new methodological, theoretical and practical aspects of CCI understood as the form of intelligence that emerges from the collaboration and competition of many individuals (artificial and/or natural). The application of multiple computational intelligence technologies, such as fuzzy systems, evolutionary computation, neural systems, consensus theory, etc., aims to support human and other collective intelligence and to create new forms of CCI in natural and/or artificial systems. This tenth issue contains 13 carefully selected and thoroughly revised contributions.

## Table of Contents

### Markov Chain Based Analysis of Agent-Based Immunological System

In the course of the paper we recall the Markov model for the immunological Evolutionary Multi-Agent System (iEMAS). The model makes it possible to study the dynamic features of the computation and deepens the understanding of the considered class of systems. The main contribution of the paper is a draft proof of the ergodicity of the Markov chain modelling iEMAS dynamics.
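For context, the textbook criterion that such an ergodicity argument ultimately establishes, stated here generically rather than in the paper's notation, is irreducibility plus aperiodicity, which for a finite chain yields a unique stationary distribution with geometric convergence:

```latex
% A Markov chain (X_t) on a finite state space S with transition matrix P
% is ergodic iff it is irreducible and aperiodic; then there exists a
% unique stationary distribution \pi, approached geometrically fast:
\exists!\, \pi : \ \pi P = \pi, \qquad
\forall\, i, j \in S : \ \bigl| (P^t)_{ij} - \pi_j \bigr| \le C \rho^t
\quad \text{for some } C > 0,\ 0 < \rho < 1 .
```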

Aleksander Byrski, Robert Schaefer, Maciej Smołka

### Towards Dynamic Orchestration of Semantic Web Services

The Semantic Web together with Web services technologies enables new scenarios in which machines use the Web to provide intelligent services in an autonomous way. The orchestration of Semantic Web services can now be defined from an abstract perspective, where their formal semantics can be exploited by software agents to replace human input. This paper tackles the more difficult use case, automatic composition, providing a complete solution to create and manage service processes in a semantically interoperable environment.

Domenico Redavid, Stefano Ferilli, Floriana Esposito

### Agent-Based Framework Facilitating Component-Based Implementation of Distributed Computational Intelligence Systems

The paper presents a framework particularly suitable for the design of a certain class of distributed computational intelligence systems based on the agent paradigm. The starting point is a formalism built on the notions of algorithms and dependencies, which allows for the formulation of the system's functional integrity conditions. Next, the technological assumptions of the AgE framework are presented, and a direct mapping between the formalism and the implementation structure of the framework is discussed. The approach assumes that component techniques facilitate the realization of a particular system in such a way that algorithm dependencies are represented as contracts, which support the verification of the system's functional integrity. Selected technical aspects of the framework design illustrate the considerations of the paper.

Kamil Piętak, Marek Kisiel-Dorohinicki

### A Hardware Collective Intelligence Agent

In recent years, several powerful computing models, including grid, cloud, and Internet computing, have emerged. These state-of-the-art paradigms offer numerous benefits to large-scale and compute-intensive applications such as data analysis and decision modelling in enterprise systems. Many of these applications use collective intelligence techniques, either out of necessity in a widely dispersed environment or to harness the processing power and aggregated knowledge of a distributed system. Most current implementations of the intelligent agent model are software based. This work proposes the use of a hardware collective intelligence agent in lieu of the software version, in order to achieve flexibility, versatility, and scalability. Housed on a single chip, the hardware agent is useful for emulating collective intelligence models and for deployment in realistic collaborative settings. The rationale for using a hardware agent, its advantages, and its performance are presented, discussed, and analysed.

Kin Fun Li, Darshika G. Perera

### SimISpace2: A Simulation Platform for Exploring Strategic Knowledge Management Processes

SimISpace2 is an agent-based simulation environment designed to simulate strategic knowledge management processes, in particular knowledge flows and knowledge-based agent interactions. It serves as a general knowledge management engine that, through a user-friendly graphical interface, can be adapted to a wide range of knowledge-related applications. Its purpose is to improve our understanding of how knowledge is generated, diffused, internalized and managed by individuals and organizations, under both collaborative and competitive learning conditions.

Martin Ihrig

### Cloud Search Engine for IaaS

Cloud-based online storage enables the storage of massive amounts of data. In such systems, a full-text search engine is essential for finding documents. In this paper, we propose a distributed search engine suitable for searching a cloud. In our previous work, we developed a distributed search engine, the cooperative search engine (CSE). We now extend the CSE to search clouds. In a cloud, elasticity and reliability are important; we achieve both by employing consistent hashing for the distributed index files in the cloud. In this paper, we describe the improved CSE architecture and its implementation for cloud search.
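The elasticity mechanism the abstract names, consistent hashing over distributed index files, can be sketched generically. The class and method names below are invented for illustration; this is not the CSE implementation.

```python
import hashlib
from bisect import bisect_right

def _hash(key: str) -> int:
    # Map a key onto the ring using MD5 (any uniform hash works).
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Assigns index files to search nodes; adding or removing a node
    only remaps the keys between it and its predecessor on the ring."""

    def __init__(self, nodes=(), replicas=100):
        self.replicas = replicas          # virtual nodes per physical node
        self._ring = []                   # sorted list of (point, node)
        for node in nodes:
            self.add_node(node)

    def add_node(self, node: str):
        for i in range(self.replicas):
            self._ring.append((_hash(f"{node}#{i}"), node))
        self._ring.sort()

    def remove_node(self, node: str):
        self._ring = [(p, n) for p, n in self._ring if n != node]

    def node_for(self, key: str) -> str:
        # A key belongs to the first virtual node clockwise from its point.
        point = _hash(key)
        points = [p for p, _ in self._ring]
        idx = bisect_right(points, point) % len(self._ring)
        return self._ring[idx][1]
```

The property that matters for elasticity: when a node leaves, only the keys it held are remapped; every key previously held by a surviving node keeps its assignment.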

Yuuta Ichikawa, Minoru Uehara

### Data Scheduling in Data Grids and Data Centers: A Short Taxonomy of Problems and Intelligent Resolution Techniques

Data-aware scheduling in today's large-scale heterogeneous environments has become a major research issue. Data Grids (DGs) and Data Centers arise quite naturally to support the needs of scientific communities to share, access, process, and manage large, geographically distributed data collections. Data scheduling, although similar in nature to grid scheduling, gives rise to a new family of optimization problems. New requirements, such as data transmission, decoupling of data from processing, data replication, data access, and security, must be added to the scheduling problem and form the basis for the definition of a whole taxonomy of data scheduling problems. In this paper we briefly survey the state of the art in the domain. We exemplify the model and methodology for the case of data-aware independent job scheduling in computational grids and present several heuristic resolution methods for the problem.
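The flavour of heuristic surveyed here can be illustrated with a toy data-aware variant of the classic min-min rule. The cost model (per-machine compute time plus data-staging time) and all names below are illustrative, not taken from the paper.

```python
def data_aware_min_min(jobs, machines, compute, transfer):
    """Greedy min-min heuristic with data-transfer cost.
    compute[j][m]:  execution time of job j on machine m
    transfer[j][m]: time to stage job j's input data onto machine m
    Returns a schedule {job: machine} and the resulting makespan."""
    ready = {m: 0.0 for m in machines}        # earliest free time per machine
    schedule = {}
    unassigned = set(jobs)
    while unassigned:
        # For each job, find its best (machine, completion time); then pick
        # the job whose best completion time is smallest (min-min rule).
        best = {}
        for j in unassigned:
            m = min(machines,
                    key=lambda m: ready[m] + transfer[j][m] + compute[j][m])
            best[j] = (m, ready[m] + transfer[j][m] + compute[j][m])
        j = min(best, key=lambda j: best[j][1])
        m, finish = best[j]
        schedule[j] = m
        ready[m] = finish
        unassigned.remove(j)
    return schedule, max(ready.values())
```

Decoupling data from processing shows up in the two separate cost tables: a machine that computes a job quickly may still lose to one that already holds the data.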

Joanna Kołodziej, Samee Ullah Khan

### Improving Scalability of a Hybrid Infrastructure for E-Science Applications

The Italian GPS receiver for Radio Occultation (RO) was launched from the Satish Dhawan Space Center (Sriharikota, India) on board the Indian Remote Sensing OCEANSAT-2 satellite. The Italian Space Agency has brought together a set of Italian universities and research centers to develop an innovative solution for quickly processing RO data and extracting atmospheric profiles. The algorithms adopted can be used to characterize temperature, pressure and humidity. Given the large amount of data to process, a hybrid infrastructure has been created using both the existing (fully physical) grid environment and a virtual environment composed of virtual machines from a local cloud infrastructure and Amazon EC2. This enhancement of the project stems from the need for computational power in the case of an unexpected burst of computation to which the physical infrastructure would not be able to respond on its own. The virtual environment guarantees flexibility, progressive scalability, and the other benefits derived from virtualization and cloud computing.

Olivier Terzo, Lorenzo Mossucca, Pietro Ruiu, Giuseppe Caragnano, Klodiana Goga, Riccardo Notarpietro, Manuela Cucca

### Energy Aware Communication Protocols for Wireless Sensor Networks

Ad hoc networking is a key technology in wireless communication that allows wireless devices located within transmission range to communicate directly with each other without an established fixed network infrastructure. It is an area of research that has become extremely popular over the last decade and is rapidly advancing into different areas of technology. In this paper, the properties, limitations and basic issues related to the development of wireless sensor network applications are investigated. The focus is on reliable and energy-aware inter-node communication strategies. Approaches to power control and activity control of nodes are briefly summarized. The results of a simulation-based performance evaluation of energy-aware protocols are presented and discussed. A protocol that relies on hierarchical routing and uses periodic coordination for energy-efficient WSNs is described and investigated.
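Hierarchical routing with periodic coordination is the pattern of the well-known LEACH family of WSN protocols. The paper does not name its protocol, so purely as a hedged illustration, here is a minimal sketch of LEACH-style threshold-based cluster-head election, where head duty rotates to spread energy drain.

```python
import random

def leach_threshold(p: float, r: int) -> float:
    """LEACH election threshold T(n) for round r with desired
    cluster-head fraction p; nodes that already served as head in the
    current epoch of 1/p rounds are excluded (their threshold is 0)."""
    return p / (1 - p * (r % round(1 / p)))

def elect_cluster_heads(nodes, p, r, rng=random.random):
    """Each eligible node independently becomes a cluster head with
    probability T(n); a head is then ineligible until the epoch ends,
    which rotates the energy-hungry head role across the network."""
    t = leach_threshold(p, r)
    heads = []
    for node in nodes:
        if node["eligible"] and rng() < t:
            node["eligible"] = False      # cannot be head again this epoch
            heads.append(node["id"])
    if r % round(1 / p) == round(1 / p) - 1:
        for node in nodes:                # epoch over: all eligible again
            node["eligible"] = True
    return heads
```

Note how the threshold rises over the epoch (it reaches 1 in the last round), guaranteeing every node serves as head exactly once per epoch in expectation.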

### GPU Acceleration for Hermitian Eigensystems

As a recurrent problem in numerical analysis and computational science, eigenvector and eigenvalue determination usually employs high-performance linear algebra libraries. This paper explores the implementation of high-performance routines for the solution of multiple large Hermitian eigenvector and eigenvalue systems on a Graphics Processing Unit (GPU). We report a performance increase of up to two orders of magnitude over the original $\textsc{Eispack}$ routines with an NVIDIA Tesla C2050 GPU, providing an effective order-of-magnitude increase in unit cell size or simulated resolution for Inelastic Neutron Scattering (INS) modelling from atomistic simulations.
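The per-matrix computation being accelerated is the standard Hermitian eigendecomposition. As a CPU reference point only (this is not the paper's GPU code), NumPy's LAPACK-backed `eigh` solves one such system:

```python
import numpy as np

def hermitian_eigensystem(h: np.ndarray):
    """Return the eigenvalues (ascending, guaranteed real) and orthonormal
    eigenvectors of a Hermitian matrix H, so that H @ v[:, k] == w[k] * v[:, k]."""
    assert np.allclose(h, h.conj().T), "input must be Hermitian"
    # eigh exploits Hermitian symmetry, unlike the general-purpose eig.
    w, v = np.linalg.eigh(h)
    return w, v
```

The GPU approach in the paper batches many such independent systems; the mathematical contract per matrix is the same as above.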

Michael T. Garba, Horacio González–Vélez, Daniel L. Roach

### Scalable and High Performing Learning and Mining in Large-Scale Networked Environments: A State-of-the-art Survey

Scalability is a major issue in the application of machine learning and data mining to large-scale networked environments. While there has been important progress in the learnability of models for medium-sized datasets, large-scale systems still pose significant challenges. In particular, with the evolution of distributed and networked environments, the complexity of the learning and mining process has grown due to the possibility of integrating more data into the learning process. This paper surveys the state of the art in methods and algorithms for enhancing the scalability of machine learning and data mining in large-scale networked systems.

Evis Trandafili, Marenglen Biba

### Heterarchy in Constructing Decision Trees – Parallel ACDT

In this paper, a novel decision tree construction algorithm that utilizes Ant Colony Optimization (ACO) is presented. ACO is a population-based metaheuristic inspired by the foraging behavior of real ants. It searches for optimal solutions by considering both local heuristic information and knowledge accumulated in the form of pheromone trails.

In this paper we study a parallel version of the Ant Colony Decision Trees (ACDT) algorithm developed for constructing decision trees. Decision tree induction is a widely used technique to generate classifiers from training data through a process of recursively splitting the data attribute space. The main idea of this paper is to speed up the tree construction process by dividing the population of ants into subpopulations for which calculations are carried out in parallel. The exchange of information between ants is possible through direct and indirect communication channels on the local and global (inter-subpopulation) levels. Ants cooperating in this way form a structure called heterarchy.

A detailed study of the proposed algorithm, focusing both on the computation time and the quality of results, is carried out using data sets from the UCI Machine Learning repository. The proposed parallelization scheme for ACDT demonstrates the possibility of improving not only the computation time but also the quality of the results.
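The heterarchy of subpopulations with local pheromone updates and a periodic global exchange can be sketched in miniature. Everything below is a toy stand-in: the `quality` function replaces the real evaluation of candidate decision-tree splits on training data, and the exchange rule (simple averaging of pheromone vectors) is just one possible global channel, not necessarily the one used in ACDT.

```python
import random

def run_heterarchy(n_subpops, n_attrs, iters, exchange_every,
                   evaporation=0.1, quality=None, rng=random.Random(0)):
    """Each subpopulation keeps its own pheromone vector over split
    attributes. Within a subpopulation, ants communicate indirectly via
    pheromone (the local channel); every `exchange_every` iterations the
    vectors are averaged across subpopulations (the global channel)."""
    quality = quality or (lambda a: a + 1)        # stand-in split quality
    pher = [[1.0] * n_attrs for _ in range(n_subpops)]
    for it in range(iters):
        for p in pher:                            # subpopulations run in parallel
            weights = [p[a] * quality(a) for a in range(n_attrs)]
            total = sum(weights)
            r, acc, choice = rng.random() * total, 0.0, n_attrs - 1
            for a, w in enumerate(weights):       # roulette-wheel attribute choice
                acc += w
                if r <= acc:
                    choice = a
                    break
            for a in range(n_attrs):              # evaporate, then reinforce choice
                p[a] *= (1 - evaporation)
            p[choice] += quality(choice)
        if (it + 1) % exchange_every == 0:        # global exchange step
            avg = [sum(p[a] for p in pher) / n_subpops for a in range(n_attrs)]
            pher = [avg[:] for _ in range(n_subpops)]
    return pher
```

The inner loop over `pher` is where a real implementation would dispatch subpopulations to separate workers; only the exchange step requires synchronization.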

Urszula Boryczka, Jan Kozak, Rafał Skinderowicz

### Decision Support Simulation System Based on Synchronous Manufacturing

The focus of the article is the redesign of an assembly cell in a car manufacturing company. The new design had to include the operations related to the assembly of new roofs without disrupting the throughput rate. The theory-of-constraints framework has been used to propose designs as well as to define the evaluation criteria. The complexity of the system has been studied using a discrete-event simulation model. Standard methodological steps have been followed to develop a robust decision support simulation system (DSSS).
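The event-calendar mechanics underlying such a discrete-event model can be sketched with a single-station example. This toy (a job list and a fixed service time) is far simpler than the paper's assembly-cell model and shares only the simulation technique.

```python
import heapq

def simulate_cell(arrivals, service_time):
    """Minimal discrete-event simulation of one assembly station: jobs
    arrive at the given times, are served FIFO with a fixed service time,
    and the function returns each job's completion time."""
    events = [(t, i, "arrive") for i, t in enumerate(arrivals)]
    heapq.heapify(events)                 # the event calendar, ordered by time
    busy_until, done = 0.0, {}
    while events:
        t, i, kind = heapq.heappop(events)
        if kind == "arrive":
            start = max(t, busy_until)    # wait if the station is occupied
            busy_until = start + service_time
            heapq.heappush(events, (busy_until, i, "depart"))
        else:
            done[i] = t                   # record the completion event
    return done
```

Throughput questions like the paper's (does a redesign disrupt the rate?) are answered by comparing completion times across model variants.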

F. Javier Otamendi

### Backmatter
