
2017 | Book

Computational Collective Intelligence

9th International Conference, ICCCI 2017, Nicosia, Cyprus, September 27-29, 2017, Proceedings, Part I

Edited by: Ngoc Thanh Nguyen, Prof. George A. Papadopoulos, Prof. Piotr Jędrzejowicz, Dr. Bogdan Trawiński, Prof. Dr. Gottfried Vossen

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This two-volume set (LNAI 10448 and LNAI 10449) constitutes the refereed proceedings of the 9th International Conference on Computational Collective Intelligence, ICCCI 2017, held in Nicosia, Cyprus, in September 2017.
The 117 full papers presented were carefully reviewed and selected from 248 submissions. The conference focuses on the methodology and applications of computational collective intelligence, including: multi-agent systems, knowledge engineering and semantic web, social networks and recommender systems, text processing and information retrieval, data mining methods and applications, sensor networks and internet of things, decision support & control systems, and computer vision techniques.

Table of Contents

Frontmatter

Knowledge Engineering and Semantic Web

Frontmatter
Mapping the Territory for a Knowledge-Based System

Although many powerful applications are able to locate vast amounts of digital information, effective tools for selecting, structuring, personalizing, and making sense of the digital resources available to us are lacking. As a result, the opportunities to connect and empower knowledge workers are severely limited. In recognizing these constraints, predictions of the ‘Next Knowledge Management (KM) Generation’ focus on nurturing personal and social settings and on utilizing existing and creating new knowledge. Levy even envisages a decentralizing KM revolution that gives more power and autonomy to individuals and self-organized groups. But such promising scenarios have not materialized yet. It might be time to follow Pollard’s suggestion of going back to the original premise and promise of KM and start again - but this time from the bottom up, by developing processes, programs, and tools to improve knowledge workers’ effectiveness and sense-making. As part of an ongoing design science research (DSR) project, this paper contributes to prior publications by synthesizing renowned computer-based methods of collective intelligence to provide a visual meta-perspective of a novel personal knowledge management (PKM) concept and prototype application. By focusing on time, space, and causality, the bottom-up approach taken pictures the relevant personal and organizational knowledge spaces as a substitute for the intangible KM territory and provides a guiding map for knowledge workers and KM education.

Ulrich Schmitt
A Bidirectional-Based Spreading Activation Method for Human Diseases Relatedness Detection Using Disease Ontology

There is considerable demand for a standard representation of the ubiquitously available information on the web. Developing an efficient algorithm for traversing large ontologies is a key challenge for many semantic web applications. This paper proposes a spreading-activation-over-ontology method based on a bidirectional search technique in order to detect the relatedness between two human diseases. The aim of our work is to detect disease relatedness by considering semantic domain knowledge and description logic rules. The proposed method is divided into two phases: Semantic Matching and Disease Relatedness Detection. In the semantic matching phase, diseases in the submitted query are semantically identified in the ontology graph. In the disease relatedness detection phase, disease relatedness is detected by running a bidirectional-based spreading activation algorithm, which returns the related path (a set of diseases) if one exists. In addition, the classification of these diseases is provided as well.

Said Fathalla, Yaman Kannot
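
As a rough illustration of the bidirectional search idea described in the abstract above, the following Python sketch finds a connecting path between two diseases in a toy ontology graph; the graph, node names, and relatedness criterion are hypothetical and stand in for the Disease Ontology and the paper's description-logic rules.

```python
# Illustrative bidirectional search over a toy disease-ontology graph.
from collections import deque

def bidirectional_path(graph, source, target):
    """Return a path of diseases linking source and target, or None."""
    if source == target:
        return [source]
    frontiers = ({source: None}, {target: None})   # visited -> parent, per side
    queues = (deque([source]), deque([target]))
    while queues[0] and queues[1]:
        for side in (0, 1):
            visited, other = frontiers[side], frontiers[1 - side]
            node = queues[side].popleft()
            for neigh in graph.get(node, ()):
                if neigh in visited:
                    continue
                visited[neigh] = node
                if neigh in other:                 # the two frontiers meet
                    return _merge(frontiers, neigh, side)
                queues[side].append(neigh)
    return None

def _merge(frontiers, meeting, side):
    left, right = [], []
    node = meeting
    while node is not None:                        # walk back on the finding side
        left.append(node)
        node = frontiers[side][node]
    node = frontiers[1 - side][meeting]
    while node is not None:                        # walk back on the other side
        right.append(node)
        node = frontiers[1 - side][node]
    path = left[::-1] + right
    return path if side == 0 else path[::-1]

toy_ontology = {
    "influenza": ["viral infectious disease"],
    "viral infectious disease": ["influenza", "disease by infectious agent"],
    "disease by infectious agent": ["viral infectious disease", "cholera"],
    "cholera": ["disease by infectious agent"],
}
print(bidirectional_path(toy_ontology, "influenza", "cholera"))
```
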
Semantic Networks Modeling with Operand-Operator Structures in Association-Oriented Metamodel

Semantic networks are nowadays one of the most frequently used knowledge representation methods. In this paper the authors present a novel approach towards the definition and semantics of semantic networks with the use of predefined primitives as key structural elements. The presented solution is a part of the Semantic Knowledge Base project, in which it is used to store complex information such as facts and rules. This approach aims at increased expressiveness of knowledge representation with a higher level of clarity of message. It introduces not only a unique duality of node types, namely operators and operands, but also provides mechanisms such as multiplicity, quantifiers or modifiers that can be applied to each and every network element.

Marek Krótkiewicz, Marcin Jodłowiec, Krystian Wojtkiewicz
Knowledge Integration in a Manufacturing Planning Module of a Cognitive Integrated Management Information System

One of the important functions of integrated management information systems, including multi-agent systems, is proper production planning. Due to the different production planning strategies (methods) and the company’s limited production capacity, the agents running in the system may generate different versions of the production plans. In other words, the agents’ knowledge may differ. The final version may be selected by the system user; however, it should be noted that this is a time-consuming process, and there is a risk of the user choosing the worst version. A better solution is to automatically integrate the agents’ knowledge and to determine one version of the plan presented to the user. The aim of this paper is to develop a consensus algorithm that allows integrating manufacturing plans generated by different agents and presenting one solution (that is very close to these plans, but not necessarily one of them) to the user.

Marcin Hernes, Andrzej Bytniewski
The Knowledge Increase Estimation Framework for Ontology Integration on the Relation Level

The task of integrating sets of data or knowledge (regardless of the choice of its representation) can be a very daunting procedure, requiring a lot of computational resources and time. The authors claim that it is beneficial to develop a formal framework which could be used to estimate the profitability of the integration, ideally before the integration even occurs. Therefore, a set of algorithms for such estimation of the increase of knowledge concerning the relation level of ontology integration is proposed.

Adrianna Kozierkiewicz-Hetmańska, Marcin Pietranik
Particle Swarm of Agents for Heterogenous Knowledge Integration

There is an ever-increasing number of sources that may be used for knowledge processing. Often this requires dealing with heterogeneous knowledge, and current methods become inadequate for these tasks. Thus it becomes important to develop better general methods and tools, or methods tailored to specific problems. In this paper we consider the problem of knowledge integration in a group of social agents. We use approaches based on particle swarm optimization – without the optimization component – to model the diffusion of information in a group of social agents. We present a short description of the theoretical model – a modification of the PSO heuristic. We also conduct an experiment comparing this approach to previously researched models of knowledge integration in a group of social agents.

Marcin Maleszka
Design Proposal of the Corporate Knowledge Management System

This paper presents a proposal of a knowledge management system for managing and refining corporate knowledge. This information system is web-based to allow immediate online access. Corporate knowledge is organized to enable efficient extraction and use in business processes. It can be stored in the form of a course (organized by time sequence) or a repository (i.e. a collection of resources), depending on the characteristics of the particular knowledge. The system’s proposal is demonstrated on the principle of several facilitators and many system users (employees). The proposed system can likewise be used for educational purposes, especially in lifelong learning.

Ivan Soukal, Aneta Bartuskova
Dipolar Data Integration Through Univariate, Binary Classifiers

Aggregation of large data sets is one of the current topics of exploratory analysis and pattern recognition. Integration of data sets is a useful and necessary step towards knowledge extraction from large data sets. The possibility of separable integration of multidimensional data sets by one-dimensional binary classifiers is analyzed in the paper, as well as the design of a layer of binary classifiers for separable aggregation. The optimization problem of separable layer design is formulated. A dipolar strategy aimed at optimizing separable aggregation of large data sets is proposed.

Leon Bobrowski
Intelligent Collective: The Role of Diversity and Collective Cardinality

Nowadays, there appears to be ample evidence that collectives can be intelligent if they satisfy diversity, independence, decentralization, and aggregation. Although many measures have been proposed to evaluate the quality of collective prediction, it seems that they may not adequately reflect the intelligence degree of a collective. This is due to the fact that they take into account either the accuracy of collective prediction or the comparison between the capability of a collective and those of its members in solving a given problem. In this paper, we first introduce a new function that measures the intelligence degree of a collective. Subsequently, we carry out simulation experiments to determine the impact of diversity on the intelligence degree of a collective by taking into account its cardinality. Our findings reveal that diversity plays a major role in leading a collective to be intelligent. Moreover, the simulation results also indicate a case in which an increase in the cardinality of a collective does not cause any significant increase in its intelligence degree.

Van Du Nguyen, Mercedes G. Merayo, Ngoc Thanh Nguyen
RuQAR: Querying OWL 2 RL Ontologies with Rule Engines and Relational Databases

We present RuQAR, a tool that supports ABox reasoning as well as query answering with OWL 2 RL ontologies. RuQAR provides a non-naive method of transforming such ontologies into rules which can be executed by a forward-chaining rule engine. Thus, query answering can be performed using functions available in a rule engine. Moreover, RuQAR supports relational database access, which extends reasoning scalability. We evaluate our tool using the LUBM benchmark ontology and data stored in relational databases. We describe our approach, RuQAR’s implementation details, as well as future research and development.

Jarosław Bąk, Michał Blinkiewicz
The Efficiency Analysis of the Multi-level Consensus Determination Method

The task of processing large sets of data stored in distributed sources is still a big problem. The determination of one consistent version of data can be very time- and cost-consuming. Therefore, a balance between the time of execution and the quality of the integration results is needed. This paper is devoted to a multi-level approach to data integration using Consensus Theory. The experimental verification of multi-level integration methods has proved that the division of the integration task into smaller subproblems gives similar results to the one-level approach, but improves time performance.

Adrianna Kozierkiewicz-Hetmańska, Mateusz Sitarczyk
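
The following Python sketch illustrates the general idea of splitting an integration task into subproblems and then integrating the partial consensuses; it uses a simple coordinate-wise majority over binary vectors, which is only a stand-in for the Consensus Theory structures actually used in the paper.

```python
# Two-level consensus for binary-vector knowledge profiles (illustrative only).
from itertools import islice

def consensus(profiles):
    """Coordinate-wise majority vote: a simple one-level consensus."""
    n = len(profiles)
    return tuple(int(sum(col) * 2 >= n) for col in zip(*profiles))

def multi_level_consensus(profiles, group_size=3):
    """Split the profiles into groups, integrate each group, then integrate
    the partial results -- trading a little accuracy for runtime."""
    it, partial = iter(profiles), []
    while True:
        group = list(islice(it, group_size))
        if not group:
            break
        partial.append(consensus(group))
    return consensus(partial)

profiles = [(1, 0, 1, 1), (1, 1, 0, 1), (0, 0, 1, 1),
            (1, 0, 0, 0), (1, 1, 1, 1), (0, 1, 1, 0)]
print(consensus(profiles))              # one-level integration
print(multi_level_consensus(profiles))  # two-level integration
```
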
Collective Intelligence Supporting Trading Decisions on FOREX Market

The aim of the paper is to present an approach to support decision-making on financial markets using the idea of collective intelligence implemented as a multi-agent system, called A-Trader. A-Trader is integrated with the MetaTrader system, which provides online data, including ticks of any securities, goods or currency pairs. Many of the implemented agents apply AI methods and communicate their trading advice to the supervisor, which integrates all information and suggests the trading decision. The first part of the paper presents the architecture and functionalities of A-Trader. The structure and functionality of the agents and the approach to building the trading strategies are detailed. The last section describes the results of the performance evaluation of selected trading strategies on FOREX.

Jerzy Korczak, Marcin Hernes, Maciej Bac

Social Networks and Recommender Systems

Frontmatter
Testing the Acceptability of Social Support Agents in Online Communities

This paper describes the first steps towards the development and evaluation of an ‘artificial friend’, i.e., an intelligent agent that provides support via text messages in social media in order to alleviate the stress that users experience as a result of everyday problems. The agent consists of three main components: (1) a module that processes text messages based on text mining and classifies them into categories of problems, (2) a module that selects appropriate support strategies based on a validated psychological model of emotion regulation, and (3) a module that generates appropriate responses based on the output of the first two modules. The application has been tested in a pilot study involving 33 participants who were asked to interact with different variants of the agent via the social network Telegram. The results provide hints that the agent is appreciated over a baseline version that generates random support messages, but also point at some possibilities to further improve the agent.

Lenin Medeiros, Tibor Bosse
Enhancing New User Cold-Start Based on Decision Trees Active Learning by Using Past Warm-Users Predictions

The cold-start is the situation in which the recommender system has no, or not enough, information about the (new) users/items, i.e. their ratings/feedback; hence, the recommendations are not accurate. Active learning techniques for recommender systems propose to interact with new users by asking them to rate a few items sequentially while the system tries to detect their preferences. This bootstraps recommender systems and alleviates the new user cold-start. Compared to the current state of the art, the presented approach takes into account the users’ rating predictions in addition to the available users’ ratings. The experimentation shows that our approach achieves better performance in terms of precision and limits the number of questions asked to the users.

Manuel Pozo, Raja Chiky, Farid Meziane, Elisabeth Métais
An Efficient Parallel Method for Performing Concurrent Operations on Social Networks

This paper presents our approach to optimizing concurrent operations on a large-scale social network. Here, we focus on the directed, unweighted relationships among members of a social network, which can be illustrated as a directed, unweighted graph. With such a large-scale dynamic social network, we face the problem of handling concurrent operations that add or remove edges dynamically while one may ask to determine the relationship between two members. To solve this challenge, we propose an efficient parallel method based on (i) utilizing an appropriate data structure, (ii) optimizing the updating actions and (iii) improving the performance of query processing by both reducing the search space and computing in a multi-threaded parallel fashion. Our method was validated on the datasets from the SIGMOD Contest 2016 and the SNAP DataSet Collections, with good experimental results compared to other solutions.

Phuong-Hanh Du, Hai-Dang Pham, Ngoc-Hoa Nguyen
Simulating Collective Evacuations with Social Elements

This work proposes an agent-based evacuation model that incorporates social aspects in the behaviour of the agents and validates it on a benchmark. It aims to fill a gap in this research field, which mainly contains evacuation models without psychological and social factors such as group decision making and other social interactions. The model was compared with a previous model, its new social features were analysed, and the model was validated. With the inclusion of social aspects, new patterns emerge organically from the behaviour of each agent, as shown in the experiments. Notably, people travelling in groups instead of alone seem to reduce evacuation time, and helping behaviour is not as costly for the evacuation time as expected. The model was validated with data from a real scenario and demonstrates acceptable results and the potential to be used in predicting real emergency scenarios. This model will be used by emergency management professionals in emergency prevention.

Daniel Formolo, C. Natalie van der Wal
Social Networks Based Framework for Recommending Touristic Locations

Tourists need tools that can help them to select locations in which to spend their holidays. There are multiple social networks in which we find information about hotels and about users’ experiences. The problem is how tourists can use this information to build their own opinion about a particular location and decide whether they should go to that place or not. In this paper, we present the design of a solution that can be used to achieve this task: a framework for a recommender system that is based on the opinions of persons on the one hand and on users’ preferences on the other hand to generate recommendations. Indeed, opinions of tourists are extracted from different sources and analyzed to extract how the hotels are perceived by their customers in terms of features and activities. The final step consists of matching these opinions with the users’ preferences to generate the recommendations. A prototype was developed in order to show how this framework works in practice.

Mehdi Ellouze, Slim Turki, Younes Djaghloul, Muriel Foulonneau
Social Network-Based Event Recommendation

The number of events generated on social networks has been growing quickly in recent years. It is difficult for users to find events that best match their preferences. Recommender systems appear to be a solution to this problem. However, event recommendation is significantly different from traditional recommendations, such as products and movies. Social events are created continuously and are only valid for a short time, so recommending a past event is meaningless. In this paper, we propose a new event recommendation method based on social networks. First, the behavior of users is detected in order to build each user’s profile. Then the users’ relationships are extracted to measure the interaction strength between them, which is a fundamental factor affecting a user’s decision to attend events. In addition, the opinions about attended events are taken into account to evaluate the satisfaction of attendees by using a deep learning method. Twitter is used as a case study for the method. The experiments show that the method achieves promising results in comparison to other methods.

Dinh Tuyen Hoang, Van Cuong Tran, Dosam Hwang
Deep Neural Networks for Matching Online Social Networking Profiles

This paper details a novel method for grouping together online social networking profiles of the same person extracted from different sources. Name ambiguity arises naturally in any culture due to the popularity of specific names which are shared by a large number of people. This is one of the main problems in people search, which is also multiplied by the number of different data sources that contain information about the same person. Grouping pages from various social networking websites in order to disambiguate between different individuals with the same name is an important task in people search. This allows building a detailed description and a consolidated online identity for each individual. Our results show that given a large enough dataset, neural networks and word embeddings provide the best method to solve this problem.

Vicentiu-Marian Ciorbaru, Traian Rebedea
Effect of Network Topology on Neighbourhood-Aided Collective Learning

This article is about multi-agent collective learning in networks. An agent revises its current model when collecting a new observation inconsistent with it. While revising, the agent interacts with its neighbours in the community and benefits from observations that other agents send on a utility basis. The learning speed of an agent with respect to all the observations within the community clearly depends on the neighbourhood structure, i.e. on the network topology. A comprehensive experimental study characterizes this influence, showing the main factors that affect neighbourhood-aided collective learning. Two kinds of information are propagated in the networks: hypotheses and counter-examples. This study also weighs the impact of these propagations by considering some variants in which one kind of propagation is stopped. Our main purpose is to understand how network characteristics affect the extent to which the agents learn and share models and observations, and consequently the learning speed within the community.

Lise-Marie Veillon, Gauvain Bourgne, Henry Soldano
A Generic Approach to Evaluate the Success of Online Communities

The success of online communities depends on different aspects and has been the subject of several evaluation works. Most of the existing works in this scope are not generalizable due to their strong dependence on the considered community characteristics. In light of this finding, our aim is to propose a generic approach to evaluate and improve the success of online communities. This paper starts with an identification of the success determinants of online communities, including the participants, the technology as well as the common goal. After that, we propose a generic evaluation approach based on two levels. The first level consists of failure detection based on a quantification of the success determinants. The second level focuses on failure explanation and an identification of the most plausible causes of the detected failures. This approach was applied to evaluate the success of a social network-based community; it gave reliable results.

Raoudha Chebil, Wided Lejouad Chaari, Stefano A. Cerri
Considerations in Analyzing Ecological Dependent Populations in a Changing Environment

Simulation is often used during the development of a system to study the performance of a trial design. On the basis of information gained through this process, the design may be modified and retested. Thus, software offers the convenience of introducing changes or fixes without major effort, whereas it could be quite difficult or even impossible to modify already-built hardware, for instance. Moreover, scenarios exist where a simulation offers not only the cheaper, but also the safer and ethically more acceptable solution, as is the case with nearly any experiment involving live organisms. This paper summarizes the work done on the subject of simulating the dynamics between populations of various organisms that share an environment. The main goal is to introduce an application that comprehensively visualizes their interaction while offering means to conduct experiments of an ecological nature.

Kristiyan Balabanov, Robinson Guerra Fietz, Doina Logofătu
Automatic Deduction of Learners’ Profiling Rules Based on Behavioral Analysis

E-learning has become a more flexible learning approach thanks to the extensive evolution of Information and Communication Technologies. A particular focus has been placed on the exploitation of learners’ individual differences to ensure a continuous and adapted learning process. Nowadays, researchers have turned to learning analytics for learner modeling in order to assist educational institutions in improving learner success and increasing learner retention. In this paper, we describe a new implicit approach using learning analytics to construct interpretative views of the learners’ interactions, even those made outside the E-learning platform. We aim to automatically deduce learners’ profiling rules independently of the learning style models proposed in the literature. In this way, we provide an innovative process that may help tutors to profile learners and evaluate their performance, support course designers in their authoring tasks and adapt the learning objects to the learners’ needs.

Fedia Hlioui, Nadia Aloui, Faiez Gargouri
Predicting the Evolution of Scientific Output

Various efforts have been made to quantify scientific impact and identify the mechanisms that influence its future evolution. The first step is the identification of what constitutes scholarly impact and how it is measured. In this direction, various approaches focus on future citation count or h-index prediction at author or publication level, on fitting the distribution of citation accumulation or accurately identifying award winners, upcoming hot research topics or academic rising stars. A plethora of features have been contemplated as possible influential factors and assorted machine-learning methodologies have been adopted to ensure timely and accurate estimations. Here, we provide an overview of the field challenges, as well as a taxonomy of the existing approaches to identify the open issues that are yet to be addressed.

Antonia Gogoglou, Yannis Manolopoulos

Data Mining Methods and Applications

Frontmatter
Enhanced Hybrid Component-Based Face Recognition

This paper presents a hybrid component-based face recognition technique. Can face recognition be enhanced by recognizing individual facial components: forehead, eyes, nose, cheeks, mouth and chin? The proposed technique implements the texture descriptors Grey-Level Co-occurrence Matrix (GLCM) and Gabor filters, and the shape descriptor Zernike moments. These descriptors are effective feature representations of facial components and are robust to illumination changes. Two classification techniques have been used and compared: Support Vector Machines (SVM) and Error-Correcting Output Codes (ECOC). The experimental results obtained on three different facial databases, FERET, FEI and CMU, show that component-based facial recognition is more effective than whole-face recognition.

Andile M. Gumede, Serestina Viriri, Mandlenkosi V. Gwetu
Enhancing Cholera Outbreaks Prediction Performance in Hanoi, Vietnam Using Solar Terms and Resampling Data

A solar term is an ancient Chinese concept that indicates a point of season change in lunisolar calendars. Solar terms are currently in use in China and nearby countries, including Vietnam. In this paper we propose a new solution to increase the performance of cholera outbreak prediction in Hanoi, Vietnam. The new solution is a combination of solar terms, training data resampling and classification methods. Experimental results show that using solar terms in combination with ROSE resampling and the random forests method delivers a high area under the Receiver Operating Characteristic curve (AUC) and balanced sensitivity and specificity. Without interaction effects, the solar terms help increase the mean AUC by 12.66%. The most important predictor in the solution is the Sun’s ecliptical longitude corresponding to solar terms. Among the solar terms, frost descent and start of summer are the most important.

Nguyen Hai Chau
Solving Dynamic Traveling Salesman Problem with Ant Colony Communities

The paper studies Ant Colony Communities (ACC), which are used to solve the Dynamic Travelling Salesman Problem (DTSP). An ACC consists of a server and a number of client ACO colonies. The server coordinates the work of individual clients and sends them cargos with data to process, then receives and integrates partial results. Each client implements the basic version of the ACO algorithm. They communicate via sockets and can therefore run on several separate computers. In the DTSP, distances between the nodes change constantly; the process is controlled by a graph generator. In order to study the performance of the ACC, we conducted a substantial number of experiments. Their results indicate that handling highly dynamic distance matrices requires a large number of clients.

Andrzej Siemiński
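
For orientation, the formulas below are the textbook Ant System transition and pheromone-update rules that each client colony in an ACC would apply; the paper's exact parameterization is not reproduced here.

```latex
% Probability that ant k moves from city i to city j (N_i^k = feasible cities):
P_{ij}^{k} = \frac{\tau_{ij}^{\alpha}\,\eta_{ij}^{\beta}}
                  {\sum_{l \in N_i^{k}} \tau_{il}^{\alpha}\,\eta_{il}^{\beta}},
\qquad \eta_{ij} = \frac{1}{d_{ij}}

% Pheromone evaporation and deposit after each iteration (L_k = tour length of ant k):
\tau_{ij} \leftarrow (1-\rho)\,\tau_{ij} + \sum_{k} \Delta\tau_{ij}^{k},
\qquad
\Delta\tau_{ij}^{k} =
\begin{cases}
  Q / L_k & \text{if ant } k \text{ used edge } (i,j)\\
  0       & \text{otherwise}
\end{cases}
```
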
Improved Stock Price Prediction by Integrating Data Mining Algorithms and Technical Indicators: A Case Study on Dhaka Stock Exchange

This paper employs a number of machine learning algorithms to predict future stock prices on the Dhaka Stock Exchange. The outcomes of the different machine learning algorithms are combined to form an ensemble to improve prediction accuracy. In addition, two popular and widely used technical indicators are combined with the machine learning algorithms to further improve prediction performance. To evaluate the proposed techniques, historical price and volume data over the past 15 months for three prominent stocks listed on the Dhaka Stock Exchange are collected and used as training and test data for the algorithms to predict the 1-day, 1-week and 1-month-ahead prices of these stocks. The predictions are made on both training and test data sets and the results are compared with other existing machine learning algorithms. The results indicate that the proposed ensemble approach, as well as the combination of technical indicators with the machine learning algorithms, can often provide better results, with reduced overall prediction error compared to many other existing prediction algorithms.

Syeda Shabnam Hasan, Rashida Rahman, Noel Mannan, Haymontee Khan, Jebun Nahar Moni, Rashedur M. Rahman
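
The sketch below shows one plausible way to combine regressors into an averaging ensemble over price features augmented with technical indicators; the concrete indicators (SMA and RSI), the models and the synthetic data are assumptions made for illustration only, since the abstract does not name them.

```python
# Illustrative ensemble over assumed technical-indicator features (SMA, RSI).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

def rsi(close, period=14):
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(period).mean()
    loss = (-delta.clip(upper=0)).rolling(period).mean()
    return 100 - 100 / (1 + gain / (loss + 1e-9))

prices = pd.DataFrame({"close": np.cumsum(np.random.randn(300)) + 100})  # synthetic
prices["sma_10"] = prices["close"].rolling(10).mean()
prices["rsi_14"] = rsi(prices["close"])
prices["target"] = prices["close"].shift(-1)          # 1-day-ahead price
prices = prices.dropna()

X, y = prices[["close", "sma_10", "rsi_14"]], prices["target"]
X_train, X_test, y_train, y_test = X[:-50], X[-50:], y[:-50], y[-50:]

models = [LinearRegression(),
          RandomForestRegressor(n_estimators=100, random_state=0)]
preds = np.mean([m.fit(X_train, y_train).predict(X_test) for m in models], axis=0)
print("ensemble MAE:", np.abs(preds - y_test.values).mean())
```
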
A Data Mining Approach to Improve Remittance by Job Placement in Overseas

Remittance, or foreign currency transactions, plays an important role in increasing a country’s financial growth. Bangladesh is a country with a reputation for manpower export, and every year it receives a considerable amount of remittance. Yet remittance can be improved further by providing workers with information about their future earnings. We propose a solution that will help the workers as well as the government to decide which country or countries will be best for workers in terms of earnings, thus increasing the country’s annual remittance. The research outcome of this paper could help the government export manpower to the right countries and help workers who are planning to move abroad to find the most suitable job with respect to their skills. Besides, the findings could help in reducing the unexpected returns of workers and the bad experiences workers endure abroad.

Ahsan Habib Himel, Tonmoy Sikder, Sheikh Faisal Basher, Ruhul Mashbu, Nusrat Jahan Tamanna, Mahmudul Abedin, Rashedur M. Rahman
Determining Murder Prone Areas Using Modified Watershed Model

In this paper, we present an algorithm for cluster detection using a modified watershed model. The presented model for cluster detection works better than the k-means algorithm. The proposed algorithm is also computationally inexpensive compared to the k-means, agglomerative hierarchical clustering and DBSCAN algorithms. The clustering results can be considered as good as those of DBSCAN, and sometimes the results obtained by the proposed model are better than the DBSCAN results. The presented algorithm resolves the conflicts faced by DBSCAN in the case of varying density. This paper also presents a way to reduce high-dimensional data to low-dimensional data with automatic association analysis; this algorithm can reduce high-dimensional data to even a single dimension, solving the challenges faced in multidimensional clustering by algorithms such as DBSCAN. This dimensionality reduction with automatic association is then applied to the watershed model to detect clusters in homicide data, find murder-prone zones and suggest areas for a person to avoid.

Joytu Khisha, Naushaba Zerin, Deboshree Choudhury, Rashedur M. Rahman
Comparison of Ensemble Learning Models with Expert Algorithms Designed for a Property Valuation System

Three expert algorithms based on the sales comparison approach, worked out for an automated system to aid in real estate appraisal, are presented in the paper. Ensemble machine learning models and expert algorithms for real estate appraisal were compared empirically in terms of their accuracy. The evaluation experiments were conducted using real-world data acquired from a cadastral system maintained in a big city in Poland. The characteristics of the applied techniques for real estate appraisal are discussed.

Bogdan Trawiński, Tadeusz Lasota, Olgierd Kempa, Zbigniew Telec, Marcin Kutrzyński

Multi-agent Systems

Frontmatter
Multiagent Coalition Structure Optimization by Quantum Annealing

Quantum computing is an increasingly significant area of research, given the speed-up that quantum computers may provide over classical ones. In this paper, we address the problem of finding the optimal coalition structure in a small multiagent system by expressing it in a format that can be solved by an adiabatic quantum computer such as D-Wave by quantum annealing. We also study the parameter values that enforce a correct solution of the optimization problem.

Florin Leon, Andrei-Ştefan Lupu, Costin Bădică
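
To make the optimization objective concrete, the classical brute-force sketch below enumerates all coalition structures of a tiny agent set and picks the one maximizing a hypothetical characteristic function; the paper's actual contribution, encoding this objective for a D-Wave quantum annealer, is not reproduced here.

```python
# Classical baseline for coalition structure generation on a tiny agent set.
from itertools import combinations

def partitions(agents):
    """Enumerate all partitions (coalition structures) of a list of agents."""
    if not agents:
        yield []
        return
    first, rest = agents[0], agents[1:]
    for k in range(len(rest) + 1):
        for members in combinations(rest, k):
            coalition = (first,) + members
            remaining = [a for a in rest if a not in members]
            for tail in partitions(remaining):
                yield [coalition] + tail

def value(coalition):
    # Hypothetical characteristic function rewarding larger coalitions.
    return len(coalition) ** 2 - 1

agents = ["a1", "a2", "a3", "a4"]
best = max(partitions(agents), key=lambda cs: sum(value(c) for c in cs))
print(best, sum(value(c) for c in best))
```
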
External Environment Scanning Using Cognitive Agents

External environment scanning is a very significant process in business organizations. It is very important for decision makers to have an understanding of the competitive position of the company. Current and reliable information is particularly important for corporate executives and helps decision makers make quick decisions in response to competitors’ actions. This knowledge helps increase the efficiency and effectiveness of the company's functioning. The aim of this paper is to develop a method for external environment scanning using cognitive agents. The research has been performed on the example of the hotel industry. The first part of the article presents the state of the art in the field. Next, the problem of external environment scanning in the hotel industry is presented. The method for environment scanning using a cognitive agent and the research experiment are presented in the last part of the paper.

Marcin Hernes, Anna Chojnacka-Komorowska, Kamal Matouk
OpenCL for Large-Scale Agent-Based Simulations

NetLogo is a Java-based multi-agent programmable modeling environment. Our aim is to improve the execution speed of NetLogo models with a large number of agents by means of heterogeneous computing. Firstly, we describe OpenCL as a suitable computing platform. Then we propose a new NetLogo-to-OpenCL extension (NL2OCL) which encapsulates the functionality of OpenCL and enables NetLogo to perform agents’ computations simultaneously on graphics processing units. The architecture of our extension is presented. An experimental flocking model with 40,000 agents is used for the evaluation of NL2OCL. When using NL2OCL, the simulation runs more than 300 times faster than the original model created solely in NetLogo. This means that with NL2OCL, the limitations on the maximum size of a NetLogo model and on simulation speed are tackled. Our approach allows using standard PC configurations with suitable graphics cards for large agent-based simulations while preserving the advantages of NetLogo. It is a good alternative for researchers who cannot afford high-performance computational systems.

Jan Procházka, Kamila Štekerová
A Novel Space Filling Curves Based Approach to PSO Algorithms for Autonomous Agents

In this work the swarm behavior principles of Craig W. Reynolds are combined with deterministic traits. This is done by using leaders whose motions are based on space-filling curves such as the Peano and Hilbert curves. Our goal is to evaluate how the swarm of agents works with this approach, supposing the entire swarm will better explore the entire space. Therefore, we examine different combinations of Peano and Hilbert curves with the already known swarm algorithms and test them in a practical challenge: the harvesting of manganese nodules on the sea floor with the use of autonomous robots. We run experiments with various settings, then evaluate and describe the results. In the last section some further development ideas and thoughts for the expansion of this study are considered.

Doina Logofătu, Gil Sobol, Daniel Stamate, Kristiyan Balabanov
Multiplant Production Design in Agent-Based Artificial Economic System

Management of production is a cornerstone of every economic system. This paper provides a formal description of a production unit (e.g. a factory) represented by an agent in an artificial economic system. The concept of the production unit consists of several layers representing the respective control processes from the operational up to the strategic level. For maintaining continuous production and optimizing the production unit's performance, both the geographical context (location of resources, distance between factories) and the economic context (market structures, competition) are important. Attention is focused primarily on the formal description of a multi-plant production model with autonomous control and facilities situated in distributed geographical locations.

Petr Tucnik, Zuzana Nemcova, Tomas Nachazel
Role of Non-Axiomatic Logic in a Distributed Reasoning Environment

The aim of this paper is to introduce the design of a novel Distributed Non-Axiomatic Reasoning System. The system is based on Non-Axiomatic Logic, a formalism in the domain of artificial general intelligence designed for the realization of systems with insufficient resources and knowledge. The proposed architecture is based on a layered and distributed structure of the backend knowledge base. The design of the knowledge base makes it fault-tolerant and scalable, and promises to allow the system to reason over large knowledge bases with real-time responsiveness.

Mirjana Ivanović, Jovana Ivković, Costin Bădică
Agent Having Quantum Properties: The Superposition States and the Entanglement

In agent-based simulation and modelling of intelligent complex systems, the problem of decision making by agents having incomplete, uncertain, local or global, exchanged or observed information is very common. Recent studies on quantum cognition introduce quantum properties such as superposition states, non-locality, oscillation, interference or entanglement into decision process modelling and analysis. This paper proposes a model of quantum-like agents able to implement the quantum properties of superposition states and local or non-local entanglement. A case study based on an adaptation of the Takuzu game illustrates our proposed approach to quantum agent modelling. A discussion on the interest of decomposing or not decomposing the components of a system in intelligent complex systems modelling is also proposed.

Alain-Jérôme Fougères

Sensor Networks and Internet of Things

Frontmatter
A Profile-Based Fast Port Scan Detection Method

Before intruding into a system, attackers need to collect information about the target machine. Port scanning is one of the most popular techniques for that purpose, as it enables attackers to discover services that may be exploited. In this paper we propose an accurate port scan detection method that can detect port scanning attacks earlier and with higher reliability than the widely used Snort-based approaches. Our method is profile-based, meaning that it does not only set a threshold on the connection attempts in a given time interval, like most current methods, but builds an IP profile of four features that enables more fine-grained detection. We use the Budapest node of the FIWARE Lab community cloud as a natural honeypot to identify malicious activities in it.

Katalin Hajdú-Szücs, Sándor Laki, Attila Kiss
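
A minimal sketch of the profile-based idea follows: per-source-IP features are accumulated over a sliding time window and an alert is raised only when several of them exceed their thresholds. The four concrete features and the threshold values are assumptions for illustration; the paper defines its own profile.

```python
# Sketch of a per-IP profile over a sliding window (features/thresholds assumed).
import time
from collections import defaultdict, deque

WINDOW = 60.0            # seconds
THRESHOLDS = {"ports": 20, "hosts": 10, "failed": 15, "rate": 2.0}

profiles = defaultdict(deque)   # src_ip -> recent connection events

def observe(src_ip, dst_ip, dst_port, failed, now=None):
    now = time.time() if now is None else now
    events = profiles[src_ip]
    events.append((now, dst_ip, dst_port, failed))
    while events and now - events[0][0] > WINDOW:    # expire old events
        events.popleft()
    features = {
        "ports": len({e[2] for e in events}),        # distinct destination ports
        "hosts": len({e[1] for e in events}),        # distinct destination hosts
        "failed": sum(1 for e in events if e[3]),    # failed connection attempts
        "rate": len(events) / WINDOW,                # attempts per second
    }
    # Flag only when several profile features exceed their thresholds.
    exceeded = sum(features[k] >= THRESHOLDS[k] for k in THRESHOLDS)
    return exceeded >= 2, features

suspicious, feats = observe("10.0.0.7", "192.168.1.5", 22, failed=True)
print(suspicious, feats)
```
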
Sensor Network Coverage Problem: A Hypergraph Model Approach

A sensor power schedule for a homogeneous network of sensors with limited battery capacity monitoring a set of points of interest (POIs) depends on the locations of the POIs and sensors, the monitoring range and battery lifetimes. A good schedule keeps the network operational as long as possible while maintaining the required level of coverage (not all POIs have to be monitored all the time). Searching for such a schedule is known as the Maximum Lifetime Coverage Problem (MLCP). A new approach to solving the MLCP is proposed in this paper. First, in every time step, we try to achieve the required coverage level using sensors with the longest remaining working time that monitor the largest number of POIs not covered yet. The resulting schedule is then used for generating a neighbour schedule by a perturbation algorithm. For the experimental evaluation of our approach a new set of test cases is proposed. Experiments with these data show interesting properties of the algorithm.

Krzysztof Trojanowski, Artur Mikitiuk, Mateusz Kowalczyk
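
The greedy step described in the abstract might look roughly like the following sketch, which activates sensors covering the most still-uncovered POIs and breaks ties by the longest remaining battery; the data layout and numbers are illustrative only.

```python
# One greedy scheduling step: cover the required fraction of POIs.
def greedy_step(sensors, pois, required_coverage):
    """sensors: {sid: {"battery": t, "covers": set(poi_ids)}}."""
    covered, active = set(), []
    candidates = {sid: s for sid, s in sensors.items() if s["battery"] > 0}
    while len(covered) < required_coverage * len(pois) and candidates:
        sid = max(candidates,
                  key=lambda i: (len(candidates[i]["covers"] - covered),
                                 candidates[i]["battery"]))
        gain = candidates[sid]["covers"] - covered
        if not gain:             # nothing new can be covered any more
            break
        covered |= gain
        active.append(sid)
        del candidates[sid]
    return active, covered

sensors = {
    "s1": {"battery": 5, "covers": {"p1", "p2"}},
    "s2": {"battery": 9, "covers": {"p2", "p3"}},
    "s3": {"battery": 2, "covers": {"p3", "p4"}},
}
pois = ["p1", "p2", "p3", "p4"]
print(greedy_step(sensors, pois, required_coverage=0.75))
```
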
Heuristic Optimization of a Sensor Network Lifetime Under Coverage Constraint

Control of a set of sensors disseminated in the environment to monitor activity is the subject of the presented research. Due to redundancy in the areas covered by sensor monitoring ranges, a satisfying level of coverage can be obtained even if not all the sensors are on. Sleeping sensors save their energy; thus, one can propose schedules defining the activity of each sensor over time which offer a satisfying level of coverage for a period of time longer than the lifetime of a single sensor. A new heuristic algorithm is proposed which searches for such schedules, maximizing the lifetime of the sensor network under a coverage constraint. The algorithm is experimentally tested on a set of test cases and the effectiveness of its components is presented and statistically verified.

Krzysztof Trojanowski, Artur Mikitiuk, Frédéric Guinand, Michał Wypych
Methods of Training of Neural Networks for Short Term Load Forecasting in Smart Grids

Modern systems of voltage control in distribution grids need load forecasts. The paper describes forecasting methods and concludes that using artificial neural networks for this problem is preferable. It shows that for complex real networks the particle swarm method is faster and more accurate than the traditional back-propagation method.

Robert Lis, Artem Vanin, Anastasiia Kotelnikova
Scheduling Sensors Activity in Wireless Sensor Networks

In this paper we consider the Maximal Lifetime Coverage Problem in Wireless Sensor Networks, which is formulated as a scheduling problem related to the activity of battery-powered sensors monitoring a two-dimensional space over time. The problem is known to be NP-hard, and to solve it we propose two heuristics which use specific knowledge about the problem. The first one is a stochastic greedy algorithm proposed by us, and the second one is the metaheuristic known as Simulated Annealing. The performance of both algorithms is verified by a number of numerical experiments. Comparison of the results shows that both algorithms provide results of similar quality, but the greedy algorithm is slightly better in terms of computational time complexity.

Antonina Tretyakova, Franciszek Seredynski, Frederic Guinand
Application of Smart Multidimensional Navigation in Web-Based Systems

In this paper we further develop a new method for implementing multidimensional navigation. This technique builds on the advantages of web-based techniques such as site maps, vertical menus and tag clouds. It combines the high informational density of tags with the structural quality of traditional hierarchical navigation. The result is organization by all organization schemes at once, increased information density and reduced interaction and attention-switching cost. Multidimensional navigation is expected to enhance the efficiency of information retrieval, especially on websites and web-based systems. It can, however, also be applied to information systems such as knowledge management or educational systems.

Ivan Soukal, Aneta Bartuskova
WINE: Web Integrated Navigation Extension; Conceptual Design, Model and Interface

Limitations in the area of web navigation and organization usually lead to poor usability, which further leads to user disorientation and dissatisfaction. In this paper we present WINE (Web Integrated Navigation Extension) as a novel alternative solution for efficient information retrieval. By combining a website’s data and local user data, WINE can offer useful navigation options in a consistent, stable environment. In the case of missing support, a partial solution is offered. The main goals of this interface are to reduce interaction cost and user disorientation and to provide means of personalization within the scope of each website or system.

Ivan Soukal, Aneta Bartuskova
Real-Life Validation of Methods for Detecting Locations, Transition Periods and Travel Modes Using Phone-Based GPS and Activity Tracker Data

Insufficient physical activity is a major health concern. Choosing active transport, such as cycling and walking, can contribute to an increase in activity. Fostering a change in behavior towards active transport could start with automated self-monitoring of travel choices. This paper describes an experiment to validate existing algorithms for detecting significant locations, transition periods and travel modes using smartphone-based GPS data and an off-the-shelf activity tracker. A real-life pilot study was conducted to evaluate the feasibility of the approach in the daily life of young adults. A clustering algorithm is used to locate people’s important places, and an analysis of the sensitivity of the different parameters used in the algorithm is provided. Our findings show that the algorithms can be used to determine whether a user travels actively or passively based on smartphone-based GPS speed data, and that a slightly higher accuracy is achieved when it is combined with activity tracker data.

Adnan Manzoor, Julia S. Mollee, Aart T. van Halteren, Michel C. A. Klein
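
As a toy illustration of travel-mode detection from GPS speed combined with activity-tracker data, the sketch below classifies a trip segment as active or passive; the speed and step thresholds are assumptions, not the values validated in the paper.

```python
# Toy active/passive travel classifier (thresholds are assumptions).
from statistics import median

def classify_segment(speeds_kmh, steps_per_min=None):
    """speeds_kmh: GPS speed samples for one trip segment."""
    m = median(speeds_kmh)
    if m < 7:                    # walking-range speeds
        mode = "walking"
    elif m < 25:                 # cycling-range speeds
        mode = "cycling"
    else:
        mode = "motorized"
    active = mode in ("walking", "cycling")
    # A tracker reporting sustained stepping overrides a 'passive' GPS verdict.
    if steps_per_min is not None and steps_per_min > 60:
        active = True
    return ("active" if active else "passive"), mode

print(classify_segment([4.2, 5.0, 5.5, 4.8]))                  # ('active', 'walking')
print(classify_segment([32.0, 40.5, 38.1], steps_per_min=5))   # ('passive', 'motorized')
```
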
Adaptive Runtime Middleware: Everything as a Service

The Internet of Things (IoT) applies to and has a large impact on a multitude of application domains, such as assistive technologies and smart transportation, by bringing together the physical and virtual worlds. Due to the large scale, the extreme heterogeneity and the dynamics of the IoT, there are huge challenges in leveraging the IoT within software applications. The management of devices and the interactions with software services pose one of, if not the, greatest challenges in the IoT with respect to supporting the development of distributed applications. This paper addresses this challenge by applying the service-oriented architecture paradigm to the dynamic management of IoT devices and to supporting the development of distributed applications. A service-oriented approach is a natural fit for both the communication and the management of IoT devices, and can be combined logically with software services, since it is currently the paradigm that excels and dominates the virtual domain. Building on our past and ongoing work on middleware platforms, this work reviews middleware solutions and proposes a service-oriented middleware platform to cope with IoT heterogeneity and the interactive functionality of the IoT, and to promote modular development that scales and provides flexibility in the development of IoT-based distributed applications.

Achilleas P. Achilleos, Kyriaki Georgiou, Christos Markides, Andreas Konstantinidis, George A. Papadopoulos

Decision Support & Control Systems

Frontmatter
Adaptive Neuro Integral Sliding Mode Control on Synchronization of Two Robot Manipulators

Designing a new adaptive synchronization controller for multiple robot manipulators is the main purpose of this study. This synchronization between robots is considered without direct communication between them. The adaptive synchronization method consists of an integral sliding mode controller improved with an adaptive neural network controller. In order to analyze the performance of the proposed method, four different situations are considered, and the results are compared with the ANFIS method. The proposed method is guaranteed by the Lyapunov stability method.

Parvaneh Esmaili, Habibollah Haron
Ant-Inspired, Invisible-Hand-Controlled Robotic System to Support Rescue Works After Earthquake

A bridge/scaffolding system based on ant-like robots is proposed. The Collective Intelligence of the system is derived from Adam Smith’s Invisible Hand phenomenon (ASIH). Such a bridge system would be delivered by air or truck to a particular earthquake location. Next, the bridge-robots, initialized by trained supervisors, would collectively assemble a bridge by transporting a chain or scaffolding structure similar to the bridges made by ants joining their bodies together. Additionally, the bridge-robots, task-controlled by supervisors, should also provide the functions of light crane or conveyor belt systems to manipulate heavy debris. Different rescue scenarios are presented with the help of 3D graphics.

Tadeusz Szuba
Estimation of Delays for Individual Trams to Monitor Issues in Public Transport Infrastructure

Open stream data on public transport published by cities can be used by third-party developers such as Google to create a real-time travel planner. However, even a system based on real data only examines the current situation on the roads. We have used open stream data with current tram locations and timetables to estimate the current delays of individual trams. On that basis, we calculate a global coefficient that can be used as a measure to monitor the current situation in a public transport network. We present a use case from the city of Warsaw that shows how a critical situation for a public transport network can be detected before the peak points of cumulative delays.

Marcin Luckner, Jan Karwowski
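
A simplified sketch of the delay estimation follows: each tram's observed arrival is compared with its timetable and the delays are aggregated. Using the mean delay as the 'global coefficient' is an assumption for illustration; the paper defines its own measure.

```python
# Estimate per-tram delays against the timetable and aggregate them.
from datetime import datetime

def delay_minutes(scheduled_iso, observed_iso):
    scheduled = datetime.fromisoformat(scheduled_iso)
    observed = datetime.fromisoformat(observed_iso)
    return max(0.0, (observed - scheduled).total_seconds() / 60.0)

observations = [  # (scheduled arrival at the last passed stop, observed arrival)
    ("2017-09-27T08:00:00", "2017-09-27T08:03:30"),
    ("2017-09-27T08:05:00", "2017-09-27T08:05:10"),
    ("2017-09-27T08:10:00", "2017-09-27T08:19:00"),
]
delays = [delay_minutes(s, o) for s, o in observations]
global_coefficient = sum(delays) / len(delays)      # assumed aggregate measure
print(delays, round(global_coefficient, 2))
```
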
Novel Effective Algorithm for Synchronization Problem in Directed Graph

An effective algorithm for solving the synchronization problem in a directed graph is presented. The system is composed of vertices and edges. Entities go through the system along given paths and can leave a vertex only if all other entities which go through this vertex have already arrived. The aim of this research is to create an algorithm for finding an optimal input vector of starting times of entities which gives the minimal waiting time of entities in vertices and thus in the whole system. The asymptotic complexity of the given solution and of a brute-force method are discussed and compared. The algorithm is demonstrated on an example from the field of train timetabling.

Richard Cimler, Dalibor Cimr, Jitka Kuhnova, Hana Tomaskova
Bimodal Biometric Method Fusing Hand Shape and Palmprint Modalities at Rank Level

Person identification is becoming an increasingly important task to guarantee the security of persons against possible fraud attacks. In this paper, we propose a bimodal biometric system based on hand shape and palmprint modalities for person identification. For each modality, SIFT (Scale Invariant Feature Transform) descriptors are extracted thanks to their advantages based on the invariance of features to possible rotation, translation, scale and illumination changes in images. These descriptors are then represented sparsely using a sparse representation method. The fusion step is carried out at rank level after the classification step using an SVM (Support Vector Machines) classifier, in which matching scores are transformed into probability measures. The experimentation is performed on the IITD hand database and the results demonstrate encouraging performance, achieving IR = 99.34%, which is competitive with methods fusing hand shape and palmprint modalities existing in the literature.

Nesrine Charfi, Hanene Trichili, Basel Solaiman
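
For illustration, the sketch below applies Borda-count fusion, one common rank-level scheme, to two modality-specific score lists; the identities and scores are made up, and the paper's own fusion first transforms SVM matching scores into probability measures.

```python
# Borda-count rank-level fusion of two biometric modalities (illustrative data).
def borda_fusion(score_lists):
    """score_lists: per-modality dicts {identity: matching score}."""
    totals = {}
    for scores in score_lists:
        ranking = sorted(scores, key=scores.get, reverse=True)
        n = len(ranking)
        for rank, identity in enumerate(ranking):
            totals[identity] = totals.get(identity, 0) + (n - rank)  # Borda points
    return sorted(totals, key=totals.get, reverse=True)

hand_shape = {"id_1": 0.61, "id_2": 0.74, "id_3": 0.58}
palmprint  = {"id_1": 0.83, "id_2": 0.79, "id_3": 0.40}
print(borda_fusion([hand_shape, palmprint]))   # fused identity ranking
```
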
Adaptation to Market Development Through Price Setting Strategies in Agent-Based Artificial Economic Model

The paper is focused on the incorporation of costs in relation to the selection of price-setting strategies, which, in general, represent a crucial part of economic agents’ decision making. Study and research of efficient decision making related to price setting are important especially in agent-based economic systems, which are the intended application area for the obtained results. This paper provides a detailed description of various cost types used in traditional economic analysis, and an effort has been made to identify constant (i.e. stable) and/or dynamic factors (typically related to the volume of production) in the cost calculation. The simulation part supports the discussion of these design questions of artificial economic models in several scenarios.

Petr Tucnik, Petr Blecha, Jaroslav Kovarnik
Efficacy and Planning in Ophthalmic Surgery – A Vision of Logical Programming

Different variables should be considered in order to identify the critical aspects that influence ophthalmologic surgery and, in particular, the patient’s conditions that can become the key factor in this process, i.e., situations that can influence the stability and surgery of the patient. The protocol of ophthalmologic surgery has as its main concerns the Glycemic Index, Maximum Blood Pressure, Abnormal Cardiac Index, and Cardiac-Respiratory Insufficiency. Such variables will be used to construct a dynamic virtual world of complex and interacting entities that map real cases of surgical planning situations, understood here as the terms that make up the extensions of mathematical logic functions that compete against one another in a rigorous selection regime in which fitness is judged by one criterion alone, their Quality-of-Information. Indeed, one focus is on the development of an Evolutionary Clinical Decision Support System to evaluate patient stability and assist physicians in the decision of performing or postponing surgery, since cataract is the leading cause of blindness in the world.

Nuno Maia, Manuel Mariano, Goreti Marreiros, Henrique Vicente, José Neves
A Methodological Approach Towards Crisis Simulations: Qualifying CI-Enabled Information Systems

Low probability high impact events (LoPHIEs) disrupt organizations’ processes severely. Existing methods used for the anticipation and management of such events suffer from common limitations, resulting in a huge impact on the quantification of probability, uncertainty and risk. Continuing studies in the field of Crisis Informatics present an opportunity for the development of a framework that fits the uncertainty-related properties of LoPHIEs. The paper identifies the need for the development and conduct of a series of experiments aiming to address the factors that qualify Collective Intelligence-enabled Information Systems with respect to their applicability towards support for LoPHIEs, and aims to propose an experiment framework as a methodology for scenario design in LoPHIE settings.

Chrysostomi Maria Diakou, Angelika I. Kokkinaki, Styliani Kleanthous
Multicriteria Transportation Problems with Fuzzy Parameters

In the classical transportation problem, it is assumed that the transportation costs are known constants. In practice, however, transport costs depend on weather, road and technical conditions. The concept of fuzzy numbers is one approach to modeling the uncertainty associated with such factors. There have been a large number of papers in which models of transportation problems with fuzzy parameters have been presented. Just as in classical models, these models are constructed under the assumption that the total transportation costs are minimized. This article proposes two models of a transportation problem where decisions are based on two criteria. According to the first model, the unit transportation costs are fuzzy numbers. Decisions are based on minimizing both the possibilistic expected value and the possibilistic variance of the transportation costs. According to the second model, all of the parameters of the transportation problem are assumed to be fuzzy. The optimization criteria are the minimization of the possibilistic expected values of the total transportation costs and minimization of the total costs related to shortages (in supply or demand). In addition, the article defines the concept of a truncated fuzzy number, together with its possibilistic expected value. Such truncated numbers are used to define how large shortages are. Some illustrative examples are given.

Barbara Gładysz
Backmatter
Metadata
Title
Computational Collective Intelligence
Edited by
Ngoc Thanh Nguyen
Prof. George A. Papadopoulos
Prof. Piotr Jędrzejowicz
Dr. Bogdan Trawiński
Prof. Dr. Gottfried Vossen
Copyright year
2017
Electronic ISBN
978-3-319-67074-4
Print ISBN
978-3-319-67073-7
DOI
https://doi.org/10.1007/978-3-319-67074-4