
2016 | Book | 1st Edition

Transactions on Computational Collective Intelligence XXII

Edited by: Ngoc Thanh Nguyen, Ryszard Kowalczyk

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science

About this book

These transactions publish research in computer-based methods of computational collective intelligence (CCI) and their applications in a wide range of fields such as the semantic Web, social networks, and multi-agent systems. TCCI strives to cover new methodological, theoretical and practical aspects of CCI understood as the form of intelligence that emerges from the collaboration and competition of many individuals (artificial and/or natural). The application of multiple computational intelligence technologies, such as fuzzy systems, evolutionary computation, neural systems, consensus theory, etc., aims to support human and other collective intelligence and to create new forms of CCI in natural and/or artificial systems.

This twenty-second issue contains 11 carefully selected and revised contributions.

Table of Contents

Frontmatter
Pairwise Comparisons Rating Scale Paradox
Abstract
This study demonstrates that when data collected on a rating scale are entered directly into a pairwise comparisons matrix for processing into weights, the input is incorrect: unprocessed rating scale data lead to a paradox. A solution based on normalization is proposed. This is an essential correction for virtually all pairwise comparisons methods that use rating scales. The relative error, currently present in numerous publications, is illustrated and discussed.
W. W. Koczkodaj
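The paradox and the normalization-based fix can be pictured with a small sketch. The exact normalization used in the paper is not reproduced here; the sketch assumes that ratings are rescaled to a narrower range before ratios are formed, and weights are derived by the standard row geometric-mean method.

```python
import numpy as np

def pc_matrix(values):
    """Build a pairwise comparisons matrix of ratios v_i / v_j from per-item values."""
    v = np.asarray(values, dtype=float)
    return v[:, None] / v[None, :]

def geometric_mean_weights(M):
    """Derive a weight vector from a PC matrix by the row geometric-mean method."""
    w = np.prod(M, axis=1) ** (1.0 / M.shape[0])
    return w / w.sum()

# Ratings collected on a 1..5 scale (hypothetical data).
ratings = [1, 3, 5]

# Entering raw ratings directly implies item 3 is 5x as important as item 1 -- the paradox.
raw_weights = geometric_mean_weights(pc_matrix(ratings))

# One possible normalization (an illustrative assumption, not the paper's exact formula):
# map the 1..5 scale onto 1..2 so the implied ratios stay bounded.
normalized = 1 + (np.asarray(ratings, dtype=float) - 1) / (5 - 1) * (2 - 1)
norm_weights = geometric_mean_weights(pc_matrix(normalized))

print(raw_weights, norm_weights)
```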
On Achieving History-Based Move Ordering in Adversarial Board Games Using Adaptive Data Structures
Abstract
This paper concerns the problem of enhancing the well-known alpha-beta search technique for intelligent game playing. It is a well-established principle that the alpha-beta technique benefits greatly, that is to say, achieves more efficient tree pruning, if the moves to be examined are ordered properly. This refers to placing the best moves in such a way that they are searched first. However, if the superior moves were known a priori, there would be no need to search at all. Many move ordering heuristics, such as the Killer Moves technique and the History Heuristic, have been developed in an attempt to address this problem. Formerly unrelated to game playing, the field of Adaptive Data Structures (ADSs) is concerned with the optimization of queries over time within a data structure, and provides techniques to achieve this through dynamic reordering of its internal elements, in response to queries. In earlier works, we had proposed the Threat-ADS heuristic for multi-player games, based on the concept of employing efficient ranking mechanisms provided by ADSs in the context of game playing. Based on its previous success, in this work we propose the concept of using an ADS to order moves themselves, rather than opponents. We call this new technique the History-ADS heuristic. We examine the History-ADS heuristic in both two-player and multi-player environments, and investigate its possible refinements. These involve providing a bound on the size of the ADS, based on the hypothesis that it can retain most of its benefits with a smaller list, and examining the possibility of using a different ADS for each level of the tree. We demonstrate conclusively that the History-ADS heuristic can produce drastic improvements in tree pruning in both two-player and multi-player games, and the majority of its benefits remain even when it is limited to a very small list.
Spencer Polk, B. John Oommen
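The core idea of ordering moves with an adaptive list can be sketched as below. This is not the authors' implementation: the game interface (get_moves, apply, evaluate, is_terminal) and the move-to-front update rule are assumptions used only to illustrate the mechanism.

```python
# Sketch: alpha-beta search whose move ordering is driven by an adaptive list,
# loosely in the spirit of the History-ADS heuristic. The game state interface
# below is hypothetical.

class AdaptiveList:
    """A list that promotes a queried element to the front (move-to-front rule)."""
    def __init__(self):
        self.items = []

    def promote(self, item):
        if item in self.items:
            self.items.remove(item)
        self.items.insert(0, item)

    def rank(self, item):
        # Moves not yet in the list are ranked after all known ones.
        return self.items.index(item) if item in self.items else len(self.items)

def alpha_beta(state, depth, alpha, beta, maximizing, history):
    if depth == 0 or state.is_terminal():
        return state.evaluate()

    # Order moves by their rank in the adaptive list: promoted moves are tried first.
    moves = sorted(state.get_moves(), key=history.rank)

    if maximizing:
        value = float("-inf")
        for move in moves:
            value = max(value, alpha_beta(state.apply(move), depth - 1,
                                          alpha, beta, False, history))
            alpha = max(alpha, value)
            if alpha >= beta:
                history.promote(move)   # this move caused a cutoff: remember it
                break
        return value
    else:
        value = float("inf")
        for move in moves:
            value = min(value, alpha_beta(state.apply(move), depth - 1,
                                          alpha, beta, True, history))
            beta = min(beta, value)
            if alpha >= beta:
                history.promote(move)
                break
        return value
```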
Identification of Possible Attack Attempts Against Web Applications Utilizing Collective Assessment of Suspicious Requests
Abstract
The number of web-based activities and websites is growing every day. Unfortunately, so is cyber-crime. Every day, new vulnerabilities are reported and the number of automated attacks is constantly rising. In this article, a new method for detecting such attacks is proposed, in which cooperating systems analyze incoming requests, identify potential threats and present them to other peers. Each host can then utilize the knowledge and findings of the other peers to identify harmful requests, making the whole system of cooperating servers “remember” and share information about existing threats, effectively “immunizing” it against them.
The method was tested using data from seven different web servers, consisting of over three million recorded requests. The paper also proposes means for maintaining the confidentiality of the exchanged data and analyzes the impact of various parameters, including the number of peers participating in the exchange of data. Samples of identified attacks and the most common attack vectors are also presented.
Marek Zachara
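The cooperative scheme can be sketched in a few lines. The use of a salted SHA-256 fingerprint for confidentiality and the sharing format are illustrative assumptions, not details taken from the paper.

```python
import hashlib

def fingerprint(request_path, salt="shared-secret"):
    """Hash a request so peers can share threat knowledge without exposing raw data."""
    return hashlib.sha256((salt + request_path).encode()).hexdigest()

class Peer:
    def __init__(self, name):
        self.name = name
        self.known_threats = set()      # fingerprints learned locally or from peers

    def observe(self, request_path, looks_suspicious=False):
        fp = fingerprint(request_path)
        if looks_suspicious:
            self.known_threats.add(fp)
        return fp in self.known_threats  # flagged as harmful if already known

    def share_with(self, other):
        other.known_threats |= self.known_threats

a, b = Peer("server-a"), Peer("server-b")
a.observe("/phpmyadmin/setup.php", looks_suspicious=True)
a.share_with(b)
print(b.observe("/phpmyadmin/setup.php"))  # True: b is "immunized" by a's finding
```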
A Grey Approach to Online Social Networks Analysis
Abstract
Facebook is one of the largest social networks nowadays, gathering among its users a whole array of people from all over the world, with diverse backgrounds, cultures, opinions, ages and so on. It is the meeting point for friends (both real and virtual), acquaintances, colleagues, team-mates, class-mates, co-workers, etc. Facebook is also the place where information spreads quickly and where users can easily exchange opinions, feelings, travel information, ideas, etc. But what happens when a user reads the news feed or views his Facebook friends’ photos? Is he thrilled and excited? Does he feel that life is good? Or, on the contrary, does he feel lonely and isolated? Is he comparing himself with his friends? These are some of the questions this paper is trying to answer. To model some of these relationships, grey system theory is used.
Camelia Delcea, Liviu-Adrian Cotfas, Ramona Paun, Virginia Maracine, Emil Scarlat
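Grey system theory offers several building blocks for such an analysis; the sketch below shows one common one, a grey relational grade in the style of Deng's coefficient, computed over hypothetical normalized indicators. The indicators actually used by the authors are not reproduced here.

```python
import numpy as np

def grey_relational_grade(reference, series, rho=0.5):
    """Grey relational grade between a reference sequence and a compared sequence."""
    reference = np.asarray(reference, dtype=float)
    series = np.asarray(series, dtype=float)
    diff = np.abs(reference - series)
    d_min, d_max = diff.min(), diff.max()
    coeff = (d_min + rho * d_max) / (diff + rho * d_max)
    return coeff.mean()

# Hypothetical normalized indicators (e.g., time on feed, posts read, mood score).
reference = [1.0, 0.9, 0.8, 1.0]
user_a    = [0.9, 0.85, 0.6, 1.0]
print(grey_relational_grade(reference, user_a))
```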
ReproTizer: A Fully Implemented Software Requirements Prioritization Tool
Abstract
Before software is developed, requirements are elicited. These requirements could be over-blown or under-estimated in a way that makes meeting the expectations of stakeholders a challenge. To develop software that precisely meets the expectations of stakeholders, elicited requirements need to be prioritized. When requirements are prioritized, contract breaches such as budget overshoot, exceeding delivery time and missing out important requirements during implementation can be avoided. A number of techniques have been developed, but they do not address some of the crucial issues associated with real-time prioritization of software requirements, such as computational complexity and high time consumption, inaccurate rank results, inability to deal with uncertainties or missing weights of requirements, scalability problems and rank update issues. To address these problems, a tool known as ReproTizer (Requirements Prioritizer) is proposed to enable real-time prioritization of software requirements. ReproTizer consists of a WS (Weight Scale) which gives project stakeholders the ability to perceive the influence that different requirement weights may have on the final results. The WS combines single relative weight decision matrices, used to determine the weight vectors of requirements, with an aggregation operator (AO) which computes the global weights of requirements. The tool was tested for scalability, computational complexity, accuracy, time consumption and rank updates. Results of the performance evaluation showed that the tool is highly reliable (98.89 % accuracy), scalable (prioritized over 1000 requirements), fast (total prioritization time ranging from 500 to 29,804 milliseconds (ms)) and able to automatically update ranks whenever changes occur. Requirements prioritization, a multi-criteria decision making task, is therefore an integral aspect of the requirements engineering phase of the development life cycle. It is used for software release planning and leads to the development of software systems based on the preferential requirements of stakeholders.
Philip Achimugu, Ali Selamat, Roliana Ibrahim
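The weight-scale idea — per-stakeholder relative weights combined by an aggregation operator into global requirement weights — can be sketched roughly as below. The weighted arithmetic mean used as the aggregation operator and all numbers are assumptions for illustration, not necessarily the operator or data used in ReproTizer.

```python
import numpy as np

# Rows: stakeholders, columns: requirements. Each row holds one stakeholder's
# relative weights for the requirements (hypothetical data).
stakeholder_weights = np.array([
    [0.5, 0.3, 0.2],
    [0.4, 0.4, 0.2],
    [0.6, 0.2, 0.2],
])

# Relative importance of each stakeholder (also hypothetical).
stakeholder_importance = np.array([0.5, 0.3, 0.2])

# Aggregation operator: here a weighted arithmetic mean over stakeholders.
global_weights = stakeholder_importance @ stakeholder_weights
ranking = np.argsort(-global_weights)          # requirement indices, best first
print(global_weights, ranking)
```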
A Consensus-Based Method for Solving Concept-Level Conflict in Ontology Integration
Abstract
Ontology reuse has played an important role in developing shared knowledge on the Semantic Web, as it makes knowledge sharing between ontology-based intelligent systems easier. Meanwhile, we still face the challenging task of resolving potential conflicts in ontology integration at the syntactic and semantic levels. When considering knowledge conflicts during the integration process, we try to find the meaning of the conflicting knowledge, that is, a consensus among the conflicts in the integrated ontologies. This paper presents a novel method for finding the consensus in ontology integration at the concept level. Our approach is based on consensus theory and distance functions between attribute values.
Trung Van Nguyen, Hanh Huu Hoang
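A minimal illustration of the consensus idea at the concept level: given conflicting attribute values proposed by several ontologies, choose the value that minimizes the total distance to all proposals. The distance function and the example data are assumptions, not the paper's definitions.

```python
# Sketch: pick a consensus attribute value as the proposal minimizing the sum of
# distances to all conflicting values. The numeric distance is illustrative.

def consensus(values, distance=lambda a, b: abs(a - b)):
    return min(values, key=lambda cand: sum(distance(cand, v) for v in values))

# Three ontologies disagree on the value of the attribute "maxSpeed" for a concept.
proposals = [120, 130, 125]
print(consensus(proposals))   # 125 minimizes the total distance to all proposals
```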
Enhancing Collaborative Filtering Using Implicit Relations in Data
Abstract
This work presents a Recommender System (RS) that relies on distributed recommendation techniques and implicit relations in data. In order to simplify the experience of users, recommender systems pre-select and filter information in which users may be interested. Users express their interest in items by giving their opinion (explicit data) and by navigating through the web page (implicit data). The Matrix Factorization (MF) recommendation technique analyzes this feedback, but it does not take more heterogeneous data into account. In order to improve recommendations, the description of items can be used to increase the relations among data. Our proposal extends MF techniques by adding implicit relations in an independent layer. Indeed, using past preferences, we analyze in depth the implicit interest of users in the attributes of items. Using this, we transform ratings and predictions into “semantic values”, where the term semantic indicates the expansion of the meaning of ratings. The experimentation phase uses the MovieLens and IMDb databases. We compare our work against a simple Matrix Factorization technique. Results show accurate personalized recommendations. Last but not least, both the recommendation analysis and the semantic analysis can be parallelized, reducing processing time on large amounts of data.
Manuel Pozo, Raja Chiky, Elisabeth Métais
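A compact sketch of the baseline matrix factorization that the authors extend, trained by stochastic gradient descent on explicit ratings. The paper's additional "semantic" layer over item attributes is only indicated by a comment, since its exact form is specific to the work; all data here are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0)]  # (user, item, rating)
n_users, n_items, k = 3, 2, 4

P = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
Q = rng.normal(scale=0.1, size=(n_items, k))   # item latent factors

lr, reg = 0.01, 0.05
for _ in range(200):
    for u, i, r in ratings:
        err = r - P[u] @ Q[i]
        pu, qi = P[u].copy(), Q[i].copy()
        P[u] += lr * (err * qi - reg * pu)
        Q[i] += lr * (err * pu - reg * qi)

# The paper's implicit-relation layer would adjust these predictions using the
# user's inferred interest in item attributes (genres, actors, ...); not shown here.
print(P[0] @ Q[1])   # predicted rating of user 0 for item 1
```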
Semantic Web-Based Social Media Analysis
Abstract
With the growing usage of microblogging services such as Twitter, millions of users share opinions daily on virtually everything. Making sense of this huge amount of data using sentiment and emotion analysis can provide invaluable benefits to organizations trying to better understand what the public thinks about their services and products. While the vast majority of current research focuses solely on improving the algorithms used for sentiment and emotion evaluation, the present work underlines the benefits of using a semantic-based approach for modeling the analysis results, the emotions and the social media specific concepts. By storing the results as structured data, the possibilities offered by semantic web technologies, such as inference and access to the vast knowledge in Linked Open Data, can be fully exploited. The paper also presents a novel semantic social media analysis platform, which is able to properly emphasize users’ complex feelings such as happiness, affection, surprise, anger or sadness.
Liviu-Adrian Cotfas, Camelia Delcea, Antonin Segault, Ioan Roxin
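Storing the analysis results as structured data could look roughly like the sketch below, which uses rdflib. The vocabulary (the EX namespace and its property names) is invented for illustration and is not the platform's actual schema.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef

# Sketch: record a tweet's detected emotion as RDF so it can later be queried or
# linked against Linked Open Data. The EX vocabulary is purely illustrative.
EX = Namespace("http://example.org/sma#")

g = Graph()
tweet = URIRef("http://example.org/tweet/42")
g.add((tweet, RDF.type, EX.Post))
g.add((tweet, EX.text, Literal("Loved the concert last night!")))
g.add((tweet, EX.detectedEmotion, EX.Happiness))
g.add((tweet, EX.sentimentScore, Literal(0.92)))

print(g.serialize(format="turtle"))
```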
Web Projects Evaluation Using the Method of Significant Website Assessment Criteria Detection
Abstract
The research presented in the article examines the applicability of feature selection methods to the task of selecting website assessment criteria to which weights are assigned. The applicability of the chosen methods was examined against the approach in which the weights of website assessment criteria are defined by users. The research presents a selection procedure for significant choice criteria and reveals undisclosed user preferences based on website quality assessment models. Results concerning undisclosed preferences were verified through a comparison with those declared by website users.
Paweł Ziemba, Jarosław Jankowski, Jarosław Wątróbski, Mateusz Piwowarski
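Feature selection over assessment criteria can be illustrated with a standard mutual-information ranking, as below; the criteria scores and overall ratings are fabricated for illustration, and the specific selection methods compared in the paper are not reproduced.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

# Rows: evaluated websites; columns: assessment criteria (hypothetical scores).
criteria = np.array([
    [4, 2, 5, 1],
    [3, 3, 4, 2],
    [5, 1, 5, 1],
    [2, 4, 2, 3],
    [4, 2, 4, 2],
], dtype=float)
overall_quality = np.array([4.5, 3.5, 5.0, 2.0, 4.0])  # users' overall ratings

# Rank criteria by mutual information with the overall assessment.
mi = mutual_info_regression(criteria, overall_quality, random_state=0)
print(sorted(range(criteria.shape[1]), key=lambda j: -mi[j]))
```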
Dynamic Database by Inconsistency and Morphogenetic Computing
Abstract
Since Peter Chen published the article on Entity-Relationship Modeling in 1976, the Entity-Relationship database has been a hot spot for research. With the advent of big data, it appears that the Entity-Relationship database is being substituted by an attribute-and-map structure: in big data we have no evidence of the relationships, only of attributes and maps. In this paper we give an attribute representation of the relationship. In fact, we assume that any entity can be in two different attributes (states) with two different values. One is the attribute that sends a message, which we denote as e1, and the other is the attribute that receives the message, which we denote as e2. The values of the attributes are the names of the entities. A relationship is a superposition a·e1 + b·e2 of the two states. When we change the values of the states we change the database. When we change the two states in the same way we have an isomorphism among databases, and when we change the two states in different ways we have an isomorphism with distortion (homotopic transformation). Given a set of independent databases, we can generate (compute) all the other databases in a dynamical way. In this way we can reduce the set of databases that we must store. Because we are interested in the generation of the form (morphology) of databases, we denote this new model of computation as morphogenetic computing.
Xiaolin Xu, Germano Resconi, Guanglin Xu
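One way to picture the superposition a·e1 + b·e2 is as a two-component vector per relationship: applying the same linear change to both components yields an isomorphic database, while different changes yield "isomorphism with distortion". The encoding below is only an illustrative reading of the abstract, not the authors' formalism.

```python
import numpy as np

# A relationship as a superposition a*e1 + b*e2 of a "send" state e1 and a
# "receive" state e2, encoded here simply as the coefficient pair (a, b).
relationship = np.array([0.7, 0.3])            # coefficients a, b

same_change = np.array([[2.0, 0.0],
                        [0.0, 2.0]])           # same change to both states -> isomorphism
different_change = np.array([[2.0, 0.0],
                             [0.0, 0.5]])      # different changes -> distorted isomorphism

print(same_change @ relationship)
print(different_change @ relationship)
```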
A Method for Size and Shape Estimation in Visual Inspection for Grain Quality Control in the Rice Identification Collaborative Environment Multi-agent System
Abstract
Computer vision methods have so far been applied in almost every area of our lives. They are used in the medical sciences, natural sciences, engineering, etc. Computer vision methods have already been used in studies searching for links between the quality of food raw materials and their external characteristics (e.g. color, size, texture). Such work is also conducted for cereals. For the analysis results to meet the expectations of the system’s users, the analysis should include not only the attributes describing the controlled products, materials or raw materials, but should also indicate the type of material or the species/variety of the raw material. However, existing solutions are very often implemented as closed-source software (black boxes), so the user has no possibility to customize them (for example, an enterprise cannot integrate these solutions into its management information system). The high cost of automated visual inspection systems is also a major problem for enterprises. The aim of this paper is to develop a method for estimating the size and shape of rice grains using visual quality analysis, implemented in the multi-agent system named Rice Identification Collaborative Environment. Using this method will allow statistical analysis of the characteristics of the sample and will be one of the factors leading to the identification of species/varieties of cereals and to determining the percentage of grains that do not meet quality standards. The method will be implemented as open-source software in Java; consequently it can be easily integrated into an enterprise’s management information system. Because it will be available for free, the cost of automated visual inspection systems will be reduced significantly. This paper is organized as follows: the first part briefly presents the state of the art in the considered field; next, the developed method for size and shape estimation implemented in the Rice Identification Collaborative Environment is characterized; the results of a research experiment verifying the developed method are presented in the last part of the paper.
Marcin Hernes, Marcin Maleszka, Ngoc Thanh Nguyen, Andrzej Bytniewski
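Size and shape estimation from an image typically comes down to segmenting the grains and measuring contour geometry. The OpenCV sketch below shows one conventional way to do that; it is not the method implemented in the Java-based Rice Identification Collaborative Environment, and the thresholds and input file name are illustrative assumptions.

```python
import cv2

# Sketch: estimate per-grain size and elongation from a photo of rice grains on a
# dark background, using standard OpenCV operations.

image = cv2.imread("rice_sample.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)
    if area < 50:                      # skip small noise blobs
        continue
    (_, _), (w, h), _ = cv2.minAreaRect(c)
    elongation = max(w, h) / max(min(w, h), 1e-6)
    print(f"area={area:.0f}px  length={max(w, h):.1f}px  elongation={elongation:.2f}")
```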
Backmatter
Metadata
Title
Transactions on Computational Collective Intelligence XXII
Edited by
Ngoc Thanh Nguyen
Ryszard Kowalczyk
Copyright Year
2016
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-49619-0
Print ISBN
978-3-662-49618-3
DOI
https://doi.org/10.1007/978-3-662-49619-0