2014 | Book

Transactions on Rough Sets XVII

Edited by: James F. Peters, Andrzej Skowron

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science

About this book

The LNCS journal Transactions on Rough Sets is devoted to the entire spectrum of issues related to rough sets, from logical and mathematical foundations, through all aspects of rough set theory and its applications, such as data mining, knowledge discovery and intelligent information processing, to relations between rough sets and other approaches to uncertainty, vagueness, and incompleteness, such as fuzzy sets and the theory of evidence. Volume XVII continues a number of research streams that grew out of the seminal work of Zdzislaw Pawlak during the first decade of the 21st century. The research streams represented in the papers cover both theory and applications of rough, fuzzy and near sets, as well as their combinations.

Table of contents

Frontmatter
Three-Valued Logics, Uncertainty Management and Rough Sets
Abstract
This paper is a survey of the connections between three-valued logics and rough sets from the point of view of incomplete information management. Based on the fact that many three-valued logics can be put under a single algebraic umbrella, we show how to translate three-valued conjunctions and implications into operations on ill-known sets such as rough sets. We then show that while such translations may provide mathematically elegant algebraic settings for rough sets, the interpretability of these connectives in terms of an original set approximated via an equivalence relation is very limited, thus casting doubt on the practical relevance of truth-functional logical renderings of rough sets.
Davide Ciucci, Didier Dubois
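The paper's starting point, a set approximated via an equivalence relation, is Pawlak's classical construction. As a minimal sketch in standard notation (the paper's own notation may differ): the lower and upper approximations of a set X ⊆ U under an equivalence relation R, and the three-valued membership status they induce, are

```latex
% Pawlak approximations of X under an equivalence relation R
% (standard definitions, not taken verbatim from the paper),
% and the induced three-valued membership status.
\[
  \underline{R}X = \{\, x \in U : [x]_R \subseteq X \,\}, \qquad
  \overline{R}X  = \{\, x \in U : [x]_R \cap X \neq \emptyset \,\}
\]
\[
  \mu_X(x) =
  \begin{cases}
    \text{true}    & \text{if } x \in \underline{R}X,\\
    \text{unknown} & \text{if } x \in \overline{R}X \setminus \underline{R}X,\\
    \text{false}   & \text{if } x \notin \overline{R}X.
  \end{cases}
\]
```

The "unknown" status of boundary elements is precisely the third truth value whose truth-functional treatment the paper examines.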
Standard Errors of Indices in Rough Set Data Analysis
Abstract
We study the sample variation of indices for the approximation of sets in rough set data analysis. We consider the γ and α indices, as well as the lower and upper bound approximations of decision classes. We derive confidence bounds for these indices as well as a two-group comparison procedure. Finally, we present procedures to compare the approximation quality of two sets within one sample.
Günther Gediga, Ivo Düntsch
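For readers unfamiliar with the indices named above: γ (quality of approximation) and α (accuracy of approximation) are standard rough set indices. Assuming the usual conventions (it is an assumption that the paper follows them exactly), they read as follows:

```latex
% Standard rough-set indices (common conventions assumed). X is a set
% approximated under the indiscernibility relation R; X_1, ..., X_k are
% the decision classes partitioning the universe U.
\[
  \alpha_R(X) = \frac{\lvert \underline{R}X \rvert}{\lvert \overline{R}X \rvert},
  \qquad
  \gamma_R = \frac{\sum_{i=1}^{k} \lvert \underline{R}X_i \rvert}{\lvert U \rvert}
\]
```

Both are sample proportions, which is what makes the derivation of standard errors and confidence bounds possible.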
Proximity System: A Description-Based System for Quantifying the Nearness or Apartness of Visual Rough Sets
Abstract
This article introduces the Proximity System, an application developed to demonstrate description-based approaches to nearness and proximity within the context of digital image analysis. Specifically, the system implements the description-based intersection, complement, and difference operations defined on sets of pixels representing regions of interest. These sets of pixels can be considered visual rough sets, since the results of the description-based operators are always defined with respect to a set of probe functions, which induce a partition of the objects (pixels) being considered. The contribution of this article is an overview of the Proximity System, its use of visual rough sets as description-based operands, its ability to quantify the nearness or apartness of visual rough sets, and a practical application to the problem of human visual search.
Christopher J. Henry, Garrett Smith
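The description-based operations named in the abstract admit a compact sketch. The Python fragment below is a hypothetical illustration assuming the usual near-set definitions; it is not the Proximity System's actual code. A region of interest is modeled as a set of pixels, and a description is the tuple of probe-function values:

```python
# A hypothetical sketch of description-based set operations on pixel
# sets, assuming the usual near-set definitions; not the Proximity
# System's actual implementation.

def description(pixel, probes):
    """Feature vector of a pixel under a list of probe functions."""
    return tuple(phi(pixel) for phi in probes)

def descriptive_intersection(A, B, probes):
    """Pixels of A or B whose descriptions occur in both regions."""
    desc_A = {description(p, probes) for p in A}
    desc_B = {description(p, probes) for p in B}
    shared = desc_A & desc_B
    return {p for p in A | B if description(p, probes) in shared}

def descriptive_difference(A, B, probes):
    """Pixels of A whose descriptions never occur in B."""
    desc_B = {description(p, probes) for p in B}
    return {p for p in A if description(p, probes) not in desc_B}

# Example with a single probe function: grayscale intensity.
# A pixel is modeled as (row, col, gray_value).
intensity = lambda px: px[2]
A = {(0, 0, 10), (0, 1, 20)}
B = {(5, 5, 20), (5, 6, 30)}
print(descriptive_intersection(A, B, [intensity]))
# -> {(0, 1, 20), (5, 5, 20)}: the regions share a description,
#    so they are descriptively near
```

A non-empty descriptive intersection is one natural way to quantify nearness of two visual rough sets, even when the regions share no pixels.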
Rough Sets and Matroids
Abstract
We prove a recent result of Liu and Zhu [1] and discuss some consequences of this and related facts for the development of rough set theory.
Victor W. Marek, Andrzej Skowron
An Efficient Approach for Fuzzy Decision Reduct Computation
Abstract
Fuzzy rough sets are an extension of classical rough sets for feature selection in hybrid decision systems. However, reduct computation using the fuzzy rough set model is computationally expensive. A modified quick reduct algorithm (MQRA) was proposed in the literature for computing fuzzy decision reducts using the Radzikowska-Kerry fuzzy rough set model. In this paper, we develop a simplified computational model for discovering the positive region in the Radzikowska-Kerry fuzzy rough set model. We develop theory validating that objects in the absolute positive region can be omitted without affecting subsequent inferences. Incorporating this theory into MQRA yields the Improved MQRA (IMQRA) algorithm. The computations involved in IMQRA are modeled as vector operations to obtain further optimizations at the implementation level. The effectiveness of the algorithms is demonstrated empirically through a comparative analysis with several existing reduct computation approaches for hybrid decision systems based on fuzzy rough sets.
P. S. V. S. Sai Prasad, C. Raghavendra Rao
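The quick reduct family of algorithms that MQRA and IMQRA refine shares a simple greedy skeleton, sketched below. The `dependency` argument is a hypothetical caller-supplied function returning the fuzzy positive-region dependency degree γ of an attribute subset; the paper's Radzikowska-Kerry positive-region computation and vectorized pruning are not reproduced here.

```python
# A sketch of the generic quick-reduct loop that MQRA/IMQRA refine.
# `dependency` is a hypothetical function mapping an attribute subset
# to its fuzzy positive-region dependency degree gamma.

def quick_reduct(attributes, dependency):
    """Greedily add the attribute with the largest dependency gain
    until the subset reaches the dependency of the full set."""
    attributes = set(attributes)
    reduct = set()
    current = dependency(reduct)
    target = dependency(attributes)
    while current < target:
        # pick the attribute whose addition maximizes gamma
        best = max(attributes - reduct,
                   key=lambda a: dependency(reduct | {a}))
        gain = dependency(reduct | {best})
        if gain <= current:      # no attribute improves gamma; stop
            break
        reduct.add(best)
        current = gain
    return reduct
```

Since `dependency` is evaluated for every candidate attribute in every iteration, it dominates the running time, which is why the paper's cheaper positive-region model and object-omission theory translate directly into speedups.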
Rough Sets in Economy and Finance
Abstract
Rough set theory makes it possible to represent and infer knowledge from incomplete or noisy data. It has attracted much attention from the research community, and applications have been found in a wide range of disciplines where knowledge discovery and data mining are indispensable. This paper provides a detailed review of the currently available literature covering applications of rough sets in economy and finance. We describe the classical rough set model and its important extensions applied to economic and financial problems in the crucial areas of risk management (business failure, credit scoring), financial market prediction, valuation, and portfolio management, showing that rough set theory is an interesting and increasingly popular method, employed alongside traditional statistical methods, neural networks, and genetic algorithms, to support the resolution of the most difficult problems in economy and finance.
Mariusz Podsiadło, Henryk Rybiński
Algorithms for Similarity Relation Learning from High Dimensional Data
Abstract
The notion of similarity plays an important role in machine learning and artificial intelligence. It is widely used in tasks related to supervised classification, clustering, outlier detection, and planning. Moreover, in domains such as information retrieval or case-based reasoning, the concept of similarity is essential, as it is used at every phase of the reasoning cycle. Similarity itself, however, is a very complex concept that eludes formal definition. The similarity of two objects can differ depending on the context considered. In many practical situations it is difficult even to evaluate the quality of similarity assessments without considering the task for which they were performed. For this reason, similarity should be learned from data, specifically for the task at hand. This paper presents research on the problem of similarity learning, carried out as part of the author's PhD dissertation. It describes a similarity model, called Rule-Based Similarity, and presents algorithms for constructing this model from available data. The model utilizes notions from rough set theory to derive a similarity function that makes it possible to approximate the similarity relation in a given context. It is largely inspired by Tversky's feature contrast model and has several analogous properties. In the paper, these theoretical properties are described and discussed. Moreover, the paper presents the results of experiments on real-life data sets, in which the quality of the proposed model is thoroughly evaluated and compared with state-of-the-art algorithms.
Andrzej Janusz
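Tversky's feature contrast model, which the abstract names as the main inspiration for Rule-Based Similarity, is commonly stated as follows (standard formulation; how the dissertation instantiates the feature sets via rough set notions is described in the paper itself):

```latex
% Tversky's feature contrast model (standard formulation). A and B are
% the feature sets of objects a and b, f is a non-negative salience
% measure (often cardinality), and theta, alpha, beta >= 0.
\[
  S(a, b) = \theta\, f(A \cap B)
          - \alpha\, f(A \setminus B)
          - \beta\,  f(B \setminus A)
\]
```

Shared features raise the similarity score while distinctive features of either object lower it, an asymmetry (for α ≠ β) that set-theoretic distance measures cannot express.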
Backmatter
Metadata
Title
Transactions on Rough Sets XVII
Edited by
James F. Peters
Andrzej Skowron
Copyright year
2014
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-54756-0
Print ISBN
978-3-642-54755-3
DOI
https://doi.org/10.1007/978-3-642-54756-0
