2015 | Book

Transactions on Rough Sets XIX

Editors: James F. Peters, Andrzej Skowron, Dominik Ślęzak, Hung Son Nguyen, Jan G. Bazan

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

The LNCS journal Transactions on Rough Sets is devoted to the entire spectrum of rough set related issues: from logical and mathematical foundations, through all aspects of rough set theory and its applications, such as data mining, knowledge discovery, and intelligent information processing, to relations between rough sets and other approaches to uncertainty, vagueness, and incompleteness, such as fuzzy sets and the theory of evidence.

Volume XIX in the series focuses on current trends and advances in both the foundations and the practical applications of rough sets. It contains 7 extended and revised papers originally presented at the Workshop on Rough Set Applications, RSA 2012, held in Wrocław, Poland, in September 2012. In addition, the book features 3 contributions in the category of short surveys and monographs on the topic.

Table of Contents

Frontmatter
A Uniform Framework for Rough Approximations Based on Generalized Quantifiers
Abstract
Rough set theory provides an effective tool for decision analysis by extracting decision rules from information systems. The rule induction process is based on the definitions of the lower and upper approximations of a decision class. The condition attributes of the information system induce an indiscernibility relation on the universe of objects. An object is in the lower approximation of a decision class if all objects indiscernible from it are in the class, and in the upper approximation if some object indiscernible from it is in the class. Various generalizations of rough set theory have been proposed to enhance the capability of the theory. For example, variable precision rough set theory improves the robustness of rough set analysis, and the fuzzy rough set approach deals with vague information. In this paper, we present a uniform framework for different variants of rough set theory by using generalized quantifiers. In the framework, the lower and upper approximations of classical rough set theory are defined with universal and existential quantifiers respectively, whereas variable precision rough approximations correspond to probability quantifiers. Moreover, fuzzy rough set approximations can be defined by using different fuzzy quantifiers. We show that the framework can enhance the expressive power of the decision rules induced by rough set-based decision analysis.
Tuan-Fang Fan, Churn-Jung Liau, Duen-Ren Liu
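The quantifier view of the approximations can be made concrete in a few lines. Below is a minimal sketch (illustrative data and function names, not the authors' code) in which the classical lower and upper approximations are the universal and existential quantifiers over indiscernibility classes, and a variable-precision lower approximation replaces the universal quantifier with a probability threshold:

```python
from collections import defaultdict

def indiscernibility_classes(objects, attributes):
    """Group objects by their values on the condition attributes."""
    classes = defaultdict(set)
    for obj_id, row in objects.items():
        key = tuple(row[a] for a in attributes)
        classes[key].add(obj_id)
    return list(classes.values())

def lower_upper(classes, decision_class):
    """Classical approximations: universal vs. existential quantifier."""
    lower, upper = set(), set()
    for cls in classes:
        if cls <= decision_class:       # ALL indiscernible objects inside
            lower |= cls
        if cls & decision_class:        # SOME indiscernible object inside
            upper |= cls
    return lower, upper

def vprs_lower(classes, decision_class, beta=0.8):
    """Variable-precision lower approximation: a probability quantifier
    'at least a fraction beta' replaces the universal quantifier."""
    lower = set()
    for cls in classes:
        if len(cls & decision_class) / len(cls) >= beta:
            lower |= cls
    return lower

# toy information system: object id -> attribute values
data = {
    1: {"a": 0, "b": 1, "d": "yes"},
    2: {"a": 0, "b": 1, "d": "no"},
    3: {"a": 1, "b": 0, "d": "yes"},
    4: {"a": 1, "b": 0, "d": "yes"},
}
classes = indiscernibility_classes(data, ["a", "b"])
yes = {o for o, r in data.items() if r["d"] == "yes"}
print(lower_upper(classes, yes))           # ({3, 4}, {1, 2, 3, 4})
print(vprs_lower(classes, yes, beta=0.5))  # {1, 2, 3, 4}
```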
PRE and Variable Precision Models in Rough Set Data Analysis
Abstract
We present a parameter-free and monotonic alternative to the parametric variable precision model of rough set data analysis. The proposed model is based on the well-known PRE index \(\lambda\) of Goodman and Kruskal. Using a weighted \(\lambda\) model it is possible to define a two-dimensional space based on (rough) sensitivity and (rough) specificity, for which the monotonicity of sensitivity in a chain of sets is a nice feature of the model. As specificity is often monotone as well, the results of a rough set analysis can be displayed like a receiver operating characteristic (ROC) curve in statistics. Another aspect concerns the precision of the prediction of categories, normally measured by an index \(\alpha\) in classical rough set data analysis. We offer a statistical theory for \(\alpha\) and a modification of \(\alpha\) which fits the needs of our proposed model. Furthermore, we show how expert knowledge can be integrated without losing the monotonicity of the index. Based on a weighted \(\lambda\), we present a polynomial algorithm to determine an approximately optimal set of predicting attributes. Finally, we exhibit a connection to Bayesian analysis. We present several simulation studies for the presented concepts. The current paper is an extended version of [1].
Ivo Düntsch, Günther Gediga
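For readers unfamiliar with the PRE index, the following sketch computes Goodman and Kruskal's \(\lambda\) for a toy table: the proportional reduction in prediction error for the decision once the condition class is known. (The paper's weighted variant and the sensitivity/specificity space are not reproduced here.)

```python
from collections import Counter, defaultdict

def goodman_kruskal_lambda(pairs):
    """PRE index lambda for (condition_class, decision) tuples:
    how much knowing the condition class reduces the error of
    guessing the decision, relative to always guessing the mode."""
    n = len(pairs)
    decision_totals = Counter(d for _, d in pairs)
    baseline_error = n - max(decision_totals.values())  # guess modal decision
    by_condition = defaultdict(Counter)
    for c, d in pairs:
        by_condition[c][d] += 1
    conditional_error = n - sum(max(cnt.values())
                                for cnt in by_condition.values())
    if baseline_error == 0:
        return 0.0                                      # nothing left to explain
    return (baseline_error - conditional_error) / baseline_error

pairs = [("x1", "yes"), ("x1", "yes"), ("x2", "no"),
         ("x2", "no"), ("x2", "yes"), ("x3", "no")]
print(goodman_kruskal_lambda(pairs))  # 0.666...
```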
Three Approaches to Deal with Tests for Inconsistent Decision Tables – Comparative Study
Abstract
We present three approaches to dealing with tests (super-reducts) for inconsistent decision tables. In such tables, there are groups of rows with equal values of the conditional attributes but different decisions (values of the decision attribute). Instead of a group of equal rows, we consider one row given by the values of the conditional attributes and attach to this row: (i) the set of all decisions for rows from the group (the many-valued decisions approach); (ii) the most common decision for rows from the group (the most common decision approach); or (iii) a unique code of the set of all decisions for rows from the group (the generalized decision approach). For the many-valued decisions approach, we consider the problem of finding an arbitrary decision from the set of decisions. For the most common decision approach, we consider the problem of finding the most common decision from the set of decisions. For the generalized decision approach, we consider the problem of finding all decisions from the set of decisions. We present experimental results on the cardinality of tests and a comparative study of the considered approaches.
Mohammad Azad, Igor Chikalov, Mikhail Moshkov, Beata Zielosko
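A small sketch of the three attachments (toy data and invented helper names; the paper's experimental pipeline is not reproduced):

```python
from collections import Counter, defaultdict

def compress_inconsistent_table(rows):
    """Collapse each group of rows with equal condition values into one
    row, attaching (i) the set of decisions (many-valued), (ii) the most
    common decision, and (iii) a code of the decision set (generalized)."""
    groups = defaultdict(list)
    for conditions, decision in rows:
        groups[conditions].append(decision)
    result = {}
    for conditions, decisions in groups.items():
        many_valued = frozenset(decisions)
        most_common = Counter(decisions).most_common(1)[0][0]
        generalized = tuple(sorted(many_valued))  # unique code of the set
        result[conditions] = (many_valued, most_common, generalized)
    return result

rows = [((0, 1), "a"), ((0, 1), "b"), ((0, 1), "a"), ((1, 0), "c")]
for cond, attached in compress_inconsistent_table(rows).items():
    print(cond, attached)
# (0, 1) (frozenset({'a', 'b'}), 'a', ('a', 'b'))
# (1, 0) (frozenset({'c'}), 'c', ('c',))
```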
Searching for Reductive Attributes in Decision Tables
Abstract
Most decision support systems based on rough set theory involve the minimal reduct calculation problem, which is NP-hard. This paper investigates the problem of searching for the set of useful attributes, i.e., those that occur in at least one reduct. By complementation, this problem is equivalent to searching for the set of redundant attributes, i.e., the attributes that do not occur in any reduct of the given decision table. We show that the considered problem is equivalent to a Sperner system problem known from relational database systems and prove that it can be solved in polynomial time. On the basis of these theoretical results, we also propose two different algorithms for the elimination of redundant attributes from decision tables.
Long Giang Nguyen, Hung Son Nguyen
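The paper's polynomial-time algorithm is not reproduced here. As a baseline for intuition only, the following exponential brute-force sketch (illustrative names and data) enumerates reducts directly and collects the attributes that occur in at least one of them; the complement is the redundant set:

```python
from itertools import combinations

def preserves_discernibility(rows, attrs):
    """True if the attribute subset discerns every pair of rows with
    different decisions (a consistent decision table is assumed)."""
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            (ci, di), (cj, dj) = rows[i], rows[j]
            if di != dj and all(ci[a] == cj[a] for a in attrs):
                return False
    return True

def all_reducts(rows, n_attrs):
    """Brute-force reduct enumeration: minimal discernibility-preserving
    subsets, found in order of increasing size (exponential; baseline)."""
    reducts = []
    for k in range(1, n_attrs + 1):
        for subset in combinations(range(n_attrs), k):
            if preserves_discernibility(rows, subset) and not any(
                    set(r) <= set(subset) for r in reducts):
                reducts.append(subset)
    return reducts

def reductive_attributes(rows, n_attrs):
    """Attributes occurring in at least one reduct."""
    return set().union(*map(set, all_reducts(rows, n_attrs)))

rows = [((0, 0, 1), "yes"), ((0, 1, 1), "no"), ((1, 0, 0), "yes")]
print(reductive_attributes(rows, 3))  # {1}: attributes 0 and 2 are redundant
```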
Sequential Optimization of \(\gamma\)-Decision Rules Relative to Length, Coverage and Number of Misclassifications
Abstract
The paper is devoted to the study of an extension of the dynamic programming approach which allows sequential optimization of approximate decision rules relative to length, coverage and number of misclassifications. The presented algorithm constructs a directed acyclic graph \(\varDelta_{\gamma}(T)\) whose nodes are subtables of the decision table \(T\). Based on the graph \(\varDelta_{\gamma}(T)\) we can describe all irredundant \(\gamma\)-decision rules with the minimum length, then among these rules describe all rules with the maximum coverage, and among those describe all rules with the minimum number of misclassifications. We can also change the set of cost functions and the order of optimization. Sequential optimization can be considered as a tool that helps to construct rules that are simpler for experts to understand and interpret.
Beata Zielosko
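The DAG construction itself is involved, but the sequential character of the optimization is easy to illustrate. The sketch below filters an explicit list of candidate rules by minimum length, then maximum coverage, then minimum misclassifications; this is a stand-in for the graph-based procedure, and the rule tuples are invented for illustration:

```python
def sequential_optimize(rules):
    """Sequentially narrow a rule set: keep minimum-length rules, among
    them maximum-coverage rules, among those minimum-misclassification
    rules. Tuples: (length, coverage, misclassifications, description)."""
    best = min(r[0] for r in rules)
    rules = [r for r in rules if r[0] == best]   # minimize length
    best = max(r[1] for r in rules)
    rules = [r for r in rules if r[1] == best]   # then maximize coverage
    best = min(r[2] for r in rules)
    return [r for r in rules if r[2] == best]    # then minimize errors

rules = [(2, 10, 1, "a=0 & b=1 -> yes"),
         (2, 14, 2, "a=0 & c=1 -> yes"),
         (3, 20, 0, "a=0 & b=1 & c=1 -> yes")]
print(sequential_optimize(rules))  # [(2, 14, 2, 'a=0 & c=1 -> yes')]
```

Reordering the three filtering steps corresponds to changing the order of optimization mentioned in the abstract.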
Toward Qualitative Assessment of Rough Sets in Terms of Decision Attribute Values in Simple Decision Systems over Ontological Graphs
Abstract
Approximation of sets is a fundamental notion of rough set theory (RST) proposed by Z. Pawlak. Each rough set can be characterized numerically by a coefficient called the accuracy of approximation, which quantifies the degree of roughness. Such an approach does not take into consideration the semantics of data. In the paper, we show that adding information on semantic relations between decision attribute values, in the form of ontological graphs, enables us to assess the accuracy of approximation qualitatively. The qualitative assessment of approximation should be treated as an additional characteristic of rough sets. The proposed approach enriches the application of rough sets when the decision attribute values classifying objects are symbolic (e.g., words, terms, linguistic concepts, etc.). The presented approach refers to a general trend in computation proposed by L. Zadeh and called "computing with words".
Krzysztof Pancerz
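For reference, the numeric coefficient that the paper complements with a qualitative assessment is Pawlak's accuracy of approximation, essentially a one-liner (sets taken from the lower/upper sketch earlier on this page):

```python
def accuracy_of_approximation(lower, upper):
    """Pawlak's accuracy coefficient |lower| / |upper|: 1 for a crisp
    (exactly definable) set, strictly below 1 for a rough set."""
    return len(lower) / len(upper) if upper else 1.0

lower, upper = {3, 4}, {1, 2, 3, 4}
print(accuracy_of_approximation(lower, upper))  # 0.5
```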
Predicting the Presence of Serious Coronary Artery Disease Based on 24 Hour Holter ECG Monitoring
Abstract
The purpose of this study was to evaluate the usefulness of classification methods in recognizing a cardiovascular pathology. Based on clinical and electrocardiographic (ECG) Holter data, we propose a method for predicting coronary stenosis demanding revascularization in patients diagnosed with stable coronary heart disease. A possible solution of this problem has been set in the context of rough set theory and methods. Rough set theory, introduced by Zdzisław Pawlak in the early 1980s, provides a foundation for the construction of classifiers. From the rough set perspective, the classifiers presented in the paper are based on a decision tree calculated by means of a local discretization method related to the problem of reduct computation. We present a new modification of the tree-building method which emphasizes the discernibility of objects belonging to decision classes indicated by human experts. The presented method may be used to assess the need for coronary revascularization. The paper includes results of experiments performed on medical data obtained from the Second Department of Internal Medicine, Collegium Medicum, Jagiellonian University, Kraków, Poland.
Jan G. Bazan, Sylwia Buregwa-Czuma, Przemysław Wiktor Pardel, Stanisława Bazan-Socha, Barbara Sokołowska, Sylwia Dziedzina
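The core step of local discretization can be sketched as choosing, for one numeric attribute, the cut that discerns the most pairs of objects from different decision classes. This is a simplified illustration with toy data, not the authors' modified, expert-weighted method:

```python
def best_cut(values, decisions):
    """Pick the cut on one numeric attribute that discerns the most
    pairs of objects from different decision classes - the greedy step
    of local discretization used to grow the decision tree."""
    order = sorted(set(values))
    best, best_score = None, -1
    for lo, hi in zip(order, order[1:]):
        cut = (lo + hi) / 2                    # midpoint between values
        left = [d for v, d in zip(values, decisions) if v < cut]
        right = [d for v, d in zip(values, decisions) if v >= cut]
        # count pairs separated by the cut that carry different decisions
        score = sum(l != r for l in left for r in right)
        if score > best_score:
            best, best_score = cut, score
    return best, best_score

values = [1.0, 2.0, 3.0, 8.0, 9.0]
decisions = ["no", "no", "no", "yes", "yes"]
print(best_cut(values, decisions))  # (5.5, 6)
```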
Interface of Rough Set Systems and Modal Logics: A Survey
Abstract
In this paper the relationship between rough set theory and modal logic is discussed. Pawlakian rough set theory has an obvious connection with the modal logic system \(S_5\). With the introduction of various other lower and upper approximation operators, other modal systems come into the picture, and the possibility of new modal systems also arises. Some of these issues are discussed here.
Pulak Samanta, Mihir Kumar Chakraborty
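The \(S_5\) connection is direct: with an equivalence relation as the accessibility relation of a Kripke frame, necessity and possibility coincide with the lower and upper approximations. A minimal sketch (toy frame and names assumed for illustration):

```python
def box(relation, proposition, universe):
    """Necessity: worlds whose entire accessibility class satisfies the
    proposition. Over an equivalence relation (S5) this is exactly the
    rough lower approximation."""
    return {w for w in universe
            if all(v in proposition for v in relation[w])}

def diamond(relation, proposition, universe):
    """Possibility: some accessible world satisfies the proposition -
    the rough upper approximation."""
    return {w for w in universe
            if any(v in proposition for v in relation[w])}

universe = {1, 2, 3, 4}
# equivalence classes {1, 2} and {3, 4} as the accessibility relation
relation = {1: {1, 2}, 2: {1, 2}, 3: {3, 4}, 4: {3, 4}}
p = {1, 3, 4}
print(box(relation, p, universe))      # {3, 4}       = lower approximation
print(diamond(relation, p, universe))  # {1, 2, 3, 4} = upper approximation
```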
A Semantic Text Retrieval for Indonesian Using Tolerance Rough Sets Models
Abstract
Previous research on the Tolerance Rough Sets Model (TRSM) has followed the rational approach of the AI perspective. This article presents studies that follow the opposite path, i.e., a cognitive approach, with the objective of a modular framework for a semantic text retrieval system based on TRSM, specifically for Indonesian. In addition to the proposed framework, this article proposes three methods based on TRSM: an automatic tolerance value generator, thesaurus optimization, and lexicon-based document representation. All methods were developed using our own corpus, ICL-corpus, and evaluated on an available Indonesian corpus, Kompas-corpus. The goal of a semantic information retrieval system is to retrieve information, not merely terms with similar meanings. This article is a small first step toward that objective.
Gloria Virginia, Hung Son Nguyen
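The central TRSM construction, the tolerance class of a term, can be sketched as follows. The fixed threshold and toy corpus are for illustration only; the paper's tolerance value generator derives the threshold automatically:

```python
from collections import defaultdict
from itertools import combinations

def tolerance_classes(documents, theta=2):
    """TRSM tolerance class of a term: the term itself plus every term
    co-occurring with it in at least `theta` documents."""
    cooc = defaultdict(int)
    for doc in documents:
        for t1, t2 in combinations(sorted(set(doc)), 2):
            cooc[(t1, t2)] += 1
    terms = {t for doc in documents for t in doc}
    classes = {t: {t} for t in terms}          # reflexive by definition
    for (t1, t2), count in cooc.items():
        if count >= theta:                     # tolerance, not equivalence:
            classes[t1].add(t2)                # symmetric but not transitive
            classes[t2].add(t1)
    return classes

docs = [["rough", "set", "theory"],
        ["rough", "set", "retrieval"],
        ["text", "retrieval"]]
print(tolerance_classes(docs, theta=2)["rough"])  # {'rough', 'set'}
```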
Some Transportation Problems Under Uncertain Environments
Abstract
The transportation problem (TP) is a very important area in operations research and management science. TPs involve not only cost minimization but also many other goals, such as profit maximization, time minimization, and minimization of the total deterioration of goods. Moreover, the available data of a transportation system, such as transportation costs, resources, demands, and conveyance capacities, are not always crisp or precise but uncertain. In this dissertation some transportation problems have been formulated and solved in different uncertain environments, e.g., fuzzy, type-2 fuzzy, rough, and linguistic.
Section 1 is introductory. Some basic concepts and definitions of fuzzy sets, type-2 fuzzy sets, rough sets, and the associated variables are introduced in Sect. 2. In Sect. 3, we formulate and solve two solid transportation problems (STPs) with fuzzy parameters, namely a multi-objective STP with budget constraints and a multi-objective multi-item STP. Section 4 presents some theoretical developments related to type-2 fuzzy variables (T2 FVs): a defuzzification method for T2 FVs and an interval approximation method for continuous T2 FVs. In this section, three transportation models with type-2 fuzzy parameters are formulated and solved. In Sect. 5, we present two transportation mode selection problems with linguistic evaluations represented by fuzzy variables and interval type-2 fuzzy variables, respectively. Here we develop two fuzzy multi-criteria group decision making methods and apply them to the respective mode selection problems. Section 6 presents a practical solid transportation model considering per-trip capacity for each type of conveyance. In this problem, fluctuating cost parameters are represented by rough variables. A rough chance-constrained programming model, a rough expected value model, and a rough dependent-chance programming model are used to solve the problem with rough cost parameters.
Pradip Kundu
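All of the dissertation's uncertain models build on the crisp transportation LP skeleton. Below is a minimal balanced instance solved with scipy.optimize.linprog; the costs, supplies and demands are illustrative, and the fuzzy, type-2 fuzzy and rough extensions replace these crisp parameters:

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 8.0],     # unit cost, source i -> destination j
                 [5.0, 3.0, 7.0]])
supply = [30.0, 40.0]
demand = [20.0, 30.0, 20.0]           # balanced: total supply = total demand

m, n = cost.shape
A_eq, b_eq = [], []
for i in range(m):                    # each source ships exactly its supply
    row = np.zeros(m * n)
    row[i * n:(i + 1) * n] = 1.0
    A_eq.append(row)
    b_eq.append(supply[i])
for j in range(n):                    # each destination receives its demand
    row = np.zeros(m * n)
    row[j::n] = 1.0
    A_eq.append(row)
    b_eq.append(demand[j])

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=[(0, None)] * (m * n))
print(res.x.reshape(m, n))            # optimal shipment plan
print(res.fun)                        # minimal total cost
```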
Backmatter
Metadata
Title
Transactions on Rough Sets XIX
Editors
James F. Peters
Andrzej Skowron
Dominik Ślęzak
Hung Son Nguyen
Jan G. Bazan
Copyright Year
2015
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-47815-8
Print ISBN
978-3-662-47814-1
DOI
https://doi.org/10.1007/978-3-662-47815-8
