2020 | Book

# Transactions on Rough Sets XXII

Editors: Prof. James F. Peters, Prof. Andrzej Skowron

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science

The LNCS journal Transactions on Rough Sets is devoted to the entire spectrum of rough sets related issues, from logical and mathematical foundations, through all aspects of rough set theory and its applications, such as data mining, knowledge discovery, and intelligent information processing, to relations between rough sets and other approaches to uncertainty, vagueness, and incompleteness, such as fuzzy sets and theory of evidence.

Volume XXII in the series is a continuation of a number of research streams that have grown out of the seminal work of Zdzislaw Pawlak during the first decade of the 21st century.

Abstract

We study decision trees as a means of knowledge representation. To this end, we design two techniques for constructing CART (Classification and Regression Tree)-like decision trees, based on bi-objective optimization algorithms. We investigate three parameters of the decision trees constructed by these techniques: number of vertices, global misclassification rate, and local misclassification rate.
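The bi-objective selection criterion behind such techniques can be illustrated with a small sketch. The candidate trees, represented here only by their (number of vertices, global misclassification rate) pairs, and the `pareto_front` helper are hypothetical stand-ins; the abstract's actual construction algorithms are not reproduced.

```python
def pareto_front(candidates):
    """Return the non-dominated (vertices, misclassification) pairs.

    One candidate dominates another if it is no worse in both
    objectives and strictly better in at least one.
    """
    front = []
    for c in candidates:
        dominated = any(
            o != c and o[0] <= c[0] and o[1] <= c[1]
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return sorted(set(front))

# Hypothetical candidate trees: (number of vertices, global error rate).
trees = [(3, 0.30), (5, 0.18), (5, 0.25), (9, 0.10), (15, 0.10)]
print(pareto_front(trees))  # → [(3, 0.3), (5, 0.18), (9, 0.1)]
```

The surviving pairs form the trade-off curve between tree size and accuracy that a bi-objective optimizer explores.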

Abstract

Technology improves every day. For an established theory to maintain relevance, its implementations must be updated to take advantage of these improvements. ROSETTA, a framework based on rough set theory, was developed in 1994 to exploit rough set paradigms in machine learning. Much has happened in computer technology since then, and to fully exploit these benefits ROSETTA needed to evolve. We designed and implemented a multi-core execution process in ROSETTA, optimized for speed and modular extension. The program was tested on four datasets of different sizes for computational speed and memory usage, the two factors considered the primary limitations of classification and machine learning. The results show an increase in computation speed consistent with expected gains. Memory usage scaled less than linearly per thread beyond five threads, with increases driven primarily by the number of objects in the dataset. The number of features in the data increased the base memory needed but did not significantly affect how memory scaled with threads. The multi-core implementation was successful, and ||-ROSETTA (pronounced Parallel-ROSETTA) is capable of fully exploiting modern hardware solutions.
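ROSETTA's actual multi-core code is not shown in the abstract; the following Python sketch only illustrates the general partition-and-merge pattern that a multi-core execution process over a decision table relies on. All names (`chunked`, `count_class`, `parallel_count`) and the decision-table layout are assumptions for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def chunked(objects, n_workers):
    """Split the object list into roughly equal chunks, one per worker."""
    k, m = divmod(len(objects), n_workers)
    chunks, i = [], 0
    for w in range(n_workers):
        size = k + (1 if w < m else 0)
        chunks.append(objects[i:i + size])
        i += size
    return [c for c in chunks if c]

def count_class(chunk, label):
    """Per-chunk work unit: count objects carrying a given decision label."""
    return sum(1 for obj in chunk if obj["decision"] == label)

def parallel_count(objects, label, n_workers=4):
    """Fan the chunks out to a thread pool and merge the partial counts."""
    chunks = chunked(objects, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(count_class, chunks, [label] * len(chunks))
    return sum(partials)

# Hypothetical decision table with a single decision attribute.
table = [{"decision": "accept"}] * 6 + [{"decision": "reject"}] * 2
print(parallel_count(table, "accept", n_workers=3))  # → 6
```

Memory scaling in such a design is driven by the object chunks each worker holds, which matches the abstract's observation that object count, not feature count, dominates per-thread memory growth.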

Abstract

In this thesis, a generalization of classical rough set theory [83] is developed, based on the so-called sequences of orthopairs that we define in [20] as special sequences of rough sets.

Our main aim is to introduce some operations between sequences of orthopairs and to discover how to generate them starting from the operations on standard rough sets (defined in [32]). We also prove several representation theorems, representing the class of finite centered Kleene algebras with the interpolation property [31], as well as some classes of finite residuated lattices (more precisely, Nelson algebras [87], Nelson lattices [23], IUML-algebras [73], and Kleene lattices with implication [27]), as sequences of orthopairs.

Moreover, as an application, we show that a sequence of orthopairs can be used to represent an examiner’s opinion on a number of candidates applying for a job, and we show that opinions of two or more examiners can be combined using operations between sequences of orthopairs in order to get a final decision on each candidate.
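As a rough illustration of the examiner scenario, an orthopair can be modeled as a pair of disjoint sets of accepted and rejected candidates, with the remaining candidates undecided. The `cautious_merge` rule below is one illustrative way to combine two examiners' opinions, not necessarily one of the operations defined in the thesis.

```python
def orthopair(positive, negative):
    """An orthopair: disjoint sets of accepted and rejected candidates;
    anyone in neither set is undecided (the boundary)."""
    positive, negative = frozenset(positive), frozenset(negative)
    assert not positive & negative, "the two sets must be disjoint"
    return positive, negative

def cautious_merge(op1, op2):
    """Illustrative combination rule: accept only candidates both examiners
    accept; reject those either examiner rejects, unless the other
    examiner accepts them (which keeps the resulting sets disjoint)."""
    accepted = op1[0] & op2[0]
    rejected = (op1[1] | op2[1]) - (op1[0] | op2[0])
    return orthopair(accepted, rejected)

examiner1 = orthopair({"ann", "bob"}, {"carl"})
examiner2 = orthopair({"ann"}, {"bob", "dina"})
# ann: accepted by both; bob: disputed, so undecided; carl, dina: rejected.
print(cautious_merge(examiner1, examiner2))
```

Disputed candidates land in neither set, i.e. in the boundary, mirroring how rough set boundaries absorb conflicting evidence.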

Finally, we provide the original modal logic \(SO_n\) with semantics based on sequences of orthopairs, and we employ it to describe the knowledge of an agent that increases over time, as new information is provided. Modal logic \(SO_n\) is characterized by the sequences \((\square _1, \ldots , \square _n)\) and \((\bigcirc _1, \ldots , \bigcirc _n)\) of n modal operators corresponding to a sequence \((t_1, \ldots , t_n)\) of consecutive times. Furthermore, the operator \(\square _i\) of \((\square _1, \ldots , \square _n)\) represents the knowledge of an agent at time \(t_i\), and it coincides with the necessity modal operator of S5 logic [29]. On the other hand, the main innovative aspect of modal logic \(SO_n\) is the presence of the sequence \((\bigcirc _1, \ldots , \bigcirc _n)\), since \(\bigcirc _i\) establishes whether an agent is interested in knowing a given fact at time \(t_i\).

The seminal work of Z. Pawlak [60] on rough set theory has attracted the attention of researchers from various disciplines. Algebraists have introduced new algebraic structures and represented existing ones in terms of algebras formed by rough sets. In logic, rough set theory provides models for several logics. This paper is an amalgamation of the algebras and logics of rough set theory.

We prove a structural theorem for Kleene algebras, showing that an element of a Kleene algebra can be looked upon as a rough set in some appropriate approximation space. The proposed propositional logic \(\mathcal {L}_{K}\) of Kleene algebras is sound and complete with respect to a 3-valued and a rough set semantics.
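The 3-valued side of such a semantics can be sketched with the standard strong Kleene truth tables, where the middle value plays the role of a rough set's boundary region; the numeric encoding of truth values here is an illustrative choice, not the paper's construction.

```python
# Strong Kleene three-valued semantics: 0 = false, 0.5 = undefined
# (the rough 'boundary' value), 1 = true.
VALUES = [0.0, 0.5, 1.0]

def neg(x): return 1 - x          # Kleene negation
def meet(x, y): return min(x, y)  # conjunction
def join(x, y): return max(x, y)  # disjunction

# The characteristic Kleene inequality a ∧ ¬a ≤ b ∨ ¬b holds for all
# values; note that neither a ∧ ¬a = 0 nor b ∨ ¬b = 1 is required,
# which is exactly where Kleene algebras weaken Boolean algebras.
assert all(meet(a, neg(a)) <= join(b, neg(b))
           for a in VALUES for b in VALUES)
print("Kleene inequality verified on the 3-valued chain")
```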

This article also investigates some negation operators in classical rough set theory, using Dunn’s approach. We investigate the semantics of the Stone negation in perp frames, that of the dual Stone negation in exhaustive frames, and that of the Stone and dual Stone negations with the regularity property in \(K_{-}\) frames. The study leads to new semantics for the logics corresponding to the classes of Stone algebras, dual Stone algebras, and regular double Stone algebras. As perp semantics provides a Kripke-style semantics for logics with negations, we exploit this feature to obtain duality results for several classes of algebras and their corresponding frames.

In another part of this article, we propose a granule-based generalization of rough set theory. We obtain representations of distributive lattices (with operators) and Heyting algebras (with operators). Moreover, various negations arise from this generalized rough set theory and occupy new positions in Dunn’s Kite of negations.

Abstract

Pawlakian approximation spaces rely on an equivalence relation that represents indiscernibility. As a generalization, approximation spaces have appeared that are based not on an equivalence relation but on a tolerance relation representing similarity. These spaces preserve the Pawlakian property that the union of the base sets covers the universe, but give up the requirement that the base sets be pairwise disjoint. Each base set is generated by taking, for a given object, all objects similar to it; that is, only similarity to a single distinguished object is considered. In the worst case, the number of base sets can equal the number of objects in the universe, which significantly increases the computational cost of set approximation and limits its efficient use on large databases.

This dissertation presents a possible solution to this problem: a space called similarity-based rough sets, in which the system of base sets is generated by correlation clustering. In this way the real similarity among objects is taken into account, not merely similarity to a distinguished object. The space generated this way represents the interpreted similarity properly on the one hand, and reduces the number of base sets to a manageable size on the other. This work deals with the properties and applicability of this space, presenting the advantages that can be gained from correlation clustering.
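A toy sketch of the size reduction described above, under assumed definitions: `tolerance_base_sets` builds one base set per object from a tolerance relation, while `greedy_clusters` is a simple greedy stand-in for correlation clustering (the real method optimizes agreement globally, which this sketch does not attempt).

```python
def tolerance_base_sets(universe, similar):
    """One base set per object: everything similar to that object.
    In the worst case this yields |universe| distinct base sets."""
    return {frozenset(y for y in universe if similar(x, y) or y == x)
            for x in universe}

def greedy_clusters(universe, similar):
    """Toy stand-in for correlation clustering: greedily put each object
    into the first existing cluster whose members are all similar to it."""
    clusters = []
    for x in universe:
        for c in clusters:
            if all(similar(x, y) for y in c):
                c.add(x)
                break
        else:
            clusters.append({x})
    return [frozenset(c) for c in clusters]

# Toy tolerance relation: two numbers are similar if they differ by at most 2.
U = list(range(10))
sim = lambda a, b: abs(a - b) <= 2
print(len(tolerance_base_sets(U, sim)))  # → 10 (one base set per object)
print(len(greedy_clusters(U, sim)))      # → 4  (far fewer cluster-based base sets)
```

Even on this tiny universe the per-object construction produces as many base sets as objects, while clustering by mutual similarity collapses them to a handful, which is the efficiency gain the dissertation targets.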