2004 | Original Paper | Book Chapter
Feature Subset Selection Based on Relative Dependency between Attributes
Authors: Jianchao Han, Xiaohua Hu, Tsao Young Lin
Published in: Rough Sets and Current Trends in Computing
Publisher: Springer Berlin Heidelberg
Included in: Professional Book Archive
Feature subset selection is an important component of knowledge discovery and data mining systems, helping to reduce data dimensionality. Rough set theory provides a mechanism for selecting feature subsets. In the rough set community, most feature subset selection algorithms are attribute reduct-oriented; that is, they find minimum reducts of the conditional attributes of a decision table. The two main approaches to finding attribute reducts are discernibility function-based and attribute dependency-based. These algorithms, however, suffer from intensive computation of either discernibility functions (for the former) or positive regions (for the latter). In this paper, we propose a new concept, called relative attribute dependency, and present a sufficient and necessary condition for the minimum conditional attribute reduct of a decision table, expressed in terms of the relative attribute dependency. The relative attribute dependency can be calculated by counting the distinct rows of the sub-decision table, instead of generating discernibility functions or positive regions, so the efficiency of computing minimum reducts is greatly improved. We develop two algorithms for finding minimum reducts of the conditional attributes: a brute-force algorithm and a heuristic algorithm that uses attribute entropy as the heuristic function. We also illustrate both algorithms with an example.
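The abstract's central idea, counting distinct rows of projections of the decision table rather than building discernibility functions or positive regions, can be sketched as follows. This is an illustrative interpretation only: the function names and the toy decision table are hypothetical, and the full definitions appear in the paper itself.

```python
# Sketch of relative attribute dependency via distinct-row counting.
# For a condition-attribute subset B and decision attributes D, the
# dependency is estimated as the ratio of distinct rows projected
# onto B to distinct rows projected onto B ∪ D (hypothetical naming).

def distinct_rows(table, columns):
    """Number of distinct rows when the table is projected onto `columns`."""
    return len({tuple(row[c] for c in columns) for row in table})

def relative_dependency(table, B, D):
    """Ratio |projection onto B| / |projection onto B ∪ D|."""
    return distinct_rows(table, B) / distinct_rows(table, B + D)

# Toy decision table: conditions a, b, c and decision d (made-up data).
table = [
    {"a": 0, "b": 0, "c": 1, "d": "yes"},
    {"a": 0, "b": 1, "c": 1, "d": "no"},
    {"a": 1, "b": 0, "c": 0, "d": "yes"},
    {"a": 1, "b": 1, "c": 0, "d": "no"},
]

# {a, b} determines d: 4 distinct rows either way, so the ratio is 1.0.
print(relative_dependency(table, ["a", "b"], ["d"]))  # -> 1.0
# {a} alone does not: 2 distinct rows vs. 4, so the ratio is 0.5.
print(relative_dependency(table, ["a"], ["d"]))       # -> 0.5
```

A ratio of 1 means the subset preserves the decision information of the full attribute set on a consistent table, which matches the abstract's claim that reduct checking needs only distinct-row counts, an operation databases and hash sets perform cheaply.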