2005 | OriginalPaper | Chapter
Orthogonally Rotational Transformation for Naive Bayes Learning
Authors: Limin Wang, Chunhong Cao, Haijun Li, Haixia Chen, Liyan Dong
Published in: Computational Intelligence and Security
Publisher: Springer Berlin Heidelberg
Naive Bayes is one of the most efficient and effective learning algorithms for machine learning, pattern recognition, and data mining, but its conditional independence assumption rarely holds in real-world applications. We show that the independence assumption can be approximated by an orthogonally rotational transformation of the input space. During the transformation, continuous attributes are handled directly rather than by discretization or by assuming they follow some standard probability distribution. Furthermore, information from unlabeled instances can be naturally exploited to improve parameter estimation without suffering the negative effects of missing class labels. The empirical results provide evidence supporting this explanation.
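The abstract does not specify the exact rotation used in the paper; as a hedged illustration, the following sketch applies one common orthogonal rotation, the eigenbasis of the pooled covariance (a PCA-style rotation), which decorrelates features before fitting a Gaussian naive Bayes model. All function names and the toy dataset are illustrative, not from the paper.

```python
import numpy as np

def orthogonal_rotation(X):
    """Orthogonal matrix whose columns are eigenvectors of cov(X).

    Rotating into this basis decorrelates the features, which brings the
    data closer to satisfying naive Bayes' independence assumption
    (exactly so for jointly Gaussian inputs).
    """
    cov = np.cov(X, rowvar=False)
    _, vecs = np.linalg.eigh(cov)  # symmetric matrix: orthonormal eigenvectors
    return vecs

def fit_gnb(X, y):
    """Per-class means, variances, and priors for Gaussian naive Bayes."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(0), Xc.var(0) + 1e-9, len(Xc) / len(X))
    return params

def predict_gnb(X, params):
    """Argmax over classes of log-likelihood plus log-prior."""
    classes, scores = [], []
    for c, (mu, var, prior) in params.items():
        logp = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)
        classes.append(c)
        scores.append(logp.sum(1) + np.log(prior))
    return np.array(classes)[np.argmax(np.stack(scores, 1), 1)]

# Toy data: two classes with strongly correlated attributes.
rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
X = np.vstack([rng.multivariate_normal([0, 0], cov, 200),
               rng.multivariate_normal([2, 2], cov, 200)])
y = np.array([0] * 200 + [1] * 200)

R = orthogonal_rotation(X)
Xr = X @ R  # orthogonally rotated input space
acc = (predict_gnb(Xr, fit_gnb(Xr, y)) == y).mean()
print(f"accuracy after rotation: {acc:.2f}")
```

Because `R` is orthogonal, the rotation preserves distances between instances; it only re-expresses the attributes in a decorrelated coordinate system, which is what makes the per-attribute (naive) factorization of the class-conditional density more accurate.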