Naive Bayes is one of the most efficient and effective learning algorithms for machine learning, pattern recognition, and data mining, but its conditional independence assumption is rarely true in real-world applications. We show that the independence assumption can be better satisfied by an orthogonal rotational transformation of the input space. During the transformation process, continuous attributes are handled directly rather than by simply discretizing them or assuming they follow some standard probability distribution. Furthermore, information from unlabeled instances can be naturally utilized to improve parameter estimation without suffering the negative effects caused by missing class labels. The empirical results provide evidence supporting our explanation.
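The core idea can be illustrated with a minimal sketch, assuming a simple instantiation of the approach: rotate the input space with an orthogonal matrix (here, the eigenvectors of the data covariance, which decorrelate the features) and then fit a Gaussian naive Bayes classifier on the rotated features. This is an illustration of the general idea only, not the paper's actual algorithm; all names and the synthetic data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two strongly correlated Gaussian classes, violating
# the naive independence assumption in the original feature space.
n = 200
X0 = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], n)
X1 = rng.multivariate_normal([2, 2], [[1.0, 0.8], [0.8, 1.0]], n)
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

# Orthogonal rotation: columns of R are orthonormal eigenvectors of the
# covariance matrix, so the rotated features are (nearly) uncorrelated.
cov = np.cov(X, rowvar=False)
_, R = np.linalg.eigh(cov)
Xr = X @ R

def fit_gnb(X, y):
    """Per-class means, variances, and priors for Gaussian naive Bayes."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(0), Xc.var(0) + 1e-9, len(Xc) / len(X))
    return params

def predict_gnb(params, X):
    """Pick the class with the highest Gaussian log-likelihood + log-prior."""
    classes = list(params)
    scores = []
    for c in classes:
        mu, var, prior = params[c]
        ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(1)
        scores.append(ll + np.log(prior))
    return np.array(classes)[np.argmax(np.stack(scores, axis=1), axis=1)]

acc = float((predict_gnb(fit_gnb(Xr, y), Xr) == y).mean())
print(round(acc, 2))
```

After the rotation the coordinate axes align with the directions of maximal variance, so treating the features as conditionally independent is a much better approximation than in the original correlated space.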
- Orthogonally Rotational Transformation for Naive Bayes Learning
- Springer Berlin Heidelberg