This chapter examines alternative strategies for selecting an attribute at each stage of the TDIDT decision tree generation algorithm and compares the sizes of the resulting trees for a number of datasets. The risk of obtaining decision trees that are entirely meaningless is highlighted, underlining the importance of a good choice of attribute selection strategy. One of the most widely used strategies is based on minimising entropy (or, equivalently, maximising information gain), and this approach is illustrated in detail.
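As a minimal sketch of the entropy-based strategy the abstract describes (not code from the book itself), the following computes the entropy of a set of class labels, the information gain for each candidate attribute, and selects the attribute with the highest gain. The toy dataset and function names are illustrative assumptions.

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy, in bits, of a collection of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """Reduction in entropy from splitting on the attribute at attr_index."""
    total = len(labels)
    # Partition the class labels by the attribute's values.
    branches = {}
    for row, label in zip(rows, labels):
        branches.setdefault(row[attr_index], []).append(label)
    weighted = sum((len(b) / total) * entropy(b) for b in branches.values())
    return entropy(labels) - weighted

# Toy dataset: two attributes per instance (outlook, windy); class = play?
rows = [("sunny", "no"), ("sunny", "yes"), ("rain", "no"), ("rain", "yes")]
labels = ["yes", "yes", "no", "no"]

# Splitting on outlook (index 0) separates the classes perfectly, so its
# gain equals the full entropy of 1 bit; splitting on windy gains nothing.
gains = [information_gain(rows, labels, i) for i in range(2)]
best = max(range(2), key=lambda i: gains[i])  # TDIDT would split on this attribute
```

Maximising information gain is equivalent to minimising the weighted average entropy of the branches, since the entropy of the unsplit dataset is the same for every candidate attribute.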
- Decision Tree Induction: Using Entropy for Attribute Selection
Prof. Max Bramer
- Springer London