Decision tree induction using a fast splitting attribute selection for large datasets
Authors:
Highlights:
Abstract
Several algorithms have been proposed in the literature for building decision trees (DT) for large datasets; however, almost all of them have memory restrictions because they must keep the whole training set, or a large part of it, in main memory, while those that avoid this restriction by selecting a subset of the training set either need extra time for this selection or depend on parameters that can be very difficult to determine. In this paper, we introduce a new algorithm that builds decision trees using a fast splitting attribute selection (DTFS) for large datasets. The proposed algorithm builds a DT without storing the whole training set in main memory and has only one parameter, to which it is very stable. Experimental results on both real and synthetic datasets show that our algorithm is faster than three of the most recent algorithms for building decision trees for large datasets, while achieving competitive accuracy.
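For context on the splitting criterion named in the keywords, the sketch below shows a minimal gain-ratio-based attribute selection in Python. It is not the authors' DTFS algorithm (the paper's incremental, memory-bounded construction is not reproduced here); the function names and the list-of-tuples data layout are assumptions made for illustration only.

```python
import math
from collections import Counter, defaultdict

def entropy(labels):
    """Shannon entropy of a non-empty list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def gain_ratio(instances, labels, attr_index):
    """Gain ratio of splitting on the nominal attribute at attr_index.

    instances: list of attribute-value tuples; labels: parallel list of class labels.
    (Hypothetical helper, not part of the paper's code.)
    """
    parent_entropy = entropy(labels)

    # Partition the class labels by the value of the chosen attribute.
    partitions = defaultdict(list)
    for inst, lab in zip(instances, labels):
        partitions[inst[attr_index]].append(lab)

    total = len(labels)
    info_gain = parent_entropy - sum(
        (len(part) / total) * entropy(part) for part in partitions.values()
    )
    # Split information penalizes attributes with many distinct values.
    split_info = -sum(
        (len(part) / total) * math.log2(len(part) / total) for part in partitions.values()
    )
    return info_gain / split_info if split_info > 0 else 0.0

def best_split_attribute(instances, labels, candidate_attrs):
    """Pick the candidate attribute index with the highest gain ratio."""
    return max(candidate_attrs, key=lambda a: gain_ratio(instances, labels, a))
```

A usage example: with `instances = [("sunny", "hot"), ("rainy", "mild"), ("sunny", "mild")]` and `labels = ["no", "yes", "yes"]`, calling `best_split_attribute(instances, labels, [0, 1])` returns the attribute index whose partition of the labels yields the highest gain ratio.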
Keywords: Decision trees, Large datasets, Gain-ratio criterion
Review process: Available online 2 June 2011.
DOI: https://doi.org/10.1016/j.eswa.2011.05.087