Symbolic and neural learning algorithms: An experimental comparison
Authors: Jude W. Shavlik, Raymond J. Mooney, Geoffrey G. Towell
Abstract
Despite the fact that many symbolic and neural network (connectionist) learning algorithms address the same problem of learning from classified examples, very little is known regarding their comparative strengths and weaknesses. Experiments comparing the ID3 symbolic learning algorithm with the perceptron and backpropagation neural learning algorithms have been performed using five large, real-world data sets. Overall, backpropagation performs slightly better than the other two algorithms in terms of classification accuracy on new examples, but takes much longer to train. Experimental results suggest that backpropagation can work significantly better on data sets containing numerical data. Also analyzed empirically are the effects of (1) the amount of training data, (2) imperfect training examples, and (3) the encoding of the desired outputs. Backpropagation occasionally outperforms the other two systems when given relatively small amounts of training data. It is slightly more accurate than ID3 when examples are noisy or incompletely specified. Finally, backpropagation more effectively utilizes a “distributed” output encoding.
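As a concrete illustration of the "learning from classified examples" setting the abstract refers to, the sketch below implements the classic perceptron learning rule, one of the three algorithms compared in the paper. This is a minimal, generic sketch of Rosenblatt's mistake-driven update (not the authors' experimental code); the toy AND data set, function names, and hyperparameters are illustrative choices only.

```python
def train_perceptron(examples, n_features, epochs=20, lr=0.1):
    """Perceptron learning rule: weights are nudged toward each
    misclassified example. Convergence is guaranteed only when the
    data are linearly separable, one reason backpropagation can
    outperform it on harder real-world sets."""
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:  # y is +1 or -1
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation >= 0 else -1
            if pred != y:  # update only on mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Hypothetical toy task: logical AND, which is linearly separable.
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
w, b = train_perceptron(data, n_features=2)
```

After training, the learned hyperplane classifies all four AND examples correctly; on data that is not linearly separable, the loop would simply cycle without converging.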
Keywords: Empirical learning, connectionism, neural networks, inductive learning, ID3, perceptron, backpropagation
DOI: https://doi.org/10.1007/BF00114160