Minimum Generalization Via Reflection: A Fast Linear Threshold Learner
Authors: Steven Hampson, Dennis Kibler

Abstract
The number of adjustments required to learn the average LTU function of d features, each of which can take on n equally spaced values, grows as approximately n²d when the standard perceptron training algorithm is used on the complete input space of nᵈ points and perfect classification is required. We demonstrate a simple modification that reduces the observed growth rate in the number of adjustments to approximately d²(log(d) + log(n)) with most, but not all, input presentation orders. A similar speed-up is also produced by applying the simple but computationally expensive heuristic "don't overgeneralize" to the standard training algorithm. This performance is very close to the theoretical optimum for learning LTU functions by any method, and is evidence that perceptron-like learning algorithms can learn arbitrary LTU functions in polynomial, rather than exponential, time under normal training conditions. Similar modifications can be applied to the Winnow algorithm, achieving similar performance improvements and demonstrating the generality of the approach.
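The abstract counts weight adjustments made by the standard perceptron training algorithm when learning a linear threshold unit (LTU) to perfect classification. The paper's "reflection" modification and the "don't overgeneralize" heuristic are not specified in the abstract, so the sketch below shows only the baseline learner being compared against; function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def perceptron_train(X, y, max_epochs=1000):
    """Learn an LTU (predict 1 iff w.x + b > 0) with the standard
    perceptron rule. Returns (w, b, adjustments), where `adjustments`
    counts weight updates -- the cost measure used in the abstract."""
    n_samples, d = X.shape
    w = np.zeros(d)
    b = 0.0
    adjustments = 0
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(X, y):              # one input presentation order
            pred = 1 if xi @ w + b > 0 else 0
            if pred != yi:                     # misclassified: adjust toward yi
                delta = 1 if yi == 1 else -1
                w += delta * xi
                b += delta
                adjustments += 1
                errors += 1
        if errors == 0:                        # perfect classification reached
            return w, b, adjustments
    return w, b, adjustments

# Example: the 2-feature AND function is an LTU and is learned exactly.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
w, b, adj = perceptron_train(X, y)
preds = [1 if x @ w + b > 0 else 0 for x in X]
```

Since AND is linearly separable, the perceptron convergence theorem guarantees this loop terminates with zero errors; the paper's contribution concerns how fast `adjustments` grows with d and n.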
Keywords: LTU, Winnow, Perceptron, Reflection, generalization
Paper URL: https://doi.org/10.1023/A:1007693109495