General Convergence Results for Linear Discriminant Updates

Authors: Adam J. Grove, Nick Littlestone, Dale Schuurmans

Abstract

The problem of learning linear-discriminant concepts can be solved by various mistake-driven update procedures, including the Winnow family of algorithms and the well-known Perceptron algorithm. In this paper we define the general class of “quasi-additive” algorithms, which includes Perceptron and Winnow as special cases. We give a single proof of convergence that covers a broad subset of algorithms in this class, including not only Perceptron and Winnow but also many new algorithms. Our proof hinges on a generic measure-of-progress construction that gives insight into when and how such algorithms converge.
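To make the abstract's claim concrete, here is a minimal sketch (an assumption, not the paper's exact formulation) of the quasi-additive idea: the weight vector is obtained by applying a link function `f` to a parameter vector `z` that receives ordinary additive, mistake-driven updates. The identity link recovers a Perceptron-style rule, while an exponential link turns the additive updates in `z`-space into multiplicative, Winnow-style updates in weight space. The function name, the `theta` threshold parameter, and the toy data are all illustrative.

```python
import math

def quasi_additive_train(examples, f, eta=1.0, theta=0.0, epochs=100):
    """Mistake-driven quasi-additive training sketch.

    examples: list of (x, y) pairs with y in {-1, +1}.
    f: link function mapping the additive parameters z to weights w.
    theta: decision threshold (0 for Perceptron; Winnow typically uses n).
    """
    n = len(examples[0][0])
    z = [0.0] * n  # additively-updated parameters
    for _ in range(epochs):
        mistakes = 0
        for x, y in examples:
            w = [f(zi) for zi in z]  # weights via the link function f
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > theta else -1
            if pred != y:  # update z additively, only on mistakes
                mistakes += 1
                for i in range(n):
                    z[i] += eta * y * x[i]
        if mistakes == 0:  # converged on this sample
            break
    return [f(zi) for zi in z]

# Perceptron-style: identity link, zero threshold.
w_perc = quasi_additive_train(
    [([1.0, -1.0, 1.0], 1), ([-1.0, 1.0, 1.0], -1)],
    f=lambda z: z)

# Winnow-style: exponential link keeps weights positive, and each additive
# step in z multiplies w_i by exp(+/- eta * x_i); threshold theta = n = 3.
w_winn = quasi_additive_train(
    [([1.0, 0.0, 1.0], 1), ([0.0, 1.0, 1.0], -1)],
    f=lambda z: math.exp(z), theta=3.0)
```

With the identity link this reduces exactly to the classic Perceptron update; the convergence result described in the abstract is what licenses treating both variants (and other choices of `f`) within one analysis.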

Keywords: Winnow, Perceptron, on-line learning, mistake bounds, Bregman divergence

Paper link: https://doi.org/10.1023/A:1010844028087