Improved Bounds on the Sample Complexity of Learning
Abstract:
We present a new general upper bound on the number of examples required to estimate all of the expectations of a set of random variables uniformly well. The quality of the estimates is measured using a variant of the relative error proposed by Haussler and Pollard. We also show that our bound is within a constant factor of the best possible. Our upper bound implies improved bounds on the sample complexity of learning according to Haussler's decision theoretic model.
Keywords: sample complexity, machine learning, empirical process theory, PAC learning, agnostic learning
Article history: Received 24 January 2000, Revised 1 November 2000, Available online 25 May 2002.
DOI: https://doi.org/10.1006/jcss.2000.1741