General Bounds on the Number of Examples Needed for Learning Probabilistic Concepts
Authors:
Highlights:
Abstract
Given a p-concept class C, we define two important functions d_C(γ) and d′_C(γ) (related to the notion of γ-shattering). We prove a lower bound of Ω((d_C(γ) − 1)/(εγ²)) on the number of examples required for learning C with an (ε, γ)-good model of probability. We prove similar lower bounds for some other learning models, such as learning with γ-bounded absolute (or quadratic) difference or learning with a γ-good decision rule. For the class ND of nondecreasing p-concepts on the real domain, d_ND(γ) = Ω(1/γ). It can be shown that the resulting lower bounds for learning ND (within the models in consideration) are tight to within a logarithmic factor. In order to obtain the almost-matching upper bounds, we introduce a new method for designing learning algorithms: dynamic partitioning of the domain by use of splitting trees. The paper also contains a discussion of the gaps between the general lower bounds and the corresponding general upper bounds. It can be shown that, under very mild conditions, these gaps are quite narrow.
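The splitting-tree technique is only named in the abstract; as a rough illustration of what dynamic partitioning of the real domain could look like, here is a minimal, hypothetical Python sketch (the class names, the midpoint splitting rule, and the leaf-frequency estimate are assumptions for illustration, not the paper's algorithm). It routes labeled examples into intervals of [0, 1], splits an interval once it has seen enough examples, and estimates a nondecreasing p-concept by the empirical label frequency in each leaf.

```python
# Hypothetical sketch: dynamic partitioning of [0, 1] with a splitting tree.
# Not the paper's algorithm; names and thresholds are illustrative assumptions.

class SplitNode:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi       # interval covered by this node
        self.samples = []               # (x, y) pairs seen while this node is a leaf
        self.left = self.right = None   # children after a split

    def is_leaf(self):
        return self.left is None


class SplittingTree:
    def __init__(self, split_threshold=20):
        self.root = SplitNode(0.0, 1.0)
        self.split_threshold = split_threshold  # examples a leaf holds before splitting

    def _leaf_for(self, x):
        node = self.root
        while not node.is_leaf():
            mid = (node.lo + node.hi) / 2.0
            node = node.left if x < mid else node.right
        return node

    def insert(self, x, y):
        """Route a labeled example (x in [0,1], y in {0,1}) to its leaf;
        split the leaf at its midpoint once it holds enough examples."""
        leaf = self._leaf_for(x)
        leaf.samples.append((x, y))
        if len(leaf.samples) >= self.split_threshold:
            mid = (leaf.lo + leaf.hi) / 2.0
            leaf.left, leaf.right = SplitNode(leaf.lo, mid), SplitNode(mid, leaf.hi)
            for xs, ys in leaf.samples:          # redistribute samples to children
                (leaf.left if xs < mid else leaf.right).samples.append((xs, ys))
            leaf.samples = []

    def estimate(self, x):
        """Empirical frequency of label 1 in the leaf containing x
        (a crude stand-in for a learned model of probability)."""
        leaf = self._leaf_for(x)
        if not leaf.samples:
            return 0.5
        return sum(y for _, y in leaf.samples) / len(leaf.samples)


if __name__ == "__main__":
    import random
    random.seed(0)
    tree = SplittingTree()
    target = lambda x: x            # a nondecreasing p-concept on [0, 1]
    for _ in range(2000):
        x = random.random()
        y = 1 if random.random() < target(x) else 0
        tree.insert(x, y)
    for x in (0.1, 0.5, 0.9):
        print(x, round(tree.estimate(x), 2))
```

The point of the sketch is only that the partition is refined where examples accumulate, rather than fixed in advance; the paper's actual construction and its analysis are what yield the almost-matching upper bounds.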
Keywords:
Review history: Received 27 September 1993, revised 11 March 1994, available online 25 May 2002.
DOI: https://doi.org/10.1006/jcss.1996.0019