Computational Sample Complexity and Attribute-Efficient Learning
Authors:
Abstract
Two fundamental measures of the efficiency of a learning algorithm are its running time and the number of examples it requires (its sample complexity). In this paper we demonstrate that even for simple concept classes, an inherent tradeoff can exist between running time and sample complexity. We present a concept class of 1-decision lists and prove that while a computationally unbounded learner can learn the class from O(1) examples, under a standard cryptographic assumption any polynomial-time learner requires almost Θ(n) examples. Using a different construction, we present a concept class of k-decision lists which exhibits a similar but stronger gap in sample complexity. These results strengthen the results of Decatur et al. (1997, in "Proc. Tenth Ann. Conf. Comput. Learning Theory," pp. 130–142) on distribution-free computational sample complexity and come within a logarithmic factor of the largest possible gap for concept classes of k-decision lists. Finally, we construct a concept class of decision lists which can be learned attribute-efficiently and can be learned in polynomial time, but which cannot be learned attribute-efficiently in polynomial time. This is the first result showing that attribute-efficient learning can be computationally hard. The main tools used are one-way permutations, error-correcting codes, and pseudorandom generators.
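Since decision lists are the central objects in these constructions, a minimal background sketch may be useful: a k-decision list over n Boolean attributes is an ordered sequence of rules, each pairing a conjunction of at most k literals with an output bit; the first rule whose conjunction is satisfied determines the label, and a default bit applies if none fires. The code below is an illustrative evaluator only, not any construction from the paper; the representation of rules as (literals, output) pairs is a hypothetical choice made here.

```python
# Illustrative sketch (not from the paper): evaluating a k-decision list.
# A literal is a pair (i, v) and is satisfied on an example x when x[i] == v.
# Each rule is (literals, output) with |literals| <= k; a 1-decision list
# is the special case where every rule tests a single literal.

def evaluate_decision_list(rules, default, example):
    """Return the output bit of the first rule whose conjunction of
    literals is satisfied by `example`, or `default` if none fires."""
    for literals, output in rules:
        if all(example[i] == v for i, v in literals):
            return output
    return default

# Example: the 1-decision list
#   "if x2 == 1 output 0; else if x0 == 0 output 1; else output 1"
rules = [([(2, 1)], 0), ([(0, 0)], 1)]
print(evaluate_decision_list(rules, 1, [0, 1, 0]))  # -> 1 (second rule fires)
```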
Keywords:
Review history: Received 19 October 1998; revised 26 July 1999; available online 25 May 2002.
DOI: https://doi.org/10.1006/jcss.1999.1666