On the difficulty of approximately maximizing agreements
Authors: Shai Ben-David, Nadav Eiron, Philip M. Long
Abstract
We address the computational complexity of learning in the agnostic framework. For a variety of common concept classes we prove that, unless P=NP, there is no polynomial time approximation scheme for finding a member in the class that approximately maximizes the agreement with a given training sample. In particular, our results apply to the classes of monomials, axis-aligned hyper-rectangles, closed balls and monotone monomials. For each of these classes, we prove the NP-hardness of approximating maximal agreement to within some fixed constant (independent of the sample size and of the dimensionality of the sample space). For the class of half-spaces, we prove that, for any ε>0, it is NP-hard to approximately maximize agreements to within a factor of (418/415−ε), improving on the best previously known constant for this problem, and using a simpler proof. An interesting feature of our proofs is that, for each of the classes we discuss, we find patterns of training examples that, while being hard for approximating agreement within that concept class, allow efficient agreement maximization within other concept classes. These results bring up a new aspect of the model selection problem—they imply that the choice of hypothesis class for agnostic learning from among those considered in this paper can drastically affect the computational complexity of the learning process.
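For concreteness, the agreement-maximization problem the abstract refers to can be stated as follows. The notation here is a standard formalization and is ours, not quoted from the paper: given a concept class $\mathcal{H}$ and a training sample $S = ((x_1, y_1), \ldots, (x_m, y_m))$ with labels $y_i \in \{0, 1\}$, the goal is to find a hypothesis maximizing the fraction of correctly labeled examples,

$$\mathrm{opt}_{\mathcal{H}}(S) \;=\; \max_{h \in \mathcal{H}} \; \frac{1}{m}\,\bigl|\{\, i \in [m] : h(x_i) = y_i \,\}\bigr|.$$

The hardness results say that, for each listed class, no polynomial-time algorithm is guaranteed to return an $h \in \mathcal{H}$ whose agreement rate comes within the stated constant factor of $\mathrm{opt}_{\mathcal{H}}(S)$, unless P=NP.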
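As a small illustration of the quantity being maximized (our own sketch, not code from the paper; the function names halfspace and agreement are chosen for exposition), the agreement rate of a half-space hypothesis on a labeled sample can be computed as:

    # Agreement rate of a half-space hypothesis h(x) = [w . x >= theta]
    # on a labeled sample of (point, label) pairs.

    def halfspace(w, theta, x):
        """Predict 1 if x lies on the positive side of the hyperplane."""
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

    def agreement(w, theta, sample):
        """Fraction of (x, y) pairs the half-space labels correctly."""
        return sum(halfspace(w, theta, x) == y for x, y in sample) / len(sample)

    # Example: the half-space w = (1, -1), theta = 0.5 agrees with 2 of the
    # 3 labeled points below, so the agreement rate is 2/3.
    sample = [((1.0, 0.0), 1), ((0.0, 1.0), 0), ((1.0, 1.0), 1)]
    print(agreement((1.0, -1.0), 0.5, sample))  # 0.666...

Computing this rate for a fixed hypothesis is trivial; the paper's result is that maximizing it over all half-spaces to within a factor of (418/415−ε) is NP-hard.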
Keywords: Machine learning, Computational learning theory, Neural networks, Inapproximability, Hardness, Half-spaces, Axis-aligned hyper-rectangles, Balls, Monomials
Article history: Received 29 November 2000; Revised 25 October 2002; Available online 6 May 2003.
DOI: https://doi.org/10.1016/S0022-0000(03)00038-2