Approximation and Learning of Convex Superpositions
Authors:
Abstract
We present a fairly general method for constructing classes of functions of finite scale-sensitive dimension (the scale-sensitive dimension is a generalization of the Vapnik–Chervonenkis dimension to real-valued functions). The construction is as follows: start from a class F of functions of finite VC dimension, take the convex hull coF of F, and then take the closure co̅F of coF in an appropriate sense. As an example, we study in more detail the case where F is the class of threshold functions. It is shown that co̅F includes two important classes of functions:
• neural networks with one hidden layer and bounded output weights;
• the so-called Γ class of Barron, which was shown to satisfy a number of interesting approximation and closure properties.
We also give an integral representation in the form of a “continuous neural network” which generalizes Barron's. It is shown that the existence of an integral representation is equivalent to both L2 and L∞ approximability. A preliminary version of this paper was presented at EuroCOLT'95. The main difference with the conference version is the addition of Theorem 7, where we show that a key topological result fails when the VC dimension hypothesis is removed.
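The construction described in the abstract can be illustrated concretely. The sketch below, a hypothetical illustration not taken from the paper, builds an element of coF when F is the class of threshold functions: a convex combination of linear threshold units, i.e., a one-hidden-layer network whose nonnegative output weights sum to 1 (hence bounded). All names and the specific units are assumptions for illustration.

```python
def dot(u, v):
    # Inner product of two vectors given as lists.
    return sum(a * b for a, b in zip(u, v))

def threshold(x, w, t):
    # A threshold function from F: outputs +1 if w·x >= t, else -1.
    return 1.0 if dot(w, x) >= t else -1.0

def convex_superposition(x, units, alphas):
    # An element of coF: a convex combination of threshold functions,
    # i.e., a one-hidden-layer network with bounded output weights.
    assert all(a >= 0 for a in alphas), "output weights must be nonnegative"
    assert abs(sum(alphas) - 1.0) < 1e-9, "output weights must sum to 1"
    return sum(a * threshold(x, w, t) for a, (w, t) in zip(alphas, units))

# Hypothetical example: two threshold units on R^2.
units = [([1.0, 0.0], 0.0), ([0.0, 1.0], 0.5)]
alphas = [0.6, 0.4]

y1 = convex_superposition([1.0, 1.0], units, alphas)    # both units fire: 1.0
y2 = convex_superposition([-1.0, 0.0], units, alphas)   # neither fires: -1.0
```

Since the output weights form a convex combination and each threshold unit takes values in {-1, +1}, every function built this way is bounded in [-1, 1]; taking limits of such combinations (in the appropriate sense) yields the closure co̅F studied in the paper.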
Keywords:
Review history: Received 4 June 1996; Available online 25 May 2002.
Paper URL: https://doi.org/10.1006/jcss.1997.1506