Robust Learning Is Rich

Abstract

Intuitively, a class of objects is robustly learnable if not only the class itself but also all of its computable transformations are learnable. In that sense, robust learnability seems to be a desirable property in all fields of learning. We study this phenomenon within the paradigm of inductive inference. Here a class of recursive functions is called robustly learnable under the criterion I iff all of its images under general recursive operators are learnable under the criterion I. M. Fulk (1990, in "31st Annual IEEE Symposium on Foundations of Computer Science," pp. 405–410, IEEE Comput. Soc. Press, Los Alamitos, CA) showed the existence of a nontrivial class which is robustly learnable under the criterion Ex. However, several of the hierarchies (such as the anomaly hierarchies for Ex and Bc) do not stand robustly. Hence, up to now it was not clear whether robust learning is really rich. The main intention of this paper is to give strong evidence that robust learning is rich. Our main result, proved by a priority construction, is that the mind change hierarchy for Ex stands robustly. Moreover, the hierarchies of team learning for both Ex and Bc stand robustly as well. In several contexts, we observe the surprising fact that a more complex topological structure of the classes to be learned leads to positive robustness results, whereas an easy topological structure yields negative results. We also show the counterintuitive fact that even some self-referential classes can be learned robustly. Some of our results point out the difficulty of robust learning when only a bounded number of mind changes is allowed. Further results concerning uniformly robust learning are derived.
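To make the Ex ("explanatory") criterion and the notion of mind changes concrete, here is a minimal, hedged sketch (not from the paper) of Gold-style identification by enumeration for a small finite class of total functions: after each data point, the learner conjectures the least hypothesis index consistent with the data seen so far, and for a finite class this sequence of conjectures converges, i.e., the class is Ex-learned with finitely many mind changes. The hypothesis list and function names are illustrative assumptions.

```python
# Illustrative sketch only: Ex-style learning by enumeration for a
# finite, hand-picked class of total functions on the naturals.
# The learner's conjecture may change finitely often ("mind changes")
# before converging to a correct index.

HYPOTHESES = [
    lambda x: 0,      # index 0: constant zero
    lambda x: x,      # index 1: identity
    lambda x: x * x,  # index 2: square
    lambda x: x + 1,  # index 3: successor
]

def learner(data):
    """Return the least index consistent with data = [(x, f(x)), ...]."""
    for i, h in enumerate(HYPOTHESES):
        if all(h(x) == y for x, y in data):
            return i
    return None  # no hypothesis in the class fits the data

def ex_learn(f, steps):
    """Feed the graph of f point by point; collect the conjectures."""
    data, conjectures = [], []
    for x in range(steps):
        data.append((x, f(x)))
        conjectures.append(learner(data))
    return conjectures

# The square function is identified after two mind changes:
print(ex_learn(lambda x: x * x, 5))  # → [0, 1, 2, 2, 2]
```

Robust learnability asks for more: every image of the class under a general recursive operator must remain learnable in this sense, which rules out learners that succeed only by exploiting incidental coding tricks of the original class.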

Article history: Received 2 June 1999; revised 9 November 1999; available online 25 May 2002.

DOI: https://doi.org/10.1006/jcss.2000.1700