On Learning Multiple Concepts in Parallel
Authors:
Highlights:
Abstract
A class U of recursive functions is said to be finitely (a, b)-learnable if and only if, for any b-tuple of pairwise distinct functions from U, at least a of the b functions have been learned correctly from examples of their behavior after some finite amount of time. It is shown that this approach, called learning in parallel, is more powerful than nonparallel learning. Furthermore, it is shown that imposing on parallel learning the restriction (called parallel superlearning) that the learning algorithm also identify on which of the input functions it is successful is still more powerful than nonparallel learning. A necessary and sufficient condition is derived for (a, b)-superlearning and (c, d)-superlearning to be of the same power. Our new notion of parallel learning is compared with other, previously defined notions of learning in parallel. Finally, we synthesize our notion of learning in parallel with the concept of team learning and obtain some interesting trade-offs and comparisons.
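As a rough sketch of the central definition, the finite (a, b)-learning criterion can be stated in standard inductive-inference notation; the learning machine M, the acceptable programming system φ, and the FIN-style (one-shot, finite) convergence requirement below are assumptions filled in from that tradition, not details given in the abstract itself:

\[
U \in \mathrm{FIN}^{(a,b)} \iff \exists M \;\; \forall f_1,\dots,f_b \in U \text{ pairwise distinct}:
\]
\[
M,\ \text{fed the graphs of } f_1,\dots,f_b \text{ in parallel, after finitely many steps outputs programs } e_1,\dots,e_b
\]
\[
\text{such that } \bigl|\{\, i : \varphi_{e_i} = f_i \,\}\bigr| \ge a .
\]

Under this reading, the superlearning variant would additionally require M to name the indices i on which it claims success, and those claims must all be correct for at least a of the b inputs.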
Keywords:
Review process: Available online 25 May 2002.
Paper URL: https://doi.org/10.1006/jcss.1995.1005