Proper learning of k-term DNF formulas from satisfying assignments

Authors:

Highlights:

Abstract

In certain applications only positive examples may be available for learning concepts of a class of interest. Furthermore, learning may have to be done properly, i.e., the hypothesis space has to coincide with the concept class, and without false positives, i.e., the hypothesis always has to be a subset of the target concept (one-sided error). For the well-studied class of k-term DNF formulas it has long been known that learning is difficult: unless RP = NP, it is not feasible to learn k-term DNF formulas properly in a distribution-free sense, even if both positive and negative examples are available and even if false positives are allowed. This paper constructs an efficient algorithm that, for fixed but arbitrary k and q, and for examples drawn from q-bounded distributions, properly learns the class of k-term DNF formulas without false positives from positive examples alone, with arbitrarily small relative error.
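To make the objects in the abstract concrete, the following is a minimal sketch (not from the paper) of what a k-term DNF formula is and what the "no false positives" requirement means: a hypothesis has one-sided error when every assignment it accepts is also accepted by the target concept. The representation of terms as sets of signed integers and the brute-force subset check are illustrative assumptions, not the paper's algorithm.

```python
from itertools import product

# A term is a set of signed literals: i means "x_i is true", -i means "x_i is false".
# A k-term DNF is a disjunction of at most k such terms (conjunctions).

def satisfies(term, assignment):
    """True if the 0/1 assignment (position j holds x_{j+1}) satisfies every literal."""
    return all((assignment[abs(lit) - 1] == 1) == (lit > 0) for lit in term)

def evaluates(dnf, assignment):
    """A DNF accepts an assignment if at least one term is satisfied."""
    return any(satisfies(term, assignment) for term in dnf)

def no_false_positives(hypothesis, concept, n):
    """Brute-force check (exponential in n, for illustration only) that the
    hypothesis is a subset of the concept: one-sided error."""
    return all(evaluates(concept, a)
               for a in product((0, 1), repeat=n)
               if evaluates(hypothesis, a))

# Example over 3 variables: concept = x1 OR (x2 AND NOT x3),
# hypothesis = x1 AND x2, which accepts only concept-positive assignments.
concept = [{1}, {2, -3}]
hypothesis = [{1, 2}]
print(no_false_positives(hypothesis, concept, n=3))  # True
```

In the learning setting of the paper, such a subset hypothesis must additionally cover all but an arbitrarily small fraction of the concept's probability mass under the (q-bounded) example distribution.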

Keywords: Algorithmic learning, Learning from positive examples, q-bounded distributions, k-term DNF formulas

Article history: Received 14 July 2017, Revised 25 June 2019, Accepted 9 July 2019, Available online 18 July 2019, Version of Record 8 August 2019.

DOI: https://doi.org/10.1016/j.jcss.2019.07.004