On the limits of proper learnability of subclasses of DNF formulas

Authors: Krishnan Pillaipakkamnatt, Vijay Raghavan

Abstract

Bshouty, Goldman, Hancock, and Matar have shown that DNF formulas with up to √(log n) terms can be properly learned in the exact model with equivalence and membership queries. Given standard complexity-theoretic assumptions, we show that this positive result for proper learning cannot be significantly improved in the exact model or in the PAC model extended to allow membership queries. Our negative results are derived from two general techniques for proving such results in the exact model and the extended PAC model. As a further application of these techniques, we consider read-thrice DNF formulas. Here we improve on Aizenstein, Hellerstein, and Pitt's negative result for proper learning in the exact model in two ways. First, we show that their assumption of NP ≠ co-NP can be replaced with the weaker assumption of P ≠ NP. Second, we show that read-thrice DNF formulas are not properly learnable in the extended PAC model, assuming RP ≠ NP.
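To make the objects in the abstract concrete, the sketch below (not from the paper; names and representation are illustrative) evaluates a k-term DNF formula on a single Boolean assignment — exactly the kind of answer a membership-query oracle returns to the learner.

```python
# Illustrative sketch: a k-term DNF over n Boolean variables is an OR of
# k terms, each term an AND of literals. A membership query asks for the
# target formula's value on one chosen assignment.

def eval_dnf(terms, assignment):
    """terms: list of terms; each term is a list of (var_index, polarity)
    pairs, where polarity True stands for the positive literal x_i and
    polarity False for its negation.
    assignment: sequence of booleans, one per variable.
    Returns True iff some term is satisfied by the assignment."""
    for term in terms:
        if all(assignment[i] == pol for i, pol in term):
            return True
    return False

# Example: f = (x0 AND NOT x1) OR x2, a 2-term DNF over 3 variables.
f = [[(0, True), (1, False)], [(2, True)]]
print(eval_dnf(f, (True, False, False)))   # first term satisfied -> True
print(eval_dnf(f, (False, False, False)))  # no term satisfied -> False
```

In proper learning, the learner's hypotheses must themselves be DNF formulas of the same restricted class; the paper's negative results concern exactly this representational constraint.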

Keywords: Models of learning (exact, PAC), proper learning

DOI: https://doi.org/10.1007/BF00114011