The complexity of exact learning of acyclic conditional preference networks from swap examples
Authors:
Abstract
Learning of user preferences, as represented by, for example, Conditional Preference Networks (CP-nets), has become a core issue in AI research. Recent studies investigate learning of CP-nets from randomly chosen examples or from membership and equivalence queries. To assess the optimality of learning algorithms as well as to better understand the combinatorial structure of classes of CP-nets, it is helpful to calculate certain learning-theoretic information complexity parameters. This article focuses on the frequently studied case of exact learning from so-called swap examples, which express preferences among objects that differ in only one attribute. It presents bounds on or exact values of some well-studied information complexity parameters, namely the VC dimension, the teaching dimension, and the recursive teaching dimension, for classes of acyclic CP-nets. We further provide algorithms that exactly learn tree-structured and general acyclic CP-nets from membership queries. Using our results on complexity parameters, we prove that our algorithms, as well as another query learning algorithm for acyclic CP-nets presented in the literature, are near-optimal.
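To make the notion of a swap example concrete, here is a small illustrative sketch (not taken from the paper) of a toy acyclic CP-net over two binary attributes and of how a swap pair is labelled; all names and the data representation are hypothetical.

```python
# Illustrative toy CP-net (hypothetical representation, not the paper's):
# each attribute has a conditional preference table (CPT) stating which of
# its own values is preferred, given the values of its parent attributes.
cpnet = {
    "A": {"parents": [], "cpt": {(): "a1"}},  # a1 > a2 unconditionally
    "B": {"parents": ["A"],                   # B's preference depends on A
          "cpt": {("a1",): "b1",              # if A=a1 then b1 > b2
                  ("a2",): "b2"}},            # if A=a2 then b2 > b1
}

def swap_preferred(net, o1, o2):
    """Given two outcomes that differ in exactly one attribute (a swap
    pair), return the outcome the CP-net prefers."""
    diff = [x for x in o1 if o1[x] != o2[x]]
    assert len(diff) == 1, "not a swap pair"
    attr = diff[0]
    # Parent values are identical in both outcomes, so either can be used.
    ctx = tuple(o1[p] for p in net[attr]["parents"])
    best = net[attr]["cpt"][ctx]
    return o1 if o1[attr] == best else o2

o1 = {"A": "a2", "B": "b1"}
o2 = {"A": "a2", "B": "b2"}
print(swap_preferred(cpnet, o1, o2))  # with A=a2, the CPT prefers b2, so o2
```

A membership query in this setting asks exactly such a question: given a swap pair, which outcome does the target CP-net prefer?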
Keywords: Conditional Preference Networks, Learning from membership queries, VC dimension, Teaching dimension, Recursive teaching dimension, Computational learning theory
Article history: Received 13 January 2018, Revised 19 September 2019, Accepted 5 October 2019, Available online 10 October 2019, Version of Record 30 October 2019.
DOI: https://doi.org/10.1016/j.artint.2019.103182