Sample complexity of rank regression using pairwise comparisons
Authors:
Highlights:
• We propose an estimator for the parameters of a generalized linear parametric model, which encompasses classical preference models such as Bradley-Terry [13] and Thurstone [14].
• We overcome the violation of independence across comparisons (which reuse the same underlying samples) and prove a sample complexity guarantee on the model parameters. In particular, assuming Gaussian distributed features, we characterize the convergence of the estimator to a rescaled version of the model parameters w.r.t. the ambient dimension d, the number of samples N, and the number of comparisons M presented to the oracle.
• We show that to attain an accuracy ϵ > 0 in the model parameters, it suffices to conduct Ω(dN log³N / ϵ²) comparisons when the number of samples is Ω(d/ϵ²).
• Finally, we confirm this dependence with experiments on synthetic data (a minimal illustrative simulation is sketched below).
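To make the setting concrete, the following is a minimal synthetic sketch, not the paper's estimator: it simulates a Bradley-Terry model with Gaussian features and fits the parameters by logistic regression on feature differences, which recovers the parameter direction up to rescaling. All names (d, N, M, beta_star) and the choice of logistic regression are illustrative assumptions.

```python
# Hypothetical illustration of rank regression from pairwise comparisons
# under a Bradley-Terry model with Gaussian features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, N, M = 10, 2000, 20000              # ambient dimension, samples, comparisons

beta_star = rng.normal(size=d)         # ground-truth parameters
beta_star /= np.linalg.norm(beta_star)

X = rng.normal(size=(N, d))            # Gaussian features
scores = X @ beta_star                 # latent utilities

# Draw M random pairs of distinct samples and flip a Bradley-Terry coin each.
i, j = rng.integers(0, N, size=(2, M))
keep = i != j
i, j = i[keep], j[keep]
p_win = 1.0 / (1.0 + np.exp(-(scores[i] - scores[j])))   # P(i beats j)
y = rng.random(len(i)) < p_win

# Logistic regression on feature differences; the estimate matches beta_star
# only up to a rescaling, so compare normalized directions.
clf = LogisticRegression(fit_intercept=False, C=1e6).fit(X[i] - X[j], y)
beta_hat = clf.coef_.ravel()
beta_hat /= np.linalg.norm(beta_hat)

print("direction error:", np.linalg.norm(beta_hat - beta_star))
```

Rerunning this sketch while varying d, N, and M gives a rough empirical feel for how the error scales with each quantity in the highlighted bound.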
Keywords: Sample complexity, Rank regression, Pairwise comparisons, Features
Article history: Received 6 December 2020; Revised 24 February 2022; Accepted 2 April 2022; Available online 10 April 2022; Version of Record 4 June 2022.
DOI: https://doi.org/10.1016/j.patcog.2022.108688