A cross-benchmark comparison of 87 learning to rank methods
Authors:
Highlights:
• We propose a novel way to compare learning to rank methods.
• We perform a meta-analysis on a large set of papers that report ranking accuracy on a benchmark dataset.
• LRUF, FSMRank, FenchelRank, SmoothRank and ListNet are the most accurate, with increasing certainty.
Abstract:
Keywords: Learning to rank, Information retrieval, Evaluation metric
Article history: Received 1 October 2014, Revised 14 May 2015, Accepted 7 July 2015, Available online 22 August 2015, Version of Record 22 August 2015.
DOI: https://doi.org/10.1016/j.ipm.2015.07.002
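The record above only summarizes the paper, but the highlighted idea of comparing learning to rank methods across benchmark datasets can be illustrated with a small sketch. The Python snippet below shows one plausible winning-number-style aggregation over per-dataset scores; the method names, dataset names, and score values are made up for illustration, and the paper's actual comparison procedure may differ.

```python
# Hypothetical illustration of a cross-benchmark comparison via a
# winning-number-style score; this is an assumption, not the paper's exact metric.
from collections import defaultdict

# results[dataset][method] = reported ranking accuracy (e.g., NDCG@10) on that dataset.
# Methods typically report results on different subsets of benchmarks,
# so not every dataset needs to cover every method.
results = {
    "BenchmarkA": {"MethodA": 0.35, "MethodB": 0.31, "MethodC": 0.33},
    "BenchmarkB": {"MethodA": 0.30, "MethodC": 0.34},
    "BenchmarkC": {"MethodB": 0.45, "MethodC": 0.44},
}

def normalized_winning_number(results):
    """For each method, count pairwise wins against other methods on the
    datasets where both report a score, and normalize by the number of
    comparisons the method takes part in."""
    wins = defaultdict(int)
    comparisons = defaultdict(int)
    for scores in results.values():
        methods = list(scores)
        for i, m in enumerate(methods):
            for n in methods[i + 1:]:
                comparisons[m] += 1
                comparisons[n] += 1
                if scores[m] > scores[n]:
                    wins[m] += 1
                elif scores[n] > scores[m]:
                    wins[n] += 1
    return {m: wins[m] / comparisons[m] for m in comparisons if comparisons[m]}

print(normalized_winning_number(results))
```

Normalizing by the number of available comparisons, rather than using raw win counts, keeps methods evaluated on only a few benchmarks comparable to those evaluated on many.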