Unbiased evaluation of ranking metrics reveals consistent performance in science and technology citation data
Authors:
Highlights:
• We compare the ranking performance of 17 metrics in three distinct citation datasets.
• We use expert-selected items (e.g., seminal papers) to assess the metrics.
• Age biases of the evaluated metrics and the expert-selected items skew the results.
• We argue for an evaluation procedure that explicitly penalizes biased metrics.
• This allows us to uncover metric performance that is consistent across the datasets.
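The highlights state that the metrics are assessed by how highly they rank expert-selected items (e.g., seminal papers). As a purely illustrative sketch, not the authors' procedure, one simple way to quantify this is the fraction of milestone papers recovered in a metric's top-k ranking; the function name and toy data below are hypothetical.

# Illustrative only: a toy recall@k for scoring how highly a citation metric
# ranks expert-selected "milestone" papers. Not the evaluation procedure from
# the paper; identifiers and values below are hypothetical.

def recall_at_k(scores: dict, milestones: set, k: int) -> float:
    """Fraction of expert-selected papers appearing in the metric's top-k."""
    top_k = sorted(scores, key=scores.get, reverse=True)[:k]
    return len(milestones.intersection(top_k)) / len(milestones)

# Toy example: 'scores' maps paper ids to a metric value (e.g., citation count).
scores = {"paper_A": 120.0, "paper_B": 95.0, "paper_C": 12.0, "paper_D": 3.0}
milestones = {"paper_B", "paper_D"}          # expert-selected items
print(recall_at_k(scores, milestones, k=2))  # 0.5: only paper_B is in the top 2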
Abstract:
Keywords: Citation networks, Network ranking metrics, Node centrality, Metrics evaluation, Milestone scientific papers and patents
Article history: Received 14 May 2019, Revised 14 December 2019, Accepted 27 December 2019, Available online 4 February 2020, Version of Record 4 February 2020.
DOI: https://doi.org/10.1016/j.joi.2019.101005