Dual constraints and adversarial learning for fair recommenders
Authors:
Abstract
Recommender systems, which are built on common artificial intelligence techniques, have a profound impact on people's lifestyles. However, recent studies have demonstrated that recommender systems suffer from fairness problems: users with certain attributes are treated unfairly. A fair recommender is one in which users with different attributes achieve the same recommendation accuracy. In particular, recommender systems rely entirely on users' behavior data for preference learning, which makes unfairness highly likely because behavior data usually contains users' sensitive information. Unfortunately, only a few studies have explored unfairness in recommender systems. To alleviate this problem, we present a novel fairness-aware recommender with dual fairness constraints (FRFC) to improve fairness in recommendations and protect users' sensitive information from being exposed. This model has two main advantages: first, an adversarial-based graph neural network (GNN) is proposed to prevent the target user's representation from being contaminated by the sensitive features of neighboring users; second, two fairness constraints are proposed to address the failure of the adversarial classifier over the whole dataset and the unfairness of ranking losses. With this design, the FRFC model can effectively filter out users' sensitive information and give users with different attributes the same training opportunities, which helps produce fair recommendations. Finally, extensive experiments demonstrate that the proposed model significantly improves the fairness of recommendation results.
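The abstract gives no implementation details, but the core adversarial-filtering idea it describes (an encoder trained so that an adversarial classifier cannot recover the sensitive attribute from user embeddings) can be sketched in a minimal, self-contained form. Everything below is an illustrative assumption: linear models, a toy dataset, and alternating gradient updates stand in for the paper's GNN and dual constraints.

```python
import numpy as np

# Toy setup (illustrative, not the paper's model): user features leak a
# binary sensitive attribute s; y is a hypothetical rating target.
rng = np.random.default_rng(0)
n, d, k = 200, 8, 4
s = rng.integers(0, 2, size=n)               # sensitive attribute
x = rng.normal(size=(n, d)) + s[:, None]     # features correlated with s
y = rng.normal(size=n)                       # rating target (noise here)

W_enc = rng.normal(scale=0.1, size=(d, k))   # encoder -> user embeddings
w_rec = rng.normal(scale=0.1, size=k)        # rating head
w_adv = rng.normal(scale=0.1, size=k)        # adversarial classifier

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, lam = 0.05, 1.0                          # lam weighs the adversarial term
for _ in range(300):
    h = x @ W_enc                            # embeddings
    # Adversary step: improve prediction of s from the embeddings.
    p = sigmoid(h @ w_adv)
    w_adv += lr * h.T @ (s - p) / n          # ascend the log-likelihood
    # Encoder step: minimize rating loss MINUS the adversary's loss,
    # i.e. a gradient-reversal-style update that hides s.
    p = sigmoid(h @ w_adv)
    err = h @ w_rec - y
    g_rec = x.T @ np.outer(err, w_rec) / n   # grad of squared rating loss
    g_adv = x.T @ np.outer(p - s, w_adv) / n # grad of adversary's BCE loss
    W_enc -= lr * (g_rec - lam * g_adv)      # reversed sign for the adversary
    w_rec -= lr * h.T @ err / n

# After training, the adversary should find s harder to recover from h.
h = x @ W_enc
acc = float(((sigmoid(h @ w_adv) > 0.5) == s).mean())
print(round(acc, 2))
```

The alternating updates mirror standard adversarial debiasing: the classifier is trained to predict the sensitive attribute, while the encoder is updated against that gradient so the learned embeddings carry less sensitive information. The paper's dual fairness constraints and GNN message passing would replace the linear encoder and add further terms to the encoder's objective.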
Keywords: Fair recommendation, Graph neural network, Recommender systems, Adversarial learning
Article history: Received 11 April 2021, Revised 28 October 2021, Accepted 24 December 2021, Available online 3 January 2022, Version of Record 15 January 2022.
DOI: https://doi.org/10.1016/j.knosys.2021.108058