Leveraging transfer learning in reinforcement learning to tackle competitive influence maximization
Authors: Khurshed Ali, Chih-Yu Wang, Yi-Shin Chen
Abstract
Competitive influence maximization (CIM) is a key problem that seeks highly influential users so that one party's reward is maximized over its competitor's. Heuristic and game-theory-based approaches have been proposed to tackle the CIM problem. However, these approaches select the key influential users in a single first round, after the competitor's seed nodes are known. To overcome this first-round seed selection limitation, reinforcement learning (RL)-based models have been proposed for competitive influence maximization that allow parties to select seed nodes over multiple rounds without explicitly knowing the competitor's decisions. Despite the successful application of RL-based models to CIM, these models require extensive training time to find an optimal strategy whenever the network or the agent's settings change. To improve the RL model's efficiency, we extend transfer learning to RL-based methods, reducing training time by reusing the knowledge gained on a source network for a target network. Our objective is twofold: first, to design an appropriate state representation of the source and target networks so that knowledge gained on the source network can be exploited efficiently on the target network; second, to identify the transfer learning (TL) method in reinforcement learning that is best suited to the competitive influence maximization problem. We validate the proposed TL methods under two different agent settings. Experimental results demonstrate that our proposed TL methods achieve better or similar performance compared with the baseline model while significantly reducing training time on target networks.
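The abstract describes reusing a Q-learning agent trained on a source network to warm-start training on a target network. The sketch below illustrates one common way such a transfer can be realized: copying the weights of a source Q-network into a target agent that shares the same state representation, then fine-tuning. The `QNetwork` class, the dimensions, and the fine-tuning hyperparameters are illustrative assumptions, not the paper's actual architecture or TL variants.

```python
# Illustrative sketch (not the paper's implementation): DQN-style weight
# transfer from a source-network agent to a target-network agent.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a fixed-size state embedding to Q-values over candidate seed nodes."""
    def __init__(self, state_dim: int, n_candidates: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_candidates),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

STATE_DIM, N_CANDIDATES = 32, 10  # hypothetical sizes

# 1) Train a Q-network on the source network (training loop omitted here;
#    the agent is only instantiated to illustrate the transfer step).
source_q = QNetwork(STATE_DIM, N_CANDIDATES)

# 2) Transfer: copy the learned weights into the target-network agent.
#    Source and target agents must share the same state representation.
target_q = QNetwork(STATE_DIM, N_CANDIDATES)
target_q.load_state_dict(source_q.state_dict())

# 3) Fine-tune on the target network, typically with a smaller learning
#    rate and far fewer episodes than training from scratch would need.
optimizer = torch.optim.Adam(target_q.parameters(), lr=1e-4)
state = torch.randn(1, STATE_DIM)      # placeholder state embedding
q_values = target_q(state)
action = q_values.argmax(dim=1)        # greedy seed-node choice
```

Which weights are transferred, and whether they are frozen or fine-tuned, is exactly the kind of design choice the paper compares across its TL methods; the sketch shows only the simplest variant (full copy plus fine-tuning).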
Keywords: Influence maximization, Transfer learning, Reinforcement learning, Social networks, Q-Learning
Paper link: https://doi.org/10.1007/s10115-022-01696-3