Bottom-up multi-agent reinforcement learning by reward shaping for cooperative-competitive tasks
Authors: Takumi Aotani, Taisuke Kobayashi, Kenji Sugimoto
Abstract
A multi-agent system (MAS) is expected to be applied to various real-world problems where a single agent cannot accomplish the given tasks. Due to the inherent complexity of real-world MASs, however, manually designing the group behaviors of the agents is intractable. Multi-agent reinforcement learning (MARL), a framework in which multiple agents in the same environment adaptively learn their policies via reinforcement learning, is a promising methodology for handling this complexity. To acquire group behaviors through MARL, all the agents must understand how to achieve their respective tasks cooperatively. We have previously proposed “bottom-up MARL”, a decentralized system for managing real, large-scale MARL, together with a reward shaping algorithm that represents the group behaviors. That reward shaping algorithm, however, assumes that all the agents are, to some extent, in cooperative relationships. In this paper, we therefore extend the algorithm so that the agents need no prior knowledge of the interests between them. The interests are regarded as correlation coefficients derived from the agents’ rewards and are estimated numerically in an online manner. In both simulations and real experiments without prior knowledge of the interests between the agents, the agents correctly estimated their interests, which allowed them to derive new rewards representing feasible group behaviors in a decentralized manner. As a result, our extended algorithm succeeded in acquiring group behaviors for tasks ranging from cooperative to competitive.
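Note: the abstract describes estimating each pair of agents’ interests as a correlation coefficient of their rewards, computed online. Below is a minimal sketch of one way such an online estimate could be maintained, assuming an exponentially weighted moving-moment scheme; the class name, the `decay` parameter, and the shaping rule in the comment are illustrative assumptions, not the paper’s exact formulation.

```python
import numpy as np

class OnlineInterestEstimator:
    """Running estimate of the correlation coefficient between two
    agents' reward streams, using exponentially weighted moments.
    (Hypothetical helper; not taken from the paper.)"""

    def __init__(self, decay=0.99, eps=1e-8):
        self.decay = decay   # forgetting factor: higher = longer memory
        self.eps = eps       # numerical floor to avoid division by zero
        self.mean_i = 0.0
        self.mean_j = 0.0
        self.var_i = 0.0
        self.var_j = 0.0
        self.cov_ij = 0.0

    def update(self, r_i, r_j):
        """Fold one pair of observed rewards into the running moments
        and return the current correlation estimate in [-1, 1]."""
        d = self.decay
        self.mean_i = d * self.mean_i + (1 - d) * r_i
        self.mean_j = d * self.mean_j + (1 - d) * r_j
        dev_i = r_i - self.mean_i
        dev_j = r_j - self.mean_j
        self.var_i = d * self.var_i + (1 - d) * dev_i ** 2
        self.var_j = d * self.var_j + (1 - d) * dev_j ** 2
        self.cov_ij = d * self.cov_ij + (1 - d) * dev_i * dev_j
        return self.cov_ij / (np.sqrt(self.var_i * self.var_j) + self.eps)

# Example: agent i could shape its reward with its estimated interest
# in agent j, e.g. shaped_r_i = r_i + rho_ij * r_j, where
# rho_ij = estimator.update(r_i, r_j). This shaping rule is only one
# plausible reading of the decentralized reward derivation the
# abstract describes.
```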
Keywords: Distributed autonomous system, Reinforcement learning, Reward shaping, Interests between agents
DOI: https://doi.org/10.1007/s10489-020-02034-2